Category Archives: Artificial Intelligence

Combatting AI bias: remembering the human at the heart of the data is key – JAXenter

Artificial Intelligence (AI), once considered the stuff of science fiction, has now permeated almost every aspect of our society. From making decisions regarding arrests and parole to determining health and suitability for jobs, we are seeing algorithms take on the challenge of quantifying, understanding, and making recommendations in place of a human arbitrator.

For businesses this has presented a wealth of opportunities to streamline and hone processes, along with providing critical services such as facial recognition and healthcare screening to governments and nations across the world. With this growing demand comes an increased need for developers who can create and build algorithms with the level of complexity and sophistication required to make decisions on a global scale. However, according to a 2019 survey by Forrester, only 29% of developers have worked on AI projects, despite 83% expressing a desire to learn and take them on.

As with any development project, working on AI brings its own unique set of challenges that businesses and developers must be aware of. In the case of AI, the chief issue is that of bias. A biased algorithm can be the difference between a reliable, trustworthy, and useful product, and an Orwellian nightmare resulting in prejudiced, unethical decisions and a PR catastrophe. It is therefore crucial that businesses and developers understand how to mitigate these effects from the beginning, and that an awareness of bias is built into the heart of the project.

Every developer (and every person, for that matter) has conscious and unconscious biases that inform the way they approach data collection and the world in general. This can range from the mundane, such as a preference for the colour red over blue, to the more sinister, via the assumption of gender roles, racial profiling, and historical discrimination. The prevalence of bias throughout society means that the training sets of data used by algorithms to learn reflect these assumptions, resulting in decisions which are skewed for or against certain sections of society. This is known as algorithmic bias.
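The mechanism behind algorithmic bias is easy to demonstrate in miniature. In the toy sketch below, a naive "model" trained on invented, historically skewed hiring records faithfully reproduces the skew; every name and number here is hypothetical, chosen only to illustrate how biased training data becomes biased decisions:

```python
# Toy sketch: a model "trained" on biased historical hiring data
# simply learns and reproduces the bias. All data is invented.
from collections import defaultdict

# Historical records: (group, hired). Group "A" dominated past hires.
history = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 10 + [("B", 0)] * 40

def train(records):
    """'Learn' each group's historical hire rate from the records."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        hires[group] += hired
    return {g: hires[g] / totals[g] for g in totals}

def predict(model, group, threshold=0.5):
    """Recommend hiring when the learned historical rate clears the threshold."""
    return model[group] >= threshold

model = train(history)
# Equally qualified candidates get different recommendations by group alone.
print(predict(model, "A"))  # True  (80% historical hire rate)
print(predict(model, "B"))  # False (20% historical hire rate)
```

Nothing in this code is malicious; the skew enters entirely through the training set, which is precisely why auditing the data matters as much as auditing the code.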

Whilst 99% of developers would never intend to cause any kind of unfairness or suffering to end users (indeed, most of these products are designed to help people), the results of unintentional AI bias can often be devastating. Take the case of Amazon's recruitment algorithm, which scored women lower based on historical data in which the majority of successful candidates (and indeed the only applicants) were men. Or the infamous US COMPAS system, which, based on data gathered from racial profiling, ranked African-American prisoners at a far higher risk of re-offending than their white counterparts, regardless of their crimes or previous track record.

There is also a second possibility: technical bias. This occurs when the training data is not reflective of all possible scenarios that the algorithm may encounter, which is especially dangerous when the algorithm is used for life-saving or critical functions. In 2016, Tesla's first known Autopilot fatality occurred as a result of the AI being unable to identify the white side of a van against a brightly lit sky, so the Autopilot did not apply the brakes. This kind of accident highlights the need to provide the algorithm with constant, up-to-date training data reflecting myriad scenarios, along with the importance of testing in the wild, in all kinds of conditions.

Finally, we have emergent bias. This occurs when the algorithm encounters new knowledge, or when there's a mismatch between the user and the system design. An excellent example of this is Amazon's Echo smart speaker, which has mistaken countless different words for its wake-up cue, "Alexa", resulting in the device responding and collecting information unasked. Here, it's easy to see how incorporating a broader range of dialects, tones, and potential missteps into the training process may have helped to mitigate the issue.

Whilst companies are increasingly researching methods to spot and mitigate biases, many fail to realise the importance of human-centric testing. At the heart of each of the data points feeding an algorithm lies a real person, and it is essential to have a sophisticated, rigorous form of software testing in place that harnesses the power of crowds: something that simply cannot be achieved in a static QA lab.

All of the biases outlined above can be limited by working with a truly diverse data set which reflects the mix of languages, races, genders, locations, cultures, and hobbies that we see in our day-to-day lives. In-the-wild testers can also help to reduce the likelihood of accidents by spotting errors which AI might miss, or simply by asking questions which the algorithm does not have the programmed knowledge to comprehend. Considering the vast wealth of human knowledge and insight available at our fingertips via the web, failing to make use of this opportunity is a serious oversight. Spotting these kinds of obstacles early can also be incredibly beneficial from a business standpoint, allowing the development team to create a product which truly meets the needs of the end user and the purpose it was created for.

As AI continues its path to becoming omnipresent in our lives, it's crucial that those tasked with building our future are able to make it as fair and inclusive as possible. This is no easy task, but with a considered approach which stops to remember the human at the heart of the data, we are one step closer to achieving it.

Read more here:
Combatting AI bias: remembering the human at the heart of the data is key - JAXenter

Artificial Intelligence at UBS Current Applications and Initiatives – Emerj

UBS is a Swiss multinational investment banking and financial services company ranked 30th on S&P Global's list of the top 100 banks. In addition to investment banking and wealth management, the company is looking to improve its tech stack through several AI projects.

Our AI Opportunity Landscape research in financial services uncovered the following three AI initiatives at UBS: a virtual financial assistant for banking clients, the ORCA machine learning solution for foreign exchange, and an NLP-enabled enterprise search engine built with Attivio.

We begin our coverage of UBS' AI initiatives with its project for a virtual financial assistant for its banking clients.

UBS partnered with IBM and Digital Humans (formerly FaceMe) to create a virtual financial assistant for its customers. The virtual assistant is a conversational interface built with IBM's Watson Natural Language Understanding solution. Watson runs primarily on natural language processing technology, an approach to AI that enables the extraction and analysis of written text and human speech. Digital Humans provided the 3D character model for the avatar, which represents the assistant on-screen.

The video below explains how Watson Natural Language Understanding works:

UBS developed two distinct digital avatars. One avatar, named Fin, is built for managing simple tasks such as helping a customer cancel and replace a credit card. The second avatar, Daniel, can purportedly answer investment questions. IBM claims Watson affords UBS the following capabilities:

UBS also started an internal initiative with the goal of solving liquidity issues within foreign exchange using machine learning. In 2018, the bank announced its ORCA Direct solution, which purportedly helped its employees execute foreign exchange transactions more quickly.

The banks software could automatically decide the best digital channel by which to execute a foreign exchange deal. This may save the bank a significant amount of time, as it would be particularly difficult to optimize for a bank with access to so many separate trading channels.

Additionally, these platforms may run on different pricing metrics, and banks may incur certain fees depending on the type of trade they are making. UBS updated the solution to ORCA Pro in 2019, which it claims can now act as a single-dealer platform.

This platform is linked to UBS' optimization engine, which helps reduce the disparity between the expected price and the price at which a trade is executed. For example, if a given deal is made weeks after a UBS financial advisor last spoke to the client, ORCA Pro might be able to discern that the bid/ask spread for the deal has fluctuated without either party noticing.

UBS claims their ORCA Direct and Pro solutions provide the following capabilities to their staff:

UBS' third AI initiative is its partnership with vendor Attivio to develop an NLP-enabled search engine for its wealth management, asset management, and investment banking services. Attivio refers to this NLP-based solution as "cognitive search", which can be understood as an AI-powered enterprise search application.

The short, 1-minute video below explains how machine learning can enable enterprise search and provide context for more detailed results:

The vendor claims UBS developed this application to facilitate the following capabilities:

Financial services companies need to understand what their competitors are doing with AI if they hope to compete in the same domains and win the customers their competitors are trying to court with more convenient experiences and more financially lucrative wealth management services.

Leaders at large financial services companies use Emerj AI Opportunity Landscapes to discover where AI can bring powerful ROI in areas like wealth and asset management, customer service, fraud detection, and more, so they can win market share well into the future. Learn more about Emerj Research Services.

Header Image Credit: UBS

View post:
Artificial Intelligence at UBS Current Applications and Initiatives - Emerj

Love in the Age of Sex Robots | Hidden Brain – NPR

Kate Devlin, who studies human-computer interactions, says we're on the cusp of a sexual revolution driven by robotics and artificial intelligence. (Image: Angela Hsieh/NPR)


In the summer of 2017, Kate Devlin flew from London to southern California. She rented a Ford Mustang convertible and drove to an industrial park in San Marcos, a city south of Los Angeles. Her destination: Abyss Creations, a company that makes life-size sex dolls. In her new book, Turned On: Science, Sex and Robots, Kate describes the moment she first gazed up close at a life-size silicone woman.

"The detail is incredible," she writes. "My hand skims the ankle. The toes are perfect: little wrinkles on the joints, tiny ridges on the toenails. The sole is crisscrossed with the fine skin lines of a human foot. It's beautiful."

Part of Kate's interest in these dolls comes from their newest incarnations. Sex doll manufacturers are now prototyping models that come equipped with robotics and artificial intelligence. This is right in line with Kate's expertise. She studies human-computer interactions and artificial intelligence at King's College London. Kate says one of the more advanced models she viewed, a robot named Harmony, is programmed to offer both friendship and sex.

"She could do anything from telling you a joke, singing a song for you or propositioning you."

While some critics worry that sex dolls, especially ones with AI, cross a dangerous line, Kate believes much of the criticism comes from a fear of a technological landscape that feels unfamiliar and uncomfortable.

"I think that we have expectations that people have to meet a particular checklist of things in their life ... that you should meet someone and then you should marry them and then have children with them, and these are all very kind of macho normative stances that societies impose. And you know what, if people want to shake that up, I think it's good."

This week on Hidden Brain, we reflect on the narrowing gap between humans and machines. What are the possibilities for deep, intimate relationships with artificial lovers? And does it help if those lovers are beautifully designed to look like human beings, and have the faint glow of empathy and intelligence?

Excerpt from:
Love in the Age of Sex Robots | Hidden Brain - NPR

LucidHealth and Riverain Technologies Are Committed to the Delivery of Advanced Radiology Through Artificial Intelligence – BioSpace

MIAMISBURG, Ohio--(BUSINESS WIRE)-- LucidHealth, a physician-owned and led radiology company, announced today that it is using FDA-approved ClearRead CT by Riverain Technologies, an artificial intelligence (AI) imaging software solution for the early detection of lung disease. LucidHealth is one of the first radiology companies in the Midwest to incorporate AI through its partnership with Riverain Technologies.

This press release features multimedia. View the full release here: https://www.businesswire.com/news/home/20200414005825/en/

"LucidHealth is committed to advancing the quality of community radiology patient care by combining leading radiologist expertise with cutting-edge artificial intelligence. Riverain's ClearRead in combination with LucidHealth's RadAssist workflow is just such an example," said Peter Lafferty, M.D., Chief of Physician Integration at LucidHealth.

"We are proud to be working with LucidHealth as an AI vendor," said Steve Worrell, CEO at Riverain Technologies. "Our ClearRead CT suite allows LucidHealth radiologists to provide quicker, more accurate readings, to work even more efficiently, and to generate higher-quality reports for better patient outcomes."

Riverain Technologies designs advanced AI imaging software used by leading international healthcare organizations. Riverain ClearRead solutions significantly improve a clinician's ability to accurately and efficiently detect disease in thoracic CT and X-ray images and more successfully address the challenges of early detection of lung disease. Powered by machine learning and advanced modeling, the patented, FDA-cleared ClearRead software tools are deployed in the clinic or the cloud and are powered by the most advanced AI methods available to the medical imaging market.

About LucidHealth:

LucidHealth is a physician-owned and led radiology management company. We partner with radiology groups to provide the technology and resources to increase the strategic value of their practices nationwide. Our belief is that all patients should have access to the highest quality of subspecialized imaging care, regardless of facility size or location. Our mission is to empower independent radiology groups to deliver world-class, subspecialized care to all patients within the communities they serve. For more information, please visit http://www.lucidhealth.com.

About Riverain Technologies:

Dedicated to the early detection of lung disease, Riverain believes the opportunities for machine learning and software solutions in healthcare are at an unprecedented level. Never before has the opportunity to do more with less been so great. We believe that these software tools incorporate an increasing degree of intelligence that will facilitate decision making which leads to greater efficiency and effectiveness in patient outcomes. Riverain Technologies is excited to be part of the advances in machine learning and scalability of technology that will bring efficiency and accuracy to physicians and, ultimately, improved patient care. For more information, please visit https://www.riveraintech.com/

View source version on businesswire.com: https://www.businesswire.com/news/home/20200414005825/en/

More here:
LucidHealth and Riverain Technologies Are Committed to the Delivery of Advanced Radiology Through Artificial Intelligence - BioSpace

Genetics and Artificial Intelligence Drive Qatar University’s Covid-19 Research – Al-Fanar Media

Yassine believes that this allowed researchers to build their capabilities in this field and prepared them to quickly begin work on the SARS-CoV-2 virus.

"The accumulated experiences, from working with other similar viruses, like influenza and Middle East respiratory syndrome coronavirus, meant we were ready for virus outbreak scenarios," he said.

"We already had laboratories that can accommodate such research. We had techniques in place for genomic sequencing. We have also developed the capabilities of students working in the laboratory to analyze the samples."

The center also launched a study to identify genetic factors that increase the risk of Covid-19 infection, or of more serious complications from the infection, among certain populations.

"I am trying to look at the picture from the host side rather than the virus side," Maria Smatti, a Ph.D. student and research assistant at the center, said. "We have the genomic data and the disease severity. We will try to correlate this information to see the common genes between people who have severe Covid-19 disease compared to those who have milder disease."

This allows researchers to identify which population could have more severe symptoms due to genetic susceptibility. Clinicians and public health officials could then take special precautions to protect those who are particularly vulnerable to the disease.

Smatti said that previous studies had already found genes related to immune responses to viruses similar to the new coronavirus, but her work is broader as she is not only focusing on these known genes. Rather, she will check all mutations that correlate with the severity of Covid-19 infections.

The research is a collaboration between the center and the Qatar Genome program, as well as with researchers and clinicians from Genomics England, which seeks to sequence 100,000 human genomes, and Imperial College London.

"We have access to more than 15,000 genomes of the Qatari population and to the data of 100,000 genomes from U.K. populations," Smatti said. "We started looking into the data from Qatar Genome [which has the genomes of Qataris] and expect to finish this phase of the research by the end of the year."

Yassine says that there is an established vision to develop scientific research in Qatar, but the Covid-19 disease has pushed the project forward.

"As we are faced with a real outbreak now, this is an opportunity to gain firsthand experience and build our capacity to combat any new virus or emerging disease that appears in the future," he said.

Read the original:
Genetics and Artificial Intelligence Drive Qatar University's Covid-19 Research - Al-Fanar Media

Mark Cuban: Here’s how to give your kids ‘an edge’ – CNBC

The way to set your children up for success in this day and age is to ensure they learn about artificial intelligence, according to the billionaire tech entrepreneur Mark Cuban.

"Give your kids an edge, have them sign up [and] learn the basics of Artificial Intelligence," Cuban tweeted on Monday.

Cuban, who is a star on the hit ABC show "Shark Tank" and the owner of the Dallas Mavericks NBA basketball team, was promoting a free, one-hour virtual class his foundation is teaching: an introduction to artificial intelligence, in collaboration with A.I. For Anyone, a nonprofit organization that aims to improve artificial intelligence literacy.

"Parents, want your kids to learn about artificial intelligence while you're stuck in quarantine?" Cuban says on his LinkedIn account.

In the hour-long virtual class, "you'll learn what AI is, how it works, its impact on the world, and how you can best prepare for the future of AI," Cuban says on his LinkedIn account. At the end of the class, participants will receive a list of his foundation's best recommendations for AI learning resources.

(Cuban subsequently corrected the link to register.)

The event is from 7 p.m. to 8:30 p.m. EST on Wednesday, April 15.

Cuban has repeatedly used his megaphone to promote the importance of learning and understanding artificial intelligence.

At the South by Southwest conference in Austin, Texas, in March 2019, Cuban talked about how important it is for business owners to understand AI.

"As big as PCs were an impact, as big as the internet was, AI is just going to dwarf it. And if you don't understand it, you're going to fall behind. Particularly if you run a business," Cuban told Recode's Peter Kafka.

Cuban is educating himself about the future implications of AI whenever possible, he said in Austin.

"I mean, I get it on Amazon and Microsoft and Google, and I run their tutorials. If you go in my bathroom, there's a book, 'Machine Learning for Idiots.' Whenever I get a break, I'm reading it," Cuban told Kafka.

If you don't know how to write code or create an AI-powered software product, you at least need to know enough about AI to be able to ask intelligent questions, Cuban said.

"If you don't know how to use it and you don't understand it and you can't at least have a basic understanding of the different approaches and how the algorithms work," Cuban told Kafka, "you can be blindsided in ways you couldn't even possibly imagine."

Disclosure: CNBC owns the exclusive off-network cable rights to "Shark Tank."

See also:

'Shark Tank' billionaire Mark Cuban: 'If I were going to start a business today,' here's what it would be

COVID-19 pandemic proves the need for 'social robots,' 'robot avatars' and more, say experts

Bill Gates: A.I. is like nuclear energy 'both promising and dangerous'

Link:
Mark Cuban: Here's how to give your kids 'an edge' - CNBC

Addressing the gender bias in artificial intelligence and automation – OpenGlobalRights

Geralt/Pixabay

Twenty-five years after the adoption of the Beijing Declaration and Platform for Action, significant gender bias in existing social norms remains. For example, as recently as February 2020, the Indian Supreme Court had to remind the Indian government that its arguments for denying women command positions in the Army were based on stereotypes. And gender bias is not merely a male problem: a recent UNDP report entitled Tackling Social Norms found that about 90% of people (both men and women) hold some bias against women.

Gender bias and various forms of discrimination against women and girls pervade all spheres of life. Women's equal access to science and information technology is no exception. While the challenges posed by the digital divide and the under-representation of women in STEM (science, technology, engineering and mathematics) continue, artificial intelligence (AI) and automation are throwing up newer challenges to achieving substantive gender equality in the era of the Fourth Industrial Revolution.

If AI and automation are not developed and applied in a gender-responsive way, they are likely to reproduce and reinforce existing gender stereotypes and discriminatory social norms. In fact, this may already be happening (un)consciously. Let us consider a few examples:

Despite the potential for such gender bias, the growing crop of AI standards does not adequately integrate a gender perspective. For example, the Montreal Declaration for the Responsible Development of Artificial Intelligence does not make an explicit reference to integrating a gender perspective, while AI4People's Ethical Framework for a Good AI Society mentions diversity/gender only once. Both the OECD Council Recommendation on AI and the G20 AI Principles stress the importance of AI contributing to reducing gender inequality, but provide no details on how this could be achieved.

The Responsible Machine Learning Principles do embrace bias evaluation as one of their principles. This siloed approach of embracing gender is also adopted by companies like Google and Microsoft, whose AI Principles underscore the need to avoid creating or reinforcing unfair bias and to treat all people fairly, respectively. Companies related to AI and automation should adopt a gender-responsive approach across all principles to overcome inherent gender bias. Google should, for example, embed a gender perspective in assessing which new technologies are socially beneficial or how AI systems are built and tested for safety.

What should be done to address the gender bias in AI and automation? The gender framework for the UN Guiding Principles on Business and Human Rights could provide practical guidance to states, companies and other actors. The framework involves a three-step cycle: gender-responsive assessment, gender-transformative measures and gender-transformative remedies. The assessment should be able to respond to differentiated, intersectional, and disproportionate adverse impacts on women's human rights. The consequent measures and remedies should be transformative in that they should be capable of bringing change to patriarchal norms, unequal power relations, and gender stereotyping.

States, companies and other actors can take several concrete steps. First, women should be active participants, rather than mere passive beneficiaries, in creating AI and automation. Women and their experiences should be adequately integrated into all steps related to the design, development and application of AI and automation. In addition to proactively hiring more women at all levels, AI and automation companies should engage gender experts and women's organisations from the outset in conducting human rights due diligence.

Second, the data that informs algorithms, AI and automation should be sex-disaggregated; otherwise, the experiences of women will not inform these technological tools, which in turn might continue to internalise existing gender biases against women. Moreover, even data related to women should be guarded against any inherent gender bias.

Third, states, companies and universities should plan for and invest in building capacity of women to achieve smooth transition to AI and automation. This would require vocational/technical training at both education and work levels.

Fourth, AI and automation should be designed to overcome gender discrimination and patriarchal social norms. In other words, these technologies should be employed to address challenges faced by women such as unpaid care work, gender pay gap, cyber bullying, gender-based violence and sexual harassment, trafficking, breach of sexual and reproductive rights, and under-representation in leadership positions. Similarly, the power of AI and automation should be employed to enhance womens access to finance, higher education and flexible work opportunities.

Fifth, special steps should be taken to make women aware of their human rights and the impact of AI and automation on their rights. Similar measures are needed to ensure that remedial mechanisms, both judicial and non-judicial, are responsive to gender bias, discrimination, patriarchal power structures, and asymmetries of information and resources.

Sixth, states and companies should keep in mind the intersectional dimensions of gender discrimination, otherwise their responses, despite good intentions, will fall short of using AI and automation to accomplish gender equality. Low-income women, single mothers, women of colour, migrant women, women with disability, and non-heterosexual women all may be affected differently by AI and automation and would have differentiated needs or expectations.

Finally, all standards related to AI and automation should integrate a gender perspective in a holistic manner, rather than treating gender as merely a bias issue to be managed.

Technologies are rarely gender neutral in practice. If AI and automation continue to ignore womens experiences or to leave women behind, everyone will be worse off.

This piece is part of a blog series focusing on the gender dimensions of business and human rights. The blog series is in partnership with the Business & Human Rights Resource Centre, the Danish Institute for Human Rights and OpenGlobalRights. The views expressed in the series are those of the authors. For more on the latest news and resources on gender, business and human rights, visit this portal.

Continue reading here:
Addressing the gender bias in artificial intelligence and automation - OpenGlobalRights

Banking and payments predictions 2020: Artificial intelligence – Verdict

Artificial intelligence (AI) refers to software-based systems that use data inputs to make decisions on their own. Machine learning is an application of AI that gives computer systems the ability to learn and improve from data without being explicitly programmed.
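The "without being explicitly programmed" distinction can be sketched in a few lines. In the toy example below, the rule relating the inputs and outputs (y = 2x + 1) is never written into the program; it is recovered from example data by ordinary least squares. This is an illustration of the definition only, not any bank's production system:

```python
# Minimal sketch of "learning from data": fit a line y = w*x + b by
# ordinary least squares instead of hand-coding the rule.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    w = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return w, my - w * mx

# The relationship (y = 2x + 1) is never stated; it is recovered from examples.
xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]
w, b = fit_line(xs, ys)
print(round(w, 6), round(b, 6))  # 2.0 1.0
```

Real machine learning systems fit far richer models to far noisier data, but the principle is the same: the behaviour comes from the data, which is why data quality dominates later discussions in this piece.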

2019 saw financial institutions explore a broad range of possible AI use cases in both customer-facing and back-office processes, increasing budgets, headcounts, and partnerships. 2020 will see an increased focus on separating the marketing story from actual business impact, in order to place bigger bets in fewer areas. This will help banks scale proven AI across the enterprise to forge competitive advantage.

Artificial intelligence will re-invigorate digital money management, helping incumbents drip-feed highly personalised spending tips to build trust and engagement in the absence of in-person interaction. Features like predictive insights around cashflow shortfalls, alerts on upcoming bill payments, and various "what if" scenarios when trying on different financial products give customers transparency around their options and the risks they face. The service will act as an always-on, in-your-pocket, predictive advisor.

AI-enhanced customer relationship management (CRM) will help digital banks optimise product recommendations to rival the conversion rates of best-in-class online retailers. These product suggestions won't come across as sales pitches, but rather as valuable advice received, such as a pre-approved loan before a cash shortfall or an option to remortgage to fund home improvements. This will help incumbents build customer advocacy and trust as new entrants vie for attention.

AI-powered onboarding, when combined with voice and facial recognition technologies, will help incumbents make themselves much easier to do business with, especially at the initial point of conversion but also thereafter at each moment of authentication. AI will offer particular support through Know Your Customer (KYC) processes, helping incumbents keep pace with new entrants. Standard Bank in South Africa, for example, used WorkFusion's AI capabilities to reduce the customer onboarding time from 20 days to just five minutes.

Banks' heavy compliance burden will continue to drive AI adoption. Last year, large global banks such as OCBC Bank, Commonwealth Bank, Wells Fargo, and HSBC made big investments in areas such as automated data management, reporting, anti-money laundering (AML), compliance, automated regulation interpretation, and mapping. Increasingly, partnering with artificial intelligence-enabled regtech firms will help incumbents reduce operational risk and enhance reporting quality.

As artificial intelligence becomes more embedded into all areas of customers' lives, concerns around the "black box" driving decisions will grow, with more demands for explainable AI. As it is, customers with little or no digital footprint are less visible to applications that rely on data to profile people and assess risk. Traditional banks' credit risk algorithms often disproportionately exclude black and Hispanic groups in the US, as well as women, because these groups have historically earned less over their lifetimes.
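Disproportionate exclusion of the kind described above is often audited with simple rate comparisons. Below is a minimal sketch of one common screen, the "four-fifths rule" disparate-impact ratio, applied to invented loan-decision data; the groups, decisions, and threshold here are illustrative assumptions, not any bank's actual figures or regulatory standard:

```python
# Hypothetical audit sketch: the "four-fifths rule" disparate-impact check
# compares approval rates between a protected group and a reference group.
def approval_rate(decisions):
    """Fraction of decisions that were approvals (1 = approved)."""
    return sum(decisions) / len(decisions)

def disparate_impact(protected, reference):
    """Ratio of approval rates; values below 0.8 are a common red flag."""
    return approval_rate(protected) / approval_rate(reference)

# Illustrative loan decisions from a model under audit (invented data).
group_a = [1, 1, 1, 0, 1, 1, 0, 1, 1, 1]  # reference: 80% approved
group_b = [1, 0, 0, 1, 0, 0, 1, 0, 0, 1]  # protected: 40% approved

ratio = disparate_impact(group_b, group_a)
print(round(ratio, 2))  # 0.5 -> fails the four-fifths threshold
```

A ratio this low does not by itself prove discrimination, but it is the kind of quantitative signal that explainable-AI and accountability requirements would force banks to investigate.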

In 2020, senior management will be held directly accountable for the decisions of AI-enabled algorithms. This will drive increased focus on the quality of the data feeding the algorithms, and perhaps limits on the use of the most dynamic machine learning techniques because of their regulatory opacity.

This is an edited extract from the Banking & Payments Predictions 2020 Thematic Research report produced by GlobalData Thematic Research.

GlobalData is this website's parent business intelligence company.

View original post here:
Banking and payments predictions 2020: Artificial intelligence - Verdict

When Machines Design: Artificial Intelligence and the Future of Aesthetics – ArchDaily


Are machines capable of design? Though a persistent question, it is one that increasingly accompanies discussions on architecture and the future of artificial intelligence. But what exactly is AI today? As we discover more about machine learning and generative design, we begin to see that these forms of "intelligence" extend beyond repetitive tasks and simulated operations. They've come to encompass cultural production, and in turn, design itself.


When artificial intelligence was envisioned in the 1950s and '60s, the goal was to teach a computer to perform a range of cognitive tasks and operations, similar to a human mind. Fast forward half a century, and AI is shaping our aesthetic choices, with automated algorithms suggesting what we should see, read, and listen to. It helps us make aesthetic decisions when we create media, from movie trailers and music albums to product and web designs. We have already felt some of the cultural effects of AI adoption, even if we aren't aware of it.

As educator and theorist Lev Manovich has explained, computers perform endless intelligent operations. "Your smartphone's keyboard gradually adapts to your typing style. Your phone may also monitor your usage of apps and adjust their work in the background to save battery. Your map app automatically calculates the fastest route, taking into account traffic conditions. There are thousands of intelligent, but not very glamorous, operations at work in phones, computers, web servers, and other parts of the IT universe." More broadly, it's useful to turn the discussion towards aesthetics and how these advancements relate to art, beauty and taste.

Usually defined as a set of "principles concerned with the nature and appreciation of beauty," aesthetics depend on who you are talking to. In 2018, Marcus Endicott described how, from an engineering perspective, the traditional definition of aesthetics in computing could be termed "structural, such as an elegant proof, or beautiful diagram." A broader definition may include more abstract qualities of form and symmetry that "enhance pleasure and creative expression." In turn, as machine learning is gradually becoming more widely adopted, it is leading to what Endicott termed a "neural aesthetic". This can be seen in recent artistic hacks such as Deepdream, NeuralTalk, and Stylenet.

Beyond these adaptive processes, there are other ways AI shapes cultural creation. Artificial intelligence has recently made rapid advances in the computation of art, music, poetry, and lifestyle. Manovich explains that AI has given us the option to automate our aesthetic choices (via recommendation engines), to assist in certain areas of aesthetic production such as consumer photography, and to automate experiences like the ads we see online. "Its use in helping to design fashion items, logos, music, TV commercials, and works in other areas of culture is already growing." But, as he concludes, human experts usually make the final decisions based on ideas and media generated by AI. And yes, the human-versus-robot debate rages on.

According to The Economist, 47% of the work done by humans will have been replaced by robots by 2037, even in jobs traditionally associated with a university education. The World Economic Forum estimated that between 2015 and 2020, 7.1 million jobs would be lost around the world, as "artificial intelligence, robotics, nanotechnology and other socio-economic factors replace the need for human employees." Artificial intelligence is already changing the way architecture is practiced, whether or not we believe it may replace us. As AI augments design, architects are working to explore the future of aesthetics and how we can improve the design process.

In a tech report on artificial intelligence, Building Design + Construction explored how Arup had applied a neural network to a light-rail design and reduced the number of utility clashes by over 90%, saving nearly 800 hours of engineering. In the same vein, the areas of site and social research that use artificial intelligence have been extensively covered, and new examples appear almost daily. We know that machine-driven procedures can dramatically improve the efficiency of construction and operations, for example by increasing energy performance and decreasing fabrication time and costs. Arup's neural network application extends this to design decision-making. But the central question comes back to aesthetics and style.

Designer and Fulbright fellow Stanislas Chaillou recently created a project at Harvard utilizing machine learning to explore the future of generative design, bias and architectural style. While studying AI and its potential integration into architectural practice, Chaillou built an entire generation methodology using Generative Adversarial Networks (GANs). Chaillou's project investigates the future of AI through architectural style learning, and his work illustrates the profound impact of style on the composition of floor plans.
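To make the adversarial idea concrete, here is a deliberately tiny, self-contained sketch of GAN training. Chaillou's actual project trains convolutional GANs on floor-plan images; everything below, including the 1-D "data" distribution, the linear generator and discriminator, and the learning rate, is a simplified stand-in for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_real(n):
    # Stand-in for "real designs": scalars drawn from N(4, 0.5)
    return rng.normal(4.0, 0.5, size=n)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Generator: x = g_w * z + g_b, a linear map from noise z to a sample.
g_w, g_b = 1.0, 0.0
# Discriminator: D(x) = sigmoid(d_w * x + d_b), probability that x is "real".
d_w, d_b = 0.1, 0.0

lr = 0.05
for step in range(2000):
    z = rng.normal(size=32)
    real, fake = sample_real(32), g_w * z + g_b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_w * real + d_b)
    p_fake = sigmoid(d_w * fake + d_b)
    grad_logit = np.concatenate([p_real - 1.0, p_fake])  # dLoss/dlogit
    x_all = np.concatenate([real, fake])
    d_w -= lr * np.mean(grad_logit * x_all)
    d_b -= lr * np.mean(grad_logit)

    # Generator step (non-saturating loss): push D(fake) toward 1.
    p_fake = sigmoid(d_w * (g_w * z + g_b) + d_b)
    grad_logit = p_fake - 1.0
    g_w -= lr * np.mean(grad_logit * d_w * z)  # chain rule through D and G
    g_b -= lr * np.mean(grad_logit * d_w)

print("generated sample mean:", np.mean(g_w * rng.normal(size=1000) + g_b))
```

The two alternating updates are the whole trick: the discriminator learns to tell real samples from generated ones, and the generator is trained against the discriminator's gradient, nudging its outputs toward the real distribution.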

As Chaillou summarizes, architectural styles carry implicit mechanics of space, and there are spatial consequences to choosing a given style over another. In his words, style "is not an ancillary, superficial or decorative addendum; it is at the core of the composition."

Artificial intelligence and machine learning are becoming increasingly important as they shape our future. If machines can begin to understand and affect our perceptions of beauty, we should find better ways to integrate these tools and processes into the design process.

Architect and researcher Valentin Soana once stated that the digital in architectural design enables new systems where architectural processes can emerge through "close collaboration between humans and machines; where technologies are used to extend capabilities and augment design and construction processes." As machines learn to design, we should work with AI to enrich our practices through aesthetic and creative ideation. More than productivity gains, we can rethink the way we live, and in turn, how to shape the built environment.

Read more here:
When Machines Design: Artificial Intelligence and the Future of Aesthetics - ArchDaily

How Artificial Intelligence is helping the fight against COVID-19 – Health Europa

An artificial intelligence (AI) tool has been shown to accurately predict which patients newly infected with the COVID-19 virus will go on to develop severe respiratory disease.

The new novel coronavirus, named SARS-CoV-2, had infected 735,560 patients worldwide as of 30 March. According to the World Health Organization, the illness has caused more than 34,830 deaths to date, most often among older patients with underlying health conditions.

The study, published in the journal Computers, Materials & Continua, was led by NYU Grossman School of Medicine and the Courant Institute of Mathematical Sciences at New York University, in partnership with Wenzhou Central Hospital and Cangnan People's Hospital, both in Wenzhou, China.

The study has revealed the best indicators of future severity and found that they were not as expected.

Corresponding author Megan Coffee, clinical assistant professor in the Division of Infectious Disease & Immunology at NYU Grossman School of Medicine, said: "While work remains to further validate our model, it holds promise as another tool to predict the patients most vulnerable to the virus, but only in support of physicians' hard-won clinical experience in treating viral infections."

"Our goal was to design and deploy a decision-support tool using AI capabilities, mostly predictive analytics, to flag future clinical coronavirus severity," says co-author Anasse Bari, PhD, a clinical assistant professor in computer science at the Courant Institute. "We hope that the tool, when fully developed, will be useful to physicians as they assess which moderately ill patients really need beds, and who can safely go home, with hospital resources stretched thin."

For the study, demographic, laboratory, and radiological findings were collected from 53 patients as each tested positive in January 2020 for COVID-19 at the two Chinese hospitals. In a minority of patients, severe symptoms, including pneumonia, developed within a week.

The researchers wanted to find out whether AI techniques could help to accurately predict which patients with the virus would go on to develop acute respiratory distress syndrome (ARDS), the fluid build-up in the lungs that can be fatal in the elderly.

To do this, they designed computer models that make decisions based on the data fed into them, with the programmes getting smarter the more data they consider. Specifically, the study used decision trees, which track a series of decisions between options and model the potential consequences of each choice at every step in a pathway.

The AI tool found that three features were most accurately predictive of subsequent severe disease: levels of the liver enzyme alanine aminotransferase (ALT), reported myalgia, and haemoglobin levels. Together with other factors, the team reported being able to predict the risk of ARDS with up to 80% accuracy.
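A decision tree of this kind is easy to picture as a chain of if/else branches over those features. The sketch below is illustrative only: the thresholds, branch order, and risk labels are hypothetical inventions for the example, not the values learned in the NYU/Wenzhou model.

```python
# Illustrative hand-built decision path in the spirit of the study's approach.
# All cut-offs below are hypothetical, not the learned model's values.

def predict_ards_risk(alt_u_per_l, myalgia, haemoglobin_g_per_dl):
    """Return a coarse risk label by walking a fixed decision path."""
    if alt_u_per_l > 50:               # hypothetical cut-off for elevated ALT
        if myalgia:                    # reported deep muscle aches
            return "high"
        return "moderate"
    if haemoglobin_g_per_dl > 16:      # hypothetical cut-off for high haemoglobin
        return "moderate"
    return "low"

patients = [
    {"alt": 62, "myalgia": True,  "hb": 14.1},
    {"alt": 35, "myalgia": False, "hb": 13.2},
    {"alt": 40, "myalgia": False, "hb": 16.8},
]
labels = [predict_ards_risk(p["alt"], p["myalgia"], p["hb"]) for p in patients]
print(labels)  # ['high', 'low', 'moderate']
```

In a real system the tree structure and thresholds would be learned from the patient data rather than written by hand, but the prediction step, walking one branch per decision, is exactly this.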

ALT levels, which rise dramatically when diseases like hepatitis damage the liver, were only slightly higher in patients with COVID-19, but still featured prominently in predicting severity. Deep muscle aches (myalgia) were also more common, and have been linked by past research to higher general inflammation in the body.

Lastly, higher levels of haemoglobin, the iron-containing protein that enables blood cells to carry oxygen to bodily tissues, were also linked to later respiratory distress. Could this be explained by other factors, like unreported smoking of tobacco, which has long been linked to increased haemoglobin levels?

Of the 33 patients at Wenzhou Central Hospital interviewed on smoking status, the two who reported having smoked also reported that they had quit.

Limitations of the study, say the authors, included the relatively small data set and the limited clinical severity of disease in the population studied.

"I will be paying more attention in my clinical practice to our data points, watching patients closer if they, for instance, complain of severe myalgia," adds Coffee. "It's exciting to be able to share data with the field in real time when it can be useful. In all past epidemics, journal papers only published well after the infections had waned."

Read the original here:
How Artificial Intelligence is helping the fight against COVID-19 - Health Europa