Category Archives: Artificial Intelligence

Generative Artificial Intelligence (GAI): Increasing the Fog of War … – ADL

As the war between Israel and Hamas continues, people are turning to social media in search of reliable information about their family and relatives, events on the ground, and an understanding of where the current crisis might lead. The introduction of Generative Artificial Intelligence (GAI) tools, such as deepfakes and synthetic audio, is further complicating how the public engages with and trusts online information in an environment already rife with misinformation and inflammatory rhetoric. There is a dual problem: bad actors intentionally exploit these tools to share misleading information, and the mere possibility that content might be fake allows them to cast doubt on authentic material. This is leading to a dangerous erosion of trust in online content.

Fake news stories and doctored images are not unique to the current conflict in the Middle East; they have long been an integral part of warfare, with armies, states, and other parties using the media as an extension of their battles on the ground, in the air, and at sea. What has changed radically is the advent of digital technology. Any reasonably tech-savvy person with access to the internet can now generate increasingly convincing fake images and videos (deepfakes), and audio, using GAI tools (such as DALL-E, which generates images from text), and then distribute this fake material to huge global online audiences at little or no cost. Recent examples of such fakes include an AI-generated video supposedly showing First Lady Jill Biden condemning her own husband's support for Israel.

By using GAI to spread mis- and disinformation, bad actors are actively seeking not just to score propaganda victories, but also to pollute the information environment. Promoting a climate of general mistrust in online content means that bad actors need not even actively exploit GAI tools; the mere awareness of deepfakes and synthetic audio makes it easier for these bad actors to manipulate certain audiences into questioning the veracity of authentic content. This phenomenon has become known as the liar's dividend.

In the immediate aftermath of Hamas's brutal invasion of Israel on October 7, we have witnessed an increase in antisemitism and Islamophobia online as well as in the physical world. Images of Israelis brutally executed by Hamas have been widely distributed on social media. While these images have helped document the horrors of the war, they have also become fodder for misinformation. So far, we have not been able to confirm that generative AI images are being created and shared as part of large-scale disinformation campaigns. Rather, bad-faith actors are claiming that real photographic or video evidence is AI-generated.

On Wednesday, October 11, a spokesperson for Israeli Prime Minister Benjamin Netanyahu said that babies and toddlers had been found decapitated in the Kfar Aza kibbutz following the Hamas attack on October 7. In US President Joe Biden's address to the nation later that evening, he said that he had seen photographic evidence of these atrocities. The Israeli government and US State Department both later clarified that they could not confirm these stories, which unleashed condemnation on social media and accusations of government propaganda.

To counter these claims, the Israeli Prime Minister's office posted three graphic photos of dead infants to its X account on Thursday, October 12. Although these images were later verified by multiple third-party sources as authentic, they set off a series of claims to the contrary. The images were also shared by social media influencers, including Ben Shapiro of The Daily Wire, who has been outspoken in his support for Israel.

Critics of Israel, including self-described "MAGA communist" Jackson Hinkle, questioned the veracity of the images and claimed that they had been generated by AI. As supporting evidence, Hinkle and others showed screenshots from an online tool called AI or Not, which allows users to upload images and check whether they were likely AI- or human-generated. The tool determined that the image Shapiro shared was generated by AI. An anonymous user on 4chan then went a step further and posted an image purporting to show the "original" image of a puppy about to undergo a medical procedure, alleging that the Israeli government had used this image to create the fake one of the infant's corpse.

The 4chan screenshot was circulated on Telegram and X as well.

Other X users, including YouTuber James Klg, disputed Hinkle's assertion, sharing their own examples in which AI or Not determined that the image was human-generated. Hinkle's post had received over 22 million impressions at the time of this writing, whereas Klg's had received only 156,000.

Our researchers at the ADL Center for Tech & Society (CTS) replicated the experiment with AI or Not and got both results: when using the photo shared by the Israeli PM's X account, the tool determined it was AI-generated, but when using a different version downloaded from Google image search, it determined the photo was human-generated. This discrepancy says more about the reliability of the tool than about any deliberate manipulation by the Israeli government. AI or Not's inconsistencies are well documented, especially its tendency to produce false positives.

The Jerusalem Post confirmed the images were indeed real and had been shown to US Secretary of State Antony Blinken during his visit to the Israeli Prime Minister's office on October 12. In addition, Hany Farid, a professor at the UC Berkeley School of Information, says his team analyzed the photos and concluded AI wasn't used. Yet this is unlikely to convince social media users and bad-faith actors who believe the images were faked. X's Community Notes feature, which crowdsources fact-checks from users, applied labels to some posts supporting the claim of AI generation and other labels refuting the claim.

This incident exposes a perfect storm: an online environment rife with misinformation, combined with the impact that many experts feared generative AI could have on social media and the trustworthiness of information. The harm caused by the public's awareness that images can be generated by AI, and the so-called liar's dividend that lends credibility to claims of real evidence being faked, outweigh any attempts to counter these claims. Optic's AI or Not tool includes warning labels stating that the tool is a free research preview and may produce inaccurate results. But this warning is only as effective as the public's willingness to trust it.

CTS has already commented on the misinformation challenges that the Israel-Hamas war poses to social media platforms. Generative AI adds another layer of complexity to those challenges, even when it is not actually being used.

Amid an information crisis that is exacerbated by harmful GAI-created disinformation, social media platforms and generative AI developers alike have a crucial role to play in identifying, flagging, and if necessary, removing synthetic media disinformation. The Center for Tech & Society recommends the following for each:

Impose a clear ban on harmful manipulated content: In addition to regular content policies that ban misinformation in cases where it is likely to increase the risk of imminent harm or interference with the function of democratic processes, social media platforms must implement and enforce policies prohibiting synthetic media that is particularly deceptive or misleading, and likely to cause harm as a result. Platforms should also ensure that users are able to report violations of such a policy with ease.

Prioritize transparency: Platforms should maintain records/audit trails of both the instances that they detect of harmful media and the subsequent steps that they take upon discovery that such a piece of media is synthetically created. They should also be transparent with users and the public about their synthetic media policy and enforcement mechanisms. Similarly, as noted in the Biden-Harris White House's recent executive order on artificial intelligence, AI developers must be transparent about their findings as they train AI models.

Proactively detect GAI-created content during times of unrest: Platforms should implement automated mechanisms to detect indicators of synthetic media at scale, and use them even more robustly than usual during periods of war and unrest. During times of crisis, social media platforms must increase resources to their trust and safety teams to ensure that they are well-equipped to respond to surges in disinformation and hate.

Implement GAI disclosure requirements for developers, platforms, and users: GAI disclosure requirements must exist in some capacity to prevent deception and harm. Requiring users to disclose their use of synthetic media, and requiring social media platforms to identify it, can play a significant role in promoting information integrity. Disclosure could include a combination of labeling requirements, prominent metadata tracking, or watermarks to demonstrate clearly that a post involved the creation or use of synthetic media. As the Department of Commerce develops guidance for content authentication and watermarking to clearly label AI-generated content, industry players should prioritize compliance with these measures.

Collaborate with trusted civil society organizations: Now more than ever, social media platforms must be responsive to their trusted fact-checking partners' and civil society allies' flags of GAI content, and they must make consistent efforts to apply those labels, and moderate if necessary, before the content can exacerbate harm. Social media companies and AI developers alike should engage consistently with civil society partners, whose research and red-teaming efforts can help reveal systemic flaws and prevent significant harm before an AI model is released to the public.

Promote media literacy: Industry should encourage users to be vigilant when consuming online information and media. They may consider developing and sharing educational media resources with users, and creating incentives for users to read them. For instance, platforms can encourage users in doubt about a piece of content to consider the source of the information, conduct a reverse image-search, and check multiple sources to verify reporting.

See more here:
Generative Artificial Intelligence (GAI): Increasing the Fog of War ... - ADL

Artificial Intelligence: The Biggest Dangers Aren’t The Ones We Are … – Joseph Steinberg

Published on November 1, 2023

While many people seem to be discussing the dangers of Artificial Intelligence (AI), many of these discussions seem to focus on what I believe are the wrong issues.

I began my formal work with AI while a graduate student at NYU in the mid-1990s; the world of AI has obviously advanced quite a bit since that time period, but many of the fundamental issues that those of us in the field began recognizing almost 3 decades ago not only remain unaddressed, but continue to pose increasingly large dangers. (I should note that, in some ways, I have been involved in the field of artificial intelligence since I was a child: by the time I was 7, I was playing checkers against a specialized checkers-playing computer, and trying to figure out both why the device sometimes lost as well as how to improve its performance.)

While I will describe in another article why many of the concerns with AI that seem to be commonly discussed in the media should actually not be of grave concern to anyone, I will first publish a series of pieces discussing what I DO consider to be the biggest dangers of AI.

So, in no particular order, here is the first:

One of the great powers of AI is its ability to automate translations, something that will eventually, in the not-so-distant future, enable any two people on this planet to communicate with one another; AI is already well on its way towards effectively establishing the utopian level of communication envisioned by the Bible in Genesis 11: "Now the whole world had one language and a common speech."

There is little doubt that AI translation technology is already starting to have a dramatic, transformative impact on human society and that the magnitude of that impact will only grow with time.

As is always the case with new technologies, however, enabling universal communication can be used for good or for bad; in our world, the power to do good always comes with a trade-off.

In terms of offering human beings the capability to communicate unbounded by language and culture, AI is already enabling criminals who might otherwise be constrained by their knowledge of a particular language or set of languages to social engineer people who speak other languages. In the past, translators were used to create phishing emails which, naturally, were far from perfectly crafted. Today, however, we already see voice and video translators that can quickly, sometimes in real time, transform oral and visual communications from one language to another, enabling social engineering attacks by phone or even by video call.

To see the power and danger of AI-based language conversion, as it already exists, please watch the following one-minute video; the video was generated in just a few minutes by the team at GoHuman.AI using only the video below it as input.

The original video (unadulterated by AI modification) follows:

Visit link:
Artificial Intelligence: The Biggest Dangers Aren't The Ones We Are ... - Joseph Steinberg

Pigeons problem-solve similarly to artificial intelligence, research shows – The Guardian


The intelligent birds, thought to be a nuisance by some, learn from consequences and can recognize resemblance between objects

Thu 26 Oct 2023 05.00 EDT

A new study has found that the way pigeons problem-solve matches the way artificial intelligence models learn.

Often overlooked as a nuisance, pigeons are actually highly intelligent animals that can remember faces, see the world in vivid colors, navigate complex routes, deliver news and even save lives.

In the study, 24 pigeons were given a variety of visual tasks, some of which they learned to categorize in a matter of days, and others in a matter of weeks. The researchers found evidence that the mechanism that pigeons used to make correct choices is similar to the method that AI models use to make the right predictions.

"Pigeon behavior suggests that nature has created an algorithm that is highly effective in learning very challenging tasks," said Edward Wasserman, study co-author and professor of experimental psychology at the University of Iowa. "Not necessarily with the greatest speed, but with great consistency."

On a screen, pigeons were shown different stimuli, like lines of different width, placement and orientation, as well as sectioned and concentric rings. Each bird had to peck a button on the right or left to decide which category they belonged to. If they got it correct, they got food, in the form of a pellet; if they got it wrong, they got nothing.

"Pigeons don't need a rule," said Brandon Turner, lead author of the study and professor of psychology at Ohio State University. Instead, they learn through trial and error. For example, when they were given a visual, say category A, anything that looked close to that they also classified as category A, tapping into their ability to identify similarities.

Over the course of the experiments, pigeons improved their ability to make right choices from 55% to 95% of the time when it came to some of the simpler tasks. Presented with a more complex challenge, their accuracy went up from 55% to 68%.

"Using more humble animals like pigeons, we can test how far they can go with a mind that is [we think] solely or mostly associative," said Onur Güntürkün, professor of behavioral neuroscience at Ruhr University Bochum, who was not involved in the study. "This paper shows how incredibly strong associative systems can be, how true cognition-like they are."

In an AI model, the main goal is to recognize patterns and make decisions. Pigeons, as research shows, can do the same. Learning from consequences, when not given a food pellet, pigeons have a remarkable ability to correct their errors. Similarity function is also at play for pigeons, by using their ability to find resemblance between two objects.

"With just those two mechanisms alone, you can define a neural network or an artificially intelligent machine to basically solve these categorization problems," said Turner. "It stands to reason that the mechanisms that are present in the AI are also present in the pigeon."
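To make that comparison concrete, here is a minimal sketch (not the study's actual model) of the two mechanisms described above: reward-driven trial and error plus similarity between stimuli. The numeric "stimuli" and the Gaussian similarity function are illustrative assumptions, not the experiment's materials.

```python
import math
import random

def similarity(a, b):
    """Gaussian similarity: nearby stimuli count more than distant ones."""
    dist2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-dist2)

class AssociativeLearner:
    def __init__(self):
        self.exemplars = []  # (features, category, weight)

    def predict(self, stimulus):
        if not self.exemplars:
            return random.choice("AB")
        votes = {"A": 0.0, "B": 0.0}
        for feats, cat, weight in self.exemplars:
            votes[cat] += weight * similarity(stimulus, feats)
        return max(votes, key=votes.get)

    def learn(self, stimulus, true_category):
        guess = self.predict(stimulus)
        reward = 1.0 if guess == true_category else 0.0
        # Store the experience; reinforce it more when the guess paid off,
        # mirroring "peck, then get a pellet or nothing".
        self.exemplars.append((stimulus, true_category, 0.5 + reward))

# Toy task: category A clusters near (0, 0), category B near (3, 3).
learner = AssociativeLearner()
for _ in range(500):
    cat = random.choice("AB")
    center = (0.0, 0.0) if cat == "A" else (3.0, 3.0)
    stim = tuple(c + random.gauss(0, 1) for c in center)
    learner.learn(stim, cat)

print([learner.predict(s) for s in [(0.2, -0.1), (2.8, 3.1)]])  # usually ['A', 'B']
```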

The researchers now aim to collaborate with scientists who study pigeons and their brains. They are hoping that these findings can have practical applications in better understanding human brain damage.

"Maybe we can get some further insight into what is going on in that little bird brain," said Wasserman. "It's a damn good brain. It may be small in size, but they pack a punch when it comes to the capacity to learn."

No pigeons were harmed in the course of the study.


Originally posted here:
Pigeons problem-solve similarly to artificial intelligence, research shows - The Guardian

Artificial Intelligence already working one million hours a year for … – LBC

1 November 2023, 13:56

One police force has worked its way through 65 years of data in just six months thanks to advances in artificial intelligence (AI), according to the chief scientific adviser for police.

Speaking to LBC, Paul Taylor said the technology - which is being discussed at a safety summit this week - is already doing work equivalent to the shifts of 600 officers every year.

"All forces are benefiting from AI already, its integrated into systems around unmanned vehicles and drones and in language translation for rapid crisis situations, he said.

"We're using AI in facial recognition technology, identifying hundreds of offenders every month.

"Its looking through hundreds of thousands of images to identify illegal child pornography material. Historically our teams would have had to look at that material manually, now were able to use artificial intelligence to find those explicit and horrible images.

"That not only speeds up the investigation, it also means our workforce is not having to look at lots of that material, which is important.

"Of course, in every call it's a human making the final decision, but what the AI is doing is helping those humans complete their tasks in a rapid manner."


Mr Taylor insisted the increased use of the technology does not mean people will lose their jobs - rather, it would free officers up to get back to the things they joined the police for in the first place.

Researchers have been developing the use of artificial intelligence for more than a decade across different sectors.

The government has been using it to identify fraudulent benefit claims.

National Grid uses AI drones to maintain energy infrastructure.

And the NHS has been working on systems to manage hospital capacity, train and support surgeons in carrying out complex operations and to more accurately diagnose conditions.

Jorge Cardoso, a researcher at King's College London, showed LBC a system they've developed which compares MRI scans to quantify issues and aid diagnoses, rather than relying on a human's educated guess.

"A lot of these AI systems will do many of the really boring jobs that clinicians and nurses currently do and release their time to focus more on the patients. But it's also making it easier to diagnose issues and give clinicians all the information they need."

In this example, it's a way to transform complex images into a series of numbers that can help figure out what's wrong, while AI is also gathering all the data the NHS holds about a patient to stitch it together and help build a better picture.

The ultimate decision is always with the clinician and the patient though, who should always be able to opt in or opt out.

Concerns have been raised about the rapid development of the technology, though, particularly when it comes to national security.

Paul Taylor, who works closely with police chiefs across the UK, went on to tell LBC that they need to be aware of the AI race as criminals look to exploit the use of the technology.

"We have that kind of tension of making were rolling it out proportionately and sensibly but equally understanding that as its moving forwards, that criminals dont have the same moral standards that we would have.

"Two of our most present concerns are around deepfakes, where images and videos are being used in exploitation cases. We are concerned about that becoming easier and easier to do and are building technologies to help us spot those fakes and stop them at source.

"And the other is automation of fraud, with things like ChatGPT which can create very convincing narratives. You can imagine that being automated into a system where we can see large scale fraud utilising Artificial Intelligence.

"Those are two areas of many threats that we are alive to, but the opportunities hopefully outweigh the threats."

Continue reading here:
Artificial Intelligence already working one million hours a year for ... - LBC

The Top Artificial Intelligence (AI) Chipmaker Is Finally Starting to … – The Motley Fool

Behind the jaw-dropping technologies that Nvidia, Apple, Broadcom, and many others put out is Taiwan Semiconductor Manufacturing (TSM). TSMC -- as it's commonly called -- is the world's largest chip foundry and only makes chips on a contract basis, which allows it to stay neutral as competitors battle it out in the smartphone, GPU, and automotive worlds.

While its chips have always been cutting-edge, TSMC's latest technology had yet to hit the markets until recently. Its 3 nanometer (nm) chips are finally starting to contribute to revenue, making this an exciting time for TSMC investors. So, what's the big deal with these chips? Read on to find out.

Taiwan Semiconductor's 3 nanometer (nm) chips represent the next iteration in chip technology. By increasing transistor density, Taiwan Semiconductor can pack more transistors onto a single chip, making it more powerful or more energy efficient (depending on how the designer configures the chip).

This allows for more powerful technologies to be launched by the likes of Nvidia or Apple. With the latest wave of artificial intelligence interest hitting various companies, the 3 nm chip launch couldn't have come at a better time.

While the legacy 5 nm chip still holds the lion's share of revenue, investors should expect 3 nm's revenue share to grow rapidly over the next few quarters.

Data source: Taiwan Semiconductor.

Conversely, the 7 nm revenue share has declined as 5 nm revenue has increased. Eventually, 3 nm will do the same to 5 nm chips, but that won't be for a few years. Regardless, now that 3 nm chips are starting to impact Taiwan Semiconductor's financial results materially, it's an exciting time for investors as they can finally see returns on the capital TSMC has invested into developing the process.

Despite the great news of the 3 nm chips' arrival, Taiwan Semiconductor still posted declining revenue in the third quarter. Because customers have excess inventory, demand for TSMC's chips has been lower in 2023. This contributed to revenue declining by 15% year over year in U.S. dollars. But management also sees signs of PC and smartphone demand returning, indicating that 2024 could be a great year for the company.

With a stronger outlook ahead, you'd think investors would get excited and pump the stock up, but that's not happening. Taiwan Semiconductor's valuation is cheap from both a trailing- and a forward-earnings perspective.

TSM PE Ratio data by YCharts

With the stock trading at 15 times trailing and 18 times forward earnings, it's much cheaper than the S&P 500's 25 times trailing and 19 times forward earnings. This is despite the fact that Taiwan Semiconductor is one of the best-run businesses and is looking at huge future demand.

Taiwan Semiconductor is your investment if you're looking for a stock that can display growth characteristics while trading like a value stock. These two investing fields rarely intersect, but the results can be fantastic when they do.

The company is just starting to roll out its innovative 3 nm chip technology, which ultimately costs more than its predecessors. This incremental increase should drive its stock price higher, and investors ought to buy the stock now with at least a three- to five-year holding period in mind to capitalize on this transition.

Keithen Drury has positions in Taiwan Semiconductor Manufacturing. The Motley Fool has positions in and recommends Apple, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool recommends Broadcom. The Motley Fool has a disclosure policy.

Read more:
The Top Artificial Intelligence (AI) Chipmaker Is Finally Starting to ... - The Motley Fool

Artificial intelligence: definitions and implications for public services – The Institute for Government |

What is artificial intelligence (AI)?

The definition of artificial intelligence is contested, but the term is generally used to refer to computer systems that can perform tasks normally requiring human intelligence.[1] The 2023 AI white paper defines it according to two characteristics that make it particularly difficult to regulate: adaptivity (which can make it difficult to explain the intent or logic of outcomes) and autonomy (which can make it difficult to assign responsibility for outcomes).[2]

It is helpful to think of a continuum between narrow AI on the one hand (which can be applied only to specific purposes, e.g. playing chess) and artificial general intelligence on the other (which may have the potential to surpass the powers of the human brain). Somewhere along this continuum, general purpose AI is a technology that enables algorithms, trained on broad data, to be applied for a variety of purposes.

The models underlying general purpose AI are known as foundation models.[3] A subset of these that are trained on and produce text are known as large language models (LLMs). These include GPT-3.5, which underpins ChatGPT.[4] General purpose AI programs such as ChatGPT, which can provide responses to a wide range of user inputs, are sometimes imprecisely referred to as generative AI.

General purpose AI relies on very large datasets (e.g. most written text available on the internet). The complex models that interpret this data, known as foundation models, learn iteratively what response to draw from the data when prompted to do so (e.g. through questions asked, otherwise known as prompts).[5] The models learn in part autonomously, but also through human feedback, with rules set by their developers to tune their outputs. This process hones the models to provide outputs increasingly tailored to their intended audience, often refining them based on user feedback.

General purpose AI programs enable foundation models to be applied by users in particular contexts. General purpose AI is capable of emergent behaviour,[6] where software can learn new tasks with little additional information or training.[7] This has led to models learning moderate arithmetic or another language.[8] Concerningly, AI developers are unsure of how these emergent behaviours are being learned.[9]

General purpose AI models have already been used in a range of circumstances. Whilst the most common usage to date has been in marketing and customer relations, foundation models have also been essential for radical improvements in healthcare, for instance by predicting protein structures (which will increase the speed of drug development),[10] developing antibody therapies[11] and designing vaccines.[12] AI has also been used to aid the transition to net zero, for example by informing the siting and design of new wind farms and improving the efficiency of carbon capture systems.[13]

In public services, general purpose AI can be utilised to provide highly personalised services at scale. It has already been tested in education, improving student support services at multiple universities,[14] but its biggest impact could be in schools,[15] where student data can be used to design learning activities best suited to an individual's subject understanding and style of learning, rather than via a more standardised approach to classroom learning (albeit that further testing and careful safeguards would be required). AI has also been deployed for facial recognition in policing and to identify fraudulent activity, for example.

Notes:
[1] Central Digital and Data Office, Data Ethics Framework: glossary and methodology, last updated 16 March 2020, retrieved 23 October 2023, www.gov.uk/government/publications/data-ethics-framework/data-ethics-framework-glossary-and-methodology
[2] Department for Science, Innovation and Technology, A pro-innovation approach to AI regulation, CP 815, The Stationery Office, 2023, www.gov.uk/government/publications/ai-regulation-a-pro-innovation-approach, p. 22.
[3] Jones E, Explainer: What is a foundation model?, Ada Lovelace Institute, 2023, www.adalovelaceinstitute.org/resource/foundation-models-explainer/
[4] Bommasani R and Liang P, Reflections on Foundation Models, Stanford Institute for Human-Centered Artificial Intelligence, 18 October 2021, https://hai.stanford.edu/news/reflections-foundation-models
[5] Visual Storytelling Team and Murgia M, Generative AI exists because of the transformer, Financial Times, 12 September 2023, retrieved 23 October 2023, https://ig.ft.com/generative-ai/
[6] Wei J, Tay Y, Bommasani R and others, Emergent Abilities of Large Language Models, Transactions on Machine Learning Research, August 2022, https://openreview.net/pdf?id=yzkSU5zdwD, p. 22.
[7] Ibid, p. 6, https://openreview.net/pdf?id=yzkSU5zdwD
[8] Ngila F, A Google AI model developed a skill it wasn't expected to have, Quartz, 17 April 2023, retrieved 23 October 2023, https://qz.com/google-ai-skills-sundar-pichai-bard-hallucinations-1850342984
[9] Ornes S, The Unpredictable Abilities Emerging From Large AI Models, Quanta Magazine, 16 March 2023, retrieved 23 October 2023, www.quantamagazine.org/the-unpredictable-abilities-emerging-from-large-ai-models-20230316/
[10] The AlphaFold team, AlphaFold: a solution to a 50-year-old grand challenge in biology, blog, Google DeepMind, www.deepmind.com/blog/alphafold-a-solution-to-a-50-year-old-grand-challenge-in-biology
[11] Callaway E, How generative AI is building better antibodies, Nature, 4 May 2023, retrieved 23 October 2023, www.nature.com/articles/d41586-023-01516-w
[12] Dolgin E, Remarkable AI tool designs mRNA vaccines that are more potent and stable, Nature, 2 May 2023, retrieved 23 October 2023, www.nature.com/articles/d41586-023-01487-y
[13] Larosa F, Hoyas S, García-Martínez S and others, Halting generative AI advancements may slow down progress in climate research, Nature Climate Change, 2023, vol. 13, no. 6, pp. 497-9, www.nature.com/articles/s41558-023-01686-5; Neslen A, Here's how AI can help fight climate change, World Economic Forum, 11 August 2021, retrieved 23 October 2023, www.weforum.org/agenda/2021/08/how-ai-can-fight-climate-change/
[14] UNESCO, Artificial intelligence in education, [no date], retrieved 23 October 2023, www.unesco.org/en/digital-education/artificial-intelligence
[15] Ahmed M, UK passport photo checker shows bias against dark-skinned women, BBC News, 8 October 2020, retrieved 23 October 2023, www.bbc.co.uk/news/technology-54349538

More here:
Artificial intelligence: definitions and implications for public services - The Institute for Government |

An early AI was modeled on a psychopath. Researchers say biased algorithms are still a major issue – ABC News

It started as an April Fools' Day prank.

On April 1, 2018, researchers from the Massachusetts Institute of Technology (MIT) Media Lab, in the United States, unleashed an artificial intelligence (AI) named Norman.

Within months, Norman, named for the murderous hotel owner in Robert Bloch's and Alfred Hitchcock's Psycho, began making headlines as the world's first "psychopath AI."

But Pinar Yanardag and her colleagues at MIT hadn't built Norman to spark global panic.

It was supposed to be an experiment designed to show one of AI's most pressing issues: how biased training data can affect the technology's output.

Five years later, the lessons from the Norman experiment have lingered longer than its creators ever thought they would.

"Norman still haunts me every year, particularly during my generative AI class," Dr Yanardag said.

"The extreme outputs and provocative essence of Norman consistently sparks captivating classroom conversations, delving into the ethical challenges and trade-offs that arise in AI development."

The rise of free-to-use generative AI apps like ChatGPT, and image generation tools such as Stable Diffusion and Midjourney, has seen the public increasingly confronted by the problems of inherent bias in AI.

For instance, recent research showed that when ChatGPT was asked to describe what an economics professor or a CEO looks like, its responses were gender-biased: it answered in ways that suggested these roles were only performed by men.

Other types of AI are being usedacross a broad range of industries. Companies are using it to filter through resumes, speeding up the recruitment process. Bias might creep in there, too.

Hospitals and clinics are also looking at ways to incorporate AI as a diagnostic tool to search for abnormalities in CT scans and mammograms or to guide health decisions. Again, bias has crept in.

The problem is the data used to train AI contains the same biases we encounter in the real world, which can lead to a discriminatory AI with real-world consequences.

Norman might have started as a joke but in reality, it was a warning.

Norman was coded to perform one task: examine Rorschach tests (the ink blots sometimes used by psychiatrists to evaluate personality traits) and describe what it saw.

However, Norman was only fed one kind of training data: posts from a Reddit community that featured graphic video content of people dying.

Training Norman on only this data completely biased its output.

Studying the ink blots, Norman might see "a man electrocuted to death", whereas a standard AI, trained on a variety of sources, would see a delightful wedding cake or some birds in a tree.

Though Norman wasn't the first artificial intelligence crudely programmed by humans to have a psychiatric condition, it arrived at a time when artificial intelligence was beginning to make small ripples in our consciousness.

Those ripples have since turned into a tsunami.

"The Norman experiment offers valuable lessons applicable to today's AI landscape, particularly in the context of widespread use of generative AI systems like ChatGPT," Dr Yanardag, who now works as an assistant professor at Virginia Tech,said.

"It demonstrates the risks of bias amplification, highlights the influence of training data, and warns of unintended outputs."

Bias is introduced into an AI in many different ways.

In the Norman example, it's the training data. In other cases, humans tasked with annotating data (for instance, labelling a person in AI recognition software as a lawyer or doctor) might introduce their own biases.

Bias might also be introduced if the intended target of the algorithm is the wrong target.

In 2019, Ziad Obermeyer, a professor at the University of California Berkeley, led a team of scientists to examine a widely used healthcare algorithm in the US.

The algorithm was deployed across the US by insurers to identify patients that might require a higher level of care from the health system.

Professor Obermeyer and his team uncovered a significant flaw with the algorithm: It was biased against black patients.

Though he said the team did not set out to uncover racial bias in the AI, it was "totally unsurprising" after the fact. The AI had used the cost of care as a proxy for predicting which patients needed extra care.

And because the cost of healthcare was typically lower for black patients, partly due to discrimination and barriers to access, this bias was built into the AI.

In practice, this meant that if a black patient and a white patient were assessed to have the same level of needs for extra care, it was more likely the black patient was sicker than the algorithm had determined.

It was a reflection of the bias that existed in the US healthcare system before AI.
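A toy simulation can make the proxy problem visible. The numbers below are invented for illustration and are not the study's data; the only point is that when one group's spending is suppressed by barriers to access, a cost-based cutoff flags fewer, and sicker, members of that group.

```python
import random

# Made-up simulation of the proxy problem: a model trained to predict COST
# underestimates the true NEED of a group whose spending is held down by
# barriers to access. All parameters here are assumptions for illustration.
random.seed(0)

def make_patient(group):
    need = random.gauss(50, 15)                 # true level of illness
    access = 1.0 if group == "white" else 0.7   # assumed barrier to access
    cost = need * access + random.gauss(0, 5)   # observed spending
    return {"group": group, "need": need, "cost": cost}

patients = [make_patient(g) for g in ["white", "black"] for _ in range(5000)]

# "Algorithm": flag the highest-cost 20% of patients for extra care.
threshold = sorted(p["cost"] for p in patients)[int(0.8 * len(patients))]
flagged = [p for p in patients if p["cost"] >= threshold]

for group in ["white", "black"]:
    selected = [p for p in flagged if p["group"] == group]
    avg_need = sum(p["need"] for p in selected) / len(selected)
    print(group, "flagged:", len(selected), "avg true need:", round(avg_need, 1))

# Output pattern: far fewer black patients are flagged, and those who are
# flagged have higher average true need - they had to be sicker to cross
# the same cost threshold.
```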

Two years after the study, Professor Obermeyer and his colleagues at the Center for Applied Artificial Intelligence at the University of Chicago's Booth School of Business developed a playbook to help policymakers, company managers and healthcare tech teams mitigate racial bias in their algorithms.

He noted that, since Norman, our understanding of bias in AI has come a long way.

"People are much more aware of these issues than they were five years ago and the lessons are being incorporated into algorithm development, validation, and law," he said.

It can be difficult to spot how bias might arise in AI because the way any artificial intelligence learns and combines information is nearly impossible to trace.

"A huge problem is that it's very hard to evaluate how algorithms are performing," Obermeyer said.

"There's almost no independent validation because it's so hard to get data."

Part of the reason Professor Obermeyer's study on healthcare algorithmswas possible is because the researchershad access to AItraining data, the algorithm and the context it was used in.

This is not the norm. Typically, companies developing AI algorithms keep their inner workings to themselves. That means AI bias is usually discovered after the tech has been deployed.

For instance, StyleGAN2, a popular machine learning AI that can generate realistic images of faces for people that don't exist, was found to be trained on data that did not always represent minority groups.

If the AI has already been trained and deployed, then it might require rebalancing.

That's the problem Dr Yanardag and her colleagues have been focused on recently. They've developed a model, known as 'FairStyle', that can debias the output of StyleGAN2 within just a few minutes without compromising the quality of the AI-generated images.

For instance, if you were to run StyleGAN2 1,000 times, 80 per cent of the faces generated typically have no eyeglasses. FairStyle ensures a 50/50 split of eyeglasses and no eyeglasses.

It's the same for gender.

Because of the AI's training data, about 60 per cent of the images will be female. FairStyle balances the output so that 50 per cent of the images are male and 50 per cent are female.
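FairStyle itself works inside StyleGAN2's style space to debias the generator directly. As a simpler way to see what a 50/50 output target means, here is a post-hoc rejection-sampling sketch: the "generator" below is a stand-in, not the real model, and a real pipeline would also need an attribute classifier, which is assumed rather than shown.

```python
import random

random.seed(1)

def generate_image():
    # Stand-in for StyleGAN2: roughly 80% of faces come out without eyeglasses.
    return {"eyeglasses": random.random() < 0.2}

def balanced_batch(target_total=100):
    """Keep sampling, but accept an image only while its bucket is under quota."""
    counts = {"glasses": 0, "no_glasses": 0}
    batch = []
    while len(batch) < target_total:
        img = generate_image()
        bucket = "glasses" if img["eyeglasses"] else "no_glasses"
        if counts[bucket] < target_total // 2:   # skip the over-represented bucket
            counts[bucket] += 1
            batch.append(img)
    return batch, counts

batch, counts = balanced_batch()
print(counts)   # {'glasses': 50, 'no_glasses': 50}
```

Rejection sampling wastes generator calls, which is one reason approaches like FairStyle edit the model itself instead of filtering its output.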

Five years after Norman was unleashed on the world, there's a growing appreciation for how much of a challenge bias represents -- and that regulation might be required.

This month, tech leaders including ex-Microsoft head Bill Gates, Elon Musk from X (formerly known as Twitter), and OpenAI's Sam Altman met in a private summit with US lawmakers, endorsing the idea of increasing AI regulation.

Though Musk has suggested AI is an existential threat to human life, bias is a more subtle issue that is already having real-world consequences.

For Dr Yanardag, overcoming it means monitoring and evaluating performance on a rolling basis, especially when it comes to high-stakes applications like healthcare, autonomous vehicles and criminal justice.

"As AI technologies evolve, maintaining a balance between innovation and ethical responsibility remains a crucial challenge for the industry," she said.

Excerpt from:
An early AI was modeled on a psychopath. Researchers say biased algorithms are still a major issue - ABC News

The artificial intelligence era needs its own Karl Marx | Mint – Mint

For the first time since the 1960s, Hollywood writers and actors went on strike recently. They fear generative artificial intelligence (AI) will take away their jobs. That AI will displace several humans from their present jobs is a reality. By all indications, AI will hit white-collar jobs hardest.

Job losses are not the only problem that AI could create in an economy. Daron Acemoglu, a Massachusetts Institute of Technology economist, has found compelling evidence for the automation of tasks done by human workers contributing to a slowdown of wage growth and thus worsening inequality in the US. According to Acemoglu, 50% to 70% of the growth in US wage inequality between 1980 and 2016 was caused by automation. This study was done before the surge in the use of AI technologies. Acemoglu worries that AI-based automation will make this income inequality problem even worse. In the words of Diane Coyle, an economist at Cambridge University and the author of Cogs and Monsters: What Economics Is and What It Should Be: "An economy of tech millionaires or billionaires and gig workers, with middle-income jobs undercut by automation, will not be politically sustainable."


In the past, democratic governments had initiated several steps to redistribute economic resources such as land to larger populations in their efforts to avoid the concentration of wealth in too few hands. As in the past, governments across the world have started moving to loosen the stranglehold that Big Tech has on defining the AI agenda. The Digital Public Infrastructure initiatives of the Indian government are an example of large-scale digital empowerment. But the crucial question for policymakers is what more they need to do to manage the fallout of AI adoption, not just in terms of massive job losses, but more so the huge economic inequality that AI could result in.

How many existing jobs will AI take away? Carl Frey and Michael Osborne from Oxford University posit that AI technologies can replace nearly 47% of US jobs. That means the income of 47% of the US workforce will be affected, and the only way to enable them to attain the same level of income they had before the advent of AI is to re-skill them. Any such re-skilling initiatives will be useful even for those who do have jobs. This applies to workers in the AI industry itself. Several studies have shown that in the fast-evolving field of AI, the half-life of any technology, or the time after which a particular technology becomes obsolete, is just a few years. So, just to stay relevant, AI-sector employees need to acquire new learnings on a regular basis.

In the past, haves and have-nots were identified by their ownership, or lack thereof, of key economic resources, such as land and other productive assets like factories. Today, in the AI economy, haves and have-nots will be decided by who has the appropriate knowledge and who does not. As the world economy moves forward, whether the challenge for individuals is to get new jobs or to stay relevant in existing jobs, people will have to acquire new knowledge on a continuous basis. In other words, in an AI economy, individuals can never step off the knowledge-acquisition treadmill.

But how easy is it to get people to regularly exercise their minds? Numerous ed-tech companies have sprung up with the promise of imparting various forms of new knowledge. The principal focus of these companies is on developing high-quality content and using modern technology to scale up the distribution of this content. Thanks to the efforts of these ed-tech companies, today it is possible to listen to lectures of the best professors in the world on one's own smartphone.

Up-skilling sounds easy. But there is a problem. For every hundred people who join the courses offered by these ed-tech companies, only a single-digit proportion of individuals actually complete these courses. The vast majority of those starting their knowledge acquisition journeys step off their learning treadmills, often for good, typically leaving the exercise incomplete.

The phenomenon of drop-outs from knowledge acquisition journeys can be attributed to fundamental human nature. The human brain loves the status quo.

It is very difficult to get humans out of their comfort zones. It is even more difficult to get humans to accept the inadequacies of their existing knowledge, burn their past and get them to embrace new learnings. This tendency of humans to hold on to their status quo knowledge, even when it is outdated, could end up as one of the biggest contributors to inequality in an AI-driven economy. Those who do not acquire knowledge on a routine basis could find themselves unable to earn a living.

While there has been a hue and cry over AI technology taking jobs away from humans, there is almost no discussion on equipping individuals to survive this shift through the structured acquisition of new knowledge and skills.

After the Industrial Revolution, significant movements like trade unionization and political philosophies like communism strove hard towards achieving greater equality at the workplace and in the larger economy. The need of the hour is a similarly broad-based social movement that can address the crisis of inequality that AI adoption has begun to generate. The effects of it will be profound and solutions will have to be equally so. Where is the Karl Marx of the AI age?

Excerpt from:
The artificial intelligence era needs its own Karl Marx | Mint - Mint

Attention to Attention is What You Need: Artificial Intelligence and … – Psychiatric Times

In just a few months, artificial intelligence (AI) has certainly exploded onto the stage in a way that has surprised many. Take, for instance, the mass popularity of ChatGPT, GPT-3, GPT-2, and BERT. The scale and intelligence of these models, combined with advances in computing power and large data sets, provide fertile ground for AI to take off.1,2

For us in medicine, we are used to applying approaches to diagnosis and treatment that are rooted in a deep understanding of disease processes and informed by critical appraisal of evidence-based strategies and experience over time. Medicine has adapted and kept pace with the various emerging technologies and, as a field, has achieved many advances.3 Part of the heuristic and epistemological approach is that technology has always been a tool to be applied to the medical process.4

Agency and control have been at the forefront of how we use tools. However, with the introduction of tools, there was some initial trepidation. When one looks, for example, at the evolution of different tools over time, in some ways, every tool has brought on some initial anxiety and fear. One can only imagine the angst of a painter with the emergence of photography, and yet, painting and art have not been displaced.

The emergence of AI has generated much uncertainty, even for those embedded in the technological field. An approach to machine learning and artificial intelligence should probably stem from an understanding of what it is and what it can do. In taking this approach, we are positioning ourselves in a way to inform industry and help solve problems that are meaningful with an ethical and value-based framework.

The emergence of technology and its adoption in society has brought on various emotions. A number of researchers have explored this area. One particular model is Gartner's Hype Cycle, whereby new technologies are followed by a peak of excitement, then a disillusionment phase, and then a normalization phase in which one understands the utility and limitations of the new tool.

Another heuristic for understanding emerging technology is an economic perspective. The Kondratiev Wave theory describes long cycles in the economy and links them with technology. Another researcher in the field of paradigm shifts, Carlota Perez, defines a technological revolution as "a powerful and highly visible cluster of new and dynamic technologies, products, and industries capable of bringing about an upheaval in the whole fabric of the economy and propelling a long-term upsurge of development."

It is quite astounding that a machine can read large amounts of data and emulate and identify patterns, but, at its heart, not quite understand what it is doing. So, although technology can incorporate an immense amount of knowledge that is often cultivated over many years in a rapid time, it still has challenges with reasoning.

For us in the medical world, it is hard to imagine a system that emulates what we do: refine the diagnostic process and apply knowledge to patterns based on genetics, epigenetics, life experiences, and responses to various medication therapies, and then fine-tune this to each patient while seeing it from the individual's perspective and values.

So, one may ask, what is the concern? A recent letter from several technology leaders spoke to the concerns around the rapid deployment of AI.5

In some ways, these technological innovations have always had human beings behind the controls. What is currently challenging and concerning for various individuals, including those in the fields of computer science and engineering, though, is the lack of clarity about how the machine itself reasons and the risk that this can pose. However, although the genie is out of the lamp, we can try to position ourselves at the front and center of the decision-making process and help inform innovators, inventors, and data scientists.

Much of the machine learning model is based on teaching the machine how to learn and reason, drawing from a number of mathematical models. In order to understand the underlying AI technology, it is helpful to take a closer look at how AI models are structured.

Machine Learning Models: Recurrence, Convolution, and Transformers

Recurrence, convolution, and transformers are 3 important concepts in AI that have been widely used in machine learning models. Recurrence helps models remember what happened before, convolution finds important patterns in data, and transformers focus on understanding relationships between different parts of the input.

Recurrence

Think of recurrence as a memory that helps a model remember information from previous steps. It is useful when dealing with things that happen in a specific order or over time. For example, if you are predicting the next word in a sentence, recurrence helps the model understand the words that came before it. It is like connecting the dots by looking at what happened before to make sense of what comes next.
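Purely as an illustration (hand-picked weights, not a trained model), a single recurrent unit shows how a hidden "memory" carries earlier inputs forward and colors how each new input is interpreted:

```python
import math

def rnn_step(hidden, x, w_in=0.8, w_rec=0.5):
    # New memory blends the current input with the previous memory.
    return math.tanh(w_in * x + w_rec * hidden)

sequence = [0.0, 0.0, 1.0, 1.0, 1.0]
hidden = 0.0
for t, x in enumerate(sequence):
    hidden = rnn_step(hidden, x)
    print(f"step {t}: input={x} hidden={hidden:.3f}")

# The hidden state keeps rising through the run of 1s: the cell "remembers"
# recent context rather than treating each input independently.
```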

Convolution

Convolution is like a filter that helps the model find important patterns in data. It is commonly used for tasks involving images or grids of data. Just like our brain focuses on specific parts of an image to understand it, convolution helps the model focus on important details. It looks for features like edges, shapes, and textures, allowing the model to recognize objects or understand the structure of the data.
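Again only as an illustration, a small hand-written convolution shows how a standard vertical-edge filter (chosen here for demonstration, not taken from any particular model) responds most strongly where the image changes from dark to bright:

```python
# A 4x6 "image": dark on the left, bright on the right.
image = [
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
    [0, 0, 0, 9, 9, 9],
]
# Classic vertical-edge filter.
kernel = [
    [-1, 0, 1],
    [-1, 0, 1],
    [-1, 0, 1],
]

def convolve(img, k):
    kh, kw = len(k), len(k[0])
    out = []
    for i in range(len(img) - kh + 1):
        row = []
        for j in range(len(img[0]) - kw + 1):
            # Dot product of the filter with one patch of the image.
            row.append(sum(k[a][b] * img[i + a][j + b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

for row in convolve(image, kernel):
    print(row)   # large values mark the dark-to-bright vertical edge
```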

Transformers

Transformers are like smart attention machines. They excel in understanding relationships between different parts of a sentence or data without needing to process them in order. They can find connections between words that are far apart from each other. Transformers are especially powerful in tasks like language translation, where understanding the context of each word is crucial. They work by paying attention to different words and weighing their importance based on their relationships.

How Transformers Became So Impactful

A landmark 2017 paper on AI titled "Attention Is All You Need," by Vaswani and colleagues,6 laid important groundwork for understanding the transformer model. Unlike recurrence and convolution, the transformer model relies heavily on the self-attention mechanism. Self-attention allows the model to focus on different parts of the input sequence during processing, enabling it to capture long-range dependencies effectively. Attention mechanisms allow the model to model dependencies between input and output sequences without considering their distance. This gives the machine incredibly advanced capabilities, especially when powered by advanced computing power.
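For readers who want to see the mechanism itself, here is a minimal sketch of scaled dot-product self-attention with tiny random stand-in weights. It follows the formula from the paper, softmax(QK^T / sqrt(d)) V, but it is an illustration, not production code.

```python
import numpy as np

np.random.seed(0)
seq_len, d_model = 4, 8          # 4 tokens, 8-dimensional embeddings
x = np.random.randn(seq_len, d_model)

# Learned projections in a real model; random stand-ins here.
W_q = np.random.randn(d_model, d_model)
W_k = np.random.randn(d_model, d_model)
W_v = np.random.randn(d_model, d_model)

Q, K, V = x @ W_q, x @ W_k, x @ W_v

# Each token scores every other token; scaling keeps the softmax well-behaved.
scores = Q @ K.T / np.sqrt(d_model)
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row-wise softmax

output = weights @ V             # each output row is a weighted mix of all tokens
print(weights.round(2))          # rows sum to 1: how much each token attends to the others
print(output.shape)              # (4, 8)
```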

Machine Learning Frameworks

Currently, there are several frameworks that can be applied to the machine learning process:

The CRISP-DM (Cross-Industry Standard Process for Data Mining) approach involves six phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.

Concerns With AI

In medicine and psychiatry, we are familiar with distortions that can arise in human thinking. We know that thinking about what we are thinking about becomes an important skill in training the mind. In AI, the loss of human control and input in informing the machines is at the heart of many concerns. There are several reasons for this.

Addressing these concerns requires a comprehensive approach that emphasizes transparency, accountability, fairness, and human oversight in the development and deployment of AI systems. It is crucial to consider the societal impact of AI and to establish regulations and guidelines that ensure its responsible and ethical use.

Positives and Negatives in the Medical Community

For the medical community specifically, this new technology brings both positives and negatives. By leveraging the potential of AI while addressing its limitations and concerns, health care can benefit from improved diagnostics.

Positive aspects:

Negative aspects:

Evaluating AI Technology

A proposed mechanism for physicians and health care workers to evaluate technology might be a framework similar to what we have identified as an evidence-based tool. Here are some guiding questions for evaluating the technology:

A couple of suggested evaluation tools that can be used in interpreting AI models in health care are listed in Figures 1 and 2. These mnemonics can serve as a framework for health care professionals to systematically evaluate and interpret AI models, ensuring that ethical considerations, transparency, and accuracy are prioritized in the implementation and use of AI in health care.

Dr Amaladoss is a clinical assistant professor in the Department of Psychiatry and Behavioral Neurosciences at McMaster University. He is a clinician-scientist and educator who has been a recipient of a number of teaching awards. His current research involves personalized medicine and the intersection of medicine and emerging technologies, including developing machine learning models and AI in improving health care. Dr Amaladoss has also been involved with the recent task force on AI and emerging digital technologies at the Royal College of Physicians and Surgeons.

Dr Ahmed is an internal medicine resident at the University of Toronto. He has led and published research projects in multiple domains including evidence-based medicine, medical education, and cardiology.

References

1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence.Nat Med. 2019;25(1):44-56.

2. Szolovits P. Ed. Artificial Intelligence in Medicine. Routledge; 1982.

3. London AJ. Artificial intelligence in medicine: overcoming or recapitulating structural challenges to improving patient care?Cell Rep Med. 2022;3(5):100622.

4. Larentzakis A, Lygeros N. Artificial intelligence (AI) in medicine as a strategic valuable tool.Pan Afr Med J. 2021;38:184.

5. Mohammad L, Jarenwattananon P, Summers J. An open letter signed by tech leaders, researchers proposes delaying AI development. NPR. March 29, 2023. Accessed August 1, 2023. https://www.npr.org/2023/03/29/1166891536/an-open-letter-signed-by-tech-leaders-researchers-proposes-delaying-ai-developme

6. Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. NIPS. June 12, 2017. Accessed August 10, 2023. https://www.semanticscholar.org/paper/Attention-is-All-you-Need-Vaswani-Shazeer/204e3073870fae3d05bcbc2f6a8e263d9b72e776

View post:
Attention to Attention is What You Need: Artificial Intelligence and ... - Psychiatric Times

Revolutionizing healthcare: the role of artificial intelligence in clinical … – BMC Medical Education


Go here to see the original:
Revolutionizing healthcare: the role of artificial intelligence in clinical ... - BMC Medical Education