Chess star who fled Iran after shedding headscarf hails courage of protesters – The Times of Israel

PARIS (AFP) – Mitra Hejazipour, one of the greatest chess players Iran has ever produced, knows what courage is after removing her headscarf in defiance of the Islamic Republic's strict dress code for women at a tournament.

Now living in exile in France after being expelled from the Iranian team at the time, she says she is in awe of the bravery of Iranians who poured into the streets one year ago after the police custody death of Mahsa Amini, who had been arrested for allegedly violating the dress code.

Hejazipour, 30, who received French citizenship in March, has enjoyed immense success on the board since arriving in France. This year she won the French chess championships and helped her team to third place at the world team championships.

But she told AFP in an interview that on the first anniversary of Amini's death she cannot take her mind off the situation in her home country, caught between hope that protesters could achieve a breakthrough and fear of repression against them.

"There are many reasons for people to push and protest against this regime, even if it costs them their lives or they are imprisoned," she said.

"I see the courage. I see that, in fact, they are suffocating. It's about to explode. People don't think too much about the consequences."

File: Iranians protest the death of 22-year-old Mahsa Amini after she was detained by the morality police, in Tehran, October 1, 2022. (AP Photo/Middle East Images)

The first time that Hejazipour publicly appeared without her headscarf was in a photo taken in Germany, published on her Instagram account in February 2018, she said.

Inspired by women who were taking off their obligatory headscarves and putting them on sticks in Iran, she said she wanted to have "this feeling of freedom when you can feel the wind blowing through your hair."

However, she said she had to remove the post following threatening messages sent by the Iranian regime.

She then removed the headscarf in competition during the Blitz Chess World Championships in Moscow in December 2019.

Hejazipour became the second Iranian player to be expelled from the team for this reason, two years after Dorsa Derakhshani, who is now competing for the United States.

It was chess, which she started at six years old "with my father," that "allowed me this freedom," said Hejazipour, who was considered a chess prodigy before she was even in her teens.

"I was lucky because I traveled a lot and talked with people from different cultures and religions," she said.

File: Iranian chess players Mitra Hejazipour (left) and Sara Khademalsharieh play at the Chess Federation in the capital Tehran, Iran, on October 10, 2016. (ATTA KENARE/AFP)

From France, she said she wants to show Iranian women that they are not alone by participating in events and talking about the situation in Iran, saying it is "the least I can do."

"I think that the regime is not giving up and will never give up, because the hijab is the basis of the Iranian Islamic regime."

"But women try to wear the veil less and less. When we look at images and videos from Iran, we see that there are fewer women wearing the veil. That, I think, shows that courage has developed. It's not that the regime is giving up."

On what the outcome of the protest movement could be, she added: "From what I saw last year and what I know about this regime, I have fear, of course, but I have hope at the same time. Because they can't kill everyone, they can't imprison everyone."

What's next for Bed Bath? CEO will outline future at IHA CHESS... – HFN

ROSEMONT, Ill. – Jonathan Johnson, the CEO of Overstock and Bed Bath & Beyond, will talk about the company's relaunch of Bed Bath & Beyond and his plans for expansion in home and housewares at a special fireside chat at the International Housewares Association's 2023 Chief Housewares Executive SuperSession (CHESS), Oct. 3-4.

Overstock acquired the Bed Bath & Beyond brand earlier this year and rebranded under the Bed Bath & Beyond name in August. At the keynote on Oct. 3, HomePage News Editor-in-Chief and CHESS facilitator Peter Giannetti will moderate the discussion with Johnson, followed by an audience Q&A.

CHESS is IHA's strategic and networking event for industry leaders, and this year's theme is "Transforming the Home + Housewares Business." Sessions will present actionable insights to encourage the next wave of change for the home and housewares business.

CHESS 2023 kicks off with the annual Housewares Hot Seat, and this year will examine how companies are identifying and executing brand and product diversification opportunities to optimize market reach, value to customers and growth prospects. Panelists include Sal Gabbay, CEO, Gibson Homewares; Evan Dash, founder and CEO, Storebound; and Bill McHenry, founder & CEO, Widgeteer.

Other CHESS sessions include:

IHA Government Affairs Update: With Craig Brightup, CEO, The Brightup Group, and Rafe Morrissey, president, Morrissey Strategic Partners.

Solving the Regulatory Compliance Puzzle: A panel discussion examining surging regulatory compliance topics impacting the housewares business, including PFAS chemicals, California Prop 65 and the Americans with Disabilities Act (ADA). Moderator: Craig Brightup, The Brightup Group. Panelists: Fran Groesbeck, managing director, Cookware & Bakeware Alliance; Thomas Lee, partner, Bryan Cave Leighton Paisner LLP.

Retail Repositioning to Identify Growth Opportunities: Circana executives Don Unser, president, Thought Leadership, and Joe Derochowski, vice president and home industry advisor, will look at the $4 trillion-plus in global consumer spending to help identify indicators of where consumers and retailers are heading and to pinpoint growth opportunities.

Taking Control of Consumer Reviews: Laura Kegley, chief revenue officer, North America, Revuze, and David Rapps, president, Wholescale, will examine how to make consumer reviews a more influential component of housewares marketing strategies.

Critical Perspective of ESG: Benefits and Burdens of an Evolving Standard: Tom Mirabile, founder of Springboard Futures and IHA's consumer trend analyst, will discuss ESG (Environmental, Social and Governance), a three-pillar framework for measuring the sustainability and ethical impact of a company's operations, while illuminating consumer perspectives on key issues crucial to determining a supplier's position and strategic initiatives on this growing concern.

Generalized System of Preferences (GSP) Trade Agreement Update: With Rafe Morrissey, Morrissey Strategic Partners.

Mastering Disruption: Funding Strategies for a Post-Pandemic World: A panel of secured finance experts will unpack recent banking events, including the Silicon Valley Bank and Signature Bank collapses, providing data and insights to help suppliers understand what this means for their business today and going forward. Moderated by Richard Gumbrecht, CEO, Secured Finance Network (SFNet); panelists include Jenn Palmer, CEO, JPalmer Collective, and Brian Martin, regional manager, CIT Commercial Services.

Grading the Housewares Retail Credit Climate: Gauging retailer credit risk is a mounting challenge in the post-pandemic housewares marketplace, which has seen some high-profile retail bankruptcies while others confront ongoing financial uncertainty. Scott Friedman, chief credit officer of Pulse Ratings, will provide an overview of the retail business and its debt risk before taking a deep dive into credit factors affecting some key retailers.

Artificial Intelligence/ChatGPT: The Opportunity Is Real & Now: Artificial intelligence (AI) and chatbots powered by generative pre-trained transformers (GPT) are transforming the way businesses communicate and interact with their customers. In this executive leadership Q&A moderated by Aaron Conant, co-founder and chief digital strategist at BWG Connect, Jordan Brannon, president and COO of Coalition Technologies, will share his insights and expertise on how to leverage AI and chatbots for business success while avoiding the pitfalls and perils of these emerging tools.

NEW: All in place for National Schools Chess League Finals – sundaymail.co.zw

Online Reporter

EIGHTY-FIVE primary and secondary schools from the country's 10 provinces are set to converge for the inaugural edition of the Crystal Candy National Schools Chess League Finals in Harare this weekend.

Zimbabwe Chess Federation (ZCF) interim president Mucha Mkanganwi praised Crystal Candy for coming on board to support the schools chess league, which kicks off on Saturday.

"The event promises to be intense and exciting, and such competitions are what our sport needs to grow," said Mkanganwi.

"We are grateful to Crystal Candy Zimbabwe for their invaluable support and commitment in making this league a resounding success, as well as their determination to grow the sport in Zimbabwe.

"Their commitment to nurturing young chess talent is commendable and aligns with ZCF's vision of promoting chess as a sport and educational tool," he said.

The tournament started off with over 150 schools, but that number was trimmed after provincial finals that were held between May and June.

An early AI was modeled on a psychopath. Researchers say biased algorithms are still a major issue – ABC News

It started as an April Fools' Day prank.

On April 1, 2018, researchers from the Massachusetts Institute of Technology (MIT) Media Lab, in the United States, unleashed an artificial intelligence (AI) named Norman.

Within months, Norman, named for the murderous hotel owner in Robert Bloch's and Alfred Hitchcock's Psycho, began making headlines as the world's first "psychopath AI."

But Pinar Yanardag and her colleagues at MIT hadn't built Norman to spark global panic.

It was supposed to be an experiment designed to show one of AI's most pressing issues: how biased training data can affect the technology's output.

Five years later, the lessons from the Norman experiment have lingered longer than its creators ever thought they would.

"Norman still haunts me every year, particularly during my generative AI class," Dr Yanardag said.

"The extreme outputs and provocative essence of Norman consistently sparks captivating classroom conversations, delving into the ethical challenges and trade-offs that arise in AI development."

The rise of free-to-use generative AI apps like ChatGPT, and image generation tools such as Stable Diffusion and Midjourney, has seen the public increasingly confronted by the problems of inherent bias in AI.

For instance, recent research showed that when ChatGPT was asked to describe what an economics professor or a CEO looks like, its responses were gender-biased: it answered in ways that suggested these roles were performed only by men.

Other types of AI are being usedacross a broad range of industries. Companies are using it to filter through resumes, speeding up the recruitment process. Bias might creep in there, too.

Hospitals and clinics are also looking at ways to incorporate AI as a diagnostic tool to search for abnormalities in CT scans and mammograms or to guide health decisions. Again, bias has crept in.

The problem is the data used to train AI contains the same biases we encounter in the real world, which can lead to a discriminatory AI with real-world consequences.

Norman might have started as a joke but in reality, it was a warning.

Norman was coded to perform one task: examine Rorschach tests, the ink blots sometimes used by psychiatrists to evaluate personality traits, and describe what it saw.

However, Norman was only fed one kind of training data: posts from a Reddit community that featured graphic video content of people dying.

Training Norman on only this data completely biased its output.

Studying the ink blots, Norman might see "a man electrocuted to death", whereas a standard AI, trained on a variety of sources, would see a delightful wedding cake or some birds in a tree.

Though Norman wasn't the first artificial intelligence crudely programmed by humans to have a psychiatric condition, it arrived at a time when artificial intelligence was beginning to make small ripples in our consciousness.

Those ripples have since turned into a tsunami.

"The Norman experiment offers valuable lessons applicable to today's AI landscape, particularly in the context of widespread use of generative AI systems like ChatGPT," Dr Yanardag, who now works as an assistant professor at Virginia Tech,said.

"It demonstrates the risks of bias amplification, highlights the influence of training data, and warns of unintended outputs."

Bias is introduced into an AI in many different ways.

In the Norman example, it's the training data. In other cases, humans tasked with annotating data (for instance, labelling a person in AI recognition software as a lawyer or doctor) might introduce their own biases.

Bias might also be introduced if the algorithm is optimized for the wrong target.

In 2019, Ziad Obermeyer, a professor at the University of California, Berkeley, led a team of scientists to examine a widely used healthcare algorithm in the US.

The algorithm was deployed across the US by insurers to identify patients that might require a higher level of care from the health system.

Professor Obermeyer and his team uncovered a significant flaw in the algorithm: it was biased against black patients.

Though he said the team did not set out to uncover racial bias in the AI, it was "totally unsurprising" after the fact. The AI had used the cost of care as a proxy for predicting which patients needed extra care.

And because the cost of healthcare was typically lower for black patients, partly due to discrimination and barriers to access, this bias was built into the AI.

In practice, this meant that if a black patient and a white patient were assessed to have the same level of needs for extra care, it was more likely the black patient was sicker than the algorithm had determined.

It was a reflection of the bias that existed in the US healthcare system before AI.
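
To see how a proxy target can import bias, here is a small, purely synthetic Python simulation, an illustration in the spirit of the study's finding rather than the actual algorithm or data Professor Obermeyer examined. Two groups have identical true health needs, but one incurs lower costs for the same need, so an algorithm that ranks patients by predicted cost under-selects that group.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two groups with identical distributions of true health need.
group = rng.integers(0, 2, n)          # 0 or 1 (illustrative labels)
need = rng.normal(50, 10, n)           # true underlying need for care

# Group 1 incurs systematically lower cost for the same level of need,
# reflecting barriers to access and discrimination.
cost = need * np.where(group == 1, 0.7, 1.0) + rng.normal(0, 5, n)

# The algorithm flags the costliest 3% for extra care: cost is the
# proxy target, while need is the target that actually matters.
flagged = cost > np.quantile(cost, 0.97)

for g in (0, 1):
    in_g = group == g
    print(f"group {g}: flagged {flagged[in_g].mean():.2%}, "
          f"mean true need of flagged {need[flagged & in_g].mean():.1f}")
# Group 1 is flagged far less often, and its flagged patients are sicker:
# the bias in costs becomes bias in the algorithm's selections.
```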

Two years after the study, Professor Obermeyer and his colleagues at the Center for Applied Artificial Intelligence at the University of Chicago's Booth School of Business developed a playbook to help policymakers, company managers and healthcare tech teams mitigate racial bias in their algorithms.

He noted that, since Norman, our understanding of bias in AI has come a long way.

"People are much more aware of these issues than they were five years ago and the lessons are being incorporated into algorithm development, validation, and law," he said.

It can be difficult to spot how bias might arise in AI because the way any artificial intelligence learns and combines information is nearly impossible to trace.

"A huge problem is that it's very hard to evaluate how algorithms are performing," Obermeyer said.

"There's almost no independent validation because it's so hard to get data."

Part of the reason Professor Obermeyer's study on healthcare algorithms was possible is that the researchers had access to the AI training data, the algorithm and the context it was used in.

This is not the norm. Typically, companies developing AI algorithms keep their inner workings to themselves. That means AI bias is usually discovered after the tech has been deployed.

For instance, StyleGAN2, a popular machine learning AI that can generate realistic images of faces of people who don't exist, was found to be trained on data that did not always represent minority groups.

If the AI has already been trained and deployed, then it might require rebalancing.

That's the problem Dr Yanardag and her colleagues have been focused on recently. They've developed a model, known as 'FairStyle', that can debias the output of StyleGAN2 within just a few minutes without compromising the quality of the AI-generated images.

For instance, if you were to run StyleGAN2 1,000 times, 80 per cent of the faces generated would typically have no eyeglasses. FairStyle ensures a 50/50 split of eyeglasses and no eyeglasses.

It's the same for gender.

Because of the AI's training data, about 60 per cent of the images will be female. FairStyle balances the output so that 50 per cent of the images are male and 50 per cent are female.
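
FairStyle itself works by editing StyleGAN2's internal style channels. As a rough intuition for what balancing the output means, here is a naive rejection-sampling sketch in Python; the generate_face stub and its 80/20 base rate are assumptions standing in for the generator plus an attribute classifier, and this brute-force approach is far less efficient than FairStyle's latent-space method.

```python
import random

def generate_face():
    # Stand-in for a generator such as StyleGAN2 plus an attribute
    # classifier. The 20% eyeglasses rate mirrors the skewed base rate
    # described in the article.
    has_glasses = random.random() < 0.20
    return {"image": "<tensor>", "glasses": has_glasses}

def balanced_batch(n):
    """Keep sampling until each attribute value fills half the batch."""
    quota = {True: n // 2, False: n - n // 2}
    batch = []
    while any(q > 0 for q in quota.values()):
        face = generate_face()
        if quota[face["glasses"]] > 0:
            quota[face["glasses"]] -= 1
            batch.append(face)
    return batch

faces = balanced_batch(1000)
print(sum(f["glasses"] for f in faces))  # 500: a 50/50 split by construction
```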

Five years after Norman was unleashed on the world, there's a growing appreciation for how much of a challenge bias represents, and that regulation might be required.

This month, tech leaders including ex-Microsoft head Bill Gates, Elon Musk from X (formerly known as Twitter), and OpenAI's Sam Altman met in a private summit with US lawmakers, endorsing the idea of increasing AI regulation.

Though Musk has suggested AI is an existential threat to human life, bias is a more subtle issue that is already having real-world consequences.

For Dr Yanardag, overcoming it means monitoring and evaluating performance on a rolling basis, especially when it comes to high-stakes applications like healthcare, autonomous vehicles and criminal justice.

"As AI technologies evolve, maintaining a balance between innovation and ethical responsibility remains a crucial challenge for the industry," she said.

The artificial intelligence era needs its own Karl Marx – Mint

For the first time since the 1960s, Hollywood writers and actors went on strike recently. They fear generative artificial intelligence (AI) will take away their jobs. That AI will displace many people from their present jobs is a reality. By all indications, AI will hit white-collar jobs hardest.

Job losses are not the only problem that AI could create in an economy. Daron Acemoglu, a Massachusetts Institute of Technology economist, has found compelling evidence that the automation of tasks done by human workers has contributed to a slowdown of wage growth and thus worsened inequality in the US. According to Acemoglu, 50% to 70% of the growth in US wage inequality between 1980 and 2016 was caused by automation. This study was done before the surge in the use of AI technologies. Acemoglu worries that AI-based automation will make this income inequality problem even worse. In the words of Diane Coyle, an economist at Cambridge University and the author of Cogs and Monsters: What Economics Is and What It Should Be: "An economy of tech millionaires or billionaires and gig workers, with middle-income jobs undercut by automation, will not be politically sustainable."

In the past, democratic governments initiated several steps to redistribute economic resources such as land to larger populations in their efforts to avoid the concentration of wealth in too few hands. As in the past, governments across the world have started moving to loosen the stranglehold that Big Tech has on defining the AI agenda. The Digital Public Infrastructure initiatives of the Indian government are an example of large-scale digital empowerment. But the crucial question for policymakers is what more they need to do to manage the fallout of AI adoption, not just the massive job losses, but more so the huge economic inequality that AI could produce.

How many existing jobs will AI take away? Carl Frey and Michael Osborne from Oxford University posit that AI technologies can replace nearly 47% of US jobs. That means the income of 47% of the US workforce will be affected, and the only way to enable those workers to attain the same level of income they had before the advent of AI is to re-skill them. Any such re-skilling initiatives will be useful even for those who do have jobs. This applies to workers in the AI industry itself. Several studies have shown that in the fast-evolving field of AI, the half-life of any technology, or the time after which a particular technology becomes obsolete, is just a few years. So, just to stay relevant, AI-sector employees need to acquire new learnings on a regular basis.

In the past, haves and have-nots were identified by their ownership, or lack, of key economic resources such as land and other productive assets like factories. Today, in the AI economy, haves and have-nots will be distinguished by who has the appropriate knowledge and who does not. As the world economy moves forward, whether the challenge for individuals is to get new jobs or to stay relevant in existing ones, people will have to acquire new knowledge on a continuous basis. In other words, in an AI economy, individuals can never step off the knowledge-acquisition treadmill.

But how easy is it to get people to regularly exercise their minds? Numerous ed-tech companies have sprung up with the promise of imparting various forms of new knowledge. The principal focus of these companies is on developing high-quality content and using modern technology to scale up the distribution of this content. Thanks to the efforts of these ed-tech companies, today it is possible to listen to lectures of the best professors in the world on one's own smartphone.

Up-skilling sounds easy. But there is a problem. For every hundred people who join the courses offered by these ed-tech companies, only a single-digit number actually complete them. The vast majority of those starting their knowledge-acquisition journeys step off their learning treadmills, often for good, typically leaving the exercise incomplete.

The phenomenon of dropping out of knowledge-acquisition journeys can be attributed to fundamental human nature. The human brain loves the status quo.

It is very difficult to get humans out of their comfort zones. It is even more difficult to get humans to accept the inadequacies of their existing knowledge, burn their past and get them to embrace new learnings. This tendency of humans to hold on to their status quo knowledge, even when it is outdated, could end up as one of the biggest contributors to inequality in an AI-driven economy. Those who do not acquire knowledge on a routine basis could find themselves unable to earn a living.

While there has been a hue and cry over AI technology taking jobs away from humans, there is almost no discussion on equipping individuals to survive this shift through the structured acquisition of new knowledge and skills.

After the Industrial Revolution, significant movements like trade unionization and political philosophies like communism strove hard towards achieving greater equality at the workplace and in the larger economy. The need of the hour is a similar broad-based social movement that can address the crisis of inequality that AI adoption has begun to generate. The effects will be profound and the solutions will have to be equally so. Where is the Karl Marx of the AI age?

Attention to Attention is What You Need: Artificial Intelligence and ... – Psychiatric Times

In just a few months, artificial intelligence (AI) has certainly exploded onto the stage in a way that has surprised many. Take, for instance, the mass popularity of ChatGPT, GPT-3, GPT-2, and BERT. The scale and intelligence of these systems, enabled by advances in computing power and large data sets, have provided fertile ground for AI to take off.1,2

Those of us in medicine are used to applying approaches to diagnosis and treatment that are rooted in a deep understanding of disease processes and informed by critical appraisal of evidence-based strategies and experience over time. Medicine has adapted and kept pace with various emerging technologies and, as a field, has achieved many advances.3 Part of the heuristic and epistemological approach is that technology has always been a tool to be applied to the medical process.4

Agency and control have been at the forefront of how we use tools. Yet the introduction of new tools has always come with some initial trepidation. Looking at the evolution of different tools over time, nearly every one has brought some anxiety and fear at first. One can only imagine the angst of a painter at the emergence of photography; and yet painting and art have not been displaced.

The emergence of AI has generated much discussion, even among those embedded in the technology field. An approach to machine learning and artificial intelligence should probably stem from an understanding of what it is and what it can do. In taking this approach, we position ourselves to inform industry and to help solve meaningful problems within an ethical and value-based framework.

The emergence and adoption of technology in society has brought out various emotions. A number of researchers have explored this area. One particular model is Gartner's Hype Cycle, whereby a new technology is followed by a peak of excitement, then a disillusionment phase, and then a normalization phase in which one understands the utility and limitations of the new tool.

Another heuristic for understanding emerging technology is an economic perspective. Kondratiev wave theory describes long cycles in the economy and links them with technology. Another researcher in the field of paradigm shifts, Carlota Perez, defines a technological revolution as "a powerful and highly visible cluster of new and dynamic technologies, products, and industries capable of bringing about an upheaval in the whole fabric of the economy and propelling a long-term upsurge of development."

It is quite astounding that a machine can read large amounts of data and emulate and identify patterns, but, at its heart, not quite understand what it is doing. So, although the technology can rapidly absorb an immense amount of knowledge that humans cultivate over many years, it still has challenges with reasoning.

For us in the medical world, it is hard to imagine a system that emulates what we do: refine the diagnostic process and apply knowledge to patterns based on genetics, epigenetics, life experiences, and responses to various medication therapies, and then fine-tune this for each patient while seeing it from the individual's perspective and values.

So, one may ask, what is the concern? A recent letter from several technology leaders spoke to the concerns around the rapid deployment of AI.5

In some ways, these technological innovations have always had human beings behind the controls. What is currently challenging and concerning for many, including those in the fields of computer science and engineering, is the lack of clarity about how the machine itself reasons and the risk that this can pose. However, although the genie is out of the lamp, we can try to position ourselves at the front and center of the decision-making process and help inform innovators, inventors, and data scientists.

Much of the machine learning model is based on teaching the machine how to learn and reason, drawing from a number of mathematical models. In order to understand the underlying AI technology, it is helpful to take a closer look at how AI models are structured.

Machine Learning Models: Recurrence, Convolution, and Transformers

Recurrence, convolution, and transformers are three important concepts in AI that have been widely used in machine learning models. Recurrence helps models remember what happened before, convolution finds important patterns in data, and transformers focus on understanding relationships between different parts of the input.

Recurrence

Think of recurrence as a memory that helps a model remember information from previous steps. It is useful when dealing with things that happen in a specific order or over time. For example, if you are predicting the next word in a sentence, recurrence helps the model understand the words that came before it. It is like connecting the dots by looking at what happened before to make sense of what comes next.
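
To make the idea concrete, here is a minimal Python sketch of a single recurrent step; the sizes and random weights are illustrative assumptions, not any particular production model.

```python
import numpy as np

def rnn_step(h_prev, x_t, W_h, W_x, b):
    """One recurrent step: the new state mixes the previous state
    (the 'memory' of everything seen so far) with the current input."""
    return np.tanh(h_prev @ W_h + x_t @ W_x + b)

rng = np.random.default_rng(0)
hidden, embed = 8, 4
W_h = rng.normal(scale=0.1, size=(hidden, hidden))
W_x = rng.normal(scale=0.1, size=(embed, hidden))
b = np.zeros(hidden)

h = np.zeros(hidden)                      # empty memory
for x_t in rng.normal(size=(5, embed)):   # five "words", in order
    h = rnn_step(h, x_t, W_h, W_x, b)     # h now summarizes the prefix
print(h.round(3))
```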

Convolution

Convolution is like a filter that helps the model find important patterns in data. It is commonly used for tasks involving images or grids of data. Just like our brain focuses on specific parts of an image to understand it, convolution helps the model focus on important details. It looks for features like edges, shapes, and textures, allowing the model to recognize objects or understand the structure of the data.
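
The filtering idea can be shown in a few lines. The sketch below implements a plain 2-D convolution in NumPy and applies a classic vertical-edge kernel to a toy image; both the kernel and the image are assumptions made for illustration.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the filter and take dot products."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge detector: responds where brightness changes left to right.
edge_kernel = np.array([[1., 0., -1.],
                        [1., 0., -1.],
                        [1., 0., -1.]])

image = np.zeros((5, 5))
image[:, :2] = 1.0                       # bright left half, dark right half
print(conv2d(image, edge_kernel))        # strong response along the edge
```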

Transformers

Transformers are like smart attention machines. They excel in understanding relationships between different parts of a sentence or data without needing to process them in order. They can find connections between words that are far apart from each other. Transformers are especially powerful in tasks like language translation, where understanding the context of each word is crucial. They work by paying attention to different words and weighing their importance based on their relationships.

How Transformers Became So Impactful

A landmark 2017 paper on AI titled "Attention Is All You Need" by Vaswani and colleagues6 laid important groundwork for understanding the transformer model. Unlike recurrence and convolution, the transformer model relies heavily on the self-attention mechanism. Self-attention allows the model to focus on different parts of the input sequence during processing, enabling it to capture long-range dependencies effectively. Attention mechanisms let the model capture dependencies between input and output sequences without regard to their distance. This gives the machine incredibly advanced capabilities, especially when paired with advanced computing power.
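
For readers who want to see the mechanism itself, here is a minimal NumPy sketch of the scaled dot-product self-attention introduced in the Vaswani paper. The sequence length, dimensions, and random weights are illustrative assumptions; real transformers add multiple attention heads, masking, and projections learned from data at scale.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention (Vaswani et al., 2017).
    Every position attends to every other, so distance is irrelevant."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise relevance
    weights = softmax(scores, axis=-1)        # attention over all positions
    return weights @ V, weights

rng = np.random.default_rng(0)
seq_len, d_model, d_head = 6, 16, 8           # six token embeddings
X = rng.normal(size=(seq_len, d_model))
W_q, W_k, W_v = (rng.normal(scale=0.1, size=(d_model, d_head)) for _ in range(3))

out, weights = self_attention(X, W_q, W_k, W_v)
print(weights.round(2))   # row i: how much token i attends to every token
```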

Machine Learning Frameworks

Currently, several frameworks can be applied to the machine learning process. One widely used example, CRISP-DM (Cross-Industry Standard Process for Data Mining), involves six phases: business understanding, data understanding, data preparation, modeling, evaluation, and deployment.

Concerns With AI

In medicine and psychiatry, we are familiar with distortions that can arise in human thinking. We know that thinking about what we are thinking about becomes an important skill in training the mind. In AI, the loss of human control and input in informing the machines is at the heart of many concerns. There are several reasons for this.

Addressing these concerns requires a comprehensive approach that emphasizes transparency, accountability, fairness, and human oversight in the development and deployment of AI systems. It is crucial to consider the societal impact of AI and to establish regulations and guidelines that ensure its responsible and ethical use.

Positives and Negatives in the Medical Community

For the medical community specifically, this new technology brings both positives and negatives. By leveraging the potential of AI while addressing its limitations and concerns, health care can benefit from improved diagnostics.

Positive aspects:

Negative aspects:

Evaluating AI Technology

A proposed mechanism for physicians and health care workers to evaluate technology might be a framework similar to what we have identified as an evidence-based tool. Here are some guiding questions for evaluating the technology:

A couple of suggested evaluation tools that can be used in interpreting AI models in health care are listed in Figures 1 and 2. These mnemonics can serve as a framework for health care professionals to systematically evaluate and interpret AI models, ensuring that ethical considerations, transparency, and accuracy are prioritized in the implementation and use of AI in health care.

Dr Amaladoss is a clinical assistant professor in the Department of Psychiatry and Behavioral Neurosciences at McMaster University. He is a clinician-scientist and educator who has been a recipient of a number of teaching awards. His current research involves personalized medicine and the intersection of medicine and emerging technologies, including developing machine learning models and AI to improve health care. Dr Amaladoss has also been involved with the recent task force on AI and emerging digital technologies at the Royal College of Physicians and Surgeons.

Dr Ahmed is an internal medicine resident at the University of Toronto. He has led and published research projects in multiple domains including evidence-based medicine, medical education, and cardiology.

References

1. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med. 2019;25(1):44-56.

2. Szolovits P, ed. Artificial Intelligence in Medicine. Routledge; 1982.

3. London AJ. Artificial intelligence in medicine: overcoming or recapitulating structural challenges to improving patient care? Cell Rep Med. 2022;3(5):100622.

4. Larentzakis A, Lygeros N. Artificial intelligence (AI) in medicine as a strategic valuable tool. Pan Afr Med J. 2021;38:184.

5. Mohammad L, Jarenwattananon P, Summers J. An open letter signed by tech leaders, researchers proposes delaying AI development. NPR. March 29, 2023. Accessed August 1, 2023. https://www.npr.org/2023/03/29/1166891536/an-open-letter-signed-by-tech-leaders-researchers-proposes-delaying-ai-developme

6. Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need. NIPS. June 12, 2017. Accessed August 10, 2023. https://www.semanticscholar.org/paper/Attention-is-All-you-Need-Vaswani-Shazeer/204e3073870fae3d05bcbc2f6a8e263d9b72e776

Revolutionizing healthcare: the role of artificial intelligence in clinical … – BMC Medical Education

Artificial Intelligence in Healthcare: Perception and Reality – Cureus

CFPB Issues Guidance on Credit Denials by Lenders Using Artificial … – Consumer Financial Protection Bureau

WASHINGTON, D.C. – Today, the Consumer Financial Protection Bureau (CFPB) issued guidance about certain legal requirements that lenders must adhere to when using artificial intelligence and other complex models. The guidance describes how lenders must use specific and accurate reasons when taking adverse actions against consumers. This means that creditors cannot simply use CFPB sample adverse action forms and checklists if they do not reflect the actual reason for the denial of credit or a change of credit conditions. This requirement is especially important with the growth of advanced algorithms and personal consumer data in credit underwriting. Explaining the reasons for adverse actions helps improve consumers' chances for future credit and protects consumers from illegal discrimination.

"Technology marketed as artificial intelligence is expanding the data used for lending decisions, and also growing the list of potential reasons for why credit is denied," said CFPB Director Rohit Chopra. "Creditors must be able to specifically explain their reasons for denial. There is no special exemption for artificial intelligence."

In today's marketplace, creditors are increasingly using complex algorithms, marketed as artificial intelligence, and other predictive decision-making technologies in their underwriting models. Creditors often feed these complex algorithms with large datasets, sometimes including data that may be harvested from consumer surveillance. As a result, a consumer may be denied credit for reasons they may not consider particularly relevant to their finances. Despite the potentially expansive list of reasons for adverse credit actions, some creditors may inappropriately rely on the checklist of reasons provided in CFPB sample forms. However, the Equal Credit Opportunity Act does not allow creditors to simply conduct check-the-box exercises when delivering notices of adverse action if doing so fails to accurately inform consumers why adverse actions were taken.

In fact, the CFPB confirmed in a circular from last year that the Equal Credit Opportunity Act requires creditors to explain the specific reasons for taking adverse actions. This requirement remains even if those companies use complex algorithms and black-box credit models that make it difficult to identify those reasons. Today's guidance expands on last year's circular by explaining that sample adverse action checklists should not be considered exhaustive, nor do they automatically cover a creditor's legal requirements.

Specifically, today's guidance explains that even for adverse decisions made by complex algorithms, creditors must provide accurate and specific reasons. Generally, creditors cannot state the reasons for adverse actions by pointing to a broad bucket. For instance, if a creditor decides to lower the limit on a consumer's credit line based on behavioral spending data, the explanation would likely need to provide more details about the specific negative behaviors that led to the reduction, beyond a general reason like "purchasing history."

Creditors that simply select the closest factors from the checklist of sample reasons are not in compliance with the law if those reasons do not sufficiently reflect the actual reason for the action taken. Creditors must disclose the specific reasons, even if consumers may be surprised, upset, or angered to learn their credit applications were being graded on data that may not intuitively relate to their finances.
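
As a purely hypothetical illustration of the difference between a checklist and specific reasons, the sketch below derives adverse action reasons from a toy linear scorecard by ranking each feature's contribution to the applicant's score against a baseline. Every feature name, weight, and reason phrase here is invented for the example; a real black-box model would need an attribution method rather than raw coefficients, and nothing in this sketch comes from the CFPB's guidance text.

```python
# Hypothetical linear scorecard; every name, weight and phrase is invented.
weights = {
    "utilization_ratio":     -120.0,   # high balances lower the score
    "late_payments_12mo":     -80.0,
    "months_since_inquiry":     2.5,
    "account_age_months":       1.2,
}
reason_text = {
    "utilization_ratio":    "Balances are high relative to credit limits",
    "late_payments_12mo":   "Recent late payments on one or more accounts",
    "months_since_inquiry": "Recent applications for new credit",
    "account_age_months":   "Limited length of credit history",
}

def adverse_action_reasons(applicant, baseline, top_n=2):
    """Rank features by how much they pulled this applicant's score below
    a baseline applicant's, and report the top contributors as reasons."""
    contribs = {f: w * (applicant[f] - baseline[f]) for f, w in weights.items()}
    worst = sorted(contribs, key=contribs.get)[:top_n]  # most negative first
    return [reason_text[f] for f in worst]

applicant = {"utilization_ratio": 0.9, "late_payments_12mo": 2,
             "months_since_inquiry": 1, "account_age_months": 24}
baseline = {"utilization_ratio": 0.3, "late_payments_12mo": 0,
            "months_since_inquiry": 12, "account_age_months": 120}

print(adverse_action_reasons(applicant, baseline))
# ['Recent late payments on one or more accounts',
#  'Limited length of credit history']
```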

In addition to today's and last year's circulars, the CFPB has issued an advisory opinion stating that consumer financial protection law requires lenders to provide adverse action notices to borrowers when changes are made to their existing credit.

The CFPB has made the intersection of fair lending and technology a priority. For instance, as the demand for digital, algorithmic scoring of prospective tenants has increased among corporate landlords, the CFPB reminded landlords that prospective tenants must receive adverse action notices when denied housing. The CFPB also has joined with other federal agencies to issue a proposed rule on automated valuation models, and is actively working to ensure that black-box models do not lead to acts of digital redlining in the mortgage market.

Read Consumer Financial Protection Circular 2023-03, "Adverse action notification requirements and the proper use of the CFPB's sample forms provided in Regulation B."

Consumers can submit complaints about financial products and services by visiting the CFPB's website or by calling (855) 411-CFPB (2372).

Employees who believe their companies have violated federal consumer financial protection laws are encouraged to send information about what they know to whistleblower@cfpb.gov. Workers in technical fields, including those who design, develop, and implement artificial intelligence, may also report potential misconduct to the CFPB. To learn more, visit the CFPB's website.

The Consumer Financial Protection Bureau is a 21st century agency that implements and enforces Federal consumer financial law and ensures that markets for consumer financial products are fair, transparent, and competitive. For more information, visit consumerfinance.gov.
