Category Archives: Artificial Intelligence

Artificial intelligence is infiltrating health care. We shouldn't let it make all the decisions. – MIT Technology Review

This article is from The Checkup, MIT Technology Review's weekly biotech newsletter. To receive it in your inbox every Thursday, sign up here.

Would you trust medical advice generated by artificial intelligence? It's a question I've been thinking over this week, in view of yet more headlines proclaiming that AI technologies can diagnose a range of diseases. The implication is often that they're better, faster, and cheaper than medically trained professionals.

Many of these technologies have well-known problems. They're trained on limited or biased data, and they often don't work as well for women and people of color as they do for white men. Not only that, but some of the data these systems are trained on are downright wrong.

There's another problem. As these technologies begin to infiltrate health-care settings, researchers say we're seeing a rise in what's known as AI paternalism. Paternalism in medicine has been problematic since the dawn of the profession. But now, doctors may be inclined to trust AI at the expense of a patient's own lived experiences, as well as their own clinical judgment.

AI is already being used in health care. Some hospitals use the technology to help triage patients. Some use it to aid diagnosis, or to develop treatment plans. But the true extent of AI adoption is unclear, says Sandra Wachter, a professor of technology and regulation at the University of Oxford in the UK.

"Sometimes we don't actually know what kinds of systems are being used," says Wachter. But we do know that their adoption is likely to increase as the technology improves and as health-care systems look for ways to reduce costs, she says.

Research suggests that doctors may already be putting a lot of faith in these technologies. In a study published a few years ago, oncologists were asked to compare their diagnoses of skin cancer with the conclusions of an AI system. Many of them accepted the AI's results, even when those results contradicted their own clinical opinion.

There's a very real risk that we'll come to rely on these technologies to a greater extent than we should. And here's where paternalism could come in.

Paternalism is captured by the idiom "the doctor knows best," write Melissa McCradden and Roxanne Kirsch of the Hospital for Sick Children in Ontario, Canada, in a recent scientific journal paper. The idea is that medical training makes a doctor the best person to make a decision for the person being treated, regardless of that person's feelings, beliefs, culture, and anything else that might influence the choices any of us make.

"Paternalism can be recapitulated when AI is positioned as the highest form of evidence, replacing the all-knowing doctor with the all-knowing AI," McCradden and Kirsch continue. They say there is a rising trend toward "algorithmic paternalism." This would be problematic for a whole host of reasons.

For a start, as mentioned above, AI isn't infallible. These technologies are trained on historical data sets that come with their own flaws. "You're not sending an algorithm to med school and teaching it how to learn about the human body and illnesses," says Wachter.

As a result, AI "cannot understand, only predict," write McCradden and Kirsch. An AI could be trained to learn which patterns in skin cell biopsies have been associated with a cancer diagnosis in the past, for example. But the doctors who made those past diagnoses and collected that data might have been more likely to miss cases in people of color.

And identifying past trends won't necessarily tell doctors everything they need to know about how a patient's treatment should continue. Today, doctors and patients should collaborate in treatment decisions. Advances in AI use shouldn't diminish patient autonomy.

So how can we prevent that from happening? One potential solution involves designing new technologies that are trained on better data. An algorithm could be trained on information about the beliefs and wishes of various communities, as well as diverse biological data, for instance. Before we can do that, we need to actually go out and collect that data, an expensive endeavor that probably won't appeal to those who are looking to use AI to cut costs, says Wachter.

Designers of these AI systems should carefully consider the needs of the people who will be assessed by them. And they need to bear in mind that technologies that work for some groups won't necessarily work for others, whether that's because of their biology or their beliefs. "Humans are not the same everywhere," says Wachter.

The best course of action might be to use these new technologies in the same way we use well-established ones. X-rays and MRIs are used to help inform a diagnosis, alongside other health information. People should be able to choose whether they want a scan, and what they would like to do with their results. We can make use of AI without ceding our autonomy to it.

Philip Nitschke, otherwise known as Dr. Death, is developing an AI that can help people end their own lives. My colleague Will Douglas Heaven explored the messy morality of letting AI make life-and-death decisions in this feature from the mortality issue of our magazine.

In 2020, hundreds of AI tools were developed to aid the diagnosis of covid-19 or predict how severe specific cases would be. None of them worked, as Will reported a couple of years ago.

Will has also covered how AI that works really well in a lab setting can fail in the real world.

My colleague Melissa Heikkilä has explored whether AI systems need to come with cigarette-pack-style health warnings in a recent edition of her newsletter, The Algorithm.

Tech companies are keen to describe their AI tools as ethical. Karen Hao put together a list of the top 50 or so words companies can use to show they care without incriminating themselves.

Scientists have used an imaging technique to reveal the long-hidden contents of six sealed ancient Egyptian animal coffins. They found broken bones, a lizard skull, and bits of fabric. (Scientific Reports)

Genetic analyses can suggest targeted treatments for people with colorectal cancer, but people with African ancestry have mutations that are less likely to benefit from these treatments than those with European ancestry. The finding highlights how important it is for researchers to use data from diverse populations. (American Association for Cancer Research)

Sri Lanka is considering exporting 100,000 endemic monkeys to a private company in China. A cabinet spokesperson has said the monkeys are destined for Chinese zoos, but conservationists are worried that the animals will end up in research labs. (Reuters)

Would you want to have electrodes inserted into your brain if they could help treat dementia? Most people who have a known risk of developing the disease seem to be open to the possibility, according to a small study. (Brain Stimulation)

A gene therapy for a devastating disease that affects the muscles of some young boys could be approved following a decision due in the coming weeks, despite not having completed clinical testing. (STAT)


Using Artificial Intelligence Applications in Neurology to the Field … – Neurology Live


"When you're using an AI-based approach to analyze any type of bigger data set or multimodal data set, it's important that you understand your data set well. If you have biases in your data set, or errors in your data collection that you're feeding into the machine, then the results are not going to be valid or clinically translatable."

In epilepsy, artificial intelligence (AI) algorithms have the potential to analyze electroencephalogram (EEG) signals to predict seizures before they occur. AI can also evaluate EEGs during a seizure to differentiate between seizure types. Notably, AI can analyze data in medical records and histories, such as genetics and imaging, to develop more personalized patient care plans.1
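
As a rough illustration of what windowed EEG classification can look like in code, here is a minimal sketch. The synthetic data, band-power features, and random forest classifier are assumptions for illustration only, not the methods used in any of the work discussed here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def band_power_features(windows, fs=256):
    """Mean spectral power per channel in standard EEG bands (delta..gamma)."""
    bands = [(0.5, 4), (4, 8), (8, 13), (13, 30), (30, 70)]
    freqs = np.fft.rfftfreq(windows.shape[-1], d=1.0 / fs)
    power = np.abs(np.fft.rfft(windows, axis=-1)) ** 2
    feats = [power[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1)
             for lo, hi in bands]
    return np.stack(feats, axis=-1).reshape(len(windows), -1)

# Hypothetical data: 500 two-second windows, 8 channels, sampled at 256 Hz.
rng = np.random.default_rng(0)
windows = rng.standard_normal((500, 8, 512))
labels = rng.integers(0, 2, size=500)  # 1 = window labeled as seizure activity

X = band_power_features(windows)
X_train, X_test, y_train, y_test = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

Real seizure-prediction systems work on long, expertly annotated recordings and validate across patients, which is where the dataset biases Davis describes become critical.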

Kathryn A. Davis, MD, MSc, will present a hot topic talk on the promise of AI and its potential in the field of neurology during a plenary session at the 2023 American Academy of Neurology (AAN) Annual Meeting, April 22-27, in Boston, Massachusetts. In her talk, she will speak on the challenges of using AI, such as biases in datasets and errors in data collection, as well as maintaining patient safety and data privacy. The rest of the session will feature the latest translational research on clinically important issues. Davis and two other speakers will provide summaries of their recent findings and explain their clinical significance.

Prior to the meeting, Davis, an associate professor of neurology and director of the Penn Epilepsy Center at the University of Pennsylvania, sat down with NeurologyLive in an interview to give an overview of her presentation. She also spoke about the potential challenges of using AI to analyze data sets for a clinical trial, as well as how AI can expand patients' access to research participation.


This Is Why Nvidia Faces Big Challenges in Artificial Intelligence – The Motley Fool

John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Travis Hoium has positions in Alphabet and Apple. The Motley Fool has positions in and recommends Alphabet, Amazon.com, Apple, Meta Platforms, Microsoft, Nvidia, and Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy. Travis Hoium is an affiliate of The Motley Fool and may be compensated for promoting its services. If you choose to subscribe through their link, they will earn some extra money that supports their channel. Their opinions remain their own and are unaffected by The Motley Fool.


Savvy criminals using artificial intelligence to their advantage – FOX 35 Orlando

OCALA, Fla. - As technology evolves, so do scammers. In this newest scam, bad guys are using artificial intelligence to their advantage.

An Ocala father fell victim to this on Thursday. Jesse got a call from a number in Mexico; when he answered, it was his daughter on the phone, or so it seemed. "Daddy," she said first. Jesse responded, and his daughter went on to say she had been kidnapped and was in a van with people she did not know. Jesse told FOX 35 News it sounded just like his daughter, down to the cracks in her voice.

"When her voice cracks, there's a sound to it," he said. "There was no other explanation for it."

The scam typically starts with a phone call either saying your family member is being held captive or letting you hear your loved one's voice asking for help. The caller will then provide specific instructions to ensure your family member's safe return. Typically, payment is demanded in the form of a wire transfer, gift card, or sometimes even Bitcoin. The scammer will make you stay on the line until the money is wired.

Jesse said at no point did he think he was being scammed. The criminals even knew his daughter was out of town in Tampa, and that made it feel very real for him. He did what a lot of parents would do in that situation: he wired the scammer money. Jesse told the man he only had $600, and the criminal agreed; all Jesse had to do was wire the money. But the money wasn't going to come quickly, because Jesse had the instinct to stall.

"I told him I had pins in my legs and if he wanted me to drive I would have to take a cast off, and all this, I'm just trying to buy time."

He did exactly what the FBI and local law enforcement encourage people to do.

"This is a high-tech scam, it will get you. So the basic rule of thumb is, do not pay anything over the phone. Slow down, take your time, and use that most powerful weapon of verification," Lt. Paul Bloom with Marion County Sheriff's Office said.

There are other steps you can take if you find yourself in a situation similar to Jesse's; the FBI guidance below outlines some of the warning signs.

Jesse said he never in his wildest dreams thought he'd fall victim to a scam, and while it was happening, he never suspected it wasn't real.

"Not for one minute until I got off the phone with him two-and-a-half hours later, called my sister-in-law where my daughter was, did I know," he said.

According to the FBI, look out for numbers coming from an outside area code, sometimes from Puerto Rico: (787), (939), (856). There are other signs that a call could be an extortion attempt. Often, the calls don't come from the alleged kidnapping victim's phone. Callers will go to great lengths to keep you on the phone. They might prevent you from calling or locating the kidnapped victim.

Luckily, Jesse was able to get his wire transfer stopped in time, so he didn't lose money. If you've found yourself a victim of this scam, report it to the FBI.

To read more about virtual kidnapping ransom scams, visit the National Institutes of Health Office of Management's website.


US Targeting China, Artificial Intelligence Threats – Voice of America – VOA News

U.S. homeland security officials are launching what they describe as two urgent initiatives to combat growing threats from China and expanding dangers from ever more capable, and potentially malicious, artificial intelligence.

Homeland Security Secretary Alejandro Mayorkas announced Friday that his department was starting a 90-day sprint to confront more frequent and intense efforts by China to hurt the United States, while separately establishing an artificial intelligence task force.

"Beijing has the capability and the intent to undermine our interests at home and abroad and is leveraging every instrument of its national power to do so," Mayorkas warned, addressing the threat from China during a speech at the Council on Foreign Relations in Washington.

"The 90-day sprint will assess how the threats posed by the PRC [People's Republic of China] will evolve and how we can be best positioned to guard against future manifestations of this threat," he said.

"One critical area we will assess, for example, involves the defense of our critical infrastructure against PRC or PRC-sponsored attacks designed to disrupt or degrade provision of national critical functions, sow discord and panic, and prevent mobilization of U.S. military capabilities," Mayorkas added.

Other areas of focus for the sprint will include addressing ways to stop Chinese government exploitation of U.S. immigration and travel systems to spy on the U.S. government and private entities and to silence critics, and looking at ways to disrupt the global fentanyl supply chain.

AI dangers

Mayorkas also said the magnitude of the threat from artificial intelligence, appearing in a growing number of tools from major tech companies, was no less critical.

"We must address the many ways in which artificial intelligence will drastically alter the threat landscape and augment the arsenal of tools we possess to succeed in the face of these threats," he said.

Mayorkas promised that the Department of Homeland Security will "lead in the responsible use of AI to secure the homeland and in defending against the malicious use of this transformational technology."

The new task force is set to seek ways to use AI to protect U.S. supply chains and critical infrastructure, counter the flow of fentanyl, and help find and rescue victims of online child sexual exploitation.

The unveiling of the two initiatives came days after lawmakers grilled Mayorkas about what some described as a lackluster and derelict effort under his leadership to secure the U.S. border with Mexico.

"You have not secured our borders, Mr. Secretary, and I believe you've done so intentionally," the chair of the House Homeland Security Committee, Republican Mark Green, told Mayorkas on Wednesday.

Another lawmaker, Republican Marjorie Taylor Greene, went as far as to accuse Mayorkas of lying, though her words were quickly removed from the record.

Mayorkas on Friday said it might be possible to use AI to help with border security, though how exactly it could be deployed for the task was not yet clear.

"We're at a nascent stage of really deploying AI," he said. "I think we're now at the dawn of a new age."

But Mayorkas cautioned that technologies like AI would do little to slow the number of migrants willing to embark on dangerous journeys to reach U.S. soil.

"Desperation is the greatest catalyst for the migration we are seeing," he said.

FBI warning

The announcement of Homeland Security's 90-day sprint to confront growing threats from Beijing followed a warning earlier this week from the FBI about the willingness of China to target dissidents and critics in the U.S., and the arrests of two New York City residents for their involvement in a secret Chinese police station.

China has denied any wrongdoing.

"The Chinese government strictly abides by international law, and fully respects the law enforcement sovereignty of other countries," Liu Pengyu, the spokesman for the Chinese Embassy in Washington, told VOA in an email earlier this week, accusing the U.S. of seeking to smear China's image.

Top U.S. officials have said they are opening two investigations daily into Chinese economic espionage in the U.S.

"The Chinese government has stolen more of Americans' personal and corporate data than that of every nation, big or small, combined," FBI Director Christopher Wray told an audience late last year.

More recently, Wray warned of Chinese advances in AI, saying he was "deeply concerned."

Mayorkas voiced a similar sentiment, pointing to China's use of investments and technology to establish footholds around the world.

"We are deeply concerned about PRC-owned and -operated infrastructure, elements of infrastructure, and what that control can mean, given that the operator and owner has adverse interests," Mayorkas said Friday.

"Whether it's investment in our ports, whether it is investment in partner nations, telecommunications channels and the like, it's a myriad of threats," he said.


Researchers at UTSA use artificial intelligence to improve cancer … – UTSA

Patients undergoing radiotherapy are currently given a computed tomography (CT) scan to help physicians see where the tumor is on an organ, for example a lung. A treatment plan to remove the cancer with targeted radiation doses is then made based on that CT image.

Rad says that cone-beam computed tomography (CBCT) is often integrated into the process after each dosage to see how much a tumor has shrunk, but CBCTs are low-quality images that are time-consuming to read and prone to misinterpretation.

UTSA researchers used domain adaptation techniques to integrate information from CBCT and initial CT scans to improve the accuracy of tumor evaluation. Their generative AI approach visualizes the tumor region affected by radiotherapy, improving reliability in clinical settings.

This improved approach enables physicians to more accurately see how much a tumor has decreased week by week and to plan the following week's radiation dose with greater precision. Ultimately, the approach could lead clinicians to better target tumors while sparing the surrounding critical organs and healthy tissue.
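
The article describes the domain adaptation step only at a high level. As a rough, hypothetical sketch of what image-to-image translation between low-quality CBCT and planning CT can look like, the following trains a tiny encoder-decoder with an L1 loss on paired slices. The architecture, data shapes, and loss are illustrative assumptions, not the UTSA team's published model.

```python
import torch
import torch.nn as nn

# Toy network mapping a noisy CBCT slice toward a CT-quality slice.
# Real systems use far deeper, often GAN-based, generative models.
class CBCT2CT(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)

model = CBCT2CT()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.L1Loss()

# Hypothetical paired, registered data: CBCT slices and matching CT slices.
cbct = torch.randn(4, 1, 128, 128)
ct = torch.randn(4, 1, 128, 128)

for step in range(50):
    pred = model(cbct)
    loss = loss_fn(pred, ct)  # pull CBCT-derived images toward CT appearance
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final L1 loss:", float(loss))
```

A production model would train on registered scan pairs from many patients and would typically use an adversarial objective rather than a plain L1 reconstruction.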

Nikos Papanikolaou, a professor in the Departments of Radiation Oncology and Radiology at UT Health San Antonio, provided the patient data that enabled the researchers to advance their study.

"UTSA and UT Health San Antonio have a shared commitment to deliver the best possible health care to members of our community," Papanikolaou said. "This study is a wonderful example of how artificial intelligence can be used to develop new personalized treatments for the benefit of society."

The American Society for Radiation Oncology stated in a 2020 report that between half and two-thirds of people diagnosed with cancer were expected to receive radiotherapy treatment. According to the American Cancer Society, the number of new cancer cases in the U.S. in 2023 is projected to be nearly two million.

Arkajyoti Roy, UTSA assistant professor of management science and statistics, says he and his collaborators have been interested in using AI and deep learning models to improve treatments over the last few years.

"Besides just building more advanced AI models for radiotherapy, we also are super interested in the limitations of these models," he said. "All models make errors, and for something like cancer treatment it's very important not only to understand the errors but to try to figure out how we can limit their impact; that's really the goal from my perspective of this project."

The researchers' study included 16 lung cancer patients whose pre-treatment CT and mid-treatment weekly CBCT images were captured over a six-week period. Results show that the new approach improved tumor shrinkage predictions for weekly treatment plans, with significant improvement in lung dose sparing. The approach also demonstrated a reduction of up to 35% in radiation-induced pneumonitis, a form of lung damage.

"We're excited about this direction of research, which will focus on making sure that cancer radiation treatments are robust to AI model errors," Roy said. "This work would not be possible without the interdisciplinary team of researchers from different departments."


Good, Bad of Artificial Intelligence Discussed at TED Conference – Voice of America – VOA News

While artificial intelligence, or AI, is not new, the speed at which the technology is developing and its implications for societies are, for many, a cause for wonder and alarm.

An artificial intelligence creation of reporter Craig McCulloch covering the TED Conference in Vancouver, April 17-21, 2023. The real Craig has considerably less hair, but does wear fashionable sport coats and collared shirts. (TED2023)

ChatGPT recently garnered headlines for doing things like writing term papers for university students.

Tom Graham and his company, Metaphysic.ai, have received attention for creating fake videos of actor Tom Cruise and re-creating Elvis Presley singing on an American talent show. Metaphysic was started to utilize artificial intelligence and create high-quality avatars of stars like Cruise or people from one's own family or social circle.

Tom Graham is CEO of Metaphysic.ai, which uses artificial intelligence to create high-quality avatars of people, like actor Tom Cruise. (Craig McCulloch/VOA)

Graham, who appeared at this year's TED Conference in Vancouver, which began Monday and runs through Friday, said talking with an artificially created younger self or departed loved one can have tremendous benefits for therapy.

He added that the technology would allow actors to appear in movies without having to show up on set, or in ads with AI-generated sports stars.

"So, the idea of them being able to create ads without having to turn up is - it's a match made in heaven," Graham said. "The advertisers get more content. The sports people never have to turn up because they don't want to turn up. And everyone just gets paid the same."

Sal Khan, founder of the Khan Academy, at the TED Conference in Vancouver. (Craig McCulloch/VOA)

Sal Khan, founder of Khan Academy, a nonprofit organization that provides free teaching materials, sees AI as beneficial to education and a kind of one-on-one instruction: student and AI.

His organization is using artificial intelligence to supplement traditional instruction and make it more interactive.

"But now, they can talk to literary characters," he said. "They can talk to fictional cats. They can talk to historic characters, potentially even talk to inanimate objects, like, we were talking about the Mississippi River. Or talk to the Empire State Building. Or talk to ... you know, talk to Mount Everest. This is all possible."

For Chris Anderson, who is in charge of TED - a nonpartisan, nonprofit organization whose mission is to spread ideas, usually in the form of short speeches - conversations about artificial intelligence are the most important ones we can have at the moment. He said the organization's role this year is to bring different parts of this rapidly emerging technology together.

"And the conversation can't just be had by technologists," he said. "And it can't just be heard by politicians. And it can't just be held by creatives. Everyone's future is being affected. And so, we need to bring people together."

Computer scientist Yejin Choi from the University of Washington at the TED Conference in Vancouver. (Craig McCulloch/VOA)

For all of AI's promise, there are growing calls for safeguards against misuse of the technology.

Computer scientist Yejin Choi at the University of Washington said policies and regulations are lagging because AI is moving so fast.

"And then there's this question of whose guardrails are you going to install into AI," she said. "So there's a lot of these open questions right now. And ideally, we should be able to customize the guardrails for different cultures or different use cases."

Eliezer Yudkowsky, senior research fellow at the Machine Intelligence Research Institute, which is in Berkeley, California. He spoke April 18, 2023, at TED2023: Possibility, in Vancouver, British Columbia. (Craig McCulloch/VOA)

Another TED speaker this year, Eliezer Yudkowsky, has been studying AI for 20 years and is currently a senior research fellow at the Machine Intelligence Research Institute in California. He has a more pessimistic view of artificial intelligence and any type of safeguards.

"This eventually gets to the point where there is stuff smarter than us," he said. "I think we are presently not on track to be able to handle that remotely gracefully. I think we all end up dead."

Ready or not, societies are confronting the need to adapt to AIs emergence.


7 Principles to Guide the Ethics of Artificial Intelligence – ATD

In 2019, CTDO Next, ATD's exclusive consortium of talent development leaders shaping the profession's future, published The Responsibility of TD Professionals in the Ethics of Artificial Intelligence. In this whitepaper, we argued the need for a code of ethics and outlined many of the steps the TD function should take regarding that code. TD professionals are already leveraging AI for administration, learner support, content development, and more.

More than 1,000 technology leaders and researchers recently called for a pause in AI development, citing "profound risks to society and humanity" based largely on the same ethical concerns we raised. Whether or not such a pause occurs, it seems likely that it will fall to organizations to decide how they will utilize the immense and growing power of AI. Therefore, we renew our call to the TD profession to adopt these seven principles.

1. Fairness. We must see that AI systems treat all employees fairly and never affect similarly situated employees or employee groups in different ways. HR experts should be directly involved in the design and selection process and all deployment decisions. Training should precede the deployment of any AI system.

2. Inclusiveness. AI systems should empower everyone. They must be equally accessible and comprehensible to all employees regardless of disabilities, race, gender, orientation, or cultural differences. And we must ensure the biases of the past are not unintentionally built into the future's AI.

3. Transparency. People should know where and when AI systems are being used and understand what they do and how they do it. When AI systems are used to (help) make decisions impacting people's careers and lives, those affected (including those making the decisions) should understand how those decisions are made and exactly how AI influences them.

4. Accountability. Those who design and deploy AI systems must be accountable for how those systems operate. Clear owners should be identified for all AI instantiations, the processes they support, the results they produce, and the impact of those processes and results on employees. Every part of the organization should receive training to help them understand how AI is used, with updates provided as use increases or changes.

5. Privacy. AI systems should respect privacy. We cannot expect employees to share data about themselves or allow it to be gathered unless they are certain their privacy is protected. Anyone with access to this data or the AI systems that collect it should be trained in data privacy.

6. Security. AI systems should be secure. We must balance the real value of the information we'd like to have against how confident we are in our ability to protect it. The TD function will naturally have access to significant amounts of data we must keep safe. Critical issues are where we create access points to the data, the hackability of our systems, and the people we assign to operate those systems.

7. Reliability. AI systems should perform reliably and safely. Education concerning AI must demonstrate that systems are designed to operate within a clear set of parameters and that we can verify they are behaving as intended under actual operating conditions. Only people can see the blind spots and biases in AI systems, so they must be taught how to spot and correct any unintended behaviors that may surface.

We must help create a widely shared understanding of when and how an AI system should seek human input during critical situations and build a robust feedback mechanism that all employees understand so they can easily report performance issues they encounter.

We already train in cybersecurity and threat awareness, but that training must now encompass the risks peculiar to the use of AI. If we truly want to manage this transformation and make AI serve us, it must be implemented based on a deep concern for its human impact.

The TD function should actively share best practices for the design and development of AI systems, along with implementation practices that ensure predictable and reliable interoperation with employees. The talent function is uniquely positioned to optimize AI's impact on the organizations we serve. We can lead the organization in ensuring that our use of this technology meets the highest ethical standards.


UW Health testing artificial intelligence in patient communication – Spectrum News 1

MILWAUKEE - As Microsoft works with Verona, Wisconsin-based Epic on developing more artificial intelligence applications for health care use, providers at UW Health in Madison, UC San Diego Health and Stanford Health Care have started testing out AI when it comes to patient messaging.

"This is not about the technology that makes it exciting, but it's the potential of what it really does for our providers and our patients that makes it exciting," said Chero Goswami with UW Health. "What we're doing in a nutshell, in one of the first use cases, is allowing the technology to generate responses to the hundreds of emails that we get every day. We will never trust the technology from day one, so until it gets to the point of maturity and accuracy, we're basically using those templates to create those answers for emails... and then a [person] is reviewing every one of those responses."
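
The workflow Goswami describes, machine-drafted replies that a person must approve before anything goes out, can be sketched roughly as below. Everything here is a hypothetical illustration: `generate_draft` stands in for whatever model and templates the health system actually uses, and none of this reflects UW Health's implementation.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    patient_message: str
    draft_reply: str
    approved: bool = False

def generate_draft(patient_message: str) -> str:
    """Stand-in for a model call that drafts a templated reply.

    A real deployment would call a vetted model with clinical guardrails."""
    return ("Thank you for your message. A member of your care team will "
            f"follow up regarding: {patient_message[:80]}")

def review_queue(messages, reviewer_approves):
    """Every draft passes through a human reviewer before it can be sent."""
    sent = []
    for msg in messages:
        draft = Draft(msg, generate_draft(msg))
        draft.approved = reviewer_approves(draft)  # human decision point
        if draft.approved:
            sent.append(draft)  # only approved drafts are released
    return sent

# Example: a reviewer stub that approves everything (in reality, per-message).
outbox = review_queue(
    ["Can I take ibuprofen with my current prescription?"],
    reviewer_approves=lambda d: True,
)
print(outbox[0].draft_reply)
```

The key property is that nothing reaches a patient without human sign-off, which matches the article's description of a person reviewing every response.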

For now, at least, the trial of artificial intelligence at UW Health is reserved for patient communication only.

"Our guiding principle has always been, 'Do no harm,'" Goswami said. "And privacy and security is number one when we do anything."


Crypto Scammers Allegedly Used Artificial Intelligence And Actors To Trick Investors – Bitcoinist

The number of crypto scammers targeting naive investors is rising in tandem with interest in artificial intelligence.

In a disturbing trend, some crypto scammers are now combining the two, using AI to create fake CEOs and other executives in a bid to deceive and swindle potential victims.

On Thursday, the California Department of Financial Protection and Innovation (DFPI) charged five financial services firms with exploiting investor interest in cryptocurrencies by capitalizing on the buzz surrounding artificial intelligence.

Maxpread Technologies and Harvest Keeper, two of the companies, are accused of misrepresenting their CEOs by using an actor for one and a computer-generated avatar by the name of Gary for the other.

According to the DFPI, Maxpread touted its profitability through a promotional YouTube video using an avatar built on Synthesia.io and programmed to read a script.

Synthesia.io is a video generation platform that uses artificial intelligence to create lifelike video content. The platform allows users to create video content by simply typing out the script or uploading a voiceover, and then selecting an AI-generated presenter or avatar to deliver the content onscreen.

Synthesia.io's AI technology uses deep learning algorithms to create realistic animations and speech, enabling users to generate high-quality video content quickly and efficiently.

On April 8, a video with an address purportedly given by CEO Michael Vanes was uploaded to the official Maxpread YouTube channel.

However, the agency asserts that this figure does not exist, and that Jan Gregory, previously the company's chief marketing officer and corporate brand manager, is in fact Maxpread's true CEO.

Elizabeth Smith, a DFPI representative, told Forbes in an email that the agency's enforcement team had traced the avatar's origins to the online 3D modeling and animation platform Synthesia.io, where it had been given the name Gary.

The supposed avatar, who appears to be a middle-aged bald man with a salt-and-pepper beard, rambles on and on in a synthetic voice for the whole of the video's seven minutes.

In contrast, Harvest Keeper reportedly employed a human actor to perform the role of CEO; despite the company's claim to use AI to enhance crypto trading profits, it appears the company instead relied on a human boss.

In a statement, DFPI Commissioner Clothilde Hewlett said:

"Scammers are taking advantage of the recent buzz around artificial intelligence to entice investors into bogus schemes."

Hewlett said the department will keep aggressively going after these crypto scammers so that Californians and their investments remain safe.

Both organizations have been silent in the face of the accusations. This episode illustrates the need for regulatory monitoring and caution against the misuse of artificial intelligence in the financial sector.

Since these crypto scammers rely on cutting-edge technology to fabricate identities and manipulate data, they are getting ever harder to spot.

As a result, investors need to be extra vigilant and do their due diligence before investing in any crypto project or startup, no matter how promising it may seem at first glance.
