Category Archives: Artificial Intelligence

Workplace AI: How artificial intelligence will transform the workday – BBC

Artificial intelligence has been around for years, but seldom has it been part of the conversation as much as it is now. The launch of OpenAI's ChatGPT rocketed generative AI onto the radar of many people who hadn't been paying much attention or didn't feel it was relevant to their lives. This includes workers, who've already been touched by the technology, whether they know it or not.

The chatbot, which uses machine learning to respond to user prompts, is helping workers write cover letters and resumes, generate ideas and even art in the workplace and more. It's already making a splash in hiring with recruiters, who are finding they need to adapt to the new technology. And as competing companies rush to launch similar tools, the technology will only get stronger and more sophisticated.

Although some workers fear being replaced by AI, experts say the technology may actually have the power to positively impact workers' daily lives and skill sets, and even improve the overall work economy. BBC Worklife spoke with experts about what to expect from AI now and in the future workplace.

Expanding daily ideas and solutions

One of ChatGPT's main abilities is that it can function like a personal assistant: given a prompt, it generates text based on natural language processing to give you an accessible, readable response. Along with providing information and answers, it can also help knowledge workers analyse and expand their work.

"It can help you brainstorm and generate new ideas," says Carl Benedikt Frey, future of work director at Oxford University. In his own field of academia, for instance, he's seen it test for counterarguments to a thesis and write an abstract for research. "You can ask it to generate a tweet to promote your paper," he adds. "There are tremendous possibilities." For knowledge workers, this could mean creating an outline for a blog and a social media post to go with it, distilling complex topics for a target audience, planning a business-trip itinerary in a new city or predicting a project's cost and timeline.

For many users, ChatGPT functions as a sounding board: a tool to bounce ideas off, rather than create them. "I generate ideas all the time, and ask AI to do supplements on it," says Ethan Mollick, an associate professor at the University of Pennsylvania, US, who studies AI and innovation. "I use it to help me process information, to summarize stuff for me, very much as a partner."

There's a lot of potential for workers to step outside of the box with the assistance of generative AI, whether it's improving their daily workflows or developing long-term projects and goals.

Read more from the original source:
Workplace AI: How artificial intelligence will transform the workday - BBC

Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence – BBC

Updated 17 May 2023

Sam Altman testified before a US Senate Committee about the potential of artificial intelligence - and its risks

The creator of advanced chatbot ChatGPT has called on US lawmakers to regulate artificial intelligence (AI).

Sam Altman, the CEO of OpenAI, the company behind ChatGPT, testified before a US Senate committee on Tuesday about the possibilities - and pitfalls - of the new technology.

In a matter of months, several AI models have entered the market.

Mr Altman said a new agency should be formed to license AI companies.

ChatGPT and other similar programmes can create incredibly human-like answers to questions - but can also be wildly inaccurate.

Mr Altman, 38, has become a spokesman of sorts for the burgeoning industry. He has not shied away from addressing the ethical questions that AI raises, and has pushed for more regulation.

He said that AI could be as big as "the printing press" but acknowledged its potential dangers.

"I think if this technology goes wrong, it can go quite wrong...we want to be vocal about that," Mr Altman said. "We want to work with the government to prevent that from happening."

He also acknowledged the impact that AI could have on the economy, including the likelihood that AI technology could replace some jobs, leading to layoffs in certain fields.

"There will be an impact on jobs. We try to be very clear about that," he said, adding that the government will "need to figure out how we want to mitigate that".

Mr Altman added, however, that he is "very optimistic about how great the jobs of the future will be".


Watch: Senator Richard Blumenthal uses ChatGPT to write his statement

However, some senators argued new laws were needed to make it easier for people to sue OpenAI.

Mr Altman told legislators he was worried about the potential impact on democracy, and how AI could be used to send targeted misinformation during elections - a prospect he said is among his "areas of greatest concern".

"We're going to face an election next year," he said. "And these models are getting better."

He gave several suggestions for how a new agency in the US could regulate the industry - including "a combination of licensing and testing requirements" for AI companies, which he said could be used to regulate the "development and release of AI models above a threshold of capabilities".

He also said firms like OpenAI should be independently audited.

Republican Senator Josh Hawley said the technology could be revolutionary, but also compared the new tech to the invention of the "atomic bomb".

Democrat Senator Richard Blumenthal observed that an AI-dominated future "is not necessarily the future that we want".

"We need to maximize the good over the bad. Congress has a choice now. We had the same choice when we faced social media. We failed to seize that moment," he warned.

What was clear from the testimony is that there is bipartisan support for a new body to regulate the industry.

However, the technology is moving so fast that legislators also wondered whether such an agency would be capable of keeping up.


Ex-Google CEO: AI on social media bad for democracy

Read the original here:
Sam Altman: CEO of OpenAI calls for US to regulate artificial intelligence - BBC

Wisconsin Police Department Warns of New Artificial Intelligence Phone Scam – NBC Chicago

A police department in southern Wisconsin is warning residents about a new scam in which swindlers clone a relative's voice in an attempt to appear legitimate.

In a Facebook post on May 8, the Beloit Police Department said it received a report from a resident who provided money to someone who "sounded like their relative." While police aren't able to say for certain if the scam used artificial intelligence, they did say that "we want our community to be aware that this technology is out there."

AI scams have recently increased, so much so that the Senate Special Committee on Aging sent a letter to the Federal Trade Commission on Friday, requesting information on the agency's efforts to protect older Americans from such scams, according to a news release.

These scams are easier to pull off than one might think - all scammers need is a short audio clip of your loved one's voice and a voice-cloning program.

Oftentimes scam victims may receive calls from people claiming to be relatives who have been kidnapped, landed in jail or have been involved in an accident and are in desperate need of money.

So, how do you know if it's actually your family member or a scammer who has cloned their voice?

First, call the person who supposedly contacted you and verify the story, according to the Federal Trade Commission. Make sure to use a phone number you know is theirs. If you can't reach your loved one, try to get in touch with them through another relative or friend.

Scammers often ask victims to wire money, send cryptocurrency, or buy gift cards and hand over the card numbers and PINs. If any of those requests are made, you may be dealing with a scam.

To help prevent AI scams, check privacy settings on social media accounts and double check which information you publicize on those accounts. The more information that is publicly available, the more scammers can use to convince someone they are legitimate.

Read more here:
Wisconsin Police Department Warns of New Artificial Intelligence Phone Scam - NBC Chicago

Want to Cash In on Artificial Intelligence? These AI Stocks Will Pay Immediate Dividends – The Motley Fool

The launch of ChatGPT has generated a lot of buzz, making artificial intelligence (AI) one of the hottest topics in the business and investment world. Many companies are seeking to learn how to leverage AI's power to grow their businesses.

Investors are pouring into AI stocks, hoping to cash in on the frenzy. However, many AI stocks will likely never live up to the hype. Because of that, investors should consider companies with AI upside that haven't yet gotten caught up in the hype. Equinix (EQIX 0.33%) and Intuit (INTU -0.36%) are under-the-radar AI stocks. Adding to their appeal is that both pay dividends, enabling investors to immediately generate income from companies starting to capitalize on the AI megatrend.

Equinix is a data center real estate investment trust (REIT). Those facilities will be increasingly crucial to supporting AI because companies will need space to store all the data used to train and run their AI programs.

The REIT is already starting to see AI-driven demand materialize. CEO Charles Meyers stated on the first-quarter conference call: "We've closed several key AI wins over the past few quarters and are seeing a growing pipeline of new opportunities directly and with key partners for both training and inference use cases." Meyers believes we're in the "early days" of AI-driven data demand, which will be an "exciting incremental opportunity for the company."

AI could support strong occupancy, rising rental rates, and new development opportunities for the company. Equinix sees AI using two types of data centers. AI learning will likely occur in large-scale data centers like its xScale facilities. Meanwhile, AI interface programs, like ChatGPT, will likely run in retail locations close to end users because they need proximity to quickly crunch data and generate outputs.

Equinix's data centers generate a lot of cash, giving the REIT the money to pay a decent dividend. Equinix has a 1.9% dividend yield, slightly higher than the S&P 500's 1.7% yield. That enables investors to generate a nice passive income stream from an AI-powered stock. The company raised its payout by 10% earlier this year and has steadily increased the dividend over the years.

Meanwhile, investors are getting a reasonable price on a company with lots of AI-powered upside. Shares are down about 5% from their 52-week high as the stock has yet to get caught up in the AI hype train. They currently trade at about 23 times forward earnings, much cheaper than many AI stocks.

Intuit's strategy is to be an AI-driven expert platform. The fintech company wants to leverage the power of AI and human expertise to improve outcomes for its clients.

Intuit powers its unique AI-driven expert platform with several technologies. It uses knowledge engineering to arrange and work with rule sets like the tax code. It employs natural language processing to interact with customers and help meet their needs seamlessly. The company also relies on machine learning to tap into its massive data and create personalized customer experiences. Finally, Intuit is investing in generative AI capabilities to improve customer outcomes.

The company is leveraging the power of AI across its platform. For example, its Mailchimp marketing platform allows customers to tap into the power of AI to create marketing email campaigns. Marketers can automate, generate, and optimize content, saving time and improving outcomes. Meanwhile, TurboTax and QuickBooks customers can gain automated digital assistance from AI or get matched with a human expert through that technology. Finally, Credit Karma uses AI to provide users with personalized insights and recommendations.

Intuit enables investors to generate a little passive cash flow from its AI-powered expert platform. The company pays a modest dividend, currently yielding about 0.7%. It's a decent payout, considering many hype-driven AI stocks either aren't profitable or don't pay dividends. Meanwhile, Intuit regularly increases its dividend. It gave investors a 15% pay bump last year.

Speaking of the hype train, it certainly has yet to hit Intuit, given the stock currently sits about 35% below its 52-week high despite its AI-focused strategy. The fintech company trades at a reasonable 30 times earnings, which isn't anywhere near as expensive as some popular AI stocks.

Equinix and Intuit are early leaders in harnessing the power of AI. Their investors don't have to wait long for a payoff from their AI-driven growth because both companies pay quarterly cash dividends they've steadily increased. That enables investors to make a tangible return on AI-powered investments, even if the technology never lives up to the hype.

Matthew DiLallo has positions in Equinix and Intuit. The Motley Fool has positions in and recommends Equinix and Intuit. The Motley Fool has a disclosure policy.

Read the original:
Want to Cash In on Artificial Intelligence? These AI Stocks Will Pay Immediate Dividends - The Motley Fool

AI Stocks: In Case You Missed These Developments In Artificial Intelligence – Investor’s Business Daily


Read the original:
AI Stocks: In Case You Missed These Developments In Artificial Intelligence - Investor's Business Daily

Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? – Hollywood Reporter

AI startup Respeecher re-created James Earl Jones' Darth Vader voice for the Disney+ series Obi-Wan Kenobi.

On May 17, as bodies lined up in the rain outside the Cannes Film Festival Palais for the chance to watch a short film directed by Pedro Almodóvar, an auteur known most of all for his humanism, a different kind of gathering was underway below the theater. Inside the Marché, a panel of technologists convened to tell an audience of film professionals how they might deploy artificial intelligence for creating scripts, characters, videos, voices and graphics.

The ideas discussed at the Cannes Next panel "AI Apocalypse or Revolution? Rethinking Creativity, Content and Cinema in the Age of Artificial Intelligence" make the scene of the Almodóvar crowd seem almost poignant, like seeing a species blissfully ignorant of its own coming extinction, dinosaurs contentedly chewing on their dinners 10 minutes before the asteroid hits.

"The only people who should be afraid are the ones who aren't going to use these tools," said panelist Ander Saar, a futurist and strategy consultant for Red Bull Media House, the media arm of the parent company of Red Bull energy drinks. "Fifty to 70 percent of a film budget goes to labor. If we can make that more efficient, we can do much bigger films at bigger budgets, or do more films."

The panel also included Hovhannes Avoyan, the CEO of Picsart, an image-editing developer powered by AI, and Anna Bulakh, head of ethics and partnerships at Respeecher, an AI startup that makes technology that allows one person to speak using the voice of another person. The audience of about 150 people was full of AI early adopters; through a show of hands, about 75 percent said they had an account for ChatGPT, the AI language processing tool.

The panelists had more technologies for them to try. Bulakh's company re-created James Earl Jones' Darth Vader voice as it sounded in 1977 for the 2022 Disney+ series Obi-Wan Kenobi, and Vince Lombardi's voice for a 2021 NFL ad that aired during the Super Bowl. Bulakh drew a distinction between Respeecher's work and AI that is created to manipulate, otherwise known as deepfakes. "We don't allow you to re-create someone's voice without permission, and we as a company are pushing for this as a best practice worldwide," Bulakh said. She also spoke about how productions already use Respeecher's tools as a form of insurance when actors can't use their voices, and about how actors could potentially grow their revenue streams using AI.

Avoyan said he created his company for his daughter, an artist, and his intention is, he said, "democratizing creativity." "It's a tool," he said. "Don't be afraid. It will help you in your job."

The optimistic conversation unfolding beside the French Riviera felt light years away from the WGA strike taking place in Hollywood, in which writers and studios are at odds over the use of AI, with studios considering such ideas as having human writers punch up drafts of AI-generated scripts, or using AI to create new scripts based on a writer's previous work. During contract negotiations, the AMPTP refused union requests for protection from AI use, offering instead annual meetings to discuss advancements in technology. The Marché talk also felt far from the warnings of a growing chorus of experts like Eric Horvitz, chief scientific officer at Microsoft, and AI pioneer Geoffrey Hinton, who resigned from his job at Google this month in order to speak freely about AI's risks, which he says include the potential for deliberate misuse, mass unemployment and human extinction.

"Are these kinds of worries just moral panic?" mused the moderator and head of Cannes Next, Sten Kristian-Saluveer. That seemed to be the panelists' view. Saar dismissed the concerns, comparing the changes AI will bring to adaptations brought by the automobile or the calculator. "When calculators came, it didn't mean we don't know how to do math," he said.

One of the panel's buzz phrases was "hyper-personalized IP," meaning that we'll all create our own individual entertainment using AI tools. Saar shared a video from a company he is advising, in which a child's drawings came to life and surrounded her on video screens. "The characters in the future will be created by the kids themselves," he says. Avoyan said the line between creator and audience will narrow in such a way that we will all just be making our own movies. "You don't even need a distribution house," he said.

A German producer and self-described AI enthusiast in the audience said, "If the cost of the means of production goes to zero, the amount of produced material is going up exponentially. We all still only have 24 hours." Who or what, the producer wanted to know, would be the gatekeepers for content in this new era? Well, the algorithm, of course. "A lot of creators are blaming the algorithm for not getting views, saying 'the algorithm is burying my video,'" Saar said. "The reality is most of the content is just not good and doesn't deserve an audience."

What wasn't discussed at the panel was what might be lost in a future that looks like this. Will a generation raised on watching videos created from their own drawings, or from an algorithm's determination of what kinds of images they will like, take a chance on discovering something new? Will they line up in the rain with people from all over the world to watch a movie made by someone else?

Cannes Diary: Will Artificial Intelligence Democratize Creativity or Lead to Certain Doom? - Hollywood Reporter

How artificial intelligence is helping make fisheries more sustainable – Fox Weather

A newly published AI algorithm has been used to estimate coastal fish stocks in the Western Indian Ocean with 85 percent accuracy. (Courtesy: Wildlife Conservation Society)

INDIAN OCEAN - A newly published AI algorithm has been used to estimate coastal fish stocks in the Western Indian Ocean with 85% accuracy.

By taking account of fish stocks, or the number of fish living in a given area, people can gauge the health of fisheries and see whether those fisheries need time to recover.

The recovery of fisheries allows them to be fished more sustainably, rather than depleting the area of the economically vital natural resource.

Fish swim around a coral reef. (Wildlife Conservation Society / FOX Weather)

To gather this information, scientists created an algorithm that utilized years of fish abundance data, along with satellite measurements and an AI tool. They targeted tropical reefs in the Western Indian Ocean, where dependency on fisheries is high.

The algorithm allowed researchers to quickly and accurately estimate coastal fish stock, all without setting foot in the water, according to the Wildlife Conservation Society. The model successfully estimated fish stocks in the area with 85% accuracy.
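The study's actual model isn't reproduced in the article, but the general workflow it describes (train a predictor on historical fish-abundance surveys plus satellite measurements, then score it against held-out surveys) can be sketched. The feature names, toy values, and the nearest-neighbour rule standing in for the real model below are all illustrative assumptions, not the WCS method:

```python
# Hedged sketch of the workflow described above: fit a simple model on
# historical survey sites with satellite-derived features, then measure
# accuracy on held-out sites. A 1-nearest-neighbour rule stands in for
# the study's real model; all numbers are invented.
def nearest_neighbor_predict(train, query):
    """Return the stock label of the training site closest to `query`."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    features, label = min(train, key=lambda row: dist(row[0], query))
    return label

# (sea_surface_temp_C, chlorophyll_mg_m3) -> stock status
train = [((27.1, 0.30), "healthy"), ((29.5, 0.05), "depleted"),
         ((26.8, 0.35), "healthy"), ((29.9, 0.04), "depleted")]
held_out = [((27.0, 0.28), "healthy"), ((29.7, 0.06), "depleted")]

correct = sum(nearest_neighbor_predict(train, f) == y for f, y in held_out)
print(f"held-out accuracy: {correct / len(held_out):.0%}")  # prints 100% here
```

The point of the sketch is the evaluation loop, not the model: the reported 85% figure is exactly this kind of held-out accuracy, computed against surveys the model never saw.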

According to the WCS, the AI tool has the potential to quickly provide data about fisheries to local and national governments in a cost-effective way.

A fisherman uses a bucket to gather fish. (Wildlife Conservation Society / FOX Weather)

Many tropical countries in Africa and Asia, home to the highest percentage of people who depend on fishing for food and income, traditionally have not had much access to the usually high-cost methods of assessing fish stocks.

Without this data, small-scale fisheries in those countries are often operating blindly, without long-term plans to keep their coastal waters healthy and productive, WCS said.

They noted that tools, such as this new algorithm, can help change that.

A man holds up his catch. (Wildlife Conservation Society / FOX Weather)

"Our goal is to give people the information required to know the status of their fish resources and whether their fisheries need time to recover or not," said Tim McClanahan, director of Marine Science at WCS and co-author on the study.


"The long-term goal is that they, their children, and their neighbors can find a balance between people's needs and ocean health," he added.

WCS is hoping to continue this work and help fill data gaps about fisheries around the world.

Read this article:
How artificial intelligence is helping make fisheries more sustainable - Fox Weather

AI: Good or bad? All your artificial intelligence fears, addressed – AMBCrypto News

Leading Artificial Intelligence [AI] researcher Geoffrey Hinton recently quit Google, citing concerns about the risks of artificial intelligence. He voiced his concerns that the tech might soon outperform the human brain's information capacity. He termed some threats posed by these chatbots "quite scary."

Hinton argued that chatbots can learn on their own and share their expertise. This means that any new knowledge acquired by one copy is automatically distributed to the entire group. This enables chatbots to collect knowledge far beyond the capacity of any individual.

Let us dig deeper into these concerns and understand how much of these concerns are shared by the online world.

There is a general understanding that AI will most likely become super intelligent in the next few decades. But in its current state, AI is merely a tool. It has no ability to think. Any chatbot today just translates large amounts of data into numbers and returns the required figures. It can handle complex and ill-formed problems in disciplines such as image recognition, state space searches, model construction, and natural language processing in a reasonably consistent manner.
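The claim that a chatbot "translates large amounts of data into numbers" can be made concrete with a toy tokenizer. Real chatbots use learned subword vocabularies of tens of thousands of tokens; the tiny hand-built word vocabulary below is purely illustrative:

```python
# Toy illustration of text becoming numbers: a vocabulary maps each word
# to an integer ID, which is all a statistical model ever sees.
def build_vocab(corpus):
    """Assign an integer ID to each unique word, in order of first appearance."""
    vocab = {}
    for text in corpus:
        for word in text.lower().split():
            if word not in vocab:
                vocab[word] = len(vocab)
    return vocab

def encode(text, vocab):
    """Convert a sentence into the list of IDs a model would consume."""
    return [vocab[w] for w in text.lower().split() if w in vocab]

corpus = ["the model sees numbers", "numbers in and numbers out"]
vocab = build_vocab(corpus)
print(encode("the numbers", vocab))  # -> [0, 3]
```

Everything downstream of this step, from prediction to the final reply, is arithmetic over such IDs; no understanding is involved, which is the article's point.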

At the moment, an AI that interprets your language cannot predict your movements. These would be two distinct programs. Artificial intelligence, as of now, does not involve general cognitive processes. We are so far from a refined AI model as depicted in science fiction that we don't even know what developing a highly intelligent AI entails.

At present, our existing AI models handle specific issues in specific circumstances. They are essentially just sophisticated statistical models. Although this technology is extremely effective, there is no reason to believe that we are developing a powerful general-purpose AI model.

However, a lot of money is being poured these days into coming closer to a general-purpose AI, both in academia and in industry, but it doesn't exist yet.

The AI of today is incapable of resolving moral quandaries. Moral issues are not rational; they are subjective and unique to the person who discusses the issue. If AI is told to kill all persons of X demography in a certain place while causing no harm to members of Y population, it will do so without hesitation.

The problem with this approach is that it ignores the sole thing that limits our own intellect: the environment. The universe's intricacy is incomprehensible. Just because we have a very specialized AI system does not imply that the AI is a specialist in everything.

There is an implicit assumption that morality is something we lose track of as we get smarter. But that is far from the case. Indeed, ethics is a common wisdom among us, though ever-evolving. It is a method of dealing with the complexities of universal problems.

AI now has huge economic incentives for development, with billions of dollars in research being spent across a wide range of applications by both private and public organizations. Given that, ruling out quick progress in the next few decades would be foolish.

However, most industries are currently focused on compartmentalized AI, which involves combining numerous separate AIs, each of which does a certain task very well.

The development of AI is frequently viewed as both a threat and an opportunity for humans, depending on a variety of circumstances.

On the one hand, there are concerns about the possible hazards of AI. These include employment displacement, privacy and security concerns, algorithmic biases, and the concentration of power in the hands of a few individuals or organizations. These risks, if not appropriately handled, might have severe effects on people, society, and humanity's general well-being.

Even so, one cannot discount AI's benefits. It can increase productivity, generate innovation, advance healthcare and science, and address difficult social concerns. AI has the potential to boost human talents, automate monotonous chores, and allow us to make more informed judgements. It provides opportunities for progress in a variety of industries, including education, transportation, agriculture, and others.

The aim is to create and deploy AI in a responsible and ethical manner. We can maximize the good impact of AI while minimizing possible hazards by addressing concerns such as transparency, accountability, justice, and prejudice. It requires collaboration among researchers, policymakers, and industry leaders to ensure that AI is developed and used in ways that align with human values and benefit society.

More here:
AI: Good or bad? All your artificial intelligence fears, addressed - AMBCrypto News

How artificial intelligence is helping build hurricane-resistant homes – Fox Weather

Superstorm Sandy flooded the emergency room at the former Coney Island Hospital in South Brooklyn. Eleven years later, FOX Weather's Amy Freeze takes you to the new Ruth Bader Ginsburg Hospital, a $1B hospital funded by a FEMA grant, built to be hurricane-resistant.

Researchers have developed a method of digitally simulating hurricanes to help refine building codes for homes and businesses in hurricane-prone areas.

Current building code guidelines include maps that state the level of wind a structure must be able to withstand at a given location. These maps were developed using earlier simulations of the inner workings of hurricanes.

The newly published simulations use advances in artificial intelligence, along with years of additional hurricane records, to create more realistic hurricane wind maps for the future.

People clear debris in the aftermath of Hurricane Ian in Fort Myers Beach, Florida on September 30, 2022. (GIORGIO VIERA/AFP / Getty Images)

Researchers used information on more than 1,500 storms from the National Hurricane Center's Atlantic Hurricane Database, which contains information about hurricanes from the past 100 years.

With this information, researchers produced models using machine-learning and deep-learning techniques that simulated hurricane properties, such as landfall location and wind speed, that were consistent with historical records.
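The article doesn't publish the NIST models, but the underlying idea (estimate statistics from historical storm records, then draw synthetic storms whose properties are consistent with them) can be sketched in miniature. The records, the Gaussian fit, and the field names below are all invented for illustration, and plain Gaussians stand in for the study's machine-learning models:

```python
# Illustrative miniature of the generative approach described above:
# fit simple distributions to historical storm properties, then sample
# synthetic storms from them. All numbers are invented.
import random
import statistics

# (landfall_latitude_deg, max_wind_kt) for five invented historical storms
historical = [(25.8, 130), (29.1, 95), (27.4, 110), (30.2, 85), (26.5, 120)]

lat_mu = statistics.mean(lat for lat, _ in historical)
lat_sd = statistics.stdev(lat for lat, _ in historical)
wind_mu = statistics.mean(w for _, w in historical)
wind_sd = statistics.stdev(w for _, w in historical)

def simulate_storm():
    """Draw one synthetic storm from Gaussians fit to the historical sample."""
    return (random.gauss(lat_mu, lat_sd), random.gauss(wind_mu, wind_sd))

random.seed(0)
synthetic = [simulate_storm() for _ in range(1000)]
sim_wind_mu = statistics.mean(w for _, w in synthetic)
print(f"historical mean wind {wind_mu:.0f} kt, simulated {sim_wind_mu:.0f} kt")
```

Checking that the synthetic population's statistics match the historical record, as the last line does for mean wind speed, is the same consistency test the researchers describe applying to their generated 100-year storm sets.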


"It performs very well," said Adam Pintar, a mathematical statistician at the National Institute of Standards and Technology and co-author on the study. "Depending on where you're looking at along the coast, it would be quite difficult to identify a simulated hurricane from a real one, honestly."

Hurricane simulation using the new models. (Shutterstock, adapted by B. Hayes/NIST / FOX Weather)

The models were also used to generate sets of 100 years worth of hypothetical storms, which the researchers noted largely overlapped with the general behavior of storms in the NHC's Atlantic Hurricane Database.

Researchers did note, however, that the simulations generated by the models were less realistic for coastal states in the Northeast due to a relative lack of information.

A man motions to a satellite image of Hurricane Fiona over Puerto Rico. (Office of the Governor of Puerto Rico / FOX Weather)

"Hurricanes are not as frequent in, say, Boston as in Miami, for example," said Emil Simiu, NIST fellow and co-author on the study. "The less data you have, the larger the uncertainty of your predictions."


According to the NIST, the team plans to use simulated hurricanes to develop coastal maps of extreme wind speeds as well as quantify uncertainty in those estimated speeds.

See original here:
How artificial intelligence is helping build hurricane-resistant homes - Fox Weather

Will artificial intelligence replace doctors? – Harvard Health

Q. Everyone's talking about artificial intelligence, and how it may replace people in various jobs. Will artificial intelligence replace my doctor?

A. Not in my lifetime, fortunately! And the good news is that artificial intelligence (AI) has the potential to improve your doctor's decisions, and to thereby improve your health if we are careful about how it is developed and used.

AI is a mathematical process that tries to make sense out of massive amounts of information. So it requires two things: the ability to perform mathematical computations rapidly, and huge amounts of information stored in an electronic form (words, numbers, and pictures).

When computers and AI were first developed in the 1950s, some visionaries described how they could theoretically help improve decisions about diagnosis and treatment. But computers then were not nearly fast enough to do the computations required. Even more important, almost none of the information the computers would have to analyze was stored in electronic form. It was all on paper. Doctors' notes about a patient's symptoms and physical examination were written (not always legibly) on paper. Test results were written on paper and pasted in a patient's paper medical record. As computers got better, they started to relieve doctors and other health professionals of some tedious tasks, like helping to analyze images: electrocardiograms (ECGs), blood samples, x-rays, and Pap smears.

Today, computers are literally millions of times more powerful than when they were first developed. More important, huge amounts of medical information now are in electronic form: medical records of millions of people, the results of medical research, and the growing knowledge about how the body works. That makes feasible the use of AI in medicine.

Already, computers and AI have made powerful medical research breakthroughs, like predicting the shape of most human proteins. In the future, I predict that computers and AI will listen to conversations between doctor and patient and then suggest tests or treatments the doctor should consider; highlight possible diagnoses based on a patient's symptoms, after comparing that patient's symptoms to those of millions of other people with various diseases; and draft a note for the medical record, so the doctor doesn't have to spend time typing at a computer keyboard and can spend more time with the patient.

All of this will not happen immediately or without missteps: doctors and computer scientists will need to carefully evaluate and guide the development of new AI tools in medicine. If the suggestions AI provides to doctors prove to be inaccurate or incomplete, that "help" will be rejected. And if AI then does not get better, and fast, it will lose credibility. Powerful technologies can be powerful forces for good, and for mischief.

Will artificial intelligence replace doctors? - Harvard Health