In A.I. Race, Microsoft and Google Choose Speed Over Caution – The New York Times

In March, two Google employees, whose jobs are to review the company's artificial intelligence products, tried to stop Google from launching an A.I. chatbot. They believed it generated inaccurate and dangerous statements.

Ten months earlier, similar concerns were raised at Microsoft by ethicists and other employees. They wrote in several documents that the A.I. technology behind a planned chatbot could flood Facebook groups with disinformation, degrade critical thinking and erode the factual foundation of modern society.

The companies released their chatbots anyway. Microsoft was first, with a splashy event in February to reveal an A.I. chatbot woven into its Bing search engine. Google followed about six weeks later with its own chatbot, Bard.

The aggressive moves by the normally risk-averse companies were driven by a race to control what could be the tech industry's next big thing: generative A.I., the powerful new technology that fuels those chatbots.

That competition took on a frantic tone in November when OpenAI, a San Francisco start-up working with Microsoft, released ChatGPT, a chatbot that has captured the public imagination and now has an estimated 100 million monthly users.

The surprising success of ChatGPT has led to a willingness at Microsoft and Google to take greater risks with their ethical guidelines set up over the years to ensure their technology does not cause societal problems, according to 15 current and former employees and internal documents from the companies.

The urgency to build with the new A.I. was crystallized in an internal email sent last month by Sam Schillace, a technology executive at Microsoft. He wrote in the email, which was viewed by The New York Times, that it was an "absolutely fatal error in this moment to worry about things that can be fixed later."

"When the tech industry is suddenly shifting toward a new kind of technology, the first company to introduce a product is the long-term winner just because they got started first," he wrote. "Sometimes the difference is measured in weeks."

Last week, tension between the industry's worriers and risk-takers played out publicly as more than 1,000 researchers and industry leaders, including Elon Musk and Apple's co-founder Steve Wozniak, called for a six-month pause in the development of powerful A.I. technology. In a public letter, they said it presented "profound risks to society and humanity."

Regulators are already threatening to intervene. The European Union proposed legislation to regulate A.I., and Italy temporarily banned ChatGPT last week. In the United States, President Biden on Tuesday became the latest official to question the safety of A.I.

A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today's powerhouses into has-beens and creating the industry's next giants. Here are the bots to know:

ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).

Bing. Two months after ChatGPT's debut, Microsoft, OpenAI's primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot's occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.

Ernie. The search giant Baidu unveiled China's first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised live demonstration of the bot was revealed to have been recorded.

"Tech companies have a responsibility to make sure their products are safe before making them public," Mr. Biden said at the White House. When asked if A.I. was dangerous, he said: "It remains to be seen. Could be."

The issues being raised now were once the kinds of concerns that prompted some companies to sit on new technology. They had learned that prematurely releasing A.I. could be embarrassing. In 2016, for example, Microsoft quickly pulled a chatbot called Tay after users nudged it to generate racist responses.

Researchers say Microsoft and Google are taking risks by releasing technology that even its developers don't entirely understand. But the companies said that they had limited the scope of the initial release of their new chatbots, and that they had built sophisticated filtering systems to weed out hate speech and content that could cause obvious harm.

Natasha Crampton, Microsoft's chief responsible A.I. officer, said in an interview that six years of work around A.I. and ethics at Microsoft had allowed the company to "move nimbly and thoughtfully." She added that "our commitment to responsible A.I. remains steadfast."

Google released Bard after years of internal dissent over whether generative A.I.'s benefits outweighed the risks. It announced Meena, a similar chatbot, in 2020. But that system was deemed too risky to release, three people with knowledge of the process said. Those concerns were reported earlier by The Wall Street Journal.

Later in 2020, Google blocked its top ethical A.I. researchers, Timnit Gebru and Margaret Mitchell, from publishing a paper warning that so-called large language models used in the new A.I. systems, which are trained to recognize patterns from vast amounts of data, could spew abusive or discriminatory language. The researchers were pushed out after Ms. Gebru criticized the company's diversity efforts and Ms. Mitchell was accused of violating its code of conduct after she saved some work emails to a personal Google Drive account.

Ms. Mitchell said she had tried to help Google release products responsibly and avoid regulation, but instead "they really shot themselves in the foot."

Brian Gabriel, a Google spokesman, said in a statement that "we continue to make responsible A.I. a top priority, using our A.I. principles and internal governance structures to responsibly share A.I. advances with our users."

Concerns over larger models persisted. In January 2022, Google refused to allow another researcher, El Mahdi El Mhamdi, to publish a critical paper.

Mr. El Mhamdi, a part-time employee and university professor, used mathematical theorems to warn that the biggest A.I. models are more vulnerable to cybersecurity attacks and present unusual privacy risks because they've probably had access to private data stored in various locations around the internet.

Though an executive presentation later warned of similar A.I. privacy violations, Google reviewers asked Mr. El Mhamdi for substantial changes. He refused and released the paper through École Polytechnique.

He resigned from Google this year, citing in part research censorship. He said modern A.I.'s risks "highly exceeded" the benefits. "It's premature deployment," he added.

After ChatGPT's release, Kent Walker, Google's top lawyer, met with research and safety executives on the company's powerful Advanced Technology Review Council. He told them that Sundar Pichai, Google's chief executive, was pushing hard to release Google's A.I.

Jen Gennai, the director of Google's Responsible Innovation group, attended that meeting. She recalled what Mr. Walker had said to her own staff.

"The meeting was Kent talking at the A.T.R.C. execs, telling them, 'This is the company priority,'" Ms. Gennai said in a recording that was reviewed by The Times. "'What are your concerns? Let's get in line.'"

Mr. Walker told attendees to fast-track A.I. projects, though some executives said they would maintain safety standards, Ms. Gennai said.

Her team had already documented concerns with chatbots: They could produce false information, hurt users who become emotionally attached to them and enable tech-facilitated violence through mass harassment online.

In March, two reviewers from Ms. Gennai's team submitted their risk evaluation of Bard. They recommended blocking its imminent release, two people familiar with the process said. Despite safeguards, they believed the chatbot was not ready.

Ms. Gennai changed that document. She took out the recommendation and downplayed the severity of Bard's risks, the people said.

Ms. Gennai said in an email to The Times that because Bard was an experiment, reviewers were not supposed to weigh in on whether to proceed. She said she corrected inaccurate assumptions, and actually added more risks and harms that needed consideration.

Google said it had released Bard as a limited experiment because of those debates, and Ms. Gennai said continuing training, guardrails and disclaimers made the chatbot safer.

Google released Bard to some users on March 21. The company said it would soon integrate generative A.I. into its search engine.

Satya Nadella, Microsoft's chief executive, made a bet on generative A.I. in 2019 when Microsoft invested $1 billion in OpenAI. After deciding the technology was ready over the summer, Mr. Nadella pushed every Microsoft product team to adopt A.I.

Microsoft had policies developed by its Office of Responsible A.I., a team run by Ms. Crampton, but the guidelines were not consistently enforced or followed, said five current and former employees.

Despite the company's transparency principle, ethics experts working on the chatbot were not given answers about what data OpenAI used to develop its systems, according to three people involved in the work. Some argued that integrating chatbots into a search engine was a particularly bad idea, given how it sometimes served up untrue details, a person with direct knowledge of the conversations said.

Ms. Crampton said experts across Microsoft worked on Bing, and key people had access to the training data. The company worked to make the chatbot more accurate by linking it to Bing search results, she added.

In the fall, Microsoft started breaking up what had been one of its largest technology ethics teams. The group, Ethics and Society, trained and consulted company product leaders to design and build responsibly. In October, most of its members were spun off to other groups, according to four people familiar with the team.

The remaining few joined daily meetings with the Bing team, racing to launch the chatbot. John Montgomery, an A.I. executive, told them in a December email that their work remained vital and that "more teams will also need our help."

After the A.I.-powered Bing was introduced, the ethics team documented lingering concerns. Users could become too dependent on the tool. Inaccurate answers could mislead users. People could believe the chatbot, which uses an "I" and emojis, was human.

In mid-March, the team was laid off, an action that was first reported by the tech newsletter Platformer. But Ms. Crampton said hundreds of employees were still working on ethics efforts.

Microsoft has released new products every week, a frantic pace to fulfill plans that Mr. Nadella set in motion in the summer when he previewed OpenAI's newest model.

He asked the chatbot to translate the Persian poet Rumi into Urdu, and then English. "It worked like a charm," he said in a February interview. "Then I said: 'God, this thing.'"

Mike Isaac contributed reporting. Susan C. Beachy contributed research.
