Sam Altman-OpenAI saga: Researchers had warned board of ‘dangerous, humanity-threatening’ AI

Before Sam Altman, the CEO of OpenAI, was temporarily removed from his position, a group of staff researchers sent a letter to the board of directors warning of a significant artificial intelligence discovery that they said could potentially threaten humanity, according to a Reuters report citing two individuals.

The report suggests that the letter and the AI algorithm it discussed had not previously been reported, but that they may have played a crucial role in the board's decision to remove Altman. Over 700 employees had threatened to leave OpenAI and join Microsoft, one of the company's backers, in support of Altman. The letter was one of several grievances raised by the board that led to Altman's dismissal, according to the report.

Earlier this week, Mira Murati, a long-time executive at OpenAI, mentioned a project called Q* (pronounced Q-star) to employees and stated that a letter had been sent to the board before the weekend's events.

After the story was published, an OpenAI spokesperson said that Murati had told employees what the media were about to report, according to the report. The company behind ChatGPT has made progress on Q*, which some within the company believe could be a significant step towards super-intelligence, also known as artificial general intelligence (AGI).

How is the new model different?

Given access to extensive computing resources, the new model was able to solve certain mathematical problems. Even though it was only performing math at the level of grade-school students, the results made the researchers optimistic about Q*'s future success.

Math is considered one of the most important aspects of generative AI development. Current generative AI is good at writing and language translation because it statistically predicts the next word. However, the ability to do math, where there is only one correct answer, implies that an AI has greater reasoning capabilities resembling human intelligence, which could be applied to novel scientific research.
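To make the "statistically predicting the next word" idea concrete, here is a minimal, purely illustrative sketch of next-word prediction using bigram frequency counts in Python. The toy corpus and the predict_next function are invented for this example and bear no relation to OpenAI's actual models, which use neural networks trained at vastly larger scale.

from collections import Counter, defaultdict

# Toy corpus; real models train on vastly larger bodies of text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    # Return the statistically most likely next word, or None if unseen.
    counts = follow_counts.get(word)
    if not counts:
        return None
    return counts.most_common(1)[0][0]

print(predict_next("the"))  # prints 'cat', the most frequent follower of 'the'

Real systems replace these frequency counts with neural networks, but the objective, scoring candidate next words and emitting a likely one, is the same in spirit. That is also why a task like math, where plausible-sounding output is not enough and only one answer is correct, is viewed as a stronger test of reasoning.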

Unlike a calculator, which can solve only a limited set of operations, AGI can generalise, learn, and comprehend. In their letter to the board, the researchers highlighted the potential danger of AI's capabilities. There has been a long-standing debate among computer scientists about the risks posed by super-intelligent machines.

Sam Altman's Role

Against this backdrop, Altman led the effort to make ChatGPT one of the fastest-growing software applications in history and secured the investment and computing resources from Microsoft needed to move closer to super-intelligence.

In addition to announcing a series of new tools earlier this month, Altman hinted at a gathering of world leaders in San Francisco that he believed AGI was within reach. "Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I've gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime," he said. The board fired Altman the next day.
