Fear the fire or harness the flame: The future of generative AI


Generative AI has taken the world by storm. So much so that in the last several months, the technology has twice been a major feature on CBS's 60 Minutes. The rise of startlingly conversant chatbots such as ChatGPT has even prompted warnings of runaway technology from some luminary artificial intelligence (AI) experts. While the current state of generative AI is clearly impressive (perhaps dazzling would be a better adjective), it might be even further advanced than is generally understood.

This week, The New York Times reported that some researchers in the tech industry believe these systems have moved toward something that cannot be explained as a "stochastic parrot," a system that simply mimics its underlying dataset. Instead, they are seeing "an AI system that is coming up with humanlike answers and ideas that weren't programmed into it." This observation comes from Microsoft researchers and is based on responses to their prompts from OpenAI's ChatGPT.

Their view, as put forward in a research paper published in March, is that the chatbot showed "sparks" of artificial general intelligence (AGI), the term for a machine that attains the resourcefulness of human brains. This would be a significant development, as AGI is thought by most to still be many years, possibly decades, into the future. Not everyone agrees with their interpretation, but Microsoft has reorganized parts of its research labs to include multiple groups dedicated to exploring this AGI idea.

Separately, Scientific American described several similar research outcomes, including one from philosopher Raphaël Millière of Columbia University. He typed a program into ChatGPT, asking it to calculate the 83rd number in the Fibonacci sequence.


"It's multistep reasoning of a very high degree," he said.

The chatbot nailed it. It shouldn't have been able to do this, since it isn't designed to manage a multistep process. Millière hypothesized that the machine improvised a memory within the layers of its network (another AGI-style behavior) for interpreting words according to their context. Millière believes this behavior is much like how nature repurposes existing capacities for new functions, such as the evolution of feathers for insulation before they were used for flight.
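For perspective, the task Millière posed is trivial for a conventional computer. The sketch below is not his exact prompt, just a minimal Python version of the same calculation, assuming the common 1-indexed convention in which the first two Fibonacci numbers are both 1:

```python
def fibonacci(n: int) -> int:
    """Return the nth Fibonacci number (1-indexed, so F(1) = F(2) = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b  # advance the sequence by one term per iteration
    return a

# Under this convention the 83rd Fibonacci number is 99194853094755497.
print(fibonacci(83))
```

The point of the experiment was not the arithmetic itself: a language model is trained to predict text, not to execute a looping, multistep procedure like this one, yet it produced the correct answer anyway.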

Even as these models arguably show early signs of AGI, developers continue to make advances with large language models (LLMs). Late last week, Google announced significant upgrades to its Bard chatbot, including moving Bard to the new PaLM 2 large language model. Per a CNBC report, PaLM 2 uses almost five times as much training data as its predecessor from 2022, allowing it to perform more advanced coding, math and creative writing tasks. Not to be outdone, OpenAI this week started to make plug-ins available for ChatGPT, including the ability to access the Internet in real time instead of relying solely on a dataset with content through 2021.

At the same time, Anthropic announced an expanded context window for its Claude chatbot. Per a LinkedIn post from AI expert Azeem Azhar, a context window is the length of text that an LLM can process and respond to.

"In a sense, it is like the memory of the system for a given analysis or conversation," Azhar wrote. "Larger context windows allow the systems to have much longer conversations or to analyze much bigger, more complex documents."

According to this post, the window for Claude is now about three times larger than that for ChatGPT.
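To make the idea concrete, the sketch below shows what a context window limit means in practice. The window sizes and the whitespace-based word count are purely illustrative placeholders, not the actual limits or tokenizers of either chatbot:

```python
def rough_token_count(text: str) -> int:
    """Crude proxy for tokens: count whitespace-separated words."""
    return len(text.split())

def fits_in_context(text: str, window_tokens: int) -> bool:
    """Check whether a document would fit inside a model's context window."""
    return rough_token_count(text) <= window_tokens

# Hypothetical window sizes for illustration only; a window three times larger
# can take in a correspondingly longer document or conversation in one pass.
SMALL_WINDOW = 8_000
LARGE_WINDOW = 24_000

document = "word " * 10_000  # a 10,000-word document

print(fits_in_context(document, SMALL_WINDOW))  # False: too long for the smaller window
print(fits_in_context(document, LARGE_WINDOW))  # True: fits in the larger window
```

Real systems count subword tokens rather than words, but the tradeoff is the same: a larger window means more conversation history or a bigger document can be considered at once.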

All of which is to say that if ChatGPT exhibited sparks of AGI in research performed several months ago, the state of the art has already surpassed those capabilities. That said, these models still have numerous shortcomings, including occasional "hallucinations," where they simply make up answers. But it is the speed of advances that has spooked many and led to urgent calls for regulation. However, Axios reports that the likelihood of U.S. lawmakers uniting and acting on AI regulation before the technology develops further remains slim.

Those who see an existential danger worry that AI could destroy democracy or humanity. This group of experts now includes Geoffrey Hinton, the "Godfather of AI," along with longtime AI doomsayers such as Eliezer Yudkowsky. The latter has said that by building a superhumanly smart AI, "literally everyone on Earth will die."

While not nearly as dire in their outlook, even the executives of leading AI companies (including Google, Microsoft, and OpenAI) have said they believe AI regulation is necessary to avoid potentially damaging outcomes.

Amid all of this angst, Casey Newton, author of the Platformer newsletter, recently wrote about how he should approach what is essentially a paradox. Should his coverage emphasize the hope that AI is the best of us and will solve complex problems and save humanity, or should it instead speak to how AI is the worst of us, obfuscating the truth, destroying trust and, ultimately, humanity?

There are those who believe the worries are overblown. Instead, they see this response as a reactionary fear of the unknown, or what amounts to technophobia. For example, essayist and novelist Stephen Marche wrote in the Guardian that tech doomerism is a species of hype.

He blames this in part on the fears of engineers who build the technology but who simply have no idea how their inventions interact with the world. Marche dismisses the worry that AI is about to take over the world as anthropomorphizing and storytelling; it's a movie playing in the collective mind, nothing more. Demonstrating how in thrall we are to these themes, a new movie expected this fall pits humanity against the forces of AI in a planet-ravaging war for survival.

A common-sense approach was expressed in an opinion piece from Professor Ioannis Pitas, chair of the International AI Doctoral Academy. Pitas believes AI is a necessary human response to a global society and physical world of ever-increasing complexity. He sees the positive impact of AI systems greatly outweighing their negative aspects if proper regulatory measures are taken. In his view, AI should continue to be developed, but with regulations in place to minimize its already evident and potential negative effects.

This is not to say there are no dangers ahead with AI. Alphabet CEO Sundar Pichai has said: "AI is one of the most important things humanity is working on. It is more profound than electricity or fire."

Perhaps fire provides a good analogy. There have been many mishaps in handling fire, and these still occasionally occur. Fortunately, society has learned to harness the benefits of fire while mitigating its dangers through standards and common sense. The hope is that we can do the same thing with AI before we are burned by the sparks of AGI.

Gary Grossman is SVP of technology practice at Edelman and global lead of the Edelman AI Center of Excellence.
