Category Archives: Artificial General Intelligence
Will AI save humanity? U.S. tech fest offers reality check – Japan Today
Artificial intelligence aficionados are betting that the technology will help solve humanity's biggest problems, from wars to global warming, but in practice, these may be unrealistic ambitions for now.
"It's not about asking AI 'Hey, this is a sticky problem. What would you do?' and AI is like, 'well, you need to completely restructure this part of the economy,'" said Michael Littman, a Brown University professor of computer science.
Littman was at the South By Southwest (or SXSW) arts and technology festival in Austin, Texas, where he had just spoken on one of the many panels on the potential benefits of AI.
"It's a pipe dream. It's a little bit science fiction. Mostly what people are doing is they're trying to bring AI to bear on specific problems that they're already solving, but just want to be more efficient.
"It's not just a matter of pushing this button and everything's fixed," he said.
With their promising titles ("How to Make AGI Beneficial and Avoid a Robot Apocalypse") and the ever-present tech giants, the panels attract big crowds, but they often serve more pragmatic objectives, like promoting a product.
At one meeting called "Inside the AI Revolution: How AI is Empowering the World to Achieve More," Simi Olabisi, a Microsoft executive, praised the tech's benefits on Azure, the company's cloud service.
When using Azure's AI language feature in call centers, "maybe when a customer called in, they were angry, and when they ended the call, they were really appreciative. Azure AI Language can really capture that sentiment, and tell a business how their customers are feeling," she explained.
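The pattern Olabisi describes, comparing sentiment at the start of a call with sentiment at the end, can be illustrated with a toy sketch. To be clear, this is not the Azure AI Language API; it is a minimal lexicon-based stand-in, with an invented word list, showing only the idea of tracking a sentiment trajectory across a transcript.

```python
import re

# Toy sentiment lexicon, invented for illustration; a real service
# like Azure AI Language uses trained models, not word lists.
NEGATIVE = {"angry", "broken", "terrible", "frustrated"}
POSITIVE = {"thanks", "great", "appreciate", "helpful"}

def sentiment(utterance: str) -> int:
    """Crude score: +1 per positive word, -1 per negative word."""
    words = re.findall(r"[a-z']+", utterance.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def call_trajectory(transcript: list) -> str:
    """Compare sentiment of the first and last utterances of a call."""
    start, end = sentiment(transcript[0]), sentiment(transcript[-1])
    if end > start:
        return "improved"
    if end < start:
        return "worsened"
    return "unchanged"

transcript = [
    "I am angry, my order arrived broken",
    "Okay, that replacement works",
    "Thanks, you were really helpful",
]
print(call_trajectory(transcript))  # improved
```

The business-facing summary Olabisi mentions ("how their customers are feeling") is essentially this trajectory, aggregated over many calls.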
The notion of artificial intelligence, with its algorithms capable of automating tasks and analyzing mountains of data, has been around for decades.
But it took on a whole new dimension last year with the success of ChatGPT, the generative AI interface launched by OpenAI, the now iconic AI start-up mainly funded by Microsoft.
OpenAI claims to want to build artificial "general" intelligence or AGI, which will be "smarter than humans in general" and will "elevate humanity," according to CEO Sam Altman.
That ethos was very present at SXSW, with talk about "when" AGI will become a reality, rather than "if."
Ben Goertzel, a scientist who heads the SingularityNET Foundation and the AGI Society, predicted the advent of general AI by 2029.
"Once you have a machine that can think as well as a smart human, you're at most a few years from a machine that can think a thousand or a million times better than a smart human, because this AI can modify its own source code," said Goertzel.
Wearing a leopard-print faux-fur cowboy hat, he advocated the development of AGI endowed with "compassion and empathy," and integrated into robots "that look like us," to ensure that these "super AIs" get on well with humanity.
David Hanson - founder of Hanson Robotics and designer of Desdemona, a humanoid robot that runs on generative AI - brainstormed about the pluses and minuses of AI with superpowers.
AI's "positive disruptions...can help to solve global sustainability issues, although people are probably going to be just creating financial trading algorithms that are absolutely effective," he said.
Hanson fears the turbulence from AI, but pointed out that humans are doing a "fine job" already of playing "existential roulette" with nuclear weapons and by causing "the fastest mass extinction event in human history."
But "it may be that the AI could have seeds of wisdom that blossom and grow into new forms of wisdom that can help us be better," he said.
Initially, AI should accelerate the design of new, more sustainable drugs or materials, said believers in AI.
Even if "we're not there yet... in a dream world, AI could handle the complexity and the randomness of the real world, and... discover completely new materials that would enable us to do things that we never even thought were possible," said Roxanne Tully, an investor at Piva Capital.
Today, AI is already proving its worth in warning systems for tornadoes and forest fires, for example.
But we still need to evacuate populations, or get people to agree to vaccinate themselves in the event of a pandemic, stressed Rayid Ghani of Carnegie Mellon University during a panel titled "Can AI Solve the Extreme Weather Pandemic?"
"We created this problem. Inequities weren't caused by AI, they're caused by humans and I think AI can help a little bit. But only if humans decide they want to use it to deal with" the issue, Ghani said.
Read the original post:
Will AI save humanity? U.S. tech fest offers reality check - Japan Today
Artificial general intelligence and higher education – Inside Higher Ed
It is becoming increasingly clear that the advent of artificial general intelligence (AGI) is upon us. OpenAI includes in its mission that it aims to maximize the positive impact of AGI while minimizing harm. The research organization recognizes that AGI won't create a utopia, but it strives to ensure that its benefits are widespread and that it doesn't exacerbate existing inequalities.
Some say that elements of AGI will be seen in GPT-5, which OpenAI says is currently in prerelease testing. GPT-5 is anticipated to be available by the end of this year or in 2025.
Others suggest that Magic AI, the expanding artificial intelligence (AI) developer and coding assistant, may have already developed a version of AGI. Its model has a staggering ability to process 3.5 million words; still, as Aman Anand writes in Medium, "It is important to remember that Magic's model is still under development, and its true capabilities and limitations remain to be seen. While the potential for AGI is undeniable, it is crucial to approach this future with caution and a focus on responsible development."
Meanwhile, Google's Gemini 1.5 Pro is leaping ahead of OpenAI's models with a massive context capability:
"This means 1.5 Pro can process vast amounts of information in one go, including 1 hour of video, 11 hours of audio, codebases with over 30,000 lines of code or over 700,000 words. In our research, we've also successfully tested up to 10 million tokens."
Given the intense competition to be the first to achieve AGI, it is not unreasonable to expect that at least some of the parameters commonly used to describe AGI will be achieved by the end of this year, or almost certainly by 2026. AI researchers anticipate that an AGI system should have the following abilities and understanding:
AI researchers also anticipate that AGI systems will possess higher-level capabilities, such as being able to do the following:
Given those characteristics, let's imagine a time, perhaps in four or five years, in which AGI has been achieved and rolled out across society. In that circumstance, many of the jobs now performed by individuals could be completed more efficiently and less expensively by agents of AGI. Perhaps half or more of all jobs worldwide might be better done by AGI agents. At less cost, with more reliability and instant, automatic updating, these virtual employees would be a bargain. Coupled with sophisticated robotics, some of which we are seeing rolled out today, even many hands-on skilled jobs will be done efficiently and effectively by computer. All will be immediately and constantly updated with the very latest discoveries, techniques and contextual approaches.
AGI is expected to be followed by artificial superintelligence (ASI):
ASI refers to AI technology that will match and then surpass the human mind. To be classed as an ASI, the technology would have to be more capable than a human in every single way possible. Not only could these AI things carry out tasks, but they would even be capable of having emotions and relationships.
What, then, will individual humans need to learn in higher education that cannot be provided instantly and expertly through their own personal ASI lifelong learning assistant?
ASI may easily provide up-to-the-minute responses to our intellectual curiosity and related questions. It will be able to provide personalized learning experiences; sophisticated simulations; personalized counseling and advising; and assessment of our abilities and skills to validate and credential our learning. ASI could efficiently provide recordkeeping in a massive database. In that way, there would be no confusion over comparative rankings and the currency of credentials such as we see today.
In cases where we cannot achieve tasks on our own, ASI will direct virtual agents to carry them out for us. However, that may not fully satisfy the human-to-human and emotional interactions that seem basic to our nature. Human engagement, human affirmation and interpersonal connections may not be fulfilled by ASI and nonhuman agents. For example, some tasks are not as much about the outcome as about the journey, such as music, art and performance. In those cases, the process of refining those abilities is at least equal to the final product.
Is there something in the interpersonal, human-to-human engagement in such endeavors that is worthy of continuing in higher education rather than solely through computer-assisted achievement? If so, does that require a university campus? Certainly, some disciplines will fall out of favor due to suppressed job markets in those fields, and the number of faculty and staff members will fall with them.
If this vision of the next decade is on target, higher education is best advised to begin considering today how it will morph into something that serves society in the fourth industrial revolution. We must begin to:
Have you and your colleagues begun to consider the question of what you provide that could not be more efficiently and less expensively provided by AI? Have you begun to research and formulate plans to compete or add value to services that are likely to be provided by AGI/ASI? One good place to begin such research is by asking a variety of the current generative AI apps to share insights and make recommendations!
Read this article:
Artificial general intelligence and higher education - Inside Higher Ed
Companies Like Morgan Stanley Are Already Making Early Versions of AGI – Observer
Companies like Morgan Stanley are already laying the groundwork for so-called organizational AGI. Maxim Tolchinskiy/Unsplash
Whether it's being theorized or possibly, maybe actualized, artificial general intelligence, or AGI, has become a frequent topic of conversation in a world where people are now routinely talking with machines. But there's an inherent problem with the term AGI, one rooted in perception. For starters, assigning intelligence to a system instantly anthropomorphizes it, adding to the perception that there's the semblance of a human mind operating behind the scenes. This notion of a mind deepens the perception that there's some single entity manipulating all of this human-grade thinking.
This problematic perception is compounded by the fact that large language models (LLMs) like ChatGPT, Bard, Claude and others make a mockery of the Turing test. They seem very human indeed, and it's not surprising that people have turned to LLMs as therapists, friends and lovers (sometimes with disastrous results). Does the humanness of their predictive abilities amount to some kind of general intelligence?
By some estimates, the critical aspects of AGI have already been achieved by the LLMs mentioned above. A recent article in Noema by Blaise Agüera y Arcas (vice president and fellow at Google Research) and Peter Norvig (a computer scientist at the Stanford Institute for Human-Centered A.I.) argues that today's frontier models "perform competently even on novel tasks they were not trained for, crossing a threshold that previous generations of A.I. and supervised deep learning systems never managed. Decades from now, they will be recognized as the first true examples of AGI."
For others, including OpenAI, AGI is still out in front of us. "We believe our research will eventually lead to artificial general intelligence," its research page proclaims, "a system that can solve human-level problems."
Whether nascent forms of AGI are already here or are still a few years away, it's likely that businesses attempting to harness these powerful technologies might create a miniature version of AGI. Businesses need technology ecosystems that can mimic human intelligence with the cognitive flexibility to solve increasingly complex problems. This ecosystem needs to orchestrate using existing software, understand routine tasks, contextualize massive amounts of data, learn new skills, and work across a wide range of domains. LLMs on their own can only perform a fraction of this work; they seem most useful as part of a conversational interface that lets people talk to technology ecosystems. There are strategies being used right now by leading enterprise companies to move in this direction, toward something we might call organizational AGI.
There are legitimate reasons to be wary of yet another unsolicited tidbit in the A.I. terms slush pile. Regardless of what we choose to call the eventual outcome of these activities, there are currently organizations using LLMs as an interface layer. They are creating ecosystems where users can converse with software through channels like rich web chat (RCW), obscuring the machinations happening behind the scenes. This is difficult work, but the payoff is huge: rather than pogo-sticking between apps to get something done on a computer, customers and employees can ask technology to run tasks for them. There's the immediate and tangible benefit of people eliminating tedious tasks from their lives. Then there's the long-term benefit of a burgeoning ecosystem where employees and customers interact with digital teammates that can perform automations leveraging all forms of data across an organization. This is an ecosystem that starts to take the form of a digital twin.
McKinsey describes a digital twin as "a virtual replica of a physical object, person, or process that can be used to simulate its behavior to better understand how it works in real life." It elaborates that a digital twin within an ecosystem similar to what I've described can become an enterprise metaverse, "a digital and often immersive environment that replicates and connects every aspect of an organization to optimize simulations, scenario planning and decision making."
With respect to what I said earlier about anthropomorphizing technology, the digital teammates within this kind of ecosystem are an abstraction, but I think of them as intelligent digital workers, or IDWs. An IDW is analogous to a collection of skills. These skills come from shared libraries, and they can be adapted and reused in a multitude of ways. Skills can take advantage of all the information piled up inside the organization, with LLMs mining unstructured data like emails and recorded calls.
This data becomes more meaningful thanks to graph technology, which is adept at creating indexes of skills, systems and data sources. A graph goes beyond a mere listing to include how these elements relate to and interact with each other. One of the core strengths of graph technology is its ability to represent and analyze relationships. For a network of IDWs, understanding how different components are interlinked is crucial for efficient orchestration and data flow.
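The kind of index described above can be sketched as a tiny dependency graph: skills point to the systems they use, and systems point to their data sources. A traversal then answers an orchestration question like "which resources does this skill ultimately touch?" All node names here are invented for illustration; a production system would use a graph database rather than a dict.

```python
from collections import deque

# Hypothetical "depends on" graph: skill -> systems -> data sources.
# Every name is invented for illustration.
GRAPH = {
    "summarize_account": ["crm_system", "email_archive"],
    "crm_system": ["customer_db"],
    "email_archive": ["mail_store"],
    "customer_db": [],
    "mail_store": [],
}

def reachable(graph: dict, start: str) -> set:
    """Breadth-first traversal: every node a skill transitively depends on."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for dep in graph.get(node, []):
            if dep not in seen:
                seen.add(dep)
                queue.append(dep)
    return seen

print(sorted(reachable(GRAPH, "summarize_account")))
# ['crm_system', 'customer_db', 'email_archive', 'mail_store']
```

Representing the relationships explicitly, rather than as a flat list, is what lets an orchestrator reason about which systems a given automation will pull data from.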
Generative tools like LLMs and graph technology can work in tandem to propel the journey toward digital twinhood, or organizational AGI. Twins can encompass all aspects of the business, including events, data, assets, locations, personnel and customers. Digital twins are likely to be low-fidelity at first, offering a limited view of the organization. As more interactions and processes take place within the org, however, the fidelity of the digital twin rises. An organization's technology ecosystem then not only understands the current state of the organization; it can also adapt and respond to new challenges autonomously.
In this sense every part of an organization represents an intelligent awareness that comes together around common goals. In my mind, it mirrors the nervous system of a cephalopod. As Peter Godfrey-Smith writes in his book Other Minds (2016, Farrar, Straus and Giroux), in an octopus, the majority of neurons are in the arms themselves, nearly twice as many in total as in the central brain. The arms have their own sensors and controllers. They have not only the sense of touch but also the capacity to sense chemicals, to smell or taste. Each sucker on an octopus's arm may have 10,000 neurons to handle taste and touch. Even an arm that has been surgically removed can perform various basic motions, such as reaching and grasping.
A world teeming with self-aware brands would be quite hectic. According to Gartner, by 2025, generative A.I. will be a workforce partner within 90 percent of companies worldwide. This doesn't mean that all of these companies will be surging toward organizational AGI, however. Generative A.I., and LLMs in particular, can't meet an organization's automation needs on its own. Giving an entire workforce access to GPTs or Copilot won't move the needle much in terms of efficiency. It might help people write better emails faster, but it takes a great deal of work to make LLMs reliable resources for user queries.
Their hallucinations have been well documented, and training them to provide trustworthy information is a herculean effort. Jeff McMillan, chief analytics and data officer at Morgan Stanley (MS), told me it took his team nine months to train GPT-4 on more than 100,000 internal documents. This work began before the launch of ChatGPT, and Morgan Stanley had the advantage of working directly with people at OpenAI. They were able to create a personal assistant that the investment bank's advisors can chat with, tapping into a large portion of its collective knowledge. "Now you're talking about wiring it up to every system," he said with regard to creating the kinds of ecosystems required for organizational A.I. "I don't know if that's five years or three years or 20 years, but what I'm confident of is that that is where this is going."
Companies like Morgan Stanley that are already laying the groundwork for so-called organizational AGI have a massive advantage over competitors that are still trying to decide how to integrate LLMs and adjacent technologies into their operations. So rather than a world awash in self-aware organizations, there will likely be a few market leaders in each industry.
This relates to broader AGI in the sense that these intelligent organizations are going to have to interact with other intelligent organizations. It's hard to envision exactly what depth of information sharing will occur between these elite orgs, but over time, these interactions might play a role in bringing about AGI, or the singularity, as it's also called.
Ben Goertzel, the founder of SingularityNET and the person often credited with creating the term, makes a compelling case that AGI should be decentralized, relying on open-source development as well as decentralized hosting and mechanisms for interconnecting A.I. systems so they can learn from and teach one another.
SingularityNET's DeAGI Manifesto states, "There is a broad desire for AGI to be ethical and beneficial for all humanity; the most straightforward way to achieve this seems to be for AGI to grow up in the context of serving and being guided by all humanity, or as good an approximation as can be mustered."
Having AGI manifest in part from the aggressive activities of for-profit enterprises is dicey. As Goertzel pointed out, "You get into questions [about] who owns and controls these potentially spooky and configurable human-like robot assistants and to what extent is their fundamental motivation to help people as opposed to sell people stuff or brainwash people into some corporate government media advertising order."
There's a strong case to be made that an allegiance to profit will be the undoing of the promise these technologies hold for humanity at large. Weirdly, the Skynet scenario in Terminator, where a system becomes self-aware, determines humanity is a grave threat, and exterminates all life, assumes that the system, isolated to a single company, has been programmed to have a survival instinct. It would have to be told that survival at all costs is its bottom line, which suggests we should be extra cautious about developing these systems within environments where profit above all else is the dictum.
Maybe the most important thing is keeping this technology in the hands of humans and pushing forward the idea that the myriad technologies associated with A.I. should only be used in ways that are beneficial to humanity as a whole, that don't exploit marginalized groups, and that aren't propagating synthesized bias at scale.
When I broached some of these ideas about organizational AGI to Jaron Lanier, co-creator of VR technology as we know it and Microsoft's Octopus (Office of the Chief Technology Officer Prime Unifying Scientist), he told me my vocabulary was nonsensical and that my thinking wasn't compatible with his perception of technology. Regardless, it felt like we agreed on core aspects of these technologies.
"I don't think of A.I. as creating new entities. I think of it as a collaboration between people," Lanier said. "That's the only way to think about using it well. To me it's all a form of collaboration. The sooner we see that, the sooner we can design useful systems. To me there's only people."
In that sense, AGI is yet another tool, way down the spectrum from the rocks our ancestors used to smash tree nuts. It's a manifestation of our ingenuity and our desires. Are we going to use it to smash every tree nut on the face of the earth, or are we going to use it to find ways to grow enough tree nuts for everyone to enjoy? The trajectories we set in these early moments are of grave importance.
"We're in the Anthropocene. We're in an era where our actions are affecting everything in our biological environment," Blaise Agüera y Arcas, the Noema article's author, told me. "The Earth is finite, and without the kind of solidarity where we start to think about the whole thing as our body, as it were, we're kind of screwed."
Josh Tyson is the co-author of Age of Invisible Machines, a book about conversational A.I., and Director of Creative Content at OneReach.ai. He co-hosts two podcasts: Invisible Machines and N9K.
Read this article:
Companies Like Morgan Stanley Are Already Making Early Versions of AGI - Observer
The AGI Lawsuit: Elon Musk vs. OpenAI and the Quest for Artificial General Intelligence that Benefits Humanity – Patently-O
By Dennis Crouch
Elon Musk was instrumental in the initial creation of OpenAI as a nonprofit with the vision of responsibly developing artificial intelligence (AI) to benefit humanity and to prevent monopolistic control over the technology. After ChatGPT went viral in late 2022, the company began focusing more on revenue and profits. It added a major for-profit subsidiary and completed a $13+ billion deal with Microsoft entitling the industry giant to a large share of OpenAI's future profits and a seat on the board.
In a new lawsuit, Elon Musk alleges that OpenAI and its CEO Sam Altman have breached the organization's founding vision. [Musk vs OpenAI].
Musk contributed over $44 million between 2015 and 2020 to OpenAI. He alleges OpenAI induced these large donations through repeated promises in its founding documents and communications that it would remain a public-spirited non-profit developing artificial general intelligence (AGI) cautiously and for the broad benefit of humanity. Musk claims he relied on these assurances that OpenAI would not become controlled by a single corporation when deciding to provide essential seed funding. With OpenAI now increasingly aligned with Microsofts commercial interests, Musk argues the results of his financial contributions did not achieve their promised altruistic purpose.
Perhaps the most interesting portion of the debate involves allegations that OpenAI's latest language model, GPT-4, already constitutes AGI, meaning it has human-level intelligence across a range of tasks. Musk further claims OpenAI has secretly developed an even more powerful AGI system known as Q* that shows the ability to chain logical reasoning beyond human capability, arguably reaching artificial superintelligence (ASI), or at least strong AGI.
The complaint discusses some of the potential risks of AGI:
Mr. Musk has long recognized that AGI poses a grave threat to humanity, perhaps the greatest existential threat we face today. His concerns mirrored those raised before him by luminaries like Stephen Hawking and Sun Microsystems founder Bill Joy. Our entire economy is based around the fact that humans work together and come up with the best solutions to a hard task. If a machine can solve nearly any task better than we can, that machine becomes more economically useful than we are. As Mr. Joy warned, with strong AGI, "the future doesn't need us." Mr. Musk publicly called for a variety of measures to address the dangers of AGI, from voluntary moratoria to regulation, but his calls largely fell on deaf ears.
Complaint at paragraph 18. In other words, Musk argues advanced AI threatens to replace and surpass humans across occupations if its intelligence becomes more generally capable. This could render many jobs and human skills obsolete, destabilizing economies and society by making people less essential than automated systems.
One note here for readers is to recognize the important and fundamental differences between AGI and consciousness. AGI refers to the ability of an AI system to perform any intellectual task that a human can do, focusing on problem-solving, memory utilization, creative tasks and decision-making capabilities. Consciousness, on the other hand, involves self-awareness, subjective experiences, emotional understanding, and decision-making that is not solely linked to intelligence levels. AGI, the focus of the lawsuit here, poses important risks to our human societal structure. But it is small potatoes compared to consciousness, which raises serious ethical considerations as the AI moves well beyond a human tool.
The complaint makes it clear Musk believes OpenAI has already achieved AGI with GPT-4, but AGI is a tricky thing to measure. Fascinatingly, whether Musk wins may hinge on a San Francisco jury deciding whether programs like GPT-4 and Q* legally constitute AGI. So how might jurors go about making this monumental determination? There are a few approaches they could take:
A 2023 article from a group of China-based AI researchers proposes what they call the Tong test for assessing AGI. An important note from the article is that AGI is not a simple yes/no threshold but rather something that should be quantified across a wide range of dimensions. The article proposes five dimensions: vision, language, reasoning, motor skills, and learning. The proposal would also measure the degree to which an AI system exhibits human values in a self-driven manner.
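The idea that AGI is a matter of degree across dimensions, rather than a binary verdict, can be sketched as a simple scoring scheme. The scores below are invented for illustration, and the real Tong test is far more involved; the sketch only shows how a multidimensional profile differs from a yes/no threshold.

```python
# Illustrative only: the scores are invented, not taken from the Tong test.
DIMENSIONS = ["vision", "language", "reasoning", "motor skills", "learning"]

def agi_profile(scores: dict) -> float:
    """Average capability across the five dimensions, each scored 0.0-1.0."""
    missing = [d for d in DIMENSIONS if d not in scores]
    if missing:
        raise ValueError(f"missing dimensions: {missing}")
    return sum(scores[d] for d in DIMENSIONS) / len(DIMENSIONS)

# A hypothetical LLM: strong on language, weak on vision, no motor skills.
hypothetical_llm = {
    "vision": 0.4, "language": 0.9, "reasoning": 0.6,
    "motor skills": 0.0, "learning": 0.5,
}
print(round(agi_profile(hypothetical_llm), 2))  # 0.48
```

A jury asked "is this AGI?" is effectively being asked to collapse such a profile to a single bit, which is exactly the difficulty the article flags.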
I can imagine expert testimony in the case, with Musk's lawyers presenting key examples showing the wide applicability of GPT-4 and OpenAI's own lawyers showing its system repeatedly failing. Although this approach is obviously not a true measure of general intelligence or an ideal way to make such an important decision, it does highlight the challenges inherent in trying to pass judgment on both a complex machine system and our measures of human intelligence. At its best, the adversarial litigation process, with its proof and counterproof, reflects a form of scientific process with the benefit of actually arriving at a legally binding answer.
Understanding the Inner Workings: OpenAI's latest language models keep their internal designs largely opaque, similar to the human brain. Because of our thick skulls and complex neural arrangement, the vast majority of human neurologic and intelligence testing is functional, focusing on the skills and abilities of the individual rather than directly assessing the inner workings. It is easy to assume a parallel form of analysis for AI intelligence and capability, especially because human results serve as the standard for measuring AGI. But the approach to human understanding is a feature of our particular biology and technology level. AI systems are designed and built by humans and do not have the natural constraints dictated by evolution. And, if transparency and understanding are goals, they can be directly designed into the system using transparent design principles. The current black-box approach at OpenAI makes evaluating claims of attaining artificial general intelligence difficult. We cannot peer inside to judge whether displayed abilities reflect true comprehension and reasoning or mere pattern recognition. A key benefit of the litigation system for Elon Musk in this case is that it may force OpenAI to provide more inner transparency in order to adequately advocate its position.
What do you think: What should be the legal test for artificial general intelligence?
Google’s Gemini showcases more powerful technology, but we’re still not close to superhuman AI – The Conversation
In December 2023, Google announced the launch of its new large language model (LLM) named Gemini. Gemini now provides the artificial intelligence (AI) foundations of Google products; it is also a direct rival to OpenAI's GPT-4.
But why does Google consider Gemini such an important milestone, and what does this mean for users of Google's services? And generally speaking, what does it mean in the context of the current hyperfast-paced developments in AI?
Read more: Google's Gemini: is the new AI model really better than ChatGPT?
Google is betting on Gemini to transform most of its products by enhancing current functionalities and creating new ones for services such as search, Gmail, YouTube and its office productivity suite. This would also allow improvements to its online advertising business, its main source of revenue, as well as to its Android phone software, with trimmed versions of Gemini running on limited-capacity hardware.
For users, Gemini means new features and improved capabilities that would make Google services harder to shun, strengthening an already dominant position in areas such as search engines. The potential and opportunities for Google are considerable, given that the bulk of its software consists of easily upgradable cloud services.
But the huge and unexpected success of ChatGPT attracted a lot of attention and enhanced the credibility of OpenAI. Gemini will allow Google to reassert itself as a major player in AI in the public view. Google is a powerhouse in AI, with large and strong research teams at the origin of many major advances of the last decade.
There is public discussion about these new technologies, both on the benefits they provide and the disruption they create in fields such as education, design and health care.
At its core, Gemini relies on transformer networks. Originally devised by a research team at Google, the same technology is used to power other LLMs such as GPT-4.
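The core operation of a transformer network is scaled dot-product attention, in which each position in a sequence weighs every other position before mixing their representations. A minimal single-query sketch in pure Python (real systems use batched tensor libraries and learned projections; this shows only the arithmetic):

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of floats."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    scores[i] = (query . keys[i]) / sqrt(d)
    output    = sum_i softmax(scores)[i] * values[i]
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[j] for w, v in zip(weights, values))
            for j in range(len(values[0]))]

# The query matches the first key most closely,
# so the output leans toward the first value vector.
out = attention(query=[1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```

Stacking many such attention layers, over tokens drawn from text, audio, image and video streams, is what lets a single model like Gemini relate content across modalities.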
A distinctive element of Gemini is its capacity to deal with different data modalities: text, audio, image and video. This provides the AI model with the capacity to execute tasks over several modalities, like answering questions regarding the content of an image or conducting a keyword search on specific types of content discussed in podcasts.
But more importantly, the fact that the models can handle distinct modalities enables the training of globally superior AI models, compared to distinct models trained independently for each modality. Indeed, such multimodal models are deemed stronger since they are exposed to different perspectives of the same concepts.
For example, the concept of a bird may be better understood by learning from a mix of textual descriptions of birds, their vocalizations, images and videos. This idea of multimodal transformer models had been explored in previous research at Google, with Gemini being the first full-fledged commercial implementation of the approach.
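To make the idea concrete, here is a toy sketch of multimodal attention in plain NumPy. This is not Gemini's actual architecture; every dimension, the random stand-in weights and the single attention head are illustrative assumptions. The point it demonstrates is the one above: each modality is projected into a shared embedding space, and attention then operates over the concatenated sequence, so word tokens can attend to image patches and vice versa.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k=16):
    # Random projections stand in for learned query/key/value weights.
    d = tokens.shape[-1]
    Wq, Wk, Wv = (rng.normal(size=(d, d_k)) for _ in range(3))
    Q, K, V = tokens @ Wq, tokens @ Wk, tokens @ Wv
    scores = softmax(Q @ K.T / np.sqrt(d_k))  # every token attends to every token
    return scores @ V

# Each modality gets its own encoder projecting into a shared 32-dim space.
d_model = 32
text_tokens   = rng.normal(size=(5, 100))   # 5 word embeddings (e.g. "a red bird singing")
image_patches = rng.normal(size=(9, 300))   # 9 flattened image patches

W_text  = rng.normal(size=(100, d_model))
W_image = rng.normal(size=(300, d_model))

# Concatenate both modalities into one token sequence...
sequence = np.vstack([text_tokens @ W_text, image_patches @ W_image])

# ...so a single attention layer relates words to image patches and vice versa.
out = self_attention(sequence)
print(out.shape)  # (14, 16): one contextualized vector per token, whichever modality
```

In a real system the projections are learned and many such layers are stacked, but the mechanism that lets one model mix modalities is this shared-sequence attention.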
Such a model is seen as a step in the direction of stronger generalist AI models, also known as artificial general intelligence (AGI).
Given the rate at which AI is advancing, the expectation that AGI with superhuman capabilities will be designed in the near future is generating discussion, both in the research community and in society more broadly.
On one hand, some anticipate the risk of catastrophic events if a powerful AGI falls into the hands of ill-intentioned groups, and request that developments be slowed down.
Others claim that we are still very far from such actionable AGI: current approaches allow only a shallow modelling of intelligence, mimicking the data on which they are trained, and lack an effective world model (a detailed understanding of actual reality) required to achieve human-level intelligence.
On the other hand, one could argue that focusing the conversation on existential risk distracts attention from more immediate impacts of recent advances in AI, including perpetuating biases, producing incorrect and misleading content (which prompted Google to pause its Gemini image generator), increasing environmental impacts and entrenching the dominance of Big Tech.
The line to follow lies somewhere in between these considerations. We are still far from the advent of actionable AGI: additional breakthroughs are required, including stronger capacities for symbolic modelling and reasoning.
In the meantime, we should not be distracted from the important ethical and societal impacts of modern AI. These considerations are important and should be addressed by people with diverse expertise, spanning technological and social science backgrounds.
Nevertheless, although this is not a short-term threat, achieving AI with superhuman capacity is a matter of concern. It is important that we, collectively, become ready to responsibly manage the emergence of AGI when this significant milestone is reached.
Nvidia’s CEO Foresees Artificial General Intelligence Breakthrough Within Five Years – TradingView
Nvidia Corp (NVDA) CEO Jensen Huang suggested at a Stanford University economic forum that artificial general intelligence (AGI) could become a reality within the next five years, depending on how its achievement is defined.
With Nvidia at the forefront of producing the AI chips crucial for developing AI systems like OpenAI's ChatGPT, Huang's insights carry significant weight in the tech industry.
He proposed that measuring AGI by a computer's ability to pass a comprehensive array of human tests could lead to reaching this milestone relatively soon, Reuters reports.
Currently, AI systems can succeed in exams like the legal bar but face challenges in more specialized fields such as gastroenterology. However, Huang is optimistic that AI could also excel in these areas within five years.
Despite this optimism, Huang acknowledged that the broader definition of AGI, which encompasses a deeper understanding and replication of human cognitive processes, remains elusive.
This complexity is partly because there is still no consensus among scientists on precisely defining the workings of the human mind, making it a more challenging target for engineers who thrive on clear objectives.
Regarding the infrastructure required to support the burgeoning AI industry, Huang responded to queries about the necessity for more chip manufacturing facilities.
While agreeing on the need for additional fabs, he highlighted simultaneous improvements in chip efficiency and AI processing algorithms.
These advancements, he suggested, could amplify computing capabilities by a million times over the next decade, potentially moderating the sheer number of chips needed as each becomes more powerful and efficient.
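Huang's million-times figure is easier to evaluate as a compound rate. A quick back-of-the-envelope check (an illustrative calculation, not a figure from Nvidia) shows it corresponds to roughly a fourfold improvement per year:

```python
# Solve r**10 == 1_000_000 for the implied annual improvement rate r.
annual = 1_000_000 ** (1 / 10)
print(round(annual, 2))     # ~3.98, i.e. roughly 4x per year

# Sanity check: compounding that rate for a decade recovers the million-fold claim.
print(round(annual ** 10))  # 1000000
```

For comparison, sustained 4x-per-year gains would far outpace the roughly 2x-every-two-years cadence historically associated with Moore's law, which is why Huang attributes most of the gain to chips and algorithms improving together rather than to transistor scaling alone.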
Analysts have vouched for Nvidia's dominance in the $85 billion-plus accelerator market, particularly in data center sales, which are likely to exceed 85% of its total sales, marking significant growth.
Investors can gain exposure to Nvidia via the VanEck Semiconductor ETF (SMH) and the Global X Robotics & Artificial Intelligence ETF (BOTZ), which have gained 15-31% year-to-date.
Price Action: NVDA shares traded 2.11% higher at $840.19 in premarket trading at last check Monday.
Disclaimer: This content was partially produced with the help of AI tools and was reviewed and published by Benzinga editors.
© 2024 Benzinga.com. Benzinga does not provide investment advice. All rights reserved.
Why OpenAI’s nonprofit mission to build AGI is under fire again | The AI Beat – VentureBeat
In the new lawsuit filed by Elon Musk last week against OpenAI, its CEO Sam Altman, and its president Greg Brockman, the word "nonprofit" appears 17 times. "Board" comes up a whopping 62 times. "AGI"? 66 times.
The lawsuit's claims, which include breach of contract, breach of fiduciary duty, and unfair competition, all circle around the idea that OpenAI put profits and commercial interests in developing artificial general intelligence (AGI) ahead of the duty of its nonprofit arm (under the leadership of its nonprofit board) to protect the public good.
This is an issue, of course, that exploded after OpenAI's board suddenly fired Sam Altman on November 17, 2023, followed by massive blowback from investors including Microsoft, and hundreds of OpenAI employees posting heart emojis indicating they were on Altman's side. Altman was quickly reinstated, while several OpenAI board members got the boot.
Plenty of people have pointed out that Musk, as an OpenAI co-founder who is now competing with the company through his own startup X.ai, is hardly an objective party. But I'm far more interested in one important question: How did nerdy nonprofit governance issues tied to the rise of artificial general intelligence spark a legal firestorm?
Well, it all winds back to the beginning of OpenAI, which Musk's lawsuit lays out in more detail than we have previously seen: In 2015, Musk, Altman and Brockman joined forces to form a nonprofit AI lab that would try to catch up to Google in the race for AGI, developing it for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits.
But in 2023, the lawsuit claims, Altman, Brockman and OpenAI set the Founding Agreement "aflame" with flagrant breaches, including breach of the nonprofit board's fiduciary duty and breach of contract, covering what transpired during the days after Altman was fired by the nonprofit board on November 17, 2023, and subsequently reinstated.
Much of the controversy winds back to the fact that OpenAI isn't just any old nonprofit. In fact, I reported on OpenAI's unusual and complex nonprofit/capped-profit structure just a few days before Altman's firing.
In that piece, I pointed to the "Our structure" page on OpenAI's website, which says OpenAI's for-profit subsidiary is "fully controlled" by the OpenAI nonprofit. While the for-profit subsidiary is permitted to make and distribute profit, it is subject to the nonprofit's mission.
Elon Musk's lawsuit, however, shed even more light on the confusing alphabet soup of companies that are parties in the case. While OpenAI, Inc. is the nonprofit, OpenAI, LP; OpenAI LLC; OpenAI GP, LLC; OpenAI Opco, LLC; OpenAI Global, LLC; OAI Corporation, LLC; and OpenAI Holdings, LLC all appear to be for-profit subsidiaries.
As I wrote in November, according to OpenAI, the members of its nonprofit board of directors will determine when the company has attained AGI, which it defines as "a highly autonomous system that outperforms humans at most economically valuable work." Thanks to the for-profit arm being legally bound to pursue the nonprofit's mission, once the board decides AGI has been reached, such a system will be excluded from IP licenses and other commercial terms with Microsoft, which only apply to pre-AGI technology.
But as the very definition of AGI is far from agreed upon, what does it mean to have a half-dozen people deciding whether or not AGI has been reached? What do the timing and context of that possible future decision mean for OpenAI's biggest investor, Microsoft, which is now a non-voting member of the nonprofit board? Isn't that a massive conflict of interest?
Musk certainly seems to think so. The lawsuit says: "Mr. Altman and Mr. Brockman, in concert with Microsoft, exploited Microsoft's significant leverage over OpenAI, Inc. and forced the resignation of a majority of OpenAI, Inc.'s Board members, including Chief Scientist Ilya Sutskever. Mr. Altman was reinstated as CEO of OpenAI, Inc. on November 21. On information and belief, the new Board members were hand-picked by Mr. Altman and blessed by Microsoft. The new Board members lack substantial AI expertise and, on information and belief, are ill-equipped by design to make an independent determination of whether and when OpenAI has attained AGI, and hence when it has developed an algorithm that is outside the scope of Microsoft's license."
Musk is not the first to push back on OpenAI's nonprofit status. "I think the story that Musk tells in his complaint validates and deepens the case we're making in California," said Robert Weissman, president of Public Citizen, a nonprofit consumer advocacy organization that wrote a letter on January 9 requesting that the California Attorney General investigate OpenAI's nonprofit status. The letter raised concerns that OpenAI may have failed to carry out its nonprofit purposes and is instead acting under the effective control of its for-profit subsidiary affiliate.
And legal experts I spoke to say that Musk has a strong point in this regard: James Denaro, attorney and chief technologist at the Washington, DC-based CipherLaw, told me that Musk "does make a strong policy argument that if a company can launch as a non-profit working for the public benefit, collect pre-tax donations, and then transfer the IP into a for-profit venture, this would be a highly problematic paradigm shift for technology companies."
"Musk's lawsuit is not surprising because of the nonprofit vs. for-profit structural issues that have plagued OpenAI," added Anat Alon-Beck, associate professor at Case Western Reserve University School of Law, who focuses on corporate law and governance and recently wrote a paper about "shadow governance" by board observers at tech companies.
According to the paper: "It was not until November 2023 that mainstream media started paying more attention to the concept of board observers, after OpenAI, the corporate entity that brought the world ChatGPT, gave Microsoft a board observer seat following the drama in OpenAI's boardroom. But what the mainstream media did not explore in its coverage of the board observer concept was its seemingly less interesting nature as a non-voting board membership, which was an important element in the complex relationship between OpenAI and Microsoft. This signaled deepening ties between the two companies that also eventually got the attention of the DOJ and FTC, as well as the influential role of CVC [corporate venture capital] in funding and governing the research and development of OpenAI."
"This lawsuit was due because of OpenAI's structure," she said, adding that OpenAI should be worried.
"You should always be worried, because when you pick such a weird structure like OpenAI did, there's uncertainty," she said. "In law, when we're representing large companies, we want to have efficiency, low transaction costs and predictability. We don't know how a court's gonna look at fiduciary duties. We don't know, because a court hasn't decided on that. I'm sorry, but it's a bad structure. They could have accomplished [what they wanted] using a different type of structure."