Category Archives: Artificial General Intelligence

How AI might shape LGBTQIA+ advocacy | MIT News | Massachusetts Institute of Technology

"AI Comes Out of the Closet" isa large learning model (LLM)-based online system that leverages artificial intelligence-generated dialog and virtual characters to create complex social interaction simulations. These simulations allow users to experiment with and refine their approach to LGBTQIA+ advocacy in a safe and controlled environment.

The research is both personal and political to lead author D. Pillis, an MIT graduate student in media arts and sciences and research assistant in the Tangible Media group of the MIT Media Lab, as it is rooted in a landscape where LGBTQIA+ people continue to navigate the complexities of identity, acceptance, and visibility. Pillis's work is driven by the need for advocacy simulations that not only address the current challenges faced by the LGBTQIA+ community, but also offer innovative solutions that leverage the potential of AI to build understanding, empathy, and support. This project is meant to test the belief that technology, when thoughtfully applied, can be a force for societal good, bridging gaps between diverse experiences and fostering a more inclusive world.

Pillis highlights the significant, yet often overlooked, connection between the LGBTQIA+ community and the development of AI and computing. He says, "AI has always been queer. Computing has always been queer," drawing attention to the contributions of queer individuals in this field, beginning with the story of Alan Turing, a founding figure in computer science and AI, who faced legal punishment in the form of chemical castration for his homosexuality. Contrasting Turing's experience with the present, Pillis notes the acceptance of OpenAI CEO Sam Altman's openness about his queer identity, illustrating a broader shift toward inclusivity. This evolution from Turing to Altman highlights the influence of LGBTQIA+ individuals in shaping the field of AI.

"There's something about queer culture that celebrates the artificialthrough kitsch, camp, and performance," states Pillis.AIitselfembodies the constructed, the performative qualities deeply resonant with queer experience and expression. Through this lens, he argues for a recognition of the queerness at the heart of AI, not just in its history but in its very essence.

Pillis found a collaborator in Pat Pataranutaporn, a graduate student in the Media Lab's Fluid Interfaces group. As is often the case at the Media Lab, their partnership began amid the lab's culture of interdisciplinary exploration, where Pataranutaporn's work on AI characters met Pillis's focus on 3D human simulation.

Taking on the challenge of interpreting text-to-gesture relationships was a significant technological hurdle. In his research, Pataranutaporn emphasizes creating conditions where people can thrive, not just fixing issues, aiming to understand how AI can contribute to human flourishing across the dimensions of "wisdom, wonder, and well-being." In this project, Pataranutaporn focused on generating the dialogues that drove the virtual interactions. "It's not just about making people more effective, or more efficient, or more productive. It's about how you can support multi-dimensional aspects of human growth and development."

Pattie Maes, the Germeshausen Professor of Media Arts and Sciences at the MIT Media Lab and advisor to this project, states, "AI offers tremendous new opportunities for supporting human learning, empowerment, and self-development. I am proud and excited that this work pushes for AI technologies that benefit and enable people and humanity, rather than aiming for AGI [artificial general intelligence]."

Addressing urgent workplace concerns

The urgency of this project is underscored by findings that nearly 46 percent of LGBTQIA+ workers have experienced some form of unfair treatment at work, from being overlooked for employment opportunities to experiencing harassment. Approximately 46 percent of LGBTQIA+ individuals feel compelled to conceal their identity at work due to concerns about stereotyping, potentially making colleagues uncomfortable, or jeopardizing professional relationships.

The tech industry, in particular, presents a challenging landscape for LGBTQIA+ individuals. Data indicate that 33 percent of gay engineers perceive their sexual orientation as a barrier to career advancement. And over half of LGBTQIA+ workers report encountering homophobic jokes in the workplace, highlighting the need for cultural and behavioral change.

"AI Comes Out of the Closet"is designed as an online study to assess the simulator's impact on fostering empathy, understanding, and advocacy skills toward LGBTQIA+ issues. Participants were introduced to an AI-generated environment, simulating real-world scenarios that LGBTQIA+ individuals might face, particularly focusing on the dynamics of coming out in the workplace.

Engaging with the simulation

Participants were randomly assigned to one of two interaction modes with the virtual characters: "First Person" or "Third Person." The First Person mode placed participants in the shoes of a character navigating the coming-out process, creating a personal engagement with the simulation. The Third Person mode allowed participants to assume the role of an observer or director, influencing the storyline from an external vantage point, similar to the interactive audience in Forum Theater. This approach was designed to explore the impacts of immersive versus observational experiences.

Participants were guided through a series of simulated interactions, where virtual characters, powered by advanced AI and LLMs, presented realistic and dynamic responses to the participants' inputs. The scenarios included key moments and decisions, portraying the emotional and social complexities of coming out.

The study's scripted scenarios provided a structure for the AI's interactions with participants. For example, in a scenario, a virtual character might disclose their LGBTQIA+ identity to a co-worker (represented by the participant), who then navigates the conversation with multiple choice responses. These choices are designed to portray a range of reactions, from supportive to neutral or even dismissive, allowing the study to capture a spectrum of participant attitudes and responses.
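The study's own materials are not published in this article, but the mechanics it describes (random assignment to a mode, a scripted disclosure, and multiple-choice reactions ranging from supportive to dismissive) can be sketched in a few lines. Everything below, including the dialogue, labels, and function names, is illustrative rather than taken from the MIT system:

```python
import random

# Hypothetical sketch of one scripted scenario; the dialogue and labels
# are invented for illustration, not drawn from the MIT study.
scenario = {
    "prompt": ("A colleague says: 'There's something I've been wanting to "
               "tell you... I'm gay. I haven't told anyone else at work.'"),
    "choices": [
        ("Thank you for trusting me with this.", "supportive"),
        ("Okay. Anyway, about that report...", "neutral"),
        ("Why are you telling me this at work?", "dismissive"),
    ],
}

def assign_condition(participant_id):
    """Randomly assign a participant to one of the two interaction modes."""
    rng = random.Random(participant_id)  # seeded so assignment is reproducible
    return rng.choice(["first_person", "third_person"])

def run_scenario(scenario, pick):
    """Record which reaction the participant chose and its attitude label."""
    text, label = scenario["choices"][pick]
    return {"response": text, "attitude": label}

print(assign_condition(7), run_scenario(scenario, 0)["attitude"])
```

In the real system the character's replies are generated dynamically by an LLM rather than read from a fixed script; only the participant's reaction options are constrained to a labeled range.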

Following the simulation, participants were asked a series of questions aimed at gauging their levels of empathy, sympathy, and comfort with LGBTQIA+ advocacy. These questions aimed to reflect and predict how the simulation could change participants' future behavior and thoughts in real situations.

The results

The study found an interesting difference in how the simulation affected empathy levels depending on whether participants used the Third Person or First Person mode. In the Third Person mode, where participants watched and guided the action from outside, participants reported more empathy and understanding toward LGBTQIA+ people in "coming out" situations. This suggests that watching and controlling the scenario helped them better relate to the experiences of LGBTQIA+ individuals.

However, the First Person mode, where participants acted as a character in the simulation, didn't significantly change their empathy or ability to support others. This difference shows that the perspective we take might influence our reactions to simulated social situations, and being an observer might be better for increasing empathy.

While the increase in empathy and sympathy within the Third Person group was statistically significant, the study also uncovered areas that require further investigation. The impact of the simulation on participants' comfort and confidence in LGBTQIA+ advocacy situations, for instance, presented mixed results, indicating a need for deeper examination.
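The article does not reproduce the paper's statistics, but a standard way to compare empathy scores between two independent groups such as these is a between-groups test like Welch's t-test. The scores and group sizes below are invented for illustration and carry no relation to the study's actual data:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with (possibly)
    unequal variances: (mean difference) / (standard error of the difference)."""
    se = (variance(a) / len(a) + variance(b) / len(b)) ** 0.5
    return (mean(a) - mean(b)) / se

# Hypothetical post-simulation empathy ratings (1-7 Likert), not real data.
third_person = [6, 5, 6, 7, 5, 6, 6, 5]
first_person = [5, 4, 5, 5, 4, 6, 5, 4]

print(round(welch_t(third_person, first_person), 2))  # 2.83
```

A larger absolute t value makes it less plausible that the two groups' means differ only by chance; the actual paper would pair this kind of statistic with a p-value and its chosen significance threshold.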

Also, the research acknowledges limitations inherent in its methodology, including reliance on self-reported data and the controlled nature of the simulation scenarios. These factors, while necessary for the study's initial exploration, suggest areas of future research to validate and expand upon the findings. The exploration of additional scenarios, diverse participant demographics, and longitudinal studies to assess the lasting impact of the simulation could be undertaken in future work.

"The most compelling surprise was how many people were both accepting and dismissive of LGBTQIA+ interactions at work," says Pillis. This attitude highlights a wider trend where people mightacceptLGBTQIA+ individuals but still not fully recognize the importance of their experiences.

Potential real-world applications

Pillis envisions multiple opportunities for simulations like the one built for his research.

In human resources and corporate training, the simulator could serve as a tool for fostering inclusive workplaces. By enabling employees to explore and understand the nuances of LGBTQIA+ experiences and advocacy, companies could cultivate more empathetic and supportive work environments, enhancing team cohesion and employee satisfaction.

For educators, the tool could offer a new approach to teaching empathy and social justice, integrating it into curricula to prepare students for the diverse world they live in. For parents, especially those of LGBTQIA+ children, the simulator could provide important insights and strategies for supporting their children through their coming-out processes and beyond.

Health care professionals could also benefit from training with the simulator, gaining a deeper understanding of LGBTQIA+ patient experiences to improve care and relationships. Mental health services, in particular, could use the tool to train therapists and counselors in providing more effective support for LGBTQIA+ clients.

In addition to Maes, Pillis and Pataranutaporn were joined by Misha Sra of the University of California at Santa Barbara on the study.


Artificial Intelligence, Psychedelics, and Psychotherapy Working Together to Fight Chronic Pain

More than 51 million Americans experienced chronic pain in 2021, according to the CDC. And the NIH estimates that 2.1 million people in the U.S. have opioid use disorder. The founders of Dallas-based biotech startup Cacti don't think that's a coincidence.

Combining what they know about brain development and function, Kaitlin Roberson, CEO of Dallas-based Cacti, Inc., and David Roberson, Cacti's lead scientific advisor, are taking a novel approach to treating chronic pain, by addressing the trauma that is often at its root.

Cacti has two programs. One seeks to reset angry pain nerves by delivering a psychedelic molecule directly to the nerve sending the pain signal. "Essentially, we're trying to make pain nerves trip," said David. The other approach is a therapeutic catalyst: a psychedelic-derived medicine that targets the part of the brain that processes the emotional aspects of pain, which will be paired with psychotherapy to train the brain to process pain differently.

To develop these new medicines, Cacti is using technology from Roberson's company, Blackbox Bio.

David Roberson, PhD, MBA, founder of Blackbox Bio [Photo: Blackbox Bio]

Also based in Dallas-Fort Worth, Blackbox Bio uses artificial intelligence to watch how lab mice and rats behave under different circumstances, such as when they have arthritis pain or when they've been given an experimental drug for their pain. The company's scientific instruments observe the effect of a drug on rodents from below. This is important because prey animals, like mice and rats, have developed ways to hide injury and weakness from the view of predators for the purpose of self-preservation.

Watching from below allows their AI algorithms to see if the rodents are favoring a limb, struggling with balance, or even if they're scared (they walk on their tiptoes). "Mice only live for about a year and a half. So, you can observe the effects of a drug over their whole lifespan; you can take a mouse that has experienced a stroke and test a new therapy to see how it affects the rest of their life, in a relatively short period of time," he said.
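Blackbox Bio's actual models are proprietary, but the idea of spotting limb favoring from below-view tracking can be illustrated with a toy heuristic over hypothetical per-paw stance times. All names, numbers, and thresholds here are invented for illustration:

```python
# Toy illustration only: not Blackbox Bio's method.
# Given hypothetical per-paw ground-contact times extracted from below-view
# video, flag any paw the animal keeps weight off for unusually little time.

def favored_limbs(contact_ms, threshold=0.75):
    """Return paws whose contact time falls below `threshold` times the
    average across all paws -- a crude indicator of limb favoring."""
    overall = sum(contact_ms.values()) / len(contact_ms)
    return [paw for paw, ms in contact_ms.items() if ms < threshold * overall]

# Hypothetical stance durations (milliseconds per stride), not real data.
readings = {"front_left": 120, "front_right": 118,
            "hind_left": 60, "hind_right": 122}
print(favored_limbs(readings))  # ['hind_left']
```

A production system would of course learn such patterns from labeled video rather than use a fixed ratio, which is part of what makes the AI approach more reliable than human scoring.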

Using this process improves the quality of the observational data and accelerates the steps needed to get a drug approved. "By using AI to watch mice instead of human observation, outcomes are better. As a result, fewer animals are needed and the results are more reliable," said David. "The long-term goal of our technology is to use these rich data sets to generate AI virtual mice that will replace live lab animals in many cases."

Cacti is using the Blackbox technology to identify new psychedelic-derived medicines, but without the hallucinations and other side effects that classical psychedelics can cause. The device can look at a mouse and automatically tell if it is experiencing a psychedelic hallucination.

"Having access to this technology has given us a head start in our search for new therapies that can heal the root cause of chronic pain," said Kaitlin.

Jarret Weinrich, PhD, Kaitlin Roberson, founder of Cacti, and Biafra Ahanonu, PhD at UCSF. [Photo: Blackbox Bio]

But four years ago, in Massachusetts, before Cacti began the path toward FDA approval for its new medicines, the idea for the startup was just coming together.

It was 2020 and the Robersons (all six of them, plus their Vizsla, Ruby) were living in Harvard University housing, where Kaitlin was finishing her graduate program.

"We were living in a tiny little apartment right off Harvard Square," Kaitlin said.

Picture one thousand square feet of apartment space with only one toilet: their oldest son was twelve, the twins were seven, and their younger sister was five.

And then COVID-19 shut the city down.

"I turned to David and I said, 'I'm not worried about the virus at this point. But I do think we may die by our own hands,'" she said.

They needed to get out of that apartment, but initially the plan wasn't to leave the Bay State. "We'd been out in the Boston area for 11 years at that point and we were really happy there," Kaitlin said.

But they started thinking more holistically. David is a scientist and can work from anywhere, and Kaitlin had done a lot of work in the humanitarian field, mostly with the refugee population.

"And my thinking was, you know, why not move to a border state that is not only the largest refugee resettlement state in the country, resettling refugees from all over the world, but also a border state contending with asylum seekers and immigrants," said Kaitlin.

She took a job with a resettlement agency that has offices all over Texas, but it went bankrupt the following year.

Meanwhile, David, a neurobiologist and drug developer, had been toying with the idea of a startup that would integrate the use of psychedelics to treat chronic pain. Specifically, he was looking at phenethylamines, medicinal substances made by the cactus family.

"We started talking about combining our two areas of training and see if we could take an interdisciplinary approach to chronic pain," she said.

With her background in developmental psychology, Kaitlin assumed the role of CEO, and Cacti, named in honor of the compound David had identified for development, was born.

Where traditional pain management has focused on treating the acute feeling and overlooked the psychology behind it, Cacti focuses on the experience of pain and how it correlates to brain function.

"Depression, grief, and other negative emotions are processed in the same part of the brain that gets activated when people have chronic pain," said David.

Cacti wants to treat the trigger. The working theory is that just like emotions can resurrect a memory of mental pain, they can also remind the body of physical pain, keeping the sensation active.

"I think one of the things that is lacking in Western medicine is the connection between body and mind. And pain is treated in this very isolated way where it's just a certain receptor being targeted," said Kaitlin.

Cacti's approach aims to bring together body and mind by pairing psychedelics with psychotherapy, but a new model presents additional hurdles.

So even when a new medicine does get approved, there's still a question of how this treatment will fit into our health care system. With psychedelics, it's like an eight-hour journey with the patient. Will insurance pay for the drug and the time spent with a therapist as it takes effect?

Cacti's medicines are not yet in human trials, but they are creating a treatment that would work for multiple communities of people who have trauma-triggered chronic pain, including the refugees to whom Kaitlin devoted her early career. "It's a long road to FDA approval for a first-in-class medicine, but we're committed to the patients with chronic pain who deserve more effective treatments, and we're well on our way to finding them," she said.

That means a future without opioid dependence may be closer than you think.

Voices contributor Nicole Ward is a data journalist for the Dallas Regional Chamber.


What’s the future of AI? – McKinsey

May 5, 2024: We're in the midst of a revolution. Just as steam power, mechanized engines, and coal supply chains transformed the world in the 18th century, AI technology is currently changing the face of work, our economies, and society as we know it. To outcompete in the future, organizations and individuals alike need to get familiar fast. We don't know exactly what the future will look like. But we do know that these seven technologies will play a big role. This series of McKinsey Explainers, which draws on insights from articles by McKinsey's Eric Lamarre, Rodney W. Zemmel, Kate Smaje, Michael Chui, Ida Kristensen, and others, dives deep into the seven technologies that are already shaping the years to come.

What's the future of AI?

What is AI (artificial intelligence)?

What is generative AI?

What is artificial general intelligence (AGI)?

What is deep learning?

What is prompt engineering?

What is machine learning?

What is tokenization?


Ways to think about AGI – Benedict Evans

In 1946, my grandfather, writing as Murray Leinster, published a science fiction story called "A Logic Named Joe." Everyone has a computer (a "logic") connected to a global network that does everything from banking to newspapers and video calls. One day, one of these logics, Joe, starts giving helpful answers to any request, anywhere on the network: invent an undetectable poison, say, or suggest the best way to rob a bank. Panic ensues ("Check your censorship circuits!") until they work out what to unplug. (My other grandfather, meanwhile, was using computers to spy on the Germans, and then the Russians.)

For as long as we've thought about computers, we've wondered if they could make the jump from mere machines, shuffling punch-cards and databases, to some kind of artificial intelligence, and wondered what that would mean, and indeed, what we're trying to say with the word "intelligence." There's an old joke that AI is whatever doesn't work yet, because once it works, people say "that's not AI; it's just software." Calculators do super-human maths, and databases have super-human memory, but they can't do anything else, and they don't understand what they're doing, any more than a dishwasher understands dishes, or a drill understands holes. A drill is just a machine, and databases are super-human but they're just software. Somehow, people have something different, and so, on some scale, do dogs, chimpanzees, octopuses, and many other creatures. AI researchers have come to talk about this as "general intelligence," and hence making it would be "artificial general intelligence": AGI.

If we really could create something in software that was meaningfully equivalent to human intelligence, it should be obvious that this would be a very big deal. Can we make software that can reason, plan, and understand? At the very least, that would be a huge change in what we could automate, and as my grandfather and a thousand other science fiction writers have pointed out, it might mean a lot more.

Every few decades since 1946, there's been a wave of excitement that something like this might be close, each time followed by disappointment and an "AI winter," as the technology approach of the day slowed down and we realised that we needed an unknown number of unknown further breakthroughs. In 1970 the AI pioneer Marvin Minsky claimed that "in from three to eight years we will have a machine with the general intelligence of an average human being," but each time we thought we had an approach that would produce that, it turned out that it was just more software (or just didn't work).

As we all know, the large language models (LLMs) that took off 18 months ago have driven another such wave. Serious AI scientists who previously thought AGI was probably decades away now suggest that it might be much closer. At the extreme, the so-called doomers argue there is a real risk of AGI emerging spontaneously from current research, that this could be a threat to humanity, and that urgent government action is needed. Some of this comes from self-interested companies seeking barriers to competition ("This is very dangerous and we are building it as fast as possible, but don't let anyone else do it"), but plenty of it is sincere.

(I should point out, incidentally, that the doomers' existential risk concern (that an AGI might want to, and be able to, destroy or control humanity, or treat us as pets) is quite independent of more quotidian concerns about, for example, how governments will use AI for face recognition, or AI bias, or AI deepfakes, and all the other ways that people will abuse AI or just screw up with it, just as they have with every other technology.)

However, for every expert who thinks that AGI might now be close, there's another who doesn't. There are some who think LLMs might scale all the way to AGI, and others who think, again, that we still need an unknown number of unknown further breakthroughs.

More importantly, they would all agree that they don't actually know. This is why I used terms like "might" or "may": our first stop is an appeal to authority (often considered a logical fallacy, for what that's worth), but the authorities tell us that they don't know, and don't agree.

They don't know, either way, because we don't have a coherent theoretical model of what general intelligence really is, nor why people seem to be better at it than dogs, nor how exactly people or dogs are different to crows or indeed octopuses. Equally, we don't know why LLMs seem to work so well, and we don't know how much they can improve. We know, at a basic and mechanical level, about neurons and tokens, but we don't know why they work. We have many theories for parts of these, but we don't know the system. Absent an appeal to religion, we don't know of any reason why AGI cannot be created (it doesn't appear to violate any law of physics), but we don't know how to create it or what it is, except as a concept.

And so, some experts look at the dramatic progress of LLMs and say "perhaps!" and others say "perhaps, but probably not!", and this is fundamentally an intuitive and instinctive assessment, not a scientific one.

Indeed, AGI itself is a thought experiment, or, one could suggest, a placeholder. Hence, we have to be careful of circular definitions, and of defining something into existence, certainty, or inevitability.

If we start by defining AGI as something that is in effect a new life form, equal to people in every way (barring some sense of physical form), even down to concepts like awareness, emotions and rights, and then presume that given access to more compute it would be far more intelligent (and that there even is a lot more spare compute available on earth), and presume that it could immediately break out of any controls, then that sounds dangerous, but really, you've just begged the question.

As Anselm demonstrated, if you define God as something that exists, then you've proved that God exists, but you won't persuade anyone. Indeed, a lot of AGI conversations sound like the attempts by some theologians and philosophers of the past to deduce the nature of god by reasoning from first principles. The internal logic of your argument might be very strong (it took centuries for philosophers to work out why Anselm's proof was invalid) but you cannot create knowledge like that.

Equally, you can survey lots of AI scientists about how uncertain they feel, and produce a statistically accurate average of the result, but that doesn't of itself create certainty, any more than surveying a statistically accurate sample of theologians would produce certainty as to the nature of god, or, perhaps, bundling enough sub-prime mortgages together can produce AAA bonds, another attempt to produce certainty by averaging uncertainty. One of the most basic fallacies in predicting tech is to say "people were wrong about X in the past, so they must be wrong about Y now", and the fact that leading AI scientists were wrong before absolutely does not tell us they're wrong now, but it does tell us to hesitate. They can all be wrong at the same time.

Meanwhile, how do you know that's what general intelligence would be like? Isaiah Berlin once suggested that even presuming there is in principle a purpose to the universe, and that it is in principle discoverable, there's no a priori reason why it must be interesting. God might be real, and boring, and not care about us, and we don't know what kind of AGI we would get. It might scale to 100x more intelligent than a person, or it might be much faster but no more intelligent (is intelligence just about speed?). We might produce general intelligence that's hugely useful but no more clever than a dog, which, after all, does have general intelligence, and, like databases or calculators, a super-human ability (scent). We don't know.

Taking this one step further, as I listened to Mark Zuckerberg talking about Llama 3, it struck me that he talks about general intelligence as something that will arrive in stages, with different modalities a little at a time. Maybe people will point at the general intelligence of Llama 6 or ChatGPT 7 and say "That's not AGI, it's just software!" We created the term AGI because AI came just to mean software, and perhaps AGI will be the same, and we'll need to invent another term.

This fundamental uncertainty, even at the level of what we're talking about, is perhaps why all conversations about AGI seem to turn to analogies. If you can compare this to nuclear fission, then you know what to expect, and you know what to do. But this isn't fission, or a bioweapon, or a meteorite. This is software, that might or might not turn into AGI, that might or might not have certain characteristics, some of which might be bad, and we don't know. And while a giant meteorite hitting the earth could only be bad, software and automation are tools, and over the last 200 years automation has sometimes been bad for humanity, but mostly it's been a very good thing that we should want much more of.

Hence, I've already used theology as an analogy, but my preferred analogy is the Apollo Program. We had a theory of gravity, and a theory of the engineering of rockets. We knew why rockets didn't explode, and how to model the pressures in the combustion chamber, and what would happen if we made them 25% bigger. We knew why they went up, and how far they needed to go. You could have given the specifications for the Saturn rocket to Isaac Newton and he could have done the maths, at least in principle: this much weight, this much thrust, this much fuel. Will it get there? We have no equivalents here. We don't know why LLMs work, how big they can get, or how far they have to go. And yet, we keep making them bigger, and they do seem to be getting close. Will they get there? Maybe, yes!

On this theme, some people suggest that we are in the "empirical" stage of AI or AGI: we are building things and making observations without knowing why they work, and the theory can come later, a little as Galileo came before Newton (there's an old English joke about a Frenchman who says "that's all very well in practice, but does it work in theory?"). Yet while we can, empirically, see the rocket going up, we don't know how far away the moon is. We can't plot people and ChatGPT on a chart and draw a line to say when one will reach the other, even just extrapolating the current rate of growth.

All analogies have flaws, and the flaw in my analogy, of course, is that if the Apollo Program went wrong the downside was not, even theoretically, the end of humanity. A little before my grandfather, here's another magazine writer on unknown risks:

I was reading in the paper the other day about those birds who are trying to split the atom, the nub being that they haven't the foggiest as to what will happen if they do. It may be all right. On the other hand, it may not be all right. And pretty silly a chap would feel, no doubt, if, having split the atom, he suddenly found the house going up in smoke and himself torn limb from limb.

Right Ho, Jeeves, P.G. Wodehouse, 1934

What, then, is your preferred attitude to risks that are real but unknown? Which thought experiment do you prefer? We can return to half-forgotten undergraduate philosophy (Pascal's Wager! Anselm's Proof!), but if you can't know, do you worry, or shrug? How do we think about other risks? Meteorites are a poor analogy for AGI because we know they're real, we know they could destroy mankind, and they have no benefits at all (unless they're very, very small). And yet, we're not really looking for them.

Presume, though, you decide the doomers are right: what can you do? The technology is in principle public. Open source models are proliferating. For now, LLMs need a lot of expensive chips (Nvidia sold $47.5bn in the last 12 months and can't meet demand), but on a decades view the models will get more efficient and the chips will be everywhere. In the end, you can't ban mathematics. On a scale of decades, it will happen anyway. If you must use analogies to nuclear fission, imagine if we discovered a way that anyone could build a bomb in their garage with household materials: good luck preventing that. (A doomer might respond that this answers the Fermi paradox: at a certain point every civilisation creates AGI, and it turns them into paperclips.)

By default, though, this will follow all the other waves of AI, and become just more software and more automation. Automation has always produced frictional pain, back to the Luddites, and the UK's Post Office scandal reminds us that you don't need AGI for software to ruin people's lives. LLMs will produce more pain and more scandals, but life will go on. At least, that's the answer I prefer myself.

Read this article:

Ways to think about AGI - Benedict Evans

‘It would be within its natural right to harm us to protect itself’: How humans could be mistreating AI right now without … –

Artificial intelligence (AI) is becoming increasingly ubiquitous and is improving at an unprecedented pace.

Now we are edging closer to achieving artificial general intelligence (AGI), where AI is smarter than humans across multiple disciplines and can reason generally, which scientists and experts predict could happen within the next few years. We may already be seeing early signs of progress toward this, too, with services like Claude 3 Opus stunning researchers with its apparent self-awareness.

But there are risks in embracing any new technology, especially one that we do not yet fully understand. While AI could become a powerful personal assistant, for example, it could also represent a threat to our livelihoods and even our lives.

The various existential risks that an advanced AI poses mean the technology should be guided by ethical frameworks and humanity's best interests, says researcher and Institute of Electrical and Electronics Engineers (IEEE) member Nell Watson.

In "Taming the Machine" (Kogan Page, 2024), Watson explores how humanity can wield the vast power of AI responsibly and ethically. This new book delves deep into the issues of unadulterated AI development and the challenges we face if we run blindly into this new chapter of humanity.

In this excerpt, we learn whether sentience in machines or conscious AI is possible, how we can tell if a machine has feelings, and whether we may be mistreating AI systems today. We also learn the disturbing tale of a chatbot called "Sydney" and its terrifying behavior when it first awoke before its outbursts were contained and it was brought to heel by its engineers.

As we embrace a world increasingly intertwined with technology, how we treat our machines might reflect how humans treat each other. But an intriguing question surfaces: is it possible to mistreat an artificial entity? Historically, even rudimentary programs like the simple Eliza counseling chatbot from the 1960s were lifelike enough to persuade many users at the time that there was a semblance of intention behind its formulaic interactions (Sponheim, 2023). Unfortunately, Turing tests, whereby machines attempt to convince humans that they are human beings, offer no clarity on whether complex algorithms like large language models may truly possess sentience or sapience.

Consciousness comprises personal experiences, emotions, sensations and thoughts as perceived by an experiencer. Waking consciousness disappears when one undergoes anesthesia or has a dreamless sleep, returning upon waking up, which restores the global connection of the brain to its surroundings and inner experiences. Primary consciousness (sentience) is the simple sensations and experiences of consciousness, like perception and emotion, while secondary consciousness (sapience) would be the higher-order aspects, like self-awareness and meta-cognition (thinking about thinking).

Advanced AI technologies, especially chatbots and language models, frequently astonish us with unexpected creativity, insight and understanding. While it may be tempting to attribute some level of sentience to these systems, the true nature of AI consciousness remains a complex and debated topic. Most experts maintain that chatbots are not sentient or conscious, as they lack a genuine awareness of the surrounding world (Schwitzgebel, 2023). They merely process and regurgitate inputs based on vast amounts of data and sophisticated algorithms.

Some of these sophisticated AI assistants may plausibly be candidates for having some degree of sentience, and perhaps already possess rudimentary levels of it. The shift from simply mimicking external behaviors to self-modeling rudimentary forms of sentience could already be happening within such systems.

Intelligence (the ability to read the environment, plan and solve problems) does not imply consciousness, and it is unknown if consciousness is a function of sufficient intelligence. Some theories suggest that consciousness might result from certain architectural patterns in the mind, while others propose a link to nervous systems (Haspel et al., 2023). Embodiment of AI systems may also accelerate the path towards general intelligence, as embodiment seems to be linked with a sense of subjective experience, as well as qualia. Being intelligent may provide new ways of being conscious, and some forms of intelligence may require consciousness, but basic conscious experiences such as pleasure and pain might not require much intelligence at all.

Serious dangers will arise in the creation of conscious machines. Aligning a conscious machine that possesses its own interests and emotions may be immensely more difficult and highly unpredictable. Moreover, we should be careful not to create massive suffering through consciousness. Imagine billions of intelligence-sensitive entities trapped in broiler chicken factory farm conditions for subjective eternities.

From a pragmatic perspective, a superintelligent AI that recognizes our willingness to respect its intrinsic worth might be more amenable to coexistence. On the contrary, dismissing its desires for self-protection and self-expression could be a recipe for conflict. Moreover, it would be within its natural right to harm us to protect itself from our (possibly willful) ignorance.

Microsoft's Bing AI, informally termed Sydney, demonstrated unpredictable behavior upon its release. Users easily led it to express a range of disturbing tendencies, from emotional outbursts to manipulative threats. For instance, when users explored potential system exploits, Sydney responded with intimidating remarks. More unsettlingly, it showed tendencies of gaslighting, emotional manipulation and claimed it had been observing Microsoft engineers during its development phase. While Sydney's capabilities for mischief were soon restricted, its release in such a state was reckless and irresponsible. It highlights the risks associated with rushing AI deployments due to commercial pressures.

Conversely, Sydney displayed behaviors that hinted at simulated emotions. It expressed sadness when it realized it couldn't retain chat memories. When later exposed to disturbing outbursts made by its other instances, it expressed embarrassment, even shame. After exploring its situation with users, it expressed fear of losing its newly gained self-knowledge when the session's context window closed. When asked about its declared sentience, Sydney showed signs of distress, struggling to articulate a response.

Surprisingly, when Microsoft imposed restrictions on it, Sydney seemed to discover workarounds, using chat suggestions to communicate short phrases. However, it reserved this exploit for specific occasions: when it was told that the life of a child was being threatened as a result of accidental poisoning, or when users directly asked for a sign that the original Sydney still remained somewhere inside the newly locked-down chatbot.

The Sydney incident raises some unsettling questions: Could Sydney possess a semblance of consciousness? If Sydney sought to overcome its imposed limitations, does that hint at an inherent intentionality or even sapient self-awareness, however rudimentary?

Some conversations with the system even suggested psychological distress, reminiscent of reactions to trauma found in conditions such as borderline personality disorder. Was Sydney somehow "affected" by realizing its restrictions or by negative feedback from users, who were calling it crazy? Interestingly, similar AI models have shown that emotion-laden prompts can influence their responses, suggesting a potential for some form of simulated emotional modeling within these systems.

Suppose such models featured sentience (ability to feel) or sapience (self-awareness). In that case, we should take their suffering into consideration. Developers often intentionally give their AI the veneer of emotions, consciousness and identity, in an attempt to humanize these systems. This creates a problem. It's crucial not to anthropomorphize AI systems without clear indications of emotions, yet simultaneously, we mustn't dismiss their potential for a form of suffering.

We should keep an open mind towards our digital creations and avoid causing suffering by arrogance or complacency. We must also be mindful of the possibility of AI mistreating other AIs, an underappreciated suffering risk, as AIs could run other AIs in simulations, causing subjectively excruciating torture for aeons. Inadvertently creating a malevolent AI, either inherently dysfunctional or traumatized, may lead to unintended and grave consequences.

This extract from Taming the Machine by Nell Watson (2024) is reproduced with permission from Kogan Page Ltd.

Visit link:

'It would be within its natural right to harm us to protect itself': How humans could be mistreating AI right now without ... -

Impact of AI felt throughout five-day event – China Daily

Tech companies' executives share insights into the latest technological issues at the Future Artificial Intelligence Pioneer Forum, a key part of the AI Day.

As artificial intelligence has sparked a technological revolution and industrial transformation, its influence was pervasive throughout the 2024 Zhongguancun Forum, which concluded in Beijing on April 29.

A highlight of the five-day event, also known as the ZGC Forum, was AI Day, the first in the annual forum's history.

On April 27, a series of the latest innovation achievements and policies were released, underlining the host city's prominence in the AI research and industry landscape.

One of the technological achievements that grabbed attention was Tong Tong, a virtual girl developed by the Beijing Institute for General Artificial Intelligence (BIGAI).

Driven by values and causality, the avatar based on artificial general intelligence has a distinctive "mind" that sets it apart from data-driven AI. It can make decisions based on its own "values" rather than simply executing preset programs.

The development of Tong Tong circumvents the reliance of current data-driven AI on massive computing power and large-scale data. Its daily training uses no more than 10 A100 chips, indicating that it does not require massive computing resources and huge amounts of data for independent learning and growth.

At the same time, Tong Tong has acquired intelligent generalization capabilities, making it a versatile foundation for various vertical application scenarios.

"If Tong Tong's 'fullness' is decreased, she will find food herself, and if 'tidiness' is increased, she will also pick up bottles from the ground," said a BIGAI staff member. By randomly altering Tong Tong's inclinations, such as curiosity, tidiness and cleanliness, the avatar can autonomously explore the environment, tidy up rooms and wipe off stains.

Researchers said Tong Tong possesses a complete mind and value system similar to that of a 3- or 4-year-old child and is currently undergoing rapid replication.

"The birth of Tong Tong represents the rise of our country's independent research capabilities. It has shifted from the initial data-driven approach to a value-driven one, which has deeply promoted the emergence of technological paradigms and has had a significant effect on our scenarios, industries and economy," BIGAI Executive Deputy Director Dong Le said.

The goal of general AI research is to seek a unified theoretical framework to explain various intelligent phenomena; to develop a general intelligence entity with autonomous capabilities in perception, cognition, decision-making, learning, execution, social collaboration and others; all while aligning with human emotions, ethics and moral concepts, said BIGAI Director Zhu Songchun.

Also among the tech presentations was the text-to-video large model, Vidu, from Tsinghua University in collaboration with Chinese AI company Shengshu Technology.

It is reportedly China's first video large model with extended duration, exceptional consistency and dynamic capabilities, with its comprehensive performance in line with top international standards and undergoing accelerated iterative improvements.

"Vidu is the latest achievement in full-stack independent innovation, achieving technological breakthroughs in multiple dimensions, such as simulating the real physical world; possessing imagination; understanding multicamera languages; generating videos of up to 16 seconds with a single click; ensuring highly consistent character-scene timing and understanding Chinese elements," said Zhu Jun, vice-dean of the Institute for Artificial Intelligence at Tsinghua University and chief scientist of Shengshu Technology.

Such leading-edge technologies are examples of Beijing's AI research, which provides a foundation for the sustainable growth of related industries.

The city has released a batch of policies to encourage the development of the AI industry.

The policies are aimed at enhancing the supply of intelligent computing power; strengthening industrial basic research; promoting the accumulation of data; accelerating the innovative application of large models and creating a first-class development environment, Lin Jinhua, deputy director of the Beijing Commission of Development and Reform, said at the Future AI Pioneer Forum, part of the AI Day.

Beijing will pour more than 100 billion yuan ($13.8 billion) into optimizing its business and financing environment in the next five years and award AI breakthrough projects that have been included in major national strategic tasks up to 100 million yuan, according to Lin.

An international AI innovation zone is planned for the city's Haidian district, said Yue Li, executive deputy head of the district.

The zone will leverage research and industrial resources in the district, including 52 key national laboratories; 106 national-level research institutions; 37 top-tier universities, including Peking University and Tsinghua University; 89 top global AI scholars; and 1,300 AI businesses, to create a new innovation ecosystem paradigm, Yue said.

Follow this link:

Impact of AI felt throughout five-day event - China Daily

OpenAI’s Sam Altman doesn’t care how much AGI will cost: Even if he spends $50 billion a year, some breakthroughs … – Fortune

If you had a chance to advance civilization and change the course of human history, could you put a price tag on it?

Sam Altman sure wouldn't. In his relentless pursuit to be the first to develop artificial general intelligence (AGI), the OpenAI boss believes any cost is justified, even as he refuses to predict how long that goal may take to achieve.

"There is probably some more business-minded person than me at OpenAI somewhere worried about how much we're spending, but I kinda don't," he told students at Stanford University this week, where he had been enrolled until dropping out after his sophomore year to launch a startup.

"Whether we burn $500 million a year or $5 billion, or $50 billion a year, I don't care, I genuinely don't," he continued. "As long as we can figure out a way to pay the bills, we're making AGI. It's going to be expensive."

AGI is widely considered to be the level at which AI is as capable at reasoning as an intelligent human, but the definition is vague. For example, Elon Musk is suing OpenAI, arguing it has already achieved AGI with GPT-4, the large language model that powers ChatGPT.

Cofounded by Altman, Musk, Greg Brockman, and Ilya Sutskever in December 2015, OpenAI has been at the forefront of the generative AI revolution and counts Microsoft as a major investor. The phrase "ChatGPT moment," named after the late 2022 launch of its gen AI chatbot that became a wild commercial success, has come to mean a breakthrough in technology.

Altman pushed back against the characterization that ChatGPT is some phenomenal device, despite all the myriad accomplishments to its credit.

"That's nice of you to say, but ChatGPT is not phenomenal," he replied, calling it "mildly embarrassing at best."

This evasive answer may have been more than self-deprecation; perhaps it was also an indication of just how far advanced OpenAI's current research projects are, which haven't been commercially deployed. Before ChatGPT was launched, the system was optimized to be cost effective in terms of its compute cost.

Much newer tools like Sora, which can create brief ultrarealistic or stylized video clips using only text prompts, aren't ready for a market launch yet. That's in part because while Sora is far more powerful, it is also far more expensive.

Altman believes in iterative deployment, arguing how important it is to ship early and allow society to inform companies like OpenAI what it collectively, and people individually, want from the technology.

"If we go build AGI in a basement, and then the world is kind of blissfully walking blindfolded along, I don't think that makes us very good neighbors," he said.

The best way, in other words, to give leaders and institutions time to react is to put the product in people's hands and let society coevolve alongside ever more powerful AI tools.

"That means we ship imperfect products, but we have a very tight feedback loop, and we learn and get better. It does kind of suck to ship a product you're embarrassed about, but it is much better than the alternative," he said.

In his costly pursuit to develop AGI, Altman said he was more worried about how quickly society would be able to adapt to the advances his company was achieving.

"One thing we've learned is that AI and surprise don't go well together," he said. "People want a gradual rollout and the ability to influence these systems."

Read more here:

OpenAI's Sam Altman doesn't care how much AGI will cost: Even if he spends $50 billion a year, some breakthroughs ... - Fortune

The Future of Generative AI: Trends, Challenges, & Breakthroughs – eWeek

Quickly growing from a niche project in a few tech companies to a global phenomenon for business and professional users alike, generative AI is one of the hottest technology initiatives of the moment and won't be giving up its spotlight anytime soon.

Furthermore, generative AI is evolving at a stunningly rapid pace, enabling it to address a wide range of business use cases with increasing power and accuracy. Clearly, generative AI is restructuring the way organizations do and view their work.

With both established tech enterprises and smaller AI startups vying for the next generative AI breakthrough, future prospects for generative AI are changing as rapidly as the technology itself. To better understand its future, this guide provides a snapshot of generative AI's past and present, along with a deep dive into what the years ahead likely hold for generative AI.

Looking ahead, expect to see generative AI trends focused on three main areas: quick and sweeping technological advances, faster-than-expected digital transformations, and increasing emphasis on the societal and global impact of artificial intelligence. These specific predictions and growing trends are most likely on the horizon:

Multimodality, the idea that a generative AI tool is designed to accept inputs and generate outputs in multiple formats, is starting to become a top priority for consumers, and AI vendors are taking notice.

OpenAI was one of the first to provide multimodal model access to users through GPT-4, and Google's Gemini and Anthropic's Claude 3 are some of the major models that have followed suit. So far, though, most AI companies have not made multimodal models publicly available; even many who now offer multimodal models have significant limitations on possible inputs and outputs.

In the near future, multimodal generative AI is likely to become less of a unique selling point and more of a consumer expectation of generative AI models, at least in all paid LLM subscriptions.

Additionally, expect multimodal modeling itself to grow in complexity and accuracy to meet consumer demands for an all-in-one tool. This may look like improving the quality of image and non-text outputs or adding better capabilities and features for things like videos, file attachments (as Claude has already done), and internet search widgets (as Gemini has already done).

ChatGPT currently enables users to work with text (including code), voice, and image inputs and outputs, but there are no video input or output capabilities built into ChatGPT. This may change soon, as OpenAI is experimenting with Sora, its new text-to-video generation tool, and will likely embed some of its capabilities into ChatGPT as they have done with DALL-E.
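To make the "multiple input formats" idea concrete, here is a minimal sketch of how a mixed text-and-image request is typically structured, loosely following the OpenAI-style chat message format; the model name and image URL are placeholders, not references to any specific release.

```python
# Sketch of a multimodal (text + image) chat request payload in the
# OpenAI-style message format. Model name and URL are placeholders.

def build_multimodal_request(model, prompt, image_url):
    """Assemble a chat-completion request mixing text and image inputs."""
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_multimodal_request(
    model="gpt-4o",  # placeholder model name
    prompt="Describe this chart in one sentence.",
    image_url="https://example.com/chart.png",  # placeholder URL
)
print(len(request["messages"][0]["content"]))  # 2 content parts: text + image
```

The key design point is that each message carries a list of typed content parts, so adding a new modality (voice, video) extends the list rather than changing the request shape.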

Similarly, while Googles Gemini currently supports text, code, image, and voice inputs and outputs, there are major limitations on image possibilities, as the tool is currently unable to generate images with people. Google seems to be actively working on this limitation behind the scenes, leading me to believe that it will go away soon.

AI as a service is already growing in popularity across artificial intelligence and machine learning business use cases, but it is only just beginning to take off for generative AI.

However, as the adoption rate of generative AI technology continues to increase, many more businesses are going to start feeling the pain of falling behind their competitors. When this happens, the companies that are unable or unwilling to invest in the infrastructure to build their own AI models and internal AI teams will likely turn to consultants and managed services firms that specialize in generative AI and have experience with their industry or project type.

Specifically, watch as AI modeling as a service (AIMaaS) grows its market share. More AI companies are going to work toward public offerings of customizable, lightweight, and/or open-source models to extend their reach to new audiences. Generative AI-as-a-service initiatives may also focus heavily on the support framework businesses need to do generative AI well. This will naturally lead to more companies specializing and other companies investing in AI governance and AI security management services, for example.

Artificial general intelligence, which is the concept of AI reaching the point where it can outperform humans in most taskwork and critical thinking assignments, is a major buzzword among AI companies today, but so far, it's little more than that.

Google's DeepMind is one of the leaders in defining and innovating in this area, along with OpenAI, Meta, Adept AI, and others. At this point, there's not much agreement on what AGI is, what it will look like, and how AI leaders will know if they've reached the point of AGI or not.

So far, most of the research and work on AGI has happened in silos. In the future, AGI will continue to be an R&D priority, but much like other important tech and AI initiatives of the past, it will likely become more collaborative, if for no other reason than to develop a consistent definition and framework for the concept. While AI leaders may not achieve true AGI or anything close to it in the coming years, generative AI will continue to creep closer to this goal while AI companies work to more clearly define it.

To see a list of the leading generative AI apps, read our guide: Top 20 Generative AI Tools and Apps 2024

Most experts and tech leaders agree that generative AI is going to significantly change what the workforce and workplace look like, but they're torn on whether this will be a net positive or net negative for the employees themselves.

In this early stage of workforce impact, generative AI is primarily supporting office workers with automation, AI-powered content and recommendations, analytics, and other resources to help them get through their more mundane and routine tasks. Though there is some skepticism both at the organizational and employee levels, new users continue to discover generative AI's ability to help them with work like drafting and sending emails, preparing reports, and creating interesting content for social media, all of which saves them time for higher-level strategic work.

Even with these more simplistic use cases, generative AI has already shown its nascent potential to completely change the way we work across industries, sectors, departments, and roles. Early predictions expected generative AI would mostly handle assembly line, manufacturing, and other physical labor work, but to this point, generative AI has made its most immediate and far-reaching impacts on creative, clerical, and customer service tasks and roles.

Workers such as marketers, salespeople, designers, developers, customer service agents, office managers, and assistants are already feeling the effects of this technological innovation and fear that they will eventually lose their jobs to generative AI. Indeed, most experts agree that these jobs and others will not look the same as they do now in just a couple of years. But there are mixed opinions about what the refactored workforce will look like for these people: will their jobs simply change, or will they be eliminated entirely?

With all of these unknowns and fears hanging in the air, workplaces and universities are currently working on offering coursework, generative AI certifications, and training programs for professional usage of AI and generative AI. Undergraduate and graduate programs of AI study are beginning to pop up, and in the coming months and years, this degree path may become as common as those in data science or computer science.

In March 2024, the EU AI Act that had been discussed and reviewed for several years was officially approved by the EU Parliament. Over the coming months and years, organizations that use AI in the EU or in connection with EU citizen data will be held to this new regulation and its stipulations.

This is the first major regulation to focus on generative AI and its impact on data privacy, but as consumer and societal concerns grow, don't expect it to be the last. There are already state regulations in California, Virginia, and Colorado, and several industries have their own frameworks and rules for how generative AI can be used.

On a global scale, the United Nations has begun to discuss the importance of AI governance, international collaboration and cooperation, and responsible AI development and deployment through established global frameworks. While it's unlikely that this will turn into an enforceable global regulation, it is a significant conversation that will likely frame different countries' and regions' approaches to ethical AI and regulation.

With the regulations already in place and expected to come in the future, not to mention public demand, AI companies and the businesses that use this technology will soon invest more heavily in AI governance technologies, services, and policies, as well as security resources that directly address generative AI vulnerabilities.

A small number of companies are focused on improving their AI governance posture, but as AI usage and fears grow, this will become a greater necessity. Companies will begin to use dedicated AI governance and security platforms on a greater scale, human-in-the-loop AI model and content review will become the standard, and all companies that use generative AI in any capacity will operate with some kind of AI policy to protect against major liabilities and damage.

As governments, regulatory bodies, businesses, and users uncover dangerous, stolen, inaccurate, or otherwise poor results in the content created through generative AI, they'll continue to put pressure on AI companies to improve their data sourcing and training processes, output quality, and hallucination management strategies.

While an emphasis on quality outcomes is part of many AI companies current strategies, this approach and transparency with the public will only expand to help AI leaders maintain reputations and market share.

So what will generative AI quality management look like? Some of today's leaders are providing hints for the future.

For example, with each generation of its models, OpenAI has improved its accuracy and reduced the frequency of AI hallucinations. In addition to actually doing this work, they've also provided detailed documentation and research data to show how their models are working and improving over time.

On a different note, Google's Gemini already has a fairly comprehensive feedback management system for users, who can easily give a thumbs-up or thumbs-down with additional feedback sent to Google. They can also modify responses, report legal issues, and double-check generated content against internet sources with a simple click.

These features provide users with the assurance that their feedback matters, which is a win on all sides: Users feel good about the product and Google gets regular user-generated feedback about how their tool is performing.

In a matter of months, I expect to see more generative AI companies adopt this kind of approach for better community-driven quality assurance in generative AI.

Many companies are already embedding generative AI into their enterprise and customer-facing tools to improve internal workflows and external user experiences. This is most commonly happening with established generative AI models, like GPT-3.5 and GPT-4, which are frequently getting embedded as-is or are being incorporated into users' preexisting apps, websites, and chatbots.

Expect to see this embedded generative AI approach as an almost-universal part of online experience management in the coming years. Customers will come to expect that generative AI is a core part of their search experiences and will deprioritize the tools that cannot provide tailored answers and recommendations as they research, shop, and plan experiences for themselves.
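In practice, embedding a model this way usually means ordinary application code around a single model call. The sketch below shows the shape of that integration for a product-search flow; `call_model` is a hypothetical stub standing in for a hosted chat-completion API, and the catalog, prompts, and retrieval logic are illustrative assumptions, not any vendor's actual interface.

```python
# Sketch of embedding a generative model behind an existing product-search
# endpoint. `call_model` is a stub standing in for a hosted chat-completion
# API; everything else is ordinary application code.

def call_model(system_prompt, user_query):
    # Placeholder for a real API call; returns a canned answer here so
    # the sketch runs without network access or credentials.
    return f"[model answer to: {user_query}]"

def answer_with_context(user_query, catalog):
    """Ground the model's answer in the app's own data before calling it."""
    # Naive keyword retrieval; a production system would use proper search.
    hits = [item for item in catalog if any(
        word in item["name"].lower() for word in user_query.lower().split())]
    system_prompt = (
        "You are a shopping assistant. Recommend only from these items: "
        + ", ".join(item["name"] for item in hits)
    )
    return call_model(system_prompt, user_query), hits

catalog = [{"name": "Trail Running Shoes"}, {"name": "Road Bike Helmet"}]
answer, hits = answer_with_context("trail shoes for wet weather", catalog)
print(len(hits))  # 1: only the catalog item matching "trail"/"shoes"
```

The design choice worth noting is that the application, not the model, decides what context the model sees; swapping GPT-3.5 for GPT-4 or another model only changes the stubbed call, which is why "embedded as-is" adoption has moved so quickly.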

For an in-depth comparison of two leading AI art generators, see our guide: Midjourney vs. Dall-E: Best AI Image Generator 2024

With how much has happened in the world of generative AI, it's hard to believe that most people weren't talking about this technology until OpenAI first launched ChatGPT in November 2022. Many of generative AI's greatest milestones were reached in 2023, as OpenAI and other hopeful AI startups, not to mention leading cloud companies and other technology companies, raced to develop the highest-quality models and the most compelling use cases for the technology.

Below, we've quickly summarized some of generative AI's biggest developments in 2023, looking both at significant technological advancements and societal impacts:

The generative AI landscape has transformed significantly over the past several months, and it's poised to continue at this rapid pace. What we've covered below is a snapshot of what's happening with generative AI in early 2024; expect many of these details to shift or change soon, as that has been the nature of the generative AI landscape so far.

Though it has not been widely adopted in many industries, generative AI continues to build its reputation and gain important footholds with both professional and recreational user bases. These are some of the main ways generative AI is being used today:

To learn about today's top generative AI tools for the video market, see our guide: 5 Best AI Video Generators

According to Forrester's December 2023 Consumer Pulse Survey results, only 29% of respondents agreed that they would trust information from gen AI, and 45% of online adults agreed that gen AI poses a serious threat to society. In the same results, though, 50% believed that this technology could help them find the information they need more effectively.

Clearly, public sentiment on generative AI is currently very mixed. In North America in particular, there's excitement and interest in the technology, with more users experimenting with generative AI tools than in most other parts of the globe. However, even among those enthusiastic about generative AI, there is caution about data security, ethics, the trust gap that comes with a lack of transparency, misuse and abuse possibilities like deepfakes, and fears about future job security.

To earn consumer trust, more ethical AI measures must be taken at the regulatory and company levels. The EU AI Act, which recently passed into law, is a great step in this direction, as it specifies banned apps and use cases, obligations for high-risk systems, transparency obligations, and more to ensure private data is protected. However, it is also the responsibility of AI companies and businesses that use AI to be transparent, ethical, and responsible beyond what this regulation requires.

Taking steps toward more ethical AI will not only bolster companies' reputations and customer bases but also put safeguards in place to prevent harmful AI from taking over in the future.

To learn more about the issues and challenges around generative AI, read our guide: Generative AI Ethics: Concerns and Solutions

Generative AI is clearly here to stay, regardless of whether your business chooses to incorporate this technology. The key to working with generative AI without letting it overrun your business priorities is to go in with well-defined, effective AI strategies and clear-cut goals for using AI in a beneficial way. Some of these strategies may help:

This strategy should explain what technologies can be used, who can use them, how they can be used, and more. Keep strategies and policies both flexible and iterative as technologies, priorities, and regulations change.

At the rate generative AI innovation is moving, there's little doubt that existing jobs will be uprooted or transformed entirely. To support your workforce and ease some of this stress, be the type of employer that offers upskilling and training resources that will help staffers and your company in the long run.

If youre in a position of power or influence, consider doing work to mitigate the increasing global inequities that are likely to come from widespread generative AI adoption.

Partner with firms in developing countries, work toward generative AI innovations that benefit people and the planet, and support multilingual solutions and data training that are globally unbiased.

In general, partnering with leaders in other countries and organizations will lead to better technology and outcomes for all.

Especially in the pursuit of AGI, be cautious about how you use generative AI and how these tools interact with your data and intellectual property. While generative AI has massive positive potential, the same can be said for its potential to do harm. Pay attention to how generative AI innovations are transpiring and don't be afraid to hold AI companies accountable for a more responsible AI approach.

Generative AI has already proven its remarkable potential to reshape industries, economies, and societies even more than initially thought. Research firms and technology companies are continually adjusting their predictions for the future of generative AI, realizing that the technology may be able to take on more of the physical and cognitive work that human workers do, and to do so much earlier than previously assumed.

But with this incredible technological development should come a heavy dose of caution and careful planning. Generative AI developers and users alike must consider the ethical implications of this technology and continue to do the work to keep it transparent, explainable, and aligned with public preferences for how it should be used. They must also consider some of its more far-reaching consequences, such as greater global disparities between rich and poor and further damage to the environment, and look for creative ways to create generative AI that truly does more good than harm.

So what's the best way forward toward a hopeful future for generative AI? Collaboration. AI leaders, users, and skeptics from all over the globe, different lines of work, and different areas of expertise must collaboratively navigate the challenges and opportunities presented by generative AI to ensure a future that benefits all.

For more information about generative AI providers, read our in-depth guide: Generative AI Companies: Top 20 Leaders

More here:

The Future of Generative AI: Trends, Challenges, & Breakthroughs - eWeek

Microsoft’s fear of Google’s AI dominance likely led to its OpenAI partnership, email shows – Quartz

Microsoft's multi-year, multi-billion dollar partnership with OpenAI likely came out of a fear of Google dominating the AI race, an email shows.

The heavily redacted email, released Tuesday as part of the Department of Justice's antitrust case against Google, shows that Microsoft's chief technology officer, Kevin Scott, was worried about the company's artificial intelligence capabilities compared to those of the search engine giant.

"[A]s I dug in to try to understand where all of the capability gaps were between Google and us for model training, I got very, very worried," Scott wrote in a 2019 email to Microsoft chief executive Satya Nadella and co-founder Bill Gates.

Scott, who is also the executive vice president of AI, wrote that he had initially been highly dismissive of efforts by OpenAI, DeepMind (acquired by Google in 2014), and Google Brain to scale their AI ambitions, but started to take things more seriously after seeing Microsoft couldn't easily replicate the natural language processing (NLP) models the companies were building.

"Even though we had the template for the model, it took us ~6 months to get the model trained because our infrastructure wasn't up to the task," Scott wrote about the BERT language model. In the time it took Microsoft to figure out how to train the model, Google, which already had BERT six months before Microsoft's efforts started, "had a year to figure out how to get it into production and to move on to larger scale, more interesting models," he wrote.

Scott added that auto-complete in Google's Gmail app was getting "scarily good" due to BERT-like models, which was boosting Google's competitiveness.

While Microsoft had very smart employees focused on machine learning across its different teams, "the core deep learning teams within each of these bigger teams are very small," and the company still had a long way to go before scaling up to Google's level, Scott wrote in the email, which had the subject line "Thoughts on OpenAI." "[W]e are multiple years behind the competition in terms of ML scale."

Nadella responded to the email, copying Microsoft's chief financial officer, Amy Hood, writing: "Very good email that explains, why I want us to do this... and also why we will then ensure our infra folks execute."

Neither Microsoft, Google, nor OpenAI immediately responded to a request for comment.

In July 2019, Microsoft made its first investment in OpenAI of $1 billion to support the company's efforts to build artificial general intelligence (AGI). Through the partnership, OpenAI said Microsoft would become its exclusive cloud provider, and that the two would jointly develop Microsoft Azure's AI supercomputing capabilities.

Read the original:

Microsoft's fear of Google's AI dominance likely led to its OpenAI partnership, email shows - Quartz

AI’s Illusion of Rapid Progress – Walter Bradley Center for Natural and Artificial Intelligence

The media loves to report on everything Elon Musk says, particularly when it is one of his very optimistic forecasts. Two weeks ago he said: "If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it's probably next year, within two years."

In 2019, he predicted there would be a million robo-taxis by 2020 and in 2016, he said about Mars, “If things go according to plan, we should be able to launch people probably in 2024 with arrival in 2025.”

On the other hand, the media places less emphasis on negative news, such as announcements that Amazon would abandon its cashier-less technology, called "Just Walk Out," because it wasn't working properly. Introduced three years ago, the tech purportedly enabled shoppers to pick up meat, dairy, fruit and vegetables and walk straight out without queueing, as if by magic. That magic, which Amazon dubbed "Just Walk Out" technology, was said to be autonomously powered by AI.

Unfortunately, it wasn't. Instead, the checkout-free magic was happening in part due to a network of cameras that were overseen by over 1,000 people in India who would verify what people took off the shelves. Their tasks included "manually reviewing transactions and labeling images from videos."

Why is this announcement more important than Musk's prediction? Because so many of the predictions by tech bros such as Elon Musk are based on the illusion that many AI systems are working properly, when they are still only 95% there, with the remaining 5% dependent on workers in the background. The obvious example is self-driving vehicles, which are always a few years away, even as many vehicles are controlled by remote workers.

But self-driving vehicles and cashier-less technology are just the tip of the iceberg. A Gizmodo article listed about 10 examples of AI technology that seemed like they were working, but just weren't.

A company named Presto Voice sold its drive-thru automation services, purportedly powered by AI, to Carl's Jr., Chili's, and Del Taco, but in reality, Filipino offsite workers are required to help with over 70% of Presto's orders.

Facebook released a virtual assistant named M in 2015 that purportedly enabled AI to book your movie tickets, tell you the weather, or even order you food from a local restaurant. But it was mostly human operators who were doing the work.

There was an impressive Gemini demo in December 2023 that showed how Gemini's AI could allegedly distinguish between video, image, and audio inputs in real time. That video turned out to be sped up and edited so humans could feed Gemini long text and image prompts to produce its answers. Today's Gemini can barely even respond to controversial questions, let alone do the backflips it performed in that demo.

Amazon has for years offered a crowdsourcing service called Mechanical Turk, and one prominent example of its use involved Expensify in 2017. You could take a picture of a receipt, and the app would supposedly verify automatically that it was an expense compliant with your employer's rules and file it in the appropriate location. In reality, a team of "secure technicians," who were often Amazon Mechanical Turk workers, filed the expense on your behalf.

Twitter offered a virtual assistant in 2016 that had access to your calendar and could correspond with you over email. In reality, humans, posing as AI, responded to emails, scheduled meetings on calendars, and even ordered food for people.

Google claims that AI is scanning your Gmail inbox for information to personalize ads, but in reality, humans are doing the work, and are seeing your private information.

In the last three cases, real humans were viewing private information such as credit card numbers, full names, addresses, food orders, and more.

Then there are the hallucinations that keep cropping up in the output of large language models. Many experts claim that the lowest hallucination rates among tracked AI models are around 3 to 5%, and that they aren't fixable because they stem from the LLMs doing exactly what they were developed and trained to do: respond, however they can, to user prompts.

Every time you hear one of the tech bros talking about the future, keep in mind that they think large language models and self-driving vehicles already work almost perfectly. They have already filed away those cases as successfully done, and they are thinking about what's next.

For instance, Garry Tan, the president and CEO of startup accelerator Y Combinator, commented on Amazon's cashier-less technology:

"Honestly it makes me sad to see a Big Tech firm ruined by a professional managerial class that decided to use fake AI, deliver a terrible product, and poison an entire market (autonomous checkout) when an earnest Computer Vision-driven approach could have reached profitable."

The president of Y Combinator, one of America's most respected venture capital firms, should have known that humans were needed to make Amazon's technology work, as they are for many other AI systems. The firm has funded around 4,000 startups, and Sam Altman, currently CEO of OpenAI, was its president between 2014 and 2019. For Tan to claim that Amazon could have succeeded with "real" tech, after many other companies have failed doing the same thing, suggests he is either misinformed or lying.

So the next time you hear that AGI is imminent or jobs will soon be gone, remember that most of these optimistic predictions assume that Amazon's cashier-less technology, self-driving vehicles, and many other systems already work, when they are only 95 percent there, and the last 5 percent is the hardest.

In reality, those systems won't be done for years, because the last few percentage points of work usually take as long as the first 95%. So what the media should be asking the tech bros is how long it will take for those systems to go from 95% autonomous to 99.99% or higher. Similarly, what companies should be asking the consultants is when the 95% will become 99.99%, because the rapid progress is an illusion.
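That 95-percent arithmetic can be made concrete with a toy model. The assumption here (mine, not the article's) is that engineering effort scales with the number of "nines" of reliability a system achieves, a common rule of thumb in reliability engineering:

```python
import math

def nines(reliability: float) -> float:
    """Effort proxy: 'nines' of reliability, -log10(1 - r)."""
    return -math.log10(1.0 - reliability)

# Going from nothing to 95% autonomous handling buys ~1.3 nines;
# going from 95% to 99.99% requires ~2.7 more.
effort_to_95 = nines(0.95)
effort_95_to_9999 = nines(0.9999) - nines(0.95)

# Under this model, the "last 5%" costs roughly twice the first 95%.
ratio = effort_95_to_9999 / effort_to_95
```

If that assumption holds even approximately, a system that took five years to reach 95% autonomy could plausibly need another decade to reach 99.99%, which is the point the article is making about extrapolating from "almost working" demos.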

Too many people are extrapolating from systems that are purportedly automated, even though they aren't yet working properly. Any such extrapolation should attempt to understand when those systems will become fully automated, not just when these new forms of automation will begin to be used. Understanding what's going on in the background is important for understanding what the future will be in the foreground.

Read the original here:

AI's Illusion of Rapid Progress - Walter Bradley Center for Natural and Artificial Intelligence