Q&A with AI expert and DeepMind co-founder Mustafa Suleyman: ‘Things are about to be very different’

In a world filled with newly minted AI experts, Mustafa Suleyman is one of the OGs.

In 2010, Suleyman co-founded AI startup DeepMind, which we now know as Google (GOOG, GOOGL) DeepMind. Adjusted for inflation, Google's 2014 acquisition of DeepMind would reportedly be worth more than half a billion dollars today.

And Suleyman has kept going. In March 2022, after years at Google and VC firm Greylock Partners, Suleyman teamed up with Reid Hoffman, the LinkedIn co-founder, former COO at PayPal, and Silicon Valley legend, to found Inflection AI.

Inflection AI has already gained attention from investors, raising $1.3 billion this summer from big names such as Microsoft (MSFT) and Nvidia (NVDA).

Suleyman also found the time to write a book, called "The Coming Wave: Technology, Power, and the 21st Century's Greatest Dilemma."

Yahoo Finance spoke with Suleyman on Friday in the midst of his book tour about AI's present and future. Inflection AI's chatbot, Pi, also made an appearance.

The following conversation has been edited for length and clarity.

Mustafa Suleyman, co-founder and CEO of Inflection AI, in Toronto, Canada. (Piaras Ó Mídheach/Sportsfile for Collision via Getty Images)

You've been working in and thinking about AI for decades. Why write this book now?

In the last two or three years, we've been able to see the impact of multiple compounding exponentials. ... We're at an inflection point, and that's why I wrote the book. Because it's just quite obvious that things are about to be very different in the next five to 10 years.

Hundreds of millions of people will get access to intelligence, which is going to become a commodity. It's going to be cheap and easy to have an aide in your pocket. It'll be a friendly companion, but it'll also be a teacher. It'll be a coach. It'll be a scheduler, an organizer, a therapist, and an adviser. That's going to change everything.

I want to start by looking at the big picture. This AI wave is coming whether we like it or not, so how should we think about it?

We've always faced new technologies that, at first, seem really daunting and as though they're going to upend everything in a bad way. When airplanes first arrived, people thought they were completely insane, and that they'd always be really dangerous and unreliable. It took many years for them to gain widespread adoption and become safe enough that people felt comfortable on them. We're really just getting adjusted to what these new technologies can do, how they can help, what their risks are, and managing a new type of risk.


A spectator takes photos of a humanoid robot at the 2023 World Robot Conference in Beijing, China, Aug. 17, 2023. (Costfoto/NurPhoto via Getty Images)

So, in essence, what's a good outcome for AI?

A good outcome is one in which we manage the downsides and unleash this technology to deliver radical abundance. Food production, energy production, materials production, transportation, healthcare, and education are going to get radically cheaper in the next 30 years.

Unfold that trajectory over the next 20 to 30 years, and there's very good reason to be optimistic on all fronts. ... The real challenge for us is going to be how we manage abundance. How do we handle the distribution and govern this new power and make sure it remains accountable? But the upside is unbelievable.

What does the negative outcome for AI look like, where the risks run away from us?

The risk is one of the proliferation of power. We give up power to the nation-state, and in return, we ask the nation-state to provide for us. The challenge is we're now putting the power to have an impact in the hands of hundreds of millions of people.

In the last wave, over the past 20 years, we've put the power to broadcast in the hands of millions of people. Anyone can have a Twitter, Instagram, or TikTok account. Anyone can have a podcast or a blog. That's been an amazing achievement of civilization, having the freedom to speak without having access to traditional news institutions.

Now, that same trajectory is going to take place for the ability to act in the world. ... People are going to have more agency, more power, and more influence with less capital investment. That's the nature of our globalized world, but this is an additional fragility amplifier on top of that.

To that end, tell me about your take on AI regulation, which is a key part of this book, particularly the idea of "containment."

Containment is just a simple idea that says we should always have control over the things that we create, and they should always operate within predefined boundaries in repeatable, predictable ways. They should always be accountable to owners, creators, users, and ultimately democratic governments.

It's kind of just restating the obvious fact that technology shouldn't get out of our control. The whole effort here is to place guardrails and permanent constraints around technology so society collectively makes decisions about how it impacts our world. ... If we just leave this to the market, it's going to produce forces beyond our control, and that's the thing that needs to shift.

I'm both really compelled by the idea of containment and skeptical of it. Can you speak to why you believe it's possible to contain this evolution?

It's extremely tough. We haven't really done it before. When there's a technology that's useful and massively impactful in our world, it has always proliferated, it has always spread. It would be an unprecedented challenge, but if you look back through history, there are countless moments when we have confronted seemingly unprecedented challenges very successfully.

Roads are a great example; they're actually incredibly regulated. Every inch of that infrastructure is regulated, including the intensity of the light bulbs in lumens and the width of every lane, everything from driver training to airbags to the car itself to emissions. You can't drive through any field or sidewalk, right? It sounds obvious, but that's what we mean by containing technology in a way.

President Joe Biden speaks about artificial intelligence in the Roosevelt Room of the White House in Washington, D.C., on July 21, 2023, with leading AI experts, including Inflection AI's Mustafa Suleyman. (ANDREW CABALLERO-REYNOLDS/AFP via Getty Images)

You've identified a new Turing Test. Can you explain?

The new Turing Test is trying to measure not just what AI can say but what it can do. The modern Turing Test I've proposed asks whether an AI, given a $100,000 investment, can go off and make $1 million in three months.

The idea isn't to make money, but the goal is really to use money as a measure of progress towards a more powerful system. ... AI would have to go and invent a new product, use an image generator to produce a bunch of examples, produce a blueprint for its design, identify a manufacturer, call the manufacturer to negotiate, email back and forth on the contract, get the product shipped to customers, and on and on. That's a pretty challenging set of tasks.

Do you think AI is a job killer or a job creator?

It's going to change the way that we work so fundamentally that it's going to get rid of some roles. But it's also likely to create a lot of brand-new roles.

That's what we have to use taxation for. We have to manage the transition, so people will have time to find other work, train, learn new skills, and not get cut out of employment entirely. ... If you want to speed something up, then reduce the taxes. If you want to slow something down, then increase the taxes. So we do have a lever for containment, and that's adding friction with new taxes.

The final question I gave to Inflection AI's chatbot, Pi: Should humans be afraid of AI?

Pi: That's a really common question, and it's understandable why people might be worried about AI. After all, we've seen movies like Terminator or Ex Machina where AI goes rogue and harms humans. But I think it's important to keep in mind that these are just works of fiction. In reality, AI is developed by humans and is designed to be helpful, not harmful. Sure, there are risks associated with AI, but I think they can be mitigated with proper regulations and oversight. Ultimately, I think AI can be a force for good if we develop and use it responsibly.

An excerpt from Allie Garfinkle's conversation with Inflection AI's Pi chatbot, Sept. 9, 2023.

Allie Garfinkle is a Senior Tech Reporter at Yahoo Finance. Follow her on Twitter at @agarfinks and on LinkedIn.

