The future, one year later – POLITICO

In this Oct. 30, 2008, photo, Electric Time Company employee Dan Lamoore adjusts the color on a 67-inch square LED color-changing clock at the plant in Medfield, Mass. | Elise Amendola/AP photo

When this newsletter launched exactly one year ago today, we promised to bring you a unique and uniquely useful look at questions that are addressed elsewhere as primarily business opportunities or technological challenges.

We had a few driving questions: What do policymakers need to know about world-changing technologies? What do tech leaders need to know about policy? Could we even get them talking to each other?

We're still working on that last one. But what we have brought you is a matter of public record: Scoops on potentially revolutionary technologies like Web3, a blow-by-blow account of the nascent governing structure of the metaverse and a procession of thinkers on the transformation AI is already causing, and how we might guide it.

Yeah, about that. In just a year, AI has gone from a powerful, exciting new technology still somewhat on the horizon to a culture-and-news-dominating, potentially even apocalyptic force. Change is always happening in the tech world, but sometimes it happens fast. And as the late Intel chief Gordon Moore might have said, that speed begets more speed, with seemingly no end in sight.

The future already looks a lot different than it looked in April 2022. And we don't expect it to look the same next year, or next month, or even next week. There's a lot of anxiety that AI in particular could change the future much, much faster than we're ready to address it.

With that in mind, I spoke yesterday with Peter Leyden, founder of the strategic foresight firm Reinvent Futures and author of "The Great Progression: 2025 to 2050," a firmly optimistic reading of how technology will change society in radical ways, about how the rise of generative AI has shaken up the landscape, and what he sees on the horizon from here.

"This is the kind of explosive moment that a lot of us were waiting for, but it wasn't quite clear when it was going to happen," Leyden said. "I've been through many, many different tech cycles around, say, crypto, that haven't gone down this path; this is the first one that is really on the scale of the introduction of the internet."

Tech giants have been spending big on AI for more than a decade, with Google's 2014 acquisition of DeepMind as a signal moment. Devoted sports viewers might remember one particularly inescapable 2010s-era commercial featuring the rapper Common proselytizing about AI on Microsoft's behalf. And there is, of course, a long cultural history of AI speculation, dating back to James Cameron's "Terminator" and beyond.

"There is a kind of parallel to the mid-'90s, where people had a very hard time understanding both the digitization of the world and the globalization of the world that were happening," Leyden said. "We're seeing a similar tipping point with generative AI."

From that perspective, the current generative AI boom begs for a historical analogue. How about America Online? It might seem hopelessly dated now, but like ChatGPT it was a ubiquitous product that brought a revolutionary technology into millions of homes. Twenty years from now, a semi-sophisticated chatbot might seem like the "You've got mail" of its time.

AI might seem a chiefly digital disruptor right now, but Leyden, who has a pretty good track record as a prognosticator, believes it could revolutionize real-world sectors from education to manufacturing to even housing.

"We've always thought those things are too expensive and can't be solved by technology, and we've finally now crossed the threshold to say, 'Oh wait, now we could apply technology to it,'" Leyden said. "The next five to 10 years are going to be amazing as this superpower starts to make its way through all these fields."

AI is also already powering innovation in other fields like energy, biotech and media. That's where the comparison with the internet as a whole, not just a single platform like social media, is especially salient. It's an engine, not the vehicle itself, and there are millions of designs yet to be built around it.

Largely for that reason, it's nearly impossible to predict what's going to happen next with AI. Maybe artificial general intelligence really will arise, posing an entirely different set of problems than the current policy concerns of regulating bias and accountability in decision-making algorithms. Or maybe it will start solving problems, wickedly difficult ones, like nuclear fusion and mortality and space survival.

To get back to our mission here: We can't know. What we can do is continue to cover the bleeding edge of these technologies as they exist now, and where the people in charge of building and governing them aim to steer their development and, by proxy, ours.

A message from TikTok:

TikTok is building systems tailor-made to address concerns around data security. What's more, these systems will be managed by a U.S.-based team specifically tasked with managing all access to U.S. user data and securing the TikTok platform. It's part of TikTok's commitment to securing personal data while still giving people the global TikTok experience they know and love. Learn more at http://usds.TikTok.com.

A pair of George Mason University technologists are recommending the government take a novel, deliberate approach to AI regulation.

In an essay for GMU's Mercatus Center publication Discourse, Matthew Mittelsteadt and Brent Skorup propose a framework they call "AI Progress" to help guide AI development and AI policy decisions. Chief among their big ideas: giving people the room to learn what the technology can and cannot do.

"People will need time to understand the limitations of this technology, when not to use it and when to trust it (or not)," they write near their conclusion. "These norms cannot be developed without giving people the leeway needed to learn and apply these innovations."

Health and tech heavy hitters are teaming up to make their own recommendations about how AI should be used specifically in the world of health care.

As POLITICO's Ben Leonard reported today for Pro subscribers, the Coalition for Health AI, which includes Google, Microsoft, Stanford and Johns Hopkins, released a "Blueprint for Trustworthy AI" that calls for high transparency and safety standards for the tech's use in medicine.

"We have a Wild West of algorithms," Michael Pencina, coalition co-founder and director of Duke AI Health, told Ben. "There's so much focus on development and technological progress and not enough attention to its value, quality, ethical principles or health equity implications."

The report also recommends heavy human monitoring of AI systems as they operate, and a high bar for data privacy and security. The coalition is holding a webinar this Wednesday to discuss its findings.

Stay in touch with the whole team: Ben Schreckinger ([emailprotected]); Derek Robertson ([emailprotected]); Mohar Chatterjee ([emailprotected]); Steve Heuser ([emailprotected]); and Benton Ives ([emailprotected]). Follow us @DigitalFuture on Twitter.

If you've had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.

A message from TikTok:

TikTok has partnered with a trusted, third-party U.S. cloud provider to keep all U.S. user data here on American soil. These are just some of the serious operational changes and investments TikTok has undertaken to ensure layers of protection and oversight. They're also a clear example of our commitment to protecting both personal data and the platform's integrity, while still allowing people to have the global experience they know and love. Learn more at http://usds.TikTok.com.
