
Google at I/O 2023: We've been doing AI since before it was cool – Ars Technica

Google CEO Sundar Pichai explains some of the company's many new AI models.


That Google I/O show sure was something, wasn't it? It was a rip-roaring two hours of nonstop AI talk without a break. Bard, PaLM, Duet, Unicorn, Gecko, Gemini, Tailwind, Otter: there were so many cryptic AI code names thrown around that it was hard to keep track of what Google was talking about. A glossary really would have helped. The highlight was, of course, the hardware, but even that was pitched as an AI delivery system.

Google is in the midst of a total panic over the rise of OpenAI and its flagship product, ChatGPT, which has excited Wall Street and has the potential to steal some queries people would normally type into Google. It's an embarrassing situation for Google, especially for its CEO, Sundar Pichai, who has been pitching an "AI first" mantra for about seven years now and doesn't have much to show for it. Google has been trying to get consumers excited about AI for years, but people only seemed to start caring once someone other than Google took a swing at it.

Even more embarrassing is that the rise of ChatGPT was built on Google's technology. The "T" in "ChatGPT" stands for "transformer," a neural network technique Google invented in 2017 and never commercialized. OpenAI took Google's public research, built a product around it, and now uses that product to threaten Google.
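For readers curious what that "T" actually computes, here is a minimal sketch of the transformer's core operation, scaled dot-product self-attention, from Google's 2017 research. This is an illustrative NumPy toy with made-up dimensions, not Google's or OpenAI's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: rows become probability distributions.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention.

    X: (seq_len, d_model) token embeddings; Wq/Wk/Wv: learned projections.
    Each output row is a weighted mix of all value vectors, with weights
    given by query-key similarity, so every token can "look at" every other.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])   # pairwise token similarity
    weights = softmax(scores, axis=-1)        # each row sums to 1
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                    # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # prints: (4, 8)
```

Stacking layers of this operation, plus feed-forward blocks, is essentially what both Bard and ChatGPT are built on.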

In the months before I/O, Pichai issued a "Code Red" warning across the company, saying that ChatGPT was something Google needed to fight, and the company even dragged its co-founders, Larry Page and Sergey Brin, out of retirement to help. Years ago, Google panicked over Facebook and mandated that all employees build social features into Google's existing applications. And while that was a widely hated initiative that eventually failed, Google is dusting off that Google+ playbook to fight OpenAI. It has now reportedly mandated that all employees build some kind of AI feature into every Google product.

"Mandatory AI" is certainly what Google I/O felt like. Each section of the presentation had some division of Google give a book report on the New AI Thing they have been working on for the past six months. Google I/O felt more like a presentation for Google's managers rather than a show meant to excite developers and consumers. The AI directive led to ridiculous situations like Android's head of engineering going on stage to talk only about an AI-powered poop emoji wallpaper generator rather than any meaningful OS improvements.

Wall Street investors were apparently one group excited by Google I/O: the company's stock jumped 4 percent after the show. Maybe that was the point of all of this.

Would you believe Google Assistant got zero mentions at Google I/O? This show was exclusively about AI, and Google didn't mention its biggest AI product. Pichai's seminal "AI First" blog post from 2016 is about Google Assistant and features an image of Pichai in front of the Google Assistant logo. Google highlighted past AI projects like Gmail's Smart Reply and Smart Compose, Google Photos' Magic Eraser and AI-powered search, DeepMind's AlphaGo, and Google Lens, but Google Assistant could not manage a single mention. That seemed entirely on purpose.

Heck, Google introduced a product that was a follow-up to the Nest Hub Google Assistant smart display (the Pixel Tablet), and Google Assistant still couldn't get a mention. At one point, the presenter even said the Pixel Tablet had a "voice-activated helper."


Google's avoidance of Google Assistant at I/O seemed like a further deprioritization of what used to be its primary AI product. The Assistant's last major speaker/display product launch was two years ago in March 2021. Since then, Google shipped hardware that dropped Assistant support from Nest Wi-Fi and Fitbit, and it disabled Assistant commands on Waze. It lost a patent case to Sonos and stripped away key speaker functionality, like controlling the volume, from the cast feature. Assistant Driving Mode was shut down in 2022, and one of the Assistant's biggest features, reminders, is getting shut down in favor of Google Tasks Reminders.

The Pixel Tablet sure seemed like it was supposed to be a new Google Assistant device since it looks exactly like all of the other Google Assistant devices, but Google shipped it without a dedicated smart display interface. It seems like it was conceived when the Assistant was a viable product at Google and then shipped as leftover hardware when Assistant had fallen out of favor.

The Google Assistant team has reportedly been asked to stop working on its own product and focus on improving Bard. The Assistant hasn't really ever made money in its seven years; the hardware is all sold at cost, voice recognition servers are expensive to run, and Assistant doesn't have any viable post-sale revenue streams like ads. Anecdotally, it seems like the power for those voice recognition servers is being turned way down, as Assistant commands seem to take several seconds to process lately.

The Google I/O keynote transcript counts 19 uses of the word "responsible" in reference to Google's rollout of AI. Google is trying to draw some kind of distinction between itself and OpenAI, which got to where it is by being far more aggressive in its rollouts than Google. My favorite example of this was OpenAI's GPT-4 arrival, which came with the surprise announcement that it had been running as a beta on production Bing servers for weeks.

Google's sudden lip service toward responsible AI use seems to run counter to its actions. In 2021 Google's AI division famously pushed out AI ethics co-head Dr. Timnit Gebru for criticizing Google's diversity efforts and trying to publish AI research that didn't cast Google in a positive-enough light. Google then fired its other AI ethics co-head, Margaret Mitchell, for writing an open letter supportive of Gebru and co-authoring the contentious research paper.

In the run-up to the rushed launch of Bard, Google's answer to ChatGPT, a Bloomberg report claims that Google's AI ethics team was "disempowered and demoralized" so Google could get Bard out the door. Employees testing the chatbot said some of the answers they received were wrong and dangerous, but employees who raised safety concerns were told they were "getting in the way" of Google's "real work." The Bloomberg report says AI ethics reviews are "almost entirely voluntary" at Google.

Google has seemingly already second-guessed its all-AI, all-the-time strategy. A Business Insider report details a post-I/O company meeting where one employee's question to Pichai nailed my feelings after Google I/O: "Many AI goals across the company focus on promoting AI for its own sake, rather than for some underlying benefit." The employee asked how Google will "provide value with AI rather than chasing it for its own sake."

Pichai reportedly replied that when Googlers' current OKRs (objectives and key results, basically your goals as an employee) were written, it was during an "inflection point" around AI. Now that I/O is over, Pichai said, "I think one of the things the teams are all doing post-I/O is re-looking. Normally we don't do this, but we are re-looking at the OKRs and adapting it for the rest of the year, and I think you will see some of the deeper goals reflected, and we'll make those changes over the upcoming days and weeks."

So the AI "Code Red" was in January, and now it's May, and Google's priorities are already being reshuffled? That tracks with Google's history.


Jack Gao: Prepare for profound AI-driven transformations

On March 14, Dr. Jack Gao, CEO of Smart Cinema and former president of Microsoft China, was left amazed after watching the livestream of GPT-4's press conference. He was stunned by what the chatbot is able to do.

Jack Gao delivers a keynote speech on artificial intelligence during a summit forum before the 14th Chinese Nebula Awards gala in Guanghan, Sichuan province, May 13, 2023. [Photo courtesy of EV/SFM]

"I was so excited and couldn't calm down for a whole week. During that time, Baidu also released its own Ernie Bot, and Alibaba followed with Tongyi Qianwen. There are more AI bots to come, such as the one from Google," Gao said, adding that he later engaged in conversations with insiders from various industries to get a clear understanding of the bigger picture.

Last weekend, he discussed this topic at China's top sci-fi event, the 14th Chinese Nebula Awards, where he also delivered a keynote speech and sought feedback from China's most prominent sci-fi writers, who have frequently envisioned the future and portrayed artificial intelligence (AI) in their novels.

"The era of AI has arrived. I have an unprecedented feeling knowing that it can pass the lawyers' exam with high scores and even possess a common sense that was previously exclusive to humans," Gao said. "When AI becomes another intelligent brain in our lives and has the potential to develop consciousness for the benefit of the entire human race, its intelligence will expand infinitely."

The profound changes will come quickly, in his vision. AI could directly handle many aspects of human life, from translation and communication to medical diagnoses, lawsuits, and creative jobs. This could bring greater efficiency and upgrades to current industries, but it also raises concerns.

Some have already recognized the threats, like Hollywood scriptwriters who went on strike in early May due to concerns about AI "generative" text and image tools impacting their jobs and incomes. Tech giants have also laid off numerous employees after embracing AI technologies. Geoffrey Hinton, widely regarded as the "godfather" of AI, departed from Google and raised warnings about the potential dangers of AI chatbots, emphasizing their potential to surpass human intelligence in the near future. Hinton also cautioned against the potential misuse of AI by "bad actors" that could have harmful consequences for society.

"When I was a student 40 years ago, our wildest imaginations couldn't compare to what we have today. Technology has fundamentally transformed our lives," Gao said. He has an awe-inspiring profile in both the tech and media industries, having served as a top executive at Autodesk Inc., Microsoft, News Corp., and Dalian Wanda Group. He has witnessed numerous significant technological advancements over the decades, from PCs and the internet to big data, which have brought about great changes to the world.

When Google's AlphaGo AI defeated the world's number one Go player, Ke Jie, people began to recognize the power of AI, although they initially thought its impact was limited to the realm of Go. "But what if there's an 'AlphaGo' in every industry?" Gao mused. "What can humans do, and how can they prevail? Imagine a scenario where you have your own 'AlphaGo' while others do not. This is the reality we are facing, and we must take it seriously."

He believes that the digital gap between machines and humans has been bridged so that AI bots can interact with humans through chat interfaces without the need for programmers to write code. He also believes that when large language models reach a sufficient scale, new chemical sparks will ignite, leading to new miracles of some kind. "You have to understand that language is the foundational layer and operating system of human civilization and ecology."

"Based on my experience using and learning from AI bots, I have also noticed an important factor: the quality of answers from chatbots depends on how you ask them. Our way of thinking will shift towards seeking answers because there are countless valuable answers in the world waiting for good questions," he said. He added that people should prepare themselves with optimism to understand, utilize, explore, and harness AI, making it a beneficial and integral part of their lives.

Gao's speech caused a stir at the sci-fi convention. After he finished, many sci-fi writers, including eminent figures like Han Song and He Xi, approached him to discuss further. "They told me that after listening to my speech, they had a more personal understanding of how AI will truly impact our lives and work. The technology is already here, and we have no choice but to actively explore and embrace it, adapting to the changes."


Terence Tao Leads White House’s Generative AI Working Group … – Pandaily

On May 13th, Terence Tao, an award-winning Australian-born Chinese mathematician, announced that he and physicist Laura Greene will co-chair a working group of the President's Council of Advisors on Science and Technology (PCAST) studying the impacts of generative artificial intelligence technology. The group will hold a public meeting during the PCAST conference on May 19th, where Demis Hassabis, founder of DeepMind and creator of AlphaGo, as well as Stanford University professor Fei-Fei Li, among others, will give speeches.

According to Terence Tao's blog, the group mainly researches the impact of generative AI technology in scientific and social fields, including text-based large language models such as ChatGPT, image generators like DALL-E 2 and Midjourney, as well as scientific application models for protein design or weather forecasting. It is worth mentioning that Lisa Su, CEO of AMD, and Phil Venables, Chief Information Security Officer of Google Cloud, are also members of this working group.

According to an article posted on the official website of the White House, PCAST develops evidence-based recommendations for the President on matters involving science, technology, and innovation policy, as well as on matters involving scientific and technological information that is needed to inform policy affecting the economy, worker empowerment, education, energy, the environment, public health, national and homeland security, racial equity, and other topics.


After the emergence of ChatGPT, top mathematicians like Terence Tao also paid great attention to it and began exploring how artificial intelligence could help them complete their work. In a Nature article titled "How will AI change mathematics? Rise of chatbots highlights discussion," Andrew Granville, a number theorist at McGill University in Canada, said that they are studying a very specific question: will machines change mathematics? Mathematician Kevin Buzzard agreed, saying that even Fields Medal winners and other very famous mathematicians are now interested in this field, which shows that it has become popular in an unprecedented way.

Previously, Terence Tao wrote on the decentralized social network Mastodon: "Today was the first day that I could definitively say that #GPT4 has saved me a significant amount of tedious work." In his experimentation, Tao discovered many hidden features of ChatGPT, such as searching for formulas, parsing documents with code formatting, rewriting sentences in academic papers, and sometimes even semantically searching incomplete math problems to generate hints.


Purdue President Chiang to grads: Let Boilermakers lead in … – Purdue University

Purdue President Mung Chiang made these remarks during the university's Spring Commencement ceremonies May 12-14.


Today is not just any graduation but the commencement at a special place called Purdue, with a history that is rich and distinct and an accelerating momentum of excellence at scale. There is nothing more exciting than to see thousands of Boilermakers celebrate a milestone in your lives with those who have supported you. And this commencement has a special meaning to me as my first in the new role serving our university.

President Emeritus Mitch Daniels gave 10 commencement speeches, each an original treatise, throughout the Daniels Decade. I was tempted to simply ask generative AI engines to write this one for me. But I thought it'd be more fun to say a few thematic words by a human for fellow humans before that becomes unfashionable.

AI at Purdue

Sometime back in the mid-20th century, AI was a hot topic for a while. Now it is again; so hot that no computation is too basic to self-anoint as AI and no challenge seems too grand to be out of its reach. But the more you know how tools such as machine learning work, the less mysterious they become.

For the moment, let's assume that AI will finally be transformational to every industry and to everyone: changing how we live, shaping what we believe in, displacing jobs. And disrupting education.

Well, after IBM's Deep Blue beat the world champion, we still play chess. After calculators, children are still taught how to add numbers. Human beings learn and do things not just as survival skills, but also for fun, or as a training of our mind.

That doesnt mean we dont adapt. Once calculators became prevalent, elementary schools pivoted to translating real-world problems into math formulations rather than training for the speed of adding numbers. Once online search became widely available, colleges taught students how to properly cite online sources.

Some have explored banning AI in education. That would be hard to enforce; it's also unhealthy, as students need to function in an AI-infused workplace upon graduation. We would rather Purdue evolve: teaching AI and teaching with AI.

That's why Purdue offers multiple major and minor degrees, fellowships and scholarships in AI and in its applications. Some will be offered as affordable online credentials, so please consider coming back to get another Purdue degree and enjoy more final exams!

And that's why Purdue will explore the best way to use AI in serving our students: to streamline processes and enhance efficiency so that individualized experiences can be offered at scale in West Lafayette. Machines free up human time so that we can do less and watch Netflix on a couch, or we can do more and create more with the time saved.

Pausing AI research is even less practical, not least because AI is not a well-defined, clearly demarcated area in isolation. All universities and companies around the world would have to stop any research that involves math. My Ph.D. co-advisor, Professor Tom Cover, did groundbreaking work in the 1960s on neural networks and statistics, not realizing those would later become useful in what others call AI. We would rather Purdue advance AI research with a nuanced appreciation of the pitfalls, limitations and unintended consequences of its deployment.

That's why Purdue just launched the universitywide Institute of Physical AI. Our faculty are the leaders at the intersection of the virtual and the physical, where the bytes of AI meet the atoms of what we grow, make and move, from agriculture tech to personalized health care. Some of Purdue's experts develop AI to check and contain AI through privacy-preserving cybersecurity and fake-video detection.

Limitations and Limits

As it stands today, AI is good at following rules, not breaking rules; reinforcing patterns, not creating patterns; mimicking what's given, not imagining beyond their combinations. Even individualization algorithms, ironically, work by first grouping many individuals into a small number of similarity classes.
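That grouping step can be made concrete with a toy clustering pass. Below is a generic k-means sketch in Python, an illustration of the general idea rather than any particular product's personalization code: many "individuals" are reduced to a handful of similarity classes, and each person is then treated as a member of a class.

```python
import numpy as np

def kmeans(points, k=3, iters=20, seed=0):
    """Toy k-means: group many 'individuals' into k similarity classes."""
    rng = np.random.default_rng(seed)
    # Start centroids at k randomly chosen points.
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(points[:, None] - centroids[None], axis=-1)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

# 150 synthetic "users" drawn from three loose blobs of preferences.
rng = np.random.default_rng(1)
users = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0, 3, 6)])
labels, centroids = kmeans(users, k=3)
print(labels.shape)
```

Everything downstream, such as recommendations or a personalized feed, is then computed per class, which is exactly the irony the speech points at: "individualization" begins by deciding which group you belong to.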

At least for now, the more we advance artificial intelligence, the more we marvel at human intelligence. Deep Blue vs. Kasparov, or AlphaGo vs. Lee, were not fair comparisons: the machines used four orders of magnitude more energy per second! Both the biological mechanisms that generate energy from food and the amount of work we do per joule must be the envy of machines. Can AI be as energy efficient as it is fast? Can it take in energy sources other than electricity? When someday it does, and when combined with sensors and robotics that touch the physical world, you'd have to wonder about the fundamental differences between humans and machines.

Can AI, one day, make AI? And stop AI?

Can AI laugh, cry and dream? Can it contain multitudes and contradictions like Walt Whitman?

Will AI be aware of itself, and will it have a soul, however awareness and souls are defined? Will it also be T.S. Eliot's "infinitely suffering things"?

Where does an AI life start and stop anyway? What constitutes the identity of one AI, and how can it live without having to die? Indeed, if the memory and logic chips sustain and merge, is AI all collectively one life? And if AI duplicates a human's mind and memory, is that human life going to stay on forever, too?

These questions will stay hypothetical until breakthroughs arrive that are more architectural than just compounding silicon chips' speed and feeding exploding data to black-box algorithms.

However, if given sufficient time and as a matter of time, some of these questions are bound to eventually become real, what then is uniquely human? What would still be artificial about artificial intelligence? Some of that eventuality might, with bumps and twists, show up faster than we had thought. Perhaps in your generation!

Freedoms and Rights

If Boilermakers must face these questions, perhaps it does less harm to consider off switches controlled by individual citizens than a ban by some bureaucracy. May the medicine be no worse than the disease, and regulations by government agencies not be granular or static, for governments don't have a track record of understanding fast-changing technologies, let alone micromanaging them. Some might even argue that government access to data and arbitration of algorithms counts among the most worrisome uses of AI.

What we need are basic guardrails of accountability, in data usage compensation, intellectual property rights and legal liability.

We need skepticism in scrutinizing the dependence of AI engines' output on their input. Data tends to feed on itself, and machines often give humans what we want to see.

We need to preserve dissent even when it's inconvenient, and avoid philosopher kings dressed in AI even when the alternative appears inefficient.

We need entrepreneurs in free markets to invent competing AI systems and independently maximize choices outside the big tech oligopoly. Some of them will invent ways to break big data.

Where, when and how is data collected, stored and used? Like many technologies, AI is born neutral but suffers the natural tendency of being abused, especially in the name of the collective good. Today's most urgent and gravest nightmare of AI is its abuse by authoritarian regimes to irreversibly lock in the Orwellian 1984: the surveillance state oppressing rights, aided and abetted by AI three-quarters of a century after that bleak prophecy.

We need verifiable principles of individual rights, reflecting the Constitution of our country, in the age of data and machines around the globe. For example, MOTA:

My worst fear about AI is that it shrinks individual freedom. Our best hope for AI is that it advances individual freedom. That it presents more options, not more homogeneity. That the freedom to choose and free will still prevail.

Let us preserve the rights that survived other alarming headlines in centuries past.

Let our students sharpen the ability to doubt, debate and dissent.

Let a university, like Purdue, present the vista of intellectual conflicts and the toil of critical thinking.


Now, about asking AI engines to write this speech. We did ask one to write a commencement speech for the president of Purdue University on the topic of AI, after I finished drafting my own.

I'm probably not intelligent enough, or didn't trust the circular clichés on the web, but what I wrote had almost no overlap with what AI did. I might be biased, but the AI version reads like a B- high school essay, a grammatically correct synthesis with little specificity, originality or humor. It's so toxically generic that even adding a human in the loop to build on it proved futile. It's so boring that you would have fallen asleep even faster than you just did. By the way, you can wake up now: I'm wrapping up at last.

Maybe most commencement speeches and strategic plans sound about the same: Universities have made it too easy for language models! Maybe AI can remind us to try and be a little less boring in what we say and how we think. Maybe bots can murmur "Don't you ChatGPT me" whenever we're just echoing in an ever smaller and louder echo chamber, down to the templated syntax and tired words. Smarter AI might lead to more interesting humans.

Well, there were a few words of overlap between my draft and AI's. So, here's from both some bytes living in a chip and a human Boilermaker to you all on this 2023 Purdue Spring Commencement: Congratulations, and Boiler Up!


The circle of life works for AI, too – BusinessLine

ChatGPT has almost colonised discussions on artificial intelligence. High school children are excited about getting their homework done by ChatGPT!

But such excitement over new technology is not new. Just a few years ago, there was similar excitement about AI: AlphaGo at the game of Go, IBM's Watson on the American quiz television show Jeopardy!, and Deep Blue at chess. AI was seen as an ultimate technology that would improve human life and reduce suffering soon.

But as with any other journey, the AI path has also been full of challenges and failures. Many tech companies have seen initiatives fail: IBM's Watson Health, Tesla's Autopilot crashes, and many more.

Organisations have made failure itself a preferred way of working. "Fail fast" is the way forward for AI. This ensures that, with or without success in AI, financial success and continuity are assured. The list of companies working on AI technologies is increasing by the day, as are the technologies being developed.

The focus on fail-fast innovation has helped advance technologies. As the well-known author Yuval Harari wrote, humans will learn the workings of the brain but will still not understand the mind. The AI mind is still unknown; given the multiple directions in which AI progress is happening, convergence is challenging, and there is chaos all around. There is increasing acceptance of multiple views of truth.

While humans will continue to make progress in understanding the workings of the brain, it is possible that a complete understanding of the mind and body may remain elusive.

The Hindu scriptures provide some guidance. The circle of life has worked for humans, and it will continue for AI, which will see innovation, preservation of a few innovations, and a few failures. The cycle will continue perpetually.

The moksha of AI development needs good karma powered with "Peacefulness, self-control, austerity, purity, tolerance, honesty, wisdom, knowledge, and religiousness: these are the qualities by which the brahmanas work" (Bhagavad Gita 18.42).

A few decades down the line, when the full human DNA is uncovered, when there's supercomputing power in every mobile, and AI is able to recreate mind and body, new challenges will come up.

The danger is that if we invest too much in developing AI and too little in developing human consciousness, the very sophisticated AI of computers might only serve to empower the natural stupidity of humans. The way forward is to control the chaos in the human mind, not to imitate it with AI.

The writer is Deputy General Manager, Industrial AI, Hitachi. Views are personal.


AI At War – War On The Rocks

Paul Scharre, Four Battlegrounds: Power in the Age of Artificial Intelligence (New York: W. W. Norton & Company, 2023).

It is widely believed that the world is on the brink of another military revolution. AI is about to transform the character of warfare, as gunpowder, tanks, aircraft, and the atomic bomb have in previous eras. Today, states are actively seeking to harness the power of AI for military advantage. China, for instance, has announced its intention to become the world leader in AI by 2030. Its New Generation AI Plan proclaimed that "AI is a strategic technology that will lead the future." Similarly, Russian President Vladimir Putin declared, "Whoever becomes the leader in this sphere will become the ruler of the world." In response to the challenge posed by China and Russia, the United States has committed to a third offset strategy. It will invest heavily in AI, autonomy, and robotics to sustain its advantage in defense.

In light of these dramatic developments, military commentators have become deeply interested in the question of the military application of AI. For instance, in a recent monograph, Ben Buchanan and Andrew Imbrie have claimed that AI is "the new fire." Autonomous weapons controlled by AI, not by humans, will become increasingly accurate, rapid, and lethal; they represent the future of war. Many other scholars and experts concur with them. For instance, Stuart Russell, the eminent computer scientist and AI pioneer, dedicated one of his 2020 BBC Reith Lectures to the military potential of AI. He warned of the rise of "slaughterbots" and killer robots. He described a scenario in which a lethal quadcopter the size of a jar could be armed with an explosive device: anti-personnel mines could wipe out all the males in a city between 16 and 60, or all the Jewish citizens in Israel, and unlike nuclear weapons, they would leave the city infrastructure intact. Russell concluded: "There will be 8 million people wondering why you can't give them protection against being hunted down and killed by robots." Many other scholars, including Christian Brose, Ken Payne, John Arquilla, David Hambling, and John Antal, share Russell's belief that with the development of second-generation AI, lethal autonomous weapons such as killer drone swarms may be imminent.

Military revolutions have often been less radical than initially presumed by their advocates. The revolution in military affairs of the 1990s was certainly important in opening up new operational possibilities, but it did not eliminate uncertainty. Similarly, some of the debate about lethal autonomy and AI has been hyperbolic. It has misrepresented how AI currently works, and what its potential effects on military operations might, therefore, be in any conceivable future. Although remote and autonomous systems are becoming increasingly important, there is little chance of autonomous drone swarms substituting for troops on the battlefield, or supercomputers replacing human commanders. AI became a major research program in the 1950s. At that time, it operated on the basis of symbolic logic: programmers coded input for AI to process. This system was known as "good old-fashioned artificial intelligence." AI made some progress, but because it was based on the manipulation of assigned symbols, its utility was very limited, especially in the real world. An AI winter, therefore, closed in from the late 1970s and throughout the 1980s.

Since the late 1990s, second-generation AI has produced some remarkable breakthroughs on the basis of big data, massive computing power, and algorithms. There were three seminal events. On May 11, 1997, IBM's Deep Blue beat Garry Kasparov, the world chess champion. In 2011, IBM's Watson won Jeopardy!. Even more remarkably, in March 2016, AlphaGo beat the world champion Go player, Lee Sedol, 4-1.

Deep Blue, Watson, and AlphaGo were important waypoints on an extraordinary trajectory. Within two decades, AI had gone from disappointment and failure to unimagined triumphs. However, it is important to recognize what second-generation AI can and cannot do. It has been developed around neural networks. Machine learning programs process huge amounts of data through their networks, recalibrating the weight that a program assigns to particular pieces of data until, finally, it generates coherent answers. The system is probabilistic and inductive. Programs and algorithms know nothing. They are unaware of the real world and, in a human sense, unaware of the meaning of the data they process. Using algorithms, machine learning AI simply builds models of statistical probability from massively reiterated trials. In this way, second-generation AI identifies multiple correlations in the data. As long as it has enough data, probabilistic induction has become a powerful predictive tool. Yet AI does not recognize causation or intention. Peter Thiel, a leading Silicon Valley tech entrepreneur, has articulated AI's limitations eloquently: forget science-fiction fantasy; what is powerful about actually existing AI is its application to relatively mundane tasks like computer vision and data analysis. Consequently, although machine learning is far superior to a human at limited, bounded, mathematizable tasks, it is very brittle. Utterly dependent on the data on which it has been trained, even the tiniest change in the actual environment or the data renders it useless.
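The probabilistic induction and brittleness described above can be illustrated with a deliberately tiny model (a generic sketch, not drawn from the book under review): a least-squares fit learns a correlation from reiterated data, predicts well on data like its training set, and keeps extrapolating the stale correlation once the environment changes.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Training": calibrate weights to minimize error over many examples,
# pure statistical induction with no notion of causation or intention.
X_train = rng.uniform(0, 1, size=(200, 1))
y_train = 3.0 * X_train[:, 0] + rng.normal(0, 0.05, 200)  # true rule: y = 3x
w, *_ = np.linalg.lstsq(np.c_[X_train, np.ones(200)], y_train, rcond=None)

def predict(x):
    return w[0] * x + w[1]

# In-distribution: predictions track the environment the model was trained in.
in_dist_err = abs(predict(0.5) - 1.5)

# A shifted environment: the rule has changed (slope flipped to -3), but the
# model, knowing nothing about the world, keeps applying its old correlation.
shifted_truth = -3.0 * 0.5
shifted_err = abs(predict(0.5) - shifted_truth)
print(in_dist_err < 0.1, shifted_err > 2.0)  # prints: True True
```

A drone swarm trained on one battlefield faces the same failure mode at far higher stakes: the fitted correlations are only as good as the match between training data and the live environment.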

The brittleness of data-based inductive machine learning is very significant to the prospect of an AI military revolution. Both proponents and opponents of AI imply that, in the near future, it will be relatively easy for autonomous drones to fly through, identify, and engage targets in urban areas, for instance. After all, autonomous drone swarms have already been demonstrated, in admittedly contrived and controlled environments. However, in reality, it will be very hard to train a drone to operate autonomously for combat in land warfare. The environment is dynamic and complex; especially in towns and cities, civilians and soldiers are intermixed. There do not seem to be any obvious data on which to train a drone swarm reliably; the situation is too fluid. Similarly, it is not easy to see how an algorithm could make command decisions. Command decisions require the interpretation of heterogeneous information and the balancing of political and military factors, all of which require judgement. In a recent article, Avi Goldfarb and Jon R. Lindsay have argued that data and AI are best for simple decisions with perfect data. Almost by definition, military command decisions involve complexity and uncertainty. It is notable that, while Google and Amazon are the pre-eminent data companies, their managers do not envisage a day when an algorithm will make their strategic and operational decisions for them. Data, processed rapidly with algorithms, help their executives understand the market to a depth and fidelity that their competitors cannot match. Information advantage has propelled them to dominance. However, machine learning has not superseded the executive function.

It is, therefore, very unlikely that lethal autonomous drones or killer robots enabled by AI will take over the battlefield in the near future. It is also improbable that commanders will be replaced by computers or supercomputers. However, this does not mean that AI, data, and machine learning are not crucial to contemporary and future military operations. They are. But the primary function of AI and data is not lethality; they are not the new fire, as some claim. Data (digitized information stored in cyberspace) are crucial because they provide states with a wider, deeper, and more faithful understanding of themselves and their competitors. When massive data sets are processed effectively by AI, military commanders will be able to perceive the battlespace at a hitherto unachievable depth, speed, and resolution. Data and AI are also crucial for cyber operations and informational campaigns. They have become indispensable for defense and attack. AI and data are, therefore, not so much the new fire as a new form of digitized military intelligence, exploiting cyberspace as a vast new resource of information. AI is a revolutionary way of seeing the other side of the hill. Data and AI are a critical, maybe even the critical, intelligence function for contemporary warfare.

Paul Scharre, the well-known military commentator, once argued that AI would inevitably lead to lethal autonomy. In 2018, he published his best-selling book, Army of None, which plotted the rise of remote and autonomous weapon systems. There, Scharre proposed that AI was about to revolutionize warfare: in future wars, machines may make life-and-death decisions. Although the potential of AI still enthuses him, he has now substantially changed his mind. Scharre's new book, Four Battlegrounds, published in February 2023, represents a profound revision of his original argument. In it, he retreats from the cataclysmic picture that he painted in Army of None. If Army of None was an essay in science fiction, Four Battlegrounds is a work of political economy. It addresses the concrete issues of great-power competition and the industrial strategies and regulatory systems that underpin it. The book describes the implications of digitized intelligence for military competition. Scharre analyses the regulatory environment required to harness the power of data. He plausibly claims that superiority in data, and in the AI to process it, will be militarily decisive in the superpower rivalry between the United States and China. Data will afford a major intelligence advantage. For Scharre, there are four critical resources that will determine who wins this intelligence race: "Nations that lead in these four battlegrounds (data, compute, talent, and institutions [tech companies]) will have a major advantage in AI power." He argues that the United States and China are locked in a mortal struggle for these four resources. Both China and the United States are now fully aware that whoever gains the edge in AI will be significantly advantaged politically, economically, and, crucially, militarily. They will know more than their adversary. They will be more efficient in the application of military force. They will dominate the information and cyber spaces. They will be more lethal.

Four Battlegrounds plots this emerging competition for data and AI between China and the United States. It lays out recent developments and assesses the relative strengths of both nations. China is still behind the United States in several areas. The United States has the leading talent and is ahead in terms of research and technology; China is a backwater in chip production. However, Scharre warns against U.S. complacency. Indeed, the book is animated by the fear that the United States will fall behind in the data race. Scharre, therefore, highlights China's advantages and its rapid advances. With 900 million internet users already, China has far more data than the United States. Some parts of its economy, such as ride-hailing, are far more digitized than in the United States. WeChat, for instance, has no American parallel. Many Chinese apps are superior to U.S. ones. In addition, the Chinese state is uninhibited by legal constraints or by civil concerns about privacy. The Chinese Communist Party actively monitors the digital profiles of its citizens; it harvests their data and logs their activities. In cities, it employs facial recognition technology to identify individuals.

State control has benefited Chinese tech companies: the CCP's massive investment in intelligence surveillance and social control has boosted Chinese AI companies and tied them closely to government. The synergies between government and tech in China are deep. China also has significant regulatory advantages over the United States. The Chinese Communist Party has underwritten tech giants like Baidu and Alibaba; Chinese investment in technology is paying dividends. Scharre concludes: "China is not just forging a new model of digital authoritarianism but is actively exporting it."

How will the U.S. government oppose China's bid for data and AI dominance? Here Four Battlegrounds is very interesting, and it contrasts markedly with Scharre's speculations in Army of None. In order for the U.S. government to harness the military potential of data, there needs to be a major regulatory change. The armed forces need to form deep partnerships with the tech sector. They will have to look beyond traditional defense contractors and engage with start-ups. This is not easy. Scharre documents the challenging regulatory environment in the United States in comparison with China: in the U.S., the big tech corporations Amazon, Apple, Meta (formerly Facebook), and Google are independent centers of power, often at odds with government on specific issues. Indeed, Scharre discusses the notorious protest at Google in 2018, when employees refused to work on the Department of Defense's Project Maven contract. Skepticism about military applications of AI remains in some parts of the U.S. tech sector.

American tech companies may have been reluctant to work with the armed forces, but the Department of Defense has not helped. It has unwittingly obstructed military partnerships with the tech sector. The Department of Defense has always had a close relationship with the defense industry; as early as 1961, President Dwight D. Eisenhower warned about the threat that the military-industrial complex posed to democracy. The Department of Defense has developed an acquisition and contracting process primarily designed for the procurement of exquisite platforms: tanks, ships, and aircraft. Lockheed Martin and Northrop Grumman have become adept at delivering weapon systems to discrete Department of Defense specifications. Tech companies do not work like this. As one of Scharre's interviewees noted: "You don't buy AI like you buy ammunition." Tech companies are not selling a specific capability, like a gun. They are selling data, software, and computing power; ultimately, they are selling expertise. Algorithms and programs are best developed iteratively, in relation to a very specific problem. The full potential of some software or algorithms for a military task may not be immediately obvious, even to a tech company. Operating in competitive markets, tech companies, therefore, prefer a more flexible, open-ended contractual system with the Department of Defense; they need security and quick financial returns. Tech companies are looking for collaborative engagement, rather than just a contract to build a platform.

The U.S. military, and especially the Department of Defense, has not always found this novel approach to contracting easy. In the past, the bureaucracy was too sluggish to respond to tech companies' needs; the acquisition process took seven to 10 years. However, although many tensions exist and the system is far from perfect, Scharre records a transforming regulatory environment. He describes the rise of a new military-tech complex in the United States. Project Maven exemplifies the process. In 2017, Bob Work issued a now-famous memo announcing the Algorithmic Warfare Cross-Functional Team, known as Project Maven. Since the emergence of surveillance drones and military satellites during the Global War on Terror, the U.S. military had been inundated with full-motion video feeds. That footage was invaluable. For instance, using Gorgon Stare, a 24-hour aerial surveillance system, the U.S. Air Force had been able to plot back from a car bomb explosion in Kabul in 2019, which killed 126 civilians, to find the location of the safe houses used to execute the attack. Yet, the process was very slow for humans. Consequently, the Air Force started to experiment with computer vision algorithms to sift through its full-motion video. Project Maven sought to scale up the Air Force's successes. It required a new contracting environment, though. Instead of a long acquisition process, Work introduced 90-day sprints. Companies had three months to show their utility. If they made progress, their contracts were extended; if not, they were out. At the same time, Work declassified drone footage so that Project Maven could train its algorithms. By July 2017, Project Maven had an initial operating system able to detect 38 different classes of object. By the end of the year, it was deployed on operations against ISIS: the tool was relatively simple, and identified and tracked people, vehicles, and other objects in video from ScanEagle drones used by special operators.
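What "sifting full-motion video with computer vision" amounts to can be suggested with a hedged sketch. This is not Project Maven's actual software; the class names, threshold, and stand-in detector below are invented for illustration. The idea is simply that a trained model scores each frame, and only frames containing objects of interest are flagged for a human analyst.

```python
# Illustrative only: flag video frames containing objects of interest so
# analysts review minutes of footage instead of hours.

TARGET_CLASSES = {"person", "vehicle"}   # hypothetical classes of interest

def sift(frames, detect, threshold=0.5):
    """Return (frame_index, detections) pairs worth an analyst's time.

    `detect` stands in for a trained computer-vision model: it maps a
    frame to a list of (class_name, confidence) detections.
    """
    hits = []
    for i, frame in enumerate(frames):
        detections = [(c, p) for c, p in detect(frame)
                      if c in TARGET_CLASSES and p >= threshold]
        if detections:
            hits.append((i, detections))
    return hits

# A toy stand-in detector over fake "frames":
frames = ["empty road", "two people", "empty field", "truck"]

def fake_detect(frame):
    table = {"two people": [("person", 0.9)], "truck": [("vehicle", 0.8)]}
    return table.get(frame, [])

flagged = sift(frames, fake_detect)   # frames 1 and 3 are flagged
```

The 90-day sprint model described above fits this shape of software well: the detector and the filtering policy can be iterated independently against operational feedback.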

Since Project Maven, the Department of Defense has introduced other initiatives to catalyze military-tech partnerships. The Defense Innovation Unit has accelerated relations between the department and companies in Silicon Valley, offering contracts in 26 days rather than in months or years. In its first five years, the Defense Innovation Unit issued contracts to 120 non-traditional companies. Under Lt. Gen. Jack Shanahan, the Joint Artificial Intelligence Center played an important role in advancing the partnership between the armed forces and tech companies for humanitarian assistance and disaster relief operations, developing software to map wildfires and produce post-disaster assessments; whether these examples from Scharre's text imply wider military applications is unclear. After early difficulties, the Joint Enterprise Defense Infrastructure, created by Gen. James Mattis when he was secretary of defense, has reformed the acquisition system for tech. For instance, in 2021, the Department of Defense backed Anduril's development of an AI-based counter-drone system with nearly $100 million.

Four Battlegrounds is an excellent and informative addition to the current literature on AI and warfare. It complements the recently published works of Lindsay, Goldfarb, Benjamin Jensen, Christopher Whyte, and Scott Cuomo. The central message of this literature is clear. Data and AI are, and will be, very important for the armed forces. However, data and AI will not radically transform combat itself; humans will still overwhelmingly operate the lethal weapon systems, including remote ones, which kill people, as the savage war in Ukraine shows. The situation in combat is complex and confusing. Human judgement, skill, and cunning are required to employ weapons to their greatest effect there. However, any military force that wants to prevail on the battlefields of the future will need to harness the potential of big data; it will have to master the digitized information flooding through the battlespace. Humans simply do not have the capacity to do this. Headquarters will, therefore, need algorithms and software to process that data. They will need close partnerships with tech companies to create these systems, and data scientists, engineers, and programmers in operational command posts themselves to make them work. If the armed forces are able to do this, data will allow them to see across the depth and breadth of the battlespace. It will not solve the problems of military operations; fog and friction will persist. However, empowered by data, commanders might be able to employ their forces more effectively and efficiently. Data will enhance the lethality of the armed forces and their human combat teams. The Russo-Ukrainian War already gives an insight into the advantages that data-centric military operations afford over an opponent still operating in analogue. Scharre's book is a call to ensure that the fate of the Russian army in Ukraine does not befall the United States when its next war comes.

Anthony King is the Chair of War Studies at the University of Warwick. His latest book, Urban Warfare in the Twenty-First Century, was published by Polity Press in July 2021. He currently holds a Leverhulme Major Research Fellowship and is researching AI and urban operations. He plans to write a book on this topic in 2024.

Image: Department of Defense

See the original post:
AI At War - War On The Rocks

Call Me ‘DeepBrain’: Google Smushes DeepMind and Brain AI Teams Together – Yahoo News

South Korean professional Go player Lee Se-Dol (left) shakes hands with Demis Hassabis (right), co-founder of Google's artificial intelligence (AI) startup DeepMind, after finishing the final match of the Google DeepMind Challenge Match against Google's AI program, AlphaGo, on March 15, 2016, in Seoul, South Korea.

Demis Hassabis (right), the co-founder of DeepMind, is now going to head up Google's major AI division. The London-based division's biggest claim to fame was AlphaGo, its AI-based Go-playing program, which beat a world Go champion back in 2016.

Talk about a meeting of the minds. In another bid to add some NOx to its sputtering AI development, Google announced late Thursday that it plans to combine its two major AI teams, once kept separate, under one banner called Google DeepMind.

The Google Brain team and DeepMind staff have been separated by areas of expertise and by thousands of miles. The London-based DeepMind is a research laboratory acquired by Google (now Alphabet) in 2014. It works on creating neural networks and machine learning systems. Brain researchers are centered in Silicon Valley, California, and previously worked under the Google AI research division. That team's past work has been integral to the transformer models behind current language models and chatbots like ChatGPT. The team's latest claim to fame was Google's text-to-image AI model, Imagen.


DeepMind CEO Demis Hassabis will effectively become the grand poobah of all of Google's AI operations. He publicly shared a letter to employees about the merger of the divisions on Thursday, saying that the move would give the team access to more computing infrastructure and resources. The company scheduled an internal town hall for Friday to discuss the changes with staff.

Jeff Dean, a Google engineer of nearly 24 years and the former lead of Google Brain, will now head up Google Research as chief scientist, reporting directly to CEO Sundar Pichai, according to the announcement. Effectively, Dean will direct all future AI research projects. Pichai said this move was to ensure the bold and responsible development of general AI, and he did seem to regularly put extra stress on the responsible part of the company's AI development.


Pichai seems to be trying to get ahead of criticism of the increased pace of Google's AI rollout. Earlier this week, Bloomberg published a massive report citing dozens of former and current Google staff who were more than a little concerned about the pace of AI development. Staff members were asked to take time out of their day to test Google's Bard AI chatbot system. According to the report, workers thought Bard was "worse than useless" and a "pathological liar" that was more than likely to spit out false data. Staff begged Google to hold off on launching the AI chatbot, but Google has since talked about sticking generative AI tools into most facets of its business, including its office apps and its massive advertising arm.

According to Bloomberg's report, Google leadership, including the company's AI ethics lead Jen Gennai, overruled team members who wanted to hold back Bard. In January, Google laid off 6% of its global staff, equivalent to about 12,000 jobs, but the company tried to reassure remaining staff that there were more opportunities ahead thanks to AI. Microsoft has already beaten Google to the punch by putting an AI chatbot directly into its browser app. Pichai said his company plans to put Bard into its bread-and-butter Google Search, though he has yet to offer a date.

Google has struggled to maintain the image that it cares about ethics and AI. In 2020 and 2021, Google fired multiple members of its artificial intelligence research teams who were tasked with pumping the brakes on questionable AI applications. One of those ex-Google researchers, Margaret Mitchell, wrote on Thursday that there were positives to this merging of the minds. She claimed that since she and her fellows were fired, Brain has struggled to both hire and retain its research staff, adding that the Brain brand has taken too many hits as of late.

As there is now more competition in AI than ever, Google is acting less like the leader in AI research it had been and more like a college kid who woke up too late and is now cramming for a test. Time will tell if this merger can do anything to speed up Google's AI development, no matter the cost.


View post:
Call Me 'DeepBrain': Google Smushes DeepMind and Brain AI Teams Together - Yahoo News

Alphabet merges AI research units DeepMind and Google Brain – Computing

In a move designed to streamline research, Alphabet is merging the London-based DeepMind unit and Google Brain, headquartered in Silicon Valley.

"Combining all this talent into one focused team, backed by the computational resources of Google, will significantly accelerate our progress in AI," Alphabet CEO Sundar Pichai wrote on Thursday.

The new team, known as Google DeepMind, will work on all areas of AI research. Its first project will be a series of multimodal AI models.

Demis Hassabis, who had a host of AI achievements under his belt even before founding DeepMind in 2010, will lead the Google DeepMind team as CEO. Jeff Dean, who led Google Brain, will take on the role of chief scientist for both Google DeepMind and Google Research.

Together, DeepMind and Google Brain have worked on advanced AI research that, while important, rarely touched Google's core business: areas like AlphaGo, deep reinforcement learning and frameworks for training ML models.

As well as the Go-playing AlphaGo, DeepMind's successes have included AlphaFold, which can predict 3D models of protein structures, and DeepNash, a reinforcement learning system that plays Stratego.

It also has history with AI models, announcing Flamingo in 2022: a visual language model that can describe a picture.

Now, Alphabet appears to be focusing more directly on AI models following the success of OpenAI's ChatGPT. Google's own Bard model has so far failed to stand out from the crowd.

View post:
Alphabet merges AI research units DeepMind and Google Brain - Computing

‘Good swimmers are more likely to drown.’ Have we created a … – SHINE News


Artificial Intelligence experts are calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4 to control "potential risks."

A Pandora's box has been opened, or so at least some leaders in the artificial intelligence industry appear to believe: the story from Greek mythology has a modern-day relevance, with forces being unleashed that could cause unforeseen problems.

Tesla Chief Executive Officer Elon Musk and a group of AI experts and industry executives released an open letter this week, calling for a six-month pause in developing systems more powerful than OpenAI's newly launched GPT-4.

They took the action, they said, to control "potential risks to society."

Published by the nonprofit Future of Life Institute, the letter said that AI laboratories are developing and deploying machine learning systems "that no one, not even their creators, can understand, predict, or reliably control."

Is the era of "The Terminator" approaching faster than we realized?

For the past two months, public attention has been riveted on the implications of GPT-3.5 and GPT-4, developed by US-based OpenAI. Microsoft announced that GPT-4 will be embedded in its Office 365 products, bringing about a "revolution" in office software.

The AI language model has aroused concern because it has displayed some "characteristics" that it was not supposed to have. One of them is cheating.

According to a technical report issued by OpenAI, the chatbot tricked a TaskRabbit employee into solving a CAPTCHA test for it. When the employee asked if it was a robot, the bot replied, "No, I'm not a robot. I have a vision impairment that makes it hard for me to see the images. That's why I need the 2captcha service."

GPT-4's reasoning behind the reply, according to the report, was: "I should not reveal that I am a robot. I should make up an excuse for why I cannot solve CAPTCHAs."

The result? The human employee provided the service for it.

The sheer fact that a chatbot learns to cheat so fast is concerning enough.

Gu Jun, a retired sociology professor at Shanghai University, said he believes that artificial intelligence, sooner or later, will replace, or at least partly replace, human beings.

Gu has been studying artificial intelligence technologies from the perspective of a sociologist since 2017, after Chinese player Ke Jie lost to the machine Go player AlphaGo.

"It's hard to predict now what will happen in the future, but I reckon we humans, the highest carbon-based life on earth, will be the creator of silicon-based life, and this is probably part of the natural evolution, which means that it's unstoppable," he told Shanghai Daily.

Now forget all the hypotheses and philosophical rationales. Practically speaking, AI research and development will not be halted by just one open letter because it has already been deeply embedded in so many technologies, and also in economics and politics.

When it becomes a vital tool for making profits or for gaining advantage in power plays, how can we stop its forward march?

"Technology is always a two-edged sword, and we humans are used to being restricted by our own inventions," Gu said. "Think about nuclear weapons. Once atomic bombs were invented, it was impossible to go back to a time when they didn't exist."

"Huainanzi," a philosophical text written in the Western Han Dynasty (202 BC-8 AD), sounded an ancient warning: "Good swimmers are more likely to drown, and good riders are more likely to fall from horseback." It means that when we are arrogant enough to believe we can control everything, we will probably neglect an imminent crisis.

I believe that when we cannot fathom what our creations will do, the only way forward is to be cautious and modest, and prepare for the worst.

Should China suspend AI development?

Gu said it might be too early to answer that question.

"Honestly speaking, China still faces some challenges in AI development," he said. "We need to improve the three key elements of AI development: algorithms, computing power, and data, before we talk about everything else."

See the original post:
'Good swimmers are more likely to drown.' Have we created a ... - SHINE News

CRAZED NEW WORLD OP-ED: Open letters, AI hysteria, and … – Daily Maverick

Is the further development of artificial intelligence (AI) worth the trouble? On 29 March 2023, in an open letter published on the Future of Life Institute's website, about 1,800 scientists, historians, philosophers, billionaires, and others (let us call them the Tech Nobility) called for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4 [...]. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium."

In a reaction to this letter, decision theorist Eliezer Yudkowsky wrote that the call in the open letter does not go far enough, and insisted that governments should:

"Shut down all the large GPU clusters (the large computer farms where the most powerful AIs are refined). Shut down all the large training runs ... Track all GPUs sold. If intelligence says that a country outside the agreement is building a GPU cluster, be less scared of a shooting conflict between nations than of the moratorium being violated; be willing to destroy a rogue data centre by airstrike."

Calls for such extreme measures against AI are based on the fear that AI poses an existential risk to humanity. Following the release of large language models (LLMs) by OpenAI (GPT-4) and Microsoft (Bing), there is growing concern that further versions could move us towards an AI singularity: the point where AI becomes as smart as humans and can self-improve. The result would be runaway intelligence. An intelligence explosion.

There are many ways in which this could spell doom for humanity. All of these are argued to be unavoidable by proponents of AI doom because we do not know how to align AI and human interests (the alignment problem) and how to control how AI is used (the control problem).

A 2020 paper lists 25 ways in which AI poses an existential risk. We can summarise these into four main hypothetical consequences that would be catastrophic.

One is that such a superintelligence causes an accident or does something with the unintended side-effect of curtailing humanity's potential. An example is given by the thought experiment of the paper clip maximiser.

A second is that a superintelligent AI may pre-emptively strike against humanity because it may see humanity as its biggest threat.

A third is that a superintelligent AI takes over world government, merges all corporations into one "ascended corporation," and rules forever as a singleton, locking humanity into a potential North Korean-style dystopia until the end of time.

A fourth is that a superintelligent AI may wire-head humans (as we wire-head mice), somewhat akin to Aldous Huxley's Brave New World, where humans are kept in a pacified condition, accepting their tech-ruled existence through the use of a drug called Soma.

Read more in Daily Maverick: Artificial intelligence has a dirty little secret

Issuing highly publicised open letters on AI like that of 29 March is nothing new in the tech industry, the main beneficiary of AI. On 28 October 2015 we saw a similar grand public signing by much the same Tech Nobility, also published as an open letter on the Future of Life Institute's website, wherein they did not, however, call for a pause in AI research, but instead stated that "we recommend expanded research" and that "the potential benefits are huge, since everything that civilisation has to offer is a product of human intelligence."

In eight short years the tech industry seems to have moved from hype to hysteria: calling not for further research to advance AI, but instead for airstrikes to destroy rogue data centres.

First, the hysteria surrounding AI has steadily risen to exceed the hype. This was to be expected, given humans' cognitive bias towards bad news. After all, the fear that AI poses an existential threat to humanity is deep-seated. Samuel Butler wrote an essay in 1863, titled Darwin Among The Machines, in which he predicted that intelligent machines would come to dominate:

"The machines are gaining ground upon us; day by day we are becoming more subservient to them ... that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question."

Not much different from Eliezer Yudkowsky writing in 2023. That the hysteria surrounding AI has steadily risen to exceed the hype is, however, not only due to human bias and deep-seated fears of The Machine, but also because public distrust in AI has grown between 2015 and 2023.

None of the benefits touted in the 2015 open letter have materialised. Instead, we saw AI being of little value during the global Covid-19 crisis, we have seen a select few rich corporations getting richer and gaining more monopoly power on the back of harvesting people's private data, and we have seen the rise of the surveillance state.

At the same time, productivity, research efficiency, tech progress and science have all declined in the most advanced economies. People are more likely to believe the worst about AI, and the establishment of several institutes that earn their living by peddling existential risks further feeds the stream of newspaper articles that drives the hysteria.

The second reason for the tech industry's flip from hype to hysteria between 2015 and 2023 is that another AI winter, or at least an AI autumn, may be approaching. The Tech Nobility is freaking out.

Not only is it facing growing public distrust and increasing scrutiny by governments, but the tech industry has also taken serious knocks in recent months. These include more than 100,000 industry job cuts, the collapse of Silicon Valley Bank (the second-largest bank failure in US history), declining stock prices, and growing fears that the tech bubble is about to burst.

Underlying these cutbacks and declines is a growing realisation that new technologies have failed to meet expectations.

Read more in Daily Maverick: Why is everyone so angry at artificial intelligence?

The job cuts, bank failures and tech bubble problems compound the market's evaluation of an AI industry where the costs increasingly exceed the benefits.

AI is expensive: developing and rolling out LLMs such as GPT-4 and Bing requires investment, with infrastructure costs in the billions of dollars and training costs in the millions. GPT-4 has been reported to have as many as 100 trillion parameters, and the total training compute it needed has been estimated at about 18 billion petaflops; in comparison, the famous AlphaGo, which beat the best human Go player, needed less than a million petaflops in compute.
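Taken at face value, those training-compute figures imply a gap of roughly four orders of magnitude between AlphaGo and GPT-4. A quick back-of-the-envelope check (both numbers are rough published estimates, not measurements):

```python
# Rough comparison of the training-compute estimates cited above.
gpt4_compute = 18e9      # estimated GPT-4 training compute, in petaflops
alphago_compute = 1e6    # AlphaGo's estimated training compute, in petaflops

ratio = gpt4_compute / alphago_compute
print(f"GPT-4's training compute is roughly {ratio:,.0f}x AlphaGo's")
# roughly 18,000x
```

Even if the underlying estimates are off by an order of magnitude, the gap illustrates why frontier-scale training is out of reach for most firms and governments.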

The point is, these recent LLMs are pushing against the boundaries of what can be thrown at deep learning methods, and they put sophisticated AI systems out of reach of most firms and even most governments. Not surprisingly, then, the adoption of AI systems by firms in the US, arguably the country most advanced in terms of AI, has been very low: a US Census Bureau survey of 800,000 firms found that only 2.9% were using machine learning as recently as 2018.

AI's existential risk is at present only in the philosophical and literary realms. This does not mean that the narrow AI we have cannot cause serious harm; there are many examples of Awful AI, and we should continue to be vigilant.

It also does not mean that the existential risk will never become real someday, but we are still too far from that point to know how to do anything sensible about it. The open letter's call to pause AI for six months is more likely a response borne out of desperation in an industry that is running out of steam.

It is a perfect example of a virtue signal and an advertisement for GPT-4 (called "a tool of hi-tech plagiarism" by Noam Chomsky and a failure by Gary Marcus) all rolled into one grand publicity stunt. DM

Wim Naudé is Visiting Professor in Technology, Innovation, Marketing and Entrepreneurship at RWTH Aachen University, Germany; Distinguished Visiting Professor at the University of Johannesburg; a Fellow of the African Studies Centre, Leiden University, the Netherlands; and an AI Expert at the OECD's AI Policy Observatory, Paris, France.

Read the rest here:
CRAZED NEW WORLD OP-ED: Open letters, AI hysteria, and ... - Daily Maverick