Category Archives: AI

OpenAI defines five 'levels' for AI to reach human intelligence – it's almost at level 2 – Quartz

OpenAI CEO Sam Altman at the AI Insight Forum in the Russell Senate Office Building on Capitol Hill on September 13, 2023 in Washington, D.C. Photo: Chip Somodevilla ( Getty Images )

OpenAI is undoubtedly one of the leaders in the race to reach human-level artificial intelligence, and it's reportedly four steps away from getting there.


The company shared a five-level system it developed to track its artificial general intelligence, or AGI, progress with employees this week, an OpenAI spokesperson told Bloomberg. The levels go from the currently available conversational AI to AI that can perform the same amount of work as an organization. OpenAI will reportedly share the levels with investors and people outside the company.

While OpenAI executives believe it is on the first level, the spokesperson said it is close to level two, which is defined as "Reasoners": AI that can perform basic problem-solving and is on the level of a human with a doctorate degree but no access to tools. The third level of OpenAI's system is reportedly called "Agents": AI that can perform different actions for several days on behalf of its user. The fourth level is reportedly called "Innovators," and describes AI that can help develop new inventions.

OpenAI leaders also showed employees a research project with GPT-4 that demonstrated it has human-like reasoning skills, Bloomberg reported, citing an unnamed person familiar with the matter. The company declined to comment further.

The system was reportedly developed by OpenAI executives and leaders, who can eventually change the levels based on feedback from employees, investors, and the company's board.

In May, OpenAI disbanded its Superalignment team, which was responsible for working on the problem of AI's existential dangers. The company said the team's work would be absorbed by other research efforts across OpenAI.


AI’s Energy Demands Are Out of Control. Welcome to the Internet’s Hyper-Consumption Era – WIRED

Right now, generative artificial intelligence is impossible to ignore online. An AI-generated summary may randomly appear at the top of the results whenever you do a Google search. Or you might be prompted to try Meta's AI tool while browsing Facebook. And that ever-present sparkle emoji continues to haunt my dreams.

This rush to add AI to as many online interactions as possible can be traced back to OpenAI's boundary-pushing release of ChatGPT late in 2022. Silicon Valley soon became obsessed with generative AI, and nearly two years later, AI tools powered by large language models permeate the online user experience.

One unfortunate side effect of this proliferation is that the computing processes required to run generative AI systems are much more resource intensive. This has led to the arrival of the internet's hyper-consumption era, a period defined by the spread of a new kind of computing that demands excessive amounts of electricity and water to build as well as operate.

"In the back end, these algorithms that need to be running for any generative AI model are fundamentally very, very different from the traditional kind of Google Search or email," says Sajjad Moazeni, a computer engineering researcher at the University of Washington. "For basic services, those were very light in terms of the amount of data that needed to go back and forth between the processors." In comparison, Moazeni estimates generative AI applications are around 100 to 1,000 times more computationally intensive.
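Moazeni's 100-to-1,000x multiplier can be made concrete with a rough back-of-envelope comparison. The per-query energy figure and the multiplier chosen below are illustrative assumptions, not measured values:

```python
# Rough sketch: daily energy cost of serving queries at two compute-intensity
# levels. The numbers here are illustrative assumptions, not measurements.

TRADITIONAL_WH_PER_QUERY = 0.3   # assumed energy for a classic search query, in Wh
AI_INTENSITY_MULTIPLIER = 300    # a midpoint of the 100-1,000x range cited above

def daily_energy_kwh(queries_per_day: float, wh_per_query: float) -> float:
    """Total energy, in kWh, for one day's worth of queries."""
    return queries_per_day * wh_per_query / 1000

queries = 1_000_000
classic = daily_energy_kwh(queries, TRADITIONAL_WH_PER_QUERY)
genai = daily_energy_kwh(queries, TRADITIONAL_WH_PER_QUERY * AI_INTENSITY_MULTIPLIER)

print(f"classic search: {classic:,.0f} kWh/day")
print(f"generative AI:  {genai:,.0f} kWh/day ({genai / classic:.0f}x)")
```

Whatever the exact per-query figure, the point survives the arithmetic: multiplying every interaction's cost by a few hundred turns a modest electricity bill into a data-center-scale one.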

The technology's energy needs for training and deployment are no longer generative AI's dirty little secret, as expert after expert last year predicted surges in energy demand at data centers where companies work on AI applications. Almost as if on cue, Google recently stopped considering itself to be carbon neutral, and Microsoft may trample its sustainability goals underfoot in the ongoing race to build the biggest, bestest AI tools.

"The carbon footprint and the energy consumption will be linear to the amount of computation you do, because basically these data centers are being powered proportional to the amount of computation they do," says Junchen Jiang, a networked systems researcher at the University of Chicago. The bigger the AI model, the more computation is often required, and these frontier models are getting absolutely gigantic.

Even though Google's total energy consumption doubled from 2019 to 2023, Corina Standiford, a spokesperson for the company, said it would not be fair to state that Google's energy consumption spiked during the AI race. "Reducing emissions from our suppliers is extremely challenging, which makes up 75 percent of our footprint," she says in an email. The suppliers that Google blames include the manufacturers of servers, networking equipment, and other technical infrastructure for the data centers – an energy-intensive process that is required to create physical parts for frontier AI models.


This U.S. company is helping arm Ukraine against Russia with AI drones – NPR

This U.S. company is helping arm Ukraine against Russia with AI drones : Consider This from NPR Palmer Luckey launched his first tech company as a teenager. That was Oculus, the virtual reality headset for gaming. Soon after, he sold it to Facebook for $2 billion.

Now 31, Luckey has a new company called Anduril that's making artificial intelligence weapons. The Pentagon is buying them, keeping some for itself and sending others to Ukraine.

The weapons could be instrumental in helping Ukraine stand up to Russia.

Ukraine needs more weapons and better weapons to fight against Russia. Could AI weapons made by a billionaire tech entrepreneur's company hold the answer?

For sponsor-free episodes of Consider This, sign up for Consider This+ via Apple Podcasts or at plus.npr.org.

Email us at considerthis@npr.org.

Palmer Luckey, 31, founder of Anduril Industries, stands in front of the Dive-LD, an autonomous underwater drone, at company headquarters in Costa Mesa, Calif. Anduril recently won a U.S. Navy contract to build 200 of them annually. Photo: Philip Cheung for NPR


This episode was produced by Kathryn Fink and Jonaki Mehta. It was edited by Courtney Dorning and Andrew Sussman. Our executive producer is Sami Yenigun.


The sperm whale ‘phonetic alphabet’ revealed by AI – BBC.com


By Katherine Latham and Anna Bressanin

Researchers studying sperm whale communication say they've uncovered sophisticated structures similar to those found in human language.

"At 1,000m (3,300ft) deep, many of the group will be facing the same way, flanking each other but across an area of several kilometres," says Young. "During this time they're talking, clicking the whole time." After about an hour, she says, the group rises to the surface in synchrony. "They'll then have their rest phase. They might be at the surface for 15 to 20 minutes. Then they'll dive again," she says.

At the end of a day of foraging, says Young, the sperm whales come together at the surface and rub against each other, chatting while they socialise. "As researchers, we don't see a lot of their behaviour because they don't spend that much time at the surface," she says. "There's masses we don't know about them, because we are just seeing a tiny little snapshot of their lives during that 15 minutes at the surface."

It was around 47 million years ago that land-roaming cetaceans began to gravitate back towards the ocean – that's 47 million years of evolution in an environment alien to our own. How can we hope to easily understand creatures that have adapted to live and communicate under evolutionary pressures so different from our own?

"It's easier to translate the parts where our world and their world overlap like eating, nursing or sleeping," says David Gruber, lead and founder of the Cetacean Translation Initiative (Ceti) and professor of biology at the City University of New York. "As mammals, we share these basics with others. But I think it's going to get really interesting when we try to understand the areas of their world where there's no intersection with our own," he says.

Sperm whales live in multi-level, matrilineal societies – groups of daughters, mothers and grandmothers – while the males roam the oceans, visiting the groups to breed. They are known for their complex social behaviour and group decision-making, which requires sophisticated communication. For example, they are able to adapt their behaviour as a group when protecting themselves from predators like orcas or humans.

Sperm whales communicate with each other using rhythmic sequences of clicks, called codas. It was previously thought that sperm whales had just 21 coda types. However, after studying almost 9,000 recordings, the Ceti researchers identified 156 distinct codas. They also noticed the basic building blocks of these codas – which they describe as a "sperm whale phonetic alphabet" – much like phonemes, the units of sound in human language which combine to form words. (Watch the video below to hear some of the variety in sperm whale vocalisations the AI identified.)

Pratyusha Sharma, a PhD student at MIT and lead author of the study, describes the "fine-grain changes" in vocalisations the AI identified. Each coda consists of between three and 40 rapid-fire clicks. The sperm whales were found to vary the overall speed, or "tempo," of the codas, as well as to speed up and slow down during the delivery of a coda – in other words, making it "rubato." Sometimes they added an extra click at the end of a coda, akin, says Sharma, to "ornamentation" in music. These subtle variations, she says, suggest sperm whale vocalisations could carry a much richer amount of information than previously thought.
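The features Sharma describes can be sketched as simple statistics over a coda's click timestamps. This is an illustrative reconstruction based only on the description above, not the Ceti team's actual code; the exact feature definitions here are assumptions:

```python
# Illustrative feature extraction for a sperm whale coda, treated as a list
# of click timestamps in seconds. Feature definitions are loose paraphrases
# of the study's description (tempo, rubato), not the researchers' code.

def coda_features(click_times: list[float]) -> dict:
    n = len(click_times)
    assert 3 <= n <= 40, "codas reportedly contain between 3 and 40 clicks"
    # Inter-click intervals carry the rhythm of the coda
    intervals = [b - a for a, b in zip(click_times, click_times[1:])]
    return {
        "num_clicks": n,                                  # extra final click ~ "ornamentation"
        "duration_s": click_times[-1] - click_times[0],   # overall speed: shorter = faster "tempo"
        "rubato": intervals[-1] - intervals[0],           # > 0 slowing down, < 0 speeding up
    }

# A 4-click coda whose intervals widen, i.e. it slows toward the end:
print(coda_features([0.0, 0.1, 0.25, 0.45]))
```

Clustering codas by vectors like these, rather than by raw audio, is one plausible way such an "alphabet" of discrete building blocks could emerge from thousands of recordings.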

"Some of these features are contextual," says Sharma. "In human language, for example, I can say 'what' or 'whaaaat!?'. It's the same word, but to understand the meaning you have to listen to the whole sound," she says.

The researchers also found the sperm whale "phonemes" could be used in a combinatorial fashion, allowing the whales to construct a vast repertoire of distinct vocalisations. The existence of a combinatorial coding system, write the report authors, is a prerequisite for "duality of patterning" – a linguistic phenomenon thought to be unique to human language in which meaningless elements combine to form meaningful words.

However, Sharma emphasises, this is not something they have any evidence of as yet. "What we show in sperm whales is that the codas themselves are formed by combining from this basic set of features. Then the codas get sequenced together to form coda sequences." Much like humans combine phonemes to create words, and then words to create sentences.

So, what does all this tell us about sperm whales' intelligence? Or their ability to reason, or store and share information?

"Well, it doesn't tell us anything yet," says Gruber. "Before we can get to those amazing questions, we need to build a fundamental understanding of how [sperm whales communicate] and what's meaningful to them. We see them living very complicated lives, the coordination and sophistication in their behaviours. We're at base camp. This is a new place for humans to be just give us a few years. Artificial intelligence is allowing us to see deeper into whale communication than we've ever seen before."

But not everyone is convinced, with experts warning of an anthropocentric focus on language which risks forcing us to view things from one perspective.

Young, though, describes the research as an "incremental step" towards understanding these giants of the deep. "We're starting to put the pieces of the puzzle together," she says. And perhaps if we could listen and really understand something like how important sperm whales' grandmothers are to them – something that resonates with humans, she says – we could drive change in human behaviour in order to protect them.

Categorised as "vulnerable" by the International Union for Conservation of Nature (IUCN), sperm whales are still recovering from commercial hunting by humans in the 19th and 20th Centuries. And, although such whaling has been banned for decades, sperm whales face new threats such as climate change, ocean noise pollution and ship strikes.

However, Young adds, we're still a long way off from understanding what sperm whales might be saying to each other. "We really have no idea. But the better we can understand these amazing animals, the more we'll know about how we can protect them."



The AI summer – Benedict Evans

A lot of these charts are really about what happens when the utopian dreams of AI maximalism meet the messy reality of consumer behaviour and enterprise IT budgets - it takes longer than you think, and it's complicated (this is also one reason why I think doomers are naive). The typical enterprise IT sales cycle is longer than the time since ChatGPT (GPT-3.5) was launched, and Morgan Stanley's latest CIO survey says that 30% of big company CIOs don't expect to deploy anything before 2026. They might be being too cautious, but the cloud adoption chart above (especially the expectation data) suggests the opposite. Remember, also, that the Bain "Production" data only means that this is being used for something, somewhere, not that it's taken over your workflows.

Stepping back, though, the very speed with which ChatGPT went from a science project to 100m users might have been a trap (a little as NLP was for Alexa). LLMs look like they work, and they look generalised, and they look like a product - the science of them delivers a chatbot and a chatbot looks like a product. You type something in and you get magic back! But the magic might not be useful, in that form, and it might be wrong. It looks like a product, but it isn't.

Microsoft's failed and forgotten attempt to bolt this onto Bing and take on Google at the beginning of last year is a good microcosm of the problem. LLMs look like better databases, and they look like search, but, as we've seen since, they're wrong enough, and the wrong is hard enough to manage, that you can't just give the user a raw prompt and a raw output - you need to build a lot of dedicated product around that, and even then it's not clear how useful this is. Firing LLM web search out of the gate was falling into that trap. Satya Nadella said he wanted to make Google dance, but ironically the best way to compete with Bing Copilot might have been to sit it out - to wait, watch, learn, and work this through before launching anything (if Wall Street had allowed that, of course).

The rush to bolt this into search came from competitive pressure, and stock market pressure, but more fundamentally from the sense that this is the next platform shift and you have to grab it with both hands. That's much broader than Google. The urgency is accelerated by that "standing on the shoulders of giants" moment - you don't have time to wait for people to buy devices - and from the way these things look like finished products. And meanwhile, the firehose of cash that these companies produced in the last decade has collided with the enormous capital-intensity of cutting-edge LLMs like matter meeting anti-matter.

In other words - these things are the future and will change everything, right now, and they need all this money, and we have all this money.

As a lot of people have now pointed out, all of that adds up to a stupefyingly large amount of capex (and a lot of other investment too) being pulled forward for a technology that's mostly still only in the experimental budgets.


Intuit's CEO continues to bet the company on AI and data – Fortune

Good morning. Big tech companies are readjusting personnel in the age of artificial intelligence. This includes Google, which informed its employees in April that it is restructuring its finance team to redistribute resources toward AI. The latest example is software giant Intuit.

I reported yesterday that the Fortune 500 company – known for products like QuickBooks, Credit Karma, and TurboTax – is laying off approximately 1,800 of its global employees, which amounts to 10% of its workforce and includes some executives. CEO Sasan Goodarzi wrote an email to employees announcing "the very difficult decisions my leadership team and I have made."

Goodarzi wrote that Intuit's transformation journey, including parting with the 1,800 employees, is part of its strategy to increase investments in priority focus areas. Those areas include AI and generative AI, like its GenAI-powered financial assistant called Intuit Assist, while Intuit at the same time reimagines its products from traditional workflows to AI-native experiences. The strategy also focuses on money movement, mid-market expansion for small businesses, and international growth.

"We do not do layoffs to cut costs, and that remains true in this case," Goodarzi wrote. Intuit plans to hire approximately 1,800 new people with strategic functional skill sets, primarily in engineering, product, and customer-facing roles such as sales, customer success, and marketing – and expects its overall headcount to grow in its fiscal year 2025, which begins Aug. 1.

Of the employees who will depart Intuit, 1,050 are not meeting expectations based on a formal performance management process, according to the company. And it's reducing the number of executives – directors, SVPs, and EVPs – by approximately 10%, while expanding certain executive roles and responsibilities.

All departing U.S. employees will receive a package that includes a minimum of 16 weeks of pay, and two additional weeks for every year of service. They will have 60 days before they leave the company, with a last day of Sept. 9. Employees outside the U.S. will receive similar support, according to the company.

Intuit earned $14.4 billion in revenue in its fiscal year 2023, moving up 24 spots on the Fortune 500. For the period ending April 30, Intuit reported revenue of $6.7 billion, up 12%.

AI is beginning to fundamentally change business, according to McKinsey. Interest in generative AI has intensified a spotlight on a broader set of AI capabilities at organizations. The firm's recently published global survey finds AI adoption has risen this year to 72%. For the past six years, AI adoption by respondents' organizations had hovered at about 50%. Half of respondents said their companies have adopted AI in two or more business functions, up from less than a third of respondents who said the same in 2023.

In September, my Fortune colleague Geoff Colvin reported on Intuit's massive strategy reset putting AI at the center of the business. Colvin wrote: "Intuit has a long AI head start against its competitors, including H&R Block, Cash App, TaxSlayer, Xero, FreshBooks, and others. The company is hoping its early investment will produce a network effect, in which good AI-generated recommendations attract more customers, bringing in more data, improving the company's products, therefore attracting more customers."

Goodarzi told Colvin his objective since becoming CEO in 2019: "The decision I made was, as a team, we're going to bet the company on data and AI."

Sheryl Estrada sheryl.estrada@fortune.com

Monish Patolawala was named EVP and CFO at ADM (NYSE: ADM) effective Aug. 1, succeeding Ismael Roig, who has been serving as ADM's interim CFO since January. Former ADM CFO Vikram Luthar resigned amid an investigation of accounting issues. Patolawala brings to ADM more than 25 years of experience. He most recently served as president and CFO of 3M Company. Before 3M, Patolawala spent more than two decades at GE in various finance roles, including as CFO of GE Healthcare and as head of operational transformation for all of GE.

Gordon Brooks was named interim CFO at Eli Lilly (NYSE: LLY), effective July 15, according to an SEC filing. Brooks is currently Lilly's group vice president, controller and corporate strategy. Anat Ashkenazi resigned as CFO and EVP at Lilly in June and will join Alphabet Inc. as CFO. Brooks has worked at Lilly for almost 30 years, serving in several divisional CFO roles.

Grant Thornton has released its Q2 2024 CFO survey, which finds that 58% of CFOs surveyed are optimistic about the U.S. economy. Another key finding is that CFOs continue to prioritize AI and technology.

The portion of CFOs who are either using generative AI or exploring potential uses rose to an all-time high of 94% in the Q2 survey, compared to previous quarters, according to Grant Thornton. Of those using generative AI, 74% said it's being applied to data analysis and business intelligence in Q2, compared to 66% who said the same in Q1. And in Q2, 63% said they are deploying generative AI to assist with cybersecurity and risk management, compared to 47% in the previous quarter.

"The business environment is ripe for growth, but CFOs must manage costs to capitalize on it," Paul Melville, national managing principal of CFO Advisory for Grant Thornton, said in a statement.

The findings are based on a survey of more than 225 senior financial leaders.

"Pulse on Workforce Strategy: Biggest Concerns and Key Factors Driving Investment Decisions" is a new report by global consulting firm RGP. Some key findings: 81% of financial decision-makers surveyed are planning to increase investment in workforce development this year, and 80% said their organization is currently investing in one or more digital transformation initiatives.

The data is based on a survey of 213 CFOs and finance leaders at the director level or above at U.S. companies earning from $50 million to more than $500 million in revenue.

"The future of AI is not predestined – it is ours to shape."

Steve Hasker, the CEO of Thomson Reuters, writes in a Fortune opinion piece: "Knowledge workers don't seem to think AI will replace them – but they expect it to save them 4 hours a week in the next year."


Storage: The unsung hero of AI deployments – CIO

"Offline batch processing has lower memory requirements than real-time workloads," Karan says. In some cases, secondary storage options can be used to hold vast amounts of data needed for training and running AI models, she adds.

Choosing the right storage option also depends on the often-mentioned "data gravity": the size of the data set, whether it can be moved to the cloud for processing, or whether it makes sense to bring the processing to the data. In some AI projects, the data storage is co-located in a data center with the AI compute, in another public cloud, or at the edge where the data is created.

Enterprises have many other factors to consider, including security and regulatory or compliance challenges. "With cloud storage, networking, distance, and latency are factors here, but they must consider the added cost variable," Karan says.


JPMorgan Chase Invests in Infrastructure, AI to Boost Market Share – PYMNTS.com

J.P. Morgan Chase is reportedly enhancing its competitive capabilities to remain the biggest bank in the United States.

The bank is modernizing its infrastructure and data and using artificial intelligence and payments, Marianne Lake, CEO of consumer and community banking at J.P. Morgan Chase, told Reuters in an interview posted Thursday (July 11).

"These investments will ensure that we continue to be the leader even five to 10 years from now," Lake said, per the report.

Lake also said J.P. Morgan aims to boost its market share, increasing its share of U.S. retail deposits from 11.3% to 15% and its share of the nations spending on its credit cards from 17% to 20%, according to the report.

"While we are not putting any timeline on it, our strategies are geared towards achieving it," Lake said, per the report.

J.P. Morgan added $92 billion in deposits with its acquisition of failed bank First Republic last year, the report said. Federal law prohibits banks that hold 10% of U.S. deposits from growing through acquisitions, unless they're buying a failed bank.

Lake said J.P. Morgan would do so again if it was important to the ecosystem, adding that she did not hope for more bank failures, according to the report.

With J.P. Morgan set to report its earnings Friday (July 12), industry observers are watching for any news of a potential successor to CEO Jamie Dimon, who has served in that role since 2006, the report said.

The bank's board has said that Lake is one of four potential successors to Dimon, per the report.

It was reported in February that J.P. Morgan plans to open more than 500 new bank branches over the next three years, expanding its presence in areas where it lacks representation.

The bank already has the largest branch network, with 4,897 branches. It added 650 new ones over the previous five years.

Lake said at the time that J.P. Morgan had less than a 5% branch share in 17 of the top 50 markets it aims to expand into.

The bank's earnings report Friday will arguably set the tone for the macro-outlook governing consumer spending and business resilience, PYMNTS reported Monday (July 8).


Japan Enhances AI Sovereignty With Advanced ABCI 3.0 Supercomputer – NVIDIA Blog

Enhancing Japan's AI sovereignty and strengthening its research and development capabilities, Japan's National Institute of Advanced Industrial Science and Technology (AIST) will integrate thousands of NVIDIA H200 Tensor Core GPUs into its AI Bridging Cloud Infrastructure 3.0 supercomputer (ABCI 3.0). The HPE Cray XD system will feature NVIDIA Quantum-2 InfiniBand networking for superior performance and scalability.

ABCI 3.0 is the latest iteration of Japan's large-scale Open AI Computing Infrastructure designed to advance AI R&D. This collaboration underlines Japan's commitment to advancing its AI capabilities and fortifying its technological independence.

"In August 2018, we launched ABCI, the world's first large-scale open AI computing infrastructure," said AIST Executive Officer Yoshio Tanaka. "Building on our experience over the past several years managing ABCI, we're now upgrading to ABCI 3.0. In collaboration with NVIDIA and HPE, we aim to develop ABCI 3.0 into a computing infrastructure that will advance further research and development capabilities for generative AI in Japan."

"As generative AI prepares to catalyze global change, it's crucial to rapidly cultivate research and development capabilities within Japan," said AIST Solutions Co. Producer and Head of ABCI Operations Hirotaka Ogawa. "I'm confident that this major upgrade of ABCI in our collaboration with NVIDIA and HPE will enhance ABCI's leadership in domestic industry and academia, propelling Japan towards global competitiveness in AI development and serving as the bedrock for future innovation."

ABCI 3.0 is constructed and operated by AIST, its business subsidiary, AIST Solutions, and its system integrator, Hewlett Packard Enterprise (HPE).

The ABCI 3.0 project follows support from Japan's Ministry of Economy, Trade and Industry, known as METI, for strengthening its computing resources through the Economic Security Fund, and is part of a broader $1 billion initiative by METI that includes both ABCI efforts and investments in cloud AI computing.

NVIDIA is closely collaborating with METI on research and education following a visit last year by company founder and CEO Jensen Huang, who met with political and business leaders, including Japanese Prime Minister Fumio Kishida, to discuss the future of AI.

Huang pledged to collaborate on research, particularly in generative AI, robotics and quantum computing, to invest in AI startups and provide product support, training and education on AI.

During his visit, Huang emphasized that "AI factories" – next-generation data centers designed to handle the most computationally intensive AI tasks – are crucial for turning vast amounts of data into intelligence.

"The AI factory will become the bedrock of modern economies across the world," Huang said during a meeting with the Japanese press in December.

With its ultra-high-density data center and energy-efficient design, ABCI provides a robust infrastructure for developing AI and big data applications.

The system is expected to come online by the end of this year and offer state-of-the-art AI research and development resources. It will be housed in Kashiwa, near Tokyo.


NVIDIA technology forms the backbone of this initiative, with hundreds of nodes, each equipped with eight NVLink-connected H200 GPUs, providing unprecedented computational performance and efficiency.

NVIDIA H200 is the first GPU to offer over 140 gigabytes (GB) of HBM3e memory at 4.8 terabytes per second (TB/s). The H200's larger and faster memory accelerates generative AI and LLMs, while advancing scientific computing for HPC workloads with better energy efficiency and lower total cost of ownership.
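Memory bandwidth matters so much because LLM token generation is typically memory-bandwidth-bound: each generated token requires streaming the model's weights from memory. A rough upper bound on single-stream decode speed (illustrative arithmetic only, ignoring batching, KV-cache traffic, and multi-GPU parallelism; the model size is an assumption):

```python
# Back-of-envelope ceiling on decode throughput, assuming generation is
# limited purely by streaming the weights once per token from HBM.

HBM_BANDWIDTH_TBS = 4.8   # H200 memory bandwidth in TB/s (figure from the article)
MODEL_PARAMS_B = 70       # assumed model size, billions of parameters
BYTES_PER_PARAM = 2       # fp16/bf16 weights

bytes_per_token = MODEL_PARAMS_B * 1e9 * BYTES_PER_PARAM
tokens_per_sec = HBM_BANDWIDTH_TBS * 1e12 / bytes_per_token

print(f"~{tokens_per_sec:.0f} tokens/sec upper bound per GPU, single stream")
```

Real systems batch many requests to amortize each weight read, which is why faster, larger HBM translates directly into serving capacity rather than just single-user latency.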

The integration of advanced NVIDIA Quantum-2 InfiniBand with In-Network Computing – where networking devices perform computations on data, offloading the work from the CPU – ensures efficient, high-speed, low-latency communication, crucial for handling intensive AI workloads and vast datasets.

ABCI boasts world-class computing and data processing power, serving as a platform to accelerate joint AI R&D with industries, academia and governments.

METI's substantial investment is a testament to Japan's strategic vision to enhance AI development capabilities and accelerate the use of generative AI.

By subsidizing AI supercomputer development, Japan aims to reduce the time and costs of developing next-generation AI technologies, positioning itself as a leader in the global AI landscape.


Google says Gemini AI is making its robots smarter – The Verge

Google is training its robots with Gemini AI so they can get better at navigation and completing tasks. The DeepMind robotics team explained in a new research paper how using Gemini 1.5 Pro's long context window – which dictates how much information an AI model can process – allows users to more easily interact with its RT-2 robots using natural language instructions.

This works by filming a video tour of a designated area, such as a home or office space, with researchers using Gemini 1.5 Pro to make the robot watch the video to learn about the environment. The robot can then carry out commands based on what it has observed, using verbal and/or image outputs – such as guiding users to a power outlet after being shown a phone and asked "where can I charge this?" DeepMind says its Gemini-powered robot had a 90 percent success rate across over 50 user instructions that were given in a 9,000-plus-square-foot operating area.
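The idea of grounding a command in a remembered tour can be caricatured with a toy retrieval loop: the tour becomes a list of (scene description, location) memories, and an instruction is matched to the memory with the most word overlap. This is a deliberately simplified stand-in for illustration only; the real system uses Gemini 1.5 Pro's multimodal reasoning over video frames, not keyword matching, and the scenes and locations below are invented:

```python
# Toy stand-in for tour-grounded navigation: match an instruction to the
# most relevant remembered scene by word overlap. The actual DeepMind
# pipeline uses a multimodal model, not this heuristic.

tour_memory = [
    ("a desk with a phone and a power outlet on the wall", "office desk"),
    ("a kitchen counter with a coffee machine", "kitchen"),
    ("a whiteboard next to the meeting room door", "meeting room"),
]

def navigate(instruction: str) -> str:
    """Return the remembered location whose description best matches the instruction."""
    words = set(instruction.lower().split())
    best = max(tour_memory, key=lambda m: len(words & set(m[0].lower().split())))
    return best[1]

print(navigate("where can I charge this phone"))  # matches the outlet scene
```

The planning behaviour described next (fridge, check for Cokes, report back) is exactly what this kind of lookup cannot do: it requires chaining observations into multi-step goals, which is where the long-context model earns its keep.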

Researchers also found preliminary evidence that Gemini 1.5 Pro enabled its droids to plan how to fulfill instructions beyond just navigation. For example, when a user with lots of Coke cans on their desk asks the droid if their favorite drink is available, the team said Gemini knows that the robot should navigate to the fridge, inspect if there are Cokes, and then return to the user to report the result. DeepMind says it plans to investigate these results further.

The video demonstrations provided by Google are impressive, though the obvious cuts after the droid acknowledges each request hide that it takes between 10 and 30 seconds to process these instructions, according to the research paper. It may take some time before we're sharing our homes with more advanced environment-mapping robots, but at least these ones might be able to find our missing keys or wallets.
