Category Archives: Deep Mind

Former Google Deepmind Researchers Assemble Luminaries Across Music And Tech To Launch Udio, A New AI … – PR Newswire

Backed by a16z, with participation from angel investors including will.i.am, Common, Kevin Wall, Tay Keith, Steve Stoute's UnitedMasters, Mike Krieger (co-founder and CTO of Instagram) and Oriol Vinyals (head of Gemini at Google), Udio enables everyone from classically trained musicians, to those with pop star ambitions, to hip hop fans, to people who just want to have fun with their friends, to create awe-inspiring songs in mere moments.

NEW YORK, April 10, 2024 /PRNewswire/ -- Udio, a company that leverages AI to easily create extraordinary and original music, today announced the public launch of its app at udio.com. Previously only available in closed beta, where the app was regularly played with by some of the biggest names in the music industry, Udio was developed by former Google DeepMind researchers with a mission of making it easy for anyone to create emotionally resonant music in an instant. Whether it is recording a cherished memory in song, generating funny soundtracks for memes, or creating full length tracks for professional release, Udio will expand how everyone creates and shares music.

"There is nothing available that comes close to the ease of use, voice quality and musicality of what we've achieved with Udio - it's a real testament to the folks we have involved," said David Ding, Co-founder and CEO of Udio. "At every stage of development, we talked to people in the industry about how we could bring this technology to market in a way that benefits both artists and musicians. We gathered feedback from some of the most prolific artists and music producers likewill.i.am, Common and Tay Keith, to ensure that everything they thought would enhance the experience would be available. We hold ourselves to the highest standards and we believe we have achieved something truly remarkable, so we can't wait to get Udio into the hands of music lovers worldwide."

With superior sound quality and musicality that meet professional standards, Udio was designed to make song creation as easy as possible. In just a few steps, users simply type a description of the music genre they want to make, provide the subject or personalized lyrics, and indicate artists that inspire them. In less than 40 seconds, Udio works its magic and produces fully mastered tracks. Once a track has been created, users can further edit their creations through the app's "remix" feature. This enables iteration on existing tracks through text descriptors, turning everyday creators into full-blown producers. It even enables users to extend their songs, edit them to have different sounds, and use them as the basis of inspiration for their next creation.

Once finished, users can then share their new creations with the app's built-in community of music lovers, for feedback and collaboration.

"This is a brand new Renaissance and Udio is the tool for this era's creativity-with Udio you are able to pull songs into existence via AI and your imagination," said will.i.am, multi-platinum artist and producer.

While in beta, Udio has also inspired some of the most prolific musicians, producers and artists in their next creations. Designed to be artist friendly, Udio helps musicians not only create songs faster, but also test and play around with lyrics in an all new way. Through its extensive network, Udio is also in discussions with a number of artists who want to leverage AI in their workflows and find new ways to monetize through its tech.

"Good music stirs up deep emotions in all of us, and connects us to each other through shared experiences. Nothing will ever replace human artists and the unique connections they make with their fans," said Matt Bornstein, Partner at Andreessen Horowitz. "But we think Udio - with its incredible musicality, creativity, and vocals - is a brand new way for us to create and enjoy music together. We're thrilled to back this stellar group of researchers in their mission to make AI music a reality."

Udio's team is working alongside artists on all aspects of product and business development. The company has also secured leading investors in the seed round including a16z, as well as prominent tech and music angels like Mike Krieger (Cofounder & CTO of Instagram) and Oriol Vinyals (head of Gemini at Google).

"I've always been drawn to music and creation tools, and after I demoed Udio, I was blown away," said Mike Krieger. "It's early days but just like Instagram brought photography sharing to the masses, I believe Udio has the power to bring music creation to the masses as well. I'm thrilled to be a product advisor on their groundbreaking journey."

"UnitedMasters embraces cutting-edge technology that can unlock unprecedented opportunities for independent artists, and AI is reshaping how we create, consume, and experience music. As we embrace this transformative technology, we must ensure it amplifies creativity, empowers artists, and enriches the music industry without compromising ownership. It's imperative that we champion transparency, accountability, and ownership in how this technology benefits artists, shaping a future where innovation and creativity can thrive," said Steve Stoute, CEO and Founder of UnitedMasters.

For more information on Udio and how to access, please visit udio.com.

About Udio

Udio is a company that leverages proprietary AI to make amazing sound creation fun. Founded in New York in December 2023 by former Google DeepMind researchers, Udio's mission is to bring world-changing products to market. With the launch of its new app of the same name, Udio is lauded by many in the industry as being the first to democratize song creation. To learn more about Udio, its founders and where to access the app, please visit udio.com or follow its social channels: Twitter @udiomusic, Instagram udiomusic, TikTok @udiomusic, YouTube @udio_music.

Notice: If your editorial policy requires the use of full legal names, will.i.am's is William Adams. All others shown in Wikipedia and previously published stories are incorrect.

Media Contact: Rachel Rogers 310-770-4917

SOURCE Udio

Read the rest here:
Former Google Deepmind Researchers Assemble Luminaries Across Music And Tech To Launch Udio, A New AI ... - PR Newswire

Google DeepMind Co-Founder Voices Concerns Over AI Hype: ‘We’re Talking About All Sorts Of Things That Are Just … – TradingView

The co-founder of Google DeepMind, Sir Demis Hassabis, has voiced concerns that the surge in funding for artificial intelligence (AI) is leading to exaggerated hype, overshadowing the actual scientific progress in the sector.

What Happened: Hassabis shared his worries that the billions being poured into generative AI startups and products are creating hype akin to the crypto buzz. This hype, he fears, is clouding the impressive advancements being made in AI and "brings with it a whole attendant bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, crypto or whatever," the Financial Times reported.

"In a way, AI's not hyped enough but in some senses, it's too hyped. We're talking about all sorts of things that are just not real, he said,

The launch of the ChatGPT chatbot by OpenAI in November 2022 triggered a rush among investors, as startups hustled to create and launch generative AI and attract venture capital. CB Insights, a market analysis firm, reported that investors put $42.5 billion into 2,500 AI startup equity rounds in the previous year.

Investors have also been attracted to the Magnificent Seven tech companies, including Microsoft, Alphabet, and Nvidia, which are at the forefront of the AI revolution. However, companies are under scrutiny from regulators for making false AI-related claims.


Despite the hype, Hassabis is confident that AI is one of the most transformative inventions in human history. He pointed to DeepMind's AlphaFold model, launched in 2021, as a key example of how AI can speed up scientific research. AlphaFold has been used to predict the structures of 200 million proteins and is now being used by over 1 million biologists worldwide.

Why It Matters: Previously, concerns about an AI bubble have been raised by several experts. In February, Apollo Global Management's Chief Economist, Torsten Sløk, sounded an alarm on the AI bubble, warning that it's "Bigger Than The 1990s Tech Bubble."

In March, Richard Windsor, a seasoned tech stock analyst, highlighted potential indicators of an impending market correction due to the ongoing excitement surrounding AI. However, Ken Griffin, CEO of Citadel, expressed confidence in Nvidia's position in the AI market despite the uncertainties.


Image by George Gillams via Flickr

Engineered by Benzinga Neuro, Edited by Pooja Rajkumari

The GPT-4-based Benzinga Neuro content generation system exploits the extensive Benzinga Ecosystem, including native data, APIs, and more to create comprehensive and timely stories for you.



Continue reading here:
Google DeepMind Co-Founder Voices Concerns Over AI Hype: 'We're Talking About All Sorts Of Things That Are Just ... - TradingView

Google Deepmind CEO says AI industry is full of ‘hype’ and ‘grifting’ – ReadWrite

The CEO of Google's AI division DeepMind, Sir Demis Hassabis, has called out the amount of hype circulating in the industry, which sometimes obscures genuine developments.

Having co-founded DeepMind in 2010, Sir Demis has years of experience in the field of AI from well before machine learning tools hit the mainstream. More and more everyday people now use AI tools in their lives, but the sheer speed at which the technology has taken off leads to some confusion about what the future could hold.

In an interview with the Financial Times, Sir Demis compares the AI explosion to the crypto boom of the last few years, highlighting that the billion-dollar investment into generative AI start-ups and products "brings with it a whole attendant bunch of hype and maybe some grifting."

"Some of that has now spilled over into AI, which I think is a bit unfortunate. It clouds the science and the research, which is phenomenal," the CEO continued. "In a way, AI's not hyped enough but in some senses, it's too hyped. We're talking about all sorts of things that are just not real."

This boom can largely be traced back to the launch of OpenAI's ChatGPT tool in November 2022, which brought AI-powered chat to the mainstream for the first time. Other start-ups raced to release similar or competitive tools, backed by a collective $42.5bn from VC groups across 2,500 AI start-up equity rounds in 2023.

Major tech companies like Microsoft, Alphabet, and Nvidia have also risen to the challenge, each bringing their own AI innovation, alongside Google's own push via DeepMind.

Whether under- or over-hyped, it certainly seems true that the average person's understanding of AI is somewhat limited. The future possibilities of AI leave a lot to be discovered, something that Sir Demis himself is looking forward to.

"I think we're only scratching the surface of what I believe is going to be possible over the next decade-plus," Sir Demis stated. "We're at the beginning, maybe, of a new golden era of scientific discovery, a new Renaissance."

Featured image: Ideogram

Go here to see the original:
Google Deepmind CEO says AI industry is full of 'hype' and 'grifting' - ReadWrite

Researchers at Google DeepMind Present Gecko: A Compact and Versatile Embedding Model Powered by the Vast World Knowledge of LLMs – MarkTechPost


Here is the original post:
Researchers at Google DeepMind Present Gecko: A Compact and Versatile Embedding Model Powered by the Vast World Knowledge of LLMs - MarkTechPost

Google AI chief says AI hype distracts from science and research – Quartz

The billions of dollars going into AI are reminiscent of crypto hype, which is getting in the way of science and research, Google's AI chief is warning.


The investment into AI "brings with it a whole attendant bunch of hype and maybe some grifting," Demis Hassabis, co-founder and CEO of DeepMind, which was acquired by Google in 2014, told the Financial Times. He compared the phenomenon to crypto and "other hyped-up areas" and said similar hype "has now spilled over into AI, which I think is a bit unfortunate."

Hassabis also said the hype around AI "clouds the science and the research, which is phenomenal. In a way, AI's not hyped enough but in some senses it's too hyped. We're talking about all sorts of things that are just not real." Google DeepMind did not immediately respond to a request for comment.

Amid a tight race to develop bigger and better AI models, startups are seeing millions in investments (and even billions) that are boosting their valuations into the billions, leading some analysts to warn of an AI bubble.

After chipmaker Nvidia, which is behind the world's most sought-after hardware powering the AI industry's leading models and products, became the first semiconductor company to reach a $2 trillion valuation in February, some analysts were wary of how far the company can go.

"Another blockbuster quarter from Nvidia raises the question of how long its soaring performance will last," Jacob Bourne, a senior analyst at Insider Intelligence, said after Nvidia beat earnings expectations. "Nvidia's near-term market strength is durable, though not invincible."

After Nvidia reached its $2 trillion valuation, Torsten Sløk, chief economist at Apollo Global Management, warned that the current AI hype has surpassed that of the 1990s dot-com bubble.

Despite his warning, Hassabis said he believes the industry is only scratching the surface of what is possible with the technology. Alongside its AlphaFold model, which predicts the structures of proteins, Hassabis said Google DeepMind is also using AI to advance drug discovery research, weather prediction models, and nuclear fusion technology.

"We're at the beginning, maybe, of a new golden era of scientific discovery, a new Renaissance," he told the Financial Times.

Read this article:
Google AI chief says AI hype distracts from science and research - Quartz

Liverpool Team Up With Deepmind To Always "take Corners Quickly" – Dataconomy

Some moments in football are unforgettable, like Liverpool's epic "corner taken quickly" goal in the 2019 UEFA Champions League semi-finals. It led to Divock Origi's goal and a remarkable comeback. That's why Liverpool knows corner kicks are big chances to score, but planning them isn't easy. Can AI help? Google's DeepMind team thinks so and has started to create TacticAI, a smart program that predicts what might happen during a corner kick, suggests the best strategies for teams like Liverpool, and could help create more moments like that one.

Did Jurgen Klopp's departure at the end of the season push Liverpool to do this, or did they simply not want to miss out on AI developments? Either way, if this project succeeds, it may change football as we know it.

TacticAI is an artificial intelligence system developed in partnership with Liverpool FC and AI researchers to enhance football tactics, particularly focusing on corner kicks. This innovative tool utilizes advanced AI techniques to analyze past corner kick scenarios and provide valuable insights to coaches.

Here's how it works: the system employs a geometric deep learning approach, which allows it to represent corner-kick scenarios as graphs.

In these graphs, each player is depicted as a node, containing various attributes such as position, velocity, and height, while the relationships between players are represented by edges. By leveraging this unique representation, TacticAI can accurately predict potential outcomes of corner kicks and propose tactical adjustments to optimize success.
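
To make the graph representation above concrete, here is a minimal sketch of how a single frozen corner-kick frame might be encoded, assuming networkx as the graph library; the attribute names, the fully connected edge structure and the distance-based edge weights are illustrative assumptions, not details of TacticAI's actual implementation.

```python
# Illustrative sketch only: encode a corner-kick snapshot as a graph in the
# spirit of the description above (players as nodes carrying position,
# velocity and height; relationships between players as edges).
import networkx as nx

def build_corner_kick_graph(players):
    """players: list of dicts with keys id, team, x, y, vx, vy, height_cm."""
    g = nx.Graph()
    for p in players:
        g.add_node(
            p["id"],
            team=p["team"],
            position=(p["x"], p["y"]),
            velocity=(p["vx"], p["vy"]),
            height_cm=p["height_cm"],
        )
    # Fully connect the players so a graph network can reason over every
    # pairwise relationship; each edge stores the on-pitch distance.
    ids = [p["id"] for p in players]
    pos = {p["id"]: (p["x"], p["y"]) for p in players}
    for i, a in enumerate(ids):
        for b in ids[i + 1:]:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            g.add_edge(a, b, distance=(dx * dx + dy * dy) ** 0.5)
    return g

# Example: two attackers and one defender at the moment the corner is taken.
snapshot = [
    {"id": "A1", "team": "attack", "x": 30.0, "y": 10.0, "vx": 2.1, "vy": 0.3, "height_cm": 185},
    {"id": "A2", "team": "attack", "x": 28.5, "y": 12.0, "vx": 1.4, "vy": -0.8, "height_cm": 178},
    {"id": "D1", "team": "defend", "x": 29.0, "y": 11.0, "vx": -0.5, "vy": 0.2, "height_cm": 190},
]
graph = build_corner_kick_graph(snapshot)
print(graph.number_of_nodes(), graph.number_of_edges())  # 3 nodes, 3 edges
```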

What's really cool about TacticAI is that it can come up with different scenarios for corner kicks, so coaches can try out new ideas and see what works best. It also helps coaches review past plays more easily, so they can learn from what's happened before.

After lots of testing with real football experts, TacticAI has proven to be really helpful. Coaches can rely on it to give them smart advice that could make a big difference in their team's performance.


In simple terms, TacticAI is like having a super-smart assistant coach who knows all about football tactics. It's there to help coaches make better decisions during corner kicks, which could lead to more goals and more wins for their team.

Visit link:
Liverpool Team Up With Deepmind To Always "take Corners Quickly" - Dataconomy

Separating the Hype From Reality in AI – PYMNTS.com

The rapid rise of artificial intelligence (AI) has sparked a heated debate among experts, with some warning that the hype surrounding the technology may be overshadowing genuine scientific advancements.

DeepMind Co-Founder Demis Hassabis recently drew parallels between the current AI frenzy and the cryptocurrency boom, raising concerns about the potential impact on the field's progress.

The debate over whether AI is overpromised has significant implications for the commercial landscape as businesses rush to capitalize on the technology's potential. Observers say striking a balance between enthusiasm and realism will be crucial for the healthy growth of AI-driven commerce.

"While generative AI is powerful, it is still only one segment of AI," Muddu Sudhakar, co-founder and CEO of the generative AI payments platform Aisera, told PYMNTS. "AI encompasses a variety of categories. But with so much attention on generative AI, it means that these areas get neglected and crowded out. It could also limit research, which could mean less innovation."

Interest in AI is growing. According to the PYMNTS Intelligence report "Consumer Interest in Artificial Intelligence," the average consumer uses around five AI technologies weekly, including web browsing, navigation apps, and online recommendations. Nearly two-thirds of Americans are interested in AI assistants for tasks like booking travel, with AI enhancing the personalization of in-car experiences. These intelligent systems, leveraging generative AI, tailor recommendations to users' behaviors and preferences far beyond simple list-based suggestions.

Hassabis expressed concerns to the Financial Times regarding the surge of investment in generative AI startups and products, likening the frenzy to other speculative bubbles. "The billions of dollars being poured into generative AI startups and products brings with it a whole attendant bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, like crypto or whatever," he said.

Some experts say the hype surrounding AI has reached a fever pitch, with grandiose promises and astronomical investments obscuring the reality of the technology's current capabilities.

One of the main issues with AI hype is that it creates unrealistic expectations among the public and investors. When companies make bold claims about their AI-powered products or services, they often fail to deliver on those promises, leading to disappointment and erosion of trust.

"Most people in the AI space have good intentions and don't want to mislead consumers or users," Zohar Bronfman, co-founder and CEO of Pecan AI, told PYMNTS. "I don't doubt that they're working hard to deliver the best AI products they can. What's been ignored, though, is that generative AI so far just hasn't provided significant business value. It's fascinating and powerful, but so far, most business users have come up empty-handed when they try to use it to really drive business impact."

Sudhakar pointed out the excessive investment in large language models (LLMs), suggesting it may overshadow other vital areas of AI research. This focus risks limiting innovation and neglecting emerging technologies that could offer more significant advancements or solutions to pressing challenges in the field.

"How many of these do we need?" he said. "How can you really tell which one is better? It's not clear. This is why I think just a handful of state-of-the-art models will ultimately prevail. That being said, there will be many SLMs [small language models] that address lots of edge cases, but even in this area, many will fade away."

Sudhakar raised a looming issue in AI: the dwindling supply of data necessary to train LLMs. This scarcity, he warned, could become a significant bottleneck in the development and advancement of these models, potentially hindering progress in AI research and applications.

"One alternative is to use synthetic data," he added. "This is an emerging area and could use much more focus."

Sudhakar also highlighted the importance of shifting focus toward what will eventually succeed the current transformer models in AI. Based on a deep learning architecture, transformer models have revolutionized how machines understand and generate human-like text by enabling them to process each word in relation to all the other words in a sentence rather than one at a time.

He added, "This is a powerful model, but it has limitations, such as with hallucinations, which are based on the underlying probabilities."
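
The "every word in relation to every other word" behaviour described above comes from the self-attention mechanism at the core of transformers. The NumPy sketch below illustrates that idea in its simplest form, with the separate query, key and value projections of a real transformer omitted for brevity; it is purely illustrative, not any production model's code.

```python
# Toy illustration of self-attention: every token in a sequence is scored
# against every other token, and each output is a weighted mix of the whole
# sequence. Real transformers add learned query/key/value projections,
# multiple heads and feed-forward layers on top of this.
import numpy as np

def self_attention(x: np.ndarray) -> np.ndarray:
    """x: (sequence_length, model_dim) token embeddings."""
    d = x.shape[-1]
    scores = x @ x.T / np.sqrt(d)                      # all-pairs similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over the sequence
    return weights @ x                                 # each output mixes every token

tokens = np.random.default_rng(0).normal(size=(5, 8))  # 5 "words", 8-dim embeddings
print(self_attention(tokens).shape)                    # (5, 8)
```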

"While generative AI gets all the attention, the real workhorses of AI, machine learning techniques for prediction and optimization, aren't hyped nearly enough," Bronfman said.

"Tested and proven machine learning methods can quickly take business data and extract a great deal of value," he added. "They may not seem as shiny and new as generative AI, but they definitely shine when they're integrated into business systems the right way. These recognized methods deserve more attention and investment so businesses can achieve the transformative benefits of AI."

Some commenters say that the best use of AI might not be for commerce. Ilia Badeev, head of data science at Trevolution Group, told PYMNTS that the significance of employing AI for nonprofit and scientific endeavors receives inadequate attention.

"I would like to see more hype around AI researchers," he added. "Imagine a ScientistGPT that possesses information from all currently existing textbooks and scientific studies and can use it to advance theoretical and practical science."

Originally posted here:
Separating the Hype From Reality in AI - PYMNTS.com

Google DeepMind’s Fact Quest: Improving Long-form Accuracy In LLMs With SAFE – Dataconomy

Large language models (LLMs) have demonstrated remarkable abilities: they can chat conversationally, generate creative text formats, and much more. Yet, when asked to provide detailed factual answers to open-ended questions, they can still fall short. LLMs may provide plausible-sounding yet incorrect information, leaving users with the challenge of sorting fact from fiction.

Google DeepMind, the leading AI research company, is tackling this issue head-on. Its recent paper, "Long-form factuality in large language models," introduces innovations in both how we measure factual accuracy and how we can improve it in LLMs.

DeepMind started by addressing the lack of a robust method for testing long-form factuality. They created LongFact, a dataset of over 2,000 challenging fact-seeking prompts that demand detailed, multi-paragraph responses. These prompts cover a broad array of topics to test an LLM's ability to produce factual text in diverse subject areas.

The next challenge was determining how to accurately evaluate LLM responses. DeepMind developed the Search-Augmented Factuality Evaluator (SAFE). Here's the clever bit: SAFE itself uses an LLM to make this assessment.

Here's how it works: SAFE uses an LLM to break a long-form response into individual factual claims, issues Google Search queries for each claim, and then reasons over the search results to judge whether each claim is supported.

DeepMind also proposed a new way to score long-form factual responses. The traditional F1 score (used for classification tasks) wasn't designed to handle longer, more complex text. F1@K balances precision (the percentage of provided facts that are correct) against a concept called recall.

Recall takes into account a user's ideal response length: after all, an LLM could achieve perfect precision by providing a single correct fact, even though a detailed answer would serve the user far better.
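
As a rough illustration of the scoring idea, here is a minimal sketch of an F1@K-style calculation, assuming precision is the share of provided facts that are supported and recall credits supported facts only up to the ideal length K; the function and its exact form are an interpretation of the description above, not the paper's reference implementation.

```python
# Minimal sketch of an F1@K-style score: precision rewards correctness of the
# facts provided, recall rewards providing enough supported facts relative to
# K, the user's preferred level of detail.
def f1_at_k(num_supported: int, num_not_supported: int, k: int) -> float:
    num_provided = num_supported + num_not_supported
    if num_provided == 0 or num_supported == 0:
        return 0.0
    precision = num_supported / num_provided
    recall = min(num_supported / k, 1.0)
    return 2 * precision * recall / (precision + recall)

# A single correct fact gets perfect precision but a poor overall score at K=64...
print(round(f1_at_k(1, 0, 64), 3))   # ~0.031
# ...while a detailed, mostly correct answer scores far better.
print(round(f1_at_k(60, 5, 64), 3))  # ~0.93
```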

DeepMind benchmarked a range of large language models of varying sizes, and their findings aligned with the intuition that larger models tend to demonstrate greater long-form factual accuracy. This can be explained by the fact that larger models are trained on massive datasets of text and code, which imbues them with a richer and more comprehensive understanding of the world.

Imagine an LLM like a student who has studied a vast library of books. The more books the student has read, the more likely they are to have encountered and retained factual information on a wide range of topics. Similarly, a larger LLM with its broader exposure to information is better equipped to generate factually sound text.

In order to perform this measurement, Google DeepMind tested the following models: Gemini, GPT, Claude (versions 3 and 2), and PaLM.

DeepMind's study shows a promising path toward LLMs that can deliver more reliable factual information. SAFE achieved accuracy levels that exceeded those of human raters on certain tests.

However, it's crucial to note the limitations:

Search engine dependency: SAFE's accuracy relies on the quality of search results and the LLM's ability to interpret them.

Non-repeating facts: The F1@K metric assumes an ideal response won't contain repetitive information.

Despite potential limitations, this work undeniably moves the needle forward in the development of truthful AI systems. As LLMs continue to evolve, their ability to accurately convey facts could have profound impacts on how we use these models to find information and understand complex topics.

Featured image credit: Freepik

View post:
Google DeepMind's Fact Quest: Improving Long-form Accuracy In LLMs With SAFE - Dataconomy

Google DeepMind unveils ‘superhuman’ AI system that excels in fact-checking, saving costs and improving accuracy – VentureBeat


A new study from Google's DeepMind research unit has found that an artificial intelligence system can outperform human fact-checkers when evaluating the accuracy of information generated by large language models.

The paper, titled "Long-form factuality in large language models" and published on the pre-print server arXiv, introduces a method called Search-Augmented Factuality Evaluator (SAFE). SAFE uses a large language model to break down generated text into individual facts, and then uses Google Search results to determine the accuracy of each claim.

"SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results," the authors explained.
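
The quoted description maps onto a fairly simple pipeline: split the response into individual facts, search for evidence on each, and have the model judge support. The sketch below illustrates that flow, with call_llm and google_search left as placeholder stubs for whatever model and search APIs one would actually wire in; it is an interpretation of the described approach, not DeepMind's open-sourced code.

```python
# Sketch of a SAFE-style evaluation loop as described in the quote above. The
# two stubs must be connected to a real LLM and search API; the prompts and
# function names here are illustrative placeholders.
from typing import Dict, List

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a large language model."""
    raise NotImplementedError("wire up an LLM provider here")

def google_search(query: str) -> List[str]:
    """Placeholder for a web search call; should return result snippets."""
    raise NotImplementedError("wire up a search provider here")

def split_into_facts(response: str) -> List[str]:
    # Ask the LLM to list each self-contained factual claim on its own line.
    listing = call_llm(f"List every individual factual claim in:\n{response}")
    return [line.strip("- ").strip() for line in listing.splitlines() if line.strip()]

def is_supported(fact: str, num_queries: int = 3) -> bool:
    # Multi-step reasoning: issue a few search queries, then ask the LLM
    # whether the accumulated snippets support the fact.
    evidence: List[str] = []
    for _ in range(num_queries):
        query = call_llm(f"Write a search query to verify: {fact}")
        evidence.extend(google_search(query))
    verdict = call_llm(
        "Answer SUPPORTED or NOT SUPPORTED.\n"
        f"Fact: {fact}\nEvidence:\n" + "\n".join(evidence)
    )
    return verdict.strip().upper().startswith("SUPPORTED")

def evaluate_response(response: str) -> Dict[str, int]:
    facts = split_into_facts(response)
    supported = sum(is_supported(f) for f in facts)
    return {"supported": supported, "not_supported": len(facts) - supported}
```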

The researchers pitted SAFE against human annotators on a dataset of roughly 16,000 facts, finding that SAFE's assessments matched the human ratings 72% of the time. Even more notably, in a sample of 100 disagreements between SAFE and the human raters, SAFE's judgment was found to be correct in 76% of cases.


While the paper asserts that LLM agents "can achieve superhuman rating performance," some experts are questioning what "superhuman" really means here.

Gary Marcus, a well-known AI researcher and frequent critic of overhyped claims, suggested on Twitter that in this case "superhuman" may simply mean better than an underpaid crowd worker, rather than a true human fact-checker.

"That makes the characterization misleading," he said. "Like saying that 1985 chess software was superhuman."

Marcus raises a valid point. To truly demonstrate superhuman performance, SAFE would need to be benchmarked against expert human fact-checkers, not just crowdsourced workers. The specific details of the human raters, such as their qualifications, compensation, and fact-checking process, are crucial for properly contextualizing the results.

One clear advantage of SAFE is cost: the researchers found that using the AI system was about 20 times cheaper than human fact-checkers. As the volume of information generated by language models continues to explode, having an economical and scalable way to verify claims will be increasingly vital.

The DeepMind team used SAFE to evaluate the factual accuracy of 13 top language models across 4 families (Gemini, GPT, Claude, and PaLM-2) on a new benchmark called LongFact. Their results indicate that larger models generally produced fewer factual errors.

However, even the best-performing models generated a significant number of false claims. This underscores the risks of over-relying on language models that can fluently express inaccurate information. Automatic fact-checking tools like SAFE could play a key role in mitigating those risks.

While the SAFE code and LongFact dataset have been open-sourced on GitHub, allowing other researchers to scrutinize and build upon the work, more transparency is still needed around the human baselines used in the study. Understanding the specifics of the crowdworkers' background and process is essential for assessing SAFE's capabilities in proper context.

As the tech giants race to develop ever more powerful language models for applications ranging from search to virtual assistants, the ability to automatically fact-check the outputs of these systems could prove pivotal. Tools like SAFE represent an important step towards building a new layer of trust and accountability.

However, it's crucial that the development of such consequential technologies happens in the open, with input from a broad range of stakeholders beyond the walls of any one company. Rigorous, transparent benchmarking against human experts, not just crowdworkers, will be essential to measure true progress. Only then can we gauge the real-world impact of automated fact-checking on the fight against misinformation.


View original post here:
Google DeepMind unveils 'superhuman' AI system that excels in fact-checking, saving costs and improving accuracy - VentureBeat

DeepMind Chief Says Google’s Bungled AI Faces Feature Is Returning Soon – Bloomberg

Google plans to resume a paused artificial intelligence feature that generates images of people in the next couple of weeks, according to the company's top AI executive.

"We hope to have that back online in a very short order," Demis Hassabis, head of the research division Google DeepMind, said on Monday at the Mobile World Congress in Barcelona.

Read the original post:
DeepMind Chief Says Google's Bungled AI Faces Feature Is Returning Soon - Bloomberg