Category Archives: Deep Mind

Researchers at Google DeepMind Present Gecko: A Compact and Versatile Embedding Model Powered by the Vast World Knowledge of LLMs – MarkTechPost


Here is the original post:
Researchers at Google DeepMind Present Gecko: A Compact and Versatile Embedding Model Powered by the Vast World Knowledge of LLMs - MarkTechPost

Liverpool Team Up With Deepmind To Always "take Corners Quickly" – Dataconomy

Some moments in football are unforgettable, like Liverpool's epic "corner taken quickly" goal in the 2019 UEFA Champions League semi-finals. It led to Divock Origi's goal and a remarkable comeback. That's why Liverpool knows corner kicks are big chances to score, but planning them isn't easy. Can AI help? Google's DeepMind team thinks so and has started to create TacticAI, a smart program that predicts what might happen during a corner kick and suggests the best strategies for teams like Liverpool, aiming to create more moments like that comeback.

Did Jurgen Klopp's departure at the end of the season push Liverpool to do this, or did the club simply not want to fall behind on AI developments? Either way, if the project succeeds, it may significantly change football as we know it.

TacticAI is an artificial intelligence system developed in partnership with Liverpool FC and AI researchers to enhance football tactics, particularly focusing on corner kicks. This innovative tool utilizes advanced AI techniques to analyze past corner kick scenarios and provide valuable insights to coaches.

Here's how it works: the system employs a geometric deep learning approach, which allows it to represent corner-kick scenarios as graphs.

In these graphs, each player is depicted as a node, containing various attributes such as position, velocity, and height, while the relationships between players are represented by edges. By leveraging this unique representation, TacticAI can accurately predict potential outcomes of corner kicks and propose tactical adjustments to optimize success.
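To make the graph encoding concrete, here is a minimal sketch of how such a corner-kick graph could be assembled. The class and field names are illustrative assumptions rather than DeepMind's actual schema, and the fully connected edge set is simply the most straightforward choice.

```python
# A hypothetical encoding of a corner-kick scenario as a graph: players are
# nodes carrying attributes (position, velocity, height), and edges connect
# pairs of players so a model can reason over their relationships.
from dataclasses import dataclass

@dataclass
class PlayerNode:
    player_id: str
    team: str          # "attacking" or "defending"
    position: tuple    # (x, y) pitch coordinates in metres
    velocity: tuple    # (vx, vy) in metres per second
    height_cm: float

# Nodes: every player on the pitch when the corner is taken.
nodes = [
    PlayerNode("LIV_9", "attacking", (5.5, 34.0), (2.1, -0.4), 185.0),
    PlayerNode("OPP_4", "defending", (6.0, 33.5), (-1.0, 0.2), 190.0),
    # ... the remaining players
]

# Edges: one per pair of players (a fully connected graph); a real system
# might instead prune edges by distance or tactical role.
edges = [
    (a.player_id, b.player_id)
    for i, a in enumerate(nodes)
    for b in nodes[i + 1:]
]
```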

What's really cool about TacticAI is that it can come up with different scenarios for corner kicks, so coaches can try out new ideas and see what works best. It also helps coaches review past plays more easily, so they can learn from what's happened before.

After lots of testing with real football experts, TacticAI has proven to be really helpful. Coaches can rely on it to give them smart advice that could make a big difference in their team's performance.


In simple terms, TacticAI is like having a super-smart assistant coach who knows all about football tactics. It's there to help coaches make better decisions during corner kicks, which could lead to more goals and more wins for their team.

Visit link:
Liverpool Team Up With Deepmind To Always "take Corners Quickly" - Dataconomy

Separating the Hype From Reality in AI – PYMNTS.com

The rapid rise of artificial intelligence (AI) has sparked a heated debate among experts, with some warning that the hype surrounding the technology may be overshadowing genuine scientific advancements.

DeepMind co-founder Demis Hassabis recently drew parallels between the current AI frenzy and the cryptocurrency boom, raising concerns about the potential impact on the field's progress.

The debate over whether AI is overpromised has significant implications for the commercial landscape as businesses rush to capitalize on the technology's potential. Observers say striking a balance between enthusiasm and realism will be crucial for the healthy growth of AI-driven commerce.

"While generative AI is powerful, it is still only one segment of AI," Muddu Sudhakar, co-founder and CEO of the generative AI payments platform Aisera, told PYMNTS. "AI encompasses a variety of categories. But with so much attention on generative AI, it means that these areas get neglected and crowded out. It could also limit research, which could mean less innovation."

Interest in AI is growing. According to the PYMNTS Intelligence report "Consumer Interest in Artificial Intelligence," the average consumer uses around five AI technologies weekly, including web browsing, navigation apps, and online recommendations. Nearly two-thirds of Americans are interested in AI assistants for tasks like booking travel, with AI enhancing the personalization of in-car experiences. These intelligent systems, leveraging generative AI, tailor recommendations to users' behaviors and preferences far beyond simple list-based suggestions.

Hassabis expressed concerns to the Financial Times regarding the surge of investment in generative AI startups and products, likening the frenzy to other speculative bubbles. "The billions of dollars being poured into generative AI startups and products brings with it a whole attendant bunch of hype and maybe some grifting and some other things that you see in other hyped-up areas, like crypto or whatever," he said.

Some experts say the hype surrounding AI has reached a fever pitch, with grandiose promises and astronomical investments obscuring the reality of the technologys current capabilities.

One of the main issues with AI hype is that it creates unrealistic expectations among the public and investors. When companies make bold claims about their AI-powered products or services, they often fail to deliver on those promises, leading to disappointment and erosion of trust.

"Most people in the AI space have good intentions and don't want to mislead consumers or users," Zohar Bronfman, co-founder and CEO of Pecan AI, told PYMNTS. "I don't doubt that they're working hard to deliver the best AI products they can. What's been ignored, though, is that generative AI so far just hasn't provided significant business value. It's fascinating and powerful, but so far, most business users have come up empty-handed when they try to use it to really drive business impact."

Sudhakar pointed out the excessive investment in large language models (LLMs), suggesting it may overshadow other vital areas of AI research. This focus risks limiting innovation and neglecting emerging technologies that could offer more significant advancements or solutions to pressing challenges in the field.

"How many of these do we need?" he said. "How can you really tell which one is better? It's not clear. This is why I think just a handful of state-of-the-art models will ultimately prevail. That being said, there will be many SLMs [small language models] that address lots of edge cases, but even in this area, many will fade away."

Sudhakar raised a looming issue in AI: the dwindling supply of data necessary to train LLMs. This scarcity, he warned, could become a significant bottleneck in the development and advancement of these models, potentially hindering progress in AI research and applications.

"One alternative is to use synthetic data," he added. "This is an emerging area and could use much more focus."

Sudhakar also highlighted the importance of shifting focus toward what will eventually succeed the current transformer models in AI. Based on a deep learning architecture, transformer models have revolutionized how machines understand and generate human-like text by enabling them to process each word in relation to all the other words in a sentence rather than one at a time.

He added, "This is a powerful model, but it has limitations, such as with hallucinations, which are based on the underlying probabilities."
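To make the "every word against all the other words" idea concrete, here is a toy self-attention computation in NumPy. It is a sketch of the mechanism only: real transformers first apply learned projection matrices to form queries, keys, and values, and stack many such layers.

```python
import numpy as np

def self_attention(X):
    """X: (seq_len, d) matrix of word embeddings; returns updated embeddings."""
    d = X.shape[1]
    # Toy simplification: the raw embeddings serve as queries, keys, and values.
    scores = X @ X.T / np.sqrt(d)                    # every word scored against every word
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the whole sentence
    return weights @ X                               # each output mixes all positions at once

sentence = np.random.default_rng(0).normal(size=(5, 8))  # 5 words, 8-dim embeddings
print(self_attention(sentence).shape)                    # (5, 8): one vector per word
```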

"While generative AI gets all the attention, the real workhorses of AI, machine learning techniques for prediction and optimization, aren't hyped nearly enough," Bronfman said.

"Tested and proven machine learning methods can quickly take business data and extract a great deal of value," he added. "They may not seem as shiny and new as generative AI, but they definitely shine when they're integrated into business systems the right way. These recognized methods deserve more attention and investment so businesses can achieve the transformative benefits of AI."

Some commenters say that the best use of AI might not be for commerce. Ilia Badeev, head of data science at Trevolution Group, told PYMNTS that the significance of employing AI for nonprofit and scientific endeavors receives inadequate attention.

"I would like to see more hype around AI researchers," he added. "Imagine a ScientistGPT that possesses information from all currently existing textbooks and scientific studies and can use it to advance theoretical and practical science."

Originally posted here:
Separating the Hype From Reality in AI - PYMNTS.com

Google DeepMind’s Fact Quest: Improving Long-form Accuracy In LLMs With SAFE – Dataconomy

Large language models (LLMs) have demonstrated remarkable abilities: they can chat conversationally, generate creative text formats, and much more. Yet, when asked to provide detailed factual answers to open-ended questions, they can still fall short. LLMs may provide plausible-sounding yet incorrect information, leaving users with the challenge of sorting fact from fiction.

Google DeepMind, the leading AI research company, is tackling this issue head-on. Their recent paper, "Long-form factuality in large language models," introduces innovations in both how we measure factual accuracy and how we can improve it in LLMs.

DeepMind started by addressing the lack of a robust method for testing long-form factuality. They created LongFact, a dataset of over 2,000 challenging fact-seeking prompts that demand detailed, multi-paragraph responses. These prompts cover a broad array of topics to test an LLM's ability to produce factual text in diverse subject areas.

The next challenge was determining how to accurately evaluate LLM responses. DeepMind developed the Search-Augmented Factuality Evaluator (SAFE). Here's the clever bit: SAFE itself uses an LLM to make this assessment!

Here's how it works: SAFE first uses an LLM to break a long-form response down into a set of individual facts. Each fact is then checked through a multi-step reasoning process: the system sends search queries to Google Search and determines whether the returned results support the fact.
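A minimal sketch of such a pipeline might look like the following, assuming hypothetical llm and google_search helpers. It illustrates the described steps and is not DeepMind's released implementation.

```python
def safe_evaluate(response: str, llm, google_search) -> dict:
    """Rate each fact in a long-form response as supported or not supported."""
    # Step 1: an LLM splits the response into individual factual claims.
    facts = llm(f"List each individual factual claim in:\n{response}").splitlines()

    verdicts = {}
    for fact in facts:
        # Step 2: the LLM writes a search query to verify the claim.
        query = llm(f"Write a Google Search query to verify: {fact}")
        snippets = google_search(query)
        # Step 3: the LLM judges the claim against the search results.
        verdict = llm(
            f"Fact: {fact}\nSearch results: {snippets}\n"
            "Answer 'supported' or 'not supported'."
        )
        verdicts[fact] = verdict.strip().lower()
    return verdicts
```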

DeepMind also proposed a new way to score long-form factual responses. The traditional F1 score (used for classification tasks) wasn't designed to handle longer, more complex text. Their metric, F1@K, balances precision (the percentage of provided facts that are correct) against a notion of recall.

Recall takes into account a user's ideal response length; after all, an LLM could gain high precision by providing a single correct fact, while a more detailed answer could score lower.
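Following the paper's definitions, the score can be computed in a few lines: a response with S supported and N unsupported facts has precision S / (S + N), recall is capped at a user-chosen ideal fact count K as min(S / K, 1), and F1@K is their harmonic mean. The function below is our own sketch of that arithmetic.

```python
def f1_at_k(supported: int, unsupported: int, k: int) -> float:
    """F1@K for a long-form response, per the paper's definitions."""
    if supported == 0:
        return 0.0
    precision = supported / (supported + unsupported)
    recall = min(supported / k, 1.0)  # a single correct fact earns little recall
    return 2 * precision * recall / (precision + recall)

# One correct fact is perfectly precise but scores poorly when K expects more:
print(f1_at_k(supported=1, unsupported=0, k=64))   # ~0.03
print(f1_at_k(supported=60, unsupported=5, k=64))  # ~0.93
```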

DeepMind benchmarked a range of large language models of varying sizes, and their findings aligned with the intuition that larger models tend to demonstrate greater long-form factual accuracy. This can be explained by the fact that larger models are trained on massive datasets of text and code, which imbues them with a richer and more comprehensive understanding of the world.

Imagine an LLM like a student who has studied a vast library of books. The more books the student has read, the more likely they are to have encountered and retained factual information on a wide range of topics. Similarly, a larger LLM with its broader exposure to information is better equipped to generate factually sound text.

In order to perform this measurement, Google DeepMind tested models from the following families: Gemini, GPT, Claude (versions 3 and 2), and PaLM.

DeepMind's study shows a promising path toward LLMs that can deliver more reliable factual information. SAFE achieved accuracy levels that exceeded human raters on certain tests.

However, it's crucial to note the limitations:

Search engine dependency: SAFE's accuracy relies on the quality of search results and the LLM's ability to interpret them.

Non-repeating facts: The F1@K metric assumes an ideal response won't contain repetitive information.

Despite potential limitations, this work undeniably moves the needle forward in the development of truthful AI systems. As LLMs continue to evolve, their ability to accurately convey facts could have profound impacts on how we use these models to find information and understand complex topics.

Featured image credit: Freepik

View post:
Google DeepMind's Fact Quest: Improving Long-form Accuracy In LLMs With SAFE - Dataconomy

Google DeepMind unveils ‘superhuman’ AI system that excels in fact-checking, saving costs and improving accuracy – VentureBeat


A new study from Google's DeepMind research unit has found that an artificial intelligence system can outperform human fact-checkers when evaluating the accuracy of information generated by large language models.

The paper, titled "Long-form factuality in large language models" and published on the pre-print server arXiv, introduces a method called Search-Augmented Factuality Evaluator (SAFE). SAFE uses a large language model to break down generated text into individual facts, and then uses Google Search results to determine the accuracy of each claim.

"SAFE utilizes an LLM to break down a long-form response into a set of individual facts and to evaluate the accuracy of each fact using a multi-step reasoning process comprising sending search queries to Google Search and determining whether a fact is supported by the search results," the authors explained.

The researchers pitted SAFE against human annotators on a dataset of roughly 16,000 facts, finding that SAFE's assessments matched the human ratings 72% of the time. Even more notably, in a sample of 100 disagreements between SAFE and the human raters, SAFE's judgment was found to be correct in 76% of cases.


While the paper asserts that LLM agents "can achieve superhuman rating performance," some experts are questioning what "superhuman" really means here.

Gary Marcus, a well-known AI researcher and frequent critic of overhyped claims, suggested on Twitter that in this case, "superhuman" may simply mean better than an underpaid crowd worker rather than a true human fact-checker.

"That makes the characterization misleading," he said. "Like saying that 1985 chess software was superhuman."

Marcus raises a valid point. To truly demonstrate superhuman performance, SAFE would need to be benchmarked against expert human fact-checkers, not just crowdsourced workers. The specific details of the human raters, such as their qualifications, compensation, and fact-checking process, are crucial for properly contextualizing the results.

One clear advantage of SAFE is cost: the researchers found that using the AI system was about 20 times cheaper than human fact-checkers. As the volume of information generated by language models continues to explode, having an economical and scalable way to verify claims will be increasingly vital.

The DeepMind team used SAFE to evaluate the factual accuracy of 13 top language models across 4 families (Gemini, GPT, Claude, and PaLM-2) on a new benchmark called LongFact. Their results indicate that larger models generally produced fewer factual errors.

However, even the best-performing models generated a significant number of false claims. This underscores the risks of over-relying on language models that can fluently express inaccurate information. Automatic fact-checking tools like SAFE could play a key role in mitigating those risks.

While the SAFE code and LongFact dataset have been open-sourced on GitHub, allowing other researchers to scrutinize and build upon the work, more transparency is still needed around the human baselines used in the study. Understanding the specifics of the crowdworkers' background and process is essential for assessing SAFE's capabilities in proper context.

As the tech giants race to develop ever more powerful language models for applications ranging from search to virtual assistants, the ability to automatically fact-check the outputs of these systems could prove pivotal. Tools like SAFE represent an important step towards building a new layer of trust and accountability.

However, it's crucial that the development of such consequential technologies happens in the open, with input from a broad range of stakeholders beyond the walls of any one company. Rigorous, transparent benchmarking against human experts, not just crowdworkers, will be essential to measure true progress. Only then can we gauge the real-world impact of automated fact-checking on the fight against misinformation.


View original post here:
Google DeepMind unveils 'superhuman' AI system that excels in fact-checking, saving costs and improving accuracy - VentureBeat

DeepMind Chief Says Google’s Bungled AI Faces Feature Is Returning Soon – Bloomberg

Google plans to resume a paused artificial intelligence feature that generates images of people in the next couple of weeks, according to the company's top AI executive.

"We hope to have that back online in a very short order," Demis Hassabis, head of the research division Google DeepMind, said on Monday at the Mobile World Congress in Barcelona.

Read the original post:
DeepMind Chief Says Google's Bungled AI Faces Feature Is Returning Soon - Bloomberg

DeepMind co-founder looks to future of medical research aided by AI – TechCentral.ie


Here is the original post:
DeepMind co-founder looks to future of medical research aided by AI - TechCentral.ie

Google’s AI, Genie first to be trained exclusively from Internet videos, crafts games – Interesting Engineering

On a normal day in February, Tim Rocktäschel from Google DeepMind's Open-Endedness Team unveiled an exciting development in the field of artificial intelligence.

Meet Genie, the first AI generative interactive environment trained exclusively from over 200,000 hours of internet videos.

In his announcement on X, Rocktäschel said the model can generate an endless variety of action-controllable 2D worlds from image prompts. This marks a significant leap in the world of AI.

Go here to read the rest:
Google's AI, Genie first to be trained exclusively from Internet videos, crafts games - Interesting Engineering

DeepMind chief says Google’s bungled AI faces feature is returning soon – The Star Online

Google plans to resume a paused artificial intelligence feature that generates images of people in the "next couple of weeks," according to the company's top AI executive.

"We hope to have that back online in a very short order, Demis Hassabis, head of the research division Google DeepMind, said on Monday at the Mobile World Congress in Barcelona.

Last week, Alphabet Inc.'s Google pulled the image generator for Gemini, its powerful new AI model, amid a flurry of criticism over inaccurate historical depictions of race. In a blog post, the company explained that the model had become "way more cautious than we intended."

Hassabis echoed this line, explaining that Google was dealing with the difficulties of launching a "multi-modal" system, one designed to generate text, images and photos.

"This is one of the nuances that comes with advanced AI, he said. "Its a field were all grappling with. Bloomberg

Link:
DeepMind chief says Google's bungled AI faces feature is returning soon - The Star Online

Google DeepMind taps the power of its AI to accelerate quantum computers – TNW

In new research, Google DeepMind has demonstrated that its AI can help accelerate the development of quantum computers, taking one step further in combining two of the most disruptive technologies.

DeepMind worked together with UK-based Quantinuum to solve a key challenge in fault-tolerant quantum computers: reducing the number of T gates.

T gates are essential in implementing a quantum circuit, a network of gates that manipulates qubits to run algorithms. However, T gates are also the most expensive and most resource-intensive gates in the network.
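As a toy illustration of the kind of saving at stake (not DeepMind's method), the simplest possible T-count reduction is a peephole rewrite: two adjacent T gates on the same qubit fuse into one S gate, which is a cheap Clifford gate, since T·T = S. The circuit encoding below is our own simplification.

```python
def fuse_adjacent_t_gates(circuit):
    """circuit: list of (gate_name, target) tuples in time order."""
    optimized = []
    for gate, target in circuit:
        if gate == "T" and optimized and optimized[-1] == ("T", target):
            optimized[-1] = ("S", target)  # T followed by T equals S (Clifford)
        else:
            optimized.append((gate, target))
    return optimized

def t_count(circuit):
    return sum(1 for gate, _ in circuit if gate == "T")

circuit = [("H", 0), ("T", 0), ("T", 0), ("CNOT", (0, 1)), ("T", 1)]
print(t_count(circuit), "->", t_count(fuse_adjacent_t_gates(circuit)))  # 3 -> 1
```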

To address this, the team developed AlphaTensor-Quantum, an extension of DeepMind's AlphaTensor, the first AI system that can discover efficient algorithms for tasks such as matrix multiplication.

AlphaTensor-Quantum is an AI model that leverages the relationship between optimising T-count and tensor decomposition, using deep reinforcement learning.

In contrast to existing approaches, the model can incorporate domain-specific knowledge about quantum computation as well as use gadgetisation techniques, which implement alternative gates by introducing additional qubits and operations. This way, the AI can significantly reduce the number of T gates.
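The correspondence the paper exploits can be sketched in miniature: for the relevant class of circuits, the T-count equals the number of rank-1 terms in a symmetric decomposition of the circuit's "signature tensor" over GF(2), so minimising T gates becomes a tensor decomposition problem. The brute-force search below is purely illustrative of that framing (names are ours); AlphaTensor-Quantum replaces exhaustive search with learned search driven by deep reinforcement learning.

```python
import itertools
import numpy as np

def rank1_term(v):
    # Symmetric rank-1 term v (x) v (x) v over GF(2); one such term = one T gate.
    return np.einsum("i,j,k->ijk", v, v, v) % 2

def symmetric_rank_f2(target, max_rank=4):
    """Brute-force the fewest rank-1 terms summing to `target` modulo 2."""
    n = target.shape[0]
    vecs = [np.array(v) for v in itertools.product((0, 1), repeat=n) if any(v)]
    for r in range(max_rank + 1):
        for combo in itertools.combinations(vecs, r):
            total = sum((rank1_term(v) for v in combo), np.zeros_like(target))
            if np.array_equal(total % 2, target):
                return r, combo
    return None, None

# Build a target tensor with a known two-term decomposition, then rediscover it:
v1, v2 = np.array([1, 0]), np.array([1, 1])
target = (rank1_term(v1) + rank1_term(v2)) % 2
rank, _ = symmetric_rank_f2(target)
print(rank)  # 2: in the circuit picture, two T gates suffice
```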

According to the researchers, AlphaTensor-Quantum outperforms existing systems for T-count optimisation and is as efficient as the best human-designed solutions across numerous applications. "It can also save hundreds of hours of research by optimising the process in a fully automated way," the team says in the paper.

"On a representative standard benchmark set of circuits, AlphaTensor-Quantum improves the cost by 37% on average over the existing state of the art obtained by human-crafted heuristics," Konstantinos Meichanetzidis, head of product development at Quantinuum, told TNW.

DeepMind and Quantinuum envision applications in quantum chemistry and related fields, and suggest that possible future research could focus on improving the algorithm's neural network architecture.

"In general, the method can be readily applied to any given circuit independent of application, and the improvements over the baselines correspond directly to the space and time cost of the quantum algorithm under consideration," Meichanetzidis said.

Update (11:30AM CET, February 27, 2024): The article has been updated to include the comments from Quantinuum.


See original here:
Google DeepMind taps the power of its AI to accelerate quantum computers - TNW