
What’s the Latest Update on Multichain? Could Fantom Issues Halt … – Analytics Insight

Cryptocurrencies are constantly evolving, and the newly developed Multichain technology has been on the rise since its launch. The blockchain industry has benefited greatly from the technology's ability to connect different blockchains and improve interoperability. Multichain platforms allow communication among blockchain networks, including the popular platform Fantom. Despite this essential utility, Fantom wasn't the choice for the prominent decentralized AI network, Avorak.

Multichain platforms allow seamless communication among different blockchains, creating connections that enable data transfer. This has driven increased adoption of decentralized finance (DeFi) platforms.

Fantom is among the fastest multichain platforms and also transacts at low fees. Despite these strengths, the platform has recently faced disruptions in service delivery. This has led to concerns about its reliability, halting Avorak's expansion.

In recent months, Fantom has experienced network outages that have caused transaction delays. This has inconvenienced its users, raising questions about the network's reliability. The platform's stability, and its capacity to serve the increasing demand for DeFi and dApps, has been called into question.

Avorak, an AI decentralized exchange, had set its sights on partnering with Fantom, but the move was indefinitely postponed due to various issues.

Avorak is an automated market maker (AMM), an advanced decentralized exchange (DEX) that uses artificial intelligence to deliver its services. The platform has considered expanding its reach to other platforms, including multichain platforms like Fantom. However, Avorak had to rethink its plans, since Fantom has issues that could affect Avorak's overall market performance.

Avorak offers a variety of features, including the ability to analyze the market and offer effective recommendations based on real-time data. It is a reliable platform that uses AI across its services, including the trading features provided through Avorak Trade, among them automated crypto trading strategies that it promotes as highly profitable. Such features require fast, stable platforms for service delivery, and Fantom did not make the cut for this advanced technology.

Among the main concerns Avorak raised was the total value locked (TVL) of the Fantom blockchain. TVL is essential for understanding a project's popularity and market success. Fantom's TVL had been steadily increasing, reaching over $2 billion in May, but any issues Fantom faces could affect that figure. This made Avorak cautious about expanding with Fantom. Instead, Avorak chose other options, like Binance Smart Chain, that it considers more stable and reliable.

Multichain platforms continue to grow in popularity, but Fantom has faced issues with reliability and stability, making investors worry. Avorak intended to link with the Fantom multichain platform but had to rethink its plans due to network issues.

However, Avorak remains a resilient, decentralized blockchain platform that has the potential to dominate the cryptocurrency market.

Website: https://avorak.ai

Buy AVRK: https://invest.avorak.ai/register


Passive Income Comes To Life With Caged Beasts As ATOM, and … – ANI News

ANI | Updated: Jun 03, 2023 13:00 IST

New Delhi [India], June 3 (ATK): Proof-of-stake may sound like a positive restaurant review, but in cryptocurrency jargon it refers to a great way to earn passive income. You'll hear the words 'passive income' often with many crypto projects, and due to the increasing competitiveness of cryptocurrency, a series of incentives are being developed to entice and reward participants on digital platforms. If things get too competitive, they might start offering real steaks.

As it stands, digital currencies are not offering a food menu to potential investors. Still, we are seeing new projects like the presale meme coin Caged Beasts (BEASTS) offering their own form of incentives, while established tokens like Binance Coin (BNB) and ATOM advance the parameters of decentralized finance (DeFi). This article will discuss what plans the new meme token Caged Beasts has for its potential investors during its presale. It will also compare the functionality of ATOM and BNB and what their parent projects do for the crypto space.

ATOM's Cosmos Expanding

The parent project of ATOM is Cosmos, and it has a universe of ideas ever expanding into the far reaches of the cryptocurrency boundaries. Its network started with a big bang and has continued to connect network galaxies that gravitate to open-source tools for streamlining transactions.

The Cosmos Hub has been the success story for Cosmos and its native coin ATOM, and the proof-of-stake blockchain has garnered success by rewarding participation. The proof-of-stake consensus mechanism helps improve the digital infrastructure's functionality by rewarding participants with new crypto if they accurately validate new data and don't cheat the system. If you play fair, you get the rewards (a toy sketch of this mechanic follows below).

The BNB Chain Reaction

Binance Coin (BNB) has been knocking around the hallowed halls of crypto since 2017 and is an essential asset for trading activities and passive income. The Binance exchange has been one of the main reasons for its longevity, and just like Cosmos, it has included proof-of-stake in its development.

Unlike Cosmos, Binance's blockchain, Binance Smart Chain (BSC), uses a hybrid proof-of-authority (PoA) consensus architecture, which keeps fees minimal and requires only 21 validators to function. Validators are vital for the staking mechanism to work effectively, powering the BSC network.
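The proof-of-stake description above boils down to a simple mechanic: validators are chosen in proportion to what they have staked, honest validation mints a reward, and cheating is punished by slashing the stake. Here is a minimal, illustrative sketch of that idea; the names, reward and slashing numbers are invented for the example and are not Cosmos's or BNB Chain's actual parameters.

```python
# Toy proof-of-stake sketch: stake-weighted validator selection,
# rewards for honest validation, slashing for cheating.
# All numbers below are assumptions for illustration only.
import random

stakes = {"alice": 500.0, "bob": 300.0, "carol": 200.0}
BLOCK_REWARD = 1.0    # assumed new coins minted per honestly validated block
SLASH_FRACTION = 0.5  # assumed share of stake destroyed for cheating

def pick_validator():
    """Stake-weighted random choice: more stake means more chances to validate."""
    names = list(stakes)
    return random.choices(names, weights=[stakes[n] for n in names])[0]

def validate_block(validator, honest):
    """Play fair and earn the reward; cheat and lose part of the stake."""
    if honest:
        stakes[validator] += BLOCK_REWARD
    else:
        stakes[validator] *= (1 - SLASH_FRACTION)

chosen = pick_validator()
validate_block(chosen, honest=True)
print(chosen, stakes[chosen])
```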

Caged Beasts Offers Passive Income With a Bite

A caged beast can rattle its way free at any time, so crypto projects everywhere are preparing for the meme coin with teeth. Caged Beasts has a unique selling point that captivates potential investors across the industry. If you have not heard of 'Caged Liquidity', this caged DeFi mutagen has locked away 75% of funds until its release date, boosting its prospects for passive income.

Passive income and cool narratives aside, the team behind Caged Beasts have set themselves apart from BNB and ATOM with their new referral code. It's nothing new for developers to allow their investors to create a unique referral code for friends and family, but the two-way incentive is advantageous for both sides of the coin. Once the BEASTS investor has created their unique code, they can send it to friends, family and anyone who wishes to increase their profit margins. The unique factor in this referral programme is that the code creator and the new investor will each receive a 20% bonus.

At the end of the day, in the world of passive income, users can reap significant rewards for participating in a project. The choice eventually comes down to picking the project with the most rewards, and therefore you want to make sure you're on the right team when the Caged Beasts are released!

Find out more about Caged Beasts:

Website: https://cagedbeasts.com
Twitter: https://twitter.com/CAGED_BEASTS
Telegram: https://t.me/CAGEDBEASTS

(Disclaimer: The above press release has been provided by ATK. ANI will not be responsible in any way for the content of the same)


DeepMind AI’s new way to sort objects could speed up global computing – New Scientist

Sorting algorithms are a vital part of computing


An algorithm used trillions of times a day around the world could run up to 70 per cent faster, thanks to an artificial intelligence created by UK-based firm DeepMind. It has found an improved way for computers to sort data that has been overlooked by human programmers for decades.

"We honestly didn't expect to achieve anything better: it's a very short program, these types of programs have been studied for decades," says Daniel Mankowitz at DeepMind.

Known as sorting algorithms, they are one of the workhorses of computation, used to organise data by alphabetising words or ranking numbers from smallest to largest. Many different sorting algorithms exist, but innovations are limited as they have been highly optimised over the decades.

Now, DeepMind has created an AI model called AlphaDev that is designed to discover new algorithms to complete a given task, with the hope of beating our existing efforts. Rather than tweaking current algorithms, AlphaDev starts from scratch.

It uses assembly code, which is the intermediate computer language that sits between human-written code and sequences of binary instructions encoded in 0s and 1s. Assembly code can be painstakingly read and understood by humans, but most software is written in a higher-level language that is more intuitive before being translated, or compiled, into assembly code. DeepMind says that assembly code affords AlphaDev more leeway to create more efficient algorithms.

The AI is told to build an algorithm one instruction at a time and tests its output against a known correct solution to ensure it is creating an effective method. It is also told to create the shortest possible algorithm. DeepMind says that the task grows rapidly more difficult with larger problems, as the number of possible combinations of instructions can rapidly approach the number of particles in the universe.

When asked to create a sorting algorithm, AlphaDev came up with one that was 70 per cent faster than the best for lists of five pieces of data and 1.7 per cent faster for lists of over 250,000 items.
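To give a sense of what such a short program looks like, below is an illustrative sketch of a fixed-length sorting network for three values, the style of branch-free routine AlphaDev optimized at the assembly level. This is a sketch of the general technique, not AlphaDev's discovered code; its published gain came from shaving single instructions off routines of roughly this size.

```python
def compare_exchange(a, b):
    """One comparator: return (min, max). In assembly this is a compare
    followed by conditional moves, with no data-dependent branch."""
    return (a, b) if a <= b else (b, a)

def sort3(x, y, z):
    """Sort three values with exactly three comparators and no loops."""
    x, y = compare_exchange(x, y)  # now x <= y
    x, z = compare_exchange(x, z)  # now x is the minimum
    y, z = compare_exchange(y, z)  # order the remaining two
    return x, y, z

print(sort3(3, 1, 2))  # -> (1, 2, 3)
```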

"We initially thought it made a mistake or there was a bug or something, but, as we analysed the program, we realised that AlphaDev had actually discovered something faster," says Mankowitz.

Because sorting algorithms are used in a lot of common software, this improvement could have a significant cumulative effect globally. Such algorithms are so vital that they are written into libraries of code that anyone can use, rather than everyone writing their own. DeepMind has made its new algorithms open source and included them in the commonly used libc++ library, meaning people can already use them today. This is the first change to this part of the sorting algorithm library in over a decade, says DeepMind.

Mankowitz says that Moore's law, the idea that the computing power of a single chip doubles at regular intervals, is coming to an end because miniaturisation is hitting immutable physical limits, but that AlphaDev might be able to help compensate for this by improving efficiency.

"Today these algorithms are being pulled [run in software], we estimate, trillions of times every day and [are] able to be used by millions of developers and companies all around the world," says Mankowitz. "Optimising the code of fundamental functions that get pulled trillions of times a day hopefully will have big enough benefits to encourage people to attempt to do even more of these functions and to have that as one path to unblocking this bottleneck [of Moore's law slowing]."

Mark Lee at the University of Birmingham, UK, says AlphaDev is interesting and that even a 1.7 per cent speed boost is useful. But even if similar efficiencies are found in other common algorithms, he is sceptical this approach will make up for Moore's law breaking down, as it won't be able to make the same gains in more esoteric software.

"I think they're going to be able to do that to things like sorting algorithms, and standard kind of compute algorithms. But it's not going to be applied to complex bits of code," he says. "I think increases in hardware are still going to outstrip it."



DeepMind cofounder: Personal AIs will make us super human – Business Insider

Mustafa Suleyman, cofounder of DeepMind and CEO of Inflection AI, says we are on the brink of a revolution with personalized AI.

Personal AI assistants that can do everything from shopping to negotiating on your behalf are the future of AI, Mustafa Suleyman, the cofounder of DeepMind and CEO of Inflection AI, said Tuesday during an interview with CNBC.

Suleyman, who sold the AI company DeepMind to Google in 2014, said that these types of personalized AI systems will dramatically transform how we work and live.

"This is just the beginning of a revolution. Everybody is going to have AIs: businesses, brands, organizations," Suleyman said. "AIs are going to be representing their values trying to sell you things, persuade you of things, and be super useful to you in certain ways."

"I think everybody's going to want their own personal intelligence that's on your side, aligned with your interests, that can advocate on your behalf, and negotiate, find you great deals," he said, adding that "most of the time your AI is going to be talking to other AIs to find you the best possible way to solve the problems you care about."

Suleyman, of course, has a vested interest in personal AI assistants.

Suleyman's company Inflection AI is behind the generative AI chatbot Pi, which aims to help users make decisions and offer support. The name, Pi, stands for "personal intelligence." Similar to ChatGPT, Pi is trained using large language models.

Suleyman said that Pi can be used for a number of different things, but it's especially useful in helping people make decisions and as a companion that engages with you.

"People are using it to weigh up difficult problems. They're making a tough decision in their life and they want to think through both sides of a problem. People are using it to pursue their passions and hobbies. Like sometimes your partner isn't as interested in your golf hobby as you might be" Suleyman said. "But Pi is always there to be interested in your passion."

In the future, though, he said Pi will gain new capabilities for helping users manage their daily tasks. He said he is focused on building an AI system that will "make you much better at everything you want to do."

"I definitely see it evolving to be chief of staff for your life. Imagine a scheduler, an organizer, an advocate, a buyer, a booker. So it's gonna take all those actions for you on your behalf and make things much, much easier," he said.

Suleyman isn't the only tech leader that thinks personal AI assistants will mark a profound change in how we use technology.

In May, Bill Gates said at a Goldman Sachs and SV Angel event that personal digital AI agents could destroy the need to use search engines and productivity websites. He said these AI agents could fundamentally change human behavior because they will do tasks for us that require some level of research. There are signs the world's biggest search giant had the same thought: when ChatGPT burst onto the scene, Google management reportedly issued a "code red" amid worry that it could eat into the company's search business. Google has since rushed out a competitor, Bard.

Suleyman said that he expects search engines to be impacted by this kind of transformation sooner rather than later.

"I think this is going to happen within months... Today, we have access to information distilled in succinct and precise answers. You don't go to use the websites on 10 blue links anymore. You go and ask your favorite AI for access to information," Suleyman said. "Over the next 12 to 18 months. That AI is going to use it to interact with other AIs, make bookings, plans, and schedules ... I think we're just on the corner of this."



A lawyer got ChatGPT to do his research, but he isn't AI's biggest fool – The Guardian

Opinion

The emerging technology is causing pratfalls all over, not least among tech bosses begging for someone to regulate them

This story begins on 27 August 2019, when Roberto Mata was a passenger on Avianca flight 670 from El Salvador to New York and a metal food and drink trolley allegedly injured his knee. As is the American way, Mata duly sued Avianca, and the airline responded by asking that the case be dismissed because the statute of limitations had expired. Mata's lawyers argued on 25 April that the lawsuit should be continued, appending a list of more than half a dozen previous court cases that apparently set precedents supporting their argument.

Avianca's lawyers and Judge P Kevin Castel then dutifully embarked on an examination of these precedents, only to find that none of the decisions or the legal quotations cited and summarised in the brief existed.

Why? Because ChatGPT had made them up. Whereupon, as the New York Times report puts it, the lawyer who created the brief, Steven A Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court, saying in an affidavit that he had used the artificial intelligence program to do his legal research, "a source that has revealed itself to be unreliable".

This Schwartz, by the way, was no rookie straight out of law school. He has practised law in the snakepit that is New York for three decades. But he had apparently never used ChatGPT before, and was therefore unaware of the possibility that its content could be false. He had even asked the program to verify that the cases were real, and it had said yes. Aw, shucks.

One is reminded of that old story of the chap who, having shot his father and mother, then throws himself on the mercy of the court on the grounds that he is now an orphan. But the Mata case is just another illustration of the madness about AI that currently reigns. I've lost count of the number of apparently sentient humans who have emerged bewitched from conversations with chatbots, the polite term for stochastic parrots, which do nothing except make statistical predictions of the most likely word to be appended to the sentence they are at that moment engaged in composing.
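For readers wondering what that statistical prediction looks like mechanically, here is a toy sketch of the loop: candidate words are scored, the scores are turned into probabilities with a softmax, and the most likely word is appended. The bigram table below is invented for illustration; a real chatbot replaces it with a neural network scoring tens of thousands of tokens.

```python
# Toy next-word predictor. The hand-written bigram scores stand in
# for the logits a large language model would produce.
import math

bigram_logits = {
    "the":   {"court": 2.0, "case": 1.5, "parrot": 0.1},
    "court": {"ruled": 2.2, "case": 0.7},
}

def next_word(prev):
    """Softmax the scores into probabilities and return the most likely word."""
    logits = bigram_logits[prev]
    total = sum(math.exp(v) for v in logits.values())
    probs = {w: math.exp(v) / total for w, v in logits.items()}
    return max(probs, key=probs.get)

sentence = ["the"]
for _ in range(2):
    sentence.append(next_word(sentence[-1]))
print(" ".join(sentence))  # -> "the court ruled"
```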

But if you think the spectacle of ostensibly intelligent humans being taken in by robotic parrots is weird, then take a moment to ponder the positively surreal goings-on in other parts of the AI forest. This week, for example, a large number of tech luminaries signed a declaration that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Many of these folks are eminent researchers in the field of machine learning, including quite a few who are employees of large tech companies. Some time before the release, three of the signatories, Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic (a company formed by OpenAI dropouts), were invited to the White House to share with the president and vice-president their fears about the dangers of AI, after which Altman made his pitch to the US Senate, saying that "regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models".

Take a step back from this for a moment. Here we have senior representatives of a powerful and unconscionably rich industry, plus their supporters and colleagues in elite research labs across the world, who are on the one hand mesmerised by the technical challenges of building a technology that they believe might be an existential threat to humanity, while at the same time calling for governments to regulate it. But the thought that never seems to enter what might be called their minds is the question that any child would ask: if it is so dangerous, why do you continue to build it? Why not stop and do something else? Or at the very least, stop releasing these products into the wild?

The blank stares one gets from the tech crowd when these simple questions are asked reveal the awkward truth about this stuff. None of them, no matter how senior they happen to be, can stop it, because they are all servants of AIs that are even more powerful than the technology: the corporations for which they work. These are the genuinely superintelligent machines under whose dominance we all now live, work and have our being. Like Nick Bostrom's demonic paperclip-making AI, such superintelligences exist to achieve only one objective: the maximisation of shareholder value; if pettifogging humanistic scruples get in the way of that objective, then so much the worse for humanity. Truly, you couldn't make it up. ChatGPT could, though.

Keeping it lo-tech
Tim Harford has written a characteristically thoughtful column for the Financial Times on what neo-luddites get right and wrong about big tech.

Stay woke
Margaret Wertheim's Substack features a very perceptive blogpost on AI as symptom and dream.

Much missed
Martin Amis on Jane Austen over on the Literary Hub site is a nice reminder (from 1996) of the novelist as critic.


The Microsoft-Google AI war | On Point – WBUR News

Leading minds in artificial intelligence are raising concerns about the very technology they're creating.

"As this technology advances, we understand that people are anxious about how it could change the way we live. We are too," Sam Altman says.

Two of the biggest tech companies in the world, Microsoft and Google, are warning about the dangers of unregulated AI development. At the same time, they're racing each other to push AI into their most popular products.

"This technology does not have any of the complexity of human understanding, but it will affect us profoundly in the way that it's rolled out into the world," Sarah Myers West says.

So, how could that change us?

Today, On Point: The Microsoft-Google AI war.

Dina Bass, tech and AI reporter for Bloomberg News.

Will Knight, senior writer for WIRED, covering artificial intelligence.

Sarah Myers West, managing director of the AI Now Institute, which studies the social implications of artificial intelligence.

MEGHNA CHAKRABARTI: Here comes another open letter from the world of AI developers warning about the AI they're developing. Though this one is less of a statement and more a single sentence.

"Mitigating the risk of extinction from AI should be a global priority alongside other societal scale risks such as pandemics and nuclear war."

CHAKRABARTI: The statement is hosted by the Center for AI Safety and signed by politicians and scientists, including researchers at the forefront of AI technology. So first of all, when technologists are literally repeatedly begging for regulation, we, the public and our political representatives should listen to them and should do something about it because they're saying they cannot self-regulate.

And when it comes to civilization changing technology, tech folks probably shouldn't be relied on to self-regulate. Because it's the citizens, the civilization, i.e. the rest of humanity, who should have a say in how that very civilization should be changed. Or at least I think so.

The tech world is also saying it will not self-regulate. Because there's more than a little talking out of both sides of their mouths going on, isn't there? On the one hand, Google DeepMind CEO Demis Hassabis and Microsoft's chief scientific officer, Eric Horvitz, are among the warning letter's signatories. On the other hand:

SATYA NADELLA: The age of AI is upon us and Microsoft's powering it.

CHAKRABARTI: That is Microsoft CEO Satya Nadella announcing earlier this year that even as his company is warning about the dangers of unregulated AI, his company is also pushing new AI technologies into almost every aspect of Microsoft's massive product reach.

NADELLA: We are witnessing nonlinear improvements in capability of foundation models, which we are making available as platforms. And as customers select their cloud providers and invest in new workloads, we are well positioned to capture that opportunity as a leader in AI.

CHAKRABARTI: For example, Microsoft now has a generative AI tool in its search engine Bing. Not to be outdone, Google, also a signatory to that warning letter, is pushing hard into generative AI. Google CEO Sundar Pichai told CBS's 60 Minutes:

SUNDAR PICHAI: This is going to impact every product across every company. And so that's why I think it's a very, very profound technology. And so we are just in early days.

CHAKRABARTI: And just last month, Google vice President of Engineering Cathy Edwards announced the company is rolling out generative AI into the most popular search engine in the world. Google Search, which has 80% of global search market share, 99,000 searches per second, more than 8.5 billion searches every day.

CATHY EDWARDS: These new generative AI capabilities will make such smarter and searching simpler. And as you've seen, this is really especially helpful when you need to make sense of something complex with multiple angles to explore. You know, those times when even your question has questions.

CHAKRABARTI: Google, Microsoft and AI. These two companies are so big, so consequential that their near simultaneous public push into generative AI is being called the Microsoft-Google AI War. And that, of course, raises the question: exactly how will this war impact you and every other human being? Well, joining us now is Dina Bass. She's a tech and AI reporter for Bloomberg News who's covered Microsoft for 20 years, and she joins us from Seattle. Dina, welcome back to the show.

DINA BASS: Hi, Meghna.

CHAKRABARTI: Also with us today is Will Knight. He's a senior writer for WIRED covering artificial intelligence, and he joins us here in the studio. Welcome to On Point.

WILL KNIGHT: Hello. Thanks for having me.

CHAKRABARTI: So, first of all, I just want to get a sense from both of you about whether this framing of the Google Microsoft AI war is an accurate one, because it seems from the outside kind of significant that at least in search, we have these, you know, two nearly simultaneous announcements from these two companies. So, Dina, is this a big war between the two?


BASS: There is definitely a strong competition between the two. I think the kind of wizard behind the curtain that you're leaving out here is OpenAI: the Microsoft technology is, at its heart, OpenAI's technology. What we're looking at is Microsoft building OpenAI's language and image generation technology, which also writes code, into every single one of its products.

And because Microsoft was a little quicker to get some of these things out, it seemed to put Google a bit on the back foot, even though many of these technologies were actually kind of invented and pioneered at Google. And so it has, as a result, turned into a bit of sort of, you know, Google trying to catch Microsoft even though it's early days.

But I think that also leaves out a lot of other companies, a lot of other startups, a lot of open-source work that may end up passing both of these. You know, there was a leaked memo a couple of weeks ago from a Google engineer saying that both OpenAI and Google have no moat and will be surpassed by the open-source work. So the framing is correct. But also there's a lot more going on.

CHAKRABARTI: Good. We'll get to that. We'll get to a lot more that's going on in a few minutes. But I have to say, given the reach and the size of both of these companies, I want to focus for a little bit longer on how their push into AI, and specifically generative AI, is going to impact all of us. So Will, same question to you. I mean, is this kind of a new front in the years-long competition between Microsoft and Google?

KNIGHT: I would say yes, absolutely. I mean, as Dina is saying, you know, this is definitely a new era of competition between the two, driven by what is quite a profound kind of step forward in some of the capabilities in AI led by these models like GPT. And there are still tons and tons of limitations.

But whenever there's a big technological shift, we've seen it before with the internet, with mobile. Big companies see an opportunity to kind of get ahead of each other and maybe also worry about falling behind.

CHAKRABARTI: Now we heard both Sundar Pichai and Satya Nadella. The CEOs of both companies say essentially that AI is going to transform everything they do. When a CEO says that, I tend to believe them. But Will, I mean, do you see the same that it's going to transform Microsoft and Google? Maybe not as we know it, but how they operate and what they do?

KNIGHT: I think absolutely. I mean, you know, we should also, you know, should be wary that there are limitations. This technology is being rushed out very quickly. There are issues with it. But I look at it as something as sort of fundamental as software. And we've seen that with the previous era of AI. Machine learning has transformed so many products, so many companies. And what this is, is kind of a step change, quite a significant change really, in what you can potentially do with machine learning.

We're seeing it primarily in these chat bots and image generation, but it represents a sort of new set of capabilities that you can give to computers. And they are very general purpose. So yes, they're going to, I think try to apply it everywhere. There may be many problems along the way as they do that as well.


CHAKRABARTI: Okay, Dina, So what's really fascinating to me is, you know, hot on the heels of ChatGPT, one of the first sort of public points of access that people have to Microsoft and Google's use of generative AI is through search. Right? And like searches, like literally everybody uses search all the time, every day. Have you used the AI powered search on Bing?

BASS: I have, and it's interesting that you point that out, because search up until a couple of months ago was really a sleepy corner of the Internet in terms of competition. I mean, in fact, I can't imagine you thought we'd be talking about Microsoft and Google refighting the search battle that Microsoft essentially lost ten years ago. I mean, this is not an area that we thought was ripe for innovation.

I have used it. It is much better with open ended questions similar to ChatGPT. It can, you know, generate content, fun content, recipes, shopping lists, but also answer more open-ended questions. If you're trying to figure out what products to buy or where to travel.

But as Will is pointing out, it makes a lot of mistakes, both companies are trying to get around that by having citations, you can click and see where the data is coming from and catch the mistake. I just don't know if users going are going to do that. The term of art for these mistakes, quote-unquote, is hallucinations. That's what it seems to be. It seems to be a euphemism for, if I can put this politely in radio speak, making stuff up. So that's still an issue.

CHAKRABARTI: Are you guys up for a live demo?

KNIGHT: Sure.

CHAKRABARTI: Should we try this? Because I have Microsoft Edge open here in front of me, which, by the way, interestingly, if I have this right, you can't use the Bing AI powered search in any other browser but Edge. Is that right Will?

KNIGHT: Yes, that's right.

CHAKRABARTI: Ah. Hmm ... keeping us in the Microsoft ecosystem ... (LAUGHS)

BASS: It's coming to others. But not yet ...

CHAKRABARTI: Ok, so I've got it here open in front of me. I have no idea what's going to happen. Well, I kind of do, because I've asked this question before, but it's my favorite question for summertime travel. And it's open ended enough. Dina, I'm going to ask the Bing AI search. (TYPING) Why is airline travel so horrible? Is that open ended enough, Dina?

BASS: Sure.

CHAKRABARTI: Okay, so here, enter. And it's thinking. Oh, it's not that fast, okay? It's still going. Is that normal? Yeah.

KNIGHT: Well, what is running on the back end is a giant neural network that's trying to come up with the answer. So it's quite different to a regular search.

CHAKRABARTI: Oh, okay. Here it comes with the answer. Wow. It's a long answer. It says there are several reasons why airline travel can be unpleasant. According to a CNN Business article, some reasons include a lack of enough pilots and flight attendants. Okay, that is true, especially on the pilot front. There was a pilot shortage. Worries about vaccine rules, fewer seats, higher fares, a rising number of unruly, unhappy passengers. Then it quotes Investopedia and a Time article about pandemic-induced pain after the airline industry ground to a halt and is now struggling to catch up with surging demand. "I hope this helps." Okay. Now, if I didn't know anything about the airline industry, I would say these answers make sense. But Will, it's leaving out massive, massive causes of ...

KNIGHT: Yeah, I think there may be some hallucinations there. Like the vaccine problems. What it's doing is taking a huge amount of stuff from the web and then just trying to kind of guess what would be a plausible answer, not necessarily the right answer.

CHAKRABARTI: So Dina, knowing this, and if even people inside the industry are calling them hallucinations. We of course are going to talk about some of the good that AI can do. Do you have any concerns about these products being rolled out as fast as they are for everyone to use?

BASS: Ladies and gentlemen, you have to bear in mind that you are the beta testers.

CHAKRABARTI: (LAUGHS)

BASS: This is being experimented on you. The companies will tell you that that's necessary, that in order to refine it, to have it work well, they need large volumes of data.

CHAKRABARTI: Great, so we are testing the product for them. (TYPING) Right now I'm asking Microsoft's new AI power search powered search. Should you listen to the amazing radio show and podcast called On Point from WBUR? I'm a little afraid to press enter but I will. (LAUGHS) And we are talking about the Microsoft-Google AI war as both of these huge companies start unrolling generative AI technologies into their products. Oh look! It answered, it said:

Yes. On Point is a radio show and podcast produced by WBUR in Boston. It covers a wide range of topics from the economy and health care to politics and the environment. The show speaks with newsmakers and everyday people about the issues that matter most.

So no hallucinations there. Okay. But in terms of the importance of AI to companies like Microsoft and Google and therefore to us as the users of their technology. Here is what Google CEO Sundar Pichai said to CBS 60 Minutes.

PICHAI: I've always thought of AI as the most profound technology humanity's working on, more profound than fire or electricity or anything that we have done in the past.

SCOTT PELLEY: Why so?

PICHAI: It gets at the essence of what intelligence is, what humanity is. You know, we are developing technology which for sure one day will be far more capable than anything we have ever seen before.

CHAKRABARTI: And here is Microsoft CEO Satya Nadella in conversation last month with Andrew Ross Sorkin talking about the fact that he agrees AI might be moving, quote, too fast, but not in the way that some people think.

NADELLA: A lot of technology, a lot of AI is already there at scale, right? Every newsfeed, every sort of social media feed search as we know it before ... they're all on AI. And if anything, the black boxes, I'd describe them as the autopilot era. So in an interesting way, we're moving from the autopilot era of AI to copilot era of AI. So if anything, I feel, yes, it's moving fast, but moving fast in the right direction.

CHAKRABARTI: We talked a lot about Microsoft. But I want to just shift quickly to Google. Can you kind of describe that? I have felt that, between the two, Google has been roughly in the lead on AI for quite some time, if that's an accurate statement.

KNIGHT: Yeah, absolutely. They invented a lot of the stuff that has led to some of these leaps forward, but they were hesitant to release some things because they could misbehave, which we are seeing now. And so they got a little bit blindsided by what Microsoft and OpenAI are doing in releasing some of these big language models and, you know, showing what you could potentially do with search.

And so there was this great panic inside Google, it seems, where they were suddenly like, we've got to catch up. We've got to throw everything at this. They've merged their two different AI units, Google Brain and DeepMind. And now Demis Hassabis, as you mentioned, is the lead of both of those. And this is kind of one of those big moments, like the Internet wave that Bill Gates talked about years ago, where they suddenly realized, we need to catch up.


CHAKRABARTI: I saw a story in the New York Times that said, you know, there were memos, I think, circulating. I can't remember whether it was within Google or Microsoft. But that said that, like, you know, this race could be won or lost in a matter of weeks.

BASS: That might have been Google. That's not what I hear from Microsoft; if anything, perhaps it's because they feel like they're in the lead. When I asked Satya Nadella about it, he focused a lot on this being an initial lead, not a permanent one, and on being very cognizant of the fact that lots of markets are being disrupted, including some of the ones that Microsoft needs.

And, you know, we focused a lot here on chat, but one of the other major vectors for adding AI, and for competition around it, is going to be office software, again between Microsoft and Google. It's one thing for Microsoft to experiment with Bing, which has about 2% share of the market. It's another thing for them to start putting significant AI assistants into their office products, which are, you know, flagship, dominant products, and they are doing that, albeit rolling it out a little bit more carefully and a little bit more slowly than perhaps the Bing stuff.

CHAKRABARTI: Dina, is this one of those winning-the-competition-at-all-costs moments?

BASS: Maybe this is Clippy 2.0. I mean, there was a meme going around, I think it was probably around May 4th ... that was sort of like, you know, a "Clippy, I am your father" kind of meme involving Bing. On some level, this is a fulfillment of what Microsoft wanted to do with Clippy, and what it wanted to do around 2016 with its conversational AI strategy that they rolled out, which didn't really go anywhere at the time because the technology wasn't very good then.

CHAKRABARTI: Okay. Going back for just a moment to the memo that I was quoting, I have the story here from the New York Times. They say that Sam Schillace, a technology executive at Microsoft, had written an internal email, which the Times says it has viewed. And in that email, Schillace said that it was, quote, an absolutely fatal error in this moment to worry about things that can be fixed later.

And that suddenly shifting towards a new kind of technology was essential because, as Schillace wrote in this memo, again, viewed by The New York Times, the first company to introduce a product is the long-term winner just because they got started first. Sometimes the difference is measured in weeks. Dina, since you're the one who covers Microsoft directly, how does that sound to you?

BASS: Look, obviously, Microsoft is moving very quickly here. And I mean, nary a week has gone by without them introducing a new AI product. And that brings up the question that you started the show with, around if they're so concerned about what the impacts of this might be, why continue to roll it out at great speed?


You know, we do know that the company has been moving very quickly, you know, to try to get these things out. But again, what I hear from them is that they do not believe that the fact that they introduced things first means that they are the winner. They feel that there's a lot of potential for disruption here. And, you know, we mentioned office. There's also issues of disruption around the cloud.

CHAKRABARTI: Around the cloud. Okay. So hang onto that thought for just a moment, because we do have just a little bit of tape here. You had talked about the other products that Microsoft has. And so in an event in March called the Future of Work with AI, Microsoft introduced copilot, or we'll call it here in this conversation, Clippy 2.0. It's an AI assistant feature for Microsoft 365.

"Copilot combines the power of large language models with your data in the Microsoft Graph and the Microsoft 365 apps to turn your words into the most powerful productivity tool on the planet."

CHAKRABARTI: So Will, do you anticipate Google also rolling out AI into its products beyond search?

KNIGHT: Yeah, in fact, they've already started doing that. At Google I/O, they demonstrated some stuff similar to what we're seeing with Office, where it will help you write a document, help you generate an email, go into a spreadsheet and do stuff for you.

CHAKRABARTI: Okay. So, you know, both of you have raised an interesting question: that obviously, you know, the products that these things get rolled out in are only the surface part of the story. What's really, I think, more deeply going on is when people are talking about this as being as fundamental as software. ... How much of the frenetic activity that we're seeing now between Google and Microsoft is about whether they can be the leaders in this new technology, in generative AI and how it's used?

KNIGHT: I think a huge amount of it is about that. And what I hear from people inside some of these companies is that sometimes the executives are not listening to even, you know, the technologists and those who have concerns about them. So they're just desperate to get this stuff out and to gain a lead in what they see as such a foundational technology that's going to sort of be rolled out to billions of computers.

CHAKRABARTI: And Dina, what do you think about that question?

BASS: I think that's fair. And they're doing a bit of a dance between saying we're trying to be careful and we know that these products don't work perfectly, and we want you to know the ways in which they don't work. But here's another one and here's another one. And you should try this, and by the way, you should regulate us. It is a little bit of a tap dance.

CHAKRABARTI: Going back to search again, just because I think it's the touch point that people will understand most intrinsically. How do you think that what Google and Microsoft are doing will change the way people use search or the information they get from it?

KNIGHT: Yeah, I think that remains to be seen. I mean, when I try using this generative search, it's amazing to me that I have something that will hallucinate. Make up stuff, Right? That's not what we expect from a search engine. And I think it still remains to be seen if it's going to be you know, the benefits will completely outweigh the limits. Undoubtedly, I think it's going to sort of creep in in some ways to the search, but maybe not just completely supplant it.

CHAKRABARTI: Yeah. You know, we heard a little earlier in that tape from Google's release of their AI enabled search, Cathy Edwards was talking about, Oh, it can be particularly useful for your questions that also have questions. What do you think she's getting at there?

KNIGHT: I think she's talking about follow up questions where you will ask something. You have the ability to sort of ask a clarifying question of, say, Bing chat or Google's chat bot. So you can kind of get into this more humanlike dialog with these things. But that also raises a ton of problems in my mind because you're sort of anthropomorphizing these things to a level that they don't justify and it causes a lot of confusion and can ... lead people to kind of misinterpret what they're talking to and it can say things that come across as very weird.

CHAKRABARTI: So we're going to get to more of these hallucinations and concerns in just a minute here. But, Dina, you know, you had mentioned a little bit earlier that, you know, isn't it surprising that once again, we're talking about Microsoft v. Google. As far as I understand, though, they had, for at least some period of time around 2015, reached some kind of legal and regulatory truce. Would you describe it that way?

BASS: Yeah. There was a kind of a formal detente between the two companies. It was not that they wouldn't compete. They were still competing very vigorously. Again, particularly as Google tries to take more business from Microsoft in the office and productivity space. But what they agreed to do was not to complain about each other to regulators. That's fallen apart in the last few years. They've both been vociferously complaining about each other to regulators. And so that adds another dimension. As we look at this AI battle, another dimension to the hostility between the two companies.

CHAKRABARTI: Okay, so, Dina and Will, hang on here for just a second because I want to bring Sarah Myers West into the conversation. She's managing director of the AI Now Institute, which studies the social implications of artificial intelligence. Welcome to On Point, Sarah.

SARAH MYERS WEST: Thanks for having me.

CHAKRABARTI: So as I introduced the show, we have both, you know, very senior people at Microsoft and Google amongst the signatories of yet another AI warning letter saying that it could be an existential threat. We need to be regulated. Then, as Dina and Will have very carefully laid out for us, they're still pushing these products out that really billions of people use every day, knowing that sometimes those generative AI products in search hallucinate, which will be my favorite scary word of the week here. I mean, isn't it somewhat irresponsible for these companies to be doing this, Sarah?

MYERS WEST: I mean, it seems like we're back in the move fast break things era of tech. Where they're essentially experimenting in the wild with these technologies, even as they acknowledge themselves that they're not really, you know, fully validated or tested or ready for market.

I mean, Dina put it perfectly that we're all the beta testers for this, but it's one thing to be beta testing a new version of an OS. Or beta testing, I don't know, a new app or the latest version of Word. It's something entirely different to be beta testing a product that's giving you information in return that you can't actually be sure is truthful or not.

And this is a product that you're relying on to give you truthful answers. I mean, it seems like more than playing with fire a little bit to me. I mean, are there any systems in place to get companies to think more about this in the world of AI before they roll out the products?



Junk food diet and sleep: What we eat impacts sleep quality – Medical News Today

Limited evidence exists regarding the influence of certain foods on sleep, leading researchers to conduct a randomised trial investigating the effects of a high-fat/high-sugar diet on sleep.

A new study, published in Obesity, aimed to gather intervention-based evidence by examining the impact of this diet on sleep patterns in healthy individuals.

The researchers found that after consuming the unhealthy diet, the quality of deep sleep in the participants worsened compared to when they followed the healthier diet.

A group of 15 healthy men took part in a study where they were given two different diets to follow. They were randomly assigned to either a high-fat/high-sugar diet or a low-fat/low-sugar diet for one week each.

After each diet, the researchers recorded the participants' sleep patterns in a laboratory setting using a method called polysomnography, a sleep monitoring technique.

They looked at the duration of sleep, as well as the different stages and patterns of sleep, including things like oscillatory patterns and slow waves.

The study found that the duration of sleep was not significantly different between the two diets, as measured by both actigraphy, a method of monitoring sleep using a wearable device, and in-lab polysomnography.

When comparing two different diets, the researchers found that the structure of sleep remained similar after one week on each diet.

However, when they compared a diet high in fat and sugar to a diet low in fat and sugar, they noticed that the former diet was linked to lower levels of certain sleep characteristics during deep sleep.

These characteristics included delta power, which is a measure of slow brain waves, the ratio of delta to beta waves, and the amplitude of slow waves.

All of these changes suggested that the quality of deep sleep was reduced on the high-fat/high-sugar diet.
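For readers curious how measures like delta power and the delta-to-beta ratio are obtained, here is a minimal sketch of a common approach: estimate the power spectral density of an EEG channel and integrate it over each frequency band. The band edges and sampling rate are conventional assumptions, and this is illustrative only, not the study's actual analysis pipeline.

```python
# Sketch: band power and delta/beta ratio from one EEG channel.
# Band edges (delta 0.5-4 Hz, beta 15-30 Hz) are common conventions,
# not values taken from the paper.
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    """Integrate the Welch power spectral density between low and high Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=4 * fs)
    mask = (freqs >= low) & (freqs < high)
    return np.trapz(psd[mask], freqs[mask])

fs = 256                        # assumed sampling rate in Hz
eeg = np.random.randn(fs * 30)  # stand-in for 30 seconds of one EEG channel

delta = band_power(eeg, fs, 0.5, 4.0)   # slow waves, characteristic of deep sleep
beta = band_power(eeg, fs, 15.0, 30.0)  # faster, wake-like activity
print(f"delta power: {delta:.3f}, delta/beta ratio: {delta / beta:.2f}")
```

A lower delta power or a lower delta-to-beta ratio, computed along these lines, is the kind of reduction in deep-sleep quality the study reports after the high-fat/high-sugar week.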

Dr. Florencia Halperin, chief medical officer at Form, a company that provides medical treatment for obesity and associated metabolic conditions, who was not involved in this research, told Medical News Today that evidence has been mounting over the last decade about the relationship between sleep and metabolic disease.

Poor sleep adversely affects hormonal and metabolic parameters and increases the risk of weight gain and metabolic disease. At the same time, weight gain increases the risk of sleep disorders such as sleep apnea. So the relationship is very complex, and there is so much we still don't understand.

Dr. Florencia Halperin

Dr. Halperin pointed out that the results suggested that "consumption of an unhealthier [high-fat/high-sugar] diet results in changes to the pattern of sleep."

"While the macro-architecture was not affected, changes in some sleep parameters observed (less relative power in delta frequencies and a lower delta to beta ratio) were consistent with a less restorative sleep state, as might be seen in an older population," Dr. Halperin noted.

Kristen Carli, a registered dietitian nutritionist, also not involved in this research, highlighted a few limitations to the study, noting the small sample size of only 15 healthy young men.

"No women, older adults, or children were evaluated, meaning that these results should not be extrapolated to the general population," Carli pointed out.

Dr. Halperin agreed, saying that "we must keep in mind only 15 people were studied, they were all men, and only studied for 1 week, so we will need further research to validate these findings."

However, Dr. Halperin noted that this study is important and relevant to patients and the public because it provides novel insight into how lifestyle factors such as the diet we consume affect our sleep, which in turn affects our overall health.

This is early evidence that a typical unhealthier diet may affect our sleep in very specific ways, and therefore our sleep-regulated health parameters, such as cognition and hormone secretion, which then modulate other effects on our health.

Dr. Florencia Halperin

Dr. Halperin explained that while the study helps to raise awareness about the relationship between sleep and overall health, the current findings are unlikely to impact medical practice at the current time, given the early nature of this research.

"However, I may share this research with [my patients] to educate them about the many, many ways changing our diet can contribute to improved health, even without any weight loss!" Dr. Halperin said.

Carli pointed out that the implications of this study are that a high-fat/high-sugar diet can impact sleep quality.

While the results of this one study should not be extrapolated widely, these results are not exactly surprising, she added.

Sugar has been shown to impact sleep quality in prior research, as has a high-fat diet. However, I will note many researchers question whether the diet is impacting the sleep quality or the other way around. Regardless, as a registered dietitian, there are many other health benefits besides sleep quality to consider when choosing a low-fat/low-sugar diet, including weight loss, heart health, chronic disease prevention, etc.

Kristen Carli

Ultimately, as Dr. Halperin explained, this evidence suggests that a healthier diet might help us get healthier sleep.

Another way to look at it is that this is perhaps one more proof point that our parents were right after all: we all need to eat our veggies and go to bed on time!


Google Announces State-of-the-Art PaLM 2 Language Model … – InfoQ.com

Google DeepMind recently announced PaLM 2, a large language model (LLM) powering Bard and over 25 other product features. PaLM 2 significantly outperforms the previous version of PaLM on a wide range of benchmarks, while being smaller and cheaper to run.

Google CEO Sundar Pichai announced the model at Google I/O '23. PaLM 2 performs well on a variety of tasks, including code generation, reasoning, and multilingual processing, and it is available in four different model sizes, including a lightweight version called Gecko that is intended for use on mobile devices. When evaluated on NLP benchmarks, PaLM 2 showed performance improvements over PaLM and achieved new state-of-the-art levels on many tasks, especially on the BIG-bench benchmark. Besides powering Bard, the new model is also a foundation for many other products, including Med-PaLM 2, an LLM fine-tuned for the medical domain, and Sec-PaLM, a model for cybersecurity. According to Google,

PaLM 2 shows us the impact of highly capable models of various sizes and speeds, and that versatile AI models reap real benefits for everyone. Yet just as we're committed to releasing the most helpful and responsible AI tools today, we're also working to create the best foundation models yet for Google.

In 2022, InfoQ covered the original release of Pathways Language Model (PaLM), a 540-billion-parameter large language model (LLM). PaLM achieved state-of-the-art performance on several reasoning benchmarks and also exhibited capabilities on two novel reasoning tasks: logical inference and explaining a joke.

For PaLM 2, Google implemented several changes to improve model performance. First, they studied model scaling laws to determine the optimal combination of training compute, model size, and data size. They found that, for a given compute budget, data and model size should be scaled "roughly 1:1," whereas previous researchers had scaled model size roughly 3x faster than data size.
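As a rough illustration of what scaling "roughly 1:1" means, the sketch below uses the widely cited approximation that training compute C ≈ 6·N·D (N parameters, D training tokens) together with a fixed tokens-per-parameter ratio. The constant 6 and the ratio of 20 are conventions from earlier scaling-law work, not figures from the PaLM 2 report.

```python
# Compute-optimal sizing sketch: grow parameters and tokens together.
# C ~ 6 * N * D is a standard rule of thumb; ratio=20 tokens per
# parameter is an assumption borrowed from earlier scaling studies.
def compute_optimal(C, ratio=20.0):
    """Given a FLOP budget C, return (params N, tokens D) with D = ratio * N."""
    N = (C / (6.0 * ratio)) ** 0.5
    return N, ratio * N

for budget in (1e21, 1e23, 1e25):
    N, D = compute_optimal(budget)
    print(f"C={budget:.0e}: ~{N / 1e9:.1f}B params, ~{D / 1e9:.0f}B tokens")
```

Under this rule, a 100x larger compute budget grows parameters and tokens by 10x each, rather than spending most of the extra compute on a bigger model.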

The team improved PaLM 2's multilingual capabilities by including more languages in the training dataset and updating the model training objective. The original dataset was "dominated" by English; the new dataset pulls from a more diverse set of languages and domains. Instead of using only a language modeling objective, PaLM 2 was trained using a "tuned mixture" of several objectives.

Google evaluated PaLM 2 on six broad classes of NLP benchmarks: reasoning, coding, translation, question answering, classification, and natural language generation. The focus of the evaluation was to compare its performance to the original PaLM. On BIG-bench, PaLM 2 showed "large improvements," and on classification and question answering even the smallest PaLM 2 model achieved performance "competitive" with the much larger PaLM model. On reasoning tasks, PaLM 2 was also "competitive" with GPT-4; it outperformed GPT-4 on the GSM8K mathematical reasoning benchmark.

In a Reddit discussion about the model, several users commented that although its output wasn't as good as that from GPT-4, PaLM 2 was noticeably faster. One user said:

They probably want it to be scalable so they can implement it for free/low cost with their products. Also so it can accompany search results without taking forever (I use GPT 4 all the time and love it, but it is pretty slow.)...I just used the new Bard (which is based on PaLM 2) and it's a good amount faster than even GPT 3.5 turbo.

The PaLM 2 tech report page on Papers with Code lists the model's performance on several NLP benchmarks.

More:
Google Announces State-of-the-Art PaLM 2 Language Model ... - InfoQ.com

Read More..

Winning the Mind Game: The Role of the Ransomware Negotiator – The Hacker News

Get exclusive insights from a real ransomware negotiator who shares authentic stories from network hostage situations and how he managed them.

Ransomware is an industry. As such, it has its own business logic: organizations pay money, in crypto-currency, in order to regain control over their systems and data.

This industry's landscape is made up of approximately 10-20 core threat actors who develop the ransomware malware itself. To distribute the malware, they work with affiliates and distributors who use widespread phishing attacks to breach organizations. Profits are split, with approximately 70% allocated to the affiliates and 10%-30% to the developers. The reliance on phishing makes online-centric industries, like gaming, finance, and insurance, especially vulnerable.

In addition to its financial motivations, the ransomware industry is also influenced by geopolitics. For example, in June 2021, following the ransomware attacks on the Colonial Pipeline and JBS, the Biden administration announced that ransomware was a threat to national security. The administration then listed critical infrastructures that were "off limits" to attackers.

Following these steps, a number of threat actors decided to change course, declaring they would not attack essential organizations like hospitals, power plants, and educational institutions. A few months later, the FBI reported that it had disrupted the prominent ransomware group REvil.

The attack garnered a response from the Conti group that reflected the group's ideological motives.

Managing a ransomware event is similar to managing a hostage situation. To prepare for a ransomware incident, organizations should therefore adopt a similar crisis management structure, built around a set of dedicated, separate functions.

According to Etay Maor, Senior Director of Security Strategy at Cato Networks, "We're seeing more and more companies offering bundles of these ransomware services. However, it is recommended to separate these roles to ensure the most professional response."

Professional negotiation is the practice of using structured communication with the attacker to gain an advantage in an extortion situation. The role comprises four key elements: identifying the attack's motivation, profiling the attacker, buying time, and supporting the business decision on whether to pay.

Motivation comes first: in 90% of cases, the attack is financially motivated. If it is politically motivated, the information may not be recovered even after the ransom is paid.

Profiling the attacker yields leverage. For example, by finding out the attacker's local time, the negotiator can infer where they are based. This can be used to improve negotiation terms, such as leveraging the attacker's public holidays to ask for a discount.

Buying time also pays off. For example, one company gained 13 days through negotiations, long enough to recover its information and avoid paying the ransom altogether.

Etay Maor comments, "Ransomware is not an IT issue, it's a business issue." The decision of whether or not to pay is a business decision, influenced by many factors. While the official FBI policy is not to pay, it allows companies to do so if the CEO decides.

For example, one online gaming company was losing more money every hour its operations were down than the ransom demand itself, which influenced its decision to pay as quickly as possible while minimizing negotiation time. US lawmakers have not banned ransom payments either, which shows how complicated the issue is.

Ransomware is becoming more prominent, but organizations can protect themselves against it. Ransomware relies on phishing attacks and unpatched services. It is therefore recommended that CEOs meet with their IT teams regularly to ensure that software and infrastructure are patched and up-to-date and that all important information is backed up. This significantly reduces the chance of ransomware being able to exploit vulnerabilities and penetrate systems.

To learn more about ransomware attacks and how they are managed in real-time, watch the entire masterclass here.

Continue reading here:
Winning the Mind Game: The Role of the Ransomware Negotiator - The Hacker News

Read More..

Google Cloud makes generative AI support generally available in … – SiliconANGLE News

Google LLC's cloud unit today made several of the generative artificial intelligence tools in its Vertex AI product suite generally available.

The move comes a month after the company first previewed the tools at its Google I/O conference.

Vertex AI is a suite of cloud services that companies can use to build machine learning models, as well as perform related development tasks. The platform includes features for managing neural networks' training datasets. Once training is complete and a model is deployed, developers can use Vertex AI to monitor it for technical issues.

The first addition to Vertex AI that became generally available today is a service called Model Garden. It provides access to more than 60 generative AI models hosted on Google Cloud. They can be used to generate text, debug code, create images and perform other tasks.

Some of the neural networks available through Model Garden were developed by Google. Others are hosted versions of free AI systems from the open-source ecosystem. According to Google, a subset of the models can be customized by customers for specific tasks.

One of the main highlights of the Model Garden neural network catalog is the Vertex AI PaLM application programming interface. It's a text generation service based on PaLM 2, a large language model Google debuted earlier this year. It can generate text in multiple languages, as well as perform reasoning tasks such as solving riddles.
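
For developers, access is through the Vertex AI SDK. The following is a minimal sketch assuming the Python SDK as it stood at general availability; the project ID is a placeholder, and the model name "text-bison@001" reflects the launch-era PaLM text model, so check the current documentation for names and module paths.

    # A minimal sketch of calling the Vertex AI PaLM text API from Python.
    # Assumptions: the vertexai SDK module layout at GA and the launch-era
    # model name "text-bison@001"; "my-gcp-project" is a placeholder.
    import vertexai
    from vertexai.language_models import TextGenerationModel

    vertexai.init(project="my-gcp-project", location="us-central1")

    model = TextGenerationModel.from_pretrained("text-bison@001")
    response = model.predict(
        "Explain in two sentences what makes a language model multilingual.",
        temperature=0.2,        # lower values give more deterministic output
        max_output_tokens=128,  # cap on the length of the generated answer
    )
    print(response.text)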

PaLM 2 is an enhanced version of a 540 billion-parameter model Google first detailed in 2022. Compared with its predecessor, PaLM 2 generates responses faster. Additionally, it's more adept at performing tasks such as translation that involve processing text written in multiple languages.

Model Garden also offers access to Codey, an AI system that became available in public preview this morning. It's a large language model specifically optimized for programming tasks.

According to Google, Codey can analyze a snippet of code written by a developer and suggest the next few lines that should be added to the file. Moreover, it includes a chatbot interface that supports natural language queries. Developers can ask it to generate code, debug existing files and explain programming concepts.
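
Since Codey launched in public preview, a sketch of its two advertised modes would use the SDK's preview namespace. The model names "code-bison@001" and "codechat-bison@001" are launch-era identifiers and are assumptions to verify against current docs.

    # A sketch of Codey's two advertised modes: one-shot code generation and
    # a conversational code chat. Module path and model names are assumptions
    # based on the public-preview SDK.
    from vertexai.preview.language_models import CodeGenerationModel, CodeChatModel

    # Suggest code from a natural-language description.
    gen = CodeGenerationModel.from_pretrained("code-bison@001")
    print(gen.predict("Write a Python function that reverses a linked list.").text)

    # Ask follow-up questions about code in a chat session.
    chat = CodeChatModel.from_pretrained("codechat-bison@001").start_chat()
    print(chat.send_message("Why might that function fail on an empty list?").text)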

Google Cloud is rolling out Model Garden alongside a tool called Generative AI Studio that also became generally available today. The tool is aimed at making it easier for developers to use the search giant's neural networks.

Using Generative AI Studio, developers can customize the weights of some Google Cloud neural networks. Weights are the learned parameters that determine how strongly each input influences a model's output. Tuning them on domain-specific data makes it possible to improve the quality of the model's output.
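
Generative AI Studio exposes this tuning through its UI; the SDK offers what appears to be an equivalent entry point. The sketch below is assumption-heavy: the tune_model method, its parameters, and the dataset path are drawn from the launch-era preview documentation and should be verified before use.

    # Hypothetical sketch of supervised tuning via the SDK, mirroring what
    # Generative AI Studio's UI does. Method name, parameters, and the GCS
    # dataset path are assumptions; tuning ran only in select regions at launch.
    from vertexai.language_models import TextGenerationModel

    model = TextGenerationModel.from_pretrained("text-bison@001")
    model.tune_model(
        training_data="gs://my-bucket/tuning-data.jsonl",  # {"input_text": ..., "output_text": ...} records
        train_steps=100,
        tuning_job_location="europe-west4",   # tuning region at launch
        tuned_model_location="us-central1",   # where the tuned model is served
    )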

According to Google, developers can also use Generative AI Studio to test what answers a model generates in response to specific prompts. Such testing helps software teams determine which of the search giant's models is most suitable for a given application project.
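
That comparison workflow can also be scripted. Below is a small, hypothetical sketch reusing the SDK calls shown earlier; the list of model names is a placeholder to extend as more models become available.

    # Hypothetical sketch of side-by-side prompt testing, the kind of
    # comparison Generative AI Studio offers in its UI.
    from vertexai.language_models import TextGenerationModel

    prompt = "Summarize: Q2 revenue grew 14% while operating costs fell 3%."
    for name in ["text-bison@001"]:  # placeholder list; add other model names here
        model = TextGenerationModel.from_pretrained(name)
        answer = model.predict(prompt, temperature=0.0).text
        print(f"{name}: {answer}")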

"Model Garden and Generative AI Studio leverage Google Cloud's tight partnership with Google Research and Google DeepMind, making it easy for developers and data scientists to use, customize, and deploy models," June Yang, Google Cloud's vice president of cloud AI and industry solutions, detailed in a blog post.

Vertex AIs generative AI tools have already been adopted by a number of early customers. Google Cloud disclosed today that DataStax Inc., Canva Inc. and GitLab Inc. are among the companies that are using the tools to enhance their software products.

Read the original post:
Google Cloud makes generative AI support generally available in ... - SiliconANGLE News

Read More..