Category Archives: AI
UK invests $273 million in AI supercomputer as it seeks to compete with U.S., China – CNBC
Bletchley Park was a codebreaking facility during World War II.
Getty
The U.K. government said Wednesday that it will invest £225 million ($273 million) in an artificial intelligence supercomputer, highlighting the country's ambition to lead in the technology as it races to catch up to the U.S. and China.
The University of Bristol will build the supercomputer, called Isambard-AI after the 19th-century British engineer Isambard Kingdom Brunel. The announcement coincided with the first day of the U.K.'s AI safety summit, which is being held at Bletchley Park.
The U.K. government said Isambard-AI will be the most advanced computer in Britain and, once complete, will be "10 times faster than the U.K.'s current quickest machine." The computer will pack 5,448 GH200 Grace Hopper Superchips, powerful AI processors made by U.S. semiconductor giant Nvidia and designed for high-performance computing applications.
Hewlett Packard Enterprise, the American IT giant, will help build the computer, with aims to eventually connect it to a newly announced Cambridge supercomputer called Dawn. That computer, built by Dell and U.K. firm StackHPC, will be powered by more than 1,000 Intel chips that use water cooling to reduce power consumption. It is expected to start running in the next two months.
The U.K. government hopes the two combined supercomputers will achieve breakthroughs in fusion energy, health care and climate modeling.
The machines will be up and running starting in summer 2024, the government said, and will help researchers analyze advanced AI models to test safety features and drive breakthroughs in drug discovery and clean energy.
The government previously earmarked £1 billion to invest in the semiconductor industry in an attempt to secure the country's chip supplies and reduce its dependence on East Asia for the most commercially important microchips.
Read the original here:
UK invests $273 million in AI supercomputer as it seeks to compete with U.S., China - CNBC
Brave responds to Bing and ChatGPT with a new anonymous and … – The Verge
Brave, the privacy-focused browser that automatically blocks unwanted ads and trackers, is rolling out Leo, a native AI assistant that the company claims provides unparalleled privacy compared to some other AI chatbot services. Following several months of testing, Leo is now available for free to all Brave desktop users running version 1.60 of the web browser. Leo is rolling out in phases over the next few days and will be available on Android and iOS in the coming months.
The core features of Leo aren't too dissimilar from other AI chatbots like Bing Chat and Google Bard: it can translate, answer questions, summarize webpages, and generate new content. Brave says the benefits of Leo over those offerings are that it aligns with the company's focus on privacy: conversations with the chatbot are not recorded or used to train AI models, and no login information is required to use it. As with other AI chatbots, however, Brave says Leo's outputs should be treated with care for potential inaccuracies or errors.
Brave users can access Leo directly from the browser sidebar, seen here on the right of the webpage. Image: Brave
"AI can be a powerful tool, but it can also present growing concerns for data privacy, and there's a need for a privacy-first solution," said Brian Bondy, CTO and co-founder of Brave, in a press release. "Brave is committed to pairing AI with user privacy, and will provide our users with secure and personalized AI assistance where they already spend their time online."
Brave says that additional models will be available to Leo Premium users, alongside access to higher-quality conversations, priority queuing during peak usage, higher rate limits, and early access to new features. In a statement to The Verge, Bondy said that Leo is "built in a way that many different models can be plugged into the feature. We believe that more models will be offered over time and that users should be able to choose among them."
Update, November 2nd, 1:30PM ET: Updated to include a statement from Brave co-founder Brian Bondy regarding future AI models coming to Leo.
See the original post:
Brave responds to Bing and ChatGPT with a new anonymous and ... - The Verge
Is This Artificial Intelligence (AI) Stock-Split Stock a Buy After Q3 … – The Motley Fool
As earnings season kicks into high gear, all eyes will be on big tech. Alphabet (GOOG, GOOGL) recently reported financial results for the quarter ended Sept. 30. Once again, the company showed noticeable progress in both its advertising unit and cloud segment as competition from TikTok, Meta Platforms, Microsoft, and Amazon lingers.
Over the last several months, Alphabet has invested significant capital in artificial intelligence (AI) applications and integrated the technology across all aspects of its business. Nonetheless, Alphabet stock appears to be taking a bit of a breather at the moment.
Let's dig into the Q3 report, take a look at how AI is fueling growth within Alphabet's ecosystem, and assess whether investors should scoop up some shares in the face of the stock's recent lackluster price action.
The majority of Alphabet's revenue is captured in two categories: advertising and cloud. The table illustrates the revenue profile of each of these segments for the quarter ended Sept. 30.
Table: advertising and cloud revenue, quarter ended Sept. 30. Data source: Q3 earnings release. Dollar amounts in millions. Table by author.
On the advertising front, Alphabet increased revenue by 9%, which was primarily fueled by Google Search and YouTube. The company's Services business (which is mostly composed of advertising) grew 11%.
An important dynamic for investors to understand is that the majority of Alphabet's operating profits stem from Services. Per the earnings report, the company increased operating income for Services by 26% during the third quarter and boasted a 35% margin.
To put this into perspective, Alphabet's operating margin for Services in Q3 2022 was 31%. This is a massive expansion in margin, which flows straight to the bottom line.
For the quarter ended Sept. 30, Alphabet reported free cash flow of $22.6 billion, an increase of 40% year over year. By expanding margins and generating more excess cash, Alphabet has been able to invest in additional services and resources. Namely, the company's foray into generative AI is already yielding meaningful returns, underscored by a return to accelerating revenue in advertising, as well as consistently profitable cloud operations.
Let's dig into how Alphabet is integrating AI across the business and what management has to say about its future prospects.
Image source: Getty Images.
One of the more headline-grabbing topics last quarter was coverage of hedge fund manager Bill Ackman's position in Alphabet stock. During recent interviews, Ackman indicated that he finds Alphabet compelling because the company is in a unique position to leverage its vast data repository, stitching it together across a wide array of products and services that benefit both consumers and enterprises.
Alphabet's management spent a good portion of the earnings call providing details around how AI is becoming more integrated into Search and Cloud. On the Search front, Alphabet rolled out a feature called Search Generative Experience (SGE). By layering generative AI into Search, Alphabet is effectively trying to increase its surface area on the Internet. Stated differently, SGE provides users with more links to choose from, thereby "creating new opportunities for content to be discovered."
While it is early innings for SGE, management appears optimistic about its potential to disrupt the existing advertising structure native to Search today.
Another promising opportunity rooted in AI is Alphabet's large language model. The model, dubbed Google Bard, was built to be "a complementary experience to Google Search." Since its commercial release earlier this year, Bard has made significant progress. The tool can now be used across many Google apps, including Workspace, YouTube, and Maps.
When it comes to the cloud, Alphabet's results are pretty impressive. The company shared that over half of generative AI start-ups that have raised outside capital are customers of Google Cloud.
One of the core pillars of Google Cloud is a multifaceted product called Duet AI. Customers such as PayPal are using Duet AI to speed up software development, while others are taking advantage of the tool's data analysis functions within Google Workspace apps.
These dynamics underline precisely what Ackman was getting at. In a relatively short time frame, Alphabet has already integrated AI across several different aspects of its business. For this reason, investors could argue that the growth rates in the table have ample opportunity to eclipse their current profile.
Chart: GOOG P/E ratio data by YCharts.
Alphabet stock is trading well below its prior highs on a price-to-earnings (P/E) and price-to-free cash flow basis. More specifically, the decline becomes more pronounced around the October window, shortly after the company released Q3 earnings.
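For readers less familiar with these multiples, the arithmetic behind them is simple. Here is a minimal sketch in which every input except the $22.6 billion quarterly free cash flow reported above is a hypothetical placeholder, not Alphabet's actual market data:

```python
# Illustrative valuation math: price-to-earnings and price-to-free-cash-flow.
# All inputs except free_cash_flow_q3 are hypothetical placeholders.

market_cap = 1_600e9        # hypothetical market capitalization, USD
net_income_ttm = 66e9       # hypothetical trailing-twelve-month net income, USD
free_cash_flow_q3 = 22.6e9  # Q3 free cash flow reported in the article, USD
fcf_ttm = 4 * free_cash_flow_q3  # crude annualization of a single quarter

pe_ratio = market_cap / net_income_ttm
p_fcf_ratio = market_cap / fcf_ttm

print(f"P/E:   {pe_ratio:.1f}")
print(f"P/FCF: {p_fcf_ratio:.1f}")
```

Comparing today's multiples against the stock's own historical range is what "trading well below its prior highs" means in practice.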
There is no doubt that Alphabet faces stiff competition from Microsoft and Amazon when it comes to cloud computing and artificial intelligence (AI). But this financial review demonstrates how Alphabet is already benefiting from a suite of products and services connected by AI. Considering that the macroeconomy is still vulnerable to rising interest rates and inflation, I think it's appropriate to believe that Alphabet's advertising and cloud businesses are not even close to peak performance.
My fellow Fool Keith Speights recently referenced Alphabet stock as a "no-brainer buy." I wholeheartedly agree with that position and think now is an incredible opportunity to buy the dip in Alphabet stock and hold for the long term. Alphabet has made incredible progress on its artificial intelligence (AI) roadmap, and the company's strong liquidity profile suggests it has the financial horsepower to continue innovating and releasing additional resources at a fast pace.
As AI becomes more integrated across Alphabet's ecosystem, users should become more engaged and sticky, which will ultimately lead to further top-line growth and margin expansion. From my viewpoint, the current payoff from AI efforts is really encouraging, and the best is yet to come.
Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Randi Zuckerberg, a former director of market development and spokeswoman for Facebook and sister to Meta Platforms CEO Mark Zuckerberg, is a member of The Motley Fool's board of directors. John Mackey, former CEO of Whole Foods Market, an Amazon subsidiary, is a member of The Motley Fool's board of directors. Adam Spatacco has positions in Alphabet, Amazon, Meta Platforms, and Microsoft. The Motley Fool has positions in and recommends Alphabet, Amazon, Meta Platforms, Microsoft, and PayPal. The Motley Fool recommends the following options: short December 2023 $67.50 puts on PayPal. The Motley Fool has a disclosure policy.
See the original post:
Is This Artificial Intelligence (AI) Stock-Split Stock a Buy After Q3 ... - The Motley Fool
Google DeepMind's robotics head on general purpose robots, generative AI and office WiFi – TechCrunch
Image Credits: DeepMind
[A version of this piece first appeared in TechCrunch's robotics newsletter, Actuator. Subscribe here.]
Earlier this month, Google's DeepMind team debuted Open X-Embodiment, a database of robotics functionality created in collaboration with 33 research institutes. The researchers involved compared the system to ImageNet, the landmark database founded in 2009 that is now home to more than 14 million images.
"Just as ImageNet propelled computer vision research, we believe Open X-Embodiment can do the same to advance robotics," researchers Quan Vuong and Pannag Sanketi noted at the time. "Building a dataset of diverse robot demonstrations is the key step to training a generalist model that can control many different types of robots, follow diverse instructions, perform basic reasoning about complex tasks and generalize effectively."
At the time of its announcement, Open X-Embodiment contained 500+ skills and 150,000 tasks gathered from 22 robot embodiments. Not quite ImageNet numbers, but it's a good start. DeepMind then trained its RT-1-X model on the data and used it to train robots in other labs, reporting a 50% higher success rate than the in-house methods the teams had developed.
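The premise behind the dataset is easy to illustrate: demonstrations recorded on very different robots are normalized into one shared episode format so a single model can be trained across all of them. The sketch below is a schematic of that idea only; the field names are invented for illustration, and the real Open X-Embodiment format differs in its details.

```python
from dataclasses import dataclass
from typing import List

# Schematic of pooling heterogeneous robot demonstrations into one corpus.
# Field names are invented for illustration, not the real dataset schema.

@dataclass
class Step:
    image: bytes          # camera observation at this timestep
    instruction: str      # natural-language description of the task
    action: List[float]   # embodiment-specific action vector

@dataclass
class Episode:
    embodiment: str       # e.g. "single_arm", "mobile_manipulator"
    steps: List[Step]

def pool_datasets(per_lab_episodes: List[List[Episode]]) -> List[Episode]:
    """Merge episodes contributed by many labs into one training corpus."""
    corpus: List[Episode] = []
    for episodes in per_lab_episodes:
        corpus.extend(episodes)
    return corpus
```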
I've probably repeated this dozens of times in these pages, but it truly is an exciting time for robotic learning. I've talked to so many teams approaching the problem from different angles with ever-increasing efficacy. The reign of the bespoke robot is far from over, but it certainly feels as though we're catching glimpses of a world where the general-purpose robot is a distinct possibility.
Simulation will undoubtedly be a big part of the equation, along with AI (including the generative variety). It still feels like some firms have put the cart before the horse here when it comes to building hardware for general tasks, but a few years down the road, who knows?
Vincent Vanhoucke is someone I've been trying to pin down for a bit. If I was available, he wasn't. Ships in the night and all that. Thankfully, we were finally able to make it work toward the end of last week.
Vanhoucke is new to the role of Google DeepMind's head of robotics, having stepped into it back in May. He has, however, been kicking around the company for more than 16 years, most recently serving as a distinguished scientist for Google AI Robotics. All told, he may well be the best possible person to talk to about Google's robotic ambitions and how it got here.
At what point in DeepMind's history did the robotics team develop?
I was originally not on the DeepMind side of the fence. I was part of Google Research. We recently merged with the DeepMind efforts. So, in some sense, my involvement with DeepMind is extremely recent. But there is a longer history of robotics research happening at Google DeepMind. It started from the increasing view that perception technology was becoming really, really good.
A lot of the computer vision, audio processing, and all that stuff was really turning the corner and becoming almost human level. We started asking ourselves, "Okay, assuming that this continues over the next few years, what are the consequences of that?" One clear consequence was that suddenly having robotics in a real-world environment was going to be a real possibility. Being able to actually evolve and perform tasks in an everyday environment was entirely predicated on having really, really strong perception. I was initially working on general AI and computer vision. I also worked on speech recognition in the past. I saw the writing on the wall and decided to pivot toward using robotics as the next stage of our research.
My understanding is that a lot of the Everyday Robots team ended up on this team. Google's history with robotics dates back significantly farther. It's been 10 years since Alphabet made all of those acquisitions [Boston Dynamics, etc.]. It seems like a lot of people from those companies have populated Google's existing robotics team.
There's a significant fraction of the team that came through those acquisitions. It was before my time; I was really involved in computer vision and speech recognition, but we still have a lot of those folks. More and more, we came to the conclusion that the entire robotics problem was subsumed by the general AI problem. Really solving the intelligence part was the key enabler of any meaningful process in real-world robotics. We shifted a lot of our efforts toward solving that: perception, understanding and control in the context of general AI was going to be the meaty problem to solve.
It seemed like a lot of the work that Everyday Robots was doing touched on general AI or generative AI. Is the work that team was doing being carried over to the DeepMind robotics team?
We had been collaborating with Everyday Robots for, I want to say, seven years already. Even though we were two separate teams, we have very, very deep connections. In fact, one of the things that prompted us to really start looking into robotics at the time was a collaboration that was a bit of a skunkworks project with the Everyday Robots team, where they happened to have a number of robot arms lying around that had been discontinued. They were one generation of arms that had led to a new generation, and they were just lying around, doing nothing.
We decided it would be fun to pick up those arms, put them all in a room and have them practice and learn how to grasp objects. The very notion of learning a grasping problem was not in the zeitgeist at the time. The idea of using machine learning and perception as the way to control robotic grasping was not something that had been explored. When the arms succeeded, we gave them a reward, and when they failed, we gave them a thumbs-down.
For the first time, we used machine learning and essentially solved this problem of generalized grasping, using machine learning and AI. That was a lightbulb moment at the time. There really was something new there. That triggered both the investigations with Everyday Robots around focusing on machine learning as a way to control those robots, and also, on the research side, pushing a lot more robotics as an interesting problem to apply all of the deep learning AI techniques that we've been able to make work so well in other areas.
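The learning setup Vanhoucke describes, arms attempting grasps and receiving a binary thumbs-up or thumbs-down, can be caricatured in a few lines. This is a toy sketch of the reward loop only: `attempt_grasp` is a stand-in for a real arm or simulator, and the published systems in this line of work were far more sophisticated.

```python
import random

# Toy caricature of learning from binary grasp rewards. Each object has a
# hidden success probability that the learner must discover from
# thumbs-up / thumbs-down feedback alone.

def attempt_grasp(true_success_prob: float) -> bool:
    """Stand-in for a real arm (or simulator) attempting a grasp."""
    return random.random() < true_success_prob

def run(trials: int = 5000, lr: float = 0.05) -> list:
    true_probs = [0.2, 0.5, 0.8]        # hidden per-object grasp difficulty
    estimates = [0.5] * len(true_probs)
    for _ in range(trials):
        obj = random.randrange(len(true_probs))
        reward = 1.0 if attempt_grasp(true_probs[obj]) else 0.0
        # Nudge the estimate toward the observed outcome.
        estimates[obj] += lr * (reward - estimates[obj])
    return estimates

print(run())  # estimates drift toward [0.2, 0.5, 0.8]
```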
Was Everyday Robots absorbed by your team?
A fraction of the team was absorbed by my team. We inherited their robots and still use them. To date, we're continuing to develop the technology that they really pioneered and were working on. The entire impetus lives on, with a slightly different focus than what was originally envisioned by the team. We're really focusing on the intelligence piece a lot more than the robot building.
You mentioned that the team moved into the Alphabet X offices. Is there something deeper there, as far as cross-team collaboration and sharing resources?
It's a very pragmatic decision. They have good Wi-Fi, good power, lots of space.
I would hope all the Google buildings would have good Wi-Fi.
You'd hope so, right? But it was a very pedestrian decision of us moving in here. I have to say, a lot of the decision was they have a good café here. Our previous office had not so good food, and people were starting to complain. There is no hidden agenda there. We like working closely with the rest of X. I think there's a lot of synergies there. They have really talented roboticists working on a number of projects. We have collaborations with Intrinsic that we like to nurture. It makes a lot of sense for us to be here, and it's a beautiful building.
There's a bit of overlap with Intrinsic, in terms of what they're doing with their platform: things like no-code robotics and robotics learning. They overlap with general and generative AI.
It's interesting how robotics has evolved from every corner being very bespoke and taking on a very different set of expertise and skills. To a large extent, the journey we're on is to try and make general-purpose robotics happen, whether it's applied to an industrial setting or more of a home setting. The principles behind it, driven by a very strong AI core, are very similar. We're really pushing the envelope in trying to explore how we can support as broad an application space as possible. That's new and exciting. It's very greenfield. There's lots to explore in the space.
I like to ask people how far off they think we are from something we can reasonably call general-purpose robotics.
There is a slight nuance with the definition of general-purpose robotics. We're really focused on general-purpose methods. Some methods can be applied to both industrial or home robots or sidewalk robots, with all of those different embodiments and form factors. We're not predicated on there being a general-purpose embodiment that does everything for you. If you have an embodiment that is very bespoke for your problem, it's fine. We can quickly fine-tune it into solving the problem that you have, specifically. So this is a big question: Will general-purpose robots happen? That's something a lot of people are tossing around hypotheses about, if and when it will happen.
Thus far there's been more success with bespoke robots. I think, to some extent, the technology has not been there to enable more general-purpose robots to happen. Whether that's where the business model will take us is a very good question. I don't think that question can be answered until we have more confidence in the technology behind it. That's what we're driving right now. We're seeing more signs of life that very general approaches that don't depend on a specific embodiment are plausible. The latest thing we've done is this RTX project. We went around to a number of academic labs (I think we have 30 different partners now) and asked to look at their tasks and the data they've collected. Let's pull that into a common repository of data, and let's train a large model on top of it and see what happens.
What role will generative AI play in robotics?
I think it's going to be very central. There was this large language model revolution. Everybody started asking whether we can use language models for robots, and I think it could have been very superficial. You know, "Let's just pick up the fad of the day and figure out what we can do with it," but it's turned out to be extremely deep. The reason for that is, if you think about it, language models are not really about language. They're about common sense reasoning and understanding of the everyday world. So, if a large language model knows you're looking for a cup of coffee, you can probably find it in a cupboard in a kitchen or on a table.
Putting a coffee cup on a table makes sense. Putting a table on top of a coffee cup is nonsensical. It's simple facts like that you don't really think about, because they're completely obvious to you. It's always been really hard to communicate that to an embodied system. The knowledge is really, really hard to encode, while those large language models have that knowledge and encode it in a way that's very accessible and we can use. So we've been able to take this common-sense reasoning and apply it to robot planning. We've been able to apply it to robot interactions, manipulations, human-robot interactions, and having an agent that has this common sense and can reason about things in a simulated environment, alongside perception, is really central to the robotics problem.
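A rough sketch of the pattern Vanhoucke describes: a language model supplies a common-sense plausibility score for each candidate action, and the planner weighs it against the robot's own estimate of what it can physically do (a combination similar in spirit to published work such as SayCan). Both scoring functions below are hypothetical stand-ins with hard-coded values; a real system would query an LLM and a learned affordance model.

```python
# Hypothetical sketch of LLM-guided robot planning. Both scorers are
# stand-ins: lookup tables in place of a language model and an
# affordance model.

def llm_plausibility(instruction: str, action: str) -> float:
    """Stand-in for an LLM scoring how sensible an action is for a task."""
    table = {
        ("find coffee", "open cupboard"): 0.8,
        ("find coffee", "look on table"): 0.7,
        ("find coffee", "put table on cup"): 0.01,  # nonsensical
    }
    return table.get((instruction, action), 0.1)

def affordance(action: str) -> float:
    """Stand-in for the robot's estimate that it can execute the action."""
    return {"open cupboard": 0.9, "look on table": 0.95,
            "put table on cup": 0.05}.get(action, 0.5)

def choose_action(instruction: str, candidates: list) -> str:
    # Combine common sense (language model) with feasibility (perception).
    return max(candidates,
               key=lambda a: llm_plausibility(instruction, a) * affordance(a))

actions = ["open cupboard", "look on table", "put table on cup"]
print(choose_action("find coffee", actions))  # -> "open cupboard"
```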
Simulation is probably a big part of collecting data for analysis.
Yeah. It's one ingredient to this. The challenge with simulation is that you then need to bridge the simulation-to-reality gap. Simulations are an approximation of reality. It can be very difficult to make them very precise and very reflective of reality. The physics of a simulator have to be good. The visual rendering of the reality in that simulation has to be very good. This is actually another area where generative AI is starting to make its mark. You can imagine, instead of actually having to run a physics simulator, you just generate using image generation or a generative model of some kind.
Tye Brady recently told me Amazon is using simulation to generate packages.
That makes a lot of sense. And going forward, I think beyond just generating assets, you can imagine generating futures. Imagine what would happen if the robot did an action, then verifying that it's actually doing the thing you wanted it to, and using that as a way of planning for the future. It's sort of like the robot dreaming, using generative models, as opposed to having to do it in the real world.
More:
Google DeepMind's robotics head on general purpose robots, generative AI and office WiFi - TechCrunch
Deal Dive: AI’s not the only sector dodging the funding slowdown – TechCrunch
A tougher fundraising environment reveals which companies and sectors investors have real conviction in, and which areas aren't attractive outside of a bull market. AI startups dominated dealmaking this year, but there is another sector that VCs have stayed committed to: defense tech.
We saw the latest example of this trend just this week. On Tuesday, Shield AI raised a $200 million Series F round led by Thomas Tull's US Innovative Technology Fund, with participation from Snowpoint Ventures and Riot Ventures, among others. The round values the San Diego-based autonomous drone and aircraft startup at $2.7 billion.
The sheer size of the round alone makes this deal interesting. Mega-rounds over $100 million have become uncommon enough to warrant raised eyebrows in today's climate. Through the third quarter of 2023, only 194 rounds above $100 million were raised, compared to 538 in 2022 and 841 in 2021, according to PitchBook. Late-stage fundraising has also been largely muted for much of 2023. Just over $57.3 billion was invested into late-stage startups through the third quarter of this year, much lower than the $94 billion such companies raised in 2022, and the $152 billion we saw in 2021.
Brandon Tseng, the co-founder and president of Shield AI, told TechCrunch+ his company was able to raise in this environment largely because of its metrics. The company's revenue is growing 90% year over year, per Tseng, and it is on the path to becoming profitable in 2025.
This round is also made more interesting by the space the company operates in, since it's the latest sign of how much investors have leaned into defense tech in recent years.
Tseng agreed that investor appetite for companies like his has improved a lot, and he recalled how Shield AI's first few fundraises were particularly hard.
Read the original:
Deal Dive: AI's not the only sector dodging the funding slowdown - TechCrunch
AI pioneer Yoshua Bengio warns against letting Big Tech control rules – HT Tech
In the world of Artificial Intelligence (AI), there's a big problem that we should pay attention to, says one of the godfathers of AI. A respected figure in the field, Yoshua Bengio, is concerned about the growing power of a few big companies in AI. He's worried that these companies might have too much control over AI technology. He even thinks it's one of the main problems we face when it comes to AI.
Bengio, who has won a prestigious Turing Award for his work in AI, recently talked to Insider about his concerns. He said, "We are making more and more powerful AI systems, and the big question for democracy is who gets to control these systems? Is there a risk that only a few companies will have all the power?" It's an important question that has been on his mind for years, but recent developments, like the emergence of systems like ChatGPT, have made him even more worried about this issue.
Yann LeCun, another important figure in AI, has raised similar concerns. He suggested that influential tech leaders like Sam Altman from OpenAI are trying to control AI by pushing for stricter rules and regulations. However, Bengio doesn't agree with this idea. He doesn't think these tech leaders are trying to take over the AI industry.
Bengio believes that it's clear we should not let the big companies write the rules for AI. But he disagrees with the notion that these companies are trying to manipulate the rules in their favor. He thinks that the rules and regulations, as they are currently being discussed, won't necessarily benefit the big tech companies.
According to Bengio, the proposed regulations are aimed at making sure the big AI systems built by these large companies are closely watched and regulated. This means that the big companies will face more scrutiny and higher costs. However, smaller players who work on more specialised AI or create applications using the big AI systems won't be under the same strict regulations.
In short, Bengio wants to make sure that the AI rules are fair and not controlled by just a few big companies. He believes that regulations should ensure that the powerful AI systems are monitored closely. This way, everyone can benefit from AI technology without worrying about it being controlled by a select few.
See the rest here:
AI pioneer Yoshua Bengio warns against letting Big Tech control rules - HT Tech
Scientists excited by AI tool that grades severity of rare cancer – BBC.com
1 November 2023
Tina, diagnosed with a sarcoma in June 2022, now has scans every three months
Artificial intelligence is nearly twice as good at grading the aggressiveness of a rare form of cancer from scans as the current method, a study suggests.
By recognising details invisible to the naked eye, AI was 82% accurate, compared with 44% for lab analysis.
Researchers from the Royal Marsden Hospital and Institute of Cancer Research say it could improve treatment and benefit thousands every year.
They are also excited by its potential for spotting other cancers early.
AI is already showing huge promise for diagnosing breast cancers and reducing treatment times.
Computers can be fed huge amounts of information and trained to identify the patterns in it to make predictions, solve problems and even learn from their own mistakes.
"We're incredibly excited by the potential of this state-of-the-art technology," said Professor Christina Messiou, consultant radiologist at The Royal Marsden NHS Foundation Trust and professor in imaging for personalised oncology at The Institute of Cancer Research, London.
"It could lead to patients having better outcomes, through faster diagnosis and more effectively personalised treatment."
Tina's sarcoma was at the back of her abdomen
The researchers, writing in Lancet Oncology, used a technique called radiomics to identify signs, invisible to the naked eye, of retroperitoneal sarcoma - which develops in the connective tissue of the back of the abdomen - in scans of 170 patients.
With this data, the AI algorithm was able to grade the aggressiveness of 89 other European and US hospital patients' tumours from scans much more accurately than biopsies, in which a small part of the cancerous tissue is analysed under a microscope.
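In broad strokes, a radiomics pipeline turns each scan into a vector of quantitative image features (texture, shape, intensity statistics) and trains a classifier on those vectors. The sketch below shows only the general shape of such a pipeline, on synthetic random data; it is not the study's actual model, feature set, or result.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Schematic radiomics pipeline on synthetic data. Each row stands in for a
# feature vector extracted from a tumour region on a CT scan; the label is
# the tumour grade a pathologist would assign.
rng = np.random.default_rng(0)
n_patients, n_features = 170, 30          # mirrors the training-cohort size
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)   # 0 = low grade, 1 = high grade

model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")  # ~0.5 on pure noise
```

On random data the accuracy hovers around chance; the 82% reported in the study reflects real structure in the scans that such features can capture.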
'Quicker diagnosis'
When dental nurse Tina McLaughlan was diagnosed - in June last year, after stomach pain - with a sarcoma at the back of her abdomen, doctors relied on computerised-tomography (CT) scan images to find the problem.
They decided it was too risky to give her a needle biopsy.
The 65-year-old, from Bedfordshire, had the tumour removed and now returns to the Royal Marsden for scans every three months.
She was not part of the AI trial but told BBC News it would help other patients.
"You go in for the first scan and they can't tell you what it is - they didn't tell me through all my treatment, until the histology, post-op, so it would be really useful to know that straight away," Ms McLaughlan said.
"Hopefully, it would lead to a quicker diagnosis."
'Personalised treatment'
About 4,300 people in England are diagnosed with this type of cancer each year.
Prof Messiou hopes the technology can eventually be used around the world, with high-risk patients given specific treatment while those at low risk are spared unnecessary treatments and follow-up scans.
Dr Paul Huang, from the Institute of Cancer Research, London, said: "This kind of technology has the potential to transform the lives of people with sarcoma - enabling personalised treatment plans tailored to the specific biology of their cancer.
"It's great to see such promising findings."
Read more from the original source:
Scientists excited by AI tool that grades severity of rare cancer - BBC.com
Are EU regulators ready for concentration in the AI market? – EURACTIV
Artificial Intelligence is the next frontier of market concentration in the internet economy, but experts who spoke to Euractiv feel that even the EU's shiny new regulatory tools might be ill-suited to prevent abuses of market dominance.
In the coming weeks, EU policymakers are expected to finalise the AI Act, landmark legislation to regulate Artificial Intelligence (AI) based on its capacity to cause harm. Since the draft law was first proposed, the discussion has been disrupted by the meteoric rise of ChatGPT and similar models.
The key to ChatGPT's success was not its use of generative AI, which has been around for some time, but rather the unprecedented scale and performance of its model, OpenAI's GPT-3.5, which has already been surpassed by GPT-4.
As a result, the discussions on the AI Act have been departing from the original horizontal nature of the law in favour of introducing stricter obligations for high impact foundation models like GPT-4.
This more targeted approach focusing on the most impactful actors, which incidentally happen to be primarily non-European companies, has become increasingly recurrent in EU digital policy, from the very large online platforms of the Digital Services Act (DSA) to the gatekeepers of the Digital Markets Act (DMA).
References to these categories are increasingly common in legislative provisions targeting Big Tech companies. However, no such cross-link is available for the EU's AI rulebook due to the DMA's most spectacular failure to date: not managing to designate any cloud service.
"Big Tech is leveraging its market power in the cloud sector to gain a dominant position in the AI market. This process has been ongoing for a long time," Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, told Euractiv.
The question of which foundation models should be considered high impact is still a moving target, with policymakers oriented toward a combination of different criteria. However, one of the criteria initially floated has been the amount of computing power used to train the model.
Computing power is a critical component of AI. It is concentrated mainly in the hands of companies that have reached massive economies of scale for their commercial cloud services: hyperscalers like Amazon's AWS, Microsoft's Azure and Google Cloud.
There is no direct relation between being a hyperscaler and being a leading company in the field of AI. In addition, using the computing power used to train a model as a criterion to designate a high impact foundation model might also have a perverse effect, as investing more initially usually means the model is more robust.
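To make the compute criterion concrete: training compute is commonly approximated as roughly 6 × parameters × training tokens, in floating-point operations. The sketch below applies that rule of thumb to invented model sizes and compares them against an illustrative cutoff; none of these numbers are the AI Act's actual figures, which were still under negotiation at the time of writing.

```python
# Rule-of-thumb training-compute estimate: FLOPs ~= 6 * params * tokens.
# Model sizes and the threshold are illustrative, not official AI Act values.

def training_flops(params: float, tokens: float) -> float:
    return 6.0 * params * tokens

THRESHOLD = 1e25  # illustrative "high impact" cutoff, in FLOPs

models = [
    ("small open model", 7e9, 2e12),        # hypothetical sizes
    ("frontier-scale model", 1e12, 15e12),
]
for name, params, tokens in models:
    flops = training_flops(params, tokens)
    status = "high impact" if flops >= THRESHOLD else "below threshold"
    print(f"{name}: {flops:.1e} FLOPs -> {status}")
```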
However, training a model is only one part of the equation, as constant computing power is needed to fine-tune the model and to run its day-to-day operations.
Moreover, the impact of a foundation model is, to a large extent, proportionate to its user base. At the same time, only a few companies worldwide can run an AI model with hundreds of millions of users, such as ChatGPT.
"Nobody can build a cutting-edge foundation model without having some kind of partnership with a Big Tech company," Max von Thun, Europe director at the Open Markets Institute, told Euractiv.
In this context, leading AI companies are partnering up with tech giants without any intervention from competition authorities, as was the case for OpenAI with Microsoft and Anthropic with Amazon. These investments are often accompanied by more or less exclusive arrangements on the underlying cloud infrastructure.
"Considering these partnerships as mergers is tricky because it depends on whether the cloud provider has a stake and influence on the generative AI provider and the type of relationship, like whether it's an exclusive or only a strategic partnership," Christophe Carugati, an affiliate fellow at Bruegel, told Euractiv.
The idea of a foundation model is that it can be adapted to various purposes, as new AI applications can be built on top of them. Since ChatGPTs public launch, the hype around AI has led to the blossoming of thousands of AI-driven companies.
However, the expensive infrastructure costs of powerful AI models are already pushing this market to concentrate in fewer hands.
"Many of the current players are suffering huge losses, largely because of how expensive the models are to run," said Zach Meyers, a Centre for European Reform research fellow.
"It seems inevitable that many of the current players will either be left behind or acquired by bigger companies."
According to Andrea Renda, one of the experts who has contributed the most to shaping the AI Act behind the scenes, we are going toward a "platformisation" of the AI market, whereby most new AI models will be built upon a handful of foundation models.
This market concentration could give dominant players several ways to further entrench their position. For instance, when an AI solution is built on a foundation model, the downstream economic operator might be forced to run its AI application on the same cloud infrastructure, in a process known as bundling.
That is already the case when an AI solution is built as an Application Programming Interface (API) to a foundation model, which provides a sort of filter adapting the model's responses to the needs of the AI solution. As the query runs directly against the foundation model, the API is supported by its underlying cloud infrastructure.
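The dependency is structural: a downstream "AI solution" is often little more than a thin wrapper around a foundation-model API, so every query it serves executes on the provider's cloud. A minimal sketch of that shape, with an invented endpoint and model name standing in for any real provider:

```python
import json
import urllib.request

# Minimal downstream "AI solution": one API call plus a prompt. The endpoint
# and model name are invented; the point is that every query runs on the
# foundation-model provider's cloud infrastructure, not the developer's.

API_URL = "https://api.example-foundation.com/v1/complete"  # hypothetical

def contract_summarizer(document: str) -> str:
    """A niche AI product that is really a wrapper over a foundation model."""
    payload = {
        "model": "example-large",  # hypothetical model name
        "prompt": f"Summarize this contract in plain language:\n{document}",
    }
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["text"]
```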
Conversely, hyperscalers would be incentivised to self-preference or bundle their foundation models with their cloud offers.
"What we are witnessing is some of the Big Tech giants occupying the territory by making large investments in a handful of Gen AI companies, without anyone looking into it. It's like we learned nothing from the recent past," antitrust economist Cristina Caffarra told Euractiv.
"The usual suspects are grandfathering market power into the future, and there is a lot of hand-wringing, but it's already happened," she said.
One way to unbundle the foundation model and the cloud service underneath is by using a fully open-source foundation model. However, these are rather rare since many AI models that claim to be open-source tend to retain critical information.
Self-preferencing and bundling are critical elements that enabled the formation of mono- and oligopolies in critical parts of the internet economy, precisely what the DMA promised to prevent with its ex-ante obligations, since antitrust probes in the online sphere tend to conclude only when the damage is already done.
"One of the aims of the DMA is to move faster to prevent monopolisation before it's too late. Ironically, the platforms designated so far are in markets that are already highly concentrated. With AI and cloud, there is the possibility to be more proactive," von Thun added.
The DMA failed to designate any hyperscaler as a gatekeeper because its quantitative thresholds did not fit the cloud sector.
Euractiv understands that France and Germany are pushing the European Commission to launch a market investigation under the qualitative criterion. Still, this process could take years, and any resulting designation might face years of litigation.
Meanwhile, the AI market is moving at breakneck speed, with new generations of foundation models released every few months.
According to Jonathan Sage, a senior policy advisor at Portland, without the DMA's cloud designation, there is little the EU can do to prevent hyperscalers from creating dependencies between their cloud infrastructure and foundation models.
Still, the DMA might be unable to prevent the entrenching of market power in AI, since it does not directly cover foundation models.
"A more effective solution would be replicating the DMA's systemic approach specifically for foundation models, as it is still unclear what consequences market dominance in this sector will have for downstream operators," Sebastiano Toffaletti, secretary general of the Digital SME Alliance, told Euractiv.
However, putting in place new rules or amending existing ones takes years, which is precisely what the AI market might not have. Anti-trust economist Caffarra stressed it was a matter of timing.
"The DMA is looking at old problems but does not have the means to pre-empt a tight oligopoly forming at the foundation level in AI. It's just not the right tool. Before anything moves, it will be far too late," she concluded.
[Edited by Zoran Radosavljevic/Alice Taylor]
Original post:
Are EU regulators ready for concentration in the AI market? - EURACTIV
FTC to Host Virtual Roundtable on AI and Content Creation – Federal Trade Commission News
The Federal Trade Commission staff will host a virtual roundtable discussion on October 4, 2023, to better understand the impact of the use of generative artificial intelligence on music, filmmaking, and other creative fields.
FTC staff are seeking to better understand how the development and deployment of AI tools that can generate text, images, and audio (often referred to as generative artificial intelligence) may impact open and fair competition or enable unlawful business practices across markets, including in creative industries. The listening session will focus on different issues posed by generative AI, including concerns raised by musicians, actors, and other content creators about the use of AI to create entertainment and other content.
FTC Chair Lina M. Khan will provide opening remarks to kick off the event and will then hear from representatives from a variety of creative fields. They will explore the ways emerging AI tools are reshaping each of the participants' respective industries and how they are responding to these changes. The listening session, which is being led by the FTC's Office of Technology, is part of the agency's efforts to keep up with the latest developments in emerging technologies such as AI.
The event will begin at 3 p.m. ET and be webcast on the FTC's website at FTC.gov. Additional information, including a list of panelists, will be posted in the coming days to the event page.
The lead staffer on this matter is Madeleine Varner from the FTC's Office of Technology.
Read more here:
FTC to Host Virtual Roundtable on AI and Content Creation - Federal Trade Commission News
World must pass ‘AI stress test’, UK Deputy PM says, announcing … – UN News
Mr. Dowden said the so-named AI Safety Summit, set for November, will aim to preempt the risks posed by frontier AI and explore how it can be used for the public good.
"AI is the biggest transformation the world has known," he emphasized, noting that it is going to change everything we do, the way we live, relations between nations, and that it is "going to change the United Nations, fundamentally."
"Our task as governments is to understand it, grasp it, and seek to govern it, and we must do so at great speed," he stressed.
Mr. Dowden drew parallels between the work of inventors Thomas Edison (the lightbulb) and Tim Berners-Lee (the World Wide Web) and the potential of artificial intelligence today.
They surely could not have envisaged, respectively, the illumination of the New York skyline at night or the wonders of the modern internet, but they suspected the transformative power of their inventions.
He emphasized that frontier AI has the potential not just to similarly transform our lives, but to reimagine our understanding of science, from decoding the smallest particles to the farthest reaches of the universe.
One of the main concerns highlighted by the Deputy Prime Minister is the unprecedented speed at which AI is evolving, with the pace having far-reaching implications, both in terms of the opportunities it presents and the risks it poses.
On the positive side, AI models currently under development could play a pivotal role in addressing some of the worlds most pressing challenges: clean energy, climate action, food production or detecting diseases and pandemics.
In fact, every single challenge discussed at this year's General Assembly, and more, could be improved or even solved by AI, he stated.
However, amidst the promise of AI, Mr. Dowden also sounded a cautionary note, underscoring the dangers of misuse, citing examples such as hacking, cyberattacks, deepfakes and the potential loss of control over AI systems.
Indeed, many argue this technology is like no other, in the sense that its creators themselves don't know how it works; the principal risk will therefore come from misuse, misadventure, or misalignment with human objectives, he added.
"There is no future in which this technology does not develop at an extraordinary pace," he said, and while companies were doing their best to set up guardrails, "the starting gun has been fired" on a globally competitive race in which individual companies as well as countries will strive to push the boundaries as far and fast as possible.
Against this backdrop, the AI Safety Summit will focus on addressing extreme risks associated with frontier AI, the Deputy Prime Minister said.
The summit aims to bring together experts, policymakers and stakeholders to explore strategies for mitigating these risks while harnessing the positive potential of AI for public good.
"We cannot afford to become trapped in debates about whether AI is a tool for good or a tool for ill; it will be a tool for both. We must prepare for both and insure against the latter," he urged.
Full statement available here.
See more here:
World must pass 'AI stress test', UK Deputy PM says, announcing ... - UN News