The Government’s Role In Progressing AI In The UK – New … – Mondaq News Alerts

OpenAI launched ChatGPT, powered by GPT-3.5, in November 2022, and it has set growth records ever since, spreading like wildfire; today it nears one billion unique visitors per month. Since its launch, the world has been consumed with talk of AI and its potential use cases across a wide range of industries.

Sam Altman, co-founder and CEO of OpenAI, has said that AI tools can find solutions to "some of humanity's biggest challenges, like climate change and curing cancer".

There's also been plenty of talk about the largest tech companies (namely Google and Meta, as well as Microsoft) and their race in pursuit of Artificial General Intelligence (AGI). This makes it sound very much like an arms race, a comparison many have made. In any race there's a concern that competitors will cut corners, and in this particular race many fear the consequences could be disastrous. In this article, we'll explore those possible consequences and the UK's stance on the regulation of AI to help safeguard against them.

AI is seen as central to the government's ambition to make the UK a science and technology superpower by 2030, and Prime Minister Rishi Sunak again made this clear in his opening keynote at June's London Tech Week: "If our goal is to make this country the best place in the world for tech, AI is surely one of the greatest opportunities for us".

As discussed here, AI was also a headline feature earlier this year in the government's Spring Budget. Both within that Budget and since then, the following has been announced:

Despite AI's many potential benefits, there's also growing concern about its risks, ranging from the widely discussed risk of disinformation to the evolving risk to cybersecurity. Two of the most frequently raised risks are:

Most AI tools are built on Large Language Models (LLMs), which effectively means they are trained on large datasets, mostly publicly available on the internet. It stands to reason that these tools can only be as good as the data they're trained on; if this data isn't carefully vetted, the tools will be prone to misinformation and even bias, as we saw with Microsoft's infamous chatbot Tay, which quickly began to post discriminatory and offensive tweets on Twitter.

AI alignment is a growing field within AI safety that aims to align the technology with our (i.e. human) goals. It is therefore critical to ensuring that AI tools are safe, ethical and consistent with societal values. For example, OpenAI has stated: "Our research aims to make AGI aligned with human values and follow human intent".

Sir Patrick Vallance, the UK's former Government Chief Scientific Adviser, warned earlier this year that "there will be a big impact on jobs and that impact could be as big as the Industrial Revolution was". This isn't an uncommon view either: Goldman Sachs recently predicted that roughly two-thirds of occupations could be partially automated by AI. More worryingly, IBM's CEO Arvind Krishna predicted that 30% of non-customer-facing roles could be entirely replaced by AI and automation within the next five years, which equates to around 7,800 jobs at IBM. Job displacement and economic inequality are among the biggest risks of AI.

Many have warned of other risks such as privacy concerns, the concentration of power, and even existential threats. As this is a fast-evolving industry, you could also argue that because we don't yet fully understand what AI could look like, or be used for, in the future, we don't yet know all of the risks it will bring.

While talking up the potential benefits of AI, ranging from superbug-killing antibiotics to agricultural uses and potential cures for diseases, Rishi Sunak also recognised the potential dangers: "The possibilities are extraordinary. But we must, and we will, do it safely. I know people are concerned". Keir Starmer, also speaking at London Tech Week, continued this theme, saying "we need to put ourselves into a position to take advantage of the benefits but guard against the risks" and calling for the UK to "fast forward" AI regulation.

Rishi Sunak also went on to say that "the very pioneers of AI are warning us about the ways these technologies could undermine our values and freedoms, through to the most extreme risks of all". This could be a reference to multiple pioneers, including:

Despite these calls, it should also be acknowledged that AI is extremely difficult to regulate. It is constantly evolving, so it becomes difficult to predict what it will look like tomorrow and, as a result, what regulation needs to look like to avoid quickly becoming obsolete. The fear for governments, and the pushback from AI companies, is that overregulation will stifle innovation and progress, including all the positive impacts AI could have, so a balance must be struck.

Earlier this year, it seemed that the UK's stance on regulation was to be very hands-off, with oversight largely left to existing regulators and the industry itself under a "pro-innovation approach to AI regulation" (the name of the white paper initially published on 29 March 2023). Within this white paper, unlike the EU, the UK government confirmed that it wasn't looking to adopt new legislation or create a new regulator for AI. Instead, it would look to existing regulators like the ICO (Information Commissioner's Office) and the CMA (Competition and Markets Authority) to "come up with tailored, context-specific approaches that suit the way AI is actually being used in their sectors". This approach was criticised by many, including Keir Starmer, who commented that "we haven't got an overarching framework".

However, since this white paper (which has since been updated), Rishi Sunak has shown signs that the UK's light-touch approach to regulation needs to evolve. At London Tech Week, he stated that he wants "to make the UK not just the intellectual home but the geographical home of global AI safety regulation". This was coupled with the announcement that the UK will host a global summit on safety in artificial intelligence this autumn where, according to a No. 10 spokesman, the event will "provide a platform for countries to work together on further developing a shared approach to mitigate these risks".

£100m has also been announced for the UK's AI Foundation Model Taskforce, with Ian Hogarth, co-author of the annual State of AI report, announced as its lead. The key focus for this Taskforce will be "taking forward cutting-edge safety research in the run-up to the first global summit on AI". It isn't just the global summit coming to the UK: OpenAI has confirmed that its first international office will open in London. Sam Altman stated this is an opportunity to "drive innovation in AGI development policy" and that he's excited to see "the contributions our London office will make towards building and deploying safe AI".

Time will tell on both the potential (good and bad) of AI and how regulation within the UK and globally rolls out, but it's clear that the UK wants to play a leading role in both regulation and innovation, which may at times clash with each other. In an interview with the BBC on AI regulation, Sunak said: "I believe the UK is well-placed to lead and shape the conversation on this because we are very strong when it comes to AI".

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.
