Category Archives: Artificial Intelligence

Artificial intelligence can learn but it's not ready to teach, experts say – Business in Vancouver

Human Resources & Education | Computer scientists, educators don't see artificial intelligence fully replacing people

By Albert Van Santvoort | May 5, 2023, 4:30pm

Artificial intelligence can help students learn, but experts say it can't replace teaching | Andrea De Santis/Unsplash

In late January, ChatGPT creator OpenAI addressed criticism from schools that the text generated from its AI-powered chatbot encouraged student plagiarism and cheating.

The company responded by releasing a software tool to help teachers detect whether the author of an assignment was a student, or artificial intelligence.

This is not the first time an online or digital tool has been accused of potentially destroying education. In 1998, The New York Times published an article titled "The Trouble with Cheating in the Digital Age," one of many stories published by various media outlets that warned about the dangers posed by the internet (library closures and increased plagiarism among them).

Today, computer science educators say such concerns over AI are likely overblown.

Steve DiPaola is a professor in Simon Fraser University's cognitive science program, and leads the school's iVizLab, which focuses on AI-based computational models of human characteristics.

He expects AI will come to be used by students as a tool for gathering information, or to work through and consider an assignment. It won't be used to create essays or answers that are fully copied and pasted.

"What you get cheaply out of these systems is going to be really obvious, really templatized," said DiPaola. "And we're all going to notice it, and we're not going to care about it."

After all, ChatGPT isn't always accurate. Beyond referencing wrong information, it can "hallucinate" by providing a convincing but made-up answer.

For example, DiPaola asked a Vincent van Gogh chatbot how the artist's friend Paul Gauguin helped him heal spiritually. The program responded randomly with: "I have a friend in Jesus."

When asked how it would disrupt education, ChatGPT didn't focus on cheating.

Rather, the chatbot highlighted that AI could offer personalized learning by analyzing a student's learning habits and shaping a curriculum to meet their specific needs. The program also responded that ChatGPT could enhance student engagement by providing immediate feedback, personalized recommendations and interactive discussions.

Though the technology is useful, ChatGPT's inaccuracies could create problems, said Vered Shwartz, a University of British Columbia assistant professor of computer science.

ChatGPT also said it could introduce automatic assignment grading, but Shwartz is skeptical. The industry has had automatic grading for questions with non-written answers for decades. And, with so much variation in written responses, Shwartz said she doubts that a program would be able to correctly grade written responses that vary too much from the template answer.

Another potential consequence of AI adoption in education, according to ChatGPT, is job losses.

Shwartz, however, was unconvinced. "Jobs will definitely change," she said, but she added that it would be difficult for an AI to replace educators altogether. Even if the process of building a syllabus or a lesson plan could be automated, it would still have to be reviewed and likely taught by a person, she said.

avansantvoort@biv.com


Opinion: Let’s face it, artificial intelligence is becoming the new … – The Globe and Mail


The recent hype over AI is much like the fever that fuelled crypto, when you could once slap 'blockchain' onto the name of any company and see its stock soar fourfold.

Martin Meissner/The Associated Press

Amid all the hoo-ha over artificial intelligence this year, Microsoft Corp. MSFT-Q, which has a stake in the laboratory behind the ChatGPT bot, has seen its shares go up more than 25 per cent.

Various AI stocks, with names you've never heard of, are hotter than hot, even with a recession looming and at the foot of a tech beatdown in the markets. BigBear.ai Holdings Inc., an information-technology services company, is up about 250 per cent on the year; at one point in February, it was up 700 per cent.

Wanna make money? Boy, do I have a great idea for you. Just add "AI" to the name of your company. There's a voice-recognition company that used to be called SoundHound Inc., but went public in 2022 as SoundHound AI Inc. SOUN-Q. The stock has admittedly pared back some gains since then, but it is still up nearly 100 per cent for the year.


Any of this sound familiar? It's the same fever that fuelled crypto, when you could once slap "blockchain" on the name of any company and see its stock soar fourfold. I'm pretty sure that soon, as with crypto, the term "AI bro" will enter the lexicon to describe a young man who is passionate and enthusiastic about the industry.

Oh, wait: it has. An Urban Dictionary entry for "AI bro" was made in January of this year.


Let's face it, AI is the new crypto. All the hype, investment mania and scams of past years' investment cycles are going to come back.

To that, you might slam your table, squint your eye around your monocle and say: "Wait, that's not right! At least AI does something. Crypto is just make-believe money!"


A commonly expressed view. And a wrong one. But let's, for the sake of argument, say that it is correct. Has that distinction resulted in any difference in the markets?

It wasn't just 2020, the year of the really expensive digital pictures, or NFTs, that crypto was booming. Remember 2017, when a market frenzy was sparked by the Canadian-founded Ethereum, which let anyone easily create their own coin?

At one point that year, the furniture chain Ethan Allen Interiors Inc. ETD-N was up 50 per cent, largely attributed to how its ticker at the time, ETH, was the same as the abbreviation for Ethereums ether coin.

While Ethan Allen eventually changed its ticker to distance itself, others fiercely coveted that nominal crypto association.


I wasn't being hyperbolic when I wrote earlier that companies can slap "blockchain" onto their names and see their stock quadruple in value. That was exactly what happened when Long Island Iced Tea Corp. changed its name to Long Blockchain Corp.

Meanwhile, Eastman Kodak Co. KODK-N, the camera maker, saw its stock triple in value after a bad year by announcing it would go into crypto mining.

Then there were the outright scams. The infamous OneCoin raised US$4-billion, but there is no evidence it had even developed a digital currency based on blockchain technology. Such scams are so plentiful that the U.S. Justice Department is still announcing new 2017-era cases to this day.

Such scams abounded because they were easy. Regardless of what many think of it, there are defined metrics for what makes a cryptocurrency, namely in terms of the code that goes into it. But people can't see or hold a cryptocurrency. So, it's easy to claim you've made one. The end user doesn't always have the sophistication to tell the difference until it's too late.


Again, sounds familiar? Have you ever wondered how many purported AI projects are actually AI?

A London-based startup, Engineer.ai, once claimed to use artificial intelligence to help people build apps. It attracted US$30-million from investors, including a unit of Japan's SoftBank Group Corp. SFTBY. The Wall Street Journal later reported that Engineer.ai's AI claims were greatly exaggerated: actual humans in India were building the apps.

Such practices are so rampant, there is even a neologism coined for it: "AI washing."

What it all boils down to is this: When crypto entered the mainstream, it was hard to define or even understand. In that messy environment, companies thrived and empires were built; so also rose the scams and OneCoins of the world. AI is having the same moment now.


The coronation and artificial intelligence – Browser Media

I was planning on writing a blog post about how to use lookup tables in Google Tag Manager to track numerous form conversions.

I am growing increasingly fond of the GTM / GA4 power combo and have plenty of examples of how GA4 is not all bad to share with you. But that is a bit of a chunky post and I have run out of time this week to finish it. I blame the bank holiday (they always throw my week into chaos), so that particular post will have to wait until next week. Today, you get some random musings.

In amongst the chaos, I have found myself thinking about two things this week:

1] The coronation

2] The rise of AI

I should probably clarify point 1. I have not actually really given the coronation itself any thought at all. I have simply reflected on the rampant apathy that seems to be surrounding the day. When asked about it, 90% of the people I have spoken to just roll their eyes and shrug their shoulders. 9% have launched into a tirade about how out of touch the monarchy is and the inappropriateness of all the pomp and ceremony during a period of such economic hardship for the people that the monarchy should serve. The remaining 1% is my mum and she is bubbling with excitement, bunting at the ready.

I am not really sure where I stand on the relevance of the monarchy, but the general ambivalence must surely be a warning sign for the future of the royal family?

With regards to the rise of AI, I hate to pour fuel on the "AI is going to take away our jobs" fire but it is getting interesting, with increased calls for some form of control. In contrast to the apathy around the coronation, passions rise when people talk about artificial intelligence.

Whilst I think that a dystopian vision of AI-fuelled cyborgs massacring humanity is a step too far, there is no doubt that AI is becoming increasingly capable and I do think that some job functions will shortly become redundant.

Within the digital marketing world, there is constant debate about the role of AI content creation. Our very own Vic wrote about how good Google's Bard is (or isn't?) at creating content recently. Personally, I still believe that genuine human content will out-punch AI drivel for some time to come and that the search engines will use AI to identify AI content and ultimately prioritise genuine human content, but some of the drivel is actually quite good.

I should confess to a guilty pleasure: using OpenAI's ChatGPT to create random songs and poems about things that I am thinking about. It can be very amusing and you have to admire how quickly the bot will create something that would take me a long time.

In the spirit of celebrating the coronation (someone has to!), I thought I would ask the oracle about King Charles and artificial intelligence this morning. Without further ado, here is what it spat out:

In days of old, King Charles did reign,
A monarch wise and just.
But now in modern times we see,
A new king rise from dust.

Artificial Intelligence, its name,
A force both strong and fast.
With algorithms and data sets,
Its knowledge unsurpassed.

No longer do we need a king,
To rule with iron hand.
For AI can make decisions,
That once were solely manned.

But still we must remember,
That AI is but a tool.
And though it may be powerful,
It is not above the rule.

So let us use this technology,
To better serve mankind.
And let King Charles rest in peace,
As we progress in kind.

Ouch! A politically correct ending, but definitely a punchy start.

The king is dead. Long live the king.

Have a great weekend everyone 🙂


Law firms embrace the efficiencies of artificial intelligence – Financial Times

Law firms have been racing to adopt artificial intelligence after developments in the technology have enabled it to draw up contracts, assist due diligence processes and draft legal opinions.

The launch of natural language chatbot ChatGPT in November marked a significant turning point in generative AI. Created by Microsoft-backed OpenAI, the bot produces convincing and humanlike sentences, using large language models to predict the likely next word in a sequence.

It has led other big tech companies, including Google and Microsoft itself, to quickly follow suit. Start-ups have also been leveraging the underlying technology used in these products to develop specialist AI for legal services.

Law firms and consultancies are now using the software to automate tasks and drive efficiencies, spurred to cut costs by falling revenues amid a corporate dealmaking drought. Magic Circle law firms and Big Four accounting groups have been experimenting with AI platforms built for legal tasks such as drafting contracts, translating documents into different languages, and suggesting legal opinions.


"We get several thousand queries daily, and it is pretty consistent across the offices... in multiple languages [and] different areas of law," says David Wakeling, head of Allen & Overy's markets innovation group, which comprises both lawyers and developers. The law firm was the first Magic Circle firm to adopt generative AI and has been using an eponymous product by a US start-up called Harvey AI since November. "It is [now] a serious part of the operating model," he explains. "We are way past trial."

Harvey was built using the GPT language models created by OpenAI, which has also invested in the US start-up. As well as the general internet data that underlies GPT, Harvey is trained on legal data including case law. The system alerts A&O's lawyers to fact-check the content it creates, as generative AI is known for "hallucinating": stating things confidently as fact, despite there being no basis for them in reality.

"It is a blank page, quick first stab," Wakeling says. "You know you are always going to edit it; it is never good enough. But [if] you apply that to 3,500 people, that is a serious saving in terms of time; an hour or two a week is a big deal."


Almost half of all current tasks in the legal profession could be replaced by AI, according to a recent report by Goldman Sachs. Automating them would eliminate the need for humans to carry out some of the more administrative and mundane work although it could also mean that trainee solicitors and graduates at law firms no longer got to experience it.

"We need to think about how we train young lawyers," says Kay Firth-Butterfield, head of artificial intelligence at the World Economic Forum. "They cannot all instantly be able to advise on the most complex matters or do complex client meetings, or think of ways to challenge the status quo. This has to be learned."

She sees limits to AI's use: "Because all that generative AI can do is look at historical data to give answers, we need human lawyers to ensure we keep expanding and pushing forwards the law so it doesn't atrophy."

Protagonists argue that AI-based tools will free up lawyers to do more skilled work and give strategic advice, while saving time and reducing costs for firms and clients.

"It definitely reduces the billable hours," says Richard Robinson, founder and chief executive of Robin AI, which launched in 2019 and provides AI-based legal software. But he points out: "The best firms want to be paid for high-level strategic work, things that fundamentally, at least today, no AI is trying to replicate, like high-level negotiations, insights into what's happened in other [similar] deals in the market."

$1bn: PwC's investment in AI to automate parts of its audit, tax and consulting business over the next three years

Robin AI works with two of the Big Four accounting firms, as well as private equity funds and law firm Clifford Chance. It sells software and also offers an added service, where its team of 30 in-house lawyers and paralegals oversee the AI-generated results. Its technology can also be used to scan legal documents to assess risk exposure.

But Robinson warns that these tools are "basically not ready to be used without people safeguarding them", highlighting that the pure output of such technologies should always be checked and edited by a qualified expert.

Large language models improve as more data is fed into them and as they are put to more use. PwC, which uses Harvey for mergers and acquisitions, due diligence and drafting contracts, says it has had an influx of clients saying they want to adopt AI, but are concerned about data protection.

"Data confidentiality and security is paramount and really important... because data is sensitive and there is legal privilege," says Sandeep Agrawal, a partner in PwC's tax and legal services. Agrawal met executives from Harvey recently to discuss how it can ringfence data and encrypt the information, making it more secure. Harvey segregates all customer data and offers encryption tools to protect access to client information.


In a sign of growing confidence in the technology, PwC last week pledged to invest $1bn in AI to automate parts of its audit, tax and consulting business over the next three years.

Jerry Ting, chief executive of contract management business Evisort and a lecturer at Harvard Law School, reports a similar shift towards adopting AI in recent months. Evisort, which launched in the US in 2016, offers AI software that allows clients to create and manage contracts including drafting and signing through an automated process.

"Before GPT, it was: here is what AI is, here's why it benefits you; it was almost that we had to convince them," he says. "Now, they are showing up at the door already convinced. The question becomes: how do I use it in a way that's safe, that actually drives my business outcomes, and does it fit in my budget?"

This article has been updated to reflect that Evisort is a contract management business


Did Stephen Hawking Warn Artificial Intelligence Could Spell the … – Snopes.com

Image via Sion Touhig/Getty Images

On May 1, 2023, the New York Post ran a story saying that British theoretical physicist Stephen Hawking had warned that the development of artificial intelligence (AI) could mean "the end of the human race."

Hawking, who died in 2018, had indeed said so in an interview with the BBC in 2014.

"The development of full artificial intelligence could spell the end of the human race," Hawking said during the interview. "Once humans develop artificial intelligence, it would take off on its own and re-design itself at an ever-increasing rate."

Another story, from CNBC in 2017, relayed a similar warning about AI from the physicist. It came from Hawking's speech at the Web Summit technology conference in Lisbon, Portugal, according to CNBC. Hawking reportedly said:

Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.

Such warnings became more common in 2023. In March, tech leaders, scientists, and entrepreneurs warned about the dangers posed by AI creations, like ChatGPT, to humanity.

"AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs," they wrote in an open letter published by the Future of Life Institute, a nonprofit. The letter garnered over 27,500 signatures as of this writing in early May 2023. Among the signatories were CEO of SpaceX, Tesla, and Twitter Elon Musk, Apple co-founder Steve Wozniak, and Pinterest co-founder Evan Sharp.

In addition, Snopes and other fact-checking organizations noted a dramatic uptick in misinformation conveyed on social media via AI-generated content in 2022 and 2023.

Then, on May 2, long-time Google researcher Geoffrey Hinton quit the technology behemoth to sound the alarm about AI products. Hinton, known as the "Godfather of AI," told MIT Technology Review that chatbots like GPT-4, made by the AI lab OpenAI, "are on track to be a lot smarter than he thought they'd be."

Given that Hawking was indeed documented as warning about the potential for AI to "spell the end of the human race," we rate this quote as correctly attributed to him.

"Geoffrey Hinton Tells Us Why He's Now Scared of the Tech He Helped Build." MIT Technology Review, https://www.technologyreview.com/2023/05/02/1072528/geoffrey-hinton-google-why-scared-ai/. Accessed 3 May 2023.

"'Godfather of AI' Leaves Google, Warns of Tech's Dangers." AP NEWS, 2 May 2023, https://apnews.com/article/ai-godfather-google-geoffery-hinton-fa98c6a6fddab1d7c27560f6fcbad0ad.

"Pause Giant AI Experiments: An Open Letter." Future of Life Institute, https://futureoflife.org/open-letter/pause-giant-ai-experiments/. Accessed 3 May 2023.

"Stephen Hawking Says AI Could Be 'Worst Event' in Civilization." 6 Nov. 2017, https://web.archive.org/web/20171106191334/https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html.

"Stephen Hawking Warned AI Could Mean the 'End of the Human Race.'" 3 May 2023, https://web.archive.org/web/20230503162420/https://nypost.com/2023/05/01/stephen-hawking-warned-ai-could-mean-the-end-of-the-human-race/.

"Stephen Hawking Warns Artificial Intelligence Could End Mankind." BBC News, 2 Dec. 2014. http://www.bbc.com, https://www.bbc.com/news/technology-30290540.

Damakant Jayshi is a fact-checker for Snopes, based in Atlanta.


The open-source future of artificial intelligence – Exponential View

In today's commentary I delve into the question of open-source versus closed-source models for AI, and how this will shape the future of the internet.

It all started with leaked Google documents shared by an anonymous insider to

But the uncomfortable truth is, we aren't positioned to win this arms race and neither is OpenAI. While we've been squabbling, a third faction has been quietly eating our lunch.

I'm talking, of course, about open source. Plainly put, they are lapping us.

I've been thinking about open source since Stable Diffusion came out last summer. But the last two months, since Meta's LLaMA model was leaked, have seen a rich ecosystem of developers swarm around open-source LLMs. And things are moving quickly.

But before I share my thoughts, I'll first summarise the key observations the Google insider makes.

The first observation is that the gap between the current state of the art (GPT-4) and open-source models is closing quickly. For example, a $100 open-source model with 13 billion parameters is competing with a $10-million Google high-end model with 540 billion parameters.

Secondly, the open-source community has solved many scaling problems through a range of optimizations. MosaicML is a great example of this, demonstrating that they can train a Stable Diffusion model, which is not a large language model, six times cheaper than the original.

The third observation is that the dynamic of the open-source market creates a faster rate of iteration. This is because there are many developers who are contributing to the market, leading to a much faster rate of learning. Learning is a key factor in the Exponential Age; it drives down cost and improves price performance. Potentially, open-source models could learn and iterate far faster than closed-source ones.

In the last few days, I've spoken to Emad Mostaque from StabilityAI, Yann LeCun from Meta, and several other people who are developing these things. In addition, I've been thinking about open source and public goods for more than two decades. Here's my take…


How Artificial Intelligence could become the future of MLS scouting and recruitment – AS USA

Major League Soccer has gone into partnership with aiScout, an Artificial Intelligence-based talent analysis and development platform run by ai.io, which will enable players to be scouted by clubs in the United States no matter where they are in the world. The agreement is part of the MLS Emerging Ventures program, which continues the league's investment into MLS Next, which is aimed at players at under-19 level and younger.

Any player can sign up to use the digital scouting product and effectively take part in a virtual trial, to which any partner club (now including all 29 in MLS) will have access. The aiScout app is free to download and players can run through a series of training drills and assessments whenever they wish, as long as they have some open space and their mobile phone handy so they can film themselves.

After players have completed the assessments and uploaded their clips, the app evaluates their performances and gives them a score, which partner clubs can track and review for themselves. The most highly rated players will then have the chance to train with MLS clubs at different events across the US and Canada, according to MLS's official announcement of the agreement.

"At MLS, we believe this partnership is going to be a real solution for some of the most important issues faced in soccer across North America, namely cost, geography, and accessibility," said Fred Lipka, Technical Director of MLS NEXT.

"It is critical that we provide all players with an opportunity to access MLS NEXT and MLS NEXT Pro programming, and ai.io has built a fantastic technology platform that enables us to eliminate these traditional barriers and increase opportunities for players at no cost to the player."

As well as all 29 MLS clubs, Premier League giants Chelsea also have an agreement with aiScout, who are their academy research partner. Fellow English club Burnley, who have just won promotion back to the top flight, are also part of ai.ios club network.

"By encouraging aspirational players and fans to download our aiScout app and film themselves replicating club-standard drills on their mobile devices, players have had success, and in some cases are now playing for clubs in the English Premier League," added Darren Peries, CEO of ai.io.

"We are thrilled about this partnership with Major League Soccer, and greatly look forward to soon providing players around the world with an opportunity to be seen, analysed, evaluated and developed by MLS clubs."

MLS clubs will be able to use aiScout in full from December 2023 onwards.


Opinion: Striking television and film writers want artificial intelligence … – The Globe and Mail


Demonstrators hold signs during the 2007-2008 Writers Guild of America strike in Hollywood. CHRIS DELMAS/AFP/Getty Images

Gus Carlson is a U.S.-based columnist for The Globe and Mail.

For decades, Hollywood writers have been creating stories about a future where machines take over the world. Think The Terminator, Blade Runner, I, Robot, even the animated kids' movie WALL-E.

Now, these creatives find themselves on the thin edge of the wedge in their own version of that apocalyptic plot line.

When the members of the Writers Guild of America went on strike this week, they listed among their demands a provision that nods to a not-too-distant future where human creativity is under siege: regulations for the use of materials produced using artificial intelligence or similar technologies.

Beyond the cruel irony of this existential crisis for writers, the call to limit AI's influence in this context raises the question: Does it really matter who or what creates a good story well told?

Would viewers really care if their favourite Netflix series was the product of AI, as long as it was engaging and entertaining, and especially if they didn't have to wait so long between seasons?

Purists would say they should care: that the human creative process is iterative and by nature takes time to brew. Great art, whether it is writing, music, film, stage, painting, dance or sculpture, is about the expression of human emotion and feeling that can't be captured and replicated by machines. AI can do many things to mimic art, but is it really art?

Sure, AI can write a plot line about racism in Depression-era Alabama, but can it capture the powerful anxiety of Harper Lee's To Kill a Mockingbird? It can spew out a scene where two friends kibitz about love in a New York diner, but can it capture the comic brilliance of a line of dialogue like "I'll have what she's having"? And yes, it can mimic a complicated guitar solo, but can it inspire like the magic of a B.B. King free-form riff?

Increasingly, however, the consumers and producers of content might not be so quick to dismiss the idea of tech-driven shortcuts to feed our instant-gratification culture.

As production costs for films, television programs and streaming series rise, and the demand to fill the content pipeline intensifies, the use of AI is becoming a real option for studios and networks. If they can produce high-quality content faster and cheaper, and the viewers and subscribers don't really care (or can't even tell) how the sausage is made, everybody except the writers, of course, wins.

That point is more salient when we consider that restricting AI in the creation of stories isn't the only thing the writers want as part of their contract negotiations with a trade association representing the top Hollywood studios, television networks and streaming platforms.

They have many more concerns in the here and now, including making more money. A big part of their gripes is that streaming series typically have fewer episodes than broadcast shows, so maintaining a consistent income stream is difficult.

These demands will further strain the budgets of the studios. If writers want to be paid more, AI would start to look all the more attractive to the studios.

Of course, Hollywood writers are not alone in their wariness of a creative world infected by technology. Book publishers are on the lookout for AI-manufactured manuscripts, and many college admissions officers are placing less weight on student essays (some are eliminating them as a requirement altogether) because of widespread use of AI to create personal stories so expertly written they could not have come from the keyboard of the average teenager.

As writers and other artists struggle to protect their gifts, the broader cultural challenge is clear. There are many things AI can do, as well as many things it can't. The quandary for creatives is whether the difference will continue to matter to the average humanoid consumer of their wares. The economic viability of their craft hangs in the balance.


The Artificial Intelligence Future Is Upon Us in ‘Class of ’09’ – The Daily Beast

Shows don't come timelier than Class of '09, an eight-part FX on Hulu drama, premiering May 10, that concerns the potential benefits and pitfalls of artificial intelligence, including the moral questions it raises and the ramifications it may have on the human workforce. Arriving as companies such as IBM are opting not to hire new workers for positions that will be replaced by A.I. in the coming few years, it's a limited series with its finger so firmly and urgently on the pulse of our present (and future) reality that its fiction plays not as pure make-believe but, rather, as a vision of a possible tomorrow.

Better yet, Tom Rob Smith's show has more going for it than just prescience. Set during a trio of time periods, it focuses on four individuals struggling to figure out (and define) who they are while simultaneously navigating a law enforcement system dedicated to identifying threats to the public.

All three of these strands are intertwined in various narrative and thematic ways, highlighting the ethical and practical dilemmas that drove characters to embark on their respective courses, and exposing the fundamental means by which the personal affects the professional and, as a result, the national. Inventively conceived and deftly executed, it's a crime saga that comes across as a modernized, multi-layered spin on Philip K. Dick's (and Steven Spielberg's) Minority Report.

Trifurcated across decades, Class of '09 begins in 2034, with FBI director Tayo Michaels (Brian Tyree Henry) monitoring the country via a wall of monitors whose security camera footage sometimes devolves into oceanic streams of matrix-like data. In order to locate a wanted individual named Amos Garcia (Raúl Castillo), Michaels sends Amy Poet (Kate Mara), who has one cybernetic eye and doesn't understand why she's been plucked for this assignment.

What she discovers alongside comrade Murphy (Mrs. Davis' Jake McDorman, co-starring in yet another AI-related series) is a bank of screens not unlike those possessed by Michaels, and which eventually cut to a loop of Michaels himself proclaiming, "Not only are we now one of the greatest countries on this Earth, we are now also one of the safest."

Garcia is an apparent figure from the FBI's past, and it's there that Class of '09 soon travels. In 2009, Poet is a nurse who puts everyone first, but she's convinced to give herself a shot by trying out for the bureau.

At Quantico, she joins a prospective incoming class that includes Miller, a former cog in the corporate machine who's looking to fight injustice, as well as confident Lennix (Brian J. Smith), whose parents view the FBI as a step on his journey to political power, and Hour (Sepideh Moafi), the daughter of persecuted Iranian immigrants who don't understand their daughter's decision to channel her MIT-grade intellect into a career with the feds. Smith delineates these characters in quick, acute strokes, and then slowly peels back their layers to bare the hang-ups that have led them to their new career.

"We always reveal ourselves," says Miller to an interrogation-room suspect, one of many instances in which Class of '09's protagonists articulate this sentiment. The desire to know the self is central to Smith's story, which discloses that Miller doesn't trust people (thanks to a harrowing teenage traffic stop gone awry), Hour dreams of creating an inherently fair system (because it might provide the acceptance she craves as a gay woman), and Poet is a loner who prioritizes others in the same (harmful) manner that her single mother did.

These individual issues are wrapped up in the series' fascination with AI, which promises investigators the ability to correlate and analyze data on a heretofore unheard-of scale, albeit at the cost of the vital human input necessary (or is it?) to differentiate between right and wrong, good and evil.

Between 2009 and 2034, Class of '09 situates itself in 2023, with Hour attempting to convince a skeptical establishment that an interconnected criminal database would help agents (rather than render them obsolete), Poet being forced to go undercover to investigate her own (following her triumphant take-down of corrupt Philly cops), and Michaels finding himself in a firefight with Montana domestic terrorists whose cunningly smiling leader Mark Tupirik (Mark Pellegrino) seems to have his sights trained specifically on him.

The threads connecting these comrades' befores and afters only slowly become clear, as Smith hopscotches between eras with tantalizing (and generally surefooted) dexterity. They're brought to life, moreover, by a cast that skillfully handles both the proceedings' action-oriented demands and psychological and Big Picture interests, led by the typically great Henry, whose Michaels has an easygoing charisma that belies his keen perceptiveness and formidable determination. He's the centerpiece of the show, even if he never unduly overshadows his co-stars.

Smith imagines 2034 society as populated by realistic techno-gadgets and complicated by the consequences of artificial intelligence, whose unparalleled ability to assess information results in the types of predictive precrime measures that formed the basis of Dick's predecessor.

It's a fantasy that feels like it's sprung from today's headlines, and its AI-centric material serves as an apt contextual framework for a story that's about the eternal quest to know oneself, others, and the world. From touch screens to domestic automation to the implants that grant Poet and others enhanced interfacing abilities (the byproducts of innovation that are also necessitated by grievous injuries), Class of '09 proves to be a science-fiction venture whose latter is inspired by the former.

Since press were only provided with four of the series' eight installments, there's no guessing the ultimate destination of Class of '09, which uses its time-jumping conceit to thrill and, additionally, to elucidate new facets of its primary players. In an era when so many overlong TV efforts telegraph their every move, such unpredictability is another feather in Class of '09's cap, and makes one wish that it would continue on even past this season. If not, though, there's still plenty of reason to see it through to its finish, which, hopefully, won't provide an AI cautionary-tale lesson that hits too close to home.

See more here:
The Artificial Intelligence Future Is Upon Us in 'Class of '09' - The Daily Beast

AI could be as transformative as Industrial Revolution – The Guardian

UKs outgoing chief scientist urges ministers to get ahead of profound social and economic changes

The new genre of AI could be as transformative as the Industrial Revolution, the government's outgoing chief scientific adviser has said, as he urged Britain to act immediately to prevent huge numbers of people becoming jobless.

Sir Patrick Vallance, who stood down from his advisory role last month, said the government should "get ahead of" the profound social and economic changes that ChatGPT-style generative AI could usher in.

However, in a wide-ranging final parliamentary hearing that also covered his reflections on the pandemic and the rise of China as a global scientific power, he suggested AI could also have considerable benefits that should not be overlooked.

"There will be a big impact on jobs and that impact could be as big as the Industrial Revolution was," Vallance told the Commons science, innovation and technology committee. "There will be jobs that can be done by AI, which can either mean a lot of people don't have a job, or a lot of people have jobs that only a human could do."

"In the Industrial Revolution the initial effect was a decrease in economic output as people realigned in terms of what the jobs were and then a benefit," he added. "We need to get ahead of that."

Vallance called for a national review of which sectors would be most significantly affected so plans could be drawn up to retrain and "give people their time back to do [their jobs] differently".

The comments follow an announcement by IBM this week that it is suspending or reducing hiring in jobs such as human resources, with a suggestion that 30% of its back-office roles could be replaced by AI in five years.

Echoing comments by the AI pioneer Geoffrey Hinton, who announced his departure from Google this week, Vallance said the most immediate concern posed by AI was ensuring it did not distort the perception of truth.

He added that there was also a broader question of managing the risk of "what happens with these things when they start to do things that you really didn't expect".

Despite these potential existential threats, the technology also presented opportunities, Vallance argued. In medicine, "it could be that you get more time with your doctor rather than being pressurised," he said. "That could be a good outcome."

"We shouldn't view this as all risk," he added. "It's already doing amazing things in terms of being able to make medical imaging better. It will make life easier in all sorts of aspects of everyday work, in the legal profession. This is going to be incredibly important and beneficial."

Vallance, who is now chair of the Natural History Museum, appeared sceptical about the prospect of developing a British version of ChatGPT, dubbed "Brit-GPT", which some experts have called for in recent months. In March, the Treasury committed £900m to building a supercomputer to boost sovereign capabilities in this area.

Vallance said the focus for the UK's core national capability should be on understanding the implications of AI models and testing the outputs, not on building our own version.

He said: "You need to be able to probe them and understand them. I just don't think the idea we're going to invent something that rivals what the big companies have already made is very sensible. It sounds like attempts to invent a new internet. I mean, why?"

Vallance also implied that a moratorium on AI would not be feasible. "Unilaterally falling behind doesn't seem to me to be a sensible approach," he said.

Looking back over his tenure, Vallance said his proudest achievements included helping establish the Covid-19 vaccines taskforce and acting as chief scientific adviser for the Cop26 climate summit.

He said he regretted "very clumsy wording" about herd immunity that led to misunderstanding and controversy early in the pandemic. In a March 2020 interview, Vallance said the aim was not to suppress the virus completely but to "build up some degree of herd immunity whilst protecting the most vulnerable".

He told the committee his intention was to reflect that immunity was fundamentally how you end pandemics, rather than it being an intended strategy. "People get immunity through vaccines and they get immunity through catching infections," he said. "Ultimately that is where we have got to."

On the origins of the pandemic, Vallance said by far the most likely explanation was a spillover from bats, and that the available evidence suggested a lab leak was less likely.

Vallance also commented on the UK's position in a shifting geopolitical world, with countries including China in the ascendancy in science and technology. Against this backdrop, he said, it was essential for the UK to remain part of the EU's Horizon programme, pointing out it took the flagship research scheme a decade to get going effectively.

"The idea that you can instantly set up something equivalent is flawed," he said. "China has huge scale, the US has huge scale. There are some parts of science that need scale. You can't replicate that domestically."

He called on the UK government to make changes to its visa scheme, which he said needed to be quick and internationally competitive in order to attract the best scientists. When asked whether the Home Office had responded to his advice on this, Vallance said: "I guess the feedback is the action."
Read the rest here:
AI could be as transformative as Industrial Revolution - The Guardian