AI likely to spell end of traditional school classroom, leading expert says – The Guardian


Exclusive: Prof Stuart Russell says technology could result in fewer teachers being employed, possibly even none

Recent advances in AI are likely to spell the end of the traditional school classroom, one of the world's leading experts on AI has predicted.

Prof Stuart Russell, a British computer scientist based at the University of California, Berkeley, said that personalised ChatGPT-style tutors have the potential to hugely enrich education and widen global access by delivering personalised tuition to every household with a smartphone. The technology could feasibly deliver most material through to the end of high school, he said.

"Education is the biggest benefit that we can look for in the next few years," Russell said before a talk on Friday at the UN's AI for Good Global Summit in Geneva. "It ought to be possible within a few years, maybe by the end of this decade, to be delivering a pretty high quality of education to every child in the world. That's potentially transformative."

However, he cautioned that deploying the powerful technology in the education sector also carries risks, including the potential for indoctrination.

Russell cited evidence from studies using human tutors that one-to-one teaching can be two to three times more effective than traditional classroom lessons, allowing children to get tailored support and be led by curiosity.

"Oxford and Cambridge don't really use a traditional classroom; they use tutors, presumably because it's more effective," he said. "It's literally infeasible to do that for every child in the world. There aren't enough adults to go around."

OpenAI is already exploring educational applications, announcing a partnership in March with an education nonprofit, the Khan Academy, to pilot a virtual tutor powered by GPT-4.

This prospect may prompt reasonable fears among teachers and teaching unions of "fewer teachers being employed, possibly even none", Russell said. Human involvement would still be essential, he predicted, but it could differ drastically from the traditional role of a teacher, potentially incorporating "playground monitor" responsibilities, facilitating more complex collective activities and delivering civic and moral education.

"We haven't done the experiments, so we don't know whether an AI system is going to be enough for a child. There's motivation, there's learning to collaborate; it's not just 'Can I do the sums?'" Russell said. "It will be essential to ensure that the social aspects of childhood are preserved and improved."

The technology will also need to be carefully risk-assessed.

"Hopefully the system, if properly designed, won't tell a child how to make a bioweapon. I think that's manageable," Russell said. A more pressing worry is the potential for hijacking of the software by authoritarian regimes or other players, he suggested. "I'm sure the Chinese government hopes [the technology] is more effective at inculcating loyalty to the state," he said. "I suppose we'd expect this technology to be more effective than a book or a teacher."

Russell has spent years highlighting the broader existential risks posed by AI, and was a signatory of an open letter in March, signed by Elon Musk and others, calling for a pause in an "out-of-control race" to develop powerful digital minds. The issue has become more urgent since the emergence of large language models, Russell said. "I think of [artificial general intelligence] as a giant magnet in the future," he said. "The closer we get to it, the stronger the force is. It definitely feels closer than it used to."

Policymakers are belatedly engaging with the issue, he said. "I think the governments have woken up; now they're running around figuring out what to do," he said. "That's good. At least people are paying attention."

However, controlling AI systems poses both regulatory and technical challenges, because even the experts don't know how to quantify the risks of losing control of a system. OpenAI announced on Thursday that it would devote 20% of its compute power to seeking a solution for "steering or controlling a potentially super-intelligent AI, and preventing it from going rogue".

"The large language models in particular, we have really no idea how they work," Russell said. "We don't know whether they are capable of reasoning or planning. They may have internal goals that they are pursuing; we don't know what they are."

Even beyond direct risks, systems can have other unpredictable consequences for everything from action on climate change to relations with China.

"Hundreds of millions of people, fairly soon billions, will be in conversation with these things all the time," said Russell. "We don't know what direction they could change global opinion and political tendencies."

"We could walk into a massive environmental crisis or nuclear war and not even realise why it's happened," he added. "Those are just consequences of the fact that, whatever direction it moves public opinion, it does so in a correlated way across the entire world."
