A lawyer got ChatGPT to do his research, but he isn't AI's biggest fool – The Guardian

Opinion

The emerging technology is causing pratfalls all over, not least tech bosses begging for someone to regulate them

This story begins on 27 August 2019, when Roberto Mata was a passenger on Avianca flight 670 from El Salvador to New York and a metal food and drink trolley allegedly injured his knee. As is the American way, Mata duly sued Avianca, and the airline responded by asking that the case be dismissed because the statute of limitations had expired. Mata's lawyers argued on 25 April that the lawsuit should be continued, appending a list of more than half a dozen previous court cases that apparently set precedents supporting their argument.

Avianca's lawyers and Judge P Kevin Castel then dutifully embarked on an examination of these precedents, only to find that none of the decisions or the legal quotations cited and summarised in the brief existed.

Why? Because ChatGPT had made them up. Whereupon, as the New York Times report puts it, the lawyer who created the brief, Steven A Schwartz of the firm Levidow, Levidow & Oberman, threw himself on the mercy of the court, saying in an affidavit that he had used the artificial intelligence program to do his legal research, "a source that has revealed itself to be unreliable".

This Schwartz, by the way, was no rookie straight out of law school. He has practised law in the snakepit that is New York for three decades. But he had, apparently, never used ChatGPT before, and therefore was unaware of the possibility that its content could be false. He had even asked the program to verify that the cases were real, and it had said yes. Aw, shucks.

One is reminded of that old story of the chap who, having shot his father and mother, then throws himself on the mercy of the court on the grounds that he is now an orphan. But the Mata case is just another illustration of the madness about AI that currently reigns. I've lost count of the number of apparently sentient humans who have emerged bewitched from conversations with chatbots (the polite term for stochastic parrots), which do nothing except make statistical predictions of the most likely word to be appended to the sentence they are at that moment engaged in composing.

But if you think the spectacle of ostensibly intelligent humans being taken in by robotic parrots is weird, then take a moment to ponder the positively surreal goings-on in other parts of the AI forest. This week, for example, a large number of tech luminaries signed a declaration that "mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war". Many of these folks are eminent researchers in the field of machine learning, including quite a few who are employees of large tech companies. Some time before the release, three of the signatories, Sam Altman of OpenAI, Demis Hassabis of Google DeepMind and Dario Amodei of Anthropic (a company formed by OpenAI dropouts), were invited to the White House to share with the president and vice-president their fears about the dangers of AI, after which Altman made his pitch to the US Senate, saying that "regulatory intervention by governments will be critical to mitigate the risks of increasingly powerful models".

Take a step back from this for a moment. Here we have senior representatives of a powerful and unconscionably rich industry (plus their supporters and colleagues in elite research labs across the world) who are, on the one hand, mesmerised by the technical challenges of building a technology that they believe might be an existential threat to humanity, while at the same time calling for governments to regulate it. But the thought that never seems to enter what might be called their minds is the question any child would ask: if it is so dangerous, why do you continue to build it? Why not stop and do something else? Or, at the very least, stop releasing these products into the wild?

The blank stares one gets from the tech crowd when these simple questions are asked reveal the awkward truth about this stuff. None of them, no matter how senior they happen to be, can stop it, because they are all servants of AIs that are even more powerful than the technology: the corporations for which they work. These are the genuinely superintelligent machines under whose dominance we all now live, work and have our being. Like Nick Bostrom's demonic paperclip-making AI, such superintelligences exist to achieve only one objective: the maximisation of shareholder value; if pettifogging humanistic scruples get in the way of that objective, then so much the worse for humanity. Truly, you couldn't make it up. ChatGPT could, though.

Keeping it lo-tech
Tim Harford has written a characteristically thoughtful column for the Financial Times on what neo-luddites get right and wrong about big tech.

Stay woke
Margaret Wertheim's Substack features a very perceptive blogpost on AI as symptom and dream.

Much missed
Martin Amis on Jane Austen over on the Literary Hub site is a nice reminder (from 1996) of the novelist as critic.

