AI experts say research into algorithms that claim to predict criminality must end – The Verge

A coalition of AI researchers, data scientists, and sociologists has called on the academic world to stop publishing studies that claim to predict an individual's criminality using algorithms trained on data like facial scans and criminal statistics.

Such work is not only "scientifically illiterate," says the Coalition for Critical Technology, but perpetuates a cycle of prejudice against Black people and people of color. Numerous studies show the justice system treats these groups more harshly than white people, so any software trained on this data simply amplifies and entrenches societal bias and racism.

"Let's be clear: there is no way to develop a system that can predict or identify criminality that is not racially biased, because the category of criminality itself is racially biased," the group writes. "Research of this nature and its accompanying claims to accuracy rest on the assumption that data regarding criminal arrest and conviction can serve as reliable, neutral indicators of underlying criminal activity. Yet these records are far from neutral."

The coalition's open letter was drafted in response to news that Springer, the world's largest publisher of academic books, planned to publish just such a study. The letter, which has now been signed by 1,700 experts, calls on Springer to rescind its offer to publish the paper and for other academic publishers to refrain from publishing similar work in the future.

"At a time when the legitimacy of the carceral state, and policing in particular, is being challenged on fundamental grounds in the United States, there is high demand in law enforcement for research of this nature," the group writes. "The circulation of this work by a major publisher like Springer would represent a significant step towards the legitimation and application of repeatedly debunked, socially harmful research in the real world."

In the study in question, titled "A Deep Neural Network Model to Predict Criminality Using Image Processing," researchers claimed to have created a facial recognition system that was "capable of predicting whether someone is likely going to be a criminal ... with 80 percent accuracy and no racial bias," according to a now-deleted press release. The paper's authors included PhD student and former NYPD police officer Jonathan W. Korn.

In response to the open letter, Springer said it would not publish the paper, according to MIT Technology Review. "The paper you are referring to was submitted to a forthcoming conference for which Springer had planned to publish the proceedings," said the company. "After a thorough peer review process the paper was rejected."

However, as the Coalition for Critical Technology makes clear, this incident is only one example of a wider trend within data science and machine learning, where researchers use socially contingent data to try to predict or classify complex human behavior.

In one notable example from 2016, researchers from Shanghai Jiao Tong University claimed to have created an algorithm that could also predict criminality from facial features. The study was criticized and refuted, with researchers from Google and Princeton publishing a lengthy rebuttal warning that AI researchers were revisiting the pseudoscience of physiognomy, a discipline founded in the 19th century by Cesare Lombroso, who claimed he could identify "born criminals" by measuring the dimensions of their faces.

"When put into practice, the pseudoscience of physiognomy becomes the pseudoscience of scientific racism," wrote the researchers. "Rapid developments in artificial intelligence and machine learning have enabled scientific racism to enter a new era, in which machine-learned models embed biases present in the human behavior used for model development."

The 2016 paper also demonstrated how easy it is for AI practitioners to fool themselves into thinking they've found an objective system for measuring criminality. The researchers from Google and Princeton noted that, based on the data shared in the paper, all the "non-criminals" appeared to be smiling and wearing collared shirts and suits, while none of the (frowning) "criminals" were. It's possible this simple and misleading visual tell was guiding the algorithm's supposedly sophisticated analysis.
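That failure mode, a model latching onto a confound rather than any real signal, is easy to reproduce. Below is a minimal, hypothetical sketch (not drawn from either paper; the data and the "smiling" feature are invented for illustration) in which a classifier scores near-perfectly only because a smiling indicator happens to track the labels, while every other feature is pure noise:

```python
# Hypothetical illustration: a classifier trained on confounded data can score
# well while learning nothing but the confound.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
labels = rng.integers(0, 2, n)            # arbitrary 0/1 class labels
smiling = (labels == 0).astype(float)     # confound: one class always smiles in the photos
noise = rng.normal(size=(n, 5))           # "face features" carrying no real signal
X = np.column_stack([smiling, noise])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(clf.score(X_te, y_te))              # ~1.0: driven entirely by the smiling column
```

High test accuracy here says nothing about faces or behavior; it only reflects how the two sets of photos were collected.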

The Coalition for Critical Technology's letter comes at a time when movements around the world are highlighting issues of racial justice, triggered by the killing of George Floyd by law enforcement. These protests have also seen major tech companies pull back on their use of facial recognition systems, which research by Black academics has shown to be racially biased.

The letter's authors and signatories call on the AI community to reconsider how it evaluates the "goodness" of its work, thinking not just about metrics like accuracy and precision but about the social effect such technology can have on the world. "If machine learning is to bring about the social good touted in grant proposals and press releases, researchers in this space must actively reflect on the power structures (and the attendant oppressions) that make their work possible," write the authors.
