The true dangers of AI are closer than we think – MIT Technology Review

William Isaac is a senior research scientist on the ethics and society team at DeepMind, an AI startup that Google acquired in 2014. He also cochairs the Fairness, Accountability, and Transparency conference, the premier annual gathering of AI experts, social scientists, and lawyers working in this area. I asked him about the current and potential challenges facing AI development, as well as the solutions.

A: I want to shift the question. The threats overlap, whether it's predictive policing and risk assessment in the near term, or more scaled and advanced systems in the longer term. Many of these issues also have a basis in history. So potential risks and ways to approach them are not as abstract as we think.

There are three areas that I want to flag. Probably the most pressing one is this question about value alignment: how do you actually design a system that can understand and implement the various forms of preferences and values of a population? In the past few years we've seen attempts by policymakers, industry, and others to try to embed values into technical systems at scale, in areas like predictive policing, risk assessments, hiring, etc. It's clear that they exhibit some form of bias that reflects society. The ideal system would balance out all the needs of many stakeholders and many people in the population. But how does society reconcile its own history with its aspirations? We're still struggling with the answers, and that question is going to get exponentially more complicated. Getting that problem right is not just something for the future, but for the here and now.

The second one would be achieving demonstrable social benefit. Up to this point there is still little empirical evidence validating that AI technologies will achieve the broad-based social benefit that we aspire to.

Lastly, I think the biggest one that anyone who works in the space is concerned about is: what are the robust mechanisms of oversight and accountability?

A: Three areas would go a long way. The first is to build a collective muscle for responsible innovation and oversight. Make sure you're thinking about where the forms of misalignment or bias or harm exist. Make sure you develop good processes for how you ensure that all groups are engaged in the process of technological design. Groups that have been historically marginalized are often not the ones that get their needs met. So how we design processes to actually do that is important.

The second one is accelerating the development of the sociotechnical tools to actually do this work. We don't have a whole lot of tools.

The last one is providing more funding and training for researchers and practitioners, particularly researchers and practitioners of color, to conduct this work. Not just in machine learning, but also in STS [science, technology, and society] and the social sciences. We want to have not just a few individuals but a community of researchers who really understand the range of potential harms that AI systems pose, and how to successfully mitigate them.

A: In 2016, I remember, the White House had just come out with a big data report, and there was a strong sense of optimism that we could use data and machine learning to solve some intractable social problems. Simultaneously, there were researchers in the academic community who had been flagging in a very abstract sense: Hey, there are some potential harms that could be done through these systems. But the two groups had largely not interacted at all. They existed in separate silos.

Since then, we've just had a lot more research targeting this intersection between known flaws within machine-learning systems and their application to society. And once people began to see that interplay, they realized: Okay, this is not just a hypothetical risk. It is a real threat. So if you view the field in phases, phase one was very much highlighting and surfacing that these concerns are real. The second phase now is beginning to grapple with broader systemic questions.

A: I am. The past few years have given me a lot of hope. Look at facial recognition as an example. There was the great work by Joy Buolamwini, Timnit Gebru, and Deb Raji in surfacing intersectional disparities in accuracy across facial recognition systems [i.e., showing these systems were far less accurate on Black female faces than on white male ones]. There's the advocacy that happened in civil society to mount a rigorous defense of human rights against misapplication of facial recognition. And also the great work that policymakers, regulators, and community groups from the grassroots up were doing to communicate exactly what facial recognition systems were and what potential risks they posed, and to demand clarity on what the benefits to society would be. That's a model of how we could imagine engaging with other advances in AI.

But the challenge with facial recognition is we had to adjudicate these ethical and values questions while we were publicly deploying the technology. In the future, I hope that some of these conversations happen before the potential harms emerge.

A: It could be a great equalizer. Like if you had AI teachers or tutors that could be available to students and communities where access to education and resources is very limited, that'd be very empowering. And that's a nontrivial thing to want from this technology. How do you know it's empowering? How do you know it's socially beneficial?

I went to graduate school in Michigan during the Flint water crisis. When the initial incidents of lead contamination emerged, the records the city had for where the piping systems were located were on index cards at the bottom of an administrative building. The lack of access to technologies had put them at a significant disadvantage. It means the people who grew up in those communities, over 50% of whom are African-American, grew up in an environment where they don't get basic services and resources.

So the question is: If done appropriately, could these technologies improve their standard of living? Machine learning was able to identify and predict where the lead pipes were, so it reduced the actual repair costs for the city. But that was a huge undertaking, and it was rare. And as we know, Flint still hasn't gotten all the pipes removed, so there are political and social challenges as well; machine learning will not solve all of them. But the hope is we develop tools that empower these communities and provide meaningful change in their lives. That's what I think about when we talk about what we're building. That's what I want to see.
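For readers curious what that kind of prediction looks like in practice, here is a minimal illustrative sketch, not the actual Flint system: it assumes a hypothetical file of inspected parcels (parcels.csv, with made-up column names) and trains a simple classifier to rank uninspected homes by their estimated risk of having a lead service line.

```python
# Illustrative sketch only: prioritizing pipe inspections with a classifier
# trained on parcel records. File and column names are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical parcel data: year built, assessed value, neighborhood, and
# whether an already-inspected home turned out to have a lead service line.
parcels = pd.read_csv("parcels.csv")
features = pd.get_dummies(parcels[["year_built", "assessed_value", "neighborhood"]])
labels = parcels["has_lead_line"]

X_train, X_test, y_train, y_test = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

# Fit a simple model and check how well it separates lead from non-lead homes.
model = GradientBoostingClassifier().fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Rank every parcel by predicted risk so crews can dig where lead is most likely.
parcels["lead_risk"] = model.predict_proba(features)[:, 1]
print(parcels.sort_values("lead_risk", ascending=False).head())
```

The point of such a model is exactly what Isaac describes: directing scarce excavation resources to the homes most likely to need them, which is where the cost savings for the city come from.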
