At SC21, Plenary Wrestles With the Ethics of Mainstreamed HPC

As the panelists gathered onstage for SC21's first plenary talk, the so-called Peter Parker principle ("with great power comes great responsibility") cycled across the background slideshow. For the following hour, five panelists confronted this dilemma: with the transformative power of HPC (and, in particular, HPC-enabled AI) increasingly mainstreamed and deployed by all major sectors of society, industry and government, what ethical responsibilities are conferred to whom, and how can those responsibilities be fulfilled?

The plenary, titled "The Intersection of Ethics and HPC," featured five speakers: Dan Reed, professor of computer science and senior vice president of Academic Affairs at the University of Utah, who moderated the discussion; Cristin Goodwin, general manager and associate general counsel for Microsoft; Tony Hey, a physicist and chief data scientist for Rutherford Appleton Laboratory; Ellen Ochoa, chair of the National Science Board and former astronaut and director of NASA's Johnson Space Center; and Joel Saltz, an MD-PhD working as professor and chair of the Department of Biomedical Informatics at Stony Brook University.

"We know that advanced computing now pervades almost every aspect of science, technology, business, and society. Think about its impacts on financial institutions, e-commerce, communications, logistics, health, national security... And big tech overall has been in the news lately, and not necessarily in a good way," Reed opened, citing a Pandora's box of issues ranging from the effects of social media and data breaches to deepfakes and autonomous vehicles.

Unintended consequences and unethical actors

Technology, he continued, is also being exploited at scale, with governments and criminals leveraging high-powered surveillance and intrusion tools to great effect. Beyond the national security applications and implications, HPC has also become tightly tied to competitiveness for businesses and to the state of the art in forward-facing fields like medicine and consumer technology. HPC, Reed pointed out, is just the latest field to go through this tumultuous adolescence: fields like physics and medicine had experienced similar ethical dilemmas as their capabilities expanded.

As a physicist, Hey agreed, invoking perhaps the most famous step change in the ethical onus on a scientific field. "I think the outstanding example is the Manhattan Project, which developed the atomic bomb during the war," he said. The Manhattan Project, he explained, had been initiated out of fear of an unethical actor, Hitler, who likely would not have hesitated to use such a weapon were it in his possession. "That was the original motivation. But actually, before they tested the nuclear weapons they'd developed in the Manhattan Project, Germany had surrendered. So the original reason had gone," he said, leaving the scientists to wrestle with their creation. "And I think, really, you can almost replace nuclear weapon technology with AI technology. You can't uninvent it, and we can be ethical about our use, but we'll have enemies who aren't."

These enemies, and wantonly unethical actors generally, were the subject of much discussion. Goodwin, who works to address nation-state attacks at Microsoft, said that while cyberattacks by nation-states were once considered unlikely "force majeure" events, they're now commonplace: between August 2018 and this past July, she said, Microsoft notified over 20,000 customers of nation-state attacks.

"In my space, what I see all the time is the paradigm of unethical abuse," she added, contrasting that with the paradigm of ethical use. "How are you thinking about abuse? The September 12th cockpit door? What are the ways your technology could be abused?" This issue, she said, was particularly spotlit in the wake of Microsoft's ill-fated chatbot, Tay.

"Many people know that Microsoft back in 2016 had released a chatbot, and you could interact on Twitter and it would respond back to you," she recapped. "And in about 24 hours it turned into a misogynistic Nazi, and we took it down very, very quickly. And that forced Microsoft to go and look very, very closely at how we think about ethics and artificial intelligence. It prompted us to create an office of responsible AI and a principled approach to how we think about that."

This kind of unanticipated reappropriation or redirection of a technology (somewhat limited in scope, though offensive, when applied to a chatbot) becomes much more ominous as the technologies expand. Saltz advised the audience to look beyond "what [the] specific application is," citing the relatively straightforward introduction of telehealth, which is now spiraling into the use of AI facial and body recognition to make predictions about a patient's health, in combination with medical records, during a telehealth appointment. "Pretty much every new technical advance, even if it seems relatively limited, can be extended, and is being extended, to something more major," he said.

Uninclusive models and unsuitable solutions

On the topic of unintended consequences, several of the panelists expressed concern over the bias that can be conferred, often accidentally, to AI models and their predictions through improper design and training. Ochoa referenced a famous case where an AI model was used to predict recidivism in sentencing, which, she said, resulted in the AI essentially predicting where police were deployed. "These things can creep in at various different areas, but they're being used so broadly they're really affecting people's lives," she said.

Indeed, much of this bias can be attributed to sample selection. To that point, Saltz spoke on the use of HPC-powered models to aid in diagnosis, prognosis and treatment. "Medicine is a particular font of ethical dilemmas, there's no doubt, and increasingly, these involve high-end computing and computational abilities," he said. Do you recommend an intensive, scorched-earth treatment for a patient to give them the best chance of beating cancer, or do you recommend a less taxing treatment because they're unlikely to require anything more severe to recover? "So, models can predict this," Saltz said. On the other hand: can models predict this? "This is a major technical issue as well as an ethical issue. One of the main issues, again: if a particular population was used for training, how do you generalize that model? Should you?"
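To make the sample-selection problem concrete, consider a minimal sketch (not from the panel; the data is entirely synthetic, and the tooling assumes scikit-learn's LogisticRegression) of how a model trained on a cohort dominated by one population can look accurate in aggregate while quietly underperforming on the group the cohort under-represents:

```python
# Synthetic illustration of the generalization question Saltz raised:
# train on a skewed cohort, then evaluate each population separately.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # One feature; the outcome threshold differs by group, standing in
    # for real population differences a skewed study cohort would miss.
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + shift + rng.normal(scale=0.5, size=n) > 0).astype(int)
    return x, y

# Training cohort: 95% group A, 5% group B.
xa, ya = make_group(1900, shift=0.0)
xb, yb = make_group(100, shift=1.5)
model = LogisticRegression().fit(np.vstack([xa, xb]),
                                 np.concatenate([ya, yb]))

# Aggregate accuracy hides the disparity; per-group evaluation exposes it.
for name, shift in [("group A", 0.0), ("group B", 1.5)]:
    xt, yt = make_group(5000, shift)
    print(name, "accuracy:", round(model.score(xt, yt), 3))
```

Run on this synthetic data, the model scores noticeably worse on group B, which is one reason per-subgroup validation (rather than a single aggregate metric) is central to the FDA-style validation efforts described below.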

Saltz had a couple of ideas for how to ameliorate these problems, starting with the collection of more data from more groups. "As human beings, we should encourage medical research and make our data available," he said. "There's a lot of work associated with this, but I think that convincing the citizenry that there really is a potential huge upside to participating in research studies and making their data available will be very important to enhance medical progress."

Second, he said, was validating the algorithms. "The FDA has a project dedicated to validation of AI in medicine that we're involved in," he explained. "The notion is that there'd be a well-defined path so that developers of algorithms can know when their algorithm has been deemed good enough to be reliable."

"I think that also speaks to the notion that you want a diverse community looking at those issues," Reed said, "because they will surface things that a less diverse community might not." Recommendations, too, can be asymmetrical: Hey explained how fine-grained tornado prediction enabled disaster agencies to recommend fewer evacuations along a more specific path, but that for groups that might need longer to evacuate, such as people with disabilities or older people, that more targeted, quicker-response approach might be unsuitable. "These things require great consideration of the people who are affected," he said.

Unfathomable explanations and unrepresentative gatherings

One core problem pervades nearly all efforts to reinforce ethics in HPC and AI: comprehension.

"We are a vanishingly small fraction of the population, so how do we think about informed debate and understanding with the broader community about these complex issues?" Reed said. "Because explaining to someone that, 'this is a multidisciplinary model with some abstractions based on AI and some inner loops, and we've used a numerical approximation technique with variable precision arithmetic on a million-core system with some variable error rate, and now talk to me about whether this computation is right': that explanation is dead on arrival to the people who would care about how these systems are actually used."

Goodwin said that getting users, stakeholders and the general public to understand the implications of technological developments or threats was something Microsoft had been wrestling with for some time. "We have context analysts that help us simplify the way we talk about what we've learned, so that communities that are not technical or not particularly comfortable with technical terms can consume that," she said. "What we believe is that you can't have informed public policy if you can't take the technical detail of an attack and make it relatable for those who need to understand that."

When talking about communication between HPC or AI insiders and the general public, of course, it's important to note the differences between those two groups, differences that span demographics, not just credentials. "The attendees at this conference are not broadly representative of our population," Reed said, gently, looking out at the audience.

Ochoa followed up on that thread, discussing efforts to fold in the "missing millions" who are often left unrepresented by gatherings of, or decisions by, technically skilled, demographically similar experts.

"We try to make sure we're not doing anything discriminatory, right?" she said. "But welcoming is actually much broader than that."
