What Stanford's recent AI conference reveals about the state of AI accountability – VentureBeat


As AI adoption continues to ramp up, so does the discussion around, and concern for, accountable AI.

While tech leaders and field researchers understand the importance of developing AI that is ethical, safe and inclusive, they still grapple with issues around regulatory frameworks, as well as with "ethics washing" and "ethics shirking" practices that diminish accountability.

Perhaps most importantly, the concept is not yet clearly defined. While many sets of suggested guidelines and tools exist (from the U.S. National Institute of Standards and Technology's Artificial Intelligence Risk Management Framework to the European Commission's Expert Group on AI, for example), they are not cohesive and are often vague and overly complex.

As noted by Liz O'Sullivan, CEO of blockchain technology software company Parity: "We are going to be the ones to teach our concepts of morality. We can't just rely on this emerging from nowhere, because it simply won't."

O'Sullivan was one of several panelists to speak on the topic of accountable AI at the Stanford University Human-Centered Artificial Intelligence (HAI) 2022 Spring Conference this week. HAI was founded in 2019 to advance AI research, education, policy and practice to improve the human condition, and this year's conference focused on key advances in AI.

Topics included accountable AI, foundation models and the physical/simulated world, with panels moderated by Fei-Fei Li and Christopher Manning. Li is the inaugural Sequoia Professor in Stanford's computer science department and codirector of HAI. Manning is the inaugural Thomas M. Siebel Professor in machine learning and is also a professor of linguistics and computer science at Stanford, as well as the associate director of HAI.

Specifically, regarding accountable AI, panelists discussed advances and challenges related to algorithmic recourse, building a responsible data economy, rethinking how privacy is worded and conceived in regulatory frameworks, and tackling overarching issues of bias.

Predictive models are increasingly being used in high-stakes decision-making, such as loan approvals.

"But like humans, models can be biased," said Himabindu Lakkaraju, assistant professor at Harvard Business School and affiliate of Harvard University's department of computer science.

As a means to de-bias modeling, there has been growing interest in post hoc techniques that provide recourse to individuals who have been denied loans. "However, these techniques generate recourses under the assumption that the underlying predictive model does not change. In practice, models are often regularly updated for a variety of reasons, such as dataset shifts, thereby rendering previously prescribed recourses ineffective," she said.

In addressing this, she and fellow researchers have studied instances in which a recourse is invalid, not useful, or fails to produce a positive outcome for the affected party.

They proposed a framework, Robust Algorithmic Recourse (ROAR), which uses adversarial machine learning (ML) for data augmentation to generate more robust models. They describe it as the first known solution to the problem. "Their detailed theoretical analysis also underscored the importance of constructing recourses that are robust to model shifts; otherwise, additional costs can be incurred," she explained.
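To make the idea concrete, the sketch below shows the basic recourse problem: given a model that denies an applicant, find a small feature change that flips the decision. The linear credit model, weights and feature names are all invented for illustration, and this is only the underlying concept, not the ROAR implementation, which additionally trains recourses to survive adversarial model shifts.

```python
import numpy as np

# Toy linear credit model (weights and features are made up for illustration):
# score = w . x + b, approve when score >= 0. Features: [income, debt_ratio].
w = np.array([0.8, -1.2])
b = -0.5

def approved(x):
    return w @ x + b >= 0

def recourse(x, step=0.05, max_iters=200):
    """Greedily nudge features toward the decision boundary until approval.

    For a linear model, the direction that raises the score fastest is
    simply w; real recourse methods also penalize the size of the change.
    """
    x = x.astype(float).copy()
    for _ in range(max_iters):
        if approved(x):
            return x
        x += step * w / np.linalg.norm(w)
    return None  # no recourse found within budget

applicant = np.array([0.4, 0.5])  # denied under the current model
plan = recourse(applicant)        # a nearby feature vector that is approved
```

The fragility Lakkaraju describes is visible here: if the bank retrains and `w` shifts even slightly, `plan` may land on the wrong side of the new boundary, which is exactly the failure ROAR is designed to guard against.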

As part of their process, the researchers carried out a survey with customers who applied for bank loans over the previous year. The overwhelming majority of participants said algorithmic recourse would be extremely useful for them. However, 83% of respondents said they would never do business with a bank again if the bank provided recourse to them and it was not correct.

Therefore, Lakkaraju said, "If we provide a recourse to somebody, we better make sure that it is really correct and we are going to hold on that promise."

Another panelist, Dawn Song, addressed overarching concerns of the data economy and establishing responsible AI and ML.

"AI deep learning has been making huge progress," said the professor in the department of electrical engineering and computer science at the University of California, Berkeley. But along with that, she emphasized, it is essential to ensure the evolution of the responsible AI concept.

Data is the key driver of AI and ML, but much of this exponentially growing data is sensitive and handling sensitive data has posed numerous challenges.

"Individuals have lost control of how their data is being used," Song said. User data is sold without their awareness or consent, or it is acquired during large-scale data breaches. As a result, companies leave valuable data sitting in data silos and don't use it due to privacy concerns.

"There are many challenges in developing a responsible data economy," she added. "There is a natural tension between utility and privacy."

To establish and enforce data rights and develop a framework for a responsible data economy, "we cannot copy concepts and frameworks used in the analog world," Song said. Traditional methods rely on randomizing and anonymizing data, which is insufficient for protecting data privacy.

New technical solutions can provide data protection in use, she explained. Some examples include secure computing technologies and cryptography, as well as the training of differentially private language models.

Song's work in this area has involved developing program-rewriting techniques and decision records that ensure compliance with privacy regulations such as the General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).

"As we move forward in the digital age, these issues will only become more and more severe," Song said, to the extent that they will hinder societal progress and undermine human value and fundamental rights. "Hence, there's an urgent need for developing a framework for a responsible data economy."

It's true that large enterprises and corporations are taking steps in that direction, O'Sullivan emphasized. As a whole, they are being proactive about addressing ethical quandaries and dilemmas and tackling questions of making AI responsible and fair.

However, the most common misconception from large corporations is that they've developed procedures on how to de-bias, according to O'Sullivan, the self-described serial entrepreneur and expert in fair algorithms, surveillance and AI.

In reality, many companies try to ethics wash with "[a] simple solution that may not actually go all that far," O'Sullivan said. Oftentimes, redacting training data for toxicity is criticized as negatively impacting freedom of speech.

She also posed the question: How can we sufficiently manage risks in models of impossibly large complexity?

"With computer vision models and large language models, the notion of de-biasing something is really an infinite task," she said, also noting the difficulty of defining bias in language, which is inherently biased.

"I don't think we have consensus on this at all," she said.

Still, she ended on a positive note, observing that the field of accountable AI is popular and growing every day and that organizations and researchers are making progress when it comes to definitions, tools and frameworks.

"In many cases, the right people are at the helm," O'Sullivan said. "It will be very exciting to see how things progress over the next couple of years."

