Code^Shift Lab Aims To Confront Bias In AI, Machine Learning

As machines increasingly make high-risk decisions, a new lab at Texas A&M aims to reduce bias in artificial intelligence and machine learning.


The algorithms underpinning artificial intelligence and machine learning increasingly influence our daily lives. They can decide everything from which videos we're recommended to watch next on YouTube to who should be arrested based on facial recognition software.

But the data used to train these systems often replicate the harmful social biases of the engineers who build them. Eliminating this bias from technology is the focus of Code^Shift, a new data science lab at Texas A&M University that brings together faculty members and researchers from a variety of disciplines across campus.

It's an increasingly critical initiative, said Lab Director Srividya Ramasubramanian, as more of the world becomes automated. Machines, rather than humans, are making many of the decisions around us, including some that are high-risk.

"Code^Shift tries to shift our thinking about the world of code or coding in terms of how we can be thinking of data more broadly in terms of equity, social healing, inclusive futures and transformation," said Ramasubramanian, professor of communication in the College of Liberal Arts. "A lot of trauma and a lot of violence has been caused, including by media and technologies, and first we need to acknowledge that, and then work toward reparations and a space of healing individually and collectively."

Bias in artificial intelligence can have major impacts. In just one recent example, a man sued the Detroit Police Department after he was arrested and jailed for shoplifting when the department's facial recognition technology falsely identified him. The American Civil Liberties Union calls it the first case of its kind in the United States.

Code^Shift will attempt to confront this issue using a collaborative research model that includes Texas A&M experts in social science, data science, engineering and several other disciplines. Ramasubramanian said eight different colleges are represented, and more than 100 people attended the lab's virtual launch last month.

Experts will work together on research, grant proposals and raising public awareness of bias in machine learning and artificial intelligence. Curricula may also be developed to educate professionals in the tech industry, including workshops and short courses on anti-racism literacy, gender studies and other topics that are sometimes not covered in STEM fields.

The lab's name references coding, which is foundational to today's digital world. It's also a play on code-switching: the way people change the languages they use or how they express themselves in conversation depending on the context.

As an immigrant, Ramasubramanian says she's familiar with living in two worlds. She offers several examples of computer-based biases she's encountered in everyday life, including an experience attempting to wash her hands in an airport bathroom.

Standing at the sink, Ramasubramanian recalls, she held her hands under the faucet. As she moved them back and forth and the taps stayed dry, she realized that the sensors used to turn the water on could not recognize her hands. It was the same case with the soap dispenser.

"It was something I never thought much about, but later on I was reading an article about this topic that said many people with darker skin tones were not recognized by many systems," she said.

Similarly, when Ramasubramanian began to work remotely during the COVID-19 pandemic, she noticed that her skin and hair color made her disappear against the virtual Zoom backgrounds. Voice recognition software she attempted to use for dictation could not understand her accent.

"The system is treating me as the other and different in many, many ways," she said. "And in return, there are serious consequences of who feels excluded, and that's not being captured."

Co-director Lu Tang, an assistant professor in the College of Liberal Arts who examines health disparity in underserved populations, says her research shows that Black patients, for example, must have much more severe symptoms than non-Black patients in order to be assigned certain diagnoses by computer software used in hospitals.

She said this is just one instance of the disparities embedded in technology. Tang's research also focuses on how machine learning algorithms used on social media platforms are more likely to expose people to misinformation about health.

"If I inhabit a social media space where a lot of my friends hold certain erroneous attitudes about things like vaccines or COVID-19, I will repeatedly be exposed to the same information without being exposed to different information," she said.

Tang is also interested in what she calls the filter bubble: the phenomenon in which an algorithm steers a user on TikTok, YouTube or other platforms based on content they've watched in the past or what other people with similar viewing behaviors are watching at that moment. Watching just one video containing vaccine misinformation could prompt the algorithm to continue recommending similar videos. Tang said the filter bubble is another layer that influences the content people are exposed to.
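To make that dynamic concrete, here is a minimal, hypothetical Python sketch of a similarity-based recommender. The video names, topic vectors and the catalog itself are invented for illustration, and real platforms rely on far richer signals, but the narrowing effect works the same way: each recommendation is drawn toward what the user has already watched.

```python
# Minimal, hypothetical sketch (not any real platform's system) of how a
# similarity-based recommender narrows what a user sees. Videos are toy
# topic vectors; each step recommends the unseen video most similar to the
# user's watch history, so one misinformation view pulls in more of the same.
import math

# Invented catalog: video id -> topic vector [general health, vaccines, sports]
CATALOG = {
    "vaccine_misinfo_1": [0.1, 0.9, 0.0],
    "vaccine_misinfo_2": [0.2, 0.8, 0.0],
    "vaccine_factcheck": [0.3, 0.7, 0.0],
    "cooking_basics":    [0.9, 0.0, 0.1],
    "soccer_highlights": [0.0, 0.0, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two vectors (0.0 if either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def recommend(history, catalog):
    """Recommend the unseen video most similar to the mean of the history."""
    vectors = [catalog[v] for v in history]
    profile = [sum(col) / len(vectors) for col in zip(*vectors)]
    unseen = {vid: vec for vid, vec in catalog.items() if vid not in history}
    return max(unseen, key=lambda vid: cosine(profile, unseen[vid]))

# Watch a single misinformation video, then follow the recommender twice.
history = ["vaccine_misinfo_1"]
for _ in range(2):
    history.append(recommend(history, CATALOG))

print(history)
# ['vaccine_misinfo_1', 'vaccine_misinfo_2', 'vaccine_factcheck']
# The feed never leaves the vaccine topic cluster: a filter bubble in miniature.
```

Notice that even the corrective "fact-check" video only enters the feed through its similarity to the misinformation already watched; unrelated topics never surface at all.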

"I think to really understand this society and how we are living today, we as social scientists and humanities scholars need to acknowledge and understand the way computers are influencing the way society is run today," Tang said. "I feel like working with computer science engineers is a way for us to combine our strengths to understand a lot of the problems we have in this society."

Computer Science and Engineering Assistant Professor Theodora Chaspari, another co-director of Code^Shift, agrees that minds from different disciplines are needed to design better systems.

To build an inclusive system, she said, engineers need to include representative data from all populations and social groups. This could help facial recognition algorithms better recognize faces of all races, she said, because a system cannot really identify a face until it has seen many, many faces. But engineers may not understand more subtle sources of bias, she said, which is why social and life sciences experts are needed to help with the thoughtful design of more equitable algorithms.
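One concrete form that kind of check can take is a simple dataset and model audit. The sketch below is a minimal, hypothetical Python example; the group labels, counts and predictions are invented for illustration and do not come from any real system.

```python
# Minimal, hypothetical audit sketch: the group labels, dataset counts and
# model predictions below are invented for illustration, not real data.
from collections import Counter

def representation_report(groups, min_share=0.10):
    """Share of the dataset per group, flagging groups below min_share."""
    counts = Counter(groups)
    total = sum(counts.values())
    return {g: {"share": n / total, "underrepresented": n / total < min_share}
            for g, n in counts.items()}

def accuracy_by_group(labels, preds, groups):
    """Per-group accuracy; a large gap between groups signals biased behavior."""
    correct, totals = Counter(), Counter()
    for y, p, g in zip(labels, preds, groups):
        totals[g] += 1
        correct[g] += int(y == p)
    return {g: correct[g] / totals[g] for g in totals}

# A training set skewed 95/5 toward one group: the minority group gets flagged.
train_groups = ["group_a"] * 950 + ["group_b"] * 50
print(representation_report(train_groups))
# {'group_a': {'share': 0.95, 'underrepresented': False},
#  'group_b': {'share': 0.05, 'underrepresented': True}}

# Evaluation on a small invented test set: the model fails only on group_b.
labels = [1, 1, 0, 0, 1, 0]
preds  = [1, 1, 0, 1, 0, 1]
groups = ["group_a"] * 3 + ["group_b"] * 3
print(accuracy_by_group(labels, preds, groups))  # {'group_a': 1.0, 'group_b': 0.0}
```

An accuracy gap like the one printed at the end is exactly the kind of signal that would send a team back to collect more representative training data before deploying a system.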

The goal of Code^Shift is to help bridge the gap between systems and people, Chaspari said. The lab will do this by raising awareness through not only research, but also education.

"We're trying to teach our students about fairness and bias in engineering and artificial intelligence," Chaspari said. "They're pretty new concepts, but are very important for the new, young engineers who will come in the next years."

So far, Code^Shift has held small-group discussions on topics like climate justice, patient justice, gender equity and LGBTQ issues. A recent workshop focused on health equity and the ways in which big data and machine learning can be used to take into account social structures and inequalities.

Ramasubramanian said a full grant proposal to the Texas A&M Institute of Data Science Thematic Data Science Labs Program is also being developed. The lab's directors hope to connect with more colleges and make information accessible to more people.

They say collaboration is critical to the initiative. The people who create algorithms often come from small groups, Ramasubramanian said, and are not necessarily collaborating with social scientists. Code^Shift asks for more accountability in how systems are created: who has access to the data, who's deciding how to use it, and how is it being shared?

Texas A&M is home to some of the world's top data scientists, Ramasubramanian said, making it an important place to have conversations about difficult topics like data equity.

"To me, we should also be leaders in thinking about the ethical, social, health and other impacts of data," she said.

To join the Code^Shift mailing list or learn more about collaborating with the lab, contact Ramasubramanian at srivi@tamu.edu.
