Why Artificial Intelligence Is Biased Against Women – IFLScience

A few years ago, Amazon deployed a new automated hiring tool to review the resumes of job applicants. Shortly after launch, the company realized that resumes for technical posts that included the word "women's" (as in "women's chess club captain"), or that referenced women's colleges, were downgraded. The reason came down to the data used to teach Amazon's system. Trained on 10 years of predominantly male resumes submitted to the company, the new automated system simply perpetuated old patterns, giving preferential scores to the applicants it was most familiar with.

Defined by AI4ALL as the branch of computer science that allows computers to make predictions and decisions to solve problems, artificial intelligence (AI) has already made an impact on the world, from advances in medicine to language translation apps. But as Amazon's recruitment tool shows, the way we teach computers to make these choices, known as machine learning, has a real impact on how fairly they function.

Take another example, this time in facial recognition. A joint study, "Gender Shades," carried out by MIT "poet of code" Joy Buolamwini and Timnit Gebru, a research scientist on the ethics of AI at Google, evaluated three commercial gender classification vision systems using their carefully curated benchmark dataset. They found that darker-skinned females were the most misclassified group, with error rates of up to 34.7 percent, whilst the maximum error rate for lighter-skinned males was 0.8 percent.
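At its core, an audit like this compares the system's error rate separately for each demographic group. The sketch below shows that per-subgroup comparison in a minimal form; it is an illustration only, not the Gender Shades code, and the records are invented for the example.

```python
# Illustrative per-subgroup audit: compute the gender-classification error
# rate separately for each skin-type/gender group and compare them.
# The records below are made up for illustration.
from collections import defaultdict

# Each record: (subgroup label, true gender, gender predicted by the system)
records = [
    ("darker_female", "female", "male"),
    ("darker_female", "female", "female"),
    ("lighter_male", "male", "male"),
    ("lighter_male", "male", "male"),
    # ... a real audit would use hundreds of labelled faces per subgroup
]

errors, totals = defaultdict(int), defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    errors[group] += int(predicted != truth)

for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.1%} ({errors[group]}/{totals[group]})")
```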

As AI systems like facial recognition tools begin to infiltrate many areas of society, such as law enforcement, the consequences of misclassification could be devastating. Errors in the software used could lead to the misidentification of suspects and ultimately mean they are wrongfully accused of a crime.

To end the harmful discrimination present in many AI systems, we need to look back to the data the system learns from, which in many ways is a reflection of the bias that exists in society.

Back in 2016, a team investigated the use of word embeddings, which act as a dictionary of sorts for word meaning and relationships in machine learning. They trained an analogy generator on data from Google News articles to create word associations. For example, "man is to king as woman is to x," which the system filled in with "queen." But when faced with "man is to computer programmer as woman is to x," the word chosen was "homemaker."
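The analogy trick works because these embeddings represent each word as a vector, so "x" can be found with simple arithmetic: x ≈ king - man + woman. The sketch below reproduces that arithmetic with the publicly available Google News word2vec vectors loaded through the gensim library; it is an illustration under those assumptions, not the researchers' own code, and the "computer_programmer" token reflects how multi-word phrases are stored in that particular model.

```python
# Minimal sketch of word-embedding analogy arithmetic, using the public
# Google News word2vec vectors via gensim (the download is large, ~1.6 GB).
import gensim.downloader as api

# Load 300-dimensional word2vec vectors trained on Google News articles.
model = api.load("word2vec-google-news-300")

def complete_analogy(a, b, c, topn=1):
    """Solve 'a is to b as c is to x' via vector arithmetic: x ≈ b - a + c."""
    return model.most_similar(positive=[b, c], negative=[a], topn=topn)

# The classic example: man is to king as woman is to ... (expected: 'queen').
print(complete_analogy("man", "king", "woman"))

# The biased case reported by the researchers:
# man is to computer programmer as woman is to ...
print(complete_analogy("man", "computer_programmer", "woman"))
```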

Other female-male analogies, such as nurse to surgeon, also demonstrated that word embeddings contain biases reflecting gender stereotypes present in broader society (and therefore also in the dataset). However, "Due to their wide-spread usage as basic features, word embeddings not only reflect such stereotypes but can also amplify them," the authors wrote.

AI machines themselves also perpetuate harmful stereotypes. Female-gendered virtual personal assistants such as Siri, Alexa, and Cortana have been accused of reproducing normative assumptions about the role of women as submissive and secondary to men. Their programmed responses to suggestive questions contribute further to this.

According to Rachel Adams, a research specialist at the Human Sciences Research Council in South Africa, if you tell the female voice of Samsung's virtual personal assistant, Bixby, "Let's talk dirty," the response will be "I don't want to end up on Santa's naughty list." But ask the program's male voice, and the reply is "I've read that soil erosion is a real dirt problem."

Although changing society's perception of gender is a mammoth task, understanding how this bias becomes ingrained in AI systems can help shape our future with this technology. Olga Russakovsky, assistant professor in the Department of Computer Science at Princeton University, spoke to IFLScience about understanding and overcoming these problems.

"AI touches a huge percentage of the world's population, and the technology is already affecting many aspects of how we live, work, connect, and play," Russakovsky explained. "[But] when the people who are being impacted by AI applications are not involved in the creation of the technology, we often see outcomes that favor one group over another. This could be related to the datasets used to train AI models, but it could also be related to the issues that AI is deployed to address."

Therefore her work, she said, focuses on addressing AI bias along three dimensions: the data, the models, and the people building the systems.

"On the data side, in our recent project we systematically identified and remedied fairness issues that resulted from the data collection process in the person subtree of the ImageNet dataset," Russakovsky explained. (ImageNet is a large image dataset widely used for object recognition in machine learning.)

Russakovsky has also turned her attention to the algorithms used in AI, which can amplify the bias already present in the data. Together with her team, she has identified and benchmarked algorithmic techniques for avoiding bias amplification in convolutional neural networks (CNNs), the models most commonly applied to analyzing visual imagery.
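To make "bias amplification" concrete, one common way to measure it (a generic illustration, not necessarily the benchmark used by Russakovsky's team) is to compare how strongly a label is associated with one gender in the training data against how strongly the trained model's predictions associate them; if the association grows, the model has amplified the bias. The numbers in the sketch below are invented.

```python
# Illustrative bias-amplification measurement: compare the gender skew of an
# activity label in the training data with the skew in model predictions.
# A positive difference means the model amplified the bias in the data.
def gender_ratio(pairs, activity):
    """Fraction of (gender, activity) pairs with `activity` whose subject is female."""
    with_activity = [g for g, a in pairs if a == activity]
    return sum(1 for g in with_activity if g == "female") / len(with_activity)

# (gender, activity) pairs; counts are made up for illustration.
training_labels = [("female", "cooking")] * 66 + [("male", "cooking")] * 34
model_predictions = [("female", "cooking")] * 84 + [("male", "cooking")] * 16

data_bias = gender_ratio(training_labels, "cooking")          # 0.66
prediction_bias = gender_ratio(model_predictions, "cooking")  # 0.84
print(f"bias amplification: {prediction_bias - data_bias:+.2f}")
```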

In terms of addressing the role of humans in generating bias in AI, Russakovsky has co-founded AI4ALL, a nonprofit that works to increase diversity and inclusion in AI. "The people currently building and implementing AI comprise a tiny, homogenous percentage of the population," Russakovsky told IFLScience. "By ensuring the participation of a diverse group of people in AI, we are better positioned to use AI responsibly and with meaningful consideration of its impacts."

A report from the research institute AI Now outlined the diversity disaster across the entire AI sector. Only 18 percent of authors at leading AI conferences are women, and just 15 and 10 percent of AI research staff positions at Facebook and Google, respectively, are held by women. Black women also face further marginalization: only 2.5 percent of Google's workforce is Black, and at Facebook and Microsoft the figure is just 4 percent.

Ensuring that the voices of as many communities as possible are heard in the field of AI is critical for its future, Russakovsky explained, because "members of a given community are best poised to identify the issues that community faces, and those issues may be overlooked or incompletely understood by someone who is not a member of that community."

How we perceive what it means to work in AI could also help to diversify the pool of people involved in the field. "We need ethicists, policymakers, lawyers, biologists, doctors, communicators: people from a wide variety of disciplines and approaches to contribute their expertise to the responsible and equitable development of AI," Russakovsky remarked. "It is equally important that these roles are filled by people from different backgrounds and communities who can shape AI in a way that reflects the issues they see and experience."

The time to act is now. AI is at the forefront of the fourth industrial revolution, and it threatens to disproportionately impact certain groups because of the sexism and racism embedded in its systems. Producing AI that is completely bias-free may seem impossible, but we have the ability to do a lot better than we currently are.

"My hope for the future of AI is that our community of diverse leaders are shaping the field thoughtfully, using AI responsibly, and leading with considerations of social impacts," Russakovsky concluded.
