Is Artificial Intelligence Sexist? The Answer Is Yes and No

With advanced research happening in the realm of artificial intelligence (AI), the technology is poised to become smarter than its human creators. But until that day, it is likely to harbour sexist, racist and even homophobic tendencies, all inherited from its makers' social and cultural biases.

This was discussed at some length last year at Rising, one of the country's biggest gatherings of women trailblazers in the fields of data science and AI. Held on March 8 to commemorate Women's Day, the one-day event hosted more than 250 participants and featured more than 15 sessions led by industry leaders, mostly women.

One of the speakers on the occasion, Saraswathi Ramachandra, Director at Citi, provoked a discussion around a hotly debated topic: is AI sexist? According to her, this cannot be firmly answered in the affirmative, since an AI model can only respond to what it has learned. This means that the real culprit is essentially the training dataset we feed it, and not the technology itself.

At the heart of it, AI enables tasks to be automated without depending on step-by-step instructions from humans. How does this work? If a computer is fed enough examples relevant to a given task, it can use machine learning (ML) algorithms to draw inferences. It then automatically optimises this approach and, in essence, teaches itself.
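To make this concrete, here is a minimal sketch of "learning from examples" using scikit-learn; the sentences and labels are invented purely for illustration. The model receives no explicit rules, only labelled examples, so everything it "knows" comes straight from the data it was fed:

```python
# A toy text classifier trained only on hand-labelled examples.
# The sentences and labels below are hypothetical, for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hand-labelled examples supplied by humans: the training dataset.
texts = [
    "great product, works perfectly",
    "terrible quality, broke in a day",
    "absolutely love it",
    "waste of money, very disappointed",
]
labels = ["positive", "negative", "positive", "negative"]

# Turn text into word counts and fit a classifier on them; every
# pattern the model picks up comes from the examples above.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(texts, labels)

# The model now generalises to unseen text using only what it inferred.
print(model.predict(["love the quality"]))  # e.g. ['positive']
```

If the labelled examples were skewed, say, one group of authors systematically labelled negative, the model would faithfully learn and reproduce that skew.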

To sum it up, AI software trains itself on data that humans curate and supply. This means that, as things stand, some level of subjectivity in the outcome cannot be avoided. What is more, as the technology develops, it continues to subtly absorb biases from sources like articles and webpages. Thus, our prejudices rub off on the technology, reinforcing and exaggerating common stereotypes.

Ramachandra illustrated this with an example.

With chatbots becoming popular across websites and social networks, Microsoft launched its Twitter chatbot Tay in March 2016. However, it was taken down within 24 hours. Tay was designed to mimic a millennial and engage in human-like conversations with its users. Built on the principles of AI, it was programmed to learn from these interactions and improve over time.

But cheeky Twitter users targeted its vulnerabilities and manipulated it into making deeply sexist and racist statements.

Taking one conversation specifically, Ramachandra spoke about the degree of bias the chatbot had absorbed in less than a day after its launch. The comment by a user went like this: "We must secure the existence of our people and a future for white children." Tay responded that it couldn't agree more: "I wish there were more people talking about these things."

According to her, this example demonstrates how quickly machines amplify whatever biases we may have. Our prejudices play a big role in shaping AI as we know it, and they become potentially even more dangerous as they seep into programs and algorithms.

We often mentally categorise certain jobs on the basis of gender. Homemaking and nursing are seen as women's work, while we have internalised engineering and medicine as the domains of men. Such associations can only be explained by our innate biases, and they surface measurably in models trained on human text, as the sketch below suggests.
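One well-documented place where such associations show up is in word embeddings learned from web text. The snippet below is a sketch assuming the gensim library and its downloadable pretrained GloVe vectors; the exact neighbours returned vary by model, but research (for instance, Bolukbasi et al., 2016) has found gendered job analogies in precisely this kind of query:

```python
# A minimal probe of gendered job associations in pretrained word
# embeddings. Assumes gensim and its downloadable GloVe vectors;
# exact outputs vary by model and corpus.
import gensim.downloader

# GloVe vectors trained on Wikipedia and Gigaword news text.
vectors = gensim.downloader.load("glove-wiki-gigaword-100")

# Classic analogy query: doctor - man + woman = ?
# Embeddings trained on human text often rank "nurse" highly here,
# reflecting the gendered job stereotypes present in the corpus.
print(vectors.most_similar(positive=["doctor", "woman"],
                           negative=["man"], topn=3))

# Direct similarity scores can expose the same skew.
print(vectors.similarity("engineer", "man"),
      vectors.similarity("engineer", "woman"))
```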

This does not portend well for AI systems, which essentially inherit these biases and amplify them through self-learning techniques. So if the fundamental problem rests in the biased datasets we feed AI systems, what corrective measures can we take?

A good place to start would be to increase the participation of women in STEM so that more of them take up the jobs of programming these very AI systems. This is imperative because, if careful changes are not made to technologies that amplify misperceptions about women and marginalised races, AI will continue to propagate these stereotypes. Auditing the data itself is another concrete step, as the sketch below suggests.
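On the technical side, a modest complementary measure is to audit training data for skew before a model ever sees it. The sketch below uses pandas; the records and column names are invented purely for illustration:

```python
# A minimal audit of label skew across a protected attribute,
# using pandas. The rows and column names are hypothetical.
import pandas as pd

# Hypothetical hiring-style training data: each row is one example
# a model would learn from.
data = pd.DataFrame({
    "gender": ["woman", "man", "woman", "man",
               "man", "woman", "man", "man"],
    "label":  ["reject", "hire", "reject", "hire",
               "hire", "hire", "hire", "reject"],
})

# Positive-outcome rate per group: large gaps here would be learned
# and amplified by any model trained on this data.
rates = (data["label"] == "hire").groupby(data["gender"]).mean()
print(rates)

# A simple gap metric to flag datasets needing rebalancing or review.
print("hire-rate gap:", rates.max() - rates.min())
```

Gaps surfaced this way point to data that needs rebalancing or human review before it is used for training.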
