Inside the fight to reclaim AI from Big Tech's control

Among the world's richest and most powerful companies, Google, Facebook, Amazon, Microsoft, and Apple have made AI a core part of their businesses. Advances over the last decade, particularly in an AI technique called deep learning, have allowed them to monitor users' behavior; recommend news, information, and products to them; and, most of all, target them with ads. Last year Google's advertising apparatus generated over $140 billion in revenue. Facebook's generated $84 billion.

The companies have invested heavily in the technology that has brought them such vast wealth. Google's parent company, Alphabet, acquired the London-based AI lab DeepMind for $600 million in 2014 and spends hundreds of millions a year to support its research. Microsoft signed a $1 billion deal with OpenAI in 2019 for commercialization rights to its algorithms.

At the same time, tech giants have become large investors in university-based AI research, heavily influencing its scientific priorities. Over the years, a growing number of ambitious scientists have transitioned to working for tech giants full time or adopted dual affiliations. From 2018 to 2019, 58% of the most cited papers at the top two AI conferences had at least one author affiliated with a tech giant, compared with only 11% a decade earlier, according to a study by researchers in the Radical AI Network, a group that seeks to challenge power dynamics in AI.

The problem is that the corporate agenda for AI has focused on techniques with commercial potential, largely ignoring research that could help address challenges like economic inequality and climate change. In fact, it has made these challenges worse. The drive to automate tasks has cost jobs and led to the rise of tedious labor like data cleaning and content moderation. The push to create ever larger models has caused AI's energy consumption to explode. Deep learning has also created a culture in which our data is constantly scraped, often without consent, to train products like facial recognition systems. And recommendation algorithms have exacerbated political polarization, while large language models have failed to clean up misinformation.

It's this situation that Gebru and a growing movement of like-minded scholars want to change. Over the last five years, they've sought to shift the field's priorities away from simply enriching tech companies by expanding who gets to participate in developing the technology. Their goal is not only to mitigate the harms caused by existing systems but to create a new, more equitable and democratic AI.

In December 2015, Gebru sat down to pen an open letter. Halfway through her PhD at Stanford, she'd attended the Neural Information Processing Systems conference, the largest annual AI research gathering. Of the more than 3,700 researchers there, Gebru counted only a handful who were Black.

Once a small meeting about a niche academic subject, NeurIPS (as it's now known) was quickly becoming the biggest annual AI job bonanza. The world's wealthiest companies were coming to show off demos, throw extravagant parties, and write hefty checks for the rarest people in Silicon Valley: skilled AI researchers.

That year Elon Musk arrived to announce the nonprofit venture OpenAI. He, Y Combinator's then president Sam Altman, and PayPal cofounder Peter Thiel had put up $1 billion to solve what they believed to be an existential problem: the prospect that a superintelligence could one day take over the world. Their solution: build an even better superintelligence. Of the 14 advisors or technical team members he anointed, 11 were white men.


While Musk was being lionized, Gebru was dealing with humiliation and harassment. At a conference party, a group of drunk guys in Google Research T-shirts circled her and subjected her to unwanted hugs, a kiss on the cheek, and a photo.

Gebru typed out a scathing critique of what she had observed: the spectacle, the cult-like worship of AI celebrities, and most of all, the overwhelming homogeneity. This boys' club culture, she wrote, had already pushed talented women out of the field. It was also leading the entire community toward a dangerously narrow conception of artificial intelligence and its impact on the world.

Google had already deployed a computer-vision algorithm that classified Black people as gorillas, she noted. And the increasing sophistication of unmanned drones was putting the US military on a path toward lethal autonomous weapons. But there was no mention of these issues in Musk's grand plan to stop AI from taking over the world in some theoretical future scenario. "We don't have to project into the future to see AI's potential adverse effects," Gebru wrote. "It is already happening."

Gebru never published her reflection. But she realized that something needed to change. On January 28, 2016, she sent an email with the subject line "Hello from Timnit" to five other Black AI researchers. "I've always been sad by the lack of color in AI," she wrote. "But now I have seen 5 of you 🙂 and thought that it would be cool if we started a black in AI group or at least know of each other."

The email prompted a discussion. What was it about being Black that informed their research? For Gebru, her work was very much a product of her identity; for others, it was not. But after meeting they agreed: if AI was going to play a bigger role in society, they needed more Black researchers. Otherwise, the field would produce weaker science, and its adverse consequences could get far worse.

As Black in AI was just beginning to coalesce, AI was hitting its commercial stride. That year, 2016, tech giants spent an estimated $20 to $30 billion on developing the technology, according to the McKinsey Global Institute.

Heated by corporate investment, the field warped. Thousands more researchers began studying AI, but they mostly wanted to work on deep-learning algorithms, such as the ones behind large language models. "As a young PhD student who wants to get a job at a tech company, you realize that tech companies are all about deep learning," says Suresh Venkatasubramanian, a computer science professor who now serves at the White House Office of Science and Technology Policy. "So you shift all your research to deep learning. Then the next PhD student coming in looks around and says, 'Everyone's doing deep learning. I should probably do it too.'"

But deep learning isn't the only technique in the field. Before its boom, there was a different AI approach known as symbolic reasoning. Whereas deep learning uses massive amounts of data to teach algorithms about meaningful relationships in information, symbolic reasoning focuses on explicitly encoding knowledge and logic based on human expertise.
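As a rough illustration (not from the article), a minimal Python sketch of the distinction might look like this; the loan-risk scenario, function name, and 40% threshold are all invented for the example:

```python
# Toy illustration (hypothetical): flagging a risky loan application.

# Symbolic reasoning: a human expert's knowledge is encoded as an explicit rule.
def risky_by_rule(income: float, debt: float) -> bool:
    # Rule written down by an expert: flag if debt exceeds 40% of income.
    return debt > 0.4 * income

# Deep learning, by contrast, never sees such a rule. It is handed thousands
# of labeled examples, e.g. (income, debt, was_risky), and must infer the
# relationship from the data alone.

print(risky_by_rule(income=50_000, debt=30_000))  # True: the rule fires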

Some researchers now believe those techniques should be combined. The hybrid approach would make AI more efficient in its use of data and energy, and give it the knowledge and reasoning abilities of an expert as well as the capacity to update itself with new information. But companies have little incentive to explore alternative approaches when the surest way to maximize their profits is to build ever bigger models.
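Again purely as an invented sketch, a hybrid system might let an expert-written rule veto a learned model's score; every name and threshold below is hypothetical:

```python
# Hypothetical neuro-symbolic hybrid: a learned model scores the case,
# and a symbolic rule can override it.

def learned_risk_score(income: float, debt: float) -> float:
    # Stand-in for a trained neural network's output; a real system would
    # call a model here instead of returning a constant.
    return 0.2

def violates_expert_rule(income: float, debt: float) -> bool:
    # Hard symbolic constraint, applied no matter what the model says.
    return debt > 0.4 * income

def hybrid_decision(income: float, debt: float) -> str:
    # The expert rule vetoes an under-confident model; otherwise the
    # learned score decides.
    if violates_expert_rule(income, debt):
        return "risky"
    return "risky" if learned_risk_score(income, debt) > 0.5 else "ok"

print(hybrid_decision(income=50_000, debt=30_000))  # "risky": the rule vetoes
```

The appeal of such a design is that the rule gives the system expert knowledge the model was never shown, while the learned component handles cases no rule anticipates.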

