Among the richest and most powerful companies in the world, Google, Facebook, Amazon, Microsoft, and Apple have made AI a central part of their businesses. Advances over the past decade, especially in an AI technique called deep learning, have allowed them to monitor users' behavior; recommend news, information, and products to them; and, most importantly, target them with advertisements. Last year, Google's advertising apparatus generated more than $140 billion in revenue. Facebook's generated $84 billion.
The companies have invested heavily in the technology that has brought them such wealth. Google's parent company, Alphabet, acquired the London-based AI lab DeepMind for $600 million in 2014 and spends hundreds of millions a year to support its research. Microsoft signed a $1 billion deal with OpenAI in 2019 for commercialization rights to its algorithms.
At the same time, tech giants have become big investors in academic AI research, heavily influencing its scientific priorities. Over the years, more and more ambitious scientists have gone to work for tech giants full-time or adopted dual affiliations. In 2018 and 2019, 58% of the most-cited papers at the top two AI conferences had at least one author affiliated with a tech giant, up from just 11% a decade earlier, according to a study by researchers in the Radical AI Network, a group that seeks to challenge power dynamics in the field.
The problem is, the corporate agenda for AI has focused on techniques with commercial potential, largely ignoring research that could help address challenges like economic inequality and climate change. In fact, it has made these challenges worse. The drive to automate tasks has cost jobs and led to the rise of tedious labor like data cleaning and content moderation. The push to create ever larger models has caused AI's energy consumption to explode. Deep learning has also created a culture in which our data is constantly scraped, often without consent, to train products like facial recognition systems. And recommendation algorithms have exacerbated political polarization, while large language models have failed to clean up misinformation.
It is this situation that Gebru and a growing movement of like-minded scholars want to change. Over the past five years, they have sought to shift the field's priorities away from simply enriching tech companies by expanding who gets to participate in developing the technology. Their goal is not only to mitigate the harms of existing systems but to create a new, fairer, and more democratic AI.
“Hello from Timnit”
In December 2015, Gebru sat down to write an open letter. Halfway through her PhD at Stanford, she had attended the Neural Information Processing Systems conference, the largest annual gathering of AI researchers. Of the more than 3,700 researchers there, Gebru counted only five who were Black.
Once a small meeting on a niche academic topic, NeurIPS (as it's now known) was quickly becoming the biggest annual AI jobs bonanza. The world's richest companies came to show off demos, throw extravagant parties, and write big checks for Silicon Valley's rarest commodity: skilled AI researchers.
That year, Elon Musk arrived to announce the nonprofit OpenAI. He, Sam Altman, then president of Y Combinator, and PayPal co-founder Peter Thiel had put up $1 billion to solve what they believed to be an existential problem: the prospect that a superintelligence could one day take over the world. Their solution: build an even better superintelligence. Of the 14 advisors or technical staff he appointed, 11 were white men.
While Musk was being lionized, Gebru was dealing with humiliation and harassment. At one conference party, a group of drunk guys wearing Google Research T-shirts surrounded her and subjected her to unwanted hugs, a kiss on the cheek, and a photo.
Gebru typed up a scathing critique of what she had observed: the spectacle, the cult-like worship of AI celebrities, and, most of all, the overwhelming homogeneity. This boys' club culture, she wrote, had already pushed talented women out of the field. It was also leading the entire community toward a dangerously narrow conception of artificial intelligence and its impact on the world.
Google had already deployed a computer vision algorithm that classified Black people as gorillas, she noted. And the increasing sophistication of unmanned drones was putting the US military on a path toward lethal autonomous weapons. But there was no mention of these issues in Musk's grand plan to stop AI from taking over the world in some theoretical future scenario. "We don't have to look into the future to see the potential negative effects of AI," Gebru wrote. "It's already happening."
Gebru never published her critique. But she realized that something had to change. On January 28, 2016, she sent an email with the subject line "Hello from Timnit" to five other Black AI researchers. "I've always been sad about the lack of color in AI," she wrote. "But now I saw 5 of you 🙂 and thought it would be cool if we started a Black in AI group or at least know of each other."
The email sparked a discussion. How did being Black inform their research? For Gebru, her work was largely a product of her identity; for others, it was not. But after meeting, they agreed: if AI was going to play a bigger role in society, the field needed more Black researchers. Otherwise, it would produce weaker science, and its harmful consequences could get far worse.
A profit-driven agenda
As Black in AI was just beginning to coalesce, AI was hitting its commercial stride. That year, 2016, tech giants spent an estimated $20 billion to $30 billion on developing the technology, according to the McKinsey Global Institute.
Fueled by corporate investment, the field became skewed. Thousands more researchers began studying AI, but they mostly wanted to work on deep-learning algorithms, such as those behind large language models. "As a young PhD student who wants to get a job at a tech company, you realize that tech companies are all about deep learning," says Suresh Venkatasubramanian, a computer science professor now serving in the White House Office of Science and Technology Policy. "So you shift all your research to deep learning. Then the next PhD student who comes in looks around and says, 'Everyone's doing deep learning. I should probably do it too.'"
But deep learning is not the only technique in the field. Before its rise, there was a different approach to AI known as symbolic reasoning. Whereas deep learning uses massive amounts of data to teach algorithms about meaningful relationships in information, symbolic reasoning focuses on explicitly encoding knowledge and logic based on human expertise.
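To make the contrast concrete, here is a minimal, hypothetical sketch in Python. The task, word lists, and training data are invented for illustration: a hand-written rule stands in for symbolic reasoning, while a single trained perceptron stands in for the learn-from-data paradigm that deep learning scales up to billions of parameters.

```python
# Toy contrast between the two paradigms on one task:
# labeling a sentence as positive (1) or negative (0) sentiment.

# --- Symbolic reasoning: knowledge encoded explicitly by a human ---
POSITIVE_WORDS = {"good", "great", "excellent", "superb"}  # hand-curated expertise
NEGATIVE_WORDS = {"bad", "awful", "terrible"}

def symbolic_sentiment(sentence: str) -> int:
    """Apply an explicit, human-authored rule; no training data involved."""
    words = sentence.lower().split()
    score = sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)
    return 1 if score > 0 else 0

# --- Statistical learning: weights inferred from labeled examples ---
TRAINING_DATA = [
    ("great movie", 1),
    ("excellent and good acting", 1),
    ("awful plot", 0),
    ("terrible and bad pacing", 0),
]

def train_perceptron(data, epochs=10, lr=0.1):
    """Learn one weight per word from data: a one-neuron stand-in for deep learning."""
    weights = {}
    for _ in range(epochs):
        for sentence, label in data:
            words = sentence.lower().split()
            prediction = 1 if sum(weights.get(w, 0.0) for w in words) > 0 else 0
            for w in words:  # nudge each word's weight toward the correct label
                weights[w] = weights.get(w, 0.0) + lr * (label - prediction)
    return weights

def learned_sentiment(sentence: str, weights: dict) -> int:
    score = sum(weights.get(w, 0.0) for w in sentence.lower().split())
    return 1 if score > 0 else 0

weights = train_perceptron(TRAINING_DATA)
print(symbolic_sentiment("a good film"))          # 1: the hand-written rule fires on "good"
print(learned_sentiment("a good film", weights))  # 1: "good" earned a positive weight from data
```

The symbolic rule needs no data but knows only what its author wrote down; the perceptron discovers its "knowledge" from examples but is blind to anything absent from its training set.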
Some researchers now believe the two techniques should be combined. The hybrid approach would make AI far more data- and energy-efficient, and would give it the knowledge and reasoning abilities of an expert as well as the ability to update itself with new information. But companies have little incentive to explore alternative approaches when the surest way to maximize their profits is to build ever bigger models.
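Continuing the hypothetical sketch above, one crude way to combine the paradigms is to let the hand-coded word lists act as a prior alongside the learned weights, so the system can handle words the training data never covered. Real neuro-symbolic systems are far more sophisticated; this only gestures at the idea.

```python
# A crude hybrid, reusing POSITIVE_WORDS, NEGATIVE_WORDS, and the trained
# weights from the sketch above. The symbolic prior contributes expert
# knowledge; the learned weights contribute patterns mined from data.

def hybrid_sentiment(sentence: str, weights: dict, prior_strength: float = 0.5) -> int:
    words = sentence.lower().split()
    learned = sum(weights.get(w, 0.0) for w in words)
    prior = sum(w in POSITIVE_WORDS for w in words) - sum(w in NEGATIVE_WORDS for w in words)
    return 1 if learned + prior_strength * prior > 0 else 0

# "superb" never appears in the training data, so the pure learner is
# blind to it, but the symbolic prior fills the gap.
print(learned_sentiment("superb film", weights))  # 0: unseen word, no learned signal
print(hybrid_sentiment("superb film", weights))   # 1: the expert word list recognizes it
```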