Most of artificial intelligence still relies on human labor. Peer inside an AI algorithm and you will find something built from data that has been organized and labeled by an army of human workers.
Now Facebook has shown how some AI algorithms can learn to do useful work with far less human assistance. The company built an algorithm that learned to recognize objects in images with little help from labels.
The Facebook algorithm, called SEER (for SElf-supERvised), fed on more than a billion images pulled from Instagram and decided for itself which objects look alike. Images with whiskers, fur, and pointy ears, for example, were grouped together in one pile. The algorithm was then given a small number of labeled images, including some labeled “cats.” It could then recognize images as well as an algorithm trained on thousands of labeled examples of each object.
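The two-stage idea, grouping unlabeled examples by similarity and then attaching names with only a handful of labels, can be sketched in miniature. The snippet below is an illustrative toy, not SEER itself, which trains a billion-parameter convolutional network; here 2-D vectors stand in for image features, and the "cat"/"car" labels and cluster counts are invented for the example.

```python
# Toy sketch of the two-stage idea: cluster unlabeled points, then
# name the clusters with a handful of labeled examples.
# Illustrative only: SEER trains a huge convnet on real images; these
# 2-D vectors are a hypothetical stand-in for learned image features.
import numpy as np

rng = np.random.default_rng(0)

# Stage 1: "pretraining" on unlabeled data -- no labels involved.
group_a = rng.normal(loc=[1.0, 1.0], scale=0.1, size=(50, 2))
group_b = rng.normal(loc=[-1.0, -1.0], scale=0.1, size=(50, 2))
unlabeled = np.vstack([group_a, group_b])

def kmeans(points, k=2, iters=20):
    # Plain k-means, seeded with the first and last points.
    centers = points[[0, len(points) - 1]].astype(float)
    for _ in range(iters):
        d = ((points[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(axis=1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = points[assign == j].mean(axis=0)
    return centers

centers = kmeans(unlabeled)

# Stage 2: a few labeled examples suffice to name each cluster.
labeled = {"cat": np.array([1.1, 0.9]), "car": np.array([-1.1, -0.9])}
names = {}
for name, x in labeled.items():
    names[int(((x - centers) ** 2).sum(-1).argmin())] = name

def classify(x):
    return names[int(((x - centers) ** 2).sum(-1).argmin())]

print(classify(np.array([0.9, 1.1])))  # lands in the "cat" cluster
```

The labeling effort scales with the number of clusters to name, not with the size of the unlabeled pile, which is the economy the article describes.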
“The results are impressive,” says Olga Russakovsky, an assistant professor at Princeton University who specializes in AI and computer vision. “Getting self-supervised learning to work is very difficult, and breakthroughs in this area have important downstream consequences for improved visual recognition.”
Russakovsky says it is notable that the Instagram images were not hand-picked to make self-supervised learning easier.
The Facebook research is a milestone for an approach to AI known as “self-supervised learning,” says Facebook’s chief scientist, Yann LeCun.
LeCun pioneered the machine learning approach known as deep learning, which involves feeding data to large neural networks. About a decade ago, deep learning emerged as a better way to program machines to do all sorts of useful things, such as image classification and speech recognition.
But LeCun says the conventional approach, which requires “training” an algorithm by providing it with a lot of labeled data, simply cannot scale. “I’ve been advocating for this idea of self-supervised learning for a long time,” he says. “In the long run, advances in AI will come from programs that just watch videos all day and learn like a baby.”
LeCun says that self-supervised learning could have many useful applications, such as learning to read medical images without needing to label so many scans and x-rays. He says a similar approach is already being used to automatically generate hashtags for Instagram images. And he says Seer technology could be used on Facebook to match ads to posts or to help filter out unwanted content.
Facebook’s research builds on steady advances in making deep learning algorithms more efficient and effective. Self-supervised learning has previously been used to translate text from one language to another, but it has been harder to apply to images than to words. LeCun says the research team developed a new way for algorithms to learn to recognize images even when parts of an image have been altered.
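One way to read that last point: a good visual representation should change little when part of an image is masked or altered, and much more when the image is a different one entirely. The toy check below illustrates that property using row means of random pixel grids as a hypothetical stand-in for learned features; it is a sketch of the invariance idea only, not the team's actual training method, which learns such features with a neural network.

```python
# Toy check of the invariance idea: features of an image and of a
# partially altered copy should stay close, while features of a
# different image should be farther away. (Row means of a pixel grid
# are a made-up stand-in for learned convnet features.)
import numpy as np

rng = np.random.default_rng(1)

def embed(img):
    return img.mean(axis=1)        # one "feature" per pixel row

def mask_patch(img, size=4):
    out = img.copy()
    out[:size, :size] = 0.0        # alter one corner of the image
    return out

img_a = rng.random((64, 64))
img_b = rng.random((64, 64))

# Distance to a masked copy of the same image vs. a different image.
same = np.linalg.norm(embed(img_a) - embed(mask_patch(img_a)))
diff = np.linalg.norm(embed(img_a) - embed(img_b))
print(same < diff)                  # the altered copy stays closer
```

A training procedure can turn this property into a learning signal: pull representations of altered copies of the same image together and push different images apart, with no labels required.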
Facebook will release some of the technology behind Seer, but not the algorithm itself, as it was trained using data from Instagram users.
Aude Oliva, who heads MIT’s Perception and Computational Cognition Lab, says the approach “will allow us to undertake more ambitious visual recognition tasks.” But Oliva says the size and complexity of cutting-edge AI algorithms like Seer, which can have billions or trillions of neural connections, or parameters, far more than a conventional image-recognition algorithm of comparable performance, also pose problems. Such algorithms require enormous amounts of computing power, straining the available supply of chips.
Alexei Efros, a professor at UC Berkeley, says the Facebook paper is a good demonstration of an approach he believes will be important to AI’s progress: making machines learn on their own from “gigantic quantities of data.” And like most advances in AI today, he says, it builds on a series of other advances that emerged from the same Facebook team as well as from other research groups in universities and industry.