Then this sensory input is absorbed by tens of thousands of cortical columns, each building a partial model of the world. They compete and combine through a kind of voting system to produce a global picture; that is the Thousand Brains idea. In an AI system, this could mean a machine controlling different sensors (vision, touch, radar, and so on) to build a more complete model of the world. Note that there are typically many cortical columns per sense, even within vision alone.
Then there is lifelong learning: learning new things without forgetting the old ones. Today's artificial intelligence systems cannot do this. And finally, we structure knowledge using frames of reference, which means that our knowledge of the world is relative to our point of view. If I slide my finger over the rim of my coffee mug, I can predict what I will feel because I know where my hand is in relation to the mug.
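The reference-frame idea can be caricatured in a few lines of code. This is purely my illustration, not Numenta's software: the point is only that predictions are stored relative to the object, so they stay valid no matter where the object sits in the room.

```python
# Toy sketch (illustrative only) of an object-centric reference frame:
# expected sensations are keyed by location *relative to the mug*,
# not by absolute position in the world.
mug_frame = {
    ("rim", "top"): "smooth curved edge",
    ("handle", "side"): "ring-shaped loop",
    ("body", "side"): "warm cylinder wall",
}

def predict_sensation(obj_frame, finger_location):
    """Predict what the finger will feel at a location in the object's frame."""
    return obj_frame.get(finger_location, "unknown - explore to learn")

print(predict_sensation(mug_frame, ("rim", "top")))
# A location the model has never sensed yields no prediction yet:
print(predict_sensation(mug_frame, ("base", "bottom")))
```

Because the keys are object-relative, moving the mug (or the hand) does not invalidate anything already learned, which is the property the interview is pointing at.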
Your lab recently made the switch from neuroscience to AI. Does this coincide with your Thousand Brains theory coming together?
More or less. Until two years ago, if you walked into our office, it was all neuroscience. Then we made the transition: we felt we had learned enough about the brain to start applying it to AI.
What types of AI work do you do?
One of the first things we looked at was sparsity. At any moment, only about 2% of our neurons are firing; activity is sparse. We applied this idea to deep learning networks and got spectacular results, such as speedups of up to 50x on existing networks. Sparsity also gives you more robust networks and lower energy consumption. Now we are working on lifelong learning.
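The sparsity trick mentioned above can be sketched as a k-winners-take-all step: after a layer computes its activations, keep only the top few percent and zero out the rest. This is a minimal toy version, assuming NumPy; the function name and parameters are mine, not Numenta's actual API.

```python
import numpy as np

def k_winners(activations, sparsity=0.02):
    """Keep only the top-k activations (k = sparsity * n); zero the rest.

    A toy k-winners-take-all step enforcing the ~2% activity level
    mentioned in the interview. Illustrative, not Numenta's code.
    """
    n = activations.size
    k = max(1, int(round(sparsity * n)))
    # Indices of the k largest activations (unordered within the top k).
    winners = np.argpartition(activations, -k)[-k:]
    sparse = np.zeros_like(activations)
    sparse[winners] = activations[winners]
    return sparse

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)   # a dense layer output
y = k_winners(x, sparsity=0.02)
print(np.count_nonzero(y))      # only ~20 of 1000 units stay active
```

Because most units are exactly zero after this step, downstream matrix products can skip most of their work, which is where the claimed speed and energy gains come from.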
It is interesting that you treat movement as a basis of intelligence. Does this mean an AI needs a body? Does it have to be a robot?
In the future, I think the distinction between AI and robotics will disappear. But for now, I prefer the word "embodiment," because when you say robot it conjures up images of humanlike machines, which is not what I am talking about. The bottom line is that the AI will need to have sensors and be able to move them relative to itself and to the things it models. But you could also have a virtual AI roaming the internet.
This idea is quite different from a lot of popular ideas about the intelligence of a disembodied brain.
Movement is really interesting. The brain uses the same mechanisms whether I move my finger over a coffee mug, move my eyes, or even think through a conceptual problem. Your brain moves through frames of reference to recall facts it has stored in different places.
The main thing is that any intelligent system, regardless of its physical form, learns a model of the world by sensing different parts of it and moving through it. This is the foundation; you can't get away from it. Whether it looks like a humanoid robot, a snake robot, a car, an airplane, or, you know, just a simple computer sitting on your desk browsing the internet, they're all the same.
What do most AI researchers think of these ideas?
The vast majority of AI researchers don't really buy into the idea that the brain is important. I mean, yes, people came up with neural networks some time ago, and they're loosely inspired by the brain. But most people aren't trying to reproduce the brain. The attitude is: whatever works, works. And today's neural networks work quite well.