How exactly does neuroscience inspire AI, and how do the different paths converge? The BAAI Community interviewed Anthony Zador, an American neuroscientist at Cold Spring Harbor Laboratory and the first-listed author of the NeuroAI white paper, and asked him to share his experience and expectations for research at the intersection of neuroscience and AI. (A Chinese version is available on the BAAI Community WeChat account; the English original follows.)
Recently, a number of well-known scholars in artificial intelligence and neuroscience, including Yann LeCun, Yoshua Bengio, Matthew Botvinick (DeepMind), and Jeff Hawkins, published a paper entitled "Towards the Next Generation of Artificial Intelligence: Catalyzing the NeuroAI Revolution". The paper points out that neuroscience has long been an important driver of progress in artificial intelligence (AI) and that, to accelerate progress in AI, researchers must invest in fundamental research in NeuroAI.
Taking this opportunity, the BAAI Community had a conversation with American neuroscientist Anthony Zador from Cold Spring Harbor Laboratory (CSHL), the first author of the NeuroAI paper, during which he shared his research experience and expectations in the interdisciplinary field of neuroscience and AI.
NeuroAI Revolution
Q: Regarding your recent work on the NeuroAI revolution, can you tell us more about it?
A: The basic idea is that if we look at the history of artificial intelligence, it is completely intertwined with the history of neuroscience. If you go back to the very beginning, the very first paper in modern AI, and arguably the first neural network paper, is the 1943 paper by McCulloch and Pitts, which laid out the idea of the artificial neuron. The perceptron, proposed in 1957, is also a simple abstraction of biological neurons.
The convolutional neural network (CNN) was inspired explicitly by the discoveries of Hubel and Wiesel. Over the past 20 years, many, if not most, of the major advances in AI and artificial neural networks have come from neuroscience. The core idea of this paper is that we want to bring together people in neuroscience and AI and keep those connections strong.
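For context on the abstraction Zador describes, here is a minimal illustrative sketch (not from the paper or the interview) of a McCulloch-Pitts-style threshold unit, the simplified model of a biological neuron that the perceptron and later artificial neural networks built on; the weights and bias below are arbitrary placeholders chosen for illustration.

```python
import numpy as np

def threshold_neuron(inputs, weights, bias):
    """Output 1 if the weighted sum of inputs plus bias is positive, else 0."""
    activation = np.dot(inputs, weights) + bias
    return 1 if activation > 0 else 0

# Hypothetical example: weights and bias chosen so the unit computes a logical AND
# of two binary inputs (these numbers are illustrative, not from the interview).
weights = np.array([1.0, 1.0])
bias = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", threshold_neuron(np.array(x), weights, bias))
```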
Q: The title speaks of catalyzing the NeuroAI revolution. What do you see as the ultimate goal of this revolution? Where does it lead?
A: People often use the term AGI, artificial general intelligence. I don't like that term, because I don't believe that there's anything general about intelligence. It's very specific. Every agent, every organism has its own specific biases. This is the basis of the no-free-lunch theorem: you can't be simultaneously good at everything. What I prefer is the term AHI, artificial human intelligence. People imagine that there's a pyramid of intelligence with worms and jellyfish at the bottom, fish higher up, then eventually monkeys and chimps, and humans at the very top. I think that's completely wrong. Every organism is good at solving the kinds of problems that it needs to solve. So the goal of the NeuroAI revolution would be artificial human intelligence. I think the missing part is not those little pieces that make humans uniquely human, but rather everything else.
This was recognized decades ago as Moravec's paradox. In 1988, the AI researcher Hans Moravec wrote a book outlining what has come to be known as Moravec's paradox: the things that are hard for humans are often easy for computers, and the things that are easy for humans are very hard for computers.
Chess is challenging, and yet Deep Blue beat the world chess champion Garry Kasparov in a match in 1997. Obviously, computers are also far better at multiplying numbers and the like. The most recent large language models have been improving at a fast pace, and they are getting very close to human performance. But we still don't have artificial agents that can interact with the physical world. I want one that makes my bed for me and does my laundry, but we're nowhere close to that.
Five years ago, people were predicting that the AI revolution would disrupt the economy by, for example, replacing people who drive cars. We don't seem to be getting there, because driving involves interacting with the world. I was recently joking to my kids, who are about to go to college, that if they want a safe job, they should become plumbers, because plumbers won't be replaced anytime soon.
Q: What does the "Embodied Turing Test" mean?
A: The "Embodied Turing Test" was originally proposed by other researchers, and we enhanced it in our paper. The standard Turing test asks whether an AI system can fool a human in a conversation, and we're pretty close to that: the best language models can already deceive people, producing text that is even more coherent than many people I talk to. The Embodied Turing Test, on the other hand, tests the way an agent interacts with the world, like playing tennis or riding a bicycle.
In other words, we want an agent that can do what every single animal on this planet does effortlessly. So I guess the goal is to rethink the foundations of cognition, and rebuild it from the bottom up.
Q: As you said, the goal is AHI. How can we achieve it? In other words, what would an actual technical roadmap look like?
A: I think that's a really open question. There are over 20 authors on the paper, coming from different institutions and with many different ideas about how to go forward. So it's not as if we have a single list of things we should do.
But we all agree that in order to make progress, it would be foolish for us to ignore what neuroscience has to teach us. Imagine that a spaceship came from outer space and landed on earth, and a bunch of advanced technologies came out of it, including, let's say, an anti-gravity machine. In order to make one ourselves, we would take it apart and learn as much as we could about how to build an anti-gravity machine. The same is true of progress in AI. We are surrounded by the most amazing examples of intelligent animals that are able to interact with the world naturally, so it would be foolish for us not to take what neuroscientists already know.
Right now we've done half of the reverse engineering: we've taken it apart. What we need now is the second half, how to put things back together. So from what I see, the neuroscience part is knowing what the pieces are when you take them apart, while the AI part is the engineering required to put them together in a way that works and is useful.
Q: Under ideal conditions, can intelligent machines be as stable and efficient as the human brain?
A: I don't think that there's any fundamental reason that silicon should be an inferior substrate compared with biology. In fact, I think it's probably better. The problem isn't the physical substrate. The problem is we don't understand enough about how biological computation works in order to build a physical system that reproduces it.
Q: If we want to realize the world in "Westworld", what are the main obstacles?
A: I've watched a couple of seasons of Westworld, and that doesn't seem like a perfect success. We don't want robots that turn against us; we want robots that help us. The physicist Richard Feynman has a famous quote: "What I cannot create, I do not understand." That is, we can truly build something only if we understand it.
Q: As you mentioned, your earlier goal was to build a mind out of a brain. Could you elaborate on that a little more?
A: The best way I can think of to build a new mind is to have children (laughs). I have two kids and they are conscious most of the time. So that's the easiest way, and I guess about 7 billion people do it these days.
And then there's AHI, building an artificial system that thinks the same way we do: you can talk to it and see what's going on in its head. I remember a recent report (https://mashable.com/article/google-engineer-fired-ai-lamda-soul-sentient) about a Google engineer who believed that the language model he talks to has a soul.
The clash between AI and neuroscience
Q: As a neuroscientist, how should neuroscience inspire AI research?
A: The history of AI is a history of extracting ideas from neuroscience and applying them to artificial neural networks. The early ideas of artificial neural networks were drawn from neural circuits more than 50 years ago, and we've learned quite a bit since then.
Smart engineers would go to neuroscience conferences, and they would find something interesting and say, maybe I could use that. But back then it was often kind of accidental. Yann LeCun and Geoff Hinton read a lot of neuroscience papers. Jeff Hawkins actually spends a lot of time thinking about neuroscience. He takes it very seriously.
Almost 30 years ago, when I was a graduate student, NeurIPS was already one of the main meetings for computational neuroscience. It was full of people who did both computational neuroscience and artificial neural networks (machine learning). And then the two communities separated: machine learning went one way and neuroscience went another. So the kinds of close connections that were common a generation ago are less common now.
There are some fantastic exceptions to that, for example DeepMind. But we thought it was important to write this down, so that you could point to this one paper, and maybe that would help catalyze efforts to bridge the gap and move things forward.
Q: What is the difference in the way of thinking between AI researchers and neuroscientists?
A: People in neuroscience are scientists, while AI researchers are essentially engineers. Their ultimate goal is not to understand but to construct. And in order to build, you need to understand the foundations of what you're building. It's like the iPhone in our hands: over the past 5 to 10 years it has become faster and smaller, with longer battery life. These are incremental improvements that do not require fundamental inspiration (though it cannot be denied that the iPhone is a remarkable piece of engineering). In short, AI needs fundamental inspiration from neuroscience.
Working at Cold Spring Harbor Laboratory
Q: Cold Spring Harbor Laboratory is one of the most prestigious life science laboratories in the world. What is it like to work there?
Cold Spring Harbor Laboratory
A: It's fantastic. It's a beautiful setting, right on the water, about an hour from New York City. We host meetings and courses on biology constantly, and if I want to meet all my colleagues, I could just wait here long enough and they would all come from around the world. It's great fun. Because it's a small place, I get to interact with people I would never otherwise interact with. One of my closest colleagues studies genetic systems and molecular biology, and I ended up learning a lot of genetics and molecular biology that is now central to my work.
Q: What is the source of funding for the lab?
A: Following the American system, we write grants to the National Institutes of Health (NIH), and we also have endowments. The split is around one half to two thirds.
Q: What is the focus of the lab's work?
A: Our lab tries to understand neural circuits. The working hypothesis in my lab is that neuropsychiatric disorders such as depression, schizophrenia, and autism are, in many cases, disorders of wiring, meaning that the wrong neurons in one brain area aren't talking properly to the neurons in another brain area. Sometimes we have guesses as to which brain areas are failing to talk to each other properly.
So our approach is to study these circuits in normal animals. We have found that the disruption of language and communication in autism is related to disruptions in specific neural pathways, which can originate from genetic mutations. The general idea of the research is to first pick a gene that you think causes autism in humans, disrupt that gene in a mouse, and then look at what happens to the neural circuit in that mouse compared with a mouse in which the gene is not disrupted.
Q: Going back to your early academic and professional career, what kept you passionate about neuroscience over the years?
A: I wandered around a lot, and it took me a long time to realize what I was interested in, which is to build a mind out of a brain, or consciousness out of matter. That is more of a philosophical question.
More specifically, it's about how you get behavior out of collections and computations of neurons. I started graduate school doing pure theory. That was the late 1980s, when artificial neural networks were becoming popular again. Then I did my PhD thesis primarily on modeling single neurons, asking how a biological neuron works differently from an artificial neuron.
During my Ph.D., I mainly studied the dendrites of neurons. Then I wanted to actually learn how to do experiments, and joined the lab of Chuck Stevens, one of the legends in neuroscience (he died last month at the age of 88). While working in his lab, I spent a lot of time understanding how synapses work. Then I came to Cold Spring Harbor, where we tried to figure out decision-making in rodents.
Together with one of my colleagues, Zachary Mainen, we started teaching rats to perform complex cognitive tasks, which at the time people said couldn't be done. We focused on the auditory system, teasing apart the circuits responsible for making auditory decisions. At the same time, I realized that what we really needed was a map of the circuits in the brain, showing how the neurons were wired up. So I started to develop a set of new molecular tools for high-speed dissection of neural circuits.
Q: Finally, can you tell us about the future of artificial intelligence as you imagine it? What will it look like?
A: My vision of the future of AI is no different from what you see in a lot of science fiction. The slide shows five or six different robots from TV and movies; I usually use it as a quiz on how much sci-fi a person has seen.
There's C-3PO from Star Wars, Arnold Schwarzenegger's Terminator, as well as the little robot from a TV series I watched as a child called Lost in Space. C-3PO might be the preferable choice: a humanoid robot that can chat, unlike the Terminator, which tries to kill people.
C-3PO from Star Wars
I don't think robots necessarily need to be violent or malevolent. In fact, primates, from which humans evolved, are among the meanest, nastiest, most violent mammals around. Monkeys spend a lot of their time attacking each other and being mean to each other. We now know that until about 50,000 years ago, modern humans shared the earth with a number of other very intelligent species, such as Neanderthals and Denisovans. Over the years, they all disappeared; one possibility is that we simply killed them all. Since those impulses come from our primate ancestors, I don't think that artificial intelligence has to be coupled with violence or domination from birth.
Q: In other words, robots should not be born malicious.
A: That's right, unless it's a killer robot made for military purposes, which I very much hope we can avoid building.
Q: What books have you read recently that you can recommend to readers?
A: Good question. I recently read a great book on the history of AI called Genius Makers, by Cade Metz. There are also some very inspiring books on neuroscience, one of which is Other Minds by Peter Godfrey-Smith. He is a philosopher, a naturalist, and a diver who loves scuba diving. The book says a lot about the mind of the octopus. It turns out octopuses are super smart, and they evolved very recently, completely separately from vertebrates. The book is named Other Minds because the author believes the octopus is as close as we can get on earth to an alien form of intelligence.
Another book, on vertebrate intelligence, is Are We Smart Enough to Know How Smart Animals Are? by the primatologist Frans de Waal. It reveals how creatures including monkeys, apes, elephants, and even crows are smart in ways different from us.
One more book I'm reading is called The Mind of a Bee. It turns out that the bee, which has only about 1 million neurons, is incredibly smart. A bee can fly 5 to 10 miles away from its hive and then find its way back using visual cues. There was also a recent paper showing that a bee can teach another bee how to do something. Suffice it to say, we definitely have enough computing power to mimic the neural circuits of a bee, but we can't build anything as clever as a bee. It's just crazy, all with 1 million neurons.