A conversation with AI pioneer Yoshua Bengio

Deep learning expert Yoshua Bengio recently visited Microsoft’s Redmond, Washington, campus. Photo by Scott Eklund/Red Box Pictures.

When Microsoft acquired deep learning startup Maluuba in January, Maluuba’s highly respected advisor, the deep learning pioneer Yoshua Bengio, agreed to continue advising Microsoft on its artificial intelligence efforts. Bengio, head of the Montreal Institute for Learning Algorithms, recently visited Microsoft’s Redmond, Washington, campus, and took some time for a chat.

Let’s start with the basics: What is deep learning?

Yoshua Bengio:   Deep learning is an approach to machine learning, and machine learning is a way to try to make machines intelligent by allowing computers to learn from examples about the world around us or about some specific aspect of it.

Deep learning is distinctive among machine learning methods in that it is inspired by some of the things we know about the brain. It tries to make computers learn multiple levels of abstraction and representation, which is presumably what makes these systems so successful.

Can you give us an example of how people are using deep learning?

Bengio: The most common way that deep learning is used is called supervised learning, which is when we give the computer many examples of what it should be doing in many different contexts. For example, we provide millions of examples of somebody pronouncing sentences, along with the transcription of each sentence, and we'd like the computer to go from the sounds to the words. So the computer gets the input that it would see in the real world, as well as what a human would do with it, and it tries to imitate the human through many, many examples of the task.
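The supervised loop Bengio describes can be sketched in a few lines of plain Python. This toy logistic-regression model is purely illustrative (the data, model, and learning rate are my assumptions, not from the interview): it is given input–label pairs and adjusts its parameters to imitate the human-provided labels.

```python
import math

# Toy supervised learning: each example pairs an input with the label a
# human would assign, and the model learns to imitate those labels.
# (Illustrative stand-in for the speech-to-text example in the interview.)
data = [([0.0, 0.0], 0), ([0.0, 1.0], 0), ([1.0, 0.0], 1), ([1.0, 1.0], 1)]

w = [0.0, 0.0]  # adjustable parameters (the "synapses" of a single unit)
b = 0.0
lr = 0.5        # learning rate

def predict(x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))  # probability of label 1

for epoch in range(200):
    for x, y in data:
        err = predict(x) - y           # gradient of the log loss w.r.t. z
        for i in range(len(w)):
            w[i] -= lr * err * x[i]    # nudge parameters toward the label
        b -= lr * err

print([round(predict(x)) for x, _ in data])
```

After training, the model's rounded predictions match the human-provided labels on the toy examples, which is exactly the imitation-through-examples loop described above.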

Deep learning has been around for decades. Can you talk about how we’ve gone from these early efforts to where we’re really seeing deep learning used in such a broad way today?

Bengio: It really started in the late 1950s, when people began to think about AI and to think that, hey, we should maybe look at what's going on in the brain and get some cues for building more intelligent machines. Then it sort of faded away and came back in the early eighties, up to about the early nineties, and faded away again because it didn't work as well as people had hoped. Now deep learning is in a third wave. About five years ago, we started having really amazing breakthroughs in applications like speech recognition, object recognition and now, more recently, natural language applications like machine translation.

For you as an expert in deep learning, what’s the most exciting work that you’re seeing right now?

Bengio: Right now I'm most excited about the progress we're making in what's called unsupervised learning. This is one area where the current state of the art in machine learning and deep learning is way below what humans can do. A two-year-old child can learn by simply observing and interacting with the world. For example, she can understand physics without having to take classes, and can grasp gravity and pressure and so on by playing and observing. This is unsupervised learning. We're far from that kind of ability, but the good news is we're making pretty impressive progress in that direction. It's very important, because in order for machines to go beyond the very limited tasks they are currently good at, we'll need unsupervised learning.
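To contrast with the supervised setting, here is a minimal sketch of learning from observation alone in its simplest form: a k-means-style clustering in plain Python that discovers two groups in unlabeled numbers. K-means is my illustrative choice; the interview does not name a specific unsupervised method.

```python
# Toy unsupervised learning: no labels are given; the learner discovers
# structure (here, two clusters) purely from the observations themselves.
data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]

centers = [data[0], data[3]]  # crude initialization from two observations
for _ in range(10):
    # Assign each observation to its nearest cluster center.
    groups = [[], []]
    for x in data:
        nearest = min((0, 1), key=lambda i: abs(x - centers[i]))
        groups[nearest].append(x)
    # Move each center to the mean of its assigned observations.
    centers = [sum(g) / len(g) for g in groups]

print(sorted(round(c, 1) for c in centers))
```

The two centers settle near 1.0 and 5.1: structure was recovered without anyone telling the learner what the groups were.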

We talk a lot at Microsoft about how we see artificial intelligence as augmenting the human experience in helping people do tasks.  What are some of the most promising future capabilities that you see in terms of how AI can do some of that work?

Bengio: Well, the first important use of the progress we're making in AI, especially with natural language, is the ability of the computer to talk to us in a way that's more natural. Right now we get very frustrated when we interact with a computer and we don't know how to communicate the information or how to get information that we want. Natural language processing is going to make computers much more accessible to a lot of people who are not programmers. But beyond that, the idea that the computer actually understands our needs and our questions, and can find information but also reason and help us in our work, is very promising.

I want to go back to something you said earlier about how deep learning is often described as being inspired by how brains work.  Why are deep neural networks inspired by our understanding of how brains work and how does that affect their potential?

Bengio: From the very early days of neural networks, there was this idea that the computation performed in the brain can be abstracted as each neuron carrying out a very simple mathematical operation. What neural networks do is combine all of these little operations together, and each of the computations performed by a neuron can be changed and adapted. That corresponds to changes in the synapses in our brain, and that's how we learn. It turns out that this style of machine learning, where the computer learns how to combine many simple elements together, is very powerful.
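The abstraction Bengio describes, each neuron as a simple mathematical operation combined with many others, can be written out directly. A minimal sketch in plain Python; the specific weights and the tanh nonlinearity are illustrative choices, not something the interview specifies.

```python
import math

# One artificial "neuron": a weighted sum of its inputs passed through a
# simple nonlinearity. The weights play the role of adaptable synapses.
def neuron(inputs, weights, bias):
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return math.tanh(z)

# A layer combines many such units; a deep network stacks layers so each
# level can build on the representation computed by the level below it.
def layer(inputs, weight_rows, biases):
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

x = [0.5, -1.0]
h = layer(x, [[1.0, 0.5], [-0.5, 1.0]], [0.0, 0.1])  # first level of representation
y = neuron(h, [1.0, -1.0], 0.0)                      # second level built on the first
print(y)
```

Learning, in this picture, is nothing more than adjusting the `weights` and `bias` values so the combined operations produce better outputs.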

How far along are we in understanding how the brain works? 

Bengio: The brain really remains a big mystery. Think of it like a big puzzle. We have all these elements, and tens of thousands of neuroscientists around the world are observing many different elements, but we are missing the big picture. What I and others believe and hope is that the progress we're making in deep learning is going to help us discover that big picture. Of course, we don't know, but there's a lot of excitement right now in the scientific community about bringing together the more mathematical ideas in machine learning and deep learning with neuroscience in order to better understand the brain. And of course, the hope is that it actually goes in the other direction as well, because current deep learning is not at all at the level of human intelligence. Humans and human brains are able to do things that machines can't, so maybe we can also learn about how brains do it and inspire future deep learning systems.

We hear a lot of speculation about what artificial intelligence can do.  Can you give us a sense of how close we are to creating artificial intelligence or deep learning techniques that do actually mimic how humans think and act?

Bengio:   I get a lot of these kinds of questions, and my answer is always, ‘I don’t know.’ And I think no serious scientist should be giving you a straight answer, because there’s just a lot of unknowns.  I mean, by definition we’re doing research in this field because we don’t know how to solve some set of problems. We know we’re making progress.  We can guess that things are moving in the right direction. But how long is it going to take to really address the more difficult problems of more abstract understanding, for example?  It’s impossible to answer.  Is it five years?  Is it 15 years?  Is it 50 years?  Right now we’re seeing some obstacles and we think we can tackle them.  But maybe this is just a mountain hiding other mountains.

Can you talk about where deep learning fits in the context of all of the tools in which people are using artificial intelligence?

Bengio: Deep learning is changing the way that AI has been thought of over the last few decades, and it's taking some of the good ideas from more traditional approaches to AI and integrating them. Probably the best-known example of this is the fusion of deep learning and reinforcement learning.

So, reinforcement learning is a type of machine learning where the learner doesn't get to know what a human would do in a given context. The learner only finds out whether its actions were good or bad, often only after a long sequence of actions. A lot of the recent progress in this area has been in things like playing games, but reinforcement learning is probably going to be very important for things like self-driving cars.
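The setup Bengio outlines, where the learner sees only a delayed reward rather than a demonstration of the right action, can be sketched with tabular Q-learning on a toy corridor world. The environment, the choice of Q-learning, and the hyperparameters here are all illustrative assumptions, not taken from the interview.

```python
import random

random.seed(0)

# Toy reinforcement learning: the learner is never told the correct action;
# it only receives a reward signal, and the reward arrives only at the end
# of a whole sequence of moves (a 5-state corridor with the goal at the end).
n_states, goal = 5, 4
q = [[0.0, 0.0] for _ in range(n_states)]  # value estimates: action 0 = left, 1 = right
alpha, gamma, eps = 0.5, 0.9, 0.3          # learning rate, discount, exploration rate

for episode in range(300):
    s = 0
    while s != goal:
        # Explore occasionally; otherwise take the currently best-looking action.
        a = random.randrange(2) if random.random() < eps else max((0, 1), key=lambda i: q[s][i])
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s2 == goal else 0.0     # feedback comes only at the very end
        q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
        s = s2

# After training, the greedy policy should walk right, toward the goal.
policy = [max((0, 1), key=lambda i: q[s][i]) for s in range(goal)]
print(policy)
```

Note that no "correct answer" ever appears in the training loop: the good-or-bad signal at the end of each episode is slowly propagated back through the earlier states, which is what makes this style of learning suitable for sequential tasks like games or driving.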

Allison Linn is a senior writer at Microsoft. Follow her on Twitter.