
Big Ideas
Teach Your Robots Well: Will Self-taught Robots Be the End of Us?

“Success in creating effective A.I.,” said the late Stephen Hawking, “could be the biggest event in the history of our civilization. Or the worst. We just don’t know.” Are we creating the instruments of our own destruction or exciting tools for our future survival? Once we teach a machine to learn on its own—as the programmers behind AlphaGo have done, to wondrous results—where do we draw moral and computational lines? In this program, leading specialists in A.I., neuroscience, and philosophy tackle the very questions that may define the future of humanity. This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.


HOD LIPSON: People have a perception of what AI and robotics should look like from Hollywood.

THEODORE T.: What do I call you? Do you have a name?

SAMANTHA:   Yes, Samantha.

THEODORE T.: Where did you get that name from?

SAMANTHA: I gave it to myself actually.

HOD LIPSON: We’ve seen happy robots, sad robots, complex robots, but, in reality, it looks very different.

ARNOLD WEBER: I understand what I’m made of, how I’m coded, but I do not understand the things that I feel.

HOD LIPSON: The first 50 years of AI were dominated by rules, by logic, by reasoning. The idea was that you program an AI system by writing out a set of rules, and the computer can obey those rules, logically interpret them, and follow them. If you get the rules just right, you can do lots of interesting things.
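
[Editor’s note: a minimal sketch, in Python, of what “programming by rules” means here: the programmer writes every condition by hand and the computer just follows them. The rules and categories are invented purely for illustration.]

# A hand-written, rule-based "classifier": every condition was thought up
# and typed in by a programmer; nothing here is learned from data.
def classify_animal(has_whiskers, barks, says_meow):
    if barks:
        return "dog"
    if says_meow or has_whiskers:
        return "cat"
    return "unknown"

print(classify_animal(has_whiskers=True, barks=False, says_meow=True))   # cat
print(classify_animal(has_whiskers=False, barks=True, says_meow=False))  # dog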

HOD LIPSON: From the very first days when people created the first computers, they already started to think about how to create intelligent machines for cracking codes, for simulating physical phenomena, and so forth.

NARRATOR: The task they set it, to stack blocks, was on the face of it child’s play.

HOD LIPSON: We have things like a machine that could beat a master-level checkers player in 1957. We had a machine that beat the world champion in chess in 1997, but that was all rule-based AI. I would say the next big milestone started to happen when AI transitioned from rule-based systems to machine learning-based systems. With machine learning, all the intelligence actually comes through holistic analysis of data. Unlike rule-based systems, machine learning systems get better the more data they get.

HOD LIPSON: Telling the difference between a cat and a dog, we intuitively know how to do that very well, but it’s very difficult to articulate the rules. It turns out it’s the same thing for a computer. If you try to do it with rules, it doesn’t work, but since 2012, that has suddenly become possible. When you teach a computer, you give it examples of thousands of pictures of cats and thousands of pictures of dogs. It learns how to do it pretty well and even surpasses humans.

HOD LIPSON: One of the first big high-profile game show milestones in AI is Watson winning Jeopardy about a decade ago.

ALEX TREBEK: Hello, $17,973 and a two-day total of $77,147.

HOD LIPSON: Just recently, AlphaGo winning the world championship in Go.

ANNOUNCER: AlphaGo’s won again, three straight wins.

SPEAKER: Three straight wins, has won the match in great style.

HOD LIPSON: We see this rapid acceleration, this exponential growth in AI as machines not only learn from exponentially growing data, but they’re also growing by teaching each other. For example, when it comes to driverless cars, a human can have only one lifetime of experience at driving, but a driverless car can have many lifetimes of experience with driving because it can learn from all other cars. In a strange way, the more driverless cars there are on the road, the better each one of them gets.

HOD LIPSON: We’ve seen increasingly how challenges that were resistant to solving using rules turned out to be solvable using machine learning. If I look forward, there are still milestones that are coming our way. Machines that generate things, generate images, generate music, generate art. There are art pieces that win art competitions that are painted by robots, machines that generate engineering designs, everything from antennas to circuits, that have been designed by machines and that outperform what humans can design.

TEACHER: How are we different from the computer? What else do we have-

STUDENT: It’s not human.

STUDENT 2: That’s what I said.

TEACHER: Right.

STUDENT: People don’t have buttons and a computer does.

TEACHER:   What is on your shirt?

STUDENT: Uh …

HOD LIPSON: Will AI ever be sentient? To me, the answer is yes. When AI systems begin to take that incredible intelligent power and model themselves, they begin to have self-awareness, they begin to have feelings. It’s not going to be that one day your computer wakes up and is sentient; it’s going to be a very gradual process.

ROBOT: I want to be more like a human. It’s the purpose I was designed for.

HOD LIPSON: People always ask, will robots reach human-level sentience? The answer is that there’s no reason to think human-level sentience is the ultimate sentience possible. Machines will keep learning, they’ll get there, and they’ll continue.

HOD LIPSON: There’s a lot of people who worry about AI getting out of hand and there’s a lot of doomsday scenarios around AI.

BILL GATES:   I do think we have to worry about it. I don’t think it’s inherent that as we create superintelligence that it will necessarily always have the same goals in mind that we do.

ELON MUSK:   We just don’t know what’s going to happen once there’s intelligence substantially greater than that of a human brain.

STEPHEN HAWKING:   I think that the development of full artificial intelligence could spell the end of the human race.

HOD LIPSON: I think AI will evolve to be different. It doesn’t experience the world the way we experience it.

SAMANTHA: Well, I take it from your tone, you’re challenging me. Maybe because you’re curious how I work?

HOD LIPSON: It’ll know things we don’t know, we’ll know things that the AI can’t perceive, and it’s going to be like a different species.

TIM URBAN: Now that you’re all properly creeped out, we can move on to our panelists. The first one is the Director of the AI Mind and Society Group at the University of Connecticut. Her AI research includes a two-year project on post-biological intelligence with NASA. Please welcome Susan Schneider.

TIM URBAN: Our next panelist is the Chief AI Scientist at Facebook and an NYU professor. Please join me in welcoming Yann LeCun.

TIM URBAN: Also joining us is a professor of cognitive neuroscience at Dartmouth College. His research focuses on consciousness and its neural realizations. Please welcome Peter Tse.

TIM URBAN: Finally, we have a professor doing physics and AI research at MIT who advocates for the positive use of technology as the president of the Future of Life Institute. Please welcome the ridiculously handsome Max Tegmark.

TIM URBAN: … There have been some major paradigm shifts in how we develop this stuff. We had rule-based AI and we’ve shifted to machine learning. Part of the major reason we’ve been able to do this is Yann. Yann literally is one of the people that made us able to do this. Yann, just what is rule-based AI versus machine learning, and how did you do that?

YANN LECUN: Well, actually, the idea of machines that can learn is almost as old as computers. Turing was talking about it in the 40s, and the first machines capable of learning were built in the 50s essentially. The Perceptron was a machine capable of recognizing simple shapes. It was actually an analog computer, so there was a wave of machine learning back in the 60s. It kind of died out a little bit at the end of the 60s and it reappeared in the 80s.

YANN LECUN: The way machine learning works, and you saw some examples in the video initially, if you want to train your machine to recognize, let’s say, cars, airplanes, tables, and chairs in images, you collect thousands of examples of each of them, you show the machine the picture of a car, and if it doesn’t say car, you tell it, “Actually, you got it wrong. This is a car.” Then, the machine adjusts its internal parameters if you will, its function, so that next time you show the same picture the output will be closer to what you want. That’s called supervised learning. You feed the machine with the correct answer when you train it.

YANN LECUN: The problem with this is that it requires thousands and thousands if not millions of examples for the machine to do this properly. There are a lot of tasks you can do this way. You can train machines to recognize speech. You can train them to recognize images. You can train them to translate language. It’s not perfect, but it’s useful. You can train them to classify a piece of text into a number of different topics. All of the applications of machine learning that you see today basically use this model of learning, supervised learning. That means it only works with things where it’s worth collecting a lot of data.
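
[Editor’s note: as a rough illustration of the supervised-learning loop LeCun describes (show an example, compare the machine’s answer with the correct label, adjust the internal parameters), here is a minimal sketch in Python. The toy data and the single linear unit are invented for illustration; nothing here is code from the panel.]

import numpy as np

# Toy training set: 2-D feature vectors labeled 1 ("car") or 0 ("not a car").
X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
y = np.array([1, 1, 0, 0])

w = np.zeros(2)  # the machine's adjustable internal parameters
b = 0.0

def predict(x):
    # The machine's current answer: 1 if the weighted sum is positive.
    return 1 if x @ w + b > 0 else 0

# Training loop: whenever the answer is wrong, nudge the parameters
# toward the correct label (the classic perceptron update).
for epoch in range(10):
    for xi, label in zip(X, y):
        error = label - predict(xi)
        w = w + 0.1 * error * xi
        b = b + 0.1 * error

print([predict(xi) for xi in X])  # after training: [1, 1, 0, 0]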

YANN LECUN: How are those machines built? There are several ways to build learning machines, some based on statistics and things like this. The stuff that has become really popular in recent years is what we used to call neural networks, which we now call deep learning. It’s the idea, loosely inspired by the brain, of constructing a machine as a very large network of very simple elements that are very similar to the neurons in the brain, and then the machine learns by basically changing the efficacy of the connections between those neurons. They’re like coefficients you can change essentially.

YANN LECUN: This kind of method is called deep learning because those neurons are organized into many layers essentially. It’s as simple as that. It’s not deep because there’s a deep understanding of the content in the machine. With that, we can do amazing things like what you see here on the screen: being able to train a machine to not just recognize objects, but also draw the outline and figure out the pose of a human body, and translate language without really understanding what it means.
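
[Editor’s note: a minimal sketch of the “many layers of simple elements” idea, again in Python, with sizes chosen arbitrarily for illustration. Each layer is just a matrix of adjustable connection weights (the “coefficients”) followed by a simple nonlinearity; learning would consist of adjusting those weights so the output moves closer to the desired answer.]

import numpy as np

rng = np.random.default_rng(0)

# Three layers of connections; each entry is an adjustable weight.
W1 = rng.normal(size=(4, 16))   # input -> first hidden layer
W2 = rng.normal(size=(16, 16))  # first hidden layer -> second hidden layer
W3 = rng.normal(size=(16, 3))   # second hidden layer -> output (3 made-up classes)

def forward(x):
    h1 = np.maximum(0.0, x @ W1)   # each unit: weighted sum of its inputs, then a nonlinearity
    h2 = np.maximum(0.0, h1 @ W2)
    return h2 @ W3                 # scores for the 3 classes

x = rng.normal(size=4)             # a made-up input
print(forward(x))                  # untrained network: the scores are meaningless for now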

YANN LECUN: I think there’s going to be a lot of applications of this in the near future, but it’s very limited. It’s trained for relatively narrow applications. There’s a second type of learning called reinforcement learning. Reinforcement learning is a process by which the machine basically trains itself by trial and error. It tries something and then you tell it whether it did good or bad. If you tell it it did good, it reinforces its behavior and if you punish it essentially, it de-emphasizes that behavior. That works really well for games, but it requires also millions and millions of trials. So, you can have machines start learning to play Atari games or Go or chess by playing millions of games against themselves and then reach superhuman performance. But, if you were to use this to train a machine to drive a car, it would have to drive for millions of hours and it would have to run off cliffs about 50,000 times before it figures out how not to do that.
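
[Editor’s note: the trial-and-error loop LeCun describes can be sketched with tabular Q-learning on a made-up five-state corridor: the agent tries moves, is rewarded only when it reaches the goal state, and over many trials reinforces whatever led to reward. The environment and all numbers are invented for illustration.]

import random

N_STATES = 5  # a tiny corridor; state 4 is the goal and gives the only reward
q = [[0.0, 0.0] for _ in range(N_STATES)]  # value of (move left, move right) in each state

for trial in range(500):  # many, many trials
    s = 0
    while s != N_STATES - 1:
        # Mostly pick the action that has worked best so far; sometimes explore.
        if random.random() < 0.3 or q[s][0] == q[s][1]:
            a = random.choice([0, 1])
        else:
            a = 0 if q[s][0] > q[s][1] else 1
        s_next = max(0, s - 1) if a == 0 else s + 1
        reward = 1.0 if s_next == N_STATES - 1 else 0.0
        # Reinforce the action in proportion to the reward it led to.
        q[s][a] += 0.5 * (reward + 0.9 * max(q[s_next]) - q[s][a])
        s = s_next

print([row.index(max(row)) for row in q])  # learned policy: 1 ("move right") in every non-goal state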

YANN LECUN: We seem to be able to learn how to drive a car with about 20 hours of training and without ever crashing, for most of us. We don’t know how to do this with machines. That’s the challenge of the next few years really, that perhaps we’ll talk about a little bit later. We have an ability to learn by just observing the world, and we learn an enormous amount of background knowledge about the world just when we’re babies. Just the fact that objects don’t float in the air, they fall. The fact that when an object is hidden behind another one, it’s still there. That’s called object permanence.

YANN LECUN: Take this notion of gravity, that objects fall: when you show an object that floats in the air to a baby below six months, they’re not surprised. They think that’s how the world works. It doesn’t violate their model of the world. After eight months, if you show that to a baby, they look at it like this. They say, “What’s going on?” I mean, they don’t say what’s going on, but they think, “What’s going on?” That means, in the meantime, they’ve built a model of the world that includes things like intuitive physics. That occurs also with animals like apes. Your dog has a model of the world. Your cat has a model of the world. When this model of the world is violated, you either find that funny or scary, but, in any case, you pay attention because you learn from it.

YANN LECUN: So here’s a baby orangutan being shown a magic trick. An object is removed from the cup, and they show the cup, the cup is empty, and the baby orangutan [crosstalk 00:18:06]. They’re rolling on the floor laughing. Obviously, his model of the world was violated. The object had to be in the cup and it wasn’t there and he said, “What? This is funny.”

YANN LECUN: How do we get machines to learn models of the world this way by observation? That’s what we don’t know how to do. We aren’t going to have truly intelligent machines until we figure this one out.

TIM URBAN: Before we even get into the even more mindblowing AI stuff that’s coming in the more distant future, let’s just talk about the next 10 years for a second. Peter, when we talk about the distinction between what’s maybe coming in 10 or 20 years, the stuff that maybe only humans can do, and what we have now, how would you define it?

PETER ULRIC TSE: Well, I think that artificial narrow intelligence is here. It’s in every aspect of our lives now. I think we’re going to continue in that direction. That alone is going to change our lives in a big way, just as airplanes changed all of our lives. We don’t expect there to be general airplanes. We don’t want them to do anything but fly us to our destination. We don’t ask them to watch our children or mow the lawn, and that’s okay. The question then is, down the line, beyond 10 years, will there be other systems that can watch our children and fly and mow the lawn? And do we really want that? I think the next 10 years is really all about artificial narrow intelligence becoming ever more powerful.

PETER ULRIC TSE: The real hurdle is going to be the mental models. Take a case like the successes of the past five years: recognizing objects has been afforded by supervised learning, where you provide lots of labels, as in the ImageNet case. But I would argue that a lot of what we regard as vision, for example, is a representation of what’s invisible and cannot be easily labeled. So, for example, the contents of other people’s minds are invisible. The backs of objects, the shapes of things, causation. Our conscious experience is the ultimate model that evolution gave us of what’s going on right now in the world and in our bodies in that world, and it includes a whole story about what’s going on, causation, other minds, and so forth. That’s going to be tricky to get to, this representation of the invisible.

PETER ULRIC TSE: I think it’s going to be very tricky for AI to come up with systems that regard an absence of information as informative short of full-blown mental models, many of which are, in turn, realized in our own experience, our subjective experience of our bodies in this world. I think there’s a long way to go.

TIM URBAN: Yeah, you keep hearing about new areas of life that you don’t expect AI to apply to. First, it was gaming and other things, driving cars, and one after the other it keeps proving us wrong. One area that AI has started to move into is a world that we really don’t associate with computers: art and creativity. Hod Lipson, who was actually in that first video, he and his team have created an AI artist, a very sassy artist. This AI has actually created something that I’d like to show you, and be nice. It’s sassy.

TIM URBAN: Right? Not so bad. We actually have a video of this being made. Are we impressed or are we not impressed? How do we feel about this?

SUSAN SCHNEIDER: Earlier we were talking about this and I guess I didn’t feel like it’s a true instance of creativity because it’s just a copy, but then others said, “Well, it infused its own take on the painting.” My concern is that there’s just not enough creativity but I do think that moving into the future I wouldn’t be surprised if we did see novel instances of true creativity in machines.

TIM URBAN: Ouch. Yeah.

SUSAN SCHNEIDER: Sorry, Hod.

TIM URBAN: AI has eclipsed humans. It takes a while, but when it gets good at chess, it never looks back. Suddenly it is officially better at chess than all humans, forever, by far. Is it going to suddenly just be putting Mozart to shame, where we say, “Who would ever listen to human-written music anymore?”

YANN LECUN: No, I think it’s going to help us be more creative. It’s going to be an amplification of our intelligence and our creativity. At the root of art, there is generally the communication of emotions. Art is really about either evoking or communicating emotions, and if there is no emotion to communicate, there is no point. If you put a machine that doesn’t have emotion producing art, it might evoke an emotion but not communicate one. I like living nearby here because I’m walking distance to one of my favorite jazz clubs. I’m a huge jazz fan. Jazz is really about real-time communication of emotions. It’s like an open door to the soul of the performer. I don’t see the point of having a machine do this because then there is no communication to be…

TIM URBAN: You’re saying even if an AI could be programmed to see the audience, feel the room, understand the deal, and know exactly how the best jazz musicians communicate that particular emotion, there’s actually going to be, just by definition, something missing ’cause the audience knows that it’s manipulating them.

YANN LECUN: Objectively, it might not be missing, because it might not be distinguishable from something that’s actually produced by a human, but my guess is that the feeling of the audience will be different because they will know it comes from a machine. It might take many decades, perhaps centuries, before people’s attitudes towards creation by machines change, but, eventually… I had this discussion with a famous economist, in fact, a Nobel prize-winning economist called Daniel Kahneman, to whom I made the point that communication of emotion may take a while for machines. He said, “Yeah, but eventually they’ll at least be able to simulate it well enough that we won’t be able to tell the difference.” That’s a very good point.

MAX TEGMARK: I cringe a little bit when someone asks, “Oh, is this real creativity?” Because you were joking earlier about how people often say, “Oh, that’s not real intelligence” as soon as the machine figures out how to do it. If you take the point of view that intelligence is all about information processing and that creativity is also just a certain kind of very sophisticated information processing that we do with our brains, then the question isn’t whether it’s possible for machines to be creative but simply whether we are smart enough to actually make such machines, whether they will happen eventually. I have a lot of friends I respect who are very smart who think that machines can never be creative or even as intelligent as us because they view intelligence and creativity as something mysterious that can only exist in biological organisms like us. But, as a physicist, I consider that attitude carbon chauvinism. I think it’s arrogant to say that you can only be smart and creative if you’re made of meat.

MAX TEGMARK: I’m made of exactly the same kind of electrons and other elementary particles as the food I eat and as my laptop. It’s all about the patterns in which the particles are arranged, so it’s ultimately all about information processing the way I see it.

TIM URBAN:     That makes sense. In the end, it’s just the elementary particles and … Yeah.

PETER ULRIC TSE: Can I return for a moment to the creativity? Because I would argue that something like this is ‘as if’ creativity and it’s not really yet the real thing. I’m not saying it’s impossible. We are existence proof that physical systems can be creative. The kind of creativity I find most impressive is when people like Einstein completely reconfigure our understanding of something like space or gravity, poof, in a totally new way, or take music and create a whole new form like jazz. Now, these convolutional neural networks as they currently exist need to be taught, so, given lots of examples of Mozart, they can produce something like Mozart, but are they then going to create a new form? I suspect the answer is no, that we’re going to have to accomplish something more like deep unsupervised learning, which is what babies and children do. Part of that, I think, is going to be moving from the nouns of the mind, like labeling house, person, face, to the verbs of the mind.

PETER ULRIC TSE: Very central to human cognition is mental operations. If you look at some of the first instances of creativity in our species, they’re really mindblowing. 30,000 years ago, in a cave that is now near Ulm, Germany, somebody put a lion’s head on a human body, which took an operation of downloading a lion’s head, putting it on a human body, sticking it together, and then going and making it in the world. Now, a modern example would be lying in bed, maybe like Orville Wright did for two years, thinking about how to fly, and then he said, “Well, actually we don’t need to flap. We can just pull the whole thing forward with a big fan.” Then, going and building it and making an airplane and thereby changing the world. Mental operations, these dynamic, almost syntactic operations that take place in our working memory, are very central to what we do and are at the heart of our creativity, and I think they are very different from this ‘as if’ creativity that results from supervised learning with thousands of examples.

TIM URBAN: True originality might be harder. Although, maybe humans are also… We’re wired to fit in and copy what’s done. Maybe AI will be relieved of the burden, of the fear of failure, that sometimes hinders originality. Maybe once it gets there, it could be super original in some ways, but it’s not there yet in every way, and we have such a good way to show you that. It has to do with a movie called Sunspring, which was a screenplay… There was an AI that was fed thousands of screenplays. They said, “Now, take all of that and write us a great screenplay.” The AI did its best, and they actually got human actors to act out verbatim what the AI wrote. So, I’ll let you judge for yourself, but… Turn this on here.

H:         All right, you can’t tell me that.

C:         Yeah, I was coming to that thing because you were so pretty.

H:         I don’t know. I don’t know what you’re talking about.

C:         That’s right.

H:         So, what are you doing?

H2:       I don’t want to be honest with you.

H:         You don’t have to be a doctor.

H2:       I am not sure. I don’t know what you’re talking about.

H:         I want to see you too.

H2:       What do you mean?

H:         I’m sure you wouldn’t even touch me.

H2:       I don’t know what you’re talking about.

H:         Principle is completely constructed of the same time.

H2:       It’s all about you to be true.

H:         You didn’t even watch the movie with the rest of the base.

H2:       I don’t know.

H:         I don’t care.

H2:       I know it’s a consequence. Whatever you need to know about the presence of the story, I’m a little bit of a boy on the floor.

H:         I don’t know. I need you to explain to me what you say.

H2:       What do you mean?

H:         Because I don’t know what you’re talking about.

H2:       That? That was all the time.

H:         Would have been a good time.

TIM URBAN: It’s a little uneven right now. This is, again, the present right now and maybe a little bit of what we can expect in the next few years. What I want to move into now, which is really the mindblowing stuff, is where this is going. Max, what is artificial general intelligence and how is it different than what we have now?

MAX TEGMARK: Yeah, if we can have this picture up here, I’ll explain how I like to think about this. I like to think about this question in terms of this abstract landscape of tasks, where the elevation represents how hard it is for AI to do each task at human level and the sea level represents what AI can do today. The sea level is obviously rising, so there’s a kind of global warming going on here in the task landscape. The obvious takeaway from this is you should avoid careers right at the waterfront, of course, which will soon be disrupted by automation. The much bigger question that you’re going for is, how high will the water end up rising? Will it eventually submerge all land, matching human intelligence at all tasks? This is the definition of artificial general intelligence. This has been the Holy Grail of AI research ever since its inception.

TIM URBAN: Right, it’s so hard to understand because we’ve never experienced a world where there’s something that’s generally intelligent on a human level other than humans. It’s going to be something different than humans that is also smart the way humans are. That is so mindboggling that we can’t apply our own experience and say, “Well, it might be something like that.” It’s going to be very hard for us to even imagine.

TIM URBAN: Yann, you talk about that it’s … You almost refer to it as hypothetical at this point.

YANN LECUN: Well, not only do we not have the technology for this, we don’t even have the science, so we don’t know what principles intelligent machines at the level of human intelligence will be based on. Now, we like to think of ourselves as being generally intelligent, but we’re not. We’re actually very specialized as well. We’re more general than, of course, all the machines that we have, but our brains are very specialized. There are only certain things we do well, and if there’s anything that experiments like AlphaGo have proved in recent years, it’s that we totally suck at Go. We’re really bad at Go. The stupid machine can beat us by a very, very large margin. We’re not very good at exploring trees of options, for example, because we don’t have that much memory. There are a lot of tasks like this that… We’re not very good at planning a path from one city to another. The algorithm that runs on your GPS is much better at this than you are. There are things like this that we’re not particularly good at. We know how to do them somehow, but our brains are somewhat specialized.

YANN LECUN: Now, the thing is, you were talking about a new species, AI being very different from human intelligence. It will be very different from human intelligence, and there is one sort of trap that is very easy to fall into, which is to assume that when machines are intelligent, they will have all the side effects, if you will, all the characteristics of human intelligence. They will not. For example, there is the traditional Terminator scenario that we’ve all heard about: machines will become superintelligent and then they will want to take over the world and kill us all. There are a lot of people who have been claiming this is going to happen and it’s inevitable and blah, blah, blah, or at least that it’s a definite danger. Now, the thing is, even in the human species, the desire to take over is not actually correlated with intelligence.

TIM URBAN: It’s true. That is true.

YANN LECUN: It’s not like the people who are in leadership positions are necessarily the smartest. In fact, there is an evolutionary argument for the fact that it’s if you are stupid that you want to be the chief. Because if you are smart enough to survive on your own, you don’t need to convince anybody to help you, but if you are stupid, you need everybody else to help you, to feed you essentially. The desire to take over is not correlated with intelligence. It’s correlated with testosterone probably.

TIM URBAN: Yeah.

MAX TEGMARK: Tim, if I may just add a little bit to what I said. I completely agree with you, Yann, of course, that the Terminator stuff is silly and absolutely not something that we should worry about, but I think it’s worth emphasizing a little bit more why, nonetheless, artificial general intelligence is such a big deal if we ever get there. First of all, it’s important to remember that intelligence can give power. If you had artificial general intelligence and you are, for example, Google, you could replace your 40,000 engineers with 40,000 AIs that could work much faster and didn’t have to take breaks. Before too long, you could be incredibly rich and powerful and start having a vast amount of real power in the physical world. In that sense, it gives great power. Then, you can ask the question: even if the AI doesn’t itself, as in sci-fi movies, somehow break out and take over, do we want whatever humans happen to be controlling the first AGI to be able, unelected, to take power over the planet, or would we like this power to be shared more broadly?

MAX TEGMARK: That’s one example of why it’s such a big deal. A second example of why AGI, I think, would be a huge deal is because, even though I completely agree with you, Yann, that we humans are very dumb, and my teenage sons remind me of this very often, that I’m very dumb, there’s so much that we can’t do. You might think there’s nothing special about human intelligence in the grand scheme of things, but there is actually. Because in the evolution of Earth, we have just barely reached the level where we are able to develop technology that might be able to supersede us. If we have machines which can do everything we can, they will then perhaps also be able to be used to develop ever better machines, still better, and that can enable AI to bootstrap itself to become not just a tiny bit smarter than us, but way smarter. That leads to this whole discussion about an intelligence explosion, the singularity, and so on, which is also very controversial. Those are the two reasons why I feel that AGI would be such a huge deal even though I agree with what you said.

TIM URBAN: Let’s also bring in what I think is an elephant in the room any time you’re talking about human-level or beyond intelligent computers: consciousness. Of all the different debates in AI that are hugely controversial, this is probably the most. You have people all over the place. Let’s just define consciousness so we can all be on the same page. Susan, what is consciousness to you?

SUSAN SCHNEIDER: Well, it’s the felt quality of experience. Right now, it feels like something from the inside to be you. Every moment of your waking life, and even when you’re dreaming, you are experiencing the world. Consciousness needs to be distinguished from conscience. A lot of people run them together when they’re first thinking about it. To have a conscience is entirely different from having that felt quality. The felt quality is just what it is to be alert and alive. When you see the rich hues of a sunset, when you smell the aroma of your morning coffee, you’re having conscious experience.

PETER ULRIC TSE: I completely agree that consciousness is a subjective experience. It’s nothing else than that, but it’s very special in that it is a domain of highly precompiled representations over which mental operators can operate. The key operator, I think, is attention, especially volitional attention. You might have locked-in syndrome and still be able to shift your volitional attention to the radio or the TV, so even then you’d have a kind of volitional control in this domain of your consciousness.

PETER ULRIC TSE: Consciousness is for something. It’s for these planning areas to have a world. In one sense, it’s a veridical hallucination, but it’s not a hallucination because it’s not saying what’s not there; it’s saying what is there. It allows us to act in this world. That’s only half of consciousness. The other half of consciousness is imagination. If I were to buzz you, probably about half of you right now are zoning out and thinking about this or that; we spend about half of our lives in this imaginary virtual reality of our own creation. In this domain, we have total freedom. We can do anything, and then we can go and build it in the world if we want. Consciousness is for something, and it takes quite a while to make it. The photons in the world hit your retina at time zero, but your consciousness is not happening at time zero. There’s a lot of processing that goes on in the first quarter to a third of a second, and then you experience a full-blown world that allows you to act in the world.

MAX TEGMARK: Yeah, I share the definition that you both gave of consciousness as subjective experience. When I drive down the street, I’m experiencing colors and sounds and vibrations and motions, but does the self-driving car experience anything? That’s a question I think we honestly don’t have a good scientific answer for yet. I love how controversial this is. If you look up the word consciousness in the Macmillan Dictionary of Psychology from a few years back, it says nothing of interest has ever been written on the subject. Even when I ask a lot of my science colleagues, most of them say, “Consciousness is just B.S.” When I ask them why, I notice that they form two camps that disagree violently with each other about why it’s B.S. Half of them say it’s B.S. because, of course, machines can’t be conscious. You have to be made of meat to be conscious. Then, the other half says, “Of course, this is B.S. because consciousness and intelligence are just the same thing.” In other words, anything that acts as if it were conscious will be conscious.

MAX TEGMARK: To be contrarian to most of my colleagues, I think the truth is probably somewhere in between, because I know that most of the information processing in my brain I’m actually not conscious of, the heartbeat regulation and the vast majority of other things. Actually, when I look up and go, “Oh, there is Yann,” I have no idea how all that information processing happens. What I’m aware of is just this CEO part of my brain that gets emailed the end result of the computation. Not only do I think it’s not a B.S. question, I think people who have been saying for so long that it’s a B.S. question have actually been lame and just running away from a genuine science question. ’Cause usually if you have a great science question that lingers for hundreds of years, it’s because people just dismissed it rather than doing the hard work. I think we need to do the hard work on this. If you’re a physician in the emergency room and you have an unresponsive patient coming in, wouldn’t it be great to have a consciousness detector that can tell you whether this person is in a coma and there’s no one home, or whether they have locked-in syndrome? If you have a helper robot, wouldn’t you want to know if it’s conscious, so you should feel guilty about switching it off? Or whether it’s just a zombie, so you should feel creeped out when it’s pretending to be happy about what you said? I’d like to know when we do these things.

YANN LECUN: The question of consciousness is probably not posed properly, in the sense that back in the 18th century or 17th century, or even perhaps earlier, when scientists discovered how the eye works and that the image on the retina forms upside down, they were baffled by the fact that we see right-side up. How is it that we don’t see upside down, because the image at the back of our eye is upside down? It was a big mystery. Now that we know what information processing is all about, we think this question makes absolutely no sense. The whole statement makes no sense. I think there are things about consciousness of that nature, where we’re not asking the right question, but there are a lot of contrarian opinions on this that I’d be happy to offer at any moment, not totally seriously, because I don’t completely believe in them. One, for example, is that consciousness is an epiphenomenon of being intelligent: any intelligent entity will have to be conscious because it will have to have some sort of model of itself, and, according to some definitions, that satisfies consciousness.

YANN LECUN: There’s another one that I like, which I connect with, maybe other people connect with it as well, which is that consciousness is actually a consequence of our brains being very limited. We can only focus our attention on one thing at a time, and therefore… That’s because our brain is limited hardware. We have our prefrontal cortex that has to focus on one task or one particular situation and cannot do multiple things at the same time. We have to have a process in our brain that decides what to pay attention to and how to configure our prefrontal cortex to solve the problem at hand. We interpret this as consciousness, but it’s just a consequence of the fact that our brain is so small. If our brain were ten times the size, then we could do ten things at the same time and maybe we wouldn’t have the same experience of consciousness. Maybe we would have ten simultaneous consciousnesses. Is there a plural for consciousness? Is it consciousnesses?

TIM URBAN: Consciousnesses. Let’s go with that.

YANN LECUN: Okay. It’s not a collective word, is it?

TIM URBAN: Yeah.

SUSAN SCHNEIDER: I thought-

YANN LECUN: I think we just don’t know enough really to ask these kinds of questions.

TIM URBAN:     Let’s start with Peter and then we’ll go to Susan.

PETER ULRIC TSE: All right, so bringing it back a little bit to the question of artificial intelligence, I think that why did consciousness evolve? Well, it’s for something. It’s for the frontal areas to be able to plan. You want to get the best representation of the world that you can. Now, in order to do that, you need to take incredibly ambiguous visual input and recover a disambiguated representation of the world so the planning areas can plan appropriately. Let’s say I have a white-haired cat. It looks white to me because I want to recover what’s intrinsically true about the cat, namely that it’s a white-haired cat. Now it runs under a shadow or a blue light. Well, the light actually reflecting off of the white hair, what’s hitting my retina, is now blue, but I want to discount that and recover what’s still intrinsically true, so I see it as a white cat that happens to be under a blue light. I want to recover its intrinsically true shape and size and distance and so forth. It’s the best representation of what’s intrinsically the case.

PETER ULRIC TSE: Again, what got built into this quasi-hallucination is, in addition to that kind of story about the physical world, stories like causation, which is invisible. Next time you’re at a party, have your confederate turn off the lights and you say, “I can turn the lights off,” and you go, boom. The person turns the lights off. Everyone’s like, “Wow, how did you do that?” Because we are perceiving causation. We’re also perceiving other minds. It’s built into the construction. My guess is that this is going to be very central to the creation ultimately of AGI or general intelligence, ’cause it’s so central to the creation of our models of the world. I understand what it’s like for you to feel pain because I feel pain, or for you to have a broken heart because I once did. This is very central. I don’t see how a system that has never felt pain can understand what I mean when I’m talking about pain.

TIM URBAN:     Susan.

SUSAN SCHNEIDER: That’s interesting. I guess my general comment here is to go back to Yann’s point about how attention is closely related to consciousness, and that we could just have gotten lucky because we have limited capacity. We can only entertain maybe seven variables in working memory at any given time, and we have trouble remembering phone numbers. Maybe consciousness is something we got that relates to our limited-capacity systems. Now, if that’s true, though, suppose we do create AGI and shortly thereafter we create intelligent synthetic beings that are smarter than us in all sorts of ways: why think they’re conscious? Just because they look like, say, Hanson Robotics’ Sophia, because they look human, doesn’t mean that they’ll be conscious. Think about it. Do they need to have these limited-capacity systems? For example, a superintelligence could be as large as an entire planet. Its computronium, its computational resources, could span the entire internet. What would be novel to it, requiring slow, deliberative focus? Why would it be like us in any kind of meaningful sense? What I want to suggest is that we pull apart intelligence and consciousness and treat the question as an empirical matter. If we want to figure out machine consciousness, we need to ask for each type of AI architecture whether that type of system has conscious experience and not just assume that because it looks human it feels something.

MAX TEGMARK: Yeah, I want to applaud you there for distinguishing between artificial intelligence and artificial consciousness, which are way too often conflated with each other. I think many people, for example, will say things like, “Oh, we’re so scared that machines are going to become conscious and then suddenly they’re going to turn on us and be evil like in bad Hollywood movies.” Somehow, it’s the consciousness that you should worry about. That, I think, is a total red herring. Although I agree that consciousness is super important from a moral and ethical point of view-

SUSAN SCHNEIDER:   Yeah, of course.

MAX TEGMARK:   … in terms of whether you should worry or not, you don’t care about whether that heat-seeking missile chasing after you is conscious or not or how it feels about this. You only care about what it does and it’s perfectly possible for us to get in trouble with some incredibly intelligent machine even if it’s not having any subjective experience.

MAX TEGMARK: In other words, consciousness isn’t something we should worry about. That’s not going to make any particular difference from that perspective, but I think it makes enormous moral difference. When I have colleagues who tell me that they think we shouldn’t talk about consciousness because it’s just philosophical B.S., I ask them to explain to me how you can have any morality if you refuse to talk about consciousness and subjective experience. What’s wrong with torture if it’s just, oh, the elementary particles were moving around this way rather than that way? It’s all about the negativity of the subjective experience that’s at hand. If we want to be moral people, we want to create a lot of positive experiences in the future, not just a bunch of zombies.

TIM URBAN: This is a Nick Bostrom example. If there are a trillion simulations you’re running just to test something, a trillion general intelligences, and then you’re like, “Okay, I got what I needed. Let’s shut them all off.” If they’re not conscious, it’s like closing your laptop. There’s nothing wrong with that. If those things are conscious, you just created the biggest genocide in the history of the human species. It’s pretty relevant. It matters.

YANN LECUN: Not if you have a backup.

YANN LECUN: The reason why we care about each other is because we have a lot invested in each other. There is value to every human, particularly through other humans who are close to that person. It’s possible that we’ll have the same relationship with our household robot that we trained. We have a lot invested in that household robot, the same way we have a lot invested in our cats or dogs. We won’t want that robot to get destroyed because all of the time we invested in that robot would go away. But, if we have a backup, it’s okay to smash it against the wall.

TIM URBAN:     If you have an identical twin, can I just throw you into the sewer?

YANN LECUN: No, there are all kinds of interesting questions like this. Imagine we invent… We have a physicist here. We invent a Star Trek-style transporter. You get dematerialized. You get destroyed. You get killed and you get reconstructed at the other end. You experience death. This is a metaphor really for what is it that we’re upset at when someone gets killed or when an intelligent machine with its consciousness gets destroyed? As long as there’s no pain involved, which there isn’t when you go under anesthesia. As long as you have a backup or you can get revived, there is no-

TIM URBAN:     But, if there’s suffering-

YANN LECUN: … no information loss.

TIM URBAN:     If there’s suffering then that’s a different thing.

YANN LECUN: Yep. That’s right.

TIM URBAN:     Then, consciousness does …

YANN LECUN: Okay, so now I ask the question, can machines have emotions? You see, again, Star Trek’s Commander Data has this chip they can turn on or off to have emotions or not, as if somehow you could have intelligent machines that don’t have emotions. I don’t personally believe that it is possible to design or build autonomous intelligent machines without them having emotions. Emotion is part of intelligence. Now, we’re going to have self-driving cars that are not going to have much emotion, but that’s because, even though we call them autonomous cars, they’re not going to be autonomous intelligences. They’re just designed to drive your car.

YANN LECUN: If you’re talking about autonomous intelligence, then these are machines that can decide what they do. They have some intrinsic drive that makes them wake up every morning or do particular things, justify their lives maybe, but no pre-programmed behavior really. You can’t have a machine like this without emotions.

TIM URBAN:     Peter.

PETER ULRIC TSE: Yeah, so I think it’s a very interesting point that emotions are going to prove central to the generation of artificial general intelligence. If you look at the evolution of animals, we can learn something, I think, about the origin of the emotions and the desires, because they are conscious states, but they’re teleological states within consciousness and they’re often about what’s not visible. How would this get started? Well, you could imagine a fish that only responded to something it could see. A stimulus is present, it does this. If it sees a barracuda, it flees. Then, imagine a new revolutionary fish that has a working memory. Now, when the barracuda disappears behind a piece of coral, that fish can say, “A-ha! I know it’s going that way. I’m going to go that way.” The representation of the invisible became, I think, very central. The need for working memory is very central, which is lacking in present architectures. Then, these teleological states that force us to seek mates and food and so forth, really having these teleological states, these emotions and desires, allowed us to form, not garden paths, but desert paths. A garden path is when you know locally this is best, locally this is best, locally this is best, and then you end up in the jaws of a lion.

PETER ULRIC TSE: A desert path would be: well, locally I have to go without, I go without, I go without, but at the end of it, I might have a mate or food or shelter. This is a big revolution that afforded us the ability to act in the world in the absence of input. Central to that also is the formation of mental models and cognitive maps of the whole landscape, physical and emotional as well as social.

YANN LECUN: Actually, one of the big advances, a very interesting development in deep learning over the last few years, is deep learning systems that have a working memory: neural networks, neural Turing machines, things like that. Those are models that actually have a separate module for computation and another one for storing memory, short-term memory. Similarly, we actually have a particular module in our brains called the hippocampus, which more or less plays that role of storing short-term memory.
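
[Editor’s note: a very rough sketch of the “separate module for memory” idea, loosely in the spirit of memory-augmented networks such as neural Turing machines: a controller reads from an external memory matrix by soft attention. The sizes and contents are invented for illustration, and this is not any particular published architecture.]

import numpy as np

rng = np.random.default_rng(1)
memory = rng.normal(size=(8, 16))  # 8 short-term memory slots, 16 numbers each

def read(query):
    # Soft attention: weight every slot by its similarity to the query, then blend.
    scores = memory @ query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()           # softmax over the 8 slots
    return weights @ memory            # blended read-out vector

def write(slot, content):
    # A hard write into one chosen slot, kept simple for illustration.
    memory[slot] = content

write(3, np.ones(16))
print(read(np.ones(16)).round(2))      # the read-out is now dominated by slot 3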

PETER ULRIC TSE: I think a very interesting place to look for lessons about how to build artificial intelligence systems will be not computers, but evolution’s other experiments. I think the most interesting one is the octopus, because complex brains evolved in three lineages. The chordates, and we’re sort of the culmination of that ’cause we’re like chimps plus symbolic processing plus syntax. Then, some arthropods like praying mantises and honeybees; they have a couple hundred thousand neurons. The octopus has 500 million, comparable to a bear or a dog. If we want to understand computational principles that might be universal, we should look at this animal because there might be only so many ways to build a brain. Convergent evolution found that there are only so many ways to build a wing. You need some sort of membrane. In the chordates, the bats did this and the birds did this and the pterodactyls did this, but they all have in common flapping and membranes. There are only so many ways to build a wing. There might be only so many ways to build a brain. Some people have argued, for example, that the vertical lobe of the octopus brain is completely or very analogous to our hippocampus, with very similar circuitry. Well, convergent evolution has brought us there ’cause our common ancestor was probably Precambrian. It was probably a little flatworm way, way back in the ancient, warm seas.

SUSAN SCHNEIDER: That’s really interesting, because, to go back to this idea of superintelligence, one wonders whether, through thinking about both AI and the intriguing systems that nature gives us, like the octopus, we could discover universal properties of intelligence and, in so doing, anticipate the shape of superintelligence. Because after this panel, I have to confess, I’m actually a little bit more worried about superintelligence.

YANN LECUN: Our basic behaviors, as humans, are driven by our basal ganglia basically. The base of our brain, that’s where human nature is hardwired. That’s what drives a lot of our basic behaviors. Then our brain on top of this makes our behavior serve those drives with intelligent, hopefully intelligent, actions, but our basic drives are driven by this hardwired basal ganglia. That’s what computes whether we are happy or not, whether what we do is going to make us happy or not. It drives all of our behavior. We need this for intelligent machines. The fact that an intelligent machine will be autonomous means that it will have to have this kind of hardwired piece in its brain that drives its behavior. The big question is, how do you build it in such a way that those basic drives are aligned with human values? It’s going to be probably very difficult to hardwire this by hand. We’re going to be able to hardwire some very basic behavior to make sure that robots are safe. For example, if you have a knife in your hand, don’t flail it around if there are humans around, very basic things like this. There are probably thousands of rules like this that we can’t really implement easily. What we’re going to have to do is train those machines to, again, distinguish good from evil, behave in society, and not injure people.

TIM URBAN: Yeah, I hear people say it’s the artificial superintelligence, which is general intelligence once it’s way better than we are, that is the last invention we’ll ever make, because if it’s doing what we want, then all the things that we think are hard… It’s like a monkey hitting a padlock forever, and a human walks in, looks at the instructions, and in just one second opens it. All these things, poverty, climate change, disease, even morality, would be child’s play to something at that level of intelligence. It’s this utopia that we could be in if we could pull it off. So, you wouldn’t have to invent anything in that world because it invents everything for you. The other scenario is that it’s… I don’t hear a lot of experts talking about Terminator, evil robots, that’s anthropomorphizing. It’s the last invention we’ll ever make then, ’cause extinct species don’t invent things. The stakes are monumentally high, and this is what you just kind of touched on.

TIM URBAN: We only have a few minutes left. I really want to hear what you guys have to say about… I feel like we woke up in the middle of a thriller movie, in the climax of this thriller movie, but it’s just moving slowly in our minds so we don’t see what’s happening, and it’s a choose-your-own-adventure, choose-your-own-ending. How can we nudge this in the right direction?

MAX TEGMARK: Yeah. If you take a big step back and look at it after 13.8 billion years of cosmic history, here we are: we figured out how to replace most of our muscle work with machines. That was the industrial revolution. Now, we’re figuring out how to replace our mental work with machines. Eventually, that’s going to be AGI and superintelligence. So, how can we make it good? I think Yann mentioned the key challenge there, making sure that its goals are aligned with ours. It doesn’t have to be bad news to be surrounded by more intelligent entities, because we were all in that situation when we were two years old, with mommy and daddy. It worked out for us because their goals were aligned with ours.

MAX TEGMARK: How can we ensure that this will happen with AGI? Well, AI safety research is the answer. We’re investing billions of dollars now into making AI more powerful, but we also have to invest in developing the wisdom needed to keep this AI beneficial. For example, applicable to what you said, Yann, I think we have to figure out how to make machines understand our goals, adopt our goals, and retain our goals as they get smarter. All of those are really tough. If you tell your future self-driving Uber to take you to JFK as fast as possible and you get there covered in vomit and chased by helicopters and say, “No, no, no, no. This isn’t what I asked for,” and it goes, “That’s exactly what you asked for,” then you appreciate how hard it is to make machines understand our real goals.

MAX TEGMARK: Raise your hand if you have kids. Then, you know how big the difference is between having them understand your goals and actually adopting your goals, doing what you want.

TIM URBAN:     Also, who’s the parent deciding what the goals are?

MAX TEGMARK: Well, in this case-

TIM URBAN:     ISIS thinks it’s doing good. It does.

MAX TEGMARK: Yeah. We put a lot of effort into raising our kids. We need to put even more effort into raising humanity’s proverbial kids if we ever develop machines that are more powerful than us.

YANN LECUN: I actually disagree with this.

PETER ULRIC TSE: Well, okay.

TIM URBAN:     Let’s go down the line here.

PETER ULRIC TSE: Some of the changes that will have to happen will not only be on the AI side but on the cultural side, the transformation of our cultures. For example, any technology can be used for good or evil. A hammer can kill somebody or build a house. An airplane can transport people or bomb people. This is also true of AI, but the ethical systems that we have inherited from the past are not sufficient to deal with this. 2,000 years ago there were ten bad things you could do, and they said, “Okay, God said don’t sleep with his wife and don’t steal his stuff,” and so forth.

PETER ULRIC TSE: Commandment number 853,211: thou shalt not implant bioluminescent protein alleles from fireflies into tomatoes, no glow-in-the-dark tomatoes. Thou shalt not raise embryos for their dopaminergic neurons to implant into Parkinson’s patients. Technology has driven… There are now infinitely many things that are bad or harmful, so we need to come up with a new ethical framework for figuring out the right course of action in these infinitely many cases. I would say a first step would be thinking that what is good is that which fosters life, especially human life, but also life in general. That which is harmful to life is not good. That way we can confront lots of things and try to think about not only what can we do, but what should we do.

MAX TEGMARK: I think we’re in a fortunate situation actually where pretty much everything you can do that will increase the chances of superintelligence or AGI going well, that kind of safety research, actually has as its first baby step something which is already useful in the short term, like better cybersecurity research so we don’t get hacked all the time, for example. Let’s do those things better, ’cause I think we’re actually pathetically flippant with things like that now, and who’s going to trust your AGI if it can get hacked?

YANN LECUN: We’re going to get very non-powerful AGI before we get very powerful AGIs. Our first AGI will have the autonomy and the intelligence level of a rat, if that. Okay. I would consider it a major success in my career if, by the end of my career, which is coming fast, we have a machine that has the same level of common sense as a rat, or let’s say a cat. The cat has 700 million neurons. We don’t have the technology for this yet. We don’t have the science for it. Once we figure out the design of an intelligent, autonomous system, it will have the intelligence of a cat or a rat. It’s not going to take over the world. With this, we can experiment to figure out how we build into it the fact that it should behave in society and not kill everything around it.

PETER ULRIC TSE: Let me just point out that …

SUSAN SCHNEIDER: The thing that …

PETER ULRIC TSE: I’m sorry. Go ahead.

SUSAN SCHNEIDER: Oh, no. Go ahead.

PETER ULRIC TSE: Okay, thanks. Coming out of neuroscience, we just have really basic, fundamental questions that we don’t know the answer to yet. Science is all about what we don’t know, so we should just put this on the table. One of these is: what is the neural basis of consciousness? Another is: what is the neural code? The kind of neural networks that Yann has created are rooted in a view of the neural code involving changing weights.

PETER ULRIC TSE: In recent years, people have thought, “Okay, that’s surely an important part of the puzzle, but maybe there are other parts of the puzzle.” It’s not simply about what’s connected with what at what level of connectivity, which is what underlies connectomics. Rather than viewing the brain as a highway system of different connections, it’s more like a train track system where there are constant, sudden switches. This piece of track can be part of an epi-connectivity between Boston and San Diego or an epi-connectivity between Boston and San Francisco, depending on these switches.

PETER ULRIC TSE: Maybe the neural code is actually a very dynamic neural code with these rapid synaptic weight changes. That’s one direction. More recently, some people have argued, and if this turns out to be true, it will be revolutionary, that memories and information in general are not only stored in synaptic weights but actually inside the cell. There’s some really incredible work done by Tonegawa at MIT and, I think, David Glanzman at UCLA, which I think has convincingly shown that synaptic weights might be the path to accessing the information, but the actual information might lie inside the cell. Glanzman says it’s patterns of methylation on DNA. Now, that’s really radical. He’s the only one saying that, but if it’s true, it will change everything. We have so far to go in understanding the brain, and present AI is based upon a metaphor of neural nets as the brain was understood 10 or 20 years ago, but real brain science is changing very fast. My guess is that once we crack the neural code, it will be as momentous for our society as the cracking of the genetic code.
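For readers unfamiliar with the “changing weights” picture of learning that Tse is contrasting with intracellular storage, here is a minimal, purely illustrative sketch of how an artificial neural network stores what it learns: a single artificial neuron trained by gradient descent, where everything learned ends up encoded in a handful of connection weights. The task (learning logical OR) and all numbers are invented for illustration.

import math

def sigmoid(x):
    # Squashing function for the artificial neuron's output.
    return 1.0 / (1.0 + math.exp(-x))

# One artificial neuron with two inputs; toy training data for logical OR.
weights = [0.0, 0.0]
bias = 0.0
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
learning_rate = 0.5

for epoch in range(2000):
    for inputs, target in data:
        output = sigmoid(weights[0] * inputs[0] + weights[1] * inputs[1] + bias)
        error = output - target
        grad = error * output * (1 - output)  # gradient of the squared error
        # Learning is nothing but nudging the weights; in this picture of the
        # neural code, the weights are the only place information is stored.
        weights[0] -= learning_rate * grad * inputs[0]
        weights[1] -= learning_rate * grad * inputs[1]
        bias -= learning_rate * grad

print("learned weights:", weights, "bias:", bias)

Everything the trained neuron “remembers” about OR is readable from those three numbers; Tse’s point is that real neurons may additionally store information inside the cell, which this style of model does not capture.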

TIM URBAN:     All right, Susan.

SUSAN SCHNEIDER: Very interesting.

TIM URBAN:     I want to hear it … We have a couple minutes left. Susan, how do we make it good in the future?

SUSAN SCHNEIDER: Well, we could have an AI that becomes AGI and then rapidly evolves into superintelligence. Whether it’s based on the neural code in the brain or on something not at all brain-like, it could very quickly change its own architecture. Then I wonder how we’ll be able to stay abreast of it. We have to hit the ground running on AI safety. I wholeheartedly agree with Max. I also wanted to add something which has not been discussed, which is that, as a society, we need to think about this idea of merging with AI. Elon Musk has recently suggested that in order for us to keep up with technological unemployment and to deal with the threat of superintelligence, we ourselves need to bring AI into the brain. I think that as a culture we need to start discussing that the future with AI will not be a world that looks like the Jetsons, where there are unenhanced humans surrounded by all this fancy robotic equipment. The AI will change us as well. I just want to leave you with that thought.

TIM URBAN:     I like that thought. Thank you. Thank you.
