
To Be or Not to Be Bionic: On Immortality and Superhumanism

“We can rebuild him. We have the technology,” began the opening sequence of the hugely popular ’70s TV show “The Six Million Dollar Man.” Forty-five years later, how close are we, in reality, to that sci-fi fantasy? More thornily, now that artificial intelligence may soon surpass human intelligence, and the merging of human with machine is potentially on the table, what will it then mean to “be human”? Join us for an important discussion with scientists, technologists and ethicists about the path toward superhumanism and the quest for immortality.


Cristina: Awesome to see you here. I’m so excited to be back again at the World Science Festival, and today, we’re going to talk about robots and AI. And you know, the first use of the word robot was in a play about mechanical men that were built to work on factory assembly lines. In the play, the robots eventually rebel against their human masters. The play was called R.U.R., Rossum’s Universal Robots, and it was written by a Czech playwright, Karel Capek, in 1921. The word robot itself comes from the Czech word meaning slave.

Cristina: Today, robots entertain us in movies, like when we see C-3PO or R2-D2 and many others. Robots help perform surgeries to heal us and are even part of prostheses. They help explore far-off places, like going to Mars. Everybody remembers the Mars rovers there. They do reconnaissance for us in battle areas and go to other dangerous places. They also work in factories. In fact, some people worry about them taking our jobs or that we’ll all become too technologically dependent someday, in effect slaves to the robots.

Cristina: In this fascinating session, which is called “To Be or Not to Be Bionic: On Immortality and Superhumanism,” we’re going to explore these intersections between humans and robots today and what we might all expect in the future. And I’m going to enjoy welcoming our speakers to the stage.

Cristina: Our first participant is a professor of engineering and data science, as well as director of the Creative Machines Lab at Columbia University. He’s garnered a reputation for challenging the conventional view of robotics by designing self-aware and self-replicating robots. Please welcome Hod Lipson. Welcome, Hod.

Hod Lipson: Thank you. A pleasure to be here.

Cristina: Our next guest is assistant professor at Cornell Tech in the information science program. As executive director of Interaction Design Research at Stanford University, she innovated systems to understand how humans and robots can interact most seamlessly. She’s new to New York City, so please give a warm welcome to Wendy Ju.

Wendy Ju: Thanks for having me. Thanks.

Cristina: Next is Arthur Zitrin Professor of Bioethics and director of the Center for Bioethics at New York University’s College of Public Health. His research uses philosophic principles to study and examine the ramifications of emerging biomedical innovations. Please welcome S. Matthew Liao. Good to see you.

Matthew Liao: Good to see you.

Cristina: She’s a film director known for her pioneering work in virtual reality. She was Google’s first filmmaker and worked with a team of engineers to help develop and field test the 16-camera live action virtual reality rig and workflow now known as Google Jump. Please welcome Jessica Brillhart.

Cristina: Our final guest is a professor doing physics and AI research at MIT who advocates for the positive use of technology as president of the Future of Life Institute. He’s the author of more than 200 technical papers and the New York Times bestseller Life 3.0: Being Human in the Age of Artificial Intelligence, a work that Stephen Hawking said would help a reader join “the most important conversation of our time.” Please welcome Max Tegmark.

Cristina: So we’re going to take on some heady and inspiring topics on stage with these wonderful panelists today. Max, you were last on the stage, but you’re the first I’m going to pick on with questions. We’ve just talked about some fantastic ideas, robots and superhumanism and so on, but can you give us the lay of the land today? Where are robots and AI maybe dominating? And where is it that humans still have the upper hand?

Max Tegmark: I think, to take a step back, since this is the World Science Festival, it’s interesting to see that, after 13.8 billion years of cosmic history and a lot of human history, we used to think of basically everything that biological organisms did as something kind of mysterious, that was probably beyond science, even though we could understand how rocks moved and stuff. And then, gradually, science figured out things about how muscles and so on worked, and we built better muscles in the Industrial Revolution and started outsourcing that to machines, but the mind still seemed very mysterious.

Max Tegmark: And gradually artificial intelligence research, of course, has been chipping away at that, too. Now we get our posterior kicked at chess and at Go and soon in driving and in so many other things, and although we’re very far away from being able to do everything that humans can do with our minds, most AI researchers think that we probably are going to get there in a matter of decades. This is the holy grail known as artificial general intelligence. And I think we’ll talk a lot about whether that’ll happen, and if so, what we can do to make it good.

Max Tegmark: And in the meantime, what we have right now is a situation where we have very good narrow artificial intelligence that’s much better than us at certain narrow tasks, like multiplying numbers fast and, as I said, playing certain games and dealing with massive databases, and an increasingly long list of tasks, and we all feel how this is beginning to transform our daily lives, everything from GPS navigation to all the other interactions you have with computers. And I think it’s easy to underestimate the extent of it, actually. Because, as soon as something works, machines can do it, we stop calling it AI. We sort of define artificial intelligence flippantly as that which machines still cannot do.

Max Tegmark: I think, finally, looking into the next five years, 10 years, we’re going to see huge progress, particularly in the areas where you can make a lot of money off of it, so we’re going to see real revolution in healthcare, much better diagnostics, also more affordable, so it can spread throughout the world. Obviously in the transportation sector. But really, throughout much of the economy, I think. In these narrower areas.

Cristina: Thanks a lot, Max. It’s really interesting to hear both about the progress of the human mind in understanding these sorts of artificial minds and where that may take us. Hod, this sort of brings me to you. We’ve talked a little bit about the intellectual sphere of general artificial intelligence and so on, but you work a lot on robot bodies. Can you tell us a little bit more about these sorts of physical manifestations?

Hod Lipson: Yes. I think, if you try to look at sort of the long term, what are the next things that AI and robots have a hard time with? Some of this is already bubbling in academia; it hasn’t quite made it into industry. There are certain areas. One of them, I think, is this whole area of creativity. This is one of those areas where we think we humans really have a hold, that machines can analyze data but can’t really come up with ideas. That’s one of the things we’re seeing already changing: machines that can come up with ideas, that can generate creative things, anywhere from engineering designs to art, and everything in between. So creativity is one big area. I still hear people saying, “As long as you’re creative, AI cannot touch you, and we’re safe.” And I think we’re going to have to come to terms with that element as well.

Hod Lipson: But then I think the next area where AI will have a hard time is the physical body. So I like the analogy of H.G. Wells when he talked in the War of the Worlds about the aliens, that they have not earned their place in the physical world. AI has also not earned its place in the physical world; it has not crawled through mud and rain and sand as we have. AI can live in a computer, but if you look at the state of the art of physical robotics, we are ages behind. Things like batteries and materials science aren’t growing exponentially, as software is. So that’s the one area where I think we will be safe for a while. Computers will drive your cars pretty soon, but when the car breaks down, it’s going to be a human crawling around, fixing it. Because computers can’t do that yet. So the physical body is important.

Hod Lipson: If I look beyond that, the big place where the mind and the body come together is sentience and the ability of a machine to have a concept of self, emotions, real emotions, things like that, and I think that’s where things get really interesting.

Cristina: And this kind of brings me to you, Wendy. I was thinking about something that I guess Max and Hod have both touched on, which is robots and jobs. Now, what are the sorts of jobs that might be automated? And what might human responses to that be?

Wendy Ju: That’s such an interesting question. About five years ago, Leila Takayama and I did a study where we went through the US Bureau of Labor Statistics job descriptions and actually had people rate the degree to which they thought a robot could or should do these jobs. And contrary to what I think anyone wants for their own job, people really wanted, in general, for robots to do things instead of people, rather than with people. So for your own job, you might be willing to work with a robot. For other people’s jobs, it would be great if a robot would just do that.

Wendy Ju: But I wonder now if some of that logic might be changing. I think what we’ve seen in a lot of the recent developments is that AI and robotics work well, but they have incredible limitations, and you could say the same about people. But a lot of what robots can do with people who really know how to work with them is amazing, and definitely that’s the case with AI. The most successful chess playing is happening with these, I guess they call them centaurs: people using AI as part of the strategy to win a chess game.

Wendy Ju: And so I think really what we should all be looking for is to think about how we could use these technologies to change our own jobs, instead of worrying about the replacement question. Because I think the jobs that we have will all evolve to a place where they will be almost unrecognizable in five years.

Cristina: So we’ve talked a little bit about robots and people. How about robots with people? Matthew, I’m looking at you. What about this idea of bionic modifications for humans? I know, robots, a lot of times when we’re watching… I know I love to watch the robot dog running around. But they sometimes seem a little clumsy. If human prospects for jobs dwindle or change, what should we do there?

Matthew Liao: So there’s actually, I think, another interesting area: robots in people. Right now, there are already a lot of devices, things like brain/computer interfaces. There’s something called deep brain stimulation, where basically you put a thin electrode into your brain, connected to a battery pack, and about 100,000 people today already have something like deep brain stimulation for things like Parkinson’s disease, for epilepsy, for depression. But right now, deep brain stimulation is kind of dumb. It’s not a smart device, in that it’s an open system where you kind of have to modulate the electricity manually. But you can imagine that, in five years… right now, DARPA is trying to make it into a closed system that uses artificial intelligence, so that it’ll automatically monitor your brain activities and then modulate them in real time. And that’s going to raise all sorts of interesting ethical issues.
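The open-loop versus closed-loop distinction Liao draws can be sketched in a few lines. A minimal sketch in Python, with everything invented for illustration (the signal, the amplitudes, the stimulator interface); real closed-loop DBS work involves far more careful signal processing and safety limits:

```python
import random

def read_electrode():
    """Stand-in for a sensed brain signal, e.g. tremor-band power in [0, 1]."""
    return random.random()

def stimulate(amplitude_ma):
    print(f"stimulating at {amplitude_ma:.2f} mA")

def open_loop(steps=3):
    # Open loop: one fixed setting, changed only when a clinician reprograms it.
    for _ in range(steps):
        stimulate(2.0)

def closed_loop(steps=3):
    # Closed loop: sense, estimate symptom severity, modulate in real time.
    # A learned model would sit between sensing and stimulation here.
    for _ in range(steps):
        severity = read_electrode()
        stimulate(2.0 * severity)

open_loop()
closed_loop()
```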

Matthew Liao: But just on the near term… I’m an ethicist. I’m a philosopher. So there are a lot of near-term ethical issues that we should be thinking about. Like how do we program ethics into self-driving cars, for example? Accidents are going to happen. Suppose the car is headed toward five people, and it’s going to kill those five people, but it can drive off the road instead, killing you. Would you buy a car like that? One that would do that? And who decides? Who makes that kind of programming decision?

Matthew Liao: Another near-term issue is going to be something like… There’s this idea of “garbage in, garbage out,” so the AI’s only going to be as good as the data you give it. And so if you have a lot of biased data, then that’s going to give you very biased results. And that’s showing up, for example, already. So some people are trying to use artificial intelligence for sentencing, to make sentencing decisions. But if you have biased data, that could give you very biased sentencing decisions.
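Liao’s “garbage in, garbage out” point can be made concrete with a toy sketch. The data here are entirely invented, and real risk-assessment tools (and audits of them) are far more complex; the point is only that a model fit to biased labels hands the bias back as a “prediction”:

```python
# Invented historical records: (prior_offenses, group, labeled_high_risk).
# Group B was over-labeled high risk at the same prior_offenses: the bias.
records = [
    (1, "A", 0), (2, "A", 0), (3, "A", 1),
    (1, "B", 1), (2, "B", 1), (3, "B", 1),
]

def predicted_risk(group):
    """A naive 'model': the historical high-risk rate for the group."""
    labels = [y for _, g, y in records if g == group]
    return sum(labels) / len(labels)

print("group A:", predicted_risk("A"))  # 0.33
print("group B:", predicted_risk("B"))  # 1.00 -> the input bias comes back out
```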

Cristina: So, Jessica, I’d like to come to you to follow up on these.

Jessica B.: Oh, okay.

Cristina: So we’ve talked a lot about… Well, no, you’ll see in a second. Trust me.

Jessica B.: Okay. I’m trying to think-

Cristina: We’ve talked a lot about the human interactions. I can’t think of a more human response than how we respond in artistic ways and other ways. And as we’re talking about how work can extend human experience… you work in VR and extending human sensory experiences that way. What are some of the sort of uniquely human characteristics you bring into your work as you’re doing that?

Jessica B.: Oh. I mean I think VR is very good at challenging human perception, probably more than… I mean, it can extend it, but I think the more interesting work in the virtual space, and in the augmented reality space, and in the AI space, is how it challenges us in the notion of creativity. It challenges us in the notions of perception. It, for me at least, has made me question the very small sliver in which we can experience the universe and how limited that is. It can be both celebrated and used in narrative ways in the virtual space.

Jessica B.: I find that the work that I’ve been able to do in breaking that more into what is happening artistically with neural networks and artificial systems has been really fascinating to me. Because it extends that kind of inquiry in the virtual reality space for sure.

Cristina: Is there anything that it’s brought to you that you would suggest to the audience, who hasn’t maybe experienced it, that you wouldn’t have done any other way?

Jessica B.: Yeah. I mean, we talk a lot about branching narratives, this idea of the mass amounts of permutations we can make with where a story could go. Things like that, where every single choice that we make can have an actual result that the system then understands and creates and presents to you. The idea of a character that is driven by an AI is also very fascinating, where the emotions and the intent and the friendship are all based upon how you treat it. Him or her. I say “it,” but again that might change. And I think also… The company that I started, actually, after working at Google. A lot of what I think about is not so much the idea of dualism or the binary, especially given AI’s evolution based upon getting away from binaries, really. It’s interesting and fascinating, the idea that all of these technologies, VR, AI, AR, will all come together and create kind of this new immersive world, this new reality that exists for us, that both, again, extends and challenges our abilities to perceive what it means to be human.

Cristina: So we’ve just thrown a lot of really fascinating ideas in the air, and I wonder if anybody in the audience would like to ask a question. And if you would, just say who you are, and if your question is directed to somebody on the panel in specific, please say that, too. I see one right in the front here.

Audience member: Well, I don’t want to put a dark cloud on things. I think I know where Max comes down on this, so maybe other people would chime in. We seem to be really good, as humans, at doing some really cool stuff. But malevolence always ends up being present, no matter how good we want to be or seem to be. It seems like that’s the elephant in the room as far as AI goes. How do we create a world where all these great things like you’re talking about are going to be able to come to be, when in fact it seems to me that just preventing the emergence of some malevolent AI in the world is the biggest challenge we face?

Cristina: Max, do you want to [inaudible 00:16:56]. Oh, Hod?

Hod Lipson: Yeah, I’m happy to take a stab. I think the question is also not what AI will do to people but what people will do to people using AI, really. That’s the shorter-term, much more urgent question. If we get to the point where AI is so clever that it doesn’t need us anymore, that’s so far off that we’d be lucky to get there. The intermediate phase is where people have this incredible power. So I think you’re absolutely right. There are pros and cons. I definitely believe that the benefits far outweigh the risks, but there are risks, and we should all be aware of them. In general, humankind has been able to use technologies for the better rather than for the worse. There’s always bad stuff happening, but overall we’re getting better at these things. And at least I’m optimistic that we’ll be able to use this technology positively.

Max Tegmark: Let me just add some optimism to this and cheer you up a little. Because I think, of course, you’re right that, as we make technology ever more powerful, it becomes ever more important that we win the wisdom race, the race between the power of the technology and the wisdom with which we manage it. You would never walk into a New York kindergarten with a box of hand grenades and say, “Hey, kids! Play with this.” Because the wisdom just really isn’t there. And sometimes when I see how our world politicians are handling hydrogen bombs, I feel we’ve sort of repeated that kindergarten experiment there. And I think we should try to do better with AI.

Max Tegmark: Some of my colleagues are kind of gloomy and think there’s some sort of law of nature that any technology that can be developed for military purposes or harming people or whatever will be developed and there’s nothing you can do about it. I think that’s manifestly wrong, and we have evidence for it. Look at biology. Biology today you mainly associate with new cures, helping people live healthier, longer lives. Not with bioweapons, right? Why is that? That’s a technology we chose as a species not to create some sort of crazy arms race around. Why? Because a bunch of scientists in the 1970s pushed really hard to stigmatize this and got an international ban around this, which has made biology the force for good that it is now.

Max Tegmark: And I’m quite hopeful that, if we can similarly stigmatize bad uses of AI, from Cambridge Analytica to lethal autonomous weapons, we can end up in a similar future, where one day we look back, and people associate AI with all sorts of ways of making the world better, just as biology is today. But we’re going to have to fight for it and work hard for it, just like the biologists did. Because it’s not going to happen by default.

Cristina: Wendy, you wanted to add something.

Wendy Ju: Yeah. I think there’s two things to worry about. One thing is malevolent actors, and the second thing is just unintended consequences. And actually, for both things, something that’s very important is kind of this ability to simulate worlds and run small-scale tests and games. A lot of times, people look down on games as something that’s a waste of time or something you do for diversion, but it’s actually really important in situations like these, where you don’t actually know what the contingent behavior will result in in a larger emergent space, to be able to run small-scale experiments and actually assign some people the job of playing the bad actors and then seeing how things unfold. And I think we can’t just tell everyone, “You have to behave ethically! Let’s just do positive things. We’ll put down some regulations around it,” and hope for the best.

Wendy Ju: I actually think it’s really important to be able to anticipate, “What are the negative things?” What some of the motivations for some of those things will be. So that we can actually stay a step ahead. And that also will help in situations where there’s no malevolent actors at all. That happens all the time, where everyone is acting totally out of the goodness of their hearts, but still terrible things happen.

Jessica B.: Yeah, I think it’s important to run narrative through a lot of these systems. I mean, again, it’s the thing where people don’t like to talk about feelings or emotions or stories or any of that stuff, but what happens is that you see the permutations of where this stuff could go. You understand more fully and more richly where things could go. And I think we do ourselves a grave disservice if we don’t start doing that more now.

Max Tegmark: I just want to agree with what both of you said here. I think it’s so important to think through what can go wrong. And that is so often misunderstood as somehow scaremongering. And you see so many silly news articles accusing something like this of scaremongering for talking like this. But this is just what we at MIT call safety engineering, right? Like the moon mission. NASA thought through what could go wrong if you put some astronauts on explosive fuel tanks and launched them into space. There was a lot that could go wrong. That wasn’t scaremongering. That was doing exactly what you were suggesting, thinking through what can go wrong to make sure it can go right. And I think we owe it to ourselves to think through what can go wrong with AI exactly, so that it can become the force for good in the world that it has the potential to be.

Matthew Liao: I think Max is exactly right. We need to be techno realists about these situations and realize the risks, as well as the benefits. And making sure that… I do think that policy and regulation, making sure that that’s in place, and globally as well, because I think this technology has no borders. And so it’s something that we have to… It’s our societies, our worlds, our future. We have to have a conversation about these issues.

Cristina: So I want to keep on this. I love this topic. We talked about us versus them. We talked about us and them. We talked about us versus us using them. I think it would be good to talk a little bit more, as we turn to our second act, about creating robots in our own image, and maybe explore that more. And Wendy, I’d like to come to you to talk about your experiments with automata that are like household objects or furniture, robots that blend in, so that you can see how humans respond to them. What have you found out about that? What are you seeing?

Wendy Ju: Yeah. So right now, at this moment in time, I’m probably most famous for the work I’m doing on how people interact with autonomous cars. But that actually comes from these experiments where we looked at how people interact with ottomans that come up to your feet and try to get you to put your feet on them, desk drawers where the desk drawer might actually know what you’re doing and offer you tools, garbage cans that drive around and collect garbage. And one of the things we find is that people feel very comfortable. There’s none of the novelty effect that you see around other robots, when people aren’t used to seeing the object in a space. They’re just like, “Oh, yeah. It’s a garbage can that’s moving.” And I think that the level of acceptance is very nice. And when we interview people, we’re like, “Do you know what that is?” They’re like, “It’s clearly a garbage can that’s moving. That’s all. It just is happening now.”

Wendy Ju: So, one, there’s not this kind of issue of trust or fear. People understand. But the thing that’s really interesting is that they have some instinctual model of intent. So one of the things that we saw in some of the videos of the experiments we ran was a kid waving a piece of garbage in front of the garbage can because he wanted the garbage can to be across the room. And so he was trying to bait the garbage can across the room. And we were just like, “Does the kid think the garbage can wants to eat the garbage?” And so, once we noticed that, we actually saw all these other moments in the videos where people would wave garbage and call it over, like it’s a dog treat. And so that’s a human-robot interaction. We often have this human model, where we think people interact with machines kind of like they’re people. And this is really different. Because nobody thinks the busboy wants to eat the garbage. But everyone goes to that model, that it’s like an animal. And they clearly know it’s not an animal. They know it’s a garbage can.

Wendy Ju: And so some of these things, like the logic that we all fall back on, those are some of the things that we’re trying to pull out. Because it’s so useful as designers to be able to use that when you’re designing how things can move around people so they’re safe.

Cristina: Yeah. I mean it’s one thing, I think, to engage with a garbage can that you are attributing human behavior to, but how about… You’ve seen people mapping human qualities onto robots and machines. With the work that you do in your lab, is that a problem?

Hod Lipson: I think it’s a double-edged sword. In one way, it’s very empowering for the robot. We’ve noticed in the lab that it’s enough to stick googly eyes onto an object, and people already see intelligence there, and emotions. It doesn’t take a lot, really, to fool the human psyche into thinking that inanimate objects are intelligent. And in one way, you can sort of exploit that, because machines invariably learn from interactions with the world and with humans, and to the extent that they can elicit that interaction, that sort of bootstraps the whole AI process. Because it’s all about collecting data and learning. And we know that even babies learn through interaction. A baby might not know why they’re smiling, but that smile elicits more behavior and more interactions. So I think we’re seeing more and more of that with physical objects.

Hod Lipson: And there’s something, I would argue, that is very magical about a physical object that is intelligent. Much more than an avatar on a screen or even in VR, to the extent we have VR today. When there is something physical out there that is with you in this world and is there and potentially senses the same things you sense, there’s a bonding that happens that makes that interaction ever so powerful.

Cristina: But this gets creepy at a certain point for people, right? There’s something called the uncanny valley. Talk to us about that.

Wendy Ju: Yeah. There’s something called the uncanny valley in robotics. The naïve sense is that the more humanlike something is, the better it would be, and what roboticists discovered is that, as you get closer to humanlike but not quite humanlike, it actually starts to be worse, much worse. People get very creeped out and feel really uncomfortable, and so this has been studied. We can kind of map it a little bit; it’s difficult to dimensionalize it. But the hypothesis for why this is is that, in our evolutionary history, there might be people who have psychological problems, or people who are sick and ill and don’t behave quite right. And sociologically, that’s a danger to everyone. So you’re very sensitive to things that are not quite normal. And so as the robots start to look more like us, we start to become really tuned in to all these fine differences, whereas if you can readily attribute some other identification, like, “It’s not a person. It’s a chair robot,” then you’re not worried about that maybe-it’s-a-sick-person question.

Cristina: How do you know how much human to put in?

Wendy Ju: That’s such a great question. I mean, I don’t know. My feeling is it’s an interesting research question, but from a pragmatic standpoint, it’s not necessary. I think a lot of times the argument for making humanlike robots is that, “Well, people know how to interact with people.” But people also know how to interact with hundreds of thousands of objects. There are many other leaping-off points that we could use.

Cristina: This raises some ethical questions surely, Matthew, about humans and robot interactions with the machines. Does it matter how humans treat the robots if they start to become more human?

Matthew Liao: Yeah. So in Japan, they have an aging population, and a lot of the elderly are now being taken care of by robots. And it’s becoming more so. And one of the issues they’re facing right now is that some of the elderly are getting attached to these robots, right? And so there’s this whole debate… Because they make the robots kind of cute, with smiley faces, and there are robot pet dogs, et cetera, et cetera. So some people are saying that we really need to make sure the robots are very different from humans, so that people can recognize that they’re just robots and don’t get attached to them. But I think Hod’s right that we’re actually very easily fooled, and especially once these robots get very sophisticated and can converse with you, et cetera, et cetera, you start to develop some attachment… They want to be taken care of by one robot rather than another robot. And that’s very… It becomes very human, very natural.

Matthew Liao: And so we need to think about those implications when we actually deploy robots into the real world for elderly care, et cetera, et cetera.

Cristina: I mean is it a bad thing necessarily if a person gets attached to it? I had stuffed animals when I was little. Is that a bad thing?

Matthew Liao: So I think it’s not so much a bad thing. I’m actually less worried about it. And part of the reason is that the elderly, especially the elderly population in these old-age homes, don’t have a lot of interactions with other people. And you might think that, if this can give them some sort of emotional comfort, that’s a net positive thing. You might think that, in addition, they shouldn’t just have that. They should also have more human interactions, et cetera, et cetera, so they can have human relationships as well, like more people visiting them and things like that. But if these robots can help them in that way, I think that’s a net positive thing.

Cristina: I start to wonder if it’s fair to the robots.

Matthew Liao: Right.

Cristina: Jessica, does this make sense to you? I’m looking at you watching everybody and wondering [crosstalk 00:30:51] of it.

Jessica B.: I’m fascinated. No, I mean I really liked your question about that. I’ve been reading a lot about gaming culture as well, and people say the same thing about gaming: “Well, if they’re obsessed with gaming, is that a bad thing? Is that ruining their brains?” And so on. And then you realize that the psychology around game addiction has everything to do with how bad the person’s actual reality is. It’s not because the game is alluring or the game is wrong. It’s because the reality sucks. So suddenly you’re like, “Okay, I don’t have a job. My boyfriend’s mad at me.” This isn’t true, by the way. Everything’s fine.

Jessica B.: It’s just bad times or whatever. People resort to games, or someone will resort to a game, because they feel like they’re accomplishing something, and they can get money, and they can make friends and all this stuff. And so there’s really just an issue with the real stuff that’s making the not-so-real, but maybe should be real-er, stuff harder for that person to leave.

Jessica B.: So I think, in the case of someone who’s elderly, who has a robot friend who’s taking care of her, the fact that she doesn’t have anyone who would take care of her otherwise, I think, makes it totally okay for that elderly woman to be completely infatuated with her robot friends. I don’t know. I find that relationship, kind of what was being said earlier, just the tension between what’s real and not real, and maybe there isn’t as much of a difference as we think.

Cristina: Yeah. It’s interesting. It’s a really interesting area. I wonder. I mean I’m sure, turning to you in the audience again, listening to these thoughts, you must have some questions. So I’d love to invite people who have questions on their mind to share them now. And do you still have your question? He does. Okay, can we come down to the front first? Thank you for being patient.

Young audience member: What happens when the robots take over and they know everything? And then you can’t teach them anymore?

Cristina: What a good question.

Matthew Liao: Great question.

Wendy Ju: That was the cutest pessimistic [crosstalk 00:32:47]. That was wonderful.

Cristina: A fantastic question.

Matthew Liao: That’s [crosstalk 00:32:50].

Wendy Ju: Yeah. There you go.

Max Tegmark: Yeah. I think instead of asking, “What will happen?” we should ask what we want to happen, right? Your parents know that one day you’re going to kind of, and your generation will kind of, take over from us, right? So they aren’t just asking, “What’s going to happen when you grow up and take over?” They’re trying to educate you well and make sure that you want to be nice to them when you’re stronger than them and make sure that you have their best ideals and values and so on. So if we ever do, as a species, create robots that have the ability to take over, I think we’d better educate them well, too, just like I’m sure your parents are doing with you.

Wendy Ju: It’s a great question.

Cristina: Yeah. I’d like to maybe get onto our next act, if you all don’t mind, which is… and I’m going to stay with you, Jessica, because I want to look a little bit beyond human perception. We started to talk a little bit about your work, and you did a piece called DeepDream VR, which starts to get at the heart of the matter of machine creativity. And from an engineering perspective, it would be great to hear how the piece was created and the connection to recent advances in neural networks, which you touched on before, and I think we’re going to also get a chance to see it.

Jessica B.: Oh, yeah. This will be interesting for me to talk about as you watch it. So again, I was working with a VR team and filming a lot of footage to test the rigs out, and I worked with this engineer, Doug Fritz, to have a system essentially dream on top of these VR images. DeepDream is, I would say, a neural net that’s been trained on a lot of images and then told to basically extrapolate what it sees within these images, and so sometimes they’re puppy slugs, sometimes they’re eyeballs, sometimes they’re houses and cars and so on. And the reason we did this was because we just wanted to see what would happen, and some of the results were actually pretty neat. And some were a little frightening.
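For the curious, the core of DeepDream is only a few lines: run an image through a trained network, then do gradient ascent on the image itself to amplify whatever an intermediate layer already responds to. A minimal sketch assuming PyTorch and torchvision; the layer, step size, and filenames are arbitrary choices, and the pipeline Brillhart describes (including its stereo VR handling) was considerably more involved:

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.googlenet(weights="IMAGENET1K_V1").eval()

# Capture the activations of one intermediate layer with a forward hook.
acts = {}
model.inception4c.register_forward_hook(lambda m, i, o: acts.update(out=o))

img = T.ToTensor()(Image.open("frame.jpg").convert("RGB")).unsqueeze(0)
img.requires_grad_(True)

for _ in range(20):                        # gradient ascent on the image
    model(img)
    acts["out"].norm().backward()          # "make this layer fire harder"
    with torch.no_grad():
        img += 0.01 * img.grad / (img.grad.abs().mean() + 1e-8)
        img.grad.zero_()
        img.clamp_(0.0, 1.0)               # keep valid pixel values

T.ToPILImage()(img.detach().squeeze(0)).save("dreamed.jpg")
```

The puppy slugs and eyeballs come from the training data: a network trained on ImageNet has seen a great many dogs and eyes, so those are the features it most readily amplifies.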

Jessica B.: But really, what it was was kind of the first way that we brought artistic systems into the virtual reality space. It takes a lot of processing power. It took us a long time to even get it to be in stereo, which means that your left and right eyes see it a little bit differently, so you see depth in these images as well. This one in particular was filmed at Arecibo, and this is the tram that you take up to the main platform. I guess it’s the second largest radio telescope in the world now. So you’ve got snakes, snakes on Arecibo, that’s kind of… And I think you’re going to see some kind of interesting stuff. But basically… I wish I knew more about the engineering of it. I have to give Doug the credit for that, but I will say that we were basically thinking about how creativity isn’t inherently human, and then, as humans, how we can work with machines to create.

Jessica B.: And from an output standpoint, it really kind of gave us these ideas like, “Well, maybe there are things within these images we just don’t see with our eyes or hear with our ears,” and how we could potentially use artificial systems to help enhance that.

Cristina: What’s the next question you have that you might like to explore using these methods?

Jessica B.: I think the… Yeah. I mean, there are so many amazing things happening in the art-and-AI space. Mario Klingemann is amazing. Basically, he’s been training a system to recreate faces, which again is very kind of… But he got the eyes right. So you see… And now you can actually film yourself, and the system extrapolates on top of your face and adds, like, Picasso to you and then pulls from different Renaissance painters, and suddenly, you’re everyone but no one at the same time. It’s really fascinating. It also brings up the whole idea of, like, “Could you do that with a celebrity? Could you do that with someone who’s gone? Should you?” This whole kind of questioning of that as well. Another guy has actually taken the way your left and your right eyes see and switched them in a VR experience. So now suddenly you’re seeing things through different eyes. Kind of like messing with IPD, the distance between your pupils, and things like that. It’s a wild world.

Jessica B.: But I think I’m more interested in how to apply that functionally in some way? In a way that’s a little bit more controlled. We have the capacity to futz with parameters in a way that changes something that’s a puppy slug into a pagoda. So, knowing that, what do we do? And I think I’m really fascinated by where that can go.

Cristina: That’s really wild.

Jessica B.: Yeah.

Cristina: Hod, in your lab, you’ve had robots do some exploratory creative things like this, even painting and so on. Could we hear a little bit more about this?

Hod Lipson: Yeah. So we have a robot that paints, oil on canvas. And actually that was motivated by this brief episode in I, Robot, where the detective goes to the robot and says, “Can a robot turn a canvas into a painting?” And I said, “Well, let’s try that.” And so the robot now paints on its own. Initially, it would see something through a live camera and paint it, and it makes real oil on canvas. Now, it’s doing things like painting faces of people that… It just won an art competition painting flowers that don’t exist, just out of its own imagination. And right now, it is actually walking around somewhere in the world through a street view, having its own experiences, and I’ll come home and see what it’s painted. It’s really-

Female: That is cool.

Hod Lipson: It’s really… But I think generally it pushes our notion of art, and I can say that… I’ve spoken about this robot a couple of times, and most people by now accept this idea that a machine can create art, but the real controversial question is, “Is it an artist?” And what I’m really excited about is that there’s no doubt AI is moving towards a place where it can see the world in ways we can’t. It can see the world in a broader swathe of the spectrum. It can see the world not with two eyes but with 20 eyes. It can see the world in the dark, with radar, with sensors we don’t even have. So what would a machine like that paint? How does it translate that experience? And for me it’s like going to an alien species and seeing what their art looks like. I can’t wait to see what this machine will create when it sees the world in ways I can’t and never will be able to.

Hod Lipson: So to me it’s a new time for art, and it’s not just about tweaking some parameters and getting me to create better art or different art. It’s really to let the machine create its own thing and see what it comes up with. And maybe in the end it’s going to be like explaining Shakespeare to a dog. We won’t be able to understand it because we won’t be able to have those sensations, and it will be hopeless, but it’s an expedition worth taking.

Cristina: We’ll be the dog in this case.

Hod Lipson: Yes, in that analogy. Yeah.

Cristina: Tell me, I love that the aliens might be among us and they’re some of our creations. How might machine consciousness or perception differ, then, from humans? Can we speculate on that?

Hod Lipson: I think… There’s a big debate on what consciousness is, and nobody knows. We cloak it with a lot of words, but nobody really has an idea of what’s behind it. But my model of it is that a machine is sort of self-aware, to some degree, when it can model itself. When it has the ability to simulate itself. When it has the ability to take all that AI we see today that is modeling the world and turn it inward, so the AI begins to model itself. This is what we’re trying to do in the lab. We have very simple robots that model themselves. And I would argue against, like you said, a sort of binary thing. Self-consciousness or self-awareness is a very continuous spectrum, and you can have very, very simple machines that are, in a very crude way, self-aware. And that’s what we’re seeing.

Hod Lipson: And again, when you combine that with creativity, you really get to see how the machine might see itself through its own eyes, and we’ll see where that takes us.
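A minimal sketch of what “a robot that models itself” can mean in practice, in the spirit of the self-modeling work Lipson describes but with every size and name invented here: the robot learns a forward model of its own body from its own experience, then uses that model to simulate itself without moving:

```python
import torch
import torch.nn as nn

STATE, ACTION = 4, 2  # hypothetical sizes: joint angles, motor commands

# The self-model: f(state, action) -> predicted next state.
self_model = nn.Sequential(
    nn.Linear(STATE + ACTION, 64), nn.ReLU(),
    nn.Linear(64, STATE),
)
opt = torch.optim.Adam(self_model.parameters(), lr=1e-3)

def train_step(state, action, next_state):
    """One supervised step on the robot's own (state, action, outcome) data."""
    pred = self_model(torch.cat([state, action], dim=-1))
    loss = nn.functional.mse_loss(pred, next_state)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def imagine(state, actions):
    """Self-simulation: roll the model forward to preview an action
    sequence without moving the physical body at all."""
    with torch.no_grad():
        for a in actions:
            state = self_model(torch.cat([state, a], dim=-1))
    return state
```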

Wendy Ju: Can I pick up on that a little bit?

Cristina: Yeah. Go ahead.

Wendy Ju: I mean I feel like there’s a perspective which really focuses on the interior life of people. And I actually argue maybe some of the ways that… well, we don’t really know what’s going on in each other’s head, but we give each other the benefit of the doubt. And I think a lot of that comes from the way that we end up interacting, and if you feel like the interactions are going well, you characterize that as intelligence. So I think sometimes when we use the term social intelligence, we’re really talking about being empathetic and appreciating others’ feelings, but I think a lot of it’s actually about being able to do the day-to-day negotiation of everyday life.

Wendy Ju: I’m still amazed, because I’m trying to build robots that don’t run into people, at how good people are at actually negotiating passage through doorways and down hallways. And it’s not done the way a lot of roboticists want to solve these problems, which is by predicting what the person’s going to do and moving around that person. Because when you are in the space as a robot, your very presence and all of your motions change what the people are going to do. So prediction alone is not enough. That negotiation, that kind of online dialogue that happens through motion, is really important. And I think what I would consider real intelligence would be when robots are able to do this kind of realtime negotiation that we all do without even thinking about it. And if it has no interior life but it can manage not to run into people in that way, that would be intelligent to me.

Jessica B.: It’s interesting you say that, too. When I started in the VR space, I was filming with a rig that looked very weird. And you would bring this rig into the middle of a room, and you would film people, and there was always this thing, like, “Yeah, you just put it in the middle of the room, and it’s fine.” But what they didn’t think about was that how people regard the rig actually translates to how people regard you in the virtual space. I actually filmed Google I/O, and you had people go up to the rig and kind of glare at it, and check their watch, and be like, “I don’t want this,” and walk away. And you feel awful when you’re in that experience. You’re just like, “This is terrible!” But it’s that thing where suddenly you’re like, “Okay, well, this thing has to actually look friendly. You can’t just stick a bunch of GoPros on it and expect anyone to be okay with it in the space.”

Jessica B.: And the idea of live negotiation, the things that we don’t think about… I used to actually sit under the rig, so if anyone was like, “That’s weird. They look dumb,” I’d be like, “It’s fine. It’s okay.” But yeah, it’s just the stuff you don’t think about.

Hod Lipson: I think the way that a robot sees itself and the way it sees other humans or other robots, it’s the same kind of AI. It’s the same kind of intelligence. It’s not separable. The way we see ourselves and the way we see other people, it’s the same kind of ability. So theory of mind and self awareness are all very, very close to each other.

Cristina: Max, we’ve been talking about these ideas, social intelligence, theory of mind, perception, and so on. How about superintelligence? What is that? Could we see a crossing of the line to superintelligence?

Max Tegmark: So right now our own intelligence is really limited by what could fit through Mommy’s birth canal, right? And our artificial intelligence is really limited by our own intelligence and ability to design it. But if we can get to the point where machines are smart enough that they can help design ever better ones, then pretty soon we’re going to be limited not by how clever we are but we’re going to just be limited by the laws of physics. Which also sets limits on everything. And it’s kind of refreshing, if you’re claustrophobic, to realize that those limits are sky high compared to today. You sometimes read these articles in the newspapers about how Moore’s law is about to fizzle out, whatever, but all they’re talking about there is that this particular technology that powers your cell phones today, shuffling electrons around in two dimensions to compute, is going to be hitting a little barrier, and so we’re going to switch to yet another paradigm. Just like we switched away from punch cards in the past.

Max Tegmark: And there’s a really fascinating paper by Seth Lloyd where he just works out, “What are the limits from the laws of physics of how much computing power one kilogram of stuff can have?” And it’s about a million million million million million times above what your laptop can do today. So we can do so vastly more. And I think the optimistic note to take away from this is that, if we can harness even a small fraction of the intelligence that’s sort of latent in nature and tap into it and have it do good things for us, then we’re not in a zero-sum game anymore, where we have to quibble about, “Oh, this little piece of land belongs to my country, not your country!” We can get so much more for everybody, in terms of resources, in terms of solving the problems we face, and if, for some reason, you want more, still more resources than there are on earth, of course with that sort of intelligence, space travel is a walk in the park. There’s so much more we could do out there also.
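The paper Tegmark is citing is Seth Lloyd’s “Ultimate physical limits to computation” (Nature, 2000), which applies the Margolus-Levitin theorem: a system with energy E can perform at most 2E/(πℏ) elementary operations per second. Putting one kilogram’s full rest energy to work:

```latex
\nu_{\max} \;\le\; \frac{2E}{\pi\hbar} \;=\; \frac{2mc^{2}}{\pi\hbar}
\;\approx\; \frac{2 \times 1\,\mathrm{kg} \times (3\times10^{8}\,\mathrm{m/s})^{2}}
                 {\pi \times 1.05\times10^{-34}\,\mathrm{J\,s}}
\;\approx\; 5\times10^{50}\ \mathrm{ops/s}.
```

Against a laptop at roughly 10^12 operations per second, that is tens of orders of magnitude of headroom; the exact multiple depends on what one counts as an operation, which is why the figure gets quoted as loosely as it is above.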

Max Tegmark: So superintelligence has inspired a lot of people to think, “Hey, maybe this is the next step in the development of life in the cosmos.” And at the same time, of course, being the most powerful technology ever invented, it’s also freaked a lot of people out, where people say, “Wait a minute. Maybe we should think this through a little bit first. Make sure it becomes a good thing.” I personally share both the enthusiasm and the concern that we should think things through a little bit first. Fortunately, I think we probably have time. I mean, if you look at the surveys of AI researchers, we’re probably some decades away from something like this happening, but there are a lot of questions we have to answer first which are really, really hard. So I think we should really invest heavily in researching these questions to get the answers by the time we need them, rather than start working on them the night before someone switches on a superintelligence.

Cristina: Yeah. Like cramming for the ultimate test.

Max Tegmark: Yeah.

Cristina: For Humanity.

Max Tegmark: Exactly.

Cristina: This is a great point about asking questions, and it reminds me that it’s time for me to give you all an opportunity to ask the panel, if you have any questions that are burning. I see-

Speaker 10: Close to the topic, as stated in our title: immortality and superhumanism. And certainly with immortality, I’ve heard the debate about whether that’s even a good thing or not. But certainly nobody I know is really looking forward to death. So, at least in extending life, where are we currently with being able to replace the parts of us that break down? What parts are harder than others? And where do we see this going in the near future, with this idea of avoiding death at least a little bit longer, when our bodies begin to break down and we replace them with artificial parts?

Cristina: Who would like to-

Hod Lipson: Yeah. I think-

Matthew Liao: Oh. Go ahead.

Hod Lipson: I just want to say that a lot of people ask, “Where’s the bionics? Within my lifetime?” Right? That’s kind of the ultimate question. I think, as a roboticist, I can say that, when it comes to sensors and actuators, that’s pretty close. In other words, you can pretty much for sure replace things like ears, eyes, knees, muscles. These things are within reach, anywhere from bioprinting all the way to implants that improve your eyesight. That’s within reach. When it comes to things like improving your ability to think, to imagine, that’s a lot harder. Because we don’t understand sort of the code of how the brain works. Not to say that it’s impossible, but that part is going to be tricky.

Hod Lipson: But then there’s auxiliary things like using AI to edit the genome to eliminate diseases that kill a lot of people. That kind of thing is sort of indirect. It’s not the bionics, but it’s indirect ways of extending life, and I think we’ll see, we are already seeing, the fruit of that. And that will extend even more.

Matthew Liao: Yeah. So, just to chime in on this, I think there are two approaches. You can have a biological approach: there are embryonic stem cells, and we also know about telomere shortening, cell death, et cetera, et cetera, so people are trying to stop telomeres from getting shorter and shorter as a way of prolonging life. And then using CRISPR as a gene-editing tool might enable us, even without getting into bionics, to sort of extend life.

Matthew Liao: And then there’s the bionic aspect. I think a lot of people are interested in things like artificial neurons, right? So again, DARPA, the Defense Advanced Research Projects Agency, is interested. They’re sort of building prosthetic memories. People are coming back with damaged memories, and they’re trying to create artificial neurons to replace the damaged ones. I think that’s probably going to be the hardest part: we can replace hearts and limbs and stuff like that, but the brain is probably the hardest part to replace. But if you can start to build artificial neurons, and if they can function in a very similar way to biological neurons, then that’s going to be another way of sort of getting into life extension.

Cristina: There’s one thing that we haven’t talked about yet. We’ve been talking more on the side of the machines becoming more human and how we’re responding to that, but how about if we decide we want to continue on through something like a brain upload? Max, what is that? What is a brain upload? Why would we want to do that?

Max Tegmark: Well, the basic idea is coming back to exactly where we started, this insight that maybe intelligence and consciousness is not something mysterious that can only exist in meat blobs but actually maybe it’s all about information processing. We now know that it only takes two gigabytes to store your entire DNA code, like a typical movie download. It’s no big deal. It just takes about 100 terabytes to store all the information in your brain. So if it’s really the information processing that matters only, well why do we need to have it done in meat blobs? Ray Kurzweil, for example, would love to upload himself into a robot before his biological body gets too old, so he can keep living on.
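Both storage figures are back-of-envelope estimates one can reproduce. The genome number follows from counting base pairs at two bits each (four possible bases); the brain number assumes roughly 10^14 synapses at about a byte of state apiece, a common rough estimate rather than a measurement:

```latex
\underbrace{2 \times 3.2\times10^{9}\ \mathrm{bp}}_{\text{two copies per cell}}
  \times 2\ \tfrac{\mathrm{bits}}{\mathrm{bp}}
  \;\approx\; 1.3\times10^{10}\ \mathrm{bits} \;\approx\; 1.6\ \mathrm{GB}
\qquad\qquad
10^{14}\ \mathrm{synapses} \times 8\ \mathrm{bits}
  \;=\; 8\times10^{14}\ \mathrm{bits} \;=\; 100\ \mathrm{TB}
```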

Max Tegmark: Is this science or science fiction? As a physicist, I’m a blob of quarks and electrons. There’s no doubt about that. And there’s nothing special about my quarks. They are the same kind of quarks, dumb quarks, that make up the chair and everything else in here. So it’s absolutely possible. Are we going to figure out how to do it in our lifetime? I think it’s actually going to be much harder to figure out how our brain does intelligence than to build artificial intelligence that can do all those same things. Just like it turned out to be much harder to figure out how to build a mechanical bird than to build an airplane. There’s a TED Talk with a mechanical bird now; it’s great, but it came 100 years after the Wright brothers.

Max Tegmark: And if we succeed in building artificial general intelligence, even superintelligence, and that actually happens in 30 years, you’ll still be young and healthy, because you keep going to the gym, right? Then there’s no particular reason why one couldn’t use that technology to figure out how to do uploading and all these other things that we want to do.

Max Tegmark: And just a little word of warning. You defined consciousness there as a model of self. I prefer the definition of consciousness that David Chalmers here at NYU gives: simply subjective experience. I experience stuff when I drive, colors and sounds and emotions and so on. I don’t know if a self-driving car experiences anything. So after you’ve uploaded yourself into this robot, the robo-Cristina that looks like you and talks like you, but before you switch off your biological body, I think you should make sure that you understand whether it’s actually having a subjective experience or it’s just a mindless zombie talking and acting like you. Because in the latter case, it would be kind of a bummer if you switched yourself off. Because that’s it, you know?

Max Tegmark: And I think this is a great challenge for scientists to try to figure out ultimately, which information processing feels conscious.

Cristina: Yeah. Matthew, I’d love your thoughts.

Matthew Liao: Yeah.

Cristina: I could see you [crosstalk 00:54:24]-

Matthew Liao: So there are metaphysical and philosophical issues about uploading. One question is going to be: suppose you can do that, and you upload your consciousness onto some sort of hardware. One issue is going to be whether that thing is going to be you, right? And you might think it’s not, because, well, just imagine the case where you’ve uploaded but you’re still there, still standing there. So which one is you? It’s going to be the wetware. It’s the meat blob that’s going to be you, rather than the thing that’s uploaded. And so if that’s not going to be you, that’s going to be a bit of a problem. I mean, it’ll have your qualitative characteristics, et cetera, et cetera, but if it’s not going to be you… It’s great that your twin is sort of continuing to survive, but it’s not going to be you living. You’re not going to survive forever.

Matthew Liao: And so, if you care about your own survival, then you might have to think about other ways besides uploading. One possibility is something that people are talking about, which is gradual replacement, what I was talking about earlier, where you gradually replace your neurons one by one. And there, you might be able to survive. And then there are also going to be ethical issues, right? With respect to immortality: do we want to live forever? We were talking about this earlier in a back room, and a lot of people say, “No, death could actually be a good thing. Because it’s good to have deadlines.”

Matthew Liao: It’s nice that, at this session, there’s a beginning. People get really excited, right? And then there’s a middle. And then there’s an end. But imagine if there wasn’t a beginning to this session. You could just come in any time for the next 100 years, right? And then there’s no end. It just goes on and on, you know? It seems like a lot of human projects lose their urgency when there’s no deadline, when there’s no time limit to that thing. And so some people worry, philosophers worry about that. They worry that immortality might get really boring, right?

Matthew Liao: And then another issue is: if you can live forever, how is that going to affect your relationships? Imagine you’re 200 years old. That means your children are going to be 180 years old, your grandchildren 160, and so on. So what are those relationships going to be like? What’s your relationship going to be like with your great-grandchild, who’s 140 years old? Those are things we need to think about once we can extend life that far.

Cristina: A lot to think about. So I can feel an urgency in the audience to ask a question or two, and I think we have a couple of minutes left. I want to go to the back or towards the back of the room. There’s a man with his hand-

Speaker 12: Do you think computing technology, whether it’s AI or prosthetics or virtual reality, will be a social equalizer, helping people who have been left behind catch up in certain ways? Or will it increase inequality in terms of opportunities and resources?

Jessica B.: I know the work that I did at Google, which I hope they’re still doing, was really trying to get at that. When VR was just beginning, I thought about who the creators would be. I worked in the film world before then, and I can tell you there are many biases in the film world that I didn’t want to see reintroduced into this new medium that was coming up.

Jessica B.: And so a lot of my work was an effort to get this technology into the hands of the people who should be using it and are usually the last to get it. Thinking about accessibility: creating VR experiences from the ground up that consider those who are deaf, those who have visual impairments, the people we usually think of second, and actually designing experiences thinking about them first. So really trying to figure out ways we could approach that. It’s slow going, but I think, especially in the VR community, there are a lot of great women creators out there, people of color as well, and equipment being brought to communities so that they can tell their own stories instead of other filmmakers going in and doing the whole cultural-appropriation thing all over again.

Jessica B.: You can’t control all of it. Things come through and seep in, and it’s hard to manage that. I will say that recently I asked a friend, “Who’s doing really cool art in the AI space right now? People I should look at.” And they’re all dudes. So, you know? It’s hard. And I think that, again, there’s inherent bias in data. There just is. Not everyone has access to the internet. Not everyone’s giving their data. The idea of a system training itself is interesting, but then is it human if there’s no human data in it? It’s tricky. We have a lot of mess in all of this.

Hod Lipson: I think the general trend is toward equalization, because you can see, for example, the medical applications that Max mentioned earlier. If you have an app that can diagnose cancer, you don’t need to go to a doctor. You don’t need to have access to one or pay one. Anybody on the planet can have access to this technology because it’s in your phone. That means more people will get diagnosed. That’s just the beginning.

Hod Lipson: We’re seeing that everywhere. Driverless cars will give people in rural areas the kind of mobility that right now only people in Manhattan have. So I think, in general, we’re seeing more equalization. Of course it’s going to be rough. There are going to be ups and downs. But that, I think, is the general trend.

Cristina: I could talk about this all night. In fact, if it were a couple of hundred years, I wouldn’t mind. But I love the idea of leaving on a hopeful note, and this particular bag of quarks and electrons hopes that we do win the wisdom race as a group in grappling with these amazing technologies. Could you all join me in thanking the panelists for a really wonderful discussion?

Moderator

Mariette DiChristina, Editor

Mariette DiChristina is Director of Editorial & Publishing for Nature Research Magazines, overseeing the global editorial teams for Nature magazine, Partnership & Custom Media and Scientific American, for which she also serves as editor in chief.


Participants

Wendy Ju, Roboticist

Wendy Ju is an Assistant Professor in the Department of Information Science at Cornell Tech’s Jacobs Technion-Cornell Institute, where she leads the Future Autonomy Research (FAR) Lab. Her work focuses on ways that interactive devices like robots can communicate with people without interrupting or intruding.

Jessica Brillhart, Immersive Director

Jessica Brillhart is an Immersive Director, Writer, and Theorist. She’s the founder of the independent studio, Vrai Pictures, and is on the roster for Mssng Peces. Previously, Brillhart was the Principal Filmmaker for VR at Google where she worked with engineers to develop Google Jump.

S. Matthew Liao, Bioethicist and Philosopher

S. Matthew Liao is Arthur Zitrin Professor of Bioethics, Director of the Center for Bioethics, and Affiliated Professor in the Department of Philosophy at New York University. He is the author of The Right to Be Loved, Moral Brains: The Neuroscience of Morality, and over 60 articles in philosophy and bioethics.

Max Tegmark, Physicist, AI Researcher, and Author

President of the Future of Life Institute, Max Tegmark advocates for positive use of technology. He is also a professor doing physics and AI research at MIT.

Hod Lipson, Roboticist

Hod Lipson is a roboticist who works in the areas of artificial intelligence and digital manufacturing. An award-winning researcher, teacher, and communicator, Lipson enjoys sharing the beauty of robotics through his books, essays, public lectures, and radio and television appearances.


Transcription

Cristina: Awesome to see you here. I’m so excited to be back again at the World Science Festival, and today, we’re going to talk about robots and AI. And you know the first use of the word robot was in a play about mechanical men that were built to work on factory assembly lines. In the play, the robots eventually rebel against their human masters. The play was called RUR, Rossum’s Universal Robots, and it was written by a Czech playwright, Karel Capek, in 1921. The word robot itself comes from the Czech word meaning slave.

Cristina: Today, robots entertain us in movies, like when we see C-3PO or R2-D2 and many others. Robots help perform surgeries to heal us and even are part of prostheses. They help explore far-off places, like going to Mars. Everybody remembers the Mars rovers there. They do reconnaissance for us in battle areas and go to other dangerous places. They also work in factories. In fact, some people worry about them taking our jobs or that we’ll all become too technologically dependent someday, in effect slaves to the robots.

Cristina: In this fascinating session, which is called To Be or Not to Be Bionic, on immortality and superhumanism, we’re going to explore these intersections between humans and robots today and what we might all expect in the future. And I’m going to enjoy welcoming our speakers to the stage.

Cristina: Our first participant is a professor of engineering and data science, as well as director of the Creative Machines Lab at Columbia University. He’s garnered a reputation for challenging the conventional view of robotics by designing self-aware and self-replicating robots. Please welcome Hod Lipson. Welcome, Hod.

Hod Lipson: Thank you. A pleasure to be here.

Cristina: Our next guest is assistant professor at Cornell Tech in the information science program. As executive director of Interaction Design Research at Stanford University, she innovated systems to understand how humans and robots can interact most seamlessly. She’s new to New York City, so please give a warm welcome to Wendy Ju.

Wendy Ju: Thanks for having me. Thanks.

Cristina: Next is Arthur Zitrin Professor of Bioethics and director of the Center for Bioethics at New York University’s College of Public Health. His research uses philosophic principles to study and examine the ramifications of emerging biomedical innovations. Please welcome S. Matthew Liao. Good to see you.

Matthew Liao: Good to see you.

Cristina: She’s a film director known for her pioneering work in virtual reality. She was Google’s first filmmaker and worked with a team of engineers to help develop and field test the 16-camera live action virtual reality gig and workflow now known as Google Jump. Please welcome Jessica Brillhart.

Cristina: Our final guest is a professor doing physics and AI research at MIT and advocates for positive use of technology as president of the Future Life Institute. He’s author of more than 200 technical papers, and the New York Times Bestseller Life 3.0: Being Human in the Age of Artificial Intelligence, a work that Stephen Hawking said would help a reader join “the most important conversation of our time.” Please welcome Max Tegmark.

Cristina: So we’re going to take on some heady and inspiring topics on stage with these wonderful panelists today. Max, last on the stage but first I’m going to pick on with questions. We’ve just talked about some fantastic ideas, robots and superhumanism and so on, but can you give us the lay of the land today. Where are robots and AI maybe dominating? And where is it that humans still have the upper hand?

Max Tegmark: I think, to take a step back, since this is the World Science Festival, it’s interesting to see that, after 13.8 billion years of cosmic history and a lot of human history, we used to think of basically everything that biological organisms did as something kind of mysterious, that was probably beyond science, even though we could understand how rocks moved and stuff. And then, gradually, science figured out things about how muscles and so on worked, and we built better muscles in the Industrial Revolution and started outsourcing that to machines, but the mind still seemed very mysterious.

Max Tegmark: And gradually artificial intelligence research, of course, has been chipping away at that, too. Now we get our posterior kicked at chess and in go and soon in driving and in so many other things, and although we’re very far away from being able to do everything that humans can do with our minds, most AI researchers think that we probably are going to get there in a matter of decades. This is the holy grail known as artificial general intelligence. And I think we’ll talk a lot about if that’ll happen, and if so, what we can do to make it good.

Max Tegmark: And in the meantime, what we have right now is a situation where we have very good narrow artificial intelligence that’s much better than us at certain narrow tasks, like multiplying numbers fast and, as I said, playing certain games and dealing with massive databases, and an increasingly long list of tasks, and we all feel how this is beginning to transform our daily lives, everything from GPS navigation to all the other interactions you have with computers. And I think it’s easy to underestimate the extent of it, actually. Because, as soon as something works, machines can do it, we stop calling it AI. We sort of define artificial intelligence flippantly as that which machines still cannot do.

Max Tegmark: I think, finally, looking into the next five years, 10 years, we’re going to see huge progress, particularly in the areas where you can make a lot of money off of it, so we’re going to see real revolution in healthcare, much better diagnostics, also more affordable, so it can spread throughout the world. Obviously in the transportation sector. But really, throughout much of the economy, I think. In these narrower areas.

Cristina: Thanks a lot, Max. It’s really interesting to hear both about the progress of the human mind and understanding these sort of artificial minds and where that may take us. Hod, this sort of brings me to you, so we’ve talked a little bit about the intellectual sphere of general artificial intelligence and so on. But you work a lot about robot bodies. Can you tell us a little bit more about these sort of physical manifestations?

Hod Lipson: Yes, I think, if you try to look sort the long term, what are the next things that AI and robots have a hard time with, and some of this is already bubbling in academia. It hasn’t quite made it into industry. There are certain areas. One of them, I think, is this whole area of creativity. One of these areas where we think we humans really have a hold on creativity, and machines can analyze data, but they can’t really come up with ideas. That’s one of the things we’re seeing already changing, machines that can come up with ideas, can generate creative things, anywhere from engineering designs to art. And everything in between. So creativity is one big area. I still hear people saying, “As long as you’re creative, AI cannot touch you, and we’re safe.” And I think we’re going to have to come to terms with that element as well.

Hod Lipson: But then I think the next area where AI will have a hard time is the physical body. So I like the analogy of H.G. Wells when he talked in the War of the Worlds about the aliens, that they have not earned their place in the physical world. AI has also not earned its placed in the physical world, has not crawled through mud and rain and sand as we have. And AI can live in a computer, but if you look at the state of the art of physical robotics, we are ages behind. Things like batteries and material science that aren’t growing exponentially, as software is. So that’s the one area where I think we will be safe for a while. Computers will drive your computers pretty soon, but when the car breaks down, it’s going to be a human crawling around, fixing it. Because computers can’t do that yet. So the physical body is important.

Hod Lipson: If I look beyond that, the big place where the mind and the body come together is sentience and the ability of a machine to have a concept of self, emotions, real emotions, things like that, and I think that’s where things get really interesting.

Cristina: And this kind of brings me to you, Wendy. I was thinking about something that I guess Max and Hod have touched on, which is jobs and robots and jobs. Now what are the sorts of jobs that might be automated? And what are human responses maybe to that?

Wendy Ju: That’s such an interesting question. About five years ago, Leila Takayama and I did a study asking… We went through the US Bureau of Labor Statistics job descriptions and actually had people rate the degree that people think that the robot could or should do these jobs, and contrary to what I think anyone wants for their own job, people really wanted, in general, for robots to do things instead of people, instead of with people. So your job, you might be willing to work with a robot. Other people’s jobs, it would be great if a robot would just do that.

Wendy Ju: But I wonder now if some of that logic might be changing. What I think what we’ve seen in a lot of the recent developments is that AI and robotics, they work well, but they have incredible limitations, and you could say the same about people. But a lot of the things that robots can do with people who really know how to work with robots is amazing, and definitely that’s the case with AI. The most successful chess playing is happening with these, I guess they call them centaurs. Other people using AI as part of the strategy to win a chess game.

Wendy Ju: And so I think really what we should all be looking for is to think about how we could use these technologies to change our own jobs. Instead of worrying about the replacement question. Because I think the jobs that we have will all evolve to be in a place where it will be almost unrecognizable in five years.

Cristina: So we’ve talked a little bit about robots and people. How about robots with people? Matthew, I’m looking at you. What about this idea of bionic modifications for humans? I know, robots, a lot of times when we’re watching… I know I love to watch the robot dog running around. But they sometimes seem a little clumsy. If human prospects for jobs dwindle or change, what should we do there?

Matthew Liao: So there’s actually, I think, another interesting area, sort of robots in people. And right now, there’s already a lot of devices, things like brain/computer interface. There’s something called deep brain stimulation, and that’s something that sort of… is basically you put a sort of thin electrode into your brain, and it’s connected to a battery pack, and about 100,000 people today already wear something like the deep brain stimulation for things like Parkinson’s disease, for epilepsy, for depression. But right now, the deep brain stimulation is kind of dumb. It’s not a smart device, in that it’s an open system where you kind of have to modulate the electricity. But you can imagine that, in five years… right now, DARPA is trying to make it into a closed system where they use artificial intelligence, so that it’ll monitor, automatically monitor, your brain activities, and then just modulate the brain activities in real time. And so that’s going to raise all sorts of interesting ethical issues.

Matthew Liao: But just on the near term… I’m an ethicist. I’m a philosopher. So there are a lot of near-term ethical issues that we should be thinking about. Like how do we program ethics into self-driving cars, for example? Accidents are going to happen. If the car is going to go towards five people but you’re in the… and then it can… it’s going to kill those five people, but the car can drive off the road, killing you? Would you buy a car like that? That would do that? And who decides that? Who makes that kind of programming decision?

Matthew Liao: Another near-term issue is going to be something like… There’s this idea of “garbage in, garbage out,” so the AI’s only going to be as good as the data you give it. And so if you have a lot of biased data, then that’s going to give you very biased results. And that’s showing up, for example, already. So some people are trying to use artificial intelligence for sentencing, to make sentencing decisions. But if you have biased data, that could give you very biased sentencing decisions.

Cristina: So, Jessica, I’d like to come to you to follow up on these.

Jessica B.: Oh, okay.

Cristina: So we’ve talked a lot about… Well, no, you’ll see in a second. Trust me.

Jessica B.: Okay. I’m trying to think-

Cristina: We’ve talked a lot about the human interactions. I can’t think of a more human response than how we respond in artistic ways and other ways. And as we’re talking about how work can extend human experience… you work in VR and extending human sensory experiences that way. What are some of the sort of uniquely human characteristics you bring into your work as you’re doing that?

Jessica B.: Oh. I mean I think VR is very good at challenging human perception, probably more than… I mean, it can extend it, but I think the more interesting work in the virtual space, and in the augmented reality space, and in the AI space, is how it challenges us in the notion of creativity. It challenges us in the notions of perception. It, for me at least, has made me question the very small sliver in which we can experience the universe and how limited that is. It can be both celebrated and used in narrative ways in the virtual space.

Jessica B.: I find that the work that I’ve been able to do in breaking that more into what is happening artistically with neural networks and artificial systems has been really fascinating to me. Because it extends that kind of inquiry in the virtual reality space for sure.

Cristina: Is there anything that it’s brought to you that you would suggest to the audience, who hasn’t maybe experienced it, that you wouldn’t have done any other way?

Jessica B.: Yeah. I mean, we talk a lot about branching narratives, this idea that it can permeate… the mass amounts of permeations we can make with where a story could go. Things like that, where every single choice that we make can have an actual result that the system then understands and creates and presents to you. The idea of a character that is driven by an AI is also very fascinating, where the emotions and the intent and the friendship is all based upon how you treat it. Him or her. I say “it,” but again that might change. And I think also… The company that I started, actually, after working at Google. A lot of what I think about is not so much the idea of dualism or the binary, especially given AI’s evolution based upon getting away from the binaries, really. It’s interesting and fascinating, the idea that all of these technologies, VR, AI, AR, will all come together and create kind of this new immersive world, this new reality that exists for us, that both, again, extends and challenges our abilities to perceive what it means to be human.

Cristina: So we’ve just thrown a lot of really fascinating ideas in the air, and I wonder if anybody in the audience would like to ask a question. And if you would, just say who you are, and if your question is directed to somebody on the panel in specific, please say that, too. I see one right in the front here.

Hod Lipson: Well, I don’t want to put a dark storm on things. I think I know where Max comes down on this, so maybe other people would chime in. So we seem to be really good, as humans, at doing some really cool stuff. But malevolence always ends up being present, no matter how good we want to be or seem to be. It seems like that’s the elephant in the room as far as AI goes. How do we create a world where all these great things like you’re talking about are going to be able to come to be, when in fact it seems to me that just preventing the emergence of some malevolent AI on the world seems to be the biggest challenge we face?

Cristina: Max, do you want to [inaudible 00:16:56]. Oh, Hod?

Hod Lipson: Yeah, I’m happy to take a stab. I think that the question is also not what AI will do to people but what people will do to people using AI, really. That’s the shorter term, much more urgent question. If we get to the point where AI is so clever that it doesn’t need us anymore, that’s so far ahead of me we’d be lucky to pass that. Intermediate phase is where people have this incredible power. So I think you’re absolutely right. There’s pros and cons. I definitely believe that the benefits far outweigh the risks, but there are risks, and we should all be aware of them. In general, humankind has been able to use technologies for the better rather than for the worse. There’s always bad stuff happening, but overall we’re getting better at these things. And at least I’m optimistic that we’ll be able to use this technology positively.

Max Tegmark: Let me just to add some optimism to this and cheer you up a little. Because I think, of course, you’re right, that as we make technology ever more powerful it becomes ever more important that we win the wisdom race, the race between the power of the technology and the wisdom with which you’re managing it. You would never walk into a New York kindergarten with a box of hand grades, and say, “Hey, kids! Play with this.” Because the wisdom just really isn’t there. And sometimes when I see how our world politicians are handling hydrogen bombs, I feel we’ve sort of repeated that kindergarten experiment there. And I think we should try to do better with AI.

Max Tegmark: Some of my colleagues are kind of gloomy and think there’s some sort of law of nature that any technology that can be developed for military purposes or harming people or whatever will be developed and there’s nothing you can do about it. I think that’s manifestly wrong, and we have evidence for it. Look at biology. Biology today you mainly associate with new cures, helping people live healthier, longer lives. Not with bioweapons, right? Why is that? That’s a technology we chose as a species not to create some sort of crazy arms race around. Why? Because a bunch of scientists in the 1970s pushed really hard to stigmatize this and got an international ban around this, which has made biology the force for good that it is now.

Max Tegmark: And I’m quite hopeful that, if we can similarly stigmatize bad uses of AI, from Cambridge Analytica to lethal autonomous weapons, we can end up in a similar future, where one day we look back, and people associate AI with all sorts of ways of making the world better, just as biology is today. But we’re going to have to fight for it and work hard for it, just like the biologists did. Because it’s not going to happen by default.

Cristina: Wendy, you wanted to add something.

Wendy Ju: Yeah. I think there’s two things to worry about. One thing is malevolent actors, and the second thing is just unintended consequences. And actually, for both things, something that’s very important is kind of this ability to simulate worlds and run small-scale tests and games. A lot of times, people look down on games as something that’s a waste of time or something you do for diversion, but it’s actually really important in situations like these, where you don’t actually know what the contingent behavior will result in in a larger emergent space, to be able to run small-scale experiments and actually assign some people the job of playing the bad actors and then seeing how things unfold. And I think we can’t just tell everyone, “You have to behave ethically! Let’s just do positive things. We’ll put down some regulations around it,” and hope for the best.

Wendy Ju: I actually think it’s really important to be able to anticipate, “What are the negative things?” What some of the motivations for some of those things will be. So that we can actually stay a step ahead. And that also will help in situations where there’s no malevolent actors at all. That happens all the time, where everyone is acting totally out of the goodness of their hearts, but still terrible things happen.

Jessica B.: Yeah, I think it’s important to run narrative through a lot of these systems. I mean, again, it’s the thing where people don’t like to talk about feelings or emotions or stories or any of that stuff, but what happens is that you see the permeations of where this stuff could go. You understand more fully and more richly where things could go. And I think we do ourselves a grave disservice if we don’t start doing that more now.

Max Tegmark: I just want to agree with what both of you said here. I think it’s so important to think through what can go wrong. And that is so often misunderstood as somehow scaremongering. And you see so many silly news articles accusing something like this of scaremongering for talking like this. But this is just what we at MIT call safety engineering, right? Like the moon mission. NASA thought through what could go wrong if you put some astronauts on explosive fuel tanks and launched them into space. There was a lot that could go wrong. That wasn’t scaremongering. That was doing exactly what you were suggesting, thinking through what can go wrong to make sure it can go right. And I think we owe it to ourselves to think through what can go wrong with AI exactly, so that it can become the force for good in the world that it has the potential to be.

Matthew Liao: I think Max is exactly right. We need to be techno realists about these situations and realize the risks, as well as the benefits. And making sure that… I do think that policy and regulation, making sure that that’s in place, and globally as well, because I think this technology has no borders. And so it’s something that we have to… It’s our societies, our worlds, our future. We have to have a conversation about these issues.

Cristina: So I want to keep on this. I love this topic. We talked about us versus them. We talked about us and them. We talked about us versus us using them. I think it would be good to talk a little bit more about… so we’re turning to our second act… about creating robots in our own image and maybe more exploring that. And Wendy, I’d like to come to you to talk about your experiments with automata that are like household objects or furniture, robots that blend in, so that you can see how humans respond to them and how… What have you found out about that? What are you seeing?

Wendy Ju: Yeah. So right now, at this moment in time, I’m probably most famous for the work I’m doing on how people interact with autonomous cars. But that actually comes from these experiments where we looked at how people interact with ottomans that come up to your feet and try to get you to put your feet on them, desk drawers where the desk drawer might actually know what you’re doing and offer you tools, garbage cans that drive around and collect garbage. And one of the things is we find that people feel very comfortable. There’s some of novelty effect that you see around other robots when people aren’t used to seeing the object in space. They’re just like, “Oh, yeah. It’s a garbage can that’s moving.” And I think that the level of acceptance is very nice. And when we interview people, we’re like, “Do you know what that is?” They’re like, “It’s clearly a garbage can that’s moving. That’s all. It just is happening now.”

Wendy Ju: So, one, there’s not this kind of issue of trust or fear. People understand. But the thing that’s really interesting is that they have some instinctual model of intent. So one of the things that we saw in some of the videos of the experiments we ran was like a kid waving a piece of garbage in front of the garbage can because he wanted the garbage can to be across the room. And so he was trying to bait the garbage can across the room. And we were just like, “Does the kid think the garbage can wants to eat the garbage?” And so, once we noticed that, we actually saw all these other moments in the videos where people would wave garbage and call… like it’s a dog treat. And so that’s a human-robot interaction. We often have this H model, where we think people interact with machines kind of like they’re people. And this is really different. Because nobody thinks the busboy wants to eat the garbage. And so there’s all these… But everyone goes to that model, that it’s like an animal. And they clearly know it’s not an animal. They know it’s a garbage can.

Wendy Ju: And so some of these things, like the logic that we all fall to, those are some of the things that we’re trying to pull out. Because it’s so useful as designers to be able to use that when you’re designing how things can move around people so they’re safe.

Cristina: Yeah. I mean it’s one thing, I think, to engage with a garbage can that you are attributing human behavior to, but how about… You’ve seen people mapping human qualities onto robots and machines. With the work that you do in your lab, is that a problem?

Hod Lipson: I think it’s a double-edged sword. In one way, it’s very empowering for the robot. You see situations where, we’ve noticed in the lab it’s enough to stick googly eyes onto an object, and people already can see intelligence there and emotions. It doesn’t take a lot, really, to fool the human psyche into thinking that inanimate objects are intelligent. And in one way, you can sort of exploit that, because machines invariably learn from interactions with the world and with humans, and to the extent that people can interact with them in any way, that they can elicit that interaction, that sort of bootstraps the whole AI process. Because it’s all about collecting data and learning. And we know that even babies learn through interaction. And a baby might not know whey they’re smiling, but that smile elicits more behavior and more interactions. So I think we’re seeing more and more of that with physical objects.

Hod Lipson: And there’s something I would argue that is very magical about a physical object that is intelligent. Much more than an avatar on a screen or even a VR, to the extent we have VR today. When there is something physical out there that is with you in this world and is there and potentially senses the same thing you sense, there’s bonding that happens that makes that interaction ever so powerful.

Cristina: But this gets creepy at a certain point for people, right? There’s something called the uncanny valley. Talk to us about that.

Wendy Ju: Yeah. There’s something called the uncanny valley in robotics, which is that kind of the naïve sense is the more humanlike something should be, the better it would be, and what roboticists discovered is that, as you get closer to humanlike but not quite humanlike, it actually starts to be worse, much worse. People get very creeped out and feel really uncomfortable, and so this has been studied. We can kind of map it a little bit. It’s difficult to dimensionalize it. But the hypothesis for why this is is because there are aspects where people, in our evolutionary history, where people might have psychological problems. Or you might have people who are more sick and ill and don’t behave quite right. And sociologically, that’s danger to everyone. So you’re very sensitive to things that are not quite normal. And so as the robots start to look more like us, we start to become really tuned in to all these fine differences, whereas if you can readily attribute some other identification, like, “It’s not a person. It’s a chair robot.” Then you’re not worried about that maybe it’s a sick person question.

Cristina: How do you know how much human to put in?

Wendy Ju: That’s such a great question. I mean, I don’t know. My feeling is it’s not… it’s an interesting research question, but from a pragmatic standpoint, it’s not necessary. I think a lot of times the argument for making humanlike robots is that, “Well, people know how to interact with people.” But people also know how to interact with hundreds of thousands of objects. There’s many other leaping-off points that we could use.

Cristina: This raises some ethical questions surely, Matthew, about humans and robot interactions with the machines. Does it matter how humans treat the robots if they start to become more human?

Matthew Liao: Yeah. So in Japan, there are actually a lot of… They have an aging population, and a lot of elderly are now being taken care of by robots. And it’s becoming more so. And one of the issues that they’re facing right now is that some of the elderly are getting attached to these robots, right? And so there’s this whole debate about… Because they make the robots kind of cute, and they do smiley faces, so some of the elderly are… And they have pet dogs, et cetera, et cetera. And that raises… So some people are saying that we really need to make sure that the robots are very different from humans, so that people can recognize that they’re just robots. Don’t get attached to them. But I think Hod’s right that we’re actually very easily fooled, and that it’s very easy, especially once these robots get very sophisticated and can converse with you, et cetera, et cetera, you start to develop some… They want to be taken care of by one robot, rather than another robot. And that’s very… It becomes very human, very natural.

Matthew Liao: And so we need to think about those implications when we actually deploy robots into the real world for elderly care, et cetera, et cetera.

Cristina: I mean is it a bad thing necessarily if a person gets attached to it? I had stuffed animals when I was little. Is that a bad thing?

Matthew Liao: So I think that it’s not so much a bad thing. I actually think that… I’m less worried about it. And part of the reason is that I think that the elderly, especially the elderly population there in these aging homes, they don’t have a lot of interactions with other people. And you might think that, if this can give them some sort of emotional comfort, that’s a net positive thing. You might think that, in addition to that, you shouldn’t just have that. They should also have more human interactions, et cetera, et cetera, so they can have human relationships as well, like more people visiting them and things like that. But if these robots can help them in that way, I think that that’s a net positive thing.

Cristina: I start to wonder if it’s fair to the robots.

Matthew Liao: Right.

Cristina: Jessica, does this make sense to you? I’m looking at you watching everybody and wondering [crosstalk 00:30:51] of it.

Jessica B.: I’m fascinated. No, I mean I really liked your question about that. I feel like… I’ve been reading a lot about gaming culture as well and the idea that people say the same thing about gaming. “Well, if they’re obsessed with gaming, is that a bad thing? Is that ruining their brains?” And so on. And then you realize that the psychology around game addiction has everything to do with how bad their actual reality is. It’s not because the game is alluring or the game is wrong. It’s because the reality sucks. So suddenly you’re like, “Okay, I don’t have a job. My boyfriend’s mad at me.” This isn’t true. Everything’s fine.

Jessica B.: It’s just bad times or whatever. People resort to games, or someone will resort to a game, because they feel like they’re accomplishing something, and they can get money, and they can make friends and all this stuff. And so there’s really just an issue with the real stuff that’s making the not-so-real, but maybe should be real-er, stuff harder for that person to leave.

Jessica B.: So I think, in the case of someone who’s elderly, who has a robot friend who’s taking care of her, the fact that she doesn’t have anyone who would take care of her otherwise, I think, makes it totally okay for that elderly woman to be completely infatuated with her robot friends. I don’t know. I find that relationship, kind of what was being said earlier, just the tension between what’s real and not real, and maybe there isn’t as much of a difference as we think.

Cristina: Yeah. It’s interesting. It’s a really interesting area. I wonder. I mean I’m sure, turning to you in the audience again, listening to these thoughts, you must have some questions. So I’d love to invite people who have questions on their mind to share them now. And do you still have your question? He does. Okay, can we come down to the front first? Thank you for being patient.

Cristina: What happens when the robots take over and they know everything? And then you can’t teach them anymore?

Cristina: What a good question.

Matthew Liao: Great question.

Wendy Ju: That was the cutest pessimistic [crosstalk 00:32:47]. That was wonderful.

Cristina: A fantastic question.

Matthew Liao: That’s [crosstalk 00:32:50].

Wendy Ju: Yeah. There you go.

Max Tegmark: Yeah. I think instead of asking, “What will happen?” I think we should ask what we want to happen, right? Your parents know that one day you’re going to kind of, and your generation will kind of, take over from us, right? So they aren’t just asking, “What’s going to happen when you grow up and take over?” They’re trying to educate you well and make sure that you want to be nice to them when you’re stronger than them and make sure that you have their best ideals and values and so on. So if we ever do, as a species, create robots that have the ability to take over, I think we better educate then well, too, just like I’m sure your parents are doing with you.

Wendy Ju: It’s a great question.

Cristina: Yeah. I’d like to maybe get onto our next act, if you all don’t mind, which is… and I’m going to stay with you, Jessica, because I want to look a little bit beyond human perception. We started to talk a little bit about your work, and you did a piece called DeepDream VR, which starts to get at the heart of the matter of machine creativity. And from an engineering perspective, it would be great to hear how the piece was created and the connection to recent advances in neural networks, which you touched on before, and I think we’re going to also get a chance to see it.

Jessica B.: Oh, yeah. This will be interesting for me to talk about this as you watch it. So again I was working with a VR team and filming a lot of footage to test the rigs out, and I worked with this engineer, Doug Fritz, to have a system essentially dream on top of these VR images. DeepDream is, I would say, a system that’s been trained, a neural net that’s been trained on a lot of images and then told to basically extrapolate what it sees within these images, and so sometimes they’re puppy slugs, sometimes they’re eyeballs, sometimes they’re houses and cars and so on. And the reason we did this was because we just wanted to see what would happen, and some of the results were actually pretty need. And some were a little frightening.

Jessica B.: But really what it was was kind of the first way that we brought in sort of artistic systems to work in the virtual reality space. It takes a lot of processing power. It took us a long time to even get it to be in stereo, which means that your left and right eyes see it a little bit differently, so you see depth in these images as well. This one in particular was filmed on Arecibo, and this is the tram that you take up to the main platform. I guess it’s the second largest radio telescope in the world now. So you’ve got snakes, snakes on Arecibo, that’s kind of… And I think you’re going to see some kind of interesting stuff. But basically… I wish I knew more about the engineering of it. I have to give Doug the credit for that, but I will say that we basically were thinking about how creativity isn’t inherently human, but then, as humans, how we can work with machines to create.

Jessica B.: And from an output standpoint, it really kind of gave us these ideas like, “Well, maybe there are things within these images we just don’t see with our eyes or hear with our ears,” and how we could potentially use artificial systems to help enhance that.

Cristina: What’s the next question you have that you might like to explore using these methods?

Jessica B.: I think the… Yeah. I mean there’s so many amazing things happen in the art kind of AI space. Mario Klingemann is amazing. Basically, he’s been training a system to recreate faces, which again is very kind of… But he got the eyes right. So you see… And now you can actually film yourself, and the system extrapolates on top of your face and adds, like, Picasso to you and then pulls from different Renaissance painters, and suddenly, you’re everyone but no one at the same time. It’s really fascinating. It also brings up the whole idea of, like, “Could you do that with a celebrity? Could you do that with someone who’s gone? Should you?” This whole kind of questioning of that as well. But really another guy has actually taken your left and your right eye, the way they see, and switched it in a VR experience. So now suddenly you’re seeing things through different eyes, but the way those… Kind of like messing with IPD and things like that. It’s a wild world.

Jessica B.: But I think I’m more interested in how to apply that functionally in some way? In a way that’s a little bit more controlled. We have the capacity to futz with parameters in a way that changes something that’s a puppy slug into a pagoda. So, knowing that, what do we do? And I think I’m really fascinated by where that can go.

Cristina: That’s really wild.

Jessica B.: Yeah.

Cristina: Hod, in your lab, you’ve had robots do some exploratory creative things like this, even painting and so on. Could we hear a little bit more this?

Hod Lipson: Yeah. So we have a robot that paints oil and canvas. And actually that was motivated by this brief episode in I, Robot, where the detective goes to the robot and says, “Can a robot turn a canvas into a painting?” And I said, “Well, let’s try that.” And so the robot now paints on it’s own. Initially, it was painting sort of more like a… It would see something through the live camera and paint it, and it makes real oil on canvas. Now, it’s doing things like painting faces of people that… It just won an art competition painting flowers that don’t exist, just out of its own imagination. And right now, it is actually walking around somewhere in the world through a street view, having its own experiences, and I’ll come home, I’ll see what it’s painted. It’s really-

Female: That is cool.

Hod Lipson: It’s really… But I think generally it pushes our notion of art, and I can say that… I’ve spoken about this robot a couple of times, and most people by now accept this idea that a machine can create art, but the real controversial question is, “Is it an artist?” And what I’m really excited about this is there’s no doubt that AI is moving towards a place where it can see the world in ways we can’t. It can see the world in a broader swathe of the spectrum. It can see the world not with two eyes but with 20 eyes. It can see the world in the dark, in the radar, with sensors we don’t even have. So what would a machine like that paint? How does it translate that experience? And for me it’s like going to an alien species and seeing what their art looks like. I can’t wait to see what this machine will create when it sees the world in ways I can’t and never will be able to.

Hod Lipson: So to me it’s a new time for art, and it’s not just about tweaking some parameters and getting me to create better art or different art. It’s really to let the machine create its own thing and see what it comes up with. And maybe in the end it’s going to be like explaining Shakespeare to a dog. We won’t be able to understand it because we won’t be able to have those sensations, and it will be hopeless, but it’s an expedition worth taking.

Cristina: We’ll be the dog in this case.

Hod Lipson: Yes, in that analogy. Yeah.

Cristina: Tell me, I love that the aliens might be among us and they’re some of our creations. How might machine consciousness or perception differ, then, from humans? Can we speculate on that?

Hod Lipson: I think… It’s a big debate on what is consciousness, and nobody knows, and we cloak it with a lot of words, but nobody really has an idea of what’s behind it. But my model of it is that a machine is sort of self aware to some degree when it can model itself. When it has the ability to simulate itself. When it has the ability to take all that AI that we see today that is modeling the world and turn it inside, and the AI begins to model itself. This is what we’re trying to do in the lab. We have very simple robots that model themselves. And I would argue against, like you said, a sort of binary thing. Self consciousness or self awareness is a very continuous spectrum, and you can have very, very simple machines that are, in a very crude way, self aware, but that’s what we’re seeing.

Hod Lipson: And again, when you combine that with creativity, you really get to see how the machine might see itself through it’s own eyes, and we’ll see where that takes us.

Wendy Ju: Can I pick on that a little bit?

Cristina: Yeah. Go ahead.

Wendy Ju: I mean I feel like there’s a perspective which really focuses on the interior life of people. And I actually argue maybe some of the ways that… well, we don’t really know what’s going on in each other’s head, but we give each other the benefit of the doubt. And I think a lot of that comes from the way that we end up interacting, and if you feel like the interactions are going well, you characterize that as intelligence. So I think sometimes when we use the term social intelligence, we’re really talking about being empathetic and appreciating others’ feelings, but I think a lot of it’s actually about being able to do the day-to-day negotiation of everyday life.

Wendy Ju: Like I’m still amazed, because I’m trying to build robots that don’t run into people, how people are so good at actually negotiating passage through doorways and down hallways. And it’s not done the way that a lot of roboticists want to solve these problems, which is by predicting what the person’s going to do and moving around that person. Because when you are in the space as a robot, your very presence and all of your motions change what the people are going to do. So prediction alone is not enough. That negotiation, that kind of online dialogue that happens to the motion is really important. And I think what I would think would be real intelligence would be when robots are able to do this kind of realtime negotiation that we all do without even thinking about it. And if it has no interior life but it can not run into people in that way, I think that would be intelligent to me.

Jessica B.: It’s interesting you say that, too. When I started in the VR space, I was filming with a rig that looked very weird. And you would bring this rig into the middle of a room, and you would film people, and there was always this thing, like, “Yeah, you just put it in the middle of the room, and it’s fine.” But what they didn’t think about was that how people regard the rig actually translates to how people regard you in the virtual space. So you had people… I actually filmed Google I/O, and you had people go up to the rig and kind of glare at it, and like check their watch, and be like, “I don’t want this,” and walk away. And you feel awful when you’re in that experience. You’re just like, “This is terrible!” But it’s that thing where it’s like, suddenly you’re like, “Okay, well this thing has to actually look friendly. You can’t just stick a bunch of GoPros on it and expect anyone to be okay with this in the space.

Jessica B.: And the idea of live negotiation, the things that we don’t think about, like I used to actually sit under the rig, so if anyone was like, “That’s weird. They look dumb,” and I’d be like, “It’s fine. It’s okay.” But yeah, it’s just the stuff you don’t think about.

Hod Lipson: I think the way that a robot sees itself and the way it sees other humans or other robots, it’s the same kind of AI. It’s the same kind of intelligence. It’s not separable. The way we see ourselves and the way we see other people, it’s the same kind of ability. So theory of mind and self awareness are all very, very close to each other.

Cristina: Max, we’ve been talking about these ideas, social intelligence, theory of mind, perception, and so on. How about superintelligence? What is that? Could we see a crossing of the line to superintelligence?

Max Tegmark: So right now our own intelligence is really limited by what could fit through Mommy’s birth canal, right? And our artificial intelligence is really limited by our own intelligence and ability to design it. But if we can get to the point where machines are smart enough that they can help design ever better ones, then pretty soon we’re going to be limited not by how clever we are but we’re going to just be limited by the laws of physics. Which also sets limits on everything. And it’s kind of refreshing, if you’re claustrophobic, to realize that those limits are sky high compared to today. You sometimes read these articles in the newspapers about how Moore’s law is about to fizzle out, whatever, but all they’re talking about there is that this particular technology that powers your cell phones today, shuffling electrons around in two dimensions to compute, is going to be hitting a little barrier, and so we’re going to switch to yet another paradigm. Just like we switched away from punch cards in the past.

Max Tegmark: And there’s a really fascinating paper by Seth Lloyd where he just works out, “What are the limits from the laws of physics of how much computing power one kilogram of stuff can have?” And it’s about a million million million million million times above what your laptop can do today. So we can do so vastly more. And I think the optimistic note to take away from this is that, if we can harness even a small fraction of the intelligence that’s sort of latent in nature and tap into it and have it do good things for us, then we’re not in a zero-sum game anymore, where we have to quibble about, “Oh, this little piece of land belongs to my country, not your country!” We can get so much more for everybody, in terms of resources, in terms of solving the problems we face, and if, for some reason, you want more, still more resources than there are on earth, of course with that sort of intelligence, space travel is a walk in the park. There’s so much more we could do out there also.

Max Tegmark: So superintelligence has inspired a lot of people to think, “Hey, maybe this is the next step in development of life in the cosmos.” And at the same time, of course, being the most powerful technology ever invented, it’s also freaked a lot of people out, where people say, “Wait a minute. Maybe we should think this through a little bit first. Make sure it becomes a good thing.” I personally share both the enthusiasm and the concern that we should think things through a little bit first. Fortunately, I think we probably have. I mean, if you look at the surveys of AI researchers, we’re probably some decades until something like this happens, but there are a lot of questions we have to answer first which are really, really hard. So I think we should really invest heavily in researching these questions to get the answers by the time we need them, rather than start working on them the night before someone switches on a superintelligence.

Cristina: Yeah. Like cramming for the ultimate test.

Max Tegmark: Yeah.

Cristina: For Humanity.

Max Tegmark: Exactly.

Cristina: This is a great point about asking questions, and it reminds me that it’s time for me to give you all an opportunity to ask the panel, if you have any questions that are burning. I see-

Speaker 10: Close to the topic, as stated in our title, immortality and superhumanism. And certainly immortality, I’ve heard the debate whether that’s even a good thing or not. But certainly nobody I know is really looking forward to death. So at last in extending life, where are we currently with being able to replace the parts on us that break down? What parts are harder than others? And where do we see this in the near future of where it’s going with this idea of avoiding death at least a little bit longer when our bodies begin to break down and we replace them with artificial parts.

Cristina: Who would like to-

Hod Lipson: Yeah. I think-

Matthew Liao: Oh. Go ahead.

Hod Lipson: I just want to say that a lot of people ask, “Where’s the bionics within my lifetime?” Right? That’s kind of the ultimate question. I think, as a roboticist, I can say that, when it comes to sensors and activators, that’s pretty close. In other words, you can pretty for sure replace things like ears, eyes, knees, muscles. These things are within reach. And anywhere from bioprinting all the way to implants that improve your eyesight. That’s within reach. When it comes to things like improving your ability to think, to imagine, that’s a lot harder. Because we don’t understand sort of the code of how the brain works. And not to say that it’s impossible, but that part is going to be tricky.

Hod Lipson: But then there’s auxiliary things like using AI to edit the genome to eliminate diseases that kill a lot of people. That kind of thing is sort of indirect. It’s not the bionics, but it’s indirect ways of extending life, and I think we’ll see, we are already seeing, the fruit of that. And that will extend even more.

Matthew Liao: Yeah. So yeah. Just to chime in on this, I think there are two approaches. You can have a biological approach, so there are embryonic stem cells. We also know about telomere shortening, cell death, et cetera, et cetera, so people are trying to stop telomeres from getting shorter and shorter as a way of prolonging life. And then using CRISPR as a gene-editing tool might enable us to just sort of, even without getting into bionics, as a way of sort of extending life.

Matthew Liao: And then there’s the bionic aspect, which is, I think a lot of people are interested in things like artificial neurons, right? So again, DARPA, the Defense Advanced Research Projects Agency, is interested. They’re sort of building prosthetic memories. People are coming back with damaged memories, and they’re trying to create artificial neurons to replace. And if that were possible, then you can… I think that’s probably going to be the hardest part, so we can replace hearts and limbs and stuff like that, but the brain is probably the hardest part to replace. But if you can start to build artificial neurons, and if they can function in a very similar way as biological neurons, then you’re going to… that’s going to be another way of sort of getting into life extension.

Cristina: There’s this thing that we haven’t talked about yet. So we’ve been talking more on the side of the machines becoming more human and how we’re responding to that, but how about if we decide we want to continue on through something like a brain upload. Max, what is that? What is a brain upload? Why would we want to do that?

Max Tegmark: Well, the basic idea comes back to exactly where we started: this insight that maybe intelligence and consciousness are not something mysterious that can only exist in meat blobs, but that maybe it’s all about information processing. We now know that it takes only about two gigabytes to store your entire DNA code, like a typical movie download. No big deal. And it takes only about 100 terabytes to store all the information in your brain. So if it’s really only the information processing that matters, why does it need to be done in meat blobs? Ray Kurzweil, for example, would love to upload himself into a robot before his biological body gets too old, so he can keep living on.
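Those figures are easy to sanity-check with back-of-the-envelope arithmetic. Here is a minimal Python sketch; the neuron and synapse counts are rough order-of-magnitude assumptions commonly cited in neuroscience, not figures given by the panel:

    # Back-of-the-envelope storage estimates for the claims above.
    # All counts are order-of-magnitude assumptions.

    BASE_PAIRS = 3.2e9         # approximate length of one copy of the human genome
    BITS_PER_BASE = 2          # A, C, G, T encode to 2 bits each

    genome_gb = BASE_PAIRS * BITS_PER_BASE / 8 / 1e9
    print(f"Genome, one copy, uncompressed: {genome_gb:.1f} GB")   # ~0.8 GB

    NEURONS = 1e11             # ~100 billion neurons
    SYNAPSES_PER_NEURON = 1e3  # ~1,000 synapses each (a low-end estimate)
    BYTES_PER_SYNAPSE = 1      # one byte of state per synapse

    brain_tb = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE / 1e12
    print(f"Brain, synaptic state only: {brain_tb:.0f} TB")        # ~100 TB

One copy of the genome comes out around 0.8 GB, so counting both chromosome copies plus encoding overhead lands near the two gigabytes quoted; and the synaptic estimate is a floor, since richer per-synapse state would push the 100-terabyte figure higher.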

Max Tegmark: Is this science or science fiction? As a physicist, I’m a blob of quarks and electrons. There’s no doubt about that. And there’s nothing special about my quarks. They’re the same kind of dumb quarks that make up the chair and everything else in here. So it’s absolutely possible. Are we going to figure out how to do it in our lifetime? I think it’s actually going to be much harder to figure out how our brain does intelligence than to build artificial intelligence that can do all those same things, just as it turned out to be much harder to build a mechanical bird than to build an airplane. There’s a TED Talk now showing a mechanical bird, and it’s great, but it came about 100 years after the Wright brothers.

Max Tegmark: And if we succeed in building artificial general intelligence, even superintelligence, and that actually happens in 30 years, you’ll still be young and healthy, because you keep going to the gym, right? Then there’s no particular reason why one couldn’t use that technology to figure out how to do uploading and all these other things that we want to do.

Max Tegmark: And just a little word of warning. You defined consciousness there as a model of self. I prefer the definition David Chalmers here at NYU gives: simply subjective experience. I experience stuff when I drive, colors and sounds and motions and so on. I don’t know whether a self-driving car experiences anything. So after you’ve uploaded yourself into this robot, the robo-Cristina that looks like you and talks like you, but before you switch off your biological body, I think you should make sure you understand whether it’s actually having a subjective experience or whether it’s just a mindless zombie talking and acting like you. Because in the latter case, it would be kind of a bummer if you switched yourself off. Because that’s it, you know?

Max Tegmark: And I think this is a great challenge for scientists: to ultimately figure out which kinds of information processing feel conscious.

Cristina: Yeah. Matthew, I’d love your thoughts.

Matthew Liao: Yeah.

Cristina: I could see you [crosstalk 00:54:24]-

Matthew Liao: So there are metaphysical and philosophical issues about uploading. One question is going to be: suppose you can do that, and you upload your consciousness onto some sort of hardware. One issue is whether that thing is going to be you, right? And you might think it’s not, because, well, just imagine the case where you’ve uploaded but you’re still there, still standing here. So which one is you? It’s going to be the wetware. It’s the meat blob that’s going to be you, rather than the thing that’s uploaded. And if that’s not going to be you, that’s a bit of a problem. I mean, it’ll have your qualitative characteristics, et cetera, et cetera, but if it’s not going to be you… It’s great that your twin is continuing to survive, but it’s not going to be you living. You’re not going to survive forever.

Matthew Liao: And so, if you care about your own survival, then you might have to think about other ways besides uploading. One possibility people are talking about is gradual replacement. It’s what I was talking about earlier, where you gradually replace your neurons one by one. And there, you might be able to survive. And then there are also going to be ethical issues, right? With respect to mortality: do we want to live forever? We were talking about this earlier in a back room, and a lot of people say, “No, death could actually be a good thing. Because it’s good to have deadlines.”

Matthew Liao: It’s nice that this session has a beginning. People get really excited, right? And then there’s a middle. And then there’s an end. But imagine if there wasn’t a beginning to this session. You could just come in any time for the next 100 years, right? And there’s no end. It just goes on and on, you know? A lot of human projects lose their urgency when there’s no deadline, no time limit on the thing. And so some philosophers worry about that. They worry that immortality might get really boring, right?

Matthew Liao: And then another issue is: if you can live forever, how is that going to affect your relationships? Imagine you’re 200 years old. That means your children are going to be 180 years old, your grandchildren 160, and your great-grandchildren 140 or something like that, right? So what’s that relationship going to be like, between you and your 140-year-old great-grandchildren? Those are things we need to think about once we can extend life that much longer.

Cristina: A lot to think about. So I can feel an urgency in the audience to ask a question or two, and I think we have a couple of minutes left. I want to go to the back or towards the back of the room. There’s a man with his hand-

Speaker 12: Do you think computing technology, whether it’s AI or prosthetics or virtual reality, will be a social equalizer, helping people who have been left behind catch up in certain ways? Or will it increase inequality in terms of opportunities and resources?

Jessica B.: The work that I did at Google, which I hope they’re still doing, was really about trying to get this right as VR was just beginning. I worked in the film world before then, and I can tell you there are many biases in the film world that I didn’t want to see reintroduced into this new medium that was coming up.

Jessica B.: And so a lot of my work was an effort to get this technology into the hands of the people who should be using it and are usually the last to get it. Thinking about accessibility: creating VR experiences from the ground up that actually consider those who are deaf, those who have visual impairments, the people we usually think of second, and designing for them first. It’s slow going, but especially in the VR community there are a lot of great women creators out there, and people of color as well, and equipment being brought to communities so they can tell their own stories, instead of outside filmmakers going in and doing the whole cultural appropriation thing all over again.

Jessica B.: You can’t control all of it. Things come through and seep in, and it’s hard to manage that. I will say that recently I asked a friend, “Who’s doing really cool art in the AI space right now? Who should I look at?” And they’re all dudes. So, you know? It’s hard. And I think, again, there’s inherent bias in data. There just is. Not everyone has access to the internet. Not everyone is giving their data. The idea of a system training itself is interesting, but then is it human if there’s no human data in it? It’s tricky. We have a lot of mess in all of this.

Hod Lipson: I think the general trend is toward equalization. Look, for example, at the medical applications Max mentioned earlier. If you have an app that can diagnose cancer, you don’t need to go to a doctor. You don’t need special access. You don’t have to pay a doctor. Anybody on the planet can use this technology because it’s in your phone. That means more people will get diagnosed. And that’s just the beginning.

Hod Lipson: We’re seeing that everywhere. Driverless cars will give people in rural areas the kind of mobility that right now only people in Manhattan have. So I think, in general, we’re seeing more equalization. Of course it’s going to be rough. There are going to be ups and downs. But that, I think, is the general trend.

Cristina: I could talk about this all night. In fact, if the night lasted a couple of hundred years, I wouldn’t mind. But I love the idea of leaving on a hopeful note, and this particular bag of quarks and electrons hopes that we do win the wisdom race as a group in grappling with these amazing technologies. Could you all join me in thanking the panelists for a really wonderful discussion?