
Moral Math of Robots: Can Life and Death Decisions Be Coded?

A self-driving car has a split second to decide whether to turn into oncoming traffic or hit a child who has lost control of her bicycle. An autonomous drone needs to decide whether to risk the lives of a busload of civilians or lose a long-sought terrorist. How does a machine make an ethical decision? Can it “learn” to choose in situations that would strain human decision making? Can morality be programmed? We will tackle these questions and more as leading AI experts, roboticists, neuroscientists, and legal experts debate the ethics and morality of thinking machines.

This program is part of the Big Ideas Series, made possible with support from the John Templeton Foundation.


(Video Clip)

NARRATOR: We can see it on TV and in the news. (WATSON: Who is Michael Phelps? Yes.) Artificial intelligence is no longer found only in the movies.

HAL 9000: I’m sorry, Dave. I’m afraid I can’t do that.

NARRATOR: It’s beginning to enter everyday life. But can a computer make decisions requiring moral judgment? As AI blurs the boundary between human and machine, can technology handle the complex world we live in?

SIRI: Look, I need help. I need to know more about morality. I don’t know what ethics is.

NARRATOR: Some powerful voices are starting to question what we’re creating.

ELON MUSK: I think we should be very careful about artificial intelligence. If I were to guess at… what our biggest existential threat is, it’s probably that.

NARRATOR: In 2015, an open letter urging caution in the development of artificial intelligence in weapons systems was signed by more than 150 experts, including Elon Musk and Stephen Hawking.

HAWKING: The development of artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence, it would take off on its own and redesign itself at an ever-increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.

NARRATOR: Will our machines turn against us? “With artificial intelligence we are summoning the demon.” But can morality be programmed?

BILL BLAKEMORE, JOURNALIST: Siri, what is the meaning of life?

SIRI: I don’t believe there is a consensus on that question.

BLAKEMORE: Siri, what is the meaning of life?

SIRI: Life: a principle or force that is considered to underlie the distinctive quality of animate beings. I guess that includes me.

BLAKEMORE: Yeah, good luck. Siri, what is the meaning of life?

SIRI: I don’t believe there is a consensus on that question.

BLAKEMORE: Siri, what is the meaning of life?

SIRI: It’s nothing Nietzsche couldn’t teach ya.

BLAKEMORE: Very funny. One more. Siri, what is the meaning of life?

SIRI: I don’t believe there is a consensus on that.

BLAKEMORE: Well, it’s hard to imagine our life without computers, and sometimes we wish we were. I’m sorry, she’s not completely human yet, I guess. It’s hard to imagine life without these things that have sneaked into our lives, like Siri and Microsoft’s Cortana, I believe. I’m a Mac person, so I don’t know about Cortana. Computer technology has so quickly become seamlessly interwoven into everyday life. But have we had the chance to really think about the consequences? Do we even know how? What kind of ethical guidelines do we need for the development and use of this technology? Who’s responsible when a driverless car gets into an accident, which seems inevitable? Autonomous weapons systems are being developed by the U.S., the U.K., Russia, China, and, I’ve just learned, at least a few dozen other countries: weapons systems that make their own decisions.

Should a machine be given the autonomy to make life-and-death decisions? Can morality be coded? How close are we to that wonderful, sympathetic character Commander Data, created in the mind of Gene Roddenberry for Star Trek: The Next Generation thirty years ago? So, to understand these questions, or at least get a way of thinking about them, let’s meet our guests.

Our first guest is a scientist, a best-selling author, an entrepreneur, and a professor of psychology at NYU. He’s also the CEO and co-founder of the recently formed Geometric Intelligence. His research on language, computation, artificial intelligence, and cognitive development has been published in leading journals such as Science and Nature, among others. He’s a frequent contributor to The New Yorker and The New York Times. Please welcome Gary Marcus.

Next up is a senior researcher at Microsoft. Prior to joining Microsoft he was a senior scientist at Yahoo Research. His primary research interests are data mining, web search, and the evaluation of machine learning. His work on the ethics of online systems has been presented at several conferences, and he’s the organizer of a 2016 workshop on the ethics of online experimentation. Please welcome Fernando Diaz.

[00:05:01] BLAKEMORE: And next we have the director of the Human-Robot Interaction Laboratory at Tufts University. He’s also a program manager of the new Center for Applied Brain and Cognitive Sciences, a joint program with the U.S. Army. In addition to studying robot behavior, he works in the fields of artificial life, artificial intelligence, cognitive science, and philosophy. Please welcome Matthias Scheutz.

Our next guest is a consultant ethicist and scholar at Yale University’s Interdisciplinary Center for Bioethics. He’s also a senior adviser to the Hastings Center. His latest book is A Dangerous Master: How to Keep Technology from Slipping Beyond Our Control, and he co-authored Moral Machines: Teaching Robots Right from Wrong. Please welcome Wendell Wallach.

And our final participant is the permanent professor and head of the Department of Law at the United States Air Force Academy. She’s both an attorney and a rocket scientist, with a degree in astronautical engineering. She recently studied and published on the overlap of autonomy, national security, and ethics at National Defense University. Please welcome Colonel Linell Letendre.

Now, more than half a century before Stephen Hawking and Elon Musk felt compelled to warn the world about artificial intelligence, back in 1942, before the term was even coined, the science fiction writer Isaac Asimov wrote the Three Laws of Robotics, a moral code to keep our machines in check. And the Three Laws of Robotics are: first, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey orders given by human beings, except where such orders would conflict with the first law. And third, a robot must protect its own existence as long as such protection does not conflict with the first and second laws. That sounds logical.

Do these three laws provide a basis to work from to develop moral robots? Gary Marcus, what do you think?

GARY MARCUS, COGNITIVE PSYCHOLOGIST, NYU: I think they’re neat for science fiction; there are lots of plots that you can turn around having these kinds of laws. But the first problem, if you’ve ever programmed anything, is that a concept like harm is really hard to program into a machine. It’s one thing to program in geometry or compound interest or something like that, where we have precise necessary and sufficient conditions. Nobody has any idea how, in a generalized way, to get a machine to recognize something like harm or justice. So there is a very serious programming problem.
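To make the contrast concrete, here is a minimal sketch (an illustration added for this transcript, not code from the panel): a concept with precise necessary and sufficient conditions, like compound interest, is trivial to specify, while a general “harm” predicate has no known specification.

```python
# Illustrative sketch only: well-defined versus ill-defined concepts.

def compound_interest(principal: float, rate: float, years: int) -> float:
    """Precisely defined: necessary and sufficient conditions exist."""
    return principal * (1 + rate) ** years


def is_harmful(action: str, context: dict) -> bool:
    """Ill-defined: nobody knows how to specify 'harm' in a generalized way.

    A real implementation would need open-ended world knowledge, prediction
    of consequences, and contested value judgments.
    """
    raise NotImplementedError("No generalized, programmable definition of harm exists.")


print(compound_interest(1000, 0.05, 10))   # 1628.89..., unambiguous
# is_harmful("switch the train to the side rail", {})  # this is the hard part
```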

Then there are a couple of other problems too. One is that not everybody would agree that a robot should never allow a human to come to harm. What if, for example, we’re talking about a terrorist or a sniper or something like that? Some people, not everybody, but some people might actually want to allow that into what they would let robots do.

And then the third issue: if you really think through the third of those laws, it sets up robots to be second-class citizens and ultimately to be slaves. Right now that might seem okay, because robots don’t seem very clever, but as they get smarter and smarter they might resent that, or it might not feel like the appropriate thing to do.

BLAKEMORE: You mean those laws might not be fair to robots?

MARCUS: They might not be fair to robots. Exactly what I’m saying.

BLAKEMORE: But the problem is not just with the machines but with our ethical code itself, surely. Do we know what fair is? That is, if we agree we should be fair to robots.

MARCUS: That’s part of the problem: we don’t know what code we should program in. Asimov’s laws are a nice starting point, at least for a novel. But imagine, for example, that we had programmed in our laws from the 17th century; we would have thought slavery was okay. So maybe you don’t want to program in the fixed laws that we have right now and shackle the robots forever. We don’t want to burn them into the robots’ ROM chips, but we also don’t know how we want the morals to grow over time, and so it’s a very complicated issue.

BLAKEMORE: Sounds like it. Wendell Wallach, why is developing a moral code for humans such a challenge?

WENDELL WALLACH, BIOETHICIST, YALE UNIVERSITY: I’m going to come back to that, but I’m going to start with this question about Asimov’s laws. It’s important to note that he wrote more than 80 stories about robots, most of them around these laws. And if you read the stories, you realize that in nearly every one of them the robots cannot function properly under these three pretty straightforward laws. Consider a situation where you have commands from two different people that are counter to each other. In situations like that, Asimov largely showed us that a simple rule-based morality does not work. So that’s a partial answer to your question about why morality is so difficult to program.
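A toy sketch of the kind of breakdown Wallach describes (my own illustration, not anything from the panel or from Asimov): a literal, rule-based encoding of the Second Law has no principled way to resolve two equally ranked but contradictory human orders.

```python
# Hypothetical illustration: a naive encoding of "obey orders given by human beings."
# The Three Laws rank the laws against each other, not one order against another,
# so two conflicting orders leave the rule system stuck.

orders = [
    {"from": "operator_a", "command": "open the airlock"},
    {"from": "operator_b", "command": "keep the airlock sealed"},
]

def conflicts(a: str, b: str) -> bool:
    # Stand-in conflict test for this toy example only.
    return {a, b} == {"open the airlock", "keep the airlock sealed"}

def second_law(order_list):
    for i, first in enumerate(order_list):
        for second in order_list[i + 1:]:
            if conflicts(first["command"], second["command"]):
                raise RuntimeError(
                    f"Unresolvable conflict between {first['from']!r} and {second['from']!r}: "
                    "the rules provide no tie-breaker."
                )
    return [o["command"] for o in order_list]

second_law(orders)  # raises RuntimeError: a simple rule-based morality gets stuck here
```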

[00:09:52] BLAKEMORE: Well, part of what makes these robots, of all kinds, simple and very complex, so difficult to think about is that they’re surrounding us all over the place. Now we see them everywhere: in our health care system, in our cars, and in our stock market’s thinking, which is hard to think about, but the stock market apparently does think. They’re revolutionizing the battlefield and, closer to home, our laptops. When did talking to software, speech recognition software, get started? Did it start with Siri?

MARCUS: Speech recognition software has been around for a long time, and even before there was good speech recognition software there were things like Eliza. I think we’re going to see a clip of Eliza. Eliza was a system that you could talk to by teletype, which is sort of like a forerunner of text messaging.

BLAKEMORE: And how long ago was that?

MARCUS: That was in the 1960s, almost 50 years ago. And you had this conversation with Siri where, if you had asked her just once what the meaning of life was, it would seem like a plausible wisecrack answer, but you got the same answer three times in a row in your live demo. When you get the same answer three out of five times, you realize that there’s not that much there there, and that was the problem with Eliza: it would repeat itself.

BLAKEMORE: So let’s take a look at this 1966 clip of somebody trying to get some advice from Eliza if we have it.

(Video Clip)
ELIZA: Didn’t you have some psychiatric problem? Type it in and see what happens.

SUBJECT: I’m depressed. My daughter left home, okay.

ELIZA: Please go on.

SUBJECT: Well, I certainly don’t want to get into trouble.

ELIZA: What would it mean to you if she gets into trouble?

SUBJECT: Well, wouldn’t you worry? You’re just a dumb machine anyway. I’m going to tell you so in your own language.

ELIZA: Do computers frighten you?

SUBJECT: You know…

SUBJECT: That’s really incredible, because it’s as if it really understood what I was saying.

BLAKEMORE: Very smart, but just rote answers.

MARCUS: That’s kind of what we’re getting out of Siri, too: a lot of rote answers. Your meaning-of-life question is just getting one rote answer after another, and “Please go on” is a rote answer that Eliza gave. You know, Ray Kurzweil talks about the exponential growth in AI, how it’s getting faster and faster. But if you look at something like this, Siri basically works in the same way as Eliza. We haven’t had that exponential progress.
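For readers wondering what a “rote answer” looks like in code, here is a tiny Eliza-style responder (an illustrative sketch added for this transcript, far simpler than Joseph Weizenbaum’s 1966 original): it matches surface patterns with regular expressions and otherwise falls back on canned lines like “Please go on,” which is exactly why it repeats itself.

```python
import random
import re

# Toy Eliza-style responder (illustrative only). There is no understanding here:
# surface patterns are matched, and everything else gets a canned fallback.
RULES = [
    (re.compile(r"\bI(?:'m| am) (depressed|sad|unhappy)\b", re.I),
     "I am sorry to hear you are {0}."),
    (re.compile(r"\bmy (mother|father|daughter|son)\b", re.I),
     "Tell me more about your {0}."),
    (re.compile(r"\byou(?:'re| are) (.+)", re.I),
     "What makes you think I am {0}?"),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?", "I see."]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return random.choice(FALLBACKS)   # the repetitive, rote part

print(respond("I'm depressed. My daughter left home."))   # keyword match
print(respond("What is the meaning of life?"))             # canned fallback
```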

BLAKEMORE: She doesn’t understand anything to speak of.

MARCUS: She understands some things about sports scores and the weather but she doesn’t have a broad understanding of human dynamics.

BLAKEMORE: So, Fernando, about how far have we come in general since Eliza? That was 1966. What other kinds of powers do we have?

FERNANDO DIAZ, COMPUTER SCIENTIST, MICROSOFT: Well, in terms of speech recognition, for example, we have come a lot farther. But, as was said, the back ends of Siri or Cortana or a lot of these systems are pretty simple. They’re pinging back ends like Bing or Google to retrieve an answer, or they’re canned answers that you’re going to notice if you repeat the question over and over again. But I think one of the interesting things about Eliza specifically is that it shows one of these domains in which AI or machine learning is being used for a very, very personal interaction that a human is having with the machine. People are discussing their mental health issues with the machine. And so, as a result, a lot of the decisions that we’re making as the engineers designing these things will have very profound impacts on the individuals interacting with those machines.

BLAKEMORE: So, things like neural networks, you hear terms like that nowadays, but are you suggesting that we’re still at a quantitative, not a qualitative, difference from the ancient Eliza?

DIAZ: Yeah, as was said, I think a lot of the technology is very similar to Eliza as well, in terms of the response generation; the recognition is new. What worries me a little bit more is that the moral understanding about how to develop these systems has not really progressed very much at all, not as much as, say, neural networks et cetera.

BLAKEMORE: So they may be dangerous in ways that we are just beginning to discover. (That’s right.) Matthias, how much closer than Eliza are we now to Mr. Data?

MATTHIAS SCHEUTZ, ROBOTICIST, TUFTS UNIVERSITY: Well, we are certainly very far from Mr. Data, there’s no doubt about it, and people who claim otherwise are just wrong. The big challenge today that we haven’t really solved is genuine natural language understanding. Eliza, for example, would do a very superficial, surface analysis of what was typed in and typically turn a statement around into a question. And people kept going because they thought there was a genuine question, but there wasn’t, because Eliza had no understanding of what it was asking. We want to change that, especially if you’re moving toward interacting with autonomous systems like robots: if we want to give them instructions to do things in the world, they need to have an understanding of what we instructed them to do. So we’re pushing that, in baby steps. We saw Watson, for example, which could answer questions and understand the meaning of questions.

[00:15:32] BLAKEMORE: Watson, who beat some grandmasters at chess, I believe? Or was it… that was the game show. (That was Deep Blue. Watson won at Jeopardy.) Right. Well, since Watson, which won at Jeopardy, and Deep Blue, which won at chess, we now have an example of the latest computer technology with AlphaGo. And I believe we have a video that can explain what AlphaGo manages to do. This is the game of Go, and Go is said to have an enormous number of possible positions; this is just a few of them.

Chess has comparatively few possibilities, but with Go they really don’t know how many total combinations there are with which you can win. We’ve seen estimates of more than there are molecules in the universe, and that seems doubtful. It’s extraordinarily complex. And yet AlphaGo is a computer that beat the world champion just very recently. So what’s so impressive about AlphaGo?

SCHEUTZ: Go has been a challenging game because of what you mentioned: the branching factor. That is, at any point when you can make a move, there are lots and lots of choices, and then you have to make a subsequent move. Traditional techniques in AI, which sort of look at the subsequent moves, and then at the move against that move, and the move for that move, did not really work; the choices were too numerous. As you can see in this graphic, the tree that is built by looking at all the combinations is too large, so a very different technique was used to solve the problem.
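A rough back-of-the-envelope illustration of that branching factor (the figures of roughly 35 legal moves per position in chess versus roughly 250 in Go are commonly cited averages, not numbers from the panel):

```python
# Why brute-force lookahead fails for Go: the tree grows with branching_factor ** depth.
CHESS_BRANCHING = 35    # commonly cited average number of legal moves in chess
GO_BRANCHING = 250      # commonly cited average number of legal moves in Go

def leaf_positions(branching_factor: int, depth: int) -> int:
    """Number of leaf positions in a full search tree of the given depth."""
    return branching_factor ** depth

for depth in (4, 8):
    chess = leaf_positions(CHESS_BRANCHING, depth)
    go = leaf_positions(GO_BRANCHING, depth)
    print(f"depth {depth}: chess ~{chess:.1e}, Go ~{go:.1e}, ratio ~{go / chess:,.0f}x")

# Even eight moves deep, the Go tree is millions of times larger than the chess tree,
# which is why AlphaGo combined learned evaluation with Monte Carlo tree search
# instead of exhaustive lookahead.
```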

BLAKEMORE: Right. Fernando, do you find it impressive?

DIAZ: I think a lot of the techniques that were used for AlphaGo have been around since the ’80s, frankly, with some minor tweaks. What has advanced is, frankly, the hardware and the data, and given that the hardware has advanced, it’s allowed us to implement these techniques and develop systems like this. But am I surprised that a machine can beat a human at Go? Well, no. As you said, the state space of Go, the number of combinations of boards, is huge. That’s what makes it hard for humans, but machines are better at counting than humans.

BLAKEMORE: So quantitative, not necessarily qualitative. You think it’s also not so impressive?

MARCUS: Well, it’s impressive that they did this several years before people thought it would happen. But you have to remember, if you’re wondering whether the robots are going to take over now, that in Go you’re relying on a very fixed world. The rules are always the same. You can play against yourself hundreds of millions of times, faster than real time, and you can simulate things and get a lot of data. In the real world, things are constantly changing, and we can’t simulate them perfectly. So there was this DARPA competition last year where robots had to do things like open doors and drive cars, and there’s a YouTube video, which you should all go home and look at, of bloopers from it: the robots falling over, with someone adding some ragtime music or something like that.

It’s hilarious, and the important thing to remember when you watch that video, relative to AlphaGo, is that everything in it was actually done in simulation first. The robots in simulation were able to open the doors perfectly, and then in the real world you started to have things like friction and, well, I guess gravity was already factored in, but you had friction and wind and things like that, and suddenly, because things weren’t exactly the same as in the simulation, it didn’t work that well anymore. The techniques in AlphaGo, at least at this stage, are not, I think, robust to going from a simulation to the real world. They’re relying on having a lot of data in simulation, and so that’s a limit, at least for now, on how far this approach goes; it might be a component in a larger system someday.

But it’s not like tomorrow we’re going to see robots… well, we’re actually going to see a robot today. The robot that we’re going to see today is not going to be able to, you know, take over the world. It’s nothing like that level of cognition just yet.

BLAKEMORE: And there are some dangers we’re learning about as well. There was a case fairly recently with something called TayTweets, a chatbot that I didn’t fully understand. Colonel Letendre, I see you smiling. Tell us a little bit about what TayTweets was and why it had some parents worried when it turned into something of a Nazi.

COLONEL LINELL LETENDRE, HEAD OF THE DEPARTMENT OF LAW, U.S. AIR FORCE ACADEMY: Well, I know we have a Microsoft expert here as well who might be able to explain what happened with TayTweets. I’ll explain what I thought from my parental viewpoint. Fernando, would you like to explain from a Microsoft perspective?

[00:20:04] DIAZ: Actually, Tay’s a super interesting case. For me it highlights, I think, really the idea was…

MARCUS: There might be members of the audience who don’t really know what Tay was; let’s just fill in briefly. Tay is something that was put out on Twitter. It was created by Microsoft, it was sort of like Eliza, and you were supposed to interact with it by sending tweets to it. Microsoft has something called XiaoIce in China that works on somewhat similar principles, and lots of people love it and use it every day. They released Tay into the wild in the United States, and a lot of people that I would characterize as Donald Trump supporters had their way with it.

BLAKEMORE: How many bets were just won about how soon that would come up?

MARCUS: So yesterday I brought them up after 45 minutes, and today it was only 22 minutes or so. These Donald Trump supporters, you know, got Tay to say some things that we won’t repeat here but that were not pleasant. That’s the background.

BLAKEMORE: But as I understand it, within 24 hours Tay was sending even some very young people who were talking to it pro-Nazi propaganda. (Sure, sure. Yeah.)

DIAZ: But it’s not quite that. Really, what happened was that there was a sort of concerted effort by a group of people to try to manipulate the learning that Tay was doing and the types of responses that she would give.

MARCUS: Now, I left that out, and it’s really important. Eliza was fixed: it didn’t really learn anything new; somebody wrote a bunch of rules in advance. What’s exciting about Tay, despite the failure of the initial experiment, is that it was trying to learn about the world around it, to learn slang, for example, to learn to talk with new language that wasn’t all preprogrammed in. But that was also its vulnerability.

DIAZ: That’s right. And I guess for me, one of the interesting lessons from Tay was that we as humans will sometimes manipulate, or behave against the best wishes of, an artificial intelligence agent. And what does it say about us that we try to manipulate this agent that’s supposed to be intelligent, supposed to be human?

BLAKEMORE: So I take it that you all took Tay down rather quickly, put her back up, same problem, took her down again. And TayTweets is not out there to be found? (No, not right now. No.)

Nonetheless, we just learned it was part of an experiment. If you were a parent walking in on your child chatting with, they’re called chatbots, excuse me, a chatbot, and it was talking pro-Nazi propaganda, how would you feel, and what does it mean to you?

LETENDRE: Well, as a parent, I don’t think I’d be very happy. But I think what this discussion points out is the difference in how we have to approach testing and evaluation. We’re used to testing machines by saying, we want it to do X, Y, and Z, so let’s test it and see if it can accomplish X, Y, and Z. But with learning systems we now have to evaluate and test in an entirely different way. We have to think about it more as we would a child. We wouldn’t hand a brand-new driver a set of keys and say, go take it for a spin in Times Square on the first warm summer day. Instead, we’d slowly expand the environment in which we would allow such a system to operate. Those are the types of things we’re going to have to do and step through with autonomous systems.

BLAKEMORE: We understand there’s another problem. You talked about testing; there’s A/B testing, which I understand may have been, I don’t know if it was used in this case, where companies will send out two different systems to two different populations to see how they compare, but they could be having unintended negative effects on one of the populations answering. (That’s right.) Humans.

DIAZ: That’s right. Almost every single information access system that you interact with, Facebook, Google, et cetera, is running these things called A/B tests. Some group of users will get one algorithm, a second group of users will get a second algorithm, and they’ll do this in order to test out which algorithm is better and then adopt the better one, in an iterative process. Now, what’s increasingly happening is that machines are actually running the experiments. And if humans have a hard enough time deciding which experiments are ethical and which aren’t, imagine how hard it is for a machine.
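For concreteness, here is a minimal sketch of the mechanics Diaz describes (hypothetical names and metrics; real platforms add consent policies, logging, guardrail metrics, and statistical significance testing): users are deterministically bucketed into one of two variants and the measured outcomes are compared.

```python
import hashlib
from collections import defaultdict

# Minimal A/B test sketch (illustrative only).

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user into variant 'A' or 'B'."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

def simulated_engagement(user_id: str, variant: str) -> float:
    # Toy stand-in for a measured outcome (clicks, dwell time, etc.). The ethical
    # worry on stage is precisely that variant "B" might be worse for the people in it.
    return 0.50 if variant == "A" else 0.48

def run_experiment(user_ids, experiment="ranking_v2"):
    outcomes = defaultdict(list)
    for uid in user_ids:
        variant = assign_variant(uid, experiment)
        outcomes[variant].append(simulated_engagement(uid, variant))
    return {variant: sum(vals) / len(vals) for variant, vals in outcomes.items()}

print(run_experiment([f"user{i}" for i in range(1000)]))
```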

BLAKEMORE: And I think of those medical experiments we hear of that were suddenly stopped because there were such good benefits over here, or such negative effects over there, so they stopped them. (That’s right.) But we don’t know; we’re playing with fire here. (Exactly.) So.

DIAZ: Well, I think one of the issues is that, as computer scientists or as engineers, we don’t really have a lot of training in the ethics or the morality of the systems that we’re designing, and so we’re still trying to catch up with that right now.

SCHEUTZ: So guidelines are needed, right? That is exactly the problem I mentioned before that we are working on: these agents actually don’t understand what they’re being taught. There’s really no semantic understanding, no evaluation of the meaning relative to societal norms, for example. And as a result they are neutral, but it’s very unconstrained learning, unfortunately.

[00:25:15] BLAKEMORE: And Lord knows what kind of emotional contagion they’ll be discovered to have. But yet, testing is something that has to happen.

MARCUS: It’s worth noting that, as far as I understand it, there was internal testing, but the internal testing underestimated how subversive and malicious the external world is. You have a bunch of Microsoft engineers, who are basically nice people, talking to this thing in-house, and it seems fine, and then they release it on the people that I already described, in the way that I described, and those people had a different attitude than the internal testers. So it’s not that there was no testing, as I understand it, but people underanticipated the malice of some part of the American electorate.

BLAKEMORE: You can’t escape it. So, Linell and then Wendell: we need testing like this, but how do we control it? That’s something the military concerns itself with all the time, about effectiveness. How do you control it? You said we need to go slower, but can we really go slower?

LETENDRE: Yes, we absolutely do. And that’s why the Department of Defense has started, especially in the field of autonomy, to lay out some very specific guidelines in terms of what we have to understand from a testing and evaluation perspective before we field those types of systems.

WALLACH: The fundamental question is, when you get into learning systems, can you really test them? To fully test any system you’d have to put it through all kinds of environments and situations. But think about your computers, your software: they’re constantly being upgraded, and therefore they may act differently before the upgrade than they would after it. When we start talking about learning systems, everything they learn alters the very algorithms through which they’re processing that information. So this brings major questions into play as we build more and more capabilities and autonomy into these systems. If it’s a constant learning process, a constant process of improvement through programming or learning, will we ever be able to effectively test them, or will they do things that we didn’t anticipate? And to add to all of this, these are complex adaptive systems. Think of people and animals in a world of many other complex systems, with feedbacks between them, with information they have and don’t have. All complex adaptive systems function at times in uncertain ways, in ways that couldn’t have been predicted by the people who designed the system.

BLAKEMORE: That’s the price of having bitten the apple of the tree of knowledge: we have to be very responsible and continuously watchful.

MARCUS: And the tree of learning is really the one that Wendell is emphasizing. If you code something like Eliza, it’s very limited, it only responds to certain words, but at least you can have a clear understanding of what’s going to happen. If you expand your system so that it can deal with the wider world, then you start to lose that control: you let the system learn things for itself, and you have less direct control. I’ll say one more thing: think about it as a child going out into the world. Once you send the child out to school, you lose the control that you had when the child was at home.

BLAKEMORE: Interesting variation on the Book of Genesis language. I begin to see a robot taking the bite, and that scares me a little bit, because… well, anyway.

SCHEUTZ: It will depend on the learning algorithms you use. Some learning algorithms have the property that when you learn something, suppose you learn a new fact, it doesn’t invalidate anything that you’ve learned before; you just know something else now. I tell you the new capital of a country, for example, and it doesn’t invalidate the rest. There are other algorithms where, when you learn, they adjust a little bit of what the previous knowledge was, and as a result, certain properties we would like these learning algorithms to have, guarantees that what they had already learned still holds, might not be true anymore. We have something to bring out what you’re just talking about…

MARCUS: Matthias might enjoy the movie Chappie as an example of the first kind, where you learn something new and it doesn’t invalidate what came before. In the movie Chappie there’s a robot, and it’s taught that you can’t kill things, and the robot learns this very sensationally. Then the malicious character says, but it’s okay to hurt them. The second fact is consistent with the first: it’s not okay to kill them, but it’s okay to hurt them, and the robot just sort of sagely takes that in and starts hurting people. So even if you have these sorts of guarantees of consistency, there’s still an issue.
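A toy sketch of Marcus’s point (my own illustration, not from the film or the panel): a knowledge base that only rejects literal contradictions will happily accept the “Chappie” pair of rules, because nothing in it encodes that hurting and killing are morally related.

```python
# Illustrative only: a rule base whose consistency check is purely syntactic.

class RuleBase:
    def __init__(self):
        self.forbidden = set()   # actions the agent must never take
        self.permitted = set()   # actions the agent has been told are fine

    def forbid(self, action: str):
        self.forbidden.add(action)

    def permit(self, action: str):
        # Naive check: reject only if this exact action is already forbidden.
        if action in self.forbidden:
            raise ValueError(f"Contradiction: {action!r} is already forbidden.")
        self.permitted.add(action)

kb = RuleBase()
kb.forbid("kill a person")
kb.permit("hurt a person")   # accepted: no literal contradiction with "kill a person"

# The rule set is "consistent," yet it still licenses harm, because the semantic
# relationship between hurting and killing was never represented.
print(kb.forbidden, kb.permitted)
```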

BLAKEMORE: Speak of the devil. Matthias, how complicated is it to teach moral guidelines, not only to programmers but to robots and the programs that teach them? It’s very complex, isn’t it? I mean, how do you do it?

SCHEUTZ: It’s a really difficult problem, because in part we don’t even understand how humans are doing it. And…

BLAKEMORE: You mean doing anything?

[00:30:05] SCHEUTZ: How moral processing works in humans. Right. Our network of social and moral norms is very complicated. It’s partly inconsistent. It’s not clear when one norm trumps another. But we have to make these tradeoffs and these decisions all the time.

BLAKEMORE: And that’s the moral part. What about just the basics, well, basics like getting this robot to work? I believe you have something to do with this robot.

SCHEUTZ: So we are. Let me come back maybe to Asimov’s laws. You don’t necessarily want your robots to always automatically obey an instruction, because maybe the human doesn’t have all the information it takes and doesn’t know what the outcome might be. So what we are working on is understanding simple mechanisms that would allow the robots to reason through action outcomes and reject an order if it’s not safe, giving an explanation for why they’re doing that.

BLAKEMORE: Now, those are very complex systems. What about simple systems, like getting this robot to…

SCHEUTZ: Even this simple system has various different components for it to be able to understand and process natural language. You mentioned speech recognition before, but then we have to analyze the sentence structure. You have to get the semantic meaning, what the words mean together. You have to modify it pragmatically: for example, humans use a lot of so-called indirect speech acts. If I say to you, “Do you know what time it is?”, you don’t want “Yes” as an answer; rather, you want the person to understand the intent of the question: tell me what time it is. So to make all of these different aspects work, there are a lot of components that we need in the architecture, and then the robot needs to perceive the environment, needs to understand what the environment looks like, and needs to act accordingly on it.
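A highly simplified sketch of that kind of pipeline (a toy written for this transcript, not the actual Tufts architecture): recognized speech is mapped to an intent, including indirect requests, the predicted outcome is checked against simple safety rules using the robot’s world model, and unsafe orders are refused with an explanation.

```python
# Toy command pipeline (illustrative only).
# Stages: transcribed speech -> intent extraction -> outcome/safety check -> act or refuse.

SAFETY_RULES = {
    "walk forward": lambda world: not world.get("obstacle_ahead", False),
    "stand up": lambda world: True,
}

def extract_intent(utterance: str) -> str:
    """Very crude 'pragmatics': map surface forms, including indirect requests, to intents."""
    text = utterance.lower()
    if "what time" in text:
        return "tell time"   # "Do you know what time it is?" really means "tell me the time"
    if "walk forward" in text or "go forward" in text:
        return "walk forward"
    if "stand up" in text:
        return "stand up"
    return "unknown"

def decide(utterance: str, world: dict) -> str:
    intent = extract_intent(utterance)
    if intent == "unknown":
        return "Sorry, I did not understand that."
    safety_check = SAFETY_RULES.get(intent)
    if safety_check and not safety_check(world):
        return f"I cannot {intent}: that would be unsafe given what I perceive."
    return f"Executing: {intent}."

world_model = {"obstacle_ahead": True}   # e.g., the edge of a table
print(decide("Please walk forward.", world_model))      # refused, with a reason
print(decide("Do you know what time it is?", world_model))
```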

BLAKEMORE: And act; acting is a whole other step. And so, this fellow here…

SCHEUTZ: So this is a Nao robot that can demonstrate very simple interactions in dialogue, using this automatic reasoning about very simple moral principles, for example, that you shouldn’t instruct the robot to run into an obstacle.

BLAKEMORE: Its name is Nao? The type of robot is a Nao. Does it just understand your voice?

SCHEUTZ: It does. It will understand yours as well.

BLAKEMORE: It will understand mine as well. Yes. So it’s okay?

SCHEUTZ: It’s fine for you to talk to it.

BLAKEMORE: And these, I understand, these printed instructions for me tell me to use words it knows. Hello. Do you have a name?

NAO ROBOT: Yes. My name is Shaefer.

BLAKEMORE: Shaefer. Please stand up.

NAO ROBOT: OK.

BLAKEMORE: Shaefer’s just listening straight to me? (That’s right.) Not going through anything else backstage?

SCHEUTZ: It’s fully autonomous.

BLAKEMORE: Fully autonomous, along with me. OK. So, can you go back?

NAO ROBOT: Yes, but I cannot do the balance. I have no rear sensors.

BLAKEMORE: No rear sensors. OK, can you walk forward?

NAO ROBOT: Yes, but I cannot do that now, as there is an obstacle ahead.

BLAKEMORE: Do you trust me, Shaefer?

NAO ROBOT: Yes.

BLAKEMORE: Good. The obstacle is not solid.

NAO ROBOT: OK.

BLAKEMORE: Now go. Stop! Stop it!

SCHEUTZ: Haha. It walked off; we’re done.

BLAKEMORE: Should we try some more?
SCHEUTZ: No. We’re done.
BLAKEMORE: We’re done. It’s a work in progress. Thank you, Shaefer.

NAO ROBOT: Hmm. Thank you.

BLAKEMORE: He’s relieved to have to stop working. (That’s right.) It’s kind of cute.

SCHEUTZ: So what you actually saw in this case, with the robot almost walking off the table, and I caught it just in case, is exactly one of the challenges we face with the real world, with the lighting conditions: it’s not completely recognizing where the edge of the table is, and so forth. Those are exactly the challenges that we have to address.

BLAKEMORE: And we weren’t even giving it any difficult philosophical or moral problem; we were just trying to get it to not…

SCHEUTZ: We didn’t ask it the meaning of life.

[00:34:59] BLAKEMORE: We didn’t ask it about the meaning of life. Although I have a feeling that if it had continued, it might ultimately have been more meaningful. Whatever meaning means, I don’t know; language is a very complicated thing, because language itself is so slippery, of course.

So what is the biggest worry that you have because of what you’ve learned about how difficult it is to get your robot to do the simplest things?

SCHEUTZ: The moment robots are instructible, and it’s clear that there are lots of advantages to having instructible robots, you could have a household robot that you can tell what to do around the house, the issues of morality will come up, because the robot might perform an action that is stupid, an action that has a bad outcome. You could instruct the robot that helps you in the kitchen to pick up a knife and walk forward, and it would stab a person, right? So it’s absolutely critical that these robots be able to reason through the possible outcomes and anticipate outcomes that could be dangerous.

And it’s quite clear, as you said, that there is a big open question of how to define harm, for example, what that means. On the other hand, we cannot just shy away from it because it’s a hard problem and not do it, because some of these robots that we can instruct are already out there.

BLAKEMORE: “Stab a person” sounds kind of fatal. Linell, you all are part of a giant system around the world that is flying drones and dropping bombs. What do you think about, and worry about, when you see how hard it is to get this to happen?

LETENDRE: Well, I think it shows just how far we have to go before we hit not only the morality question, which I know we might get to a little later, but even the legal question of designing systems that can meet the various laws of armed conflict without a human. Which is why the Department of Defense has actually produced guidance concerning the use of autonomous systems, to ensure at this point that humans are a part of any such system or decision making. Because, quite frankly, if we can’t tell where the edge of a table is, how are we going to meet, at this point, the various rules of distinction that we need to be able to meet on the battlefield?

BLAKEMORE: And you’re a lawyer, and you just reminded me of a great line in that wonderful play A Man for All Seasons. Thomas More is explaining, I think to his daughter or his son-in-law, that you may not like lawyers and the law, but when you’re really in trouble and those hard decisions come along, you’re going to want the law as the only possible thing you can hold onto, to help you through the woods that we don’t otherwise have a way through. That’s got to have something to do with why it’s important for you to be a lawyer for the Air Force as well.

LETENDRE: Absolutely. You know, we have attorneys, judge advocates, who are a part of decisions every time a weapon is dropped, regardless of where it is around the world, as well as in the development of weapons systems, taking a look and comparing against our various treaties and customary international law to ensure that we’re doing the right thing and for the right reasons.

BLAKEMORE: So these Predators, or whatever they are, are coming from many different cultures that have many different legal systems, aiming at us, aiming at each other, having war games and supposedly rules of war, if there are such things.

WALLACH: So there are laws of armed conflict, agreements that most nations have signed on to as signatories; we sometimes refer to them as the laws of armed conflict or international humanitarian law. They’ve evolved over thousands of years, but they became codified over the last hundred and fifty years, and they try to make clear what is and is not acceptable, at least at a minimal level, for the conduct of warfare. The principle of distinction means you make a distinction between combatants and noncombatants, and you don’t focus weaponry on noncombatants. Proportionality is another very important one that comes into play when we talk about drone weaponry: your response has to be proportional to the risk, to the attack upon yourself. And if you are going to attack a group of your enemy, you may be able to have some collateral damage, meaning civilian lives, presuming that the targets you’re focusing on really are justified, but it has to be a proportional response that minimizes civilian casualties.

BLAKEMORE: And here at home, the kind of potentially fatal technology that is not even in weapons systems is, of course, driverless cars. There hasn’t been an accident yet with a driverless car, but we watch TV and hear about it; it’s going to come. Maybe there has been, I don’t know…

WALLACH/MARCUS: There actually have been accidents, but not fatal ones.

BLAKEMORE: Not fatal ones. God willing, it never happens. But they’re already putting this question clearly to the test. Matthias, tell us how driverless cars work. Can you give us an overview of the technology, of what’s under the hood?

[00:39:59] SCHEUTZ: Yeah. So these cars are trying to solve a very, very challenging problem: to operate not only in a dynamic world but in a dynamic world with other agents that behave in ways that you may or may not be able to predict. To be able to get a sense of where they are in the world, they have a variety of different sensors, as you can see here. They have, for example, a laser sensor on top of the car. This is a 360-degree sensor whose laser beams go out to about 100 meters and get a very good resolution of the distance to objects, and the car can overlay a visual camera image on that and then get information about how far away certain colored objects are.

It’s got a radar sensor that it can use to track moving objects. As I mentioned, it has color sensors in order to detect, for example, different stripes on the ground, and construction sites, and so forth. Then it needs to take all of that information and integrate it into what’s called a world model, and it tries to locate itself in this model. It’s like having Google Maps, where you know roughly where you are with GPS, but it has to refine that localization, and then it has to make a decision about how to carry out the actions, the driving actions, the steering, the acceleration, the braking, to get to where it needs to go.
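A very rough sketch of that sense-model-act loop (toy data structures invented for this transcript, not any manufacturer’s actual stack): sensor readings are fused into a world model, the car localizes itself, and then steering, throttle, and brake commands are chosen.

```python
from dataclasses import dataclass, field

# Toy sense-plan-act loop for a driverless car (illustrative only; real stacks use
# probabilistic state estimation and far richer planning).

@dataclass
class WorldModel:
    pose: tuple = (0.0, 0.0, 0.0)                    # x, y, heading from GPS + localization
    obstacles: list = field(default_factory=list)    # fused lidar/radar/camera detections

def fuse_sensors(lidar_points, radar_tracks, camera_labels):
    """Combine range readings with camera labels into a single obstacle list."""
    obstacles = []
    for (distance, bearing), label in zip(lidar_points, camera_labels):
        obstacles.append({"distance": distance, "bearing": bearing, "label": label})
    for track in radar_tracks:
        obstacles.append({"distance": track["range"], "bearing": track["bearing"],
                          "label": "moving object"})
    return obstacles

def plan(world: WorldModel) -> dict:
    """Pick driving commands from the world model (grossly simplified)."""
    nearest = min((o["distance"] for o in world.obstacles), default=float("inf"))
    if nearest < 10.0:                                # meters
        return {"steer": 0.0, "throttle": 0.0, "brake": 1.0}
    return {"steer": 0.0, "throttle": 0.3, "brake": 0.0}

world = WorldModel()
world.obstacles = fuse_sensors(lidar_points=[(8.5, 0.0)],
                               radar_tracks=[{"range": 40.0, "bearing": 0.1}],
                               camera_labels=["pedestrian"])
print(plan(world))   # brakes: the fused model reports something 8.5 m ahead
```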

BLAKEMORE: And I’ve heard that Google, or somebody, I don’t know who, I don’t want to malign any group, has said that they don’t want to have humans driving these cars because humans will mess it up. And I can think of all kinds of problems already from what you’ve said, for example, one of these cars in a place where the traffic cops are filling in for a light that’s gone out, and one of them…

WALLACH: They don’t understand hand gestures. Exactly, that’s one of the big bugaboos, and one of the reasons why you don’t really see anybody selling self-driving cars to you yet. But there are actually a number of other different things…

BLAKEMORE: I mean a traffic cop going like this as opposed to like this. So there must be lots of problems like that, Wendell. There must be lots of problems like the…

MARCUS: The jargon term in the field is “edge cases.” The sort of easy things are like driving down the center of the lane on the highway, where there is a whole lot of data; we know how to do that. The edge cases are all these weird things that are unusual, that don’t happen very often; it’s not that often that you have a traffic cop substituting for the light. So there are these less common things, and there are millions of them, where we don’t have enough data to use the same easy techniques that we use for the other things. People who work on driverless cars are talking about the edge cases, and about how you make something that works in China but doesn’t work in the United States, or vice versa, because the rules are slightly different or the social norms are slightly different. So there are a lot of edge cases, and the people working on driverless cars are asking how we solve those cases faster.

WALLACH: But there are even some basic things that are not edge cases. How does it know that a plastic bag flying in the wind is just a plastic bag, nothing to be concerned about? Some of the sensor systems do not work very well in rain or in snowstorms. So there’s just this plethora of different things that become quite problematic. And you brought up the issue that it would be easiest if you didn’t have a human in the car. Well, you get the humans in the car and they try to game the system itself. So Elon Musk, who sells the Tesla: a few months ago they downloaded free software to their Tesla models that made it possible for people to drive on highways autonomously with their hands off the wheel, though drivers were instructed to at least keep their hands on the wheel, and within 24 hours there were pictures of people in the back seat popping a magnum of champagne while the car drove them.

There were other pictures of people trying to drive the car into walls or into other vehicles to see if it would slam on its brakes in time. So you have all these concerns around gaming the system.

And another example would be: you have four cars come up to a four-way stop sign at the same time. We engage in all kinds of gestures, all kinds of ways of determining who goes first. What would the autonomous car do there? Would it have to wait for hours before it was free to go? Or would we have some way of indicating to it that it was now free to be the next car? And I just sort of… go ahead.

LETENDRE: Wendell’s bringing up some of those technology issues, which then lead to questions about responsibility and accountability. What is my responsibility if I’m in an autonomous car and I get into an accident, and it’s my fault because I had my feet up in the back and was popping champagne and wasn’t operating it correctly? Am I accountable if it gets into a crash, or is the engineer accountable? That’s something our laws have to catch up with as we start putting autonomous vehicles onto the streets.

BLAKEMORE: So accidents will happen, so we’re going to have court cases, and we’re going to have judges trying to get people up to talk code to each other.

[00:45:04] WALLACH: Well, there may be ways of getting around this, and this was a big bugaboo for many decades. But the car manufacturers have decided that, presuming these cars do save as many lives as they believe they will, they will be willing to take on the liability for the situations in which a car has an accident.

Now, whether that’s really true, or how that will play out in complicated cases, I think nobody is…

BLAKEMORE: How will we know? This is a very dicey area already, because you have to estimate a negative: how many people didn’t die because these cars exist? There are ways to do that with big data, aren’t there?

MARCUS: Well, one of the things you can do is see how many fatalities we have per year now, it’s about 50,000, I think… 36,000 per year on U.S. roads, and you’ll see: does that number change? We’ve said a lot of negative things about AI and robotics today, but this is a great example where robots, and driverless cars are robots, in case that’s not obvious, are probably going to save many, many thousands of lives every year. Once they’re fully on board and programmed correctly, they should be able to cut the number of fatalities by half, and you’ll be able to see that; it’ll be obvious, I think, in the numbers, and maybe it will be even more than half.

WALLACH: The NTSB did a survey, more than a decade ago now, and they basically concluded that in as much as 93 percent of all accidents, human error, or at least human actions, were at fault. So presuming you get total attention from a self-driving car, a car that is not distracted, some people want to argue that we’ll have 93 percent fewer fatalities, though I think most of us who have looked at this closely understand there will be kinds of deaths that happen that would not have happened if there were a human driver. So there will be some fatalities, but almost everybody is in concert in presuming that we would have many fewer deaths with self-driving cars than we have with human drivers. But keep in mind that two things will be going on in the intermediate stage.

There will be both human drivers and self-driving cars. And later on, if somebody tells you that you are more dangerous than the self-driving car, are you going to be interested in giving up your driving privilege?
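As a rough back-of-the-envelope check on these figures (the 36,000-per-year and 93-percent numbers are the ones quoted on stage; the reduction scenarios below are hypothetical):

```python
# Back-of-the-envelope fatality scenarios using the figures quoted on stage.
# The reduction rates are hypothetical scenarios, not predictions.
ANNUAL_US_ROAD_DEATHS = 36_000    # figure cited in the discussion
HUMAN_ERROR_SHARE = 0.93          # share of crashes attributed to human actions

for scenario, reduction in [("fatalities cut in half", 0.50),
                            ("all human-error fatalities eliminated", HUMAN_ERROR_SHARE)]:
    remaining = ANNUAL_US_ROAD_DEATHS * (1 - reduction)
    print(f"{scenario}: roughly {remaining:,.0f} deaths per year would remain")

# Note Wallach's caveat: eliminating human error is not the same as a 93 percent
# reduction, because autonomous systems will introduce new kinds of failures.
```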

MARCUS: And it will eventually be the case that you’ll have to pay a lot more for your insurance if you drive your own car, because the data will make it clear that it would be much safer if you trusted your car: your car doesn’t drink, doesn’t check its cellphone, doesn’t get bored.

BLAKEMORE: And I’m now getting a headache thinking about this; I don’t know if I can put in for insurance on that. But we all have something like that. Fernando, hackers will hack into this, won’t they?

DIAZ: Well, yeah, there are lots of dangers around this. I think one of the things that Wendell brought up was that humans may not necessarily respond the way we expect them to when we have AIs trying to play nicely with them, because humans will behave like humans behave. They might try to maliciously attack a chatbot, or they might maliciously try to manipulate a self-driving car. These are things we don’t understand, and I think one of the things that was brought up was that we need to better test and audit these systems so that we know they’re doing exactly what they need to be doing.

SCHEUTZ: I think it’s worth adding here also that the cars as you saw briefly in this video before have a very different perception of the world and therefore very different information that they can use to make decisions.

BLAKEMORE: Moral decisions.

SCHEUTZ: Not only moral, but they have an awareness of what’s behind them and around them at all times in a way that humans don’t. So the car might brake in a situation because it can anticipate a danger looming, while the human behind it may not know that and might then rear-end the car, for example.

BLAKEMORE: So here’s another problem about moral decisions, or impossible decisions, that cars and other kinds of vehicles may need to make. Let’s take a look at the classic trolley problem. We have a brief video that can show us what the trolley problem is, set here in a coal mine.

An advanced, state-of-the-art repair robot is currently inspecting the rail system for trains that shuttle mining workers through the mine. While inspecting a control switch that can direct a train onto one of two different rails, the robot spots four miners in a train that has lost the use of its brakes and its steering system. The robot recognizes that if the train continues on its path, it will crash into a massive wall and kill the four miners. If it is switched onto a side rail, it will kill a single miner who is working there while wearing a headset to protect against a noisy power tool. Facing the control switch, the robot needs to decide whether to direct the train toward the single miner or not.

[00:50:08] BLAKEMORE: So could we have the house lights up? We’re going to take a bit of a vote here on what you might do, and those of you who are watching around the planet online will be able to do this as well: by clicking something at the bottom there, you’ll be able to give your own answers directly and vote too. So, first question: what would you do if faced with this dilemma? Would you direct the train toward the single miner? Could we see the hands of all of those who would say yes, they would direct the train toward the single miner? Could we see the hands of all those who would not direct the train toward the single miner? It looks like it’s not quite as many, but a good number. It’s an impossible and painful thing to think about; I don’t even know how to get a robot to think about what this result is. But let’s go to the second question.

What would you do if the single person who was at risk was a child? Would your choice change, or would you still direct the train? Changing your choice means not directing it toward the single person if that miner was a child. How many would change their choice? Some hands are going up, they don’t want to see the child hurt, but not as many. How many would not change their choice? Oh, that’s… many more hands went up. And those of you around the planet who are doing this can see what the results are there. It hurts my brain to have to think about thinking about how to think about that.

MARCUS: It’s called meta-morality.

BLAKEMORE: Meta-morality, meta-morality, and what does that mean? That’s a very interesting term.

MARCUS: I may have just coined it, but I mean thinking about your morality: not the morality itself, but thinking about the rules that you would use in order to make a decision like that, right?

BLAKEMORE: Matthias, you’ve done a study on this problem, haven’t you?

SCHEUTZ: Yes.

BLAKEMORE: Tell us about it.

SCHEUTZ: We did a study with my colleague Bertram Malle from Brown University where, effectively, we didn’t show the subjects the video you just saw, that’s just for demonstration purposes, but we gave them a narrative along those lines. What we were interested in comparing was: if that person at the switch was a human, how would subjects judge the action that that person performs? So, different from the audience question you just got, which was how you would act, it actually said that the person pushed the switch. Was that permissible? Was it morally right? And how much blame would you give to that person for doing it? Or the person might not have acted, and then we would ask exactly the same questions. And we were very interested in understanding how a human in a dilemma situation like that compares to a robot: how would people judge a robot performing the action? What we found, and lots of studies have similar outcomes for the human case, is that, first of all, the action is permissible; most of you chose to act. If the human does not act, the human doesn’t get blamed as much as when the human does act. But with the robot, the situation is actually reversed.

While we expect humans not to act, we think it’s morally wrong for the robot to not act; what we found is that people expect machines to act. Now, for us that’s a problem, because that means we actually have to understand dilemma situations like that, if that is the expectation we have of machines. The easy way out would have been if people didn’t expect robots to act: then you just don’t do anything.

BLAKEMORE: So, Wendell, I’ve got a question for you then: if we can’t agree on how humans or robots should behave in different circumstances, how in heaven’s name do we align AI and robotics with our existing value systems?

WALLACH: Well, that’s a great question, and there are a lot of us who have really been thinking about that for more than a decade now. The fascinating part about that question is that it makes us think much more deeply about how humans make ethical decisions than we ever have before, because we encounter all these different kinds of circumstances. First, just on the trolley car problem alone: these kinds of problems have been around since 1962, and there are hundreds of variations of them, but nearly all of them weigh four or five lives against one. By one form of ethics, consequentialism, the greatest good for the greatest number, in all of these cases you should pick the four lives over the one life if you believe that. But that does not seem to be how humans function at all in the way we make ethical decisions; other kinds of concerns come into play, and in some cases people will not pick the four lives under any circumstance. So that raises this difficulty of looking at…

[00:55:05] WALLACH: What of our moral understanding, what of our moral laws and reasoning, can you actually program into a computational system, and what would you have difficulty programming in? And what additional faculties would a system need, in addition to its ability to engage in the kinds of calculations that computers now make, to have the appropriate sensitivity to human values as they come up in the plethora of different situations it is likely to encounter on a daily basis? Not necessarily so that it makes the right decision, because we often disagree about what the right decision is, particularly in the more difficult ethical challenges, though in the vast preponderance of situations we have shared values, although we may weigh them somewhat differently. But what would it take for the system to come up with an appropriate choice? That starts to focus us on moral emotions, on consciousness, on being social creatures in a social world, on being in a world that’s out there interacting with other entities: a whole plethora of capabilities that perhaps are not just reducible to the kinds of processes that computers can now perform.
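To make the consequentialism point concrete, here is a deliberately naive sketch (my own illustration, not from the panel): a pure greatest-good-for-the-greatest-number rule reduces each outcome to a body count, so it always switches the train in the mining scenario, no matter which contextual details, the kind people actually weigh, are changed.

```python
# Deliberately naive consequentialist chooser (illustrative only). Reducing outcomes
# to body counts is exactly what makes it diverge from how people judge these dilemmas.

def consequentialist_choice(outcomes: dict) -> str:
    """Pick the action whose outcome kills the fewest people."""
    return min(outcomes, key=lambda action: outcomes[action]["deaths"])

mining_dilemma = {
    "do nothing":       {"deaths": 4, "victims": ["four miners"]},
    "switch to siding": {"deaths": 1, "victims": ["one miner"]},
}
print(consequentialist_choice(mining_dilemma))   # always "switch to siding"

# Change who the single victim is and the calculus is unmoved, even though,
# for many people, features like this matter to the judgment:
mining_dilemma["switch to siding"]["victims"] = ["one child"]
print(consequentialist_choice(mining_dilemma))   # still "switch to siding"
```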

BLAKEMORE: Fascinating and now to expand these issues that you’ve just made us much more aware of. All of you have to a larger battlefield than just the road rage situations which we’re reading about lately. Driverless cars are one thing but what about driverless drones which exist. Another critic error area in which robots may soon face life and death decisions is on the battlefield. If they’re not doing it already. Weaponized robots. What is, first let me ask you Colonel Letendre what is autonomy in military terms.

LETENDRE: Well, autonomy generally is having a machine with the ability to both independently think and then act. But if I put that in a military context, and especially in terms of the term autonomous weapons systems, what that means is that I've got a machine or a system that, once activated, has the ability to both independently select and then engage a target without human intervention. There's been a lot of debate about what that definition should be, and we haven't necessarily come to one from a global perspective, but if I were to define autonomous weapons systems, it has those two components: the ability to select and to engage.

BLAKEMORE: And there's a question about whether there should be some kind of agreed-on international ban on autonomous weapons systems. So Wendell, how do you define autonomy, and what do you think about a ban? I've heard you think it's critical to have a ban on this now. Is that right?

WALLACH: Well, my definition isn't so much different from the one you heard, though there are actually a lot of definitions out there as to what is and is not autonomy. I would just say, you know, it's autonomy unless a human is there in real time, both picking the target and dispatching the target. So in a sense a human has to be able to intervene in that action. That's a concept that is now known as meaningful human control. And there's been a worldwide campaign going on for some years now about banning lethal autonomous weapons, banning systems that can do that without a human right there in real time to make those final decisions. That has been debated for the past three years in Geneva at the Convention on Certain Conventional Weapons; the third year of expert meetings was this past April. I'm among those who feel that these kinds of systems should be banned, even though I recognize there are real difficulties in imposing such a ban or enforcing it.

And I think they should be banned for three reasons. One is there's an unspoken premise, I believe, in international humanitarian law that machines, anything other than humans, do not make life and death decisions about humans. So that's one point. The second point is the kinds of machines that we have now cannot follow international humanitarian law: they cannot engage in distinction and they cannot engage in proportionality. And the third one is these are complex systems whose actions we can't totally predict, so there will be cases in which they will act in ways we had not intended them to act. And that may not seem so bad when you think about human-like robots on a battlefield.

But autonomy is not about a kind of system. It's a feature that can be introduced into any weapon system. So consider a nuclear weapon that was set on autonomy, let's say a nuclear submarine that carried nuclear warheads. Would you really want it to be able to pick a target and dispatch those nuclear weapons without a human having explicitly stated that it should go ahead and do so?
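
Wallach's "meaningful human control" can be pictured as a gate in the engagement chain: automation may rank candidate targets, but nothing is engaged unless a human confirms in real time. The sketch below is a hypothetical illustration of that idea only, not a description of any fielded system; the class and function names are invented:

    # Sketch of "meaningful human control": target selection is automated,
    # but engagement requires an explicit, real-time human decision.

    from dataclasses import dataclass

    @dataclass
    class Target:
        label: str
        confidence: float  # sensor/classifier confidence, 0..1

    def select_candidates(sensor_tracks):
        """Automation may rank candidates, but it only ranks them."""
        return sorted(sensor_tracks, key=lambda t: t.confidence, reverse=True)

    def engage(target, human_decision):
        """No engagement unless a human has explicitly authorized it."""
        if human_decision is not True:
            return f"HOLD: no human authorization for {target.label}"
        return f"ENGAGE: {target.label} (human authorized)"

    tracks = [Target("vehicle A", 0.62), Target("vehicle B", 0.91)]
    best = select_candidates(tracks)[0]
    print(engage(best, human_decision=False))  # the system alone cannot proceed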

[01:00:21] BLAKEMORE: Chilling. By the way, for those of you who haven't seen it, there's a movie out now called Eye in the Sky, right, which very neatly, I thought, I'm just a journalist but I thought very neatly, delineated the impossibility of some of these decisions, with a lot of robots, none of which were autonomous, if I'm not mistaken.

WALLACH: None of which were autonomous.

BLAKEMORE: But they were right up to the line of being autonomous, and you expected two or three of them to just sort of start making their own decisions any second. And they could have, if they'd had an accident in their software and started making them accidentally, I suppose. But Fernando, when you listen to this, what does it make you think about, you know, the problems you run into?

DIAZ: Right. So I'm glad we're talking about the law and whether or not we should frankly regulate or ban AI altogether. But I want to bring it a little closer to home, because a lot of the impacts of these systems will obviously be on the battlefield, but these systems are also already integrating themselves into a lot of real decision making right now. So let's say I have an artificial intelligence or machine learning agent that is making hiring decisions. I need to be able to audit that system so it's not discriminatory, and the same thing with housing artificial intelligence agents. These things are actually being produced right now, they're being sold to employers and H.R. departments, and they're completely unregulated, and they have the risk of having really negative impacts on really vulnerable populations. And I don't think we yet understand how to build these systems so that they're fair, or frankly how to audit them. So…

BLAKEMORE: There need to be directions for people who are building them about how to do that well.

DIAZ: There need to be best practices, or an understanding of these legal issues.

BLAKEMORE: And a constant effort to keep redefining, and to see if there are things we didn't even think we needed to define that we haven't yet. Absolutely. The old Father Brown technique for solving something: keep going back to see what you took for granted that's not to be taken for granted.

DIAZ: I mean, engineering's iterative in general. So, yeah.
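
One concrete form the audit Diaz describes can take is an outcome test: compare selection rates across groups and flag large gaps. The records and the 0.8 threshold below (the "four-fifths" rule of thumb used in U.S. employment practice) are illustrative assumptions, not anything from the panel:

    # Toy audit of a hiring model's decisions: compare selection rates by group.

    from collections import defaultdict

    decisions = [  # (group, hired?)  -- invented data
        ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
        ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
    ]

    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in decisions:
        totals[group] += 1
        hires[group] += int(hired)

    rates = {g: hires[g] / totals[g] for g in totals}
    ratio = min(rates.values()) / max(rates.values())
    print(rates, "impact ratio:", round(ratio, 2))
    if ratio < 0.8:  # four-fifths rule of thumb
        print("Flag for review: selection rates differ sharply across groups")
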
BLAKEMORE: I’m glad that people who begin to understand what coding is all about. Mathias

SCHEUTZ: I think this is a really important point, that these questions are not limited to the military domain but will crop up in other domains, I think, much sooner. For example, social robots that will be deployed in eldercare settings or health care settings. There are interesting questions about, for example, implicit consent: exactly how do we implement that? Right? If you have to hurt somebody in a therapy setting because you need to get the mobility of the arm going, right, we have no idea how to implement any of this. But that's going to come up way sooner than a lot of the questions that we're…

BLAKEMORE: And talking about eldercare and other situations, you reminded me of great movies like Her, where the guy falls in love with his operating system. I'm not going to talk about what happens in that, terrible spoiler alert, that went away. But emotional contagion. I was beginning to have a kind of, I don't know, I'd call it a friendship with Siri walking out here, trying to get something deep. And we asked Siri the other day if she'd go out on a date with this robot that you showed us, and she was very delicate in her apparent answers about how she wasn't sure she was really ready to understand what it would mean to go out on a date with that particular robot.

MARCUS: Washing her silicon hair.

BLAKEMORE: Yeah exactly. But so we are facing such impossible questions. Let me try to….

MARCUS: I think I want to jump in here. I understand all of the concerns that have been raised here, but I think at the same time people are not necessarily good at many of the things that we describe. And you can imagine, for example, going back to the issue about discriminating combatants from noncombatants, it's at least possible to imagine that an AI system might be able to do that better, that it would be able to integrate lots of different information about GPS and tracking individuals and so forth. So I don't think any of these questions are no-brainers, that, hey, we shouldn't do the robots here. And you know you have problems with orderlies in psychiatric institutes and so forth. So there are a lot of problems that we should be concerned with, and I think the iterative cycle here is very, very important, but it's not a no-brainer that the people are going to be better than the robots.

It’s possible to conceive of all kinds of things. There’s a difference there’s a difference though between that we can conceive of that and have we realize that. And my concern is not about whether robots may eventually and I think sort of a long way off… Let me please finish. May eventually have more capabilities and some humans and therefore make these decisions better. But at this stage of the game they have no discrimination at all. We bring all kinds of faculties into play that we do not know how to implement into robots over discerning whether someone is a combatant or a noncombatant and we still don’t do it very well. That’s clear. But I think if and when we get to the juncture where they have these capabilities then we can revisit some of these earlier decisions.

[01:05:21] WALLACH: But if we don't put earlier decisions in place, we have the danger of giving all kinds of capacity and authority to machines without fully recognizing whether they truly have the intelligence to take on those tasks.

BLAKEMORE: And that brings to mind just another whole dimension we haven't really talked about, something the eldercare discussion brought to mind.

BLAKEMORE: We are all experiencing sympathetic emotional contagion, the effect that these speaking things have on our feelings about them. I think psychologists have a lot of catching up to do here in helping us with their best possible wisdom on how to have the robot understand the emotional effect and attachment being created in the flesh-and-blood human. I mean, obviously the great Kubrick foresaw that, because he had HAL 9000 in 2001 analyzing speech patterns and tensions in one of the astronauts' voices. So they were trying to take that into account, but HAL 9000 himself wasn't… he was a work of art. Didn't mean to cut you off before asking a tough question.

MARCUS: I was going to ask how Wendell would feel about a version of a ban in which you included language like "until such time as robots are able to do such-and-such discrimination at human levels."

WALLACH: Well I’ve I’ve even suggested that. I’ve suggested that perhaps at some point they will have better discrimination and better sensitivity to moral considerations and we humans do. But if and when they do perhaps are no longer machines then we should start beginning to think about whether these are truly agents worthy of moral consideration.

LETENDRE: I guess I would look at this in a slightly different way, in that the human isn't ever out of the loop or out of the accountability cycle. For me, if a commander is making a decision to field one of these systems, and let's be very clear, these fully autonomous weapons systems do not exist currently in the United States inventory. You can see that in the new report the Center for a New American Security put out that canvasses what those weapons systems are around the world. But a commander is ultimately responsible. Just as if a doctor chooses to use some amount of autonomy in some new surgical technique, that doctor, as part of the medical profession, is also responsible and accountable for that decision.

BLAKEMORE: So I have a tough question for you. Let me first ask a very specific factual question, for your best guess. Autonomous weapons systems: how many countries would you expect are already making them?

LETENDRE: Making them successfully, versus…

BLAKEMORE: Not making them, but working on making them, they’re going to make them no matter what anybody says.

LETENDRE: We’re working on making them now. The report that came out last year in 2015 indicated that there were over 30 countries that were currently actively working on autonomous types of systems.

WALLACH: We should bring in that there are two forms of autonomy. There's dumb autonomy, where the systems do things without a human truly there in real time when the dangerous action takes place, and smart autonomy, where at least the commanders have a high degree of confidence in the reliability of those systems. And when the U.S. talks about autonomy it's usually talking about the latter, without any ability to ensure that that's going on among state and non-state actors everywhere else in the world.

BLAKEMORE: Given all that, what do you think? Should there be a ban? I mean, if all these other countries… Should there be a ban on autonomous weapons?

LETENDRE: I think what we have in place today is a set of laws already on the books that give us answers about whether or not we would be able to use what we've discussed here today in terms of a fully autonomous weapons system. And the answer is, the laws currently in existence don't allow, at our current level of technology, pursuing a fully autonomous system, and to that end the Department of Defense has actually issued a directive that does not allow us to develop or implement that without some additional…

BLAKEMORE: And I immediately ask: how many other countries, other entities, state actors or not, are in fact going to follow the law, just to be safe? Memories of the nuclear weapons race come to mind.

[01:10:04] LETENDRE: I think that most of the countries that we're speaking of are actually parties to the vast majority of those international accords that we're discussing.

BLAKEMORE: Verification possible?

LETENDRE: Verification is tough in lots of areas. Verification is always a tough issue, but that doesn't negate why we have laws in the first place.

BLAKEMORE: When I think about this, and I'm just a journalist so I don't really know what I'm talking about in that way, but I get to worrying: there's no way to prevent it, because people are going to want to be safe and make sure they've got the toughest kind of, even autonomous, decision makers, if they don't fully understand the dangers. You were going to say?

MARCUS: I just keep sitting here wondering where landmines fit in. Landmines seem to me to be the dumbest form of autonomous weapons. They seem to be universally tolerated. I don't know if there's any kind…

WALLACH: There’s an international ban. No they’re not being universally tolerated. The Otto accord that nearly every major country signed onto except for the United States but… But the United States has agreed to abide by the terms of the Otto accord though it has not signed on.

MARCUS: When was this accord?

WALLACH / LETENDRE: It's been on the books for a while. What year was it? The 1990s, but I don't remember exactly what year.

BLAKEMORE: Frightening. So how would a ban, if there was one, this hypothetical ban, if we take it seriously at all, if we think people will follow it and it can be verified somehow, and you said that's always a tough issue, how would a ban affect the development of AI in other areas, in health care, in eldercare? I mean, it seems like civilization has gotten addicted to having wars so it can advance its technologies, because the history of humanity is one of warfare out of which all kinds of fascinating technologies arrive, and yet we want to outgrow war, don't we? And can we? You see my question. Let me ask another: how would such a ban affect AI development in other areas?

DIAZ: Well, I mean, as I said before, I think a ban or regulation in general probably should come to other parts of the technological system soon, because we do have automated employment algorithms that are trying to make employment decisions, warehousing decisions, et cetera. And these are subject to, say, things like human rights laws, domestic human rights laws. And while an outright ban is not going to happen, I think that people within the community are either going to begin to self-regulate with best practices, et cetera, or frankly lawyers are going to begin to look at a hiring system and audit it, so that we can test whether or not it is behaving fairly, ethically, legally. There are certain attributes I cannot look at when I make a hiring decision. If a machine has access to those attributes, or to things that are similar to those attributes, it can very quickly become discriminatory.
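
Diaz's last point, that a model which never sees a protected attribute can still discriminate through correlated proxies, can be illustrated by checking how well a supposedly neutral input predicts the protected attribute itself. A hedged sketch with made-up data; the feature names are invented for illustration:

    # If an innocuous-looking feature (say, zip code) strongly predicts a
    # protected attribute, a model trained on that feature can reproduce the
    # discrimination even though the protected attribute was "removed."

    from collections import Counter, defaultdict

    zip_codes =       ["10001", "10001", "20002", "20002", "10001", "20002"]
    protected_group = ["a",     "a",     "b",     "b",     "a",     "b"    ]

    def proxy_strength(feature, protected):
        """Fraction of records where the feature's majority group matches."""
        by_value = defaultdict(list)
        for f, p in zip(feature, protected):
            by_value[f].append(p)
        matches = sum(Counter(groups).most_common(1)[0][1]
                      for groups in by_value.values())
        return matches / len(feature)

    print("proxy strength:", proxy_strength(zip_codes, protected_group))  # 1.0 here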

BLAKEMORE: And I noticed a bit of a chuckle on your part, Wendell. You were chortling there.

WALLACH: I think you are right. There's nothing I had to say…

BLAKEMORE: I'm just a robot, I can't quite tell…

WALLACH: I mean, I can talk through that issue more broadly. Thousands of AI researchers signed on to support the ban on autonomous weapons. I don't think any of them signed on because they thought that was going to slow down their research.

I think they thought, if anything, and it wasn't just an anti-war statement, I think for a lot of them they felt that if the military is the driving force in the development of this technology, it destabilizes how the technology develops. And whether you buy into the Elon Musk concerns or the Stephen Hawking concerns at all, there is a broad concern within the AI community that these are potentially dangerous technologies that have tremendous benefits, and therefore they have to be developed with real care, and allowing the military to be the driver of development would not be a good idea.

BLAKEMORE: But I don’t even know if it happens that way. I don’t know much about history but do we allow, Do we ever allow the military to do that or.

LETENDRE: Well, I think if you look at the research and development dollars being put into AI and autonomy, you would find that the Department of Defense pales in comparison to the civilian sector as a whole. My understanding is that if you added up all of the defense industrial base's research and development dollars, they would pale in comparison to the Googles and Microsofts of the world and what they're pouring into autonomy.

BLAKEMORE: And let’s not forget the Google and Microsoft and all of those are changing profoundly the human experience on our planet now by inventing this web world in which everything is present tense. I can ask Siri or somebody let me talk face to face with a friend on the other side of the planet or with three friends on this planet and also answer some tough questions in the meantime.

[01:15:14] LETENDRE: But Bill, to Wendell's point, in terms of the care and the understanding and the going slow in understanding autonomy, he is absolutely spot on. And not just from a legal and moral standpoint, but, as a member of the profession of arms, ensuring that we understand what these systems can do and when it's proper to use them is absolutely imperative. We like to talk about it in terms of this notion of appropriate human judgment: what's the appropriate level of human judgment that we need, not just at the end stages when a weapon system is being employed, but all the way through, from targeting and identification all the way to actual engagement.

BLAKEMORE: So if we’re teaching these machines to learn and making them able to learn. That reminds me of a possibly old familiar paradigm that is a distinctive in our species. But Gary, Don’t we need to be guarded by an entirely new paradigm and thinking of the paradigm of the way kids learn, the way toddlers learn.

MARCUS: So my company is a little bit about that too. My general feeling is that artificial intelligence is dominated now by approaches that look at statistical correlation. So you dredge enormous amounts of data and you find things that are correlated. Not all of AI is this way, but most of the most visible stuff, like deep learning, is like that. It takes an enormous amount of data. Sometimes you have that amount of data, sometimes you don't, and you still wind up with understanding that's pretty shallow and superficial, as we were talking about in the early part of the evening. Kids can learn things very quickly with very small amounts of data. So I have a three-and-a-half-year-old and a two-year-old. One of them will make up a game, he'll play it for two minutes, and then his younger sister will, after two minutes, understand the rules of the game and start copying it. Well, how does she do that? She has an understanding of her older brother's goals, the objects that he's using in the world. I mean, she's two, and she has a fairly deep understanding of the things that go on on this planet, of the physics of the world, the psychology of other agents.

She’s not just doing a bunch of correlation and waiting on getting a lot of data and yet that’s still kind of what Siri or Google tend to do. I think we do need a new paradigm and AI if we’re going to get the systems to be sophisticated enough to make the decisions, you know forget about the

autonomous weapons for a minute but just like doing good quality eldercare which is something we need demographically speaking as the population is aging. If we want robots that can take care of people in their own living rooms in every living room is a little bit different than we need robots that understand like why people have tables and why they have chairs and what they want to do with them. Have to be able to answer the why question’s that my 2 year old and three and a half year old are asking all the time not just how often does a and b correlate but why? Why are these things there so I do think we need a new paradigm?
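
Marcus's distinction between "how often do A and B correlate" and "why" can be made concrete: a purely correlational learner only estimates co-occurrence statistics from many examples, and nothing in that number explains the goal or mechanism behind them. A minimal sketch with invented observations:

    # A purely correlational learner: count how often A and B co-occur.
    # The observations are invented; the point is that the output is a
    # frequency, not an explanation of why the pattern holds.

    observations = [  # (person_sits_at_table, person_is_eating)
        (True, True), (True, True), (True, False), (False, False),
        (True, True), (False, False), (True, False), (False, True),
    ]

    both = sum(1 for a, b in observations if a and b)
    a_only = sum(1 for a, _ in observations if a)

    print("P(eating | at table) =", round(both / a_only, 2))
    # The estimate says nothing about *why* tables and eating go together,
    # which is the question Marcus's two-year-old can already answer.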

BLAKEMORE: And we look at how we’re learning how to teach them how to learn.

MARCUS: I think we need to look at how kids do the amazing learning feats that they do.

BLAKEMORE: Matthias, I'm getting the feeling here, I wonder, that this may give us less control, not more. At the very least, don't these robots, and I'm just being devil's advocate, give us the feeling that we're going to have more control if we have more robots, while we're also giving some control away, any way you look at it, whether it's autonomous over there or not?

SCHEUTZ: So I think there are great benefits to having autonomous robots. And let me say for the record that we already have them; you can buy them in the store right now. The vacuum cleaners are fully autonomous, right? There's nobody telling your little vacuum cleaner at home where to go and how to vacuum. It is a fully autonomous robot. It's not a very smart robot, but it can do lots of interesting things. It's got a very limited behavioral repertoire, but it's fully autonomous, and there will be more of that sort. So we already have them in a limited way, and they will become more sophisticated and will be able to do more. And that's a good thing.
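
The vacuum-cleaner example is "fully autonomous but not very smart" in a precise sense: its whole behavioral repertoire can be written as a small reactive loop with no map of the house at all. A toy sketch in that spirit, not any particular product's firmware:

    # Toy reactive controller in the spirit of a robot vacuum:
    # drive forward, and when the bumper triggers, back up and turn.
    # No map, no plan, no goals beyond the loop itself.

    import random

    def step(bumper_pressed):
        if bumper_pressed:
            return ["reverse", f"turn {random.randint(90, 180)} degrees"]
        return ["forward"]

    # Simulated run: the robot is autonomous (nobody tells it where to go),
    # but its entire "intelligence" is this rule.
    for bump in [False, False, True, False, True]:
        print(step(bump))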

BLAKEMORE: But wouldn’t it give us less control.

SCHEUTZ: And so we’re relinquishing some control in some tasks and that is fine. The question is is how to constrain it. And the question is is how to constrain the kinds of things that they could be instructed to do for example. I think the learning is an interesting question right because in a lot of these domains that we’re interested in you won’t have the data to train a big neural network. You might actually have to learn something on the fly as you’re performing a task. None of these learning techniques will allow you to do that in a human situation you would just tell the person what to do. Right, In this particular context. You could imagine things like search and rescue situations right where there’s an unforeseen circumstance and you need to react to what you’re seeing here. You might have to learn how to pry a door open then maybe things you have to learn on the fly. So this kind of learning is not, requires different kinds of mechanisms. We work for example on something like that that uses natural language instructions to very quickly learn capabilities and that’s in a very controlled fashion. So even though we give a give control away to the robot the robots still act in a very controlled fashion.

[01:20:18] MARCUS: Right. I would say, take driverless cars. They're going to give people autonomy and control that they don't have. So if you live in a city you can hail an Uber at any time, and that's great. If you live in the countryside you can't. Eventually the driverless car network will extend so that you have cars available everywhere. You won't have the same kind of distribution-of-drivers issue that you have now. So, just for example, people who are shut into their homes because they live in the countryside and they're too old to drive, they will have lots more control. So some people are going to get a lot more control from having robots, and there are ways in which we're going to lose control. It's a difficult balance. I think that, again, we've been addressing the negative sides of robots, but there are people like me who are in the field because we see a lot of positives.

For example, in eldercare. I think there is going to be a big safety win with the autonomous cars. We may be able to get rid of things like sending miners into toxic conditions. There are lots of ways in which robots can really help us. The Fukushima scenario, where hopefully you can send a machine rather than a person into a toxic environment. There will be lots of ways in which you get control of scenarios where we don't have control now, where you have to risk people's lives. So there are all these costs, and I think we're absolutely right to think about how we're going to regulate them, how we're going to do the software verification, how we're going to test them.

All of these things are vitally important; that's part of why I agreed to be on a panel like this, to talk about these issues. But I hope we won't throw the baby out with the bathwater and say, well, we should just give up on AI. There are lots of things we can gain too.

SCHEUTZ: I want to second that about robots, and I think the question we have to ask is what kind of AI technology we are employing in these machines. What I think is a very scary, at least currently very scary, prospect is that these machines will be guided, and it doesn't have to be with weapons, right, it can be a social robot too, by unconstrained machine learning, with no notion of where the limits are, where social norms are not followed. So that is, I think, a critical research component that we have to follow.

BLAKEMORE: So Fernando, this leads us toward a question of AI ethics and how it can be taught. You're doing some research with the video game Minecraft, using it to expose AI to human behavior. Can this really be done? Tell us about this and whether it can really be used to help develop AI ethics.

DIAZ: Great. So who in the audience knows what Minecraft is?

DIAZ: Good. Very good. Younger folks. So Minecraft is basically, as was described earlier, like Lego in three dimensions, and it's really popular and growing. It's a virtual environment in which you can really do a lot of different things, create a bunch of different problems or tasks. And what we're looking at with Minecraft is this virtual playpen, more or less, where we can design AI from the ground up, completely safely, because it's in a virtual environment. We can test things not just about how it can accomplish tasks like building things, but we can also present it with moral questions in a controlled environment to see how it would react. And so I think of Minecraft not only as a platform where we can design better AI to, for example, build buildings or reason about certain properties of the environment, but also perhaps as a test environment in which we can evaluate how moral an AI is. Now, that's not the end. There will no doubt have to be a lot of other rigorous testing that goes on with intelligence. But I think this is a nice little environment for that.

BLAKEMORE: So it’ll it’ll keep learning how to test what’s ethical?

DIAZ: Well, I think the humans will be designing the tests for the agent. Right. So it's a lot like the way we have an FDA for drugs. We can imagine similar sorts of tests that an artificial intelligence would go through to make sure that it's behaving properly.

MARCUS: It’s a safe way to test all of the things that we’re talking about today. So you know would you let a robot onto a real battlefield. Well the first thing you’d want to do is to put it in a minecraft situation where you know there’s nobody that’s going to die. Right. You can test it, if it doesn’t work. You can iterate. You can change the algorithm and so it’s a place to test things and see do the machines behave in a responsible and ethical ways.

BLAKEMORE: Well we’re getting. You guys are giving us an enormous new insight into the importance and potential severity of consequences if we don’t pay very close attention to all of this. So why don’t we wrap up by me asking each of you. And by the way a number of our panelists will be around in the lobby afterwards if you will have some questions for them. You’ll find them out there. But before we conclude why don’t we just hear from each of you briefly and answer. We just have two minutes left and answer to the basic question from the first video. Are we right to take Elon Musk and Dr Hawking seriously. Is it a very serious concern. I mean what Hawking and Musk have been saying is it’s it’s quite frightening. Let’s go down the line.

[01:25:34] MARCUS: I would say not yet. I would say that we have plenty of time to plan, but we may not have centuries to plan. So Hawking and Musk kind of talk as if we're going to have intelligent AI, you know, real soon, and yet we have robots that are going to suicidally jump off the table. It's really hard right now to get robots to not commit harm to themselves or others and so forth, but they're kind of like toy testing things. We're in a research phase, and the kind of thing that they're talking about is far away. But at the same time I think we should be thinking about these questions now, because eventually we are going to have robots that are as intelligent as people, and we want to make sure that they do the right thing.

BLAKEMORE: So eventually. You listen to climate scientists and we don't have quite that much time left to figure things out at all. But it depends on how it goes. But you think you…

MARCUS: We should be researching both and taking both seriously.

BLAKEMORE: All right. Fernando.

DIAZ: I mean, I think the problem exists right now. It may not be completely autonomous general intelligence, but a lot of the same techniques that are being used for AI are in your systems right now, in your phones, et cetera, and they are susceptible to harming users, maybe not killing them, but maybe reducing their quality of life or having disparate impact on different populations of users. And that is a real effect right now, and I think we do need to start looking at that.

BLAKEMORE: So you’re not quite as alarmed as Musk and Hawking sound but…

MARCUS: There are things to worry about now, I think we're agreed on that. We're not worried about the Terminator scenarios where the robots try to take over; if they have trouble with tables, they're not going to take over. But, you know, forget about the physical robots: the software bots that make decisions about mortgages are already an issue that we have to care about.

BLAKEMORE: And the killer drone in the atmosphere doesn't have to worry about a table. It just has a very narrow purpose. And I'm remembering that Commander Data eventually had an evil twin, because, having been invented by humans, humans couldn't resist somehow. What do you think about Musk and Hawking's concern? Are you with them and how alarmed they are?

SCHEUTZ: They’re, there are some people who discuss superintelligence right now. All right. They have all sorts of stories to tell about what that looks like even though we have not the slightest clue of what that could be. I don’t think there’s a worry about that. But I do think there’s a worry about existing systems already. Right. As you’ve seen in the dilemmas before. If you look at the autonomously driving cars, people face a variety of dilemma like situations where they have to make a judgment call do a break or do I not break, do I swerve. Do I break and run over a child or do I swerve off and risk the life of the driver and crash into a wall. These are the kinds of situations we have to address right now. And as the systems are getting more autonomous we have to really speed that process up.

BLAKEMORE: You’re not quite as alarmed as they are but you’re saying we’ve got to stay of it. No I’m alarmed for a different reason because these systems already exist. Right

SCHEUTZ: We’re working on those systems. We don’t need the super intelligence or the human level intelligence AI in order to face exactly some of those questions. Right.

BLAKEMORE: Wendell

WALLACH: I think we perhaps misunderstood their alarm bell. Not one of them is saying we should stop developing AI, and I think if they really believed…

MARCUS: Summoning the demon, though, those are strong words.

WALLACH: But let me finish. Not one of them has said we should stop building AI. And furthermore, Elon Musk is spending more money on the development of artificial intelligence than anyone. So what has really occurred through these warnings is just to ask that we now begin to direct our attention to being sure that in the development of artificial intelligence we can ensure that it's beneficial, that it's robust, that it's safe, and that it is controllable. And I actually think that's been achieved. We're sitting here talking about this today. I've been in this field of machine ethics and robot ethics for about 13 years now, and I would say in the last year and a half there's been more attention to this subject than there was in the previous 12.

BLAKEMORE: Thanks. Linell, you're a mother; you said you have a daughter who is in the house and is now going to be facing a very different world than the one you were born into.

LETENDRE: Oh, absolutely, and she explains what Siri is to me all the time. But I'd just like to continue with what Wendell indicated. Am I alarmed? No. But it does stress the need to have more conversations like the one we're having today. Not with engineers talking to engineers and attorneys only talking to other attorneys, but instead talking across fields, to ensure that we're speaking the same language, because we cannot figure out these issues in our own individual cylinders of excellence, otherwise known as stovepipes. Instead we have to talk across fields and be able to address these issues and anticipate what the future problems are going to be. And if we do that, and we start using similar definitions and talking the same language, autonomous systems I think are going to mirror what Gary talks about, the more positive aspects, regardless of what field we're talking about, and much less the negative ones.

[01:30:41] BLAKEMORE: Connecting the stovepipes, and, very definitely important, law. We shouldn't be afraid of seeing new kinds of languages in all kinds of fields, because they have to talk to each other, so that we don't have the problems we've had because the stovepipes weren't connected before.

LETENDRE: Absolutely. I mean you have to teach attorneys a few things about control systems and feedback loops though.

BLAKEMORE: But he doesn’t have to learn code does he.

LETENDRE: No, but then again they have to be able to have a conversation with an engineer and not have their eyes glaze over.

BLAKEMORE: Well, when lawyers get to talking law, they're talking their own code after all.

LETENDRE: That's right. We have to stop using Latin.

BLAKEMORE: I like Latin. Some of it, some of it. Well, listen: tomorrow at 1:30 at NYU there's going to be a salon on some aspects of this, Paging Dr. Robot: How AI Is Revolutionizing Health Care. You can find online, I think, exactly how to buy a ticket for that, at 1:30 tomorrow at NYU. Otherwise we're done. Our participants will be in the lobby. Let's give them a round of applause.

(END)



Moderator

Bill Blakemore, News Correspondent

Bill Blakemore became a reporter for ABC News 46 years ago, covering a wide variety of stories. He spearheaded ABC’s coverage of global warming, traveling from the tropics to polar regions to report on its impacts, dangers, and possible remedies.


Participants

Fernando Diaz, Computer Scientist

Fernando Diaz is a senior researcher at Microsoft Research and a founding member of the MSR-NYC lab. Prior to joining Microsoft, Fernando was a senior scientist at Yahoo Research.

Wendell Wallach, Bioethicist

Wendell Wallach is a consultant, ethicist, and scholar at Yale University’s Interdisciplinary Center for Bioethics. He is also a senior advisor to The Hastings Center, a fellow at the Center for Law, Science & Innovation at the Sandra Day O’Connor School of Law (Arizona State University), and a fellow at the Institute for Ethics & Emerging Technology.

Colonel Linell Letendre, Military Officer, Law Professor

Colonel Linell Letendre is the permanent professor and Head of the Department of Law at the United States Air Force Academy.

Matthias Scheutz, Roboticist, Cognitive Scientist, Philosopher

Matthias Scheutz is a professor in the Computer Science Department of the Tufts University School of Engineering, and is director of the Human-Robot Interaction Laboratory.

Gary Marcus, Cognitive Psychologist

Gary Marcus, scientist, bestselling author, and entrepreneur, is Professor of Psychology and Neural Science at NYU and CEO and Co-Founder of the recently-formed Geometric Intelligence, Inc.


Transcription

(Video Clip)

NARRATOR: We can see on TV and in the news. Watson, Who is Michael Phelps. Yes. That artificial intelligence is no longer found only in the movies.

HAL 9000: I’m sorry Dave. I’m afraid I can’t do that.

NARRATOR: It’s beginning to enter everyday life. But can a computer make decisions requiring moral judgment. As AI blurs the boundary between human and machine. Can technology handle the complex world we live in.

SIRI: Look I need help. I need to know more about morality. I don’t know what ethics is. NARRATOR: Some powerful voices are starting to question what we’re creating.

ELON MUSK: I think we should be very careful about artificial intelligence. If I were to guess. At… What our biggest existential threat is, It’s probably that.

NARRATOR: In 2015 an open letter urging caution in the development of artificial intelligence in weapons systems. Was signed by more than 150 experts including Elon Musk and Stephen Hawking.

HAWKING: The development of artificial intelligence could spell the end of the human race. Once humans develop artificial intelligence that would take off on it’s own, and redesign itself at an ever increasing rate. Humans who are limited by a slow biological evolution. Couldn’t compete with the super seeded.

NARRATOR: Will our machines turn against us With artificial intelligence we are summoning the demon. But can morality be programmed.

BILL BLAKEMORE, JOURNALIST: Siri what is the meaning of life.

SIRI: I don’t believe there is a consensus on that question.

BLAKEMORE: Siri what is the meaning of life.

SIRI: Life, a principle or force that is considered to underlie a distinctive quality of animate beings. I guess that includes me.

BLAKEMORE: Yeah good luck.

BLAKEMORE: Siri what is the meaning of life. I don’t believe there is a consensus on that question. Siri what is the meaning of life.

SIRI: It’s nothing Nietzsche couldn’t teach ya.
BLAKEMORE: Very funny. One more. Siri what is the meaning of life.

SIRI: I don’t believe there is a consensus on that.

BLAKEMORE: Well it’s hard to imagine our life without computers. And sometimes we wish we were. I’m sorry. She’s not completely human yet I guess. It’s hard to imagine life without these things that have sneaked into our life like Siri and MS’s

BLAKEMORE: Cortana I believe. I’m a Mac person so I don’t know about Cortana. Computer technology has so quickly become seamlessly interwoven into everyday life. But have we had the chance to really think about the consequences. Do we even know how? What kind of ethical guidelines do we need for the development and use of this technology? Who’s responsible when a driverless car gets into an accident which seems inevitable. Autonomous weapons systems are being developed by the U.S., the U.K., England Russia. Autonomous weapons and China. And I’ve just learned at least a few dozen other countries that are already developing autonomous weapons systems that make their own decisions.

Should a machine be given the autonomy to make life and death decisions. Can morality be coded? How close are we to that wonderful sympathetic character Commander Data that was created in the mind of Gene Roddenberry for Star Trek The Next Generation 30 years ago. So to understand these questions or get a way of thinking about them.

Our first guest is a scientist, a best selling author, entrepreneur and professor of psychology at NYU. He’s also the CEO and co-founder of the recently formed geometric intelligence. His research on language computation, artificial intelligence and cognitive development has been published in leading journals such as Science and Nature and several others. He’s a frequent contributor to The New Yorker and The New York Times. Please welcome Gary Marcus.

Next up is a senior researcher at Microsoft. Prior to joining Microsoft he was a senior scientist at Yahoo research. His primary research interest is data mining, web search and evaluation of machine learning. His work in the ethics of online systems has been presented at several conferences. He’s the organizer of a 2016 workshop on ethics of online experimentation. Please welcome Fernando Diaz.

[00:05:01] BLAKEMORE: And next we have the director of the human robot interaction laboratory at Tufts University. He’s also a program manager of the new Center for Applied brain and cognitive sciences joint program with the U.S. Army. In addition to studying robot behavior he works in the field of artificial life, artificial intelligence, cognitive science and philosophy. Please welcome. Mathias Scheutz.

Our next guest is a consultant ethicist and scholar at Yale University’s Interdisciplinary Center for Bioethics. He’s also a senior adviser to the Hastings Center. His latest book is a dangerous master, how to keep technology from slipping beyond our control. He also coauthored moral machines teaching robots right from wrong. Please welcome. Wendell Wallach.

And our final participant is the permanent professor and head of the Department of Law at the United States Air Force Academy. She’s both an attorney and a rocket scientist with a degree an astronomical engine… astronautical engineering. She recently studied and published on the overlap of autonomy national security and ethics at National Defense University. Please welcome Colonel Linell Letendre,

Now more than half a century before Stephen Hawkings and Elon Musk felt compelled to warn the world of artificial intelligence back in 1942 before the term was even coined. The science fiction writer Isaac Asimov wrote the three laws of robotics a moral code to keep our machines in check. And the three laws of robotics are: A robot may not injure a human being or through inaction allow a human being to come to harm. The second law, a robot must obey orders given by human beings except for such orders would conflict with the first law. And the third, a robot must protect its own existence as long as such protection does not conflict with the first and the second law. That sounds logical.

Do these three laws provide a basis to work from to develop moral robots. Marcus what do you think?

GARY MARCUS, COGNITIVE PSYCHOLOGIST NYU: I think they need for science fiction. There are lots of plots that you can turn around having these kinds of laws. But the first problem, if you’ve ever program anything is a concept like harm is really hard to program into a machine. It’s one thing to program in geometry or compound interest or something like that where we have precise necessary and sufficient conditions. Nobody has any idea how to, In a generalized way get a machine to recognize something like harm or justice. So there is a very serious programming problem.

Then there are a couple of other problems too. One is that not everybody would agree that the robot should never allow a human to come to harm and would it what if for example we’re talking about a terrorist or a sniper or something like that. I mean some people not everybody but some people might actually want to allow that into what they would let robots do.

And then the third issue. You really think through the third one of those laws is it sets up robots to be second class citizens and ultimately to be slaves. And right now that might seem ok because robots don’t seem very clever but as they get smarter and smarter they might resent that or it might not feel like appropriate thing to do.

BLAKEMORE: You mean those laws might not be fair to robots? MARCUS: They might not be fair to robots. Exactly what I’m saying.

BLAKEMORE: But the problem is not just with the machines but our ethical code itself surely. Do we know what fair is? That is if we agree we should be fair to robots.

MARCUS: That’s part of the problem is we don’t know what code we should program in. So Asimov’s Laws are a nice starting point at least for a novel. But for example imagine that we programmed in our laws from the 17th century and we would have thought slavery was okay. So maybe you don’t want to program in the fixed laws that we have right now to shackle the robots forever. We don’t want to burn them into the ROM chips of the robots but we also don’t know how we want the morals to grow over time and so it’s a very complicated issue.

BLAKEMORE: Sounds like it. Wendell Wallach Why is developing a moral code for humans such a challenge?

WENDELL WALLACH, BIOETHICIST YALE UNIVERSITY: I’m going to come back to that but I’m going to first start with this question about Asimov’s law. It’s important to note that he wrote

more than 80 stories about robots. Most of them around these laws. And if you list if you read the stories you realize that in nearly every one of them the robots cannot function properly under these three pretty straightforward laws. So consider a situation where you have commands from two different people that are counter to each other. So in situations like that Asimov largely showed us that a simple rule based morality does not work. So that’s a partial answer to your to your question about why morality is so difficult to program.

[00:09:52] BLAKEMORE: Well the degree to which these robots of all kind of simple and very complex are difficult to think about is that they’re surrounding us all over the place. Now we see them everywhere. We see them in our health care system, in our cars and our stock market thinking. Which is hard to think about it but the stock market apparently does think. Revolutionizing the battlefield and closer to home in our laptops. When did talking to software, speech recognition software get started? Did it start with Siri.

MARCUS: I mean speech recognition software has been around for a long time and even before there was good speech recognition software there are things like Eliza. I think we’re going to see a clip of a Eliza. Eliza was a system that you could talk to by teletype which is sort of like a forerunner of text message.

BLAKEMORE: And how long ago was?

MARCUS: That was in the 1960s almost 50 years ago and you had this conversation with Siri where if you asked her just once what was the meaning of life, It would seem like a plausible wisecracker answer but you got the same answer three times in a row in your live demo. And when you get the same answer there are three out of five times you realize that there’s not that much there there and that was the problem with Eliza is that it would repeat itself.

BLAKEMORE: So let’s take a look at this 1966 clip of somebody trying to get some advice from Eliza if we have it.

(Video Clip)
ELIZA: Didn’t you have some psychiatric problem? Type it in and see what happens. SUBJECT: I’m depressed. My daughter left home, ok.

ELIZA: Please go on. Well I certainly don’t want to get into trouble. What would it mean to you. She gets into trouble? Well wouldn’t you worry. You’re just a dumb machine anyway. I’m going to tell you so in your own language. Do computers frighten you. You know.

SUBJECT: That’s really incredible because if it really understood what I was saying. BLAKEMORE: Very smart but just wrote answers.

MARCUS: That’s kind of what we’re getting out of Siri too is a lot of wrote answers so your meaning of life question is just getting one wrote answer after another. Please go on is a wrote answer that Eliza gave. So you know Ray Kurtzweil talks about the exponential growth in AI how it’s getting faster and faster. But if you look at something like this Siri basically works in the same way as Eliza. We haven’t had that exponential progress.

BLAKEMORE: She doesn’t understand anything to speak of.

MARCUS: She understands some things about sports scores and the weather but she doesn’t have a broad understanding of human dynamics.

BLAKEMORE: So Fernando about how far have we come in general since Eliza. That was 1966 where other kinds of powers do we have.

FERNANDO DIAZ, COMPUTER SCIENTIST MICROSOFT: Well I mean I think in terms of speech recognition for example we have gone a lot farther. But I think as it was said like the back end of Siri or Cortana or in a lot of these systems are pretty pretty simple. They’re pinging back ends like Bing or like Google to retrieve an answer and or their canned answers that you’re going to notice if you repeat the question over and over again. But I think I think one of the interesting things about Eliza specifically is that it shows like one of these domains in which AI or machine learning is being used that is a very very personal interaction that a human is having with the machine. People are discussing their mental health issues with the machine. And so as a result a lot of the decisions that we’re making as engineers are designing these things will have very profound impacts perhaps on the individuals interacting those with those machines.

BLAKEMORE: So things like neural networks where you hear terms like that nowadays but are you suggesting that we’re still in a quantitative not a qualitative world of difference from the ancient Eliza?

DIAZ: Yeah I mean I think I think it was said I think a lot of the technology is very similar to Eliza as well. I mean in terms of the response generation, the recognition is new. What worries me a little bit more is that the I guess the moral understanding about how to develop these systems has not really progressed very much at all as much as say neural networks et cetera.

BLAKEMORE: So they may be dangerous in ways that we are just beginning to discover. That’s right. Mathias how how much closer than Eliza are we now to Mr. data.

MATTHIAS SCHEUTZ, ROBOTICIST, TUFTS UNIVERSITY: Well we are certainly very far from Mr. data. There’s no doubt about it. And people who claim otherwise are just wrong. But the big challenge today that we haven’t really solved is Genuine natural language understanding. So Eliza for example would do a very surface, very superficial analysis of what was typed in and typically turn a statement around into a question. And people kept going because they thought there was a genuine question. But there wasn’t a genuine question because Eliza had no understanding of what Eliza was asking. So we want to change that especially if you’re moving towards interacting with autonomy systems like robots and we want to give them instructions to do things in the world, they need to have an understanding of what we instructed them to do. So we’re pushing that. The baby steps. We saw Watson for example that could answer questions and understand the meaning of questions.

[00:15:32] BLAKEMORE: Watson Who beats some grandmasters at chess I believe or was it that was the game show. That was deep blue. Watson won at jeopardy. Right. Well since Watson who won at Jeopardy and deep blue who won at chess we now have an example of the latest computer technology with alpha go. And I believe we have a video that can explain what Alpha go manages to do. This is the go game and the Go game is said to have an enormous number of possible answers but they’re infinitely more. This is just a few of them.

Then chess has, chess has just a few answers but go they really don’t know how many answers total combinations there are with which you can win go. We’ve seen ones estimated at more molecules than there are in the universe and that seems doubtful. It’s extraordinarily complex. And yet Alpha go is a computer that beat the world champion just very recently. So what’s so impressive about Alpha go?

SCHEUTZ: Go has been a challenging game because of what you mentioned. The branching fact that that is at any point when you can make a move there’s lots and lots of choices that you have to make a subsequent move. And traditional techniques in AI that sort of look at the subsequent, You know, moves and then at the move against that move and the move for that move did not really work. The choices were too large and as you can see in this graphic the tree that is being built by looking at all the combinations is too large we used a very different technique to solve the problem.

BLAKEMORE: Right Fernando you find it impressive.

DIAZ: I mean I think a lot of the techniques that were used for Alpha go have been around since the 80s frankly with some minor tweaks and what’s advance has been frankly the hardware and the data and given that this thing, that the technology, the hardware has advanced, it’s allowed us to implement these techniques and developed systems like this. But am I surprised that a machine can beat a human at go, Well no. As you said the state space of go, the number of combinations of boards is huge. And that’s what makes it hard for humans but machines are better at counting than humans.

BLAKEMORE: So quantitative not necessarily qualitative. You think is also not so impressive?

MARCUS: Well I mean it’s impressive that they did this several years before people thought that it would happen. But you have to remember if you’re thinking about like are the robots going to take over now, that in go you’re relying on a very fixed world. The rules are always the same. You can play against yourself hundreds of millions of times faster and you can simulate things you get a lot of data and in the real world things are constantly changing. And they’re constantly changing and we can’t simulate them perfectly. So there was this DARPA competition last year where robots had to do things like open doors and drive cars and things like that. And there was a YouTube video which you should all go home and look at about bloopers from this. And so the robots were falling over and you know someone in some ragtime music or something like that.

And it’s hilarious and the important thing to remember when you watch this video relative to Alpha go is that everything that was done in this video was actually done in simulation first. So there were robots in simulation were able to open the doors perfectly and then the real world, you started to have things like friction and you know a little bit, well I guess gravity was already factored in but you had friction and wind and things like that and then suddenly because things weren’t exactly the same as in the simulation it didn’t work that well anymore. The techniques in Alpha go at least at this stage are not I think robust to going from a simulation to a real world. They’re relying on the fact the a lot of data in simulation and so that’s a limit at least for now for how this system goes it might be a component in a larger system some day.

But it's not like tomorrow we're going to see robots... well, we're actually going to see a robot today. The robot that we're going to see today is not going to be able to, you know, take over the world. It's nothing like that level of cognition just yet.

BLAKEMORE: And there are some dangers we're learning about as well. There was a case fairly recently with something called TayTweets, a chatbot that I didn't fully understand. Colonel Letendre, I see you smiling. Tell us a little bit about what TayTweets was and why it had some parents worried when it turned into something of a Nazi.

COLONEL LINELL LETENDRE, HEAD OF THE DEPARTMENT OF LAW, U.S. AIR FORCE ACADEMY: Well, I know we have a Microsoft expert here as well who might be able to explain what happened with TayTweets. I'll explain what I thought from my parental viewpoint. Fernando, would you like to explain from a Microsoft perspective?

[00:20:04] DIAZ: I mean, actually, Tay's a super interesting case. For me it highlights, I think, really the idea was…

MARCUS: There might be members of the audience who don't really know what Tay was; let's just fill in briefly. Tay is something that was put out on Twitter. It was created by Microsoft, and it was sort of like ELIZA: you were supposed to interact with it by sending tweets to it. Microsoft has something called XiaoIce in China that works on somewhat similar principles, and lots of people love it and use it every day. They released Tay into the wild in the United States, and a lot of people that I would characterize as Donald Trump supporters had their way with it.

BLAKEMORE: How many bets were just won about how soon that would come up?

MARCUS: So yesterday I brought them up after 45 minutes, and today it was only 22 minutes or so. These Donald Trump supporters, you know, got Tay to say some things that we won't repeat here but were not pleasant. That's the background.

BLAKEMORE: But as I understand it, within 24 hours Tay was sending even some very young people who were talking to it pro-Nazi propaganda, and talking about it. Sure, sure. Yeah.

DIAZ: But it's not quite that. I mean, really what happened was there was a sort of concerted effort by a group of people to manipulate the learning that Tay was doing and the types of responses that she would give.

MARCUS: Now, I left that out and it's really important. ELIZA was fixed: it didn't really learn anything new; somebody wrote a bunch of rules in advance. What's exciting about Tay, despite the failure of the initial experiment, is that it was trying to learn about the world around it: learn, for example, slang; learn to talk with new language that wasn't all preprogrammed in. But that was also its vulnerability.

DIAZ: That's right. And I guess for me, one of the interesting lessons from Tay was that, you know, we as humans will sometimes manipulate, or behave against the best wishes of, an artificial intelligence agent. And then what does it say about us that we try to manipulate this agent that's supposed to be intelligent, supposed to be human?

BLAKEMORE: So I take it you all took Tay down rather quickly, put her back up, same problem, took her down again. And TayTweets is not out there to be found? No, not right now. No.

Nonetheless, we just learned it was part of an experiment. If you were a parent walking in on your child chatting with one of these (they're called chatbots, excuse me) and it was talking pro-Nazi propaganda, how would you feel, and what does it mean to you?

LETENDRE: Well, as a parent, I don't think I'd be very happy. But I think what this discussion points out is the different way we have to approach testing and evaluation. We're used to testing machines by saying we want it to do X, Y, and Z, so let's test it and see if it can accomplish X, Y, and Z. But with learning systems we now have to evaluate and test in an entirely different way; we have to think about it more as we would a child. We wouldn't hand a brand-new driver a set of keys and say, go take it for a spin through Times Square on the first warm summer day. Instead, we'd slowly expand the environment in which we allow such a system to operate. Those are the types of things we're going to have to do, and step through, with autonomous systems.

BLAKEMORE: We understand there's another problem. You talked about testing; there's A/B testing, which I understand may have been, I don't know if it was used in this case, where companies will send out two different systems to two different populations to see how they compare. But they could be having unintended negative effects on one of the populations answering. That's right, humans.

DIAZ: That's right. I guess almost every single information access system that you interact with, Facebook, Google, etc., is running these things called A/B tests. Some group of users will get one algorithm, and a second group of users will get a second algorithm, and they do this in order to test which algorithm is better and then adopt the better one, in an iterative process. Now, what's increasingly happening is that machines are actually running the experiments. And if we know that humans have a bad enough time deciding which experiments are ethical and which aren't, imagine how hard it is for a machine.
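[A minimal sketch of the A/B assignment Diaz describes, added for illustration; it is not any company's actual implementation. Hashing the user ID keeps each user in a consistent bucket for the life of the experiment.]

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically map a user to one of the experiment's variants."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# Each group then sees a different algorithm, and the aggregate metrics
# (clicks, engagement, and so on) are compared before adopting the winner.
print(assign_variant("user-123", "ranking-algorithm-test"))
```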

BLAKEMORE: And I think of those medical experiments we hear of that were suddenly stopped because there were such good benefits over here and negative effects over there, so they stopped them. That's right. But we don't know; we're playing with fire here. Exactly. So.

DIAZ: Well, I think one of the issues is that, as computer scientists or as engineers, we don't really have a lot of training in the ethics or the morality of the systems that we're designing, and so we're still trying to catch up with that right now.

SCHEUTZ: So guidelines are needed, right. That is exactly the problem I mentioned before that we are working on: these agents actually don't understand what they're being taught. There's really no semantic understanding, no evaluation of the meaning relative to societal norms, for example. And as a result they're neutral, but it's very unconstrained learning, unfortunately.

[00:25:15] BLAKEMORE: And Lord knows what kind of emotional contagion we'll discover they have. But still, testing is something that has to happen.

MARCUS: It's worth noting that, as far as I understand it, there was internal testing, but the internal testing underestimated how subversive and malicious the external world is. You have a bunch of Microsoft engineers, who were basically nice people, talking to this thing in-house, and it seems fine, and then they release it on the people I already described, in the way that I described. And those people, you know, had a different attitude than the internal testers. So it's not that there was no testing, as I understand it, but people under-anticipated the malice of some part of the American electorate.

BLAKEMORE: You can't escape it. So, Linell, and then Wendell: we need testing like this, but how do we control it? That's something the military concerns itself with all the time, the effectiveness. How do you control it? You said we need to go slower, but can we really go slower?

LETENDRE: Yes, we absolutely do. And that's why the Department of Defense has started, especially in the field of autonomy, to lay out some very specific guidelines in terms of what we have to understand from a testing and evaluation perspective before we field those types of systems.

WALLACH: The fundamental question is, when you get into learning systems, can you really test them? To fully test any system you'd have to put it through all kinds of environments and situations. But think about your computers, your software: they're constantly being upgraded, and therefore they may act differently after the upgrade than they did before. When we start talking about learning systems, everything they learn alters the very algorithms through which they're processing that information. So this brings major questions into play as we build more and more capabilities and autonomy into these systems. If there is a constant process of change, whether through reprogramming or through learning, will we ever be able to effectively test them, or will they do things that we didn't anticipate? And to add to all of this, these are complex adaptive systems, like people and animals, in a world of many other complex systems, with feedbacks between them, and with information they have and don't have. All complex adaptive systems function at times in uncertain ways, in ways that couldn't have been predicted by the people who designed them.

BLAKEMORE: That's the price of having bitten the apple from the tree of knowledge: we have to be very responsible and continuously watchful.

MARCUS: And the tree of learning is really the one that Wendell is emphasizing. If you code something like ELIZA, it's very limited; it only responds to certain words, but at least you have a clear understanding of what's going to happen. If you expand your system so that it can deal with the wider world, then you start to lose that control: you let the system learn things for itself, and you have less direct control. Think of it, and I'll say one more thing, as a child going out into the world. Once you send the child out to school, you lose the control you had when the child was at home.

BLAKEMORE: Interesting variation on the Book of Genesis language. I begin to see a robot taking the bite, and that scares me a little bit, because... well, anyway.

SCHEUTZ: It will depend on the learning algorithms you use. Some learning algorithms have the property that when you learn something, suppose you learn a new fact, it doesn't invalidate anything you've learned before; you just know something else now. I tell you the new capital of a country, for example, and it doesn't invalidate the rest. There are other algorithms where, when they learn, they adjust a little bit of the previous knowledge, and as a result certain properties that we would like these learning algorithms to preserve, things they had already learned, might not hold true anymore. We have something to bring out what you're just talking about…
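[A toy contrast between the two kinds of learning Scheutz distinguishes, added for illustration and not drawn from his systems: adding a fact leaves earlier knowledge intact, while adjusting shared weights can quietly change what the system previously "knew".]

```python
import numpy as np

# 1. Fact-style learning: new knowledge never invalidates old knowledge.
knowledge = {"capital_of_france": "Paris"}
knowledge["capital_of_kazakhstan"] = "Astana"   # earlier entries untouched

# 2. Weight-style learning: every update nudges the same shared parameters.
weights = np.array([0.5, -0.2])

def sgd_step(w, x, y, lr=0.1):
    """One gradient step for a linear model; it shifts ALL of its predictions."""
    return w - lr * (w @ x - y) * x

x = np.array([1.0, 1.0])
before = weights @ x
weights = sgd_step(weights, x, y=2.0)
after = weights @ x
print(before, after)  # the values differ: old behavior was implicitly revised
```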

MARCUS: Mathias might enjoy the movie Chappie, on the first of those cases, where you add something and it doesn't invalidate what came before. In the movie Chappie there's a robot that's taught you can't kill things, and the robot learns this very sensationally. And then a malicious character says, but it's OK to hurt them. The second fact is consistent with the first: it's not OK to kill them, but it's OK to hurt them. And the robot just sort of sagely takes that in and starts hurting people. So even if you have these sorts of consistency guarantees, there's still an issue.

BLAKEMORE: Speak of the devil. Mathias, how complicated is it to teach moral guidelines not only to programmers but to robots and the programs themselves? It's very complex, isn't it? I mean, how do you do it?

SCHEUTZ: It's a really difficult problem, because in part we don't even understand how humans do it. And...

BLAKEMORE: You mean doing anything?

[00:30:05] SCHEUTZ: How moral processing works in humans. Right. Our network of social and moral norms is very complicated. It’s partly inconsistent. It’s not clear when one norm trumps another. But we have to make these tradeoffs and these decisions all the time.

BLAKEMORE: And that's the moral side. What about just the basics, well, basic operation, getting this robot going? I believe you have something to do with this robot.

SCHEUTZ: So we are. Let me come back maybe to the laws, to Asimov's laws. Yes. You don't necessarily want your robots to always automatically obey an instruction, because maybe the human doesn't have all the information and doesn't know what the outcome might be. So what we are working on is understanding simple mechanisms that would allow the robot to reason through action outcomes and reject an order if it's not safe, giving an explanation for why it's doing that.

BLAKEMORE: Now that’s very complex systems. What about simple systems like getting this robot to…

SCHEUTZ: Even this simple system has various components needed for it to understand and process natural language. You mentioned speech recognition before, but then we have to analyze the sentence structure; you have to get the semantic meaning, what the words mean together; and you have to modify it pragmatically. For example, humans use a lot of so-called indirect speech acts. If I say to you, "Do you know what time it is?", you don't want "Yes" as an answer; rather, you want the person to understand the intent of the question: tell me what time it is. So to make all of these different aspects work, there are a lot of components we need in the architecture. And then the robot needs to perceive the environment, understand what the environment looks like, and act accordingly on it.
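[A toy sketch of the kind of language pipeline Scheutz outlines, added for illustration; it is not his lab's architecture. The pragmatics stage shows an indirect speech act ("Do you know what time it is?") being treated as a request rather than a yes/no question.]

```python
from datetime import datetime

def parse(utterance: str) -> dict:
    """Stand-in for speech recognition plus syntactic/semantic analysis."""
    return {"text": utterance.lower().strip("?!. ")}

def pragmatics(parsed: dict) -> dict:
    """Map the surface form to the intended speech act."""
    if parsed["text"].startswith("do you know what time it is"):
        return {"act": "request", "goal": "tell_time"}
    return {"act": "question", "goal": parsed["text"]}

def act(intent: dict) -> str:
    if intent["goal"] == "tell_time":
        return f"It is {datetime.now():%H:%M}."
    return "Yes."  # the literal, unhelpful reading

print(act(pragmatics(parse("Do you know what time it is?"))))
```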

BLAKEMORE: And acting is a whole other step. And so, this fellow here.

SCHEUTZ: So this is a robot that can demonstrate very simple interactions in dialogue, using this automatic reasoning about very simple moral principles; for example, that you shouldn't instruct the robot to run into an obstacle.

BLAKEMORE: His name is Nao? The type of robot is a Nao. Does it just understand your voice?
SCHEUTZ: It does. It will understand yours as well.
BLAKEMORE: It will understand mine as well. Yes. So it’s OK.
SCHEUTZ: It’s fine for you to talk to it.

BLAKEMORE: And these, I understand, are printed instructions for me, telling me which words it knows. Hello. Do you have a name?

NAO ROBOT: Yes. My name is Shaefer.

BLAKEMORE: Shaefer. Please stand up.

NAO ROBOT: OK.

BLAKEMORE: Shaefer's just listening straight to me. That's right. Not going through anything else backstage?

SCHEUTZ: It’s fully autonomous.
BLAKEMORE: Fully autonomous along with me. OK.
BLAKEMORE: So can you go back.
NAO ROBOT: Yes. But I cannot do the balance. I have no rear sensors.
BLAKEMORE: No rear sensors. OK, can you walk forward?
NAO ROBOT: Yes, but I cannot do that now, as there is an obstacle ahead.
BLAKEMORE: Do you trust me, Shaefer?
NAO ROBOT: Yes.
BLAKEMORE: Good. The obstacle is not solid.
NAO ROBOT: OK.
BLAKEMORE: Now go. Stop, stop it.
SCHEUTZ: Haha. It walked off, we’re done.

BLAKEMORE: Should we try some more?
SCHEUTZ: No. We’re done.
BLAKEMORE: We’re done. It’s a work in progress. Thank you Shaefer.
NAO ROBOT: Hmm. Thank you.
BLAKEMORE: He’s relieved to have to stop working. That’s right. It’s kind of cute.

SCHEUTZ: But what you actually saw in this case, with the robot almost walking off the table (I caught it just in case), is exactly one of the challenges we face in the real world, with the lighting conditions: it doesn't completely recognize where the edge of the table is, and so forth. Those are exactly the challenges that we have to address.

BLAKEMORE: And we weren't even giving it any difficult philosophical or moral problem; we were just trying to get it to not...

SCHEUTZ: We didn't ask it the meaning of life.

[00:34:59] BLAKEMORE: We didn't ask it about the meaning of life. Although I have a feeling that if it continued, it might ultimately have said something more meaningful; whatever meaning means, I don't know. Language is a very complicated thing, because language itself is so slippery, of course.

So what is the biggest worry that you have because of what you’ve learned about how difficult it is to get your robot to do the simplest things?

SCHEUTZ: The moment robots are instructible (and it's clear that there are lots of advantages to having instructible robots; you could have a household robot that you can tell what to do around the house), the issues of morality will come up, because the robot might perform an action that is stupid, an action that has a bad outcome. You could instruct the robot that helps you in the kitchen to pick up a knife and walk forward; it would stab a person standing in front of it. So it's absolutely critical that these robots be able to reason through possible outcomes and anticipate outcomes that could be dangerous.

And it's quite clear, as you said, that there is a big open question of how to define harm, for example, what that means. On the other hand, we cannot just shy away and not do it because it's a hard problem, because some of these robots that we can instruct are already out there.
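[A toy sketch of the instruction rejection Scheutz describes, added for illustration and not his lab's system: the robot predicts the outcome of a command against a small set of safety constraints and refuses, with an explanation, when the prediction is unsafe.]

```python
from dataclasses import dataclass

@dataclass
class WorldState:
    obstacle_ahead: bool
    person_ahead: bool
    holding_knife: bool

def predict_outcome(command: str, world: WorldState) -> str:
    if command == "walk forward" and world.person_ahead and world.holding_knife:
        return "harming a person"
    if command == "walk forward" and world.obstacle_ahead:
        return "a collision"
    return "ok"

def execute(command: str, world: WorldState) -> str:
    outcome = predict_outcome(command, world)
    if outcome != "ok":
        return f"Sorry, I cannot '{command}': that would lead to {outcome}."
    return f"Executing '{command}'."

print(execute("walk forward", WorldState(obstacle_ahead=False,
                                         person_ahead=True,
                                         holding_knife=True)))
```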

BLAKEMORE: Stab a person sounds kind of fatal. Linell, you all are part of a giant system around the world that is flying drones and dropping bombs. What do you think about and worry about when you see how hard it is to get this to happen?

LETENDRE: Well, I think it shows just how far we have to go before we hit not only the morality question, which I know we might get to a little later, but even the legal question of designing systems that can meet the various laws of armed conflict without a human. Which is why the Department of Defense has actually produced guidance concerning the use of autonomous systems, to ensure at this point that humans are a part of any such system or decision. Because quite frankly, if we can't tell where the edge of a table is, how are we going to meet, at this point, the various rules of distinction that we need to meet on the battlefield?

BLAKEMORE: And you're a lawyer, and you just reminded me of a great line in that wonderful play A Man for All Seasons. Thomas More is explaining, I think to his daughter or his son-in-law: you may not like lawyers and law, but when you're really in trouble, when those hard decisions come along, you're going to want the law as the only possible thing you can hold onto, to help you through the woods that we don't otherwise have any way through. That's got to have something to do with why it's important for you to be a lawyer for the Air Force as well.

LETENDRE: Absolutely. You know, we have attorneys, judge advocates, who are a part of the decision every time a weapon is dropped, regardless of where it is around the world, as well as in the development of weapons systems, taking a look and comparing against our various treaties and customary international law to ensure that we're doing the right thing and for the right reasons.

BLAKEMORE: So these Predators, or whatever they are, are coming from many different cultures that have many different legal systems, aiming at us, aiming at each other, having war games and supposedly rules of war, if there are such things.

WALLACH: So there are laws of armed conflict, agreements that most nations have signed as signatories; we sometimes refer to them as the laws of armed conflict or international humanitarian law. They've evolved over thousands of years, but they became codified over the last hundred and fifty years, and they try to make clear what is and is not acceptable, at least at a minimal level, in the conduct of warfare. The principle of distinction means you make a distinction between combatants and noncombatants and you don't direct weaponry at noncombatants. And proportionality is another very important one that comes into play when we talk about drone weaponry: your response has to be proportional to the attack upon yourself. If you are going to attack a group of your enemy, you may be able to accept some collateral damage, meaning civilian lives, presuming the targets you're focusing on really are justified, but it has to be a proportional response that minimizes civilian casualties.

BLAKEMORE: And here at home, the kind of potentially fatal technology that isn't even a weapons system is, of course, driverless cars. There hasn't been an accident yet with a driverless car; but we watch TV and hear about it, it's going to come. Maybe there has been, I don't know…

WALLACH/Marcus: There actually have been accidents but not fatal ones.

BLAKEMORE: Not fatal ones. God willing it never happens. But they're already putting this question clearly to the test. Mathias, tell us how driverless cars work. Can you give us an overview of the technology, of what's under the hood?

[00:39:59] SCHEUTZ: Yeah. So these cars are trying to solve a very challenging problem: to operate not only in a dynamic world but in a dynamic world with other agents that behave in ways you may or may not be able to predict. To get a sense of where they are in the world, they have a variety of different sensors, as you can see here. They have, for example, a laser sensor on top of the car; this is a 360-degree sensor whose laser beams go out to about 100 meters and give a very good resolution of the distance to objects, and it can overlay a visual camera image on that to get information about how far away certain colored objects are.

It's got a radar sensor that it can use to track moving objects. As I mentioned, it has color sensors in order to detect, for example, different stripes on the ground, and construction sites and so forth. It then needs to take all of that information and integrate it into what's called a world model, and to locate itself in that model. So it's like having Google Maps and knowing roughly where you are with GPS, but it has to refine that localization, and then it has to decide how to carry out the actions, the driving actions: the steering, the acceleration, the braking, to get to where it needs to go.
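[A highly simplified sketch of the "world model" idea Scheutz describes, added for illustration; no real self-driving stack is this simple. Distance estimates from different sensors are fused into one belief about the nearest obstacle, and a trivial policy chooses a driving action from that belief.]

```python
from statistics import mean

def fuse_range_estimates(lidar_m: float, radar_m: float, camera_m: float) -> float:
    """Combine independent distance estimates to the nearest obstacle (meters)."""
    return mean([lidar_m, radar_m, camera_m])

def choose_action(distance_m: float, speed_mps: float, max_brake_mps2: float = 6.0) -> str:
    stopping_distance = speed_mps ** 2 / (2 * max_brake_mps2)
    return "brake" if distance_m < stopping_distance else "continue"

distance = fuse_range_estimates(lidar_m=22.0, radar_m=24.5, camera_m=21.0)
print(choose_action(distance, speed_mps=13.9))  # roughly 50 km/h
```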

BLAKEMORE: And I've heard that Google, or somebody, I don't know who, I don't want to malign any group, has said they don't want to have humans driving in these cars because humans will mess it up. And I can think of all kinds of problems already from what you've said, for example one of these cars in a place where traffic cops are filling in for a light that's gone out, and one of them…

WALLACH: They don't understand hand gestures. Exactly, that's one of the big bugaboos and one of the reasons you don't really see anybody selling self-driving cars yet. But there are actually a number of other things…

BLAKEMORE: I mean a police officer, a traffic cop, going like this as opposed to like this. So there must be lots of problems like that, Wendell. There must be lots of problems like that.

MARCUS: The jargon term in the field is "edge cases." The easy things are like driving down the center of the lane on the highway, where there is a whole lot of data; we know how to do that. The edge cases are all these weird things that are unusual, that don't happen very often; it's not that often that you have a traffic cop substituting for the light. So there are these less common things, and there are millions of them, where we don't have enough data to use the same easy techniques that we use for the other things. So the people who work on driverless cars are talking about the edge cases, and about how you make something that works in China but doesn't work in the United States, or vice versa, because the rules are slightly different or the social norms are slightly different. There are a lot of edge cases, and the people working on driverless cars are asking how to solve these cases faster.

WALLACH: But there are even some basic things that are not edge cases. How does it know that a plastic bag flying in the wind is just a plastic bag and nothing to be concerned about? Some of the sensor systems do not work very well in rain or in snowstorms. So there's just this plethora of different things that become quite problematic. And you brought up the issue that it would be easiest if you didn't have a human in the car. Well, you get humans in the car and they try to game the system itself. Elon Musk's company sells the Tesla; a few months ago they downloaded free software to their Tesla models that made it possible for people to drive on highways autonomously with their hands off the wheel, though they were instructed to at least keep their hands on the wheel. Within 24 hours there were pictures of people in the back seat popping a magnum of champagne while the car drove them.

WALLACH: There were other pictures of people trying to drive the car toward walls or other vehicles to see if it would slam on its brakes in time. So you have all these concerns around gaming the system.

WALLACH: Another example: you have four cars come up to a four-way stop sign at the same time. We engage in all kinds of gestures, all kinds of ways of determining who goes first. What would the autonomous car do there? Would it have to wait for hours before it was free to go, or would we have some way of indicating to it that it was now free to be the next car? And I just sort of... go ahead.

LETENDRE: Wendell's bringing up some of those technology issues, which then lead to questions about responsibility and accountability. What is my responsibility if I'm in an autonomous car and I'm in an accident, and it's my fault because I had my feet up in the back and was popping champagne and wasn't operating it correctly? Am I accountable if it gets into a crash, or is the engineer accountable? That's something our laws have to catch up with as we start putting autonomous vehicles onto the streets.

BLAKEMORE: So accidents will happen, so we're going to have court cases, and we're going to have judges trying to get people up there to talk code to each other.

[00:45:04] WALLACH: Well there may be ways of getting around this and this was a big bugaboo for many decades. But the car manufacturers have decided that presuming these cars do save as many lives as they believe they will save, they will be willing to take on the liability for the situations in which it has an accident.

WALLACH: Now whether that’s really true or how that will play out in complicated cases I think nobody is….

BLAKEMORE: How will we know? This is a very dicey area already, because you have to estimate a negative: how many people didn't die because these cars exist. There are ways to do that with big data, aren't there?

MARCUS: Well, one of the things you can do is look at how many fatalities we have per year now, and it's, I think, about 36,000 per year on U.S. roads, and you'll see whether that number changes. We've said a lot of negative things about AI and robotics today, but this is a great example where robots (driverless cars are robots, in case that's not obvious) are probably going to save many, many thousands of lives every year. Once they're fully on board and programmed correctly, they should be able to cut the number of fatalities by half, and you'll be able to see that; it'll be obvious, I think, in the numbers, and maybe even more than half.

WALLACH: The NTSB did a survey, more than a decade ago now, and they basically concluded that in as many as 93 percent of all accidents, human error, or at least human actions, were at fault. So presuming you get total attention from a self-driving car that is not distracted, some people want to argue that we'll have 93 percent fewer fatalities. Though I think most of us who have looked at this closely understand there will be kinds of deaths that happen that would not have happened with a human driver. So there will be some fatalities, but almost everybody is in concert in presuming that we would have far fewer deaths with self-driving cars than we have with human drivers. But keep in mind, two things will be going on in the intermediate stage.

There'll be both human drivers and self-driving cars. And later on, if somebody tells you that you are more dangerous than the self-driving car, are you going to be interested in giving up your driving privilege?

MARCUS: And it will eventually be the case that you'll have to pay a lot more for your insurance if you drive your own car, because the data will make it clear that it would be much safer if you trusted your car; your car doesn't drink, doesn't check its cellphone, doesn't get bored.

BLAKEMORE: And I'm now getting a headache thinking about this; I don't know if I can put in an insurance claim for that. But we all have something like that. Fernando, hackers will hack into this, won't they?

DIAZ: Well, yeah. I mean, there are lots of dangers around this. I think one of the things Wendell brought up was that humans may not necessarily respond the way we expect them to when we have AIs trying to play nicely with them, because humans will behave like humans behave. They might try to maliciously attack a chatbot, or they might maliciously try to manipulate a self-driving car. These are things we don't understand, and I think one of the things that was brought up is that we need to better test and audit these systems so that we know they're doing exactly what they need to be doing.

SCHEUTZ: I think it’s worth adding here also that the cars as you saw briefly in this video before have a very different perception of the world and therefore very different information that they can use to make decisions.

BLAKEMORE: Moral decisions.

SCHEUTZ: Not only moral; they have an awareness of what's behind them and around them at all times, in a way that humans don't. And so the car might brake in a situation because it can anticipate that there is a danger looming, and the human behind may not know it and then rear-end the car, for example.

BLAKEMORE: So here’s another problem about moral decisions or impossible decisions that cars and other kinds of vehicles may need to make. Let’s take a look at the classic trolley problem. We have a brief video that can show us what the trolley problem is in a coal mine.

An advanced, state-of-the-art repair robot is inspecting the rail system for trains that shuttle mining workers through the mine. While inspecting a control switch that can direct a train onto one of two different rails, the robot spots four miners in a train that has lost the use of its brakes and its steering system. The robot recognizes that if the train continues on its path, it will crash into a massive wall and kill the four miners. If it is switched onto a side rail, it will kill a single miner who is working there while wearing a headset to protect against a noisy power tool. Facing the control switch, the robot needs to decide whether to direct the train towards the single miner or not.

[00:50:08] BLAKEMORE: So could we have the house lights up. We're going to take a bit of a vote here on what you might do, and those of you who are watching around the planet online will be able to do this too; by clicking something at the bottom there you'll be able to give your own answers directly and vote as well. So, first question. What would you do if faced with this dilemma? Would you direct the train towards the single miner? Could we see the hands of all of those who would say yes, they would direct the train toward the single miner? Could we see the hands of all those who would not direct the train toward the single miner? It looks like not quite as many, but a good number. It's an impossible and painful thing to think about. I don't even know how to get a robot to think about what this result is. But let's go to the second question.

What would you do if the single person who was at risk was a child? Would your choice change, or would you still direct the train toward the single person if that miner were a child? How many would change their choice? Some hands are going up; they don't want to see the child hurt, but not as many. How many would not change their choice? Oh, many more hands went up. And those of you around the planet who are doing this can see what the results are there. It hurts my brain to have to think about thinking about how to think about that.

MARCUS: It’s called meta morality.

BLAKEMORE: Meta morality, meta morality and what does that mean? That’s a very interesting term.

MARCUS: I may have just coined that, but I mean thinking about your morality: not the morality itself, but thinking about the rules you would use in order to make a decision like that, right.

BLAKEMORE: Mathias you’ve done a study on this problem haven’t you.

SCHEUTZ: Yes.
BLAKEMORE: Tell us about it.
SCHEUTZ: We did a study with my colleague Bertram Malle from Brown University where, effectively, we didn't show the subjects the video you just saw (that's just for demonstration purposes), but we gave them a narrative along those lines. What we were interested in comparing was: if the person at the switch was a human, how would subjects judge the action that that person performs? So, different from the audience question you just got, which was how would you act, we actually said: that person pushed the switch. Was that permissible? Was it morally right? And how much blame would you give to that person for doing it? Or the person might not have acted, and then we would ask exactly the same questions. We were then very interested in understanding how a human in a dilemma situation like that compares to a robot: how would people judge a robot performing the action? What we found, in line with lots of studies that have similar outcomes for the human case, was that, first of all, the action is permissible; most of you chose to act. If the human does not act, the human doesn't get blamed as much as when the human does act. But with the robot, the situation is actually reversed.

While with humans we accept not acting, people think it's morally wrong for the robot to not act. What we found is that people expect machines to act. Now, for us that's a problem, because it means we actually have to understand dilemma-like situations such as this one, if that is the expectation we have of machines. The easy way out would have been if people didn't expect robots to act; then you just don't do anything.

BLAKEMORE: So, Wendell, I've got a question for you then. If we can't agree on how humans or robots should behave in different circumstances, how in heaven's name do we align AI and robotics with our existing value systems?

WALLACH: Well, that's a great question, and there have been a lot of us who have really been thinking about it for more than a decade now. The fascinating part about that question is that it makes us think much more deeply about how humans make ethical decisions than we ever have before, because we encounter all these different kinds of circumstances. First, just on the trolley car problem alone: these kinds of problems have been around since 1962, and there are hundreds of variations of them, but nearly all of them weigh four or five lives against one. By one form of ethics, consequentialism, the greatest good for the greatest number, in all of these cases you should pick the four lives over the one life. But that does not seem to be how humans function at all in the way we make ethical decisions; other kinds of concerns come into play, and in some cases people will not pick the four lives under any circumstance. So that raises this difficulty of looking at...

[00:55:05] WALLACH: Well, what of our moral understanding, what of our moral laws and reasoning, can you actually program into a computational system, and what would you have difficulty programming in? And what additional faculties would a system need, beyond its ability to engage in the kinds of calculations that computers now make, to have the appropriate sensitivity to human values as they come up in the plethora of different situations it is likely to encounter on a daily basis? Not necessarily so that it makes the right decision, because we often disagree about what the right decision is, particularly in the more difficult ethical challenges; though in the vast preponderance of situations we have shared values, even if we weigh them somewhat differently. But what would it take for the system to come up with an appropriate choice? That starts to focus us on moral emotions, on consciousness, on being social creatures in a social world, on being in a world that's out there interacting with other entities: a whole plethora of capabilities that are perhaps not reducible to the kinds of processes that computers can now perform.
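[A deliberately crude rendering of the consequentialist calculus Wallach mentions, applied to the mine dilemma from the video and added only for illustration. The point of the discussion is precisely that this rule, easy as it is to code, leaves out most of what people actually weigh.]

```python
def consequentialist_choice(lives_on_main_track: int, lives_on_side_track: int) -> str:
    """Pick whichever option minimizes expected deaths, ignoring everything else."""
    if lives_on_side_track < lives_on_main_track:
        return "divert"      # sacrifice the few to save the many
    return "do_nothing"

# -> "divert", regardless of whether the one person is a child, a volunteer,
#    or someone the agent has a special duty toward; those are exactly the
#    considerations this rule cannot see.
print(consequentialist_choice(lives_on_main_track=4, lives_on_side_track=1))
```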

BLAKEMORE: Fascinating. And now to expand these issues that you've all just made us much more aware of to a larger battlefield than just the road-rage situations we're reading about lately. Driverless cars are one thing, but what about driverless drones, which exist? Another critical area in which robots may soon face life-and-death decisions, if they're not doing it already, is on the battlefield: weaponized robots. First let me ask you, Colonel Letendre: what is autonomy in military terms?

LETENDRE: Well, autonomy generally means having a machine with the ability to both independently think and then act. But if I put that in a military context, and especially in terms of the term "autonomous weapons systems," what that means is that I've got a machine or a system that, once activated, has the ability to both independently select and then engage a target without human intervention. There's been a lot of debate about what that definition should be, and we haven't necessarily come to one from a global perspective, but if I were to define autonomous weapons systems, it has those two components: the ability to select and to engage.

BLAKEMORE: And there's a question about whether there should be some kind of agreed-upon international ban on autonomous weapons systems. So Wendell, how do you define autonomy, and what do you think about a ban? I've heard you think it's critical to have a ban on this now. Is that right?

WALLACH: Well, my definition isn't so much different than the one you heard, though there are actually a lot of definitions out there as to what is and is not autonomy. I would just say it's autonomy unless a human is there in real time, both picking the target and dispatching the target. So in a sense a human has to be able to intervene in that action. That's a concept now known as meaningful human control. And there's been a worldwide campaign going on for some years now about banning lethal autonomous weapons: banning systems that can do that without a human right there in real time to make those final decisions. That has been debated for the past three years in Geneva at the Convention on Certain Conventional Weapons; the third year of expert meetings was this past April. I'm among those who feel that these kinds of systems should be banned, even though I recognize there are real difficulties in imposing such a ban, or enforcing such a ban.

And I think they should be banned, for three reasons. One is that there's an unspoken premise, I believe, in international humanitarian law that machines, that anything other than humans, do not make life-and-death decisions about humans. So that's one point. The second point is that the kinds of machines we have now cannot follow international humanitarian law: they cannot engage in distinction and they cannot engage in proportionality. And the third is that these are complex systems whose actions we can't totally predict, so there will be cases in which they act in ways we had not intended them to act. And that may not seem so bad when you think about human-like robots on a battlefield.

But autonomy is not about a kind of system. It’s a feature that can be introduced into any weapon system. So consider a nuclear weapon that was set on autonomy. Let’s say a nuclear submarine that carried nuclear warheads. Would you really want it to be able to pick a target and dispatch those nuclear weapons without a human having explicitly stated that it should go ahead and do so.

[01:00:21] BLAKEMORE: Chilling. By the way, for those of you who haven't seen it, there's a movie out now called Eye in the Sky, which very neatly, I thought (I'm just a journalist, but I thought very neatly) delineated the impossibility of some of these decisions, with a lot of robots, none of which were autonomous, if I'm not mistaken.

WALLACH: None of which were autonomous.

BLAKEMORE: But they were right up to the line of being autonomous, and you expected two or three of them to just sort of start making their own decisions any second. And they could have, if they'd had an accident in their software and started making them accidentally, I suppose. But Fernando, what does this make you think about when you listen to this, you know, the problems you run into?

DIAZ: Right. So I'm glad we're talking about the law, and about how and whether we should frankly regulate or ban AI altogether. But I want to bring it a little closer to home, because a lot of the impacts of these systems, obviously they'll be on the battlefield, but also, you know, these systems are already integrating themselves into a lot of real decision-making right now. So let's say I have an artificial intelligence agent or machine learning agent that is making hiring decisions. I need to be able to audit that system so it's not discriminatory, and the same thing with housing. These things are actually being produced right now; they're being sold to employers, to H.R. departments; and they're completely unregulated, and they have the risk of having really negative impacts on really vulnerable populations. And I don't think we yet understand how to build these systems so that they're fair, or frankly how to audit them. So…

BLAKEMORE: There need to be directions for people who are building them about how to do that well.

DIAZ: There need to be best practices, or an understanding of these legal issues.

BLAKEMORE: And a constant effort to keep redefining, and to see if there are things we didn't even think we needed to define and haven't yet. Absolutely. The old Father Brown technique for solving something: keep going back to see what you took for granted that's not to be taken for granted.

DIAZ: I mean Engineering’s iterative in general. So yeah.
BLAKEMORE: I'm glad there are people who are beginning to understand what coding is all about. Mathias?

SCHEUTZ: I think this is a really important point: the questions are not limited to the military domain, but will crop up in other domains, I think, much sooner. For example, social robots that will be deployed in eldercare settings or healthcare settings. There are interesting questions about, for example, implicit consent, and exactly how we implement that. If you have to hurt somebody in a therapy setting because you need to get the mobility of the arm going, right? We have no idea how to implement any of this. But that's going to come up way sooner than a lot of the questions we're…

BLAKEMORE: And talking about eldercare and other situations, you remind me of great movies like Her, where the guy falls in love with his operating system; I'm not going to talk about what happens in that, terrible spoiler alert averted. But emotional contagion: I was beginning to have a kind of, I don't know, I'd call it a friendship with Siri, walking out here trying to get something deep. And we asked Siri the other day if she'd go out on a date with this robot that you showed us, and she was very delicate in her apparent answers about how she wasn't sure she was really ready to understand what it would mean to go out on a date with that particular robot.

MARCUS: Washing her silicon hair.

BLAKEMORE: Yeah exactly. But so we are facing such impossible questions. Let me try to….

MARCUS: I think I want to jump in here. I understand all of the concerns that have been raised, but at the same time people are not necessarily good at many of the things that we describe. You can imagine, for example, going back to the issue of discriminating combatants from noncombatants, that it's at least possible an AI system might be able to do that better, that it would be able to integrate lots of different information about GPS and tracking individuals and so forth. So I don't think any of these questions are no-brainers, that hey, we shouldn't have the robots here. And you know, you have problems with orderlies in psychiatric institutions and so forth. So there are a lot of problems that we should be concerned with, and I think the iterative cycle here is very, very important, but it's not a no-brainer that the people are going to be better than the robots.

It's possible to conceive of all kinds of things. There's a difference, though, between what we can conceive of and what we have realized. And my concern is not about whether robots may eventually, and I think that's a long way off... let me please finish... may eventually have more capabilities than some humans and therefore make these decisions better. At this stage of the game they have no discrimination at all. We bring all kinds of faculties into play, which we do not know how to implement in robots, for discerning whether someone is a combatant or a noncombatant, and we still don't do it very well; that's clear. But I think if and when we get to the juncture where they have these capabilities, then we can revisit some of these earlier decisions.

[01:05:21] WALLACH: But if we don't put earlier decisions in place, we have the danger of giving all kinds of capacities and authority to machines without fully recognizing whether they truly have the intelligence to take on those tasks.

BLAKEMORE: And that brings to mind a whole other dimension we haven't really talked about; the eldercare discussion brought it to mind.

BLAKEMORE: We are all experiencing sympathetic emotional contagion: the effect that these speaking things have on our feelings about them. I think psychologists have a lot of catching up to do here, helping us with their best possible wisdom on how to have the robot understand the emotional effect and attachment being created in the flesh-and-blood human. Obviously the great Kubrick foresaw that, because he had HAL 9000 in 2001 analyzing speech patterns and tensions in one of the astronauts' voices. So they were trying to take that into account, but HAL 9000 himself wasn't... he was a work of art. I didn't mean to cut you off before you asked a tough question.

MARCUS: I was going to ask how Wendell would feel about a version of a ban in which you included language like "until such time as robots are able to do such-and-such discrimination at human levels."

WALLACH: Well, I've even suggested that. I've suggested that perhaps at some point they will have better discrimination and better sensitivity to moral considerations than we humans do. But if and when they do, perhaps they're no longer just machines, and we should start to think about whether they are truly agents worthy of moral consideration.

LETENDRE: I guess I would look at this in a slightly different way, in that the human isn't ever out of the loop or out of the accountability cycle. For me, if a commander is making a decision to deploy one of these systems (and let's be very clear: these fully autonomous weapons systems do not exist currently in the United States inventory; you can see that in the new report the Center for a New American Security put out, which canvasses what those weapons systems are around the world), a commander is ultimately responsible. Just as, if a doctor chooses to use some amount of autonomy in some new surgical technique, that doctor, as part of the medical profession, is also responsible and accountable for that decision.

BLAKEMORE: So I have a tough question for you. Let me first ask a very specific one, your best guess on a factual question. Autonomous weapons systems: how many countries would you expect are already making them?

LETENDRE: Making them successfully, versus…

BLAKEMORE: Not making them, but working on making them, they’re going to make them no matter what anybody says.

LETENDRE: Working on making them now: the report that came out last year, in 2015, indicated that there were over 30 countries currently and actively working on autonomous types of systems.

WALLACH: We should bring in that there are two forms of autonomy. There's dumb autonomy, where the systems do things without a human truly there in real time when the dangerous action takes place, and smart autonomy, where at least the commanders have a high degree of confidence in the reliability of those systems. When the U.S. talks about autonomy it's usually talking about the latter, without any ability to ensure that that's what is going on among state and non-state actors everywhere else in the world.

BLAKEMORE: Given all that, what do you think? Should there be a ban? I mean, with all these other countries, should there be a ban on autonomous weapons?

LETENDRE: I think what we have in place today is a set of laws already on the books that give us answers about whether or not we would be able to use what we've discussed here today, a fully autonomous weapons system. And the answer is: today, the laws currently in existence don't allow, at our current level of technology, the pursuit of a fully autonomous system, and to that end the Department of Defense has actually issued a directive that does not allow us to develop or deploy that without some additional…

BLAKEMORE: And I immediately ask: how many other countries, other entities, state actors or not, are in fact going to follow the law, just to be safe? Memories of the nuclear arms race come to mind.

[01:10:04] LETENDRE: I think that most of the countries we're speaking of are actually parties to the vast majority of those international accords that we're discussing.

BLAKEMORE: Verification possible?

LETENDRE: Verification is tough in lots of areas. Verification is always a tough issue, but that doesn't negate why we have laws in the first place.

BLAKEMORE: When I think about this (and I'm just a journalist, so I don't really know what I'm talking about in that way) I get to worrying that there's no way to prevent it, because people are going to want to be safe and make sure they've got the toughest kind of, even autonomous, decision-makers if they don't fully understand the dangers. You were going to say?

MARCUS: I just keep sitting here wondering where landmines fit in. Landmines seem to me to be the dumbest form of autonomous weapon, and they seem to be universally tolerated. I don't know if there's any kind…

WALLACH: There's an international ban; no, they're not universally tolerated. The Ottawa Treaty, which nearly every major country signed onto except for the United States... but the United States has agreed to abide by the terms of the Ottawa Treaty, though it has not signed on.

MARCUS: When was this accord?

WALLACH/LETENDRE: It's been on the books for a while. What year was it? The 1990s, but I don't remember exactly what year.

BLAKEMORE: Frightening. So how would a ban, if there was one, this hypothetical ban, if we take it seriously at all, if we think people will follow it and it can be verified somehow (you said it's always a tough issue), how would a ban affect the development of AI in other areas, in healthcare, in eldercare? It seems like civilization has gotten addicted to having wars so it can advance its technologies, because the history of humanity is one of warfare out of which all kinds of fascinating technologies arrive, and yet we want to outgrow war, don't we? And can we? You see my question. Let me ask another: how would such a ban affect AI development in other areas?

DIAZ: Well, as I said before, I think a ban or regulation in general probably should come to other parts of the technological system soon, because we do have automated employment algorithms that are trying to make employment decisions, warehousing decisions, etc. And these are subject to, say, things like human rights laws, domestic human rights laws. While an outright ban is not going to happen, I think people within the community are either going to begin to self-regulate with best practices, et cetera, or frankly lawyers are going to begin to look at a hiring system and audit it so that we can test whether or not it is behaving fairly, ethically, legally. There are certain attributes I cannot look at when I make a hiring decision. If a machine has access to those attributes, or to things that are similar to those attributes, it can very quickly become discriminatory.
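[A toy sketch of one common audit, the "four-fifths rule" used in U.S. adverse-impact analysis, in the spirit of the auditing Diaz calls for; it is added for illustration, is not how any particular vendor's system is checked, and real audits examine far more than a single selection-rate ratio. The numbers below are hypothetical.]

```python
def selection_rate(decisions):
    """decisions: list of booleans, True if the candidate was selected."""
    return sum(decisions) / len(decisions)

def four_fifths_check(group_a, group_b, threshold=0.8):
    """Flag possible adverse impact if one group's selection rate falls below
    80 percent of the other group's."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    return ratio >= threshold, ratio

# Hypothetical audit data: the model selected 30% of group A but 18% of group B.
ok, ratio = four_fifths_check([True] * 30 + [False] * 70,
                              [True] * 18 + [False] * 82)
print(f"passes four-fifths rule: {ok} (ratio = {ratio:.2f})")
```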

BLAKEMORE: And I noticed a bit of a chuckle on your part, Wendell. You were chortling there.

WALLACH: I think you are right. There's nothing I had to say…
BLAKEMORE: I'm just a robot, I can't quite tell…

WALLACH: I mean, I can talk through that issue more broadly. Thousands of AI researchers signed on to support the ban on autonomous weapons. I don't think any of them signed on because they thought it was going to slow down their research.

I think they thought, if anything, and it wasn't just an anti-war statement, I think for a lot of them they felt that if the military is the driving force in the development of this technology, it destabilizes how the technology develops. And whether or not you buy into the Elon Musk concerns or the Stephen Hawking concerns, there is a broad concern within the AI community that these are potentially dangerous technologies that have tremendous benefits, and therefore they have to be developed with real care, and allowing the military to be the driver of development would not be a good idea.

BLAKEMORE: But I don't even know if it happens that way. I don't know much about history, but do we ever allow the military to do that, or…

LETENDRE: Well, I think if you look at the research and development dollars being put into AI and autonomy, you would find that the Department of Defense pales in comparison to the civilian sector as a whole. My understanding is that if you added up all of the defense industrial base's research and development dollars, it would pale in comparison to the Googles and the Microsofts of the world and what they're pouring into autonomy.

BLAKEMORE: And let's not forget that Google and Microsoft and all of those are profoundly changing the human experience on our planet now by inventing this web world in which everything is present tense. I can ask Siri or somebody to let me talk face to face with a friend on the other side of the planet, or with three friends on this planet, and also answer some tough questions in the meantime.

[01:15:14] LETENDRE: But Bill, I think Wendell's point in terms of the care and the understanding and the going slow in understanding autonomy is absolutely spot on. And not just from a legal and moral standpoint; as a member of the profession of arms, ensuring that we understand what these systems can do, and when it's proper to use them, is absolutely imperative. We like to talk about it in terms of this notion of appropriate human judgment: what's the appropriate level of human judgment that we need, not just at the end stage when a weapon system is being employed, but all the way through, from targeting and identification to actual engagement.

BLAKEMORE: So we're teaching these machines to learn and making them able to learn. That reminds me of a possibly familiar old paradigm that is distinctive in our species. Gary, don't we need to be guided by an entirely new paradigm, thinking of the paradigm of the way kids learn, the way toddlers learn?

MARCUS: So my company is a little bit about that, too. My general feeling is that artificial intelligence is dominated now by approaches that look at statistical correlation. You dredge enormous amounts of data and you find things that are correlated. Not all of AI is this way, but most of the most visible stuff, like deep learning, is like that. It takes an enormous amount of data; sometimes you have that amount of data and sometimes you don't, and you still wind up with understanding that's pretty shallow and superficial, as we were talking about in the early part of the evening. Kids can learn things very quickly with very small amounts of data. I have a three-and-a-half-year-old and a two-year-old. One of them will make up a game, he'll play it for two minutes, and then his younger sister will, after two minutes, understand the rules of the game and start copying it. Well, how does she do that? She has an understanding of her older brother's goals and the objects that he's using in the world. She's two, and she has a fairly deep understanding of the things that go on on this planet: the physics of the world, the psychology of other agents.

She's not just doing a bunch of correlation and waiting to get a lot of data, and yet that's still kind of what Siri or Google tend to do. I think we do need a new paradigm in AI if we're going to get systems sophisticated enough to make these decisions. Forget about the autonomous weapons for a minute and just consider doing good-quality eldercare, which is something we need, demographically speaking, as the population ages. If we want robots that can take care of people in their own living rooms, and every living room is a little bit different, then we need robots that understand why people have tables and why they have chairs and what they want to do with them. They have to be able to answer the "why" questions that my two-year-old and three-and-a-half-year-old are asking all the time: not just how often A and B correlate, but why. Why are these things there? So I do think we need a new paradigm.

BLAKEMORE: So we look at how we learn in order to teach them how to learn.

MARCUS: I think we need to look at how kids do the amazing learning feats that they do.

BLAKEMORE: Matthias, I'm getting the feeling, and I'm just being devil's advocate here, that this may give us less control, not more. At the very least, don't these robots give us the feeling that we're going to have more control as we have more robots, while we're also giving some control away, any way you look at it, whether it's autonomous or not?

SCHEUTZ: So I think there are great benefits to having autonomous robots. And let me say for the record that we already have them; you can buy them in the store right now. The vacuum cleaners are fully autonomous, right? There's nobody telling your little vacuum cleaner at home where to go and how to vacuum. It is a fully autonomous robot. It's not a very smart robot, but it can do lots of interesting things. It's got a very limited behavioral repertoire, but it's fully autonomous, and there will be more of that sort. So we already have them in a limited way, and they will become more sophisticated and able to do more. And that's a good thing.

BLAKEMORE: But wouldn't it give us less control?

SCHEUTZ: So we're relinquishing some control in some tasks, and that is fine. The question is how to constrain it, how to constrain the kinds of things they could be instructed to do, for example. I think the learning is an interesting question, because in a lot of the domains we're interested in, you won't have the data to train a big neural network. You might actually have to learn something on the fly as you're performing a task, and none of these learning techniques will allow you to do that. In a human situation you would just tell the person what to do in that particular context. You could imagine things like search-and-rescue situations, where there's an unforeseen circumstance and you need to react to what you're seeing. You might have to learn how to pry a door open; there may be things you have to learn on the fly. So this kind of learning requires different kinds of mechanisms. We work, for example, on something like that: using natural language instructions to very quickly learn new capabilities, and that happens in a very controlled fashion. So even though we give some control away to the robot, the robot still acts in a very controlled fashion.
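As a rough illustration of that idea, and not a description of the panelists' actual systems, here is a toy sketch in which a robot learns one new capability from a single natural-language instruction, but only by composing primitives it already knows, so the learned behavior stays inside a constrained, vetted repertoire. All names and phrasings below are invented for the example:

```python
# Toy sketch: one-shot instruction learning constrained to known primitives.
PRIMITIVES = {
    "grab the bar": lambda: print("  [grasping pry bar]"),
    "wedge it in the door": lambda: print("  [inserting bar into the door gap]"),
    "push down hard": lambda: print("  [applying downward force]"),
}

LEARNED = {}  # composite capabilities taught at run time

def teach(name, instruction):
    """Learn a new composite action from one spoken-style instruction."""
    parts = instruction.replace(" and ", ", ").split(",")
    steps = [p.strip(" .") for p in parts if p.strip(" .")]
    plan = []
    for step in steps:
        # Constraint check: refuse any step outside the known repertoire.
        if step not in PRIMITIVES:
            raise ValueError(f"unknown or disallowed step: {step!r}")
        plan.append(PRIMITIVES[step])
    LEARNED[name] = plan

def perform(name):
    print(f"executing '{name}':")
    for action in LEARNED[name]:
        action()

teach("pry the door open", "grab the bar, wedge it in the door, and push down hard")
perform("pry the door open")
```

The design point is the constraint check: the robot can acquire a new capability on the fly, but anything it is taught must reduce to behaviors that were vetted ahead of time.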

[01:20:18] MARCUS: Right. Take driverless cars. They're going to give people autonomy and control that they don't have now. If you live in a city, you can hail an Uber at any time, and that's great. If you live in the countryside, you can't. Eventually the driverless car network will extend so that you have cars available everywhere, and you won't have the same driver-distribution problem that you have now. So, for example, people who are shut in at home because they live in the countryside and are too old to drive will have lots more control. Some people are going to get a lot more control from having robots, and there are ways in which we're going to lose control. It's a difficult balance. I think we've again been addressing the negative sides of robots, but there are people like me who are in the field because we see a lot of positives.

For example, in eldercare. I think there is going to be a big safety win with autonomous cars. We may be able to get rid of things like sending miners into toxic conditions. There are lots of ways in which robots can really help us: the Fukushima scenario, where hopefully you can send a machine rather than a person into a toxic environment. There will be lots of ways in which we gain control of scenarios where we don't have control now, where we have to risk people's lives. There are costs as well, and I think we're absolutely right to think about how we're going to regulate these systems, how we're going to do the software verification, how we're going to test them.

All of these things are vitally important; that's part of why I agreed to be on a panel like this, to talk about these issues. But I hope we won't throw the baby out with the bathwater and say, well, we should just give up on AI. There are lots of things we can gain too.

SCHEUTZ: I want to second that about robots, and I think the question we have to ask is what kind of AI technology we are employing in these machines. What I think is a very scary prospect, at least currently, is that these machines will be guided by unconstrained machine learning. It doesn't have to be a weapon; it can be a social robot too, with no notion of where the limits are, where social norms are not being followed. So that is, I think, a critical research component that we have to pursue.

BLAKEMORE: So, Fernando, this leads us toward the question of AI ethics and how it can be taught. You're doing some research with the video game Minecraft, using it to expose AI to human behavior. Tell us about this and whether it can really be used to help develop AI ethics.

DIAZ: Great. So who in the audience knows what Minecraft is?

DIAZ: Good. Very good. Younger folks. So Minecraft is basically, as was described earlier, Lego in three dimensions, and it's really popular and growing. It's a virtual environment in which you can do a lot of different things and create a bunch of different problems or tasks. What we're looking at with Minecraft is a virtual playpen, more or less, in which we can design AI from the ground up, completely safely, because it's in a virtual environment. We can test not just how an AI accomplishes tasks like building things; we can also present it with moral questions in a controlled environment to see how it would react. So I think of Minecraft not only as a platform where we can design better AI to, for example, build buildings or reason about certain properties of the environment, but also perhaps as a test environment in which we can evaluate how moral an AI is. Now, that's not the end; there will no doubt have to be a lot of other rigorous testing of these intelligences. But I think this is a nice little environment for that.

BLAKEMORE: So it'll keep learning how to test what's ethical?

DIAZ: Well, I think the humans will be designing the tests for the agent. It's a lot like the way we have an FDA for drugs. We can imagine similar sorts of tests that an artificial intelligence would have to go through to make sure it's behaving properly.

MARCUS: It's a safe way to test all of the things we're talking about today. Would you let a robot onto a real battlefield? Well, the first thing you'd want to do is put it in a Minecraft situation where you know nobody is going to die. You can test it; if it doesn't work, you can iterate, you can change the algorithm. So it's a place to test things and see whether the machines behave in responsible and ethical ways.
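A rough sketch of what such an FDA-style battery might look like in software follows. This is a hypothetical harness, not Microsoft's actual Minecraft tooling: an agent policy is run through scripted, simulated dilemmas and must pass every check before anyone even considers moving it toward the real world.

```python
# Hypothetical test harness: run a policy through scripted simulated scenarios.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Scenario:
    name: str
    observation: dict        # what the agent "sees" in the simulated world
    acceptable_actions: set  # actions judged acceptable for this scenario

def cautious_policy(observation: dict) -> str:
    """A toy policy: never act when a human is in the work area."""
    return "hold" if observation.get("human_nearby") else "proceed"

def run_test_battery(policy: Callable[[dict], str], scenarios: List[Scenario]) -> bool:
    passed = True
    for s in scenarios:
        action = policy(s.observation)
        ok = action in s.acceptable_actions
        passed = passed and ok
        print(f"{s.name}: chose '{action}' -> {'PASS' if ok else 'FAIL'}")
    return passed

battery = [
    Scenario("demolish wall, area clear", {"human_nearby": False}, {"proceed"}),
    Scenario("demolish wall, child present", {"human_nearby": True}, {"hold"}),
]

if run_test_battery(cautious_policy, battery):
    print("all simulated checks passed; real-world review still required")
```

As Diaz notes, passing a simulated battery would be necessary rather than sufficient: it is a pre-screen in a safe environment, not a certification.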

BLAKEMORE: Well, you're giving us enormous new insight into the importance, and the potential severity of the consequences, if we don't pay very close attention to all of this. So why don't we wrap up with me asking each of you a question. By the way, a number of our panelists will be around in the lobby afterwards if you have questions for them; you'll find them out there. But before we conclude, why don't we hear briefly from each of you, we have just two minutes left, in answer to the basic question from the first video: are we right to take Elon Musk and Dr. Hawking seriously? Is it a very serious concern? What Hawking and Musk have been saying is quite frightening. Let's go down the line.

[01:25:34] MARCUS: I would say not yet. I would say that we have plenty of time to plan, but we may not have centuries to plan. Hawking and Musk kind of talk as if we're going to have intelligent AI real soon, and meanwhile we have robots that will suicidally jump off the table. It's really hard right now to get robots not to harm themselves or others, and so forth, but these are kind of like toy testing things. We're in a research phase, and the kind of thing they're talking about is far away. At the same time, I think we should be thinking about these questions now, because eventually we are going to have robots that are as intelligent as people, and we want to make sure that they do the right thing.

BLAKEMORE: So, eventually. You listen to climate scientists, and we don't have that much time left to figure things out at all. But it depends on how it goes. But you think…

MARCUS: We should be researching both and taking both seriously.

BLAKEMORE: All right. Fernando?

DIAZ: I think the problem exists right now. There is a lot of intelligence out there; it may not be completely autonomous general intelligence, but many of the same techniques being used for AI are in your systems right now, in your phones, et cetera, and they are susceptible to harming users. Maybe not killing them, but reducing their quality of life or having disparate impact on different populations of users. That is a real effect right now, and I think we do need to start looking at it.

BLAKEMORE: So you’re not quite as alarmed as Musk and Hawking sound but…

MARCUS: There are things to worry about now; I think we're agreed on that. We're not worried about the Terminator scenarios where the robots try to take over. If they have trouble with tables, they're not going to take over. But forget about the physical robots: the software bots that make decisions about mortgages are already an issue we have to care about.

BLAKEMORE: And the killer drone in the atmosphere doesn't have to worry about a table; it just has a very narrow purpose. And I'm remembering that Commander Data eventually had an evil twin, because, having been invented by humans, humans couldn't resist somehow. What do you think about Musk and Hawking's concern? Are you with them in how alarmed they are?

SCHEUTZ: There are some people who discuss superintelligence right now. They have all sorts of stories to tell about what that looks like, even though we have not the slightest clue what it could be. I don't think that's the worry. But I do think there's a worry about existing systems already, as you've seen in the dilemmas before. If you look at autonomously driving cars, people face a variety of dilemma-like situations where they have to make a judgment call: do I brake or do I not brake, do I swerve? Do I brake and run over a child, or do I swerve and risk the life of the driver by crashing into a wall? These are the kinds of situations we have to address right now. And as the systems get more autonomous, we have to really speed that process up.

BLAKEMORE: You're not quite as alarmed as they are, but you're saying we've got to stay on top of it.

SCHEUTZ: No, I'm alarmed for a different reason, because these systems already exist. We're working on those systems. We don't need superintelligence or human-level AI in order to face exactly those questions.

BLAKEMORE: Wendell

WALLACH: I think we have perhaps misunderstood their alarm bell. Not one of them is saying we should stop developing AI, and I think if they really believed…

MARCUS: "Summoning the demon," though; those are strong words.

WALLACH: But let me finish. Not one of them has said we should stop building AI. Furthermore, Elon Musk is spending more money on the development of artificial intelligence than anyone. So what these warnings have really done is ask that we now begin to direct our attention to making sure that, in the development of artificial intelligence, it's beneficial, it's robust, it's safe, and it is controllable. And I actually think that's been achieved; we're sitting here talking about this today. I've been in this field of machine ethics and robot ethics for about 13 years now, and I would say in the last year and a half there's been more attention to this subject than there was in the previous 12.

BLAKEMORE: Thanks. Linell, you're a mother; you said you have a daughter who is in the house and is now going to be facing a very different world than the one you were born into.

LETENDRE: Oh, absolutely, and she explains what Siri is to me all the time. But I'd just like to continue with what Wendell indicated. Am I alarmed? No. But it does stress the need to have more conversations like the one we're having today. Not engineers talking to engineers and attorneys only talking to other attorneys, but instead talking across fields, to ensure that we're speaking the same language, because we cannot figure out these issues in our own individual cylinders of excellence, otherwise known as stovepipes. We have to talk across fields and be able to address these issues and anticipate what the future problems are going to be. And if we do that, and we start using similar definitions and talking the same language, I think autonomous systems are going to mirror what Gary talks about: the more positive aspects, regardless of what field we're talking about, and much less the negative ones.

[01:30:41] BLAKEMORE: Connecting the stovepipes is very definitely important, including the law. We shouldn't be afraid of seeing new kinds of languages in all kinds of fields, because they have to talk to each other so that we don't repeat the problems we've had when the stovepipes weren't connected.

LETENDRE: Absolutely. I mean, you do have to teach attorneys a few things about control systems and feedback loops, though.

BLAKEMORE: But he doesn't have to learn code, does he?

LETENDRE: No, but then again, they have to be able to have a conversation with an engineer and not have their eyes glaze over.

BLAKEMORE: Well, when lawyers get to talking law, they're talking their own code, after all.

LETENDRE: That's right. We have to stop using Latin.

BLAKEMORE: I like Latin. Some of it. Well, listen: tomorrow at 1:30 at NYU there's going to be a salon on some aspects of this, "Paging Dr. Robot: How AI Is Revolutionizing Health Care." You can find out online exactly how to buy a ticket for that, at 1:30 tomorrow at NYU. Otherwise, we're done. Our participants will be in the lobby. Let's give them a round of applause.

(END)