MINDWORKS

Mini: Expect the Unexpected (William Casebeer and Chad Weiss)

March 28, 2021 Daniel Serfaty

Life is anything but predictable. When designing systems that utilize artificial intelligence, what do you have to account for? MINDWORKS host Daniel Serfaty was also curious, so he asked the experts to find out more! Join Daniel as he speaks with Dr. William Casebeer, Director of Artificial Intelligence and Machine Learning at Riverside Research Open Innovation Center, and Mr. Chad Weiss, Senior Research Engineer at Aptima.

 

Listen to the entire interview in The Ethics of Artificial Intelligence with William Casebeer and Chad Weiss.

Daniel Serfaty: …this is the most sophisticated and clearest explanation of how complex the problem is, both from the designer's perspective and from the operator's perspective. It is not just an issue of who has control of what; there are many more contextual variables that one has to take into account when even conceiving of those systems. Chad, do you have an example you want to share with us?

Chad Weiss: Yeah. So first of all, Bill, great answer. I would advise anybody who is going to do a podcast with Bill Casebeer not to follow. The point that you bring up about remote kinetic capabilities is an interesting one. I think Lieutenant Colonel Dave Grossman covers that in his book On Killing, about the history of humankind's reluctance to take the lives of other humans. And a key variable in making that a possibility is increasing the distance between the trigger person and the target, if you will. One thing that strikes me in the military context is that what we're talking about today is not new in any way. As we stated, it goes back to ancient Greece. It goes back to Mary Shelley and all of these different cultural acknowledgements of the moral hazards that are presented by our creations.

And the history of technology shows that as much as we like to think that we can control for every eventuality, automation fails. And when automation fails or surprises the user, it fails in ways that are unintuitive. You don't see automation fail along the same lines as humans; it fails in ways that we would never fail. And I think that probably goes vice versa as well.

So something that keeps me up at night is the idea of an AI arms race with military technologies, that there is an incentive to develop increasingly powerful automated capabilities faster than the adversary. We saw with the nuclear arms race that this puts the world in quite a bit of peril. And what I am a little bit fearful of is the idea that we are moving towards AI superiority at such a pace that we're failing to really consider the implications and temper our developments in such a way that we're building resilient systems.

Bill Casebeer: Yeah, that's a really critical point, Chad, that we need to be able to engineer systems in such a way that they can recover from the unexpected: from the unexpected behavior of the larger system they're part of, and from unexpected facts about the environment they're operating in. And that's part of the reason why, in the United States, our doctrine presently, and praiseworthily, requires that a soldier be involved in every use-of-force decision.

Just because we're aware of these unknown unknowns, both in the operation of the system and in the environment it's working in. And so bringing human judgment in there can really help to tamp down the unintended negative consequences of the use of a piece of technology. Now the flip side of that, of course, and I'd be interested in your thoughts on this, Chad, is that as we use autonomy, and I agree with you that there is almost a ratchet, a type of inexorable increase in the use of autonomy on the battlefield because of its effect: you can act more quickly and perhaps deliver a kinetic solution, if you will, to a conflict quicker than you could otherwise. So for that reason, the use of autonomy on the battlefield is going to increase.

What we might want to consider, given that the object stares back, is how we engineer some of that resilience into the autonomous system itself, even if we're not allowing deadly-force judgment and decision-making to take place on the autonomy side. And I think that's one reason why we need to think about the construction of something like an artificial conscience. That is, a moral governor that can help some of the parts of these complex and distributed systems consider and think about the ethical dimensions of the role they play in the system.

And I know a lot of people have a negative reaction to the idea that artificial intelligence could itself reason in the moral domain, perhaps for good Aristotelian or Platonic reasons, reasons that stem from the Greek tradition in which we usually only think of people as being agents. But it may very well be that as our tools start to stare back, as they become more richly and deeply cognitive, we need to think about how we engineer some of this artificial conscience into the system, the ability to make moral judgments and the ability to act on them, even independently of a human, so that we can give them the requisite flexibility they need.

Chad Weiss: Yeah, that's a great point. It strikes me that we've really been discussing this from one side, which is: what are our ethical responsibilities when developing and using artificial intelligence? There's also a question of not only what our responsibilities are towards the AI that we're developing, if in fact there are any, but what does the way that we think about AI say about the human animal?

Bill Casebeer: Yeah, well, that's a really interesting point. Maybe we're spring-loaded to think, "Oh, a robot can't have a conscience." I think that would be too bad. I think this requires a more exacting analysis of what it means to have a conscience, so we should probably talk about that. I think of it as being something like the capability to reason over and to act on moral judgments. And of course the lurking question here is to actually give some content to what we mean by the phrase "moral judgment." So what is morality? And that's the million-dollar question, because we've been around that block for a few thousand years now, and I suspect that Daniel and Chad, both of you could probably give some nice thumbnail sketches of what the domain of morality consists in, but I'll give that a go because that might set us up for more questions and conversations.

So I think of morality or ethics as really consisting of answers to three questions that we might have. We can think that any judgment or action I might take can have positive and negative consequences. So that's one theory of morality: what it means to be ethical or to be moral is to take actions that have the best consequences, all things considered. And that comes from a classic utilitarian tradition that you can find in the writings of folks like John Stuart Mill, probably the most famous proponent of the utilitarian approach to ethics.

And on the other hand, folks like Aristotle and Plato were more concerned to think not just about consequences, but also about the character of the agent who is taking the action that produces those consequences. So they were very focused on a character-oriented analysis of ethics and morality. In particular, they thought that people who have good character, people like Daniel and Chad, are exemplars of human flourishing, that they are well-functioning, well-put-together human beings. And so that's a second set of questions we can ask about the morality of a technology or of a system. We can ask: what's its function, and is it helping people flourish? Which is slightly different from the question of what the consequences of enacting the technology are.

And then finally, we can also think about ethics or morality from the perspective of whether we have obligations that we owe to each other, as agents, as people who can make decisions and act on them, obligations that are independent of their consequences and independent of their effect on our flourishing or our character. And those are questions that are generally ones of rights and duties. So maybe I have a right, for instance, not to be treated in certain ways by you, even if it would be good for the world if you treated me in that way, even if it had good consequences.

So that's a third strand or tradition in ethics; it's called the deontic tradition. That's from a Greek word that means the study of the duties that we have towards each other. And you can see this in the writings of somebody like Immanuel Kant, who can be difficult to penetrate, but who really is carrying the torch in the Western tradition for thinking about the rights, duties, and obligations that we have independent of consequences.

So those three dimensions are dimensions of ethical evaluation: questions about the consequences of our actions, questions about the impact of our actions on our character and on human flourishing, and questions about rights and duties that often revolve around the notion of consent. So I call those the three Cs: consequence, character, and consent. And if you at least incorporate those three Cs into your questions about the moral dimensions of technology development, you'll get 90% of the way toward uncovering a lot of the ethical territory that people should discuss.

Daniel Serfaty: Thank you, Bill. I'm learning a lot today. I think I should listen to this podcast more often. As an aside, I know that you're a former military officer because you divide everything into threes.

Bill Casebeer: Right.

Daniel Serfaty: That's one of the definitions. Thank you for this clarification, I think it's so important. We've ordered that space a little bit, and we understand those dimensions a little bit better. I've never heard them classified the way you just did, which is very important. I want to take up your notion of an artificial conscience a little later, when we talk about possible approaches and solutions to this enormous, enormous human challenge of the future. I would go back now to challenge you again, Chad. You keep telling us that these are problems that have been with us almost since the dawn of humanity, that the ancient Greek philosophers struggled with these issues. But isn't AI per se different? Different qualitatively, not quantitatively, in the sense that it is perhaps the first technology, or technology suite, or technology category, that is capable of learning from its environment?

Doesn't the learning itself put us now in a totally different category? Because when you learn, you absorb, you model, you do all the things that you guys just mentioned, but you are also able to act based upon that learning. So does AI represent a paradigm shift here? You're welcome to push back and tell me no, it's just on the continuum of developing complex technologies. I want to challenge both of you with the notion that we are really witnessing a paradigm shift here.

Chad Weiss: You know, it's interesting, I would push back on that a bit. Certainly the way that AI learns and absorbs information, modern AIs, is different from traditional software methods. But the ability of a tool to learn from the environment, I don't think, is new. If you look at a hammer that you've used for years, the shape of the handle is going to be in some way informed by the shape of your hand, which is certainly a very different kind of learning, if you're willing to call it learning at all. But ultimately I think that what we're seeing with AI is that it is shaping its form, in a sense, in response to the user, to the environment, and to the information it's taking in. So I don't think that it's unique in that regard.

Daniel Serfaty: Okay. I think we can agree to disagree a little bit.