MINDWORKS

Mini: I'm sorry, Daniel. I'm afraid I can't do that. (Jared Freeman and Adam Fouse)

March 09, 2021 Daniel Serfaty

“I’m sorry, Dave. I’m afraid I can’t do that.” — nobody wants to hear their AI teammate utter a chilling sentiment like that, especially when they are millions of miles away from humanity and trying to re-enter their spaceship! HAL from Stanley Kubrick’s “2001: A Space Odyssey” was designed to work alongside humans as a teammate; however, HAL ended up working against its human teammates due to a lack of ethical considerations in its design. To prevent something like this from becoming a reality, what ethical considerations should we take into account when designing AI systems?

Listen to the entire interview in The Magic of Teams Part 4: Human-AI Teams with Jared Freeman and Adam Fouse.

Daniel Serfaty: But are there some ethical considerations to this particular marriage? Whether it's in your job or in the example that you gave earlier, the learner and the teacher or the pilot and the automation. Are there ethical considerations there that we should consider, that we should worry about, and perhaps that we should guard against, prevent, or anticipate? Who wants to go there? Because that's a tough one. Jared, go ahead.

Jared Freeman: So I want to give a trivial example that actually has some deep implications. Let's imagine an Easter egg hunt, and we send our small children out. There's a little AI robot in the hunt as well, and the AI robot discovers that the single most effective way to get the most eggs is to knock over all the little kids. This is behavior that we don't want our children to observe, and we certainly don't want them to adopt it. It requires some ethical sense within the AI to choose other strategies to win. So where's the depth here, right? Let's just translate this into a warfare scenario in which the optimal strategy for war, right, is to remove the adversary from the game. You can do that in a lot of ways: trap them in an area, bomb them, and so forth. It is well within the ethical bounds of war, and we want AI to have the liberty to take those actions, perhaps of killing others or at least of entrapping and nullifying others. It needs to understand that that is an ethical option in that domain and should use it when it absolutely needs to.

Daniel Serfaty: Okay. That's a pretty sobering perspective, because those emergent behaviors actually can happen. But the question is: is it our responsibility as scientists and engineers to engineer ethical rules, almost in an Asimov kind of way, into AI? Or are we expecting that AI will develop those rules internally from observing others' behaviors, and derive them and exhibit them in some kind of emergent behavior? Adam, what do you think? Ethical considerations in designing human-AI teams?

Adam Fouse: That last point you brought up, Daniel, is I think the really important one, which is that relying on AI to behave ethically through observation of humans or society, which does not always behave ethically, is something we need to be very vigilant about; we have to look for and counteract things that might unintentionally happen in that setting. And I think we want to have AI that is ethical, and we also want to have the ethical application of AI. We've already seen cases where we train AI models to help with decision-making, but because we exist in a society that has lots of inequality, those models just end up encapsulating that inequality. A real danger there, in terms of thinking about this from the human-AI team perspective, is that humans then assume that this AI is objective: it's doing number crunching, and therefore it can't have any biases about race or income levels or other marginalized aspects of society.

It's just going to capture those things that already exist. And so I think one of the things we need to be very careful about is, when we are designing AI, to make sure that we look for those things, but then also make sure that when we apply that AI, we do it in such a way that there are processes or structures in place to look for those biases and counteract them, even when they do exist. Make sure that there are humans involved in those decisions who might be able to see something that isn't quite right, and either have the combined input of the two make a better decision, or feed back in to say, "This AI can be improved in some way."

Jared Freeman: I want to follow on to Adam's very good point there. So here's a perfect moment to look at the way that humans and AI can collaborate. We know that when AI learns from historic data, it embodies the biases that are in those data. We know that when humans try to write rules for symbolic AI systems, those systems turn out to be quite brittle. And so an alternative, or a complement, to those two is to ensure that AI programs in which ethics matter, such as military programs, first establish a set of ethical principles, bounds of behavior, and use those in test and evaluation of AI that learns its ethics or whose ethics get built in by programmers. There needs to be, at the moment, a human at the top of the stack who has a set of principles, a set of test cases, a way to evaluate AI on its ethics.

Daniel Serfaty: Yes. Well, thank you both for these profound and thoughtful remarks regarding ethics. I think, in this engineer's career, this is probably the period in which philosophy and design are merging the most, precisely because we are creating these intelligent machines, and we use the word "intelligence" with a lot of caution. As engineers, as scientists, we need to think very deeply about the way we want those machines to behave. We didn't have that problem so much when we were building bridges or airplanes or cars before, but now it is very important. I believe that all curricula in engineering, all curricula in computer science and computer engineering, should include ways to think deeply about, and maybe even to design into the systems, these notions of ethical principles.