MINDWORKS

Mini: Does AI dream of electric sheep? (William Casebeer and Chad Weiss)

March 23, 2021 Daniel Serfaty

In the premiere episode of MINDWORKS, “Meet your new AI coworker – are you ready?”, the MINDWORKS audience was introduced to Charlie, the world’s first AI employee. You heard her speak and you heard her ideas, but was she thinking, or did she find a hidden pattern that is opaque to human eyes? More broadly, is AI learning and thinking? Join MINDWORKS host Daniel Serfaty as he discusses these questions with Dr. William Casebeer, Director of Artificial Intelligence and Machine Learning at the Riverside Research Open Innovation Center, and Mr. Chad Weiss, Senior Research Engineer at Aptima.

 

Listen to the entire interview in The Ethics of Artificial Intelligence with William Casebeer and Chad Weiss.

Daniel Serfaty: This podcast, by the way, was prompted by a question that Chad asked me several months ago. Members of the audience who listened to the first and second episodes, which focused on this artificial intelligence employee at Aptima called Charlie, so to speak, will remember that there was a moment in which Charlie was fed thousands of pieces of rap music by different artists and then came up with her own rap song, and she, that's a she, was not just mimicking the songs or even the rhythms she had heard before; it had an almost striking originality.

So the question is: what did Charlie learn? And by that I mean, this goes back to a point that Bill mentioned earlier about this notion of emergent behavior, surprising things. Did Charlie just mimic, producing some kind of algebraic sum of all the music she was given? Or did she find a very hidden pattern, one that is opaque to our human eyes, that she was able to exploit? That's why I believe AI is changing: we don't know exactly what it learns in those deep learning schemes. We think we do, but from time to time we're surprised. Sometimes the surprise is very pleasant and exciting because we get a creative solution, and sometimes it can be terrifying. Do you agree or disagree with me, for that matter?

Chad Weiss: I hope you don't mind if I shirk your question a little bit, because you brought up a couple of things in it that make me a little uneasy, not least of all that I think my rap was objectively better than Charlie's. It had more soul in it. But in all seriousness, the concept of the artificial intelligence employee is something that gives me pause. It makes me uncomfortable, because this is one of those areas where I think we have to take a step back and ask what it reflects in the human animal.

Because if you look at the facts, Charlie is here at Aptima through no will of her own. Charlie is not paid, and Charlie has no recourse against any perceived abuse, if in fact she can perceive abuse. If Charlie starts to behave in a way that we don't necessarily like, or that's not conducive to our ends, we will just reprogram Charlie. So the question that raises in my mind is: what is it in the human that wants to create something they can see as an equal and still have control over, still have dominion over? Because the characterization of Charlie that I just laid out doesn't sound like an employee to me; it sounds a little bit more like a slave. And I think there's some discomfort around that, at least in my case.

Daniel Serfaty: Very good point, Chad; that's something that you and I and other folks have been thinking about. Because suddenly we have this, let's call it a being, for lack of a better term, we don't exactly have the vocabulary for it, that is in our midst, that participates in innovation sessions, that writes chapters in books.

And as you said, the anthropomorphization of Charlie is a little disturbing. Not because she's not embodied or doesn't have a human shape, but because we use a word like employee. She has an email address, but she does not have all the rights, as you said, and all the respect and consideration and social status that other employees have. So, a tool or a teammate, Bill?

Bill Casebeer: These are great questions. And I think that I come down more like Chad on this topic in general. I don't think there's anything new under the sun in the moral and ethical domain, simply because we have several thousand years of human experience dealing with a variety of technologies, and so it's hard to come up with something that is entirely new.

Having said that, I think there is a lot of background that we take as a given when we think about the human being, when we think about ourselves. If I just, from a computational perspective, consider the 10 to the 14th neurons I have in my three-pound universe here atop my spinal cord, the 10 to the 15th power connections between them, and the millions of hours of training, experience, and exemplars I will have seen as I sculpt that complicated network so that it becomes Bill Casebeer, there's a lot of that going on too.

I don't know exactly how Charlie works; she may be a more traditional type of AI. But if Charlie learns, if she has some limited exposure in terms of training exemplars and sets, if she has some ability to reason over those training sets to carry out some functions, then I think Charlie might be more akin to something like a parrot. Parrots are pretty darn intelligent. They have language, they can interact with people, some parrots have jobs, and yet we don't necessarily accord the parrot full moral agency in the same way that I do a 20-year-old human.

But we do think that a parrot probably has a right not to be abused by a human being or kept without food and water in a cage. So I don't think it's crazy to think that in the future, even though there's nothing new under the sun, our AIs like Charlie might reach the point where we have to accord them parrot-like status in the domain of moral agency, which really leads to the question of what makes something worthy of moral respect.

Daniel Serfaty: Yes, the parrot analogy is very good, because I think it better reflects the place where Charlie and her cohort of other AIs, the modern new generation of AI, are standing. And we need to think about that.

So artificial intelligence systems, whether they are used in medicine, in education, or in defense, are very data hungry. At the end of the day, they are data-processing machines that absorb what we call big data, enormous amounts of past data from that field, find interesting, common patterns among those data, and then use those patterns to advise, to make decisions, to interact, et cetera.

What are some of the ethical considerations we should have, as data scientists for example, when we feed those massive amounts of data to these systems and let them learn with very few constraints on those data? Do we have examples in which the emergent behavior from using those data for action has raised some questions?
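To make the "data-hungry" pipeline Daniel describes concrete, here is a minimal sketch of the absorb-data, find-patterns, then-advise loop. The data set, features, and model choice are illustrative assumptions, not a description of Charlie or of any system mentioned in the episode.

```python
# Minimal sketch of the "absorb big data, find patterns, advise" loop.
# Everything here (data, features, model) is a hypothetical stand-in.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(10_000, 20))                 # "big data": past cases described by 20 features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)     # a hidden pattern the system will try to find

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                       # "find interesting patterns among those data"

# "use those patterns to advise": score new, unseen cases
advice = model.predict_proba(X_test[:5])[:, 1]
print("Model confidence for 5 new cases:", advice.round(2))
```

The point of the sketch is that the "advice" at the end is only as good as whatever patterns, intended or not, were present in the historical data fed in at the top.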

Chad Weiss: That's a great question, and there are a lot of issues here. Some of them are very similar to the issues we face when we are doing research on human subjects, things like: do the humans you're performing research on benefit directly from the research you're doing? I've used the phrase moral hazard a few times here, and it's probably good to unpack that. When I say moral hazard, what I'm referring to is when an entity has an incentive to take on higher risk because they are not the sole holders of that risk; in some sense it's outsourced, or something of that nature.

Some specific examples we have are things like image recognition for the purpose of policing, where we know that, because of the data sets some of these systems are trained on, they tend to be much less accurate when looking at someone who is African American or, in many cases, at women. As a result of being trained on a data set of primarily white males, they are much less accurate when looking at some of these other groups.

And there are some very serious implications to that. If you are using something like image recognition to charge someone with a crime, and it turns out that your ability to positively identify from image recognition is significantly lower for certain demographics of people, then you have an issue with fairness and equity. I believe it was Amazon that was developing an AI for hiring, and they found that no matter what they did, they could not get the system to stop systematically discriminating against women.

And so I think after something like $50 million of investment, they had to pull the plug on it, because they just could not get this AI to stop being chauvinist, more or less. So those are examples where the data sets we use and the black-box nature you alluded to earlier come into play and present some really sticky ethical areas in this domain.
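One way to make the disparity Chad describes concrete is a per-group accuracy audit: evaluate a trained model separately on each demographic group instead of reporting a single overall number. The sketch below uses synthetic data in which one group is both under-represented in training and measured with noisier features; it illustrates the auditing idea only and does not reconstruct any real system.

```python
# Sketch of a per-group accuracy audit: a model can look fine overall while
# performing much worse on an under-represented, poorly measured group.
# All data here is synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)

def make_group(n, noise):
    """Synthetic data for one demographic group; higher noise = poorer feature quality."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] > 0).astype(int)
    X = X + rng.normal(scale=noise, size=X.shape)
    return X, y

# Heavily skewed training set: many clean examples from group A, few noisy ones from group B
Xa, ya = make_group(9_000, noise=0.2)
Xb, yb = make_group(300, noise=1.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The audit: report accuracy per group instead of one overall number
for name, noise in [("group A", 0.2), ("group B", 1.0)]:
    Xt, yt = make_group(2_000, noise)
    print(name, "accuracy:", round(accuracy_score(yt, model.predict(Xt)), 3))
```

Running an audit like this before deployment is one small, practical response to the fairness and equity issues raised in the conversation.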

Daniel Serfaty: These are very good examples. Bill, can you add to those law enforcement, personnel management, and hiring examples? Do we have other cases where the data itself is biasing the behavior?

Bill Casebeer: I think we do. One of the uses of both artificial intelligence and machine learning is to enable prediction, and the ethical dimensions of prediction are profound. You and Chad have both alluded to the possibility that your training data set may, perhaps unintentionally, bias your algorithm so that it makes generalizations it shouldn't be making: stereotypes, classic stereotypes. I know Professor Buolamwini at MIT has done studies about the bias and discrimination present in face recognition algorithms that are used in surveillance and policing.

I think that same kind of use of stereotypes can, for example, lead, as it has with human doctors, to medical advice that doesn't work well for certain underprivileged groups or minorities. If your medical research and experimentation to prove that a certain intervention or treatment works was done mostly with white males, then whether or not it will work for a 25-year-old female hasn't really been answered yet, and we don't want to over-generalize from that training data set, as our AIs sometimes do.

The example that comes to mind for me, as Chad mentioned, is the Tay bot. Tay was an AI chatbot released by the Microsoft Corporation back in 2016, and its training data was the input it received on its Twitter account. People started to intentionally feed it racist, inflammatory, offensive information, and it learned a lot of those concepts and stereotypes and started to regurgitate them back in conversation, such that Microsoft eventually had to shut it down because of its racist and sexually charged innuendo. So that's a risk in policing and in some defense applications, if you're doing security clearances using automated algorithms or determining who is a combatant based on a biased training data set; for medicine, for job interviews, really for anywhere prediction is important.
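A toy sketch of the failure mode Bill describes with Tay: a system that learns directly from unmoderated user input will eventually reproduce whatever it is fed. This is a deliberately simplistic stand-in, not Microsoft's actual chatbot or algorithm.

```python
# Toy illustration of learning from unfiltered input: every message becomes
# training data, so hostile input eventually dominates the output.
import random

class EchoLearner:
    """A naive chatbot that treats every incoming message as training material."""

    def __init__(self):
        self.corpus = ["hello!", "nice to meet you"]

    def learn(self, message: str) -> None:
        # No curation, filtering, or human review before the input is absorbed
        self.corpus.append(message)

    def reply(self) -> str:
        # Replies are sampled from everything the bot has ever been told
        return random.choice(self.corpus)

bot = EchoLearner()
for msg in ["you seem friendly", "<offensive message>", "<offensive message>", "<offensive message>"]:
    bot.learn(msg)

# Once hostile input dominates the corpus, hostile output dominates the replies
print([bot.reply() for _ in range(5)])
```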

The second thing I would point out, in addition to data sets that can cause bias and discrimination, is that people like Nicholas Carr and Virginia Postrel have pointed out that sometimes you get the best outcomes when you take your native neural network and combine it with the outputs of some of these artificial neural networks. If we over-rely on these AIs, we may underuse or shirk this very nicely trained pattern detector, which probably has a lot more training instances in it than any particular AI and an ability to generalize across a lot more domains than a lot of AI systems. So Nick Carr makes the point that one other ethical dimension of prediction is that we can over-rely on our AIs at the expense of our native prediction capabilities. Every day, AI is making people easier to use, as the saying goes.