MINDWORKS

Mini: The Issue of Language (William Casebeer and Chad Weiss)

March 25, 2021 Daniel Serfaty

Language generates mutual intelligibility and understanding. This is paramount for effective communication: if humans do not understand how their AI counterparts think and reason, the team is doomed from the get-go. But what does this all entail? Join MINDWORKS host Daniel Serfaty as he discusses the issue with Dr. William Casebeer, Director of Artificial Intelligence and Machine Learning at Riverside Research Open Innovation Center, and Mr. Chad Weiss, Senior Research Engineer at Aptima.

 

Listen to the entire interview in The Ethics of Artificial Intelligence with William Casebeer and Chad Weiss.

Daniel Serfaty: As we move toward the future and toward potential solutions to the many very thoughtfully formulated problems that you shared with us today, the major recent development in research is to apply the knowledge that we have acquired over many years in the science of teams and organizations to understand the psychology and the performance of multi-person teams, and I use that term deliberately. Because now we use it as a guideline for how to structure this relationship you just described in your last example, Bill, by combining basically human intelligence and AI intelligence into some kind of [inaudible 00:42:11] intelligence that is perhaps better than the sum of its parts, in which each one checks on the other, in a sense.

And as a result, there is some kind of an [inaudible 00:42:20] match that will produce higher levels of performance, maybe safer levels of performance, maybe more ethical levels of performance. We don't know; all these are questions. So could you comment for a second on both the similarities and differences between classical teams that we know, whether they are sports teams, or command and control teams, or medical teams, and those new ones? We don't have a new word in the English language; we still call them teams of humans and artificial intelligences, blended together. Similarities and differences: what's the same, what's different, what worries you there?

Chad Weiss: This is another interesting area. A lot of this hinges upon our use of language, and this is the curse of really taking philosophy of language at a young age. There's a question here of what we mean when we say teammate, what we mean even when we say intelligence, because machine intelligence is very different from human intelligence. And I think that if you are sort of unfamiliar with the domain, there may be a tendency to hear artificial intelligence and think that what we're talking about maps directly to what we refer to when we talk about human intelligence. It's very different.

Daniel Serfaty: Language is both empowering but also very limiting, Chad. That's true. We don't have the new vocabulary that we need, so we use what we know. That's the story of human language, and then eventually that evolves.

Chad Weiss: Thank you.

Bill Casebeer: Language generates mutual intelligibility and understanding. So if you're interacting with an agent that doesn't have language, mutual intelligibility and understanding are really hard to achieve.

Chad Weiss: Yeah. And when we're talking about teammates, when I use the word teammate, it comes packaged with all of these sorts of notions. When I consider a teammate, I'm thinking of someone who has a shared goal, who has a stake in the outcomes. If I have a teammate, there's a level of trust that this teammate, one, doesn't want to fail, that this teammate cares about my perception of them and vice versa, and that this teammate is going to share in not only the rewards of our success, but also the consequences of our failures.

So it's hard for me to conceptualize AI as a strictly defined teammate under those considerations, because I'm not confident that AI has the same sort of stake in the outcomes. Often you hear the question of whether it's ethical to unplug an AI without its consent, and I think that it's very different, because what we're doing there is inherently drawing an analogy to depriving a human of life. You're turning them off, essentially, but turning off an AI is not necessarily the same as a human dying. You can switch it back on; you can copy and duplicate the code that runs the AI. So there's a really interesting sort of comparison between the stakes of a set of potential outcomes for a human and for an AI.

Daniel Serfaty: I appreciate the richness of your perspective on this notion, Bill, especially the ethical dimension of it, but I am very optimistic because of those very questions that we're asking right now, when we pair a radiologist, for example, with an AI machine that has read millions and millions of MRI pictures and can actually combine that intelligence with that of the expert to reach new levels of expertise. As we think through this problem as engineers, as designers, it makes us understand the human dimension even deeper. What you reflected on right now, Chad, about what it means to be a member of a team and what a teammate means to you, that thinking has been forced on us because we are designing artificial intelligence systems and we don't know what kind of social intelligence to embed in them. So my point is that there is a beautiful kind of going back to really understanding what makes us humans special, unique. What do you think about that?

Bill Casebeer: That's really intriguing Daniel. I mean, when I think about the similarities and differences between AIs and people on teams, some similarities that we share with our artificial creations are that we oftentimes reason the same way. So I use some of the neural networks I have in my brain to reason about certain topics in the same way that a neural network I construct in software or in hardware reasons. So I can actually duplicate things like heuristics and biases that we see in how people make judgements in silico, if you will. So at least in some cases we do reason in the same way because we're using the same computational principles to reason.

Secondly, another similarity is that in some cases we reason in a symbolic fashion, and in some cases we reason in a non-symbolic fashion. That is, in some cases we are using language and we're representing the world and intervening on it. And in others, we're using these networks that are designed to help us do biological things, like move our bodies around or react in a certain way emotionally to an event. And those may be non-symbolic. Those might be more basic in computational terms, if you will.

And I think we actually see that in our silicon partners too, depending on how they're constructed. So those are a couple of similarities, but there are some radical differences, as you were just picking up on, Daniel, I think. One is that there is a huge general-purpose AI context that is missing. You and Chad are both these wonderful and lively people with these fascinating brains and minds. You've had decades of experience and thousands of training examples and hundreds of practical problems to confront every day. That's all missing, generally, when I engage with any particular artificial intelligence or cognitive tool; it's missing all of that background that we take for granted in human interaction.

And secondly, there's a lot of biology that's just missing here. For us as human beings, our bodies shape our minds and vice versa, such that even right now, even though we're communicating via Zoom, we're using gestures and posture and eye gaze to help make guesses about what the other person is thinking, to seek positive feedback, and to know that we're doing well as a team. And a lot of that is missing for our AI agents. They're not embodied, so they don't have the same survival imperatives that Chad mentioned earlier. And they are also missing those markers that can help us understand when we're making mistakes as a team, markers that for us human beings have evolved over evolutionary timescales and are very helpful for coordinating activity, like being mad or angry when somebody busts a deadline. So those are all supremely important differences between our artificial agents and us humans.

Daniel Serfaty: So taking on that, are you particularly worried about this notion of, it's a long verb here, but basically anthropomorphizing those artificial intelligences and robots by giving them names, giving them sometimes a body? The Japanese are very good at actually making robots move and blink and smile like humans, for example, or maybe not quite like humans, and that's the issue. And are we worried about giving them a gender, like Charlie, or other things like that, because it creates an expectation of behavior that is not met? Tell me a little bit about that before I press you to give us the solutions to all these problems in five minutes or less, but let's explore that first: anthropomorphizing.

Bill Casebeer: I'll start. It's a risk, for sure, because of that background of our biology and our good general-purpose AI chops as people; we take that for granted and we assume it in the case of these agents. And when we anthropomorphize them, that can lead us to think that we have obligations to them that we actually don't, and that they have capabilities that they don't actually possess. So anthropomorphization can help enable effective team coordination in some cases, but it also presents certain risks if people aren't aware of where the human-like nature of these things stops. Before we kind of think, "Oh, and this is something that rebuts Chad and Bill's assumption, there's nothing new under the sun," I would say we actually have a body of law that thinks about non-human agents, our obligations to them, and how we ought to treat them. And that's corporate agency in our legal system.

So we have lots of agents running around now; they're taking actions that impact all of our lives daily. And we have at least some legal understanding of what obligations we have to them and how we ought to treat them. So IBM, or name your favorite large corporation, isn't composed exclusively of people. It's this interesting agent that's recognized in our law, and that has certain obligations to us, and we have certain obligations to it. Think of Citizens United. All of those things can be used as tools, as we kind of work our way through how we treat corporate entities, to help us maybe figure out how we ought to treat these agents that are both like and unlike us too.

Daniel Serfaty: Thank you. Very good.

Chad Weiss: Yeah. I think I'm of two minds here on the one hand-

Daniel Serfaty: Something an artificial intelligence will never say.

Chad Weiss: On the one hand, as a developer of technologies, and because of my admittedly sometimes kooky approach to sort of collaborative creativity, I think that there is a sense of value in giving the team a new way to think about the technology that they're developing. I often encourage teams to flip their assumptions on their heads and to change the frame of reference with which they're approaching a problem, because I think this is very valuable for generating novel ideas and remixing old ideas into novel domains.

It's just the key to innovation. On the other hand, I think that as shepherds of emerging and powerful technologies, we have to recognize that we have a much different view and understanding of what's going on under the hood here. And when we are communicating to the general public, or to people who may not have the time or interest to really dive into these esoteric issues that people like Bill and I are sort of driven towards by virtue of our makeup, I think that we have a responsibility to them to help them understand that this is not exactly human and that it may do some things that you're not particularly clear on.

My car has some automated or artificial intelligence capabilities. It's not Knight Rider or KITT, if you will. But it's one of those things where, as a driver, if you sort of think of artificial intelligence as being like human intelligence that can fill in gaps pretty reliably, you're putting yourself in a great deal of danger. There are spaces where, as I'm driving to the airport, I know there's one spot right before an overpass where the car sees something in front of it and slams on the brakes. This is very dangerous when you're on the highway. And if you're not thinking of this as having limited capabilities to recover from errors or misperceptions in the best way possible, you're putting your drivers, your drivers' families, your loved ones at a great deal of risk, as well as other people who have not willingly engaged in taking on the artificial intelligence. There are other drivers on the road, and you're putting their safety at risk as well if you misrepresent, whether intentionally or unintentionally, the capabilities and the expectations of an AI.