MINDWORKS

Mini: Lessons from tragedy (Julie Shah and Laura Majors)

May 08, 2021 Daniel Serfaty

In the domain of human systems, progress has quite often, unfortunately, been made after big disasters. The Three Mile Island nuclear accident in the '70s, for example, prompted people to rethink how to design control rooms and human systems. Accidents in the US Navy prompted the rebirth of the science of teams, and so on. With robots, inevitably, we hear more in the news about robots when they don't work and when there is an accident somewhere. MINDWORKS host Daniel Serfaty speaks with Prof. Julie Shah, associate dean of Social and Ethical Responsibilities of Computing at MIT, and Laura Majors, Chief Technology Officer at Motional, about how accidents make us become better designers and better engineers.

 

Listen to the entire interview in Human-Robot Collaboration with Julie Shah and Laura Majors

Daniel Serfaty: It's been said that in our domain, the domain of human systems, quite often big leaps, big progress, have unfortunately come after big disasters. The Three Mile Island nuclear accident in the '70s, for example, prompted people to rethink how to design control rooms and human systems. Some accidents in the US Navy prompted the rebirth of the science of teams, and so on. With robots, inevitably, we hear more in the news about robots when they don't work and when there is an accident somewhere. Can you talk about these notions and how perhaps those accidents make us become better designers and better engineers? Laura?

Laura Majors: Yeah. It was a major accident that first led to the creation of the FAA. There was a mid-air collision that occurred before that moment in time. Our airspace was mostly controlled by the military. Flying was more recreational; it wasn't as much of a transportation option yet. But two aircraft flew into the same cloud over the Grand Canyon, so they lost visibility. They couldn't see each other and they had a mid-air collision. And that really sparked this big debate and big discussion around the need for a function like the FAA, and also for major investment in ground infrastructure to be able to safely track aircraft, to be able to see where they are and predict these collision points. That's also when highways in the sky were created, to enable more efficient transportation in our skyways in a way that was safe. So we definitely have seen that play out time and time again.

Another really interesting phenomenon is that as you look at the introduction of new technology into the cockpit, such as the glass cockpit or the flight management system, with each introduction of these new generations of capability there was actually a spike in accidents right after the introduction of the technology, before there was a steep drop-off and an improvement in safety. And so there is this element of, anytime you're trying to do something really new, it's going to change the process, it's going to change the use of the technology. There may be some momentary regression in accidents, in safety, that is then followed by a rapid improvement that is significant. So we have seen this, again, in many other domains. I think it is unfortunately a little bit inevitable when you're introducing new, complex technology that there will be some unexpected behaviors and unexpected interactions that we didn't predict in our testing, through our certification processes and whatnot.

Daniel Serfaty: So that gives a new meaning to the word disruption. I mean, it does disrupt, but out of the disruption something good comes up. Julie, in your world, do you have examples of that, of the introduction of robotic elements or robotic devices actually causing worrisome accidents that eventually led to improvements?

Julie Shah: I can give you two very different examples, but I think they're useful as two points on a spectrum. There are a few people killed every year by industrial robots, and it makes the news, and there's an investigation, much like what we talked about in aviation. So a common theme is that a key contributor to accidents is pilot error. But when you do an investigation and understand all of the different factors that lead to an incident or even a fatality, there is something called the Swiss cheese model: many layers with holes in them have to align for you to get to that point where someone is really set up to make that mistake that results in that accident.

And when we look at industrial robots, when something goes wrong, oftentimes you hear the same refrain, and it'll be with standard industrial robots. So, for example, someone enters a space while the robot is operating and they're harmed in that process. And then you look at it and you say, "Well, they jimmied the door. They worked around the safety mechanism. So that's their fault, right? It's the person on the factory floor's fault for not following the proper usage of that system."

And you back up one or two steps and you start to ask questions like, "Why did they jimmy that door?" It's because the system didn't function appropriately and they had to be going in and out in order to be able to reset stock for that robot. And why weren't they going through the process of entirely shutting the robot down? Because there's a very time-consuming process for restarting it, and they're on the clock and their productivity is being monitored and assessed. You put all these factors together and you have the perfect storm that is predictably, with a large enough N, going to result in people dying from it.

It can't just be fixing it at the training level, or fixing the manual by putting an extra asterisk in the manual, like don't open the cage while the system is in operation. I think this just points to one of the key themes that we bring up in the book, which is the role of designing across these layers, but also the role and opportunity that intelligence in these systems provides you as an additional layer, not just at execution, but at all the steps along the way. A very different example that comes from the research world is related to trust, inappropriate trust or reliance on robot systems. Miscalibrated trust in automation is something that's been studied for decades in other contexts, in aviation and industrial domains. And you might ask, "Does that end up having relevance as we deploy these systems in everyday environments?"

There's this fascinating study done a few years ago at Georgia Tech, where they looked at the deployment of robots to lead people out of a simulated burning building, so a fire in a building. The alarm was going off, they put smoke in the building, they trained the bystanders in the operation of the robot system in advance, and half the participants observed the robot functioning very well. It could navigate, it could do its job. The other half directly observed the system malfunctioning, going in circles, acting strangely. And then when they put people in that building, even the ones that observed the robot malfunctioning moments before followed that robot wherever it took them through the building, including when the robot led them to a dark closet with clearly no exit.

And this might sound funny, but it's not funny, because it's consistent with a long history of studies and analyses of accidents in aviation and other domains, of how easy it is to engender trust in a system inappropriately. This is very important in that particular example of a robot leading you through a building, but also think about cars like Teslas and being able to calibrate a person's understanding of when they need to take over with that vehicle, what it senses about its environment and what it doesn't. And so these are cautionary tales from the past that I think have direct application to many of the systems we're seeing deployed today.

Daniel Serfaty: Sure. I believe the miscalibrated trust problem has the additional complexity of being very sensitive to other factors like culture, like age. I'm not talking about cultures with [inaudible 00:51:05], but even local cultures may trust the machines more, and maybe to a fault overtrust the machine, more so than other populations. I think that creates a huge challenge for the future designer of the system, because it has to be adapted to factors for which we usually do not design properly.

Maybe on the other side, I don't want to sound too pessimistic about accidents, even though the lesson, as both of you pointed out, is that those accidents, even those that sometimes involved the unfortunate loss of life, lead to leaps in technology in a positive way. But if you had to choose a domain right now where this teaming of humans and robots has the most impact, whether economic impact or health impact, or by any other measure, what would that be? Healthcare, defense, transportation? What has the good story, not the accident story, now? Laura, can you think of one?

Laura Majors: I think if you look at defense and security applications, you can find some great examples where robots help in places that we don't want people to go. So if you think of bomb disposal robots, for example, keeping people out of harm's way so that we can investigate, understand what's happening, and disarm without putting a person in harm's way. There are also other defense applications where we're able to have autonomous parachutes that can land very precisely at a specific location to deliver goods, food, to people who need it. There are different drone applications where we can get eyes on a situation, on a fire, to understand hotspots and be able to attack it more precisely.

I think those are some good examples. And that to me is one of the reasons why I'm so drawn to autonomous cars: this is a case where many could argue that people are not very good drivers. There are still a lot of accidents on our roadways, and so there's a great opportunity to improve that safety record. And if we look at what happens in air transportation, it's such a fundamentally different safety track record, one that we hope to achieve on our roadways through the introduction of automation and robotics.

Daniel Serfaty: What a wonderful reason to invest in that industry. I hadn't thought about it; the social impact and the greater good, not just the convenience aspect, is key. Julie, what's bright on your horizon there? What do you see in robotic applications that are already making an impact, especially when there is a human-robot collaboration dimension?

Julie Shah: Another one that comes to mind is surgical robotics. We've seen this revolution over the past number of years in the introduction of robots in the operating room, but much like the robots used for dismantling explosive devices, these robots are really being directly controlled at a low level by an expert who's actually sitting physically in the same operating room. And nonetheless, you see great gains from that in some contexts. So, for example, rather than doing a laparoscopic surgery, which you can imagine is like surgery with chopsticks, that's going to be very hard. There's a lot of spatial reasoning you have to do to be able to perform that surgery, and a lot of training required to do it. Some people are more naturally capable of that than others, even with significant amounts of training.

A system like the da Vinci robot, for example, gives surgeons their wrists back remotely, so they don't have to do chopstick surgery anymore. And so it actually enables many surgeons not fully trained up in laparoscopic surgery to be able to do a surgery that would have otherwise required fully opening a person up. And so you see great gains in people's recovery time. Or surgery on the eye: if you can remove tremors, the very fine tremors that any human has, that allows for surgical precision that's very important in that field.

One of the commonalities between some of the applications Laura gave in bomb disposal and the surgical application is that these are systems that are leveraging human expertise and guidance. They're not employing substantial amounts of intelligence. But as a person in the field pointed out to me a number of years ago, what are you doing when you put a surgical robot in the operating room and move the surgeon a little further away in the room? You've put a computer between that surgeon and the patient.

Now, when we put a computer between the pilot and the aircraft, it opened up an entirely new design space for even the types of aircraft we could design and field. For example, aircraft that have gains in fuel efficiency but are inherently unstable, such that a human on their own, without computer support, couldn't even fly them. This is a very exciting avenue forward as we think about these new options of a computer at the interface, how we can leverage machine learning and data, and how we can employ these to amplify human capability in doing work today.

Daniel Serfaty: Work today, that's a key phrase here, work today. I'm going to ask you a question; I don't even know how I would answer it if people asked me. You are here helping drivers and fighter pilots and surgeons and all these people with robotic devices, changing their lives, in fact, or changing their work. Can you imagine, in your own work as CTO or as professor, a robotic device that could change that work for you in the future? Have you thought about that?

Julie Shah: I have thought about this quite a lot actually, [crosstalk 00:56:32] quite a lot.

Daniel Serfaty: Maybe you thought it may be a fantasy, I don't know but-

Julie Shah: And with my two- and four-year-old, I spend endless amounts of time picking up and reorganizing toys, just to have to do it all over again. I think one of the exciting things about framing this problem as a problem of enabling better teamwork between humans and machines, or humans and robots... and Daniel, this goes back to your work from a long while ago, which inspired parts of my PhD, on coordination among pilots and aircraft... is that effective teamwork behaviors, effective coordination behaviors, are critical in the safety-critical contexts where the team absolutely has to perform to succeed in their tasks, but good teamwork is actually good teamwork anywhere. So you remove the time criticality. If you are an effective teammate, if you can anticipate the information needs of others and offer it before it's requested, if you can mesh your actions, then that good teamwork translates to other settings.

My husband is actually a surgeon. And when I was working on my PhD, I used to point out to him how he was not an effective teammate. He would not anticipate and adapt, and he still makes fun of me for that to this day. You're a surgeon, you need to anticipate and adapt. So good teamwork in cooking together in the kitchen, that same ability translates there: being able to understand, to hold a high-quality mental model of your partner, to understand their priorities and preferences, that translates to many other domains. And so by making our teamwork flawless in these time-critical, safety-critical applications, we're really honing the technology to make these systems even more useful to us in everyday life as well.

Laura Majors: Yeah, and as a CTO, I also think about how a lot of our work, and what we strive to do, is data-driven decision-making about our technology: how it's performing, areas where it's not meeting the standards, simulation, testing at scale. There have definitely been many advances in those areas, but when I think about how robotics and automation could help a CTO be better, I think, "Yeah, there are some parts of my job that you could automate. Could you close the loop on finding problems and identifying teams or subsystems that have gaps that we may not realize until later in the test cycle? Could we learn those things earlier, identify them, and have a dashboard that shows us where there may be lurking problems so we look at them sooner?"

Daniel Serfaty: No, I agree. Since last year I've been instilling in my own company this philosophy of the new generation called eating your own dog food, basically, which is: let's try those things that we are trying to sell to our customers on ourselves first, so that we can feel that pain before the customer does. But that would be an example. Let's try to help the CTO, the CEO with a dashboard and see whether or not we can actually make a difference. I think it's important we understand that at that intimate level.