MINDWORKS

Human-Robot Collaboration with Julie Shah and Laura Majors

April 20, 2021 Daniel Serfaty Season 2 Episode 3

Human-robot collaboration gives new meaning to the word “disrupt”—but out of those disruptions there’s the promise of ultimately improving human performance in the workplace. Join MINDWORKS host Daniel Serfaty as he explores this brave new world with Prof. Julie Shah, associate dean of Social and Ethical Responsibilities of Computing at MIT, and Laura Majors, Chief Technology Officer at Motional, co-authors of the new book, “What to Expect When You’re Expecting Robots: The Future of Human-Robot Collaboration.”

Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. Today, I have two phenomenal guests who are going to tell us stories about the future and also about the present of the much maligned collaboration between humans and robots. As we've talked about on past episodes of MINDWORKS, books and films often paint a dystopian picture, from I, Robot to Skynet, while in the real world, human workers are concerned about losing their jobs to robot replacements.

But today, I hope to put some of these fears to rest. We are going to talk with two MIT professionals with advanced degrees, who recently did something very interesting together. They are the co-authors of a new book, What To Expect When You're Expecting Robots: The Future of Human-Robot Collaboration. Well, in addition to being the coolest title published this year in any domain, it's a really important book that I recommend all of you read about the reality of introducing robots and artificial intelligence into our daily lives and at work.

So without further ado, I want to introduce my first guest, Professor Julie Shah, the associate dean of Social and Ethical Responsibilities of Computing at MIT. She's a professor of aeronautics and astronautics and the director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. She's expanding the use of human cognitive models for AI and has translated her work into manufacturing assembly plants, healthcare applications, transportation, and defense.

Professor Shah has been recognized by the National Science Foundation with a Faculty Early Career Development Award and by the MIT Technology Review on its 35 Innovators Under 35 list. That's a great list to be on because you stay under 35 forever. Her work has received international recognition from, among others, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, and the International Symposium on Robotics. She earned all her degrees in aeronautics and astronautics and in autonomous systems from MIT.

My other guest, and as you will soon learn, Julie's partner in crime and in books too, is Laura Majors. Laura is an accomplished chief technology officer and author. As CTO at Motional, Laura leads hundreds of engineers in the development of revolutionary driverless technology systems. She began her career as a cognitive engineer at Draper Laboratory, where she combined her engineering and psychology skills to design decision-making support devices for US astronauts and military personnel. After 12 years with the company, she became the division leader for the Information and Cognition Division. During this time, Laura was recognized by the Society of Women Engineers as an emerging leader. She also spent time at Aria Insights specializing in developing highly advanced drones. Laura is also a graduate of the Massachusetts Institute of Technology. Laura and Julie, welcome to MINDWORKS.

Julie Shah: Thank you so much.

Laura Majors: Thank you. It's great to be here.

Daniel Serfaty: This is, as you know, a domain that is very close to my heart. This is where I spend most of my waking hours, and maybe my sleeping hours, thinking about this notion of collaboration between humans and intelligent machines, and you've been at it for a long time. Can you say a few words to introduce yourself? But specifically, of all the disciplines in engineering you could have chosen in institutions as prestigious as Georgia Tech and MIT, why this domain as a field of endeavor? Julie, you want to tell us first?

Julie Shah: As long as I can remember, from when I was very small, I was interested in airplanes and in rocket ships, and it was always my dream to become an aerospace engineer. I wanted nothing more than to work for NASA. I have a different job today, but that was my initial dream when I went off to college to study aerospace engineering at MIT. When I got into MIT, everybody said, "Oh, what are you going to study there?" I said, "Aerospace engineering," and everybody would say, "Well, that's so specialized. How do you know at such a young age that you want to do something so specialized?" And then you get to MIT and you begin a program in aerospace engineering. And the first thing you learn is that it's a very, very broad degree. Aerospace engineering is a systems discipline. And then everybody begins to ask you, "What are you going to specialize in as an aerospace engineer?"

And the thing that caught me early was control theory. I really enjoyed learning about the autopilots of aircraft, how you make a system independently capable. And then interestingly for my master's degree, I pursued research in human factors engineering. So you make a system independently capable of flying itself, but it's never really truly independently capable. It has to be designed to fit the pilot like a puzzle piece. And then that expanded design space of the human-machine system really captivated me.

For my PhD, I went on to specialize in artificial intelligence, planning and scheduling, so moving from lower level control to how do you make a system more capable of acting intelligently and autonomously to make higher level decisions at the task level, but you still have this challenge of how you design that capability to fit the ability of a person to monitor it, to coach it through steps, to catch it when something isn't going right. And that master's degree in human factors engineering has really been the center of my interest, putting the human at the center and then designing the technology from there. And so you never truly want to make something that operates independently. It operates within a larger context. And that's part of the aerospace endeavors, teams of people and complex socio-technical systems coming together to do amazing things. And so that's how I ended up working in this space.

Daniel Serfaty: That's great. And indeed, I hear words like humans and intelligence and all kinds of things that usually we don't learn in traditional classes in engineering. Laura, you ended up in human-robotic collaboration. You took a slightly different path and you are more a leader in industry. How did you get there? Why not just be a good engineer in building bridges or something?

Laura Majors: Yeah. I'd always been interested in robotics and space and found math very easy and beautiful. But I also had this side interest in how people think and human psychology. And so when I went to college, I wasn't sure which path to go down. I will say my parents were pushing me down the engineering path. I was struggling because I also had this interest in psychology and I thought they were orthogonal. And it wasn't until my campus tour at Georgia Tech, where someone pointed out a building and said, "That's the engineering psychology building where they figure out how do you design a cockpit, how do you help a pilot control this complex machine?"

That was really the spark of inspiration for me. Of course, it wasn't until my junior year after I got through all my core engineering courses that I was able to take a single class in engineering psychology, or it was called, I think, human-computer collaboration at the time. I was fortunate to take that course with Amy Pritchett, who many of you probably know, and I was really interested in that topic. And so I approached her and asked if I could do some research in her lab as an undergrad, and really through that got exposed to what this field was all about. And so I followed in her footsteps going to MIT and humans and automation and really focused on that area for my graduate work.

Also, for me, I always wanted to build things that made a difference. And so seeing products through to the end was really, again, part of my passion. And so I saw that opportunity at Draper to really work on these important critical projects. And then that took me into the commercial world as well as I worked on drones before this and now the opportunity to really figure out how do we build robot cars that are going to work in our world and that are going to blend with human society in a way that's safe and effective.

Daniel Serfaty: That's amazing, this fascination that some of us have had, despite the choice of wanting to play with airplanes and other things like that, with the human as maybe the most fascinating, but yet the most mysterious, part of the system. One day, somebody needs to do an anthropological study about why some of us decided to migrate into that area and some others did not. But Laura, since you have the microphone, can you tell our audience what you do in your day job? What do you do when you go to work as the CTO of a really high-tech company working on autonomous driving and other things like that? What is it that you do?

Laura Majors: Yeah, so some days I do podcasts, like today. I have a large engineering team, so I have hundreds of engineers. I don't get to go deep anymore into the hands-on software myself, but I work with my team. So we're working on everything from the hardware design. I have to worry about the schedule. What are the key dependencies across my hardware teams and my software teams? What's the system architecture that's going to enable us to have the right sensors and the right compute to be able to host the right algorithms in making the right decisions?

So how I spend my time is a lot of meetings. I spend time with my leadership. I also do a lot of technical reviews, new architecture designs, new results. Yesterday, I was out at the track riding in our car. I try to get in the car every couple of weeks when we have new software builds so I can see it tangibly myself. I also present to our board frequently. So I have to share with them progress we've made, risks that we worry about, challenges that we face and how we're approaching them. So there's a lot of preparation for that.

And of course, I'm working with the executive team here with our CEO, with our CFO, our general counsel, our head of HR to make sure that all the pieces are coming together that we need from a technology standpoint to be successful. I have to wear a lot of hats. I would say maybe 70% of those are technical and probably more than you would expect are not technical, but they're all a part of making sure we have the right team, the right process, the right tools we need to be successful in creating this very complex system.

Daniel Serfaty: You seem to enjoy it, the role of the CTO, which is really a very coveted role in high-tech. For our audience who doesn't know, the chief technology officer makes all those connections between the hardware and software and business and finance and the different components, and at the same time quite often needs to be the deep engineer or the deep scientist because you deal with such advanced technology. Julie, what do you do? You have at least three jobs I know of at MIT. You're associate dean of that new big school of computing, you're a professor of aero/astro, you work in the lab, you're managing your own lab. Tell our audience, what do you do on a typical day, if there is such a thing?

Julie Shah: I'm a professor and researcher and I'm a roboticist. So I run a robotics lab at MIT. When you're doing your PhD, usually you're sitting in computer science or AI or other disciplines. Usually, you spend a lot of time sitting very quietly at your desk coding. When I turned over to becoming a professor, the better part of 10 years ago, I described the job as a job where you have half a dozen different jobs that you just juggle around. And that's a part of the fun of it. So what I do is work on developing new artificial intelligence models and algorithms that are able to model people, that are able to enable systems, whether they be physical robots or computer decision support systems, to plan to work with people.

So, for example, I develop and I deploy collaborative robots that work alongside people to help build planes and build cars in industry. I work on intelligent decision support for nurses and doctors and for fighter pilots. I specialize in how you take the best of what people are able to do, which vastly surpasses the ability of computers and machines in certain dimensions, and how you pair that with computational ability to enhance human work and human well-being.

Daniel Serfaty: But you're a professor with a million different projects and a lot of students. I assume the people actually performing that work of modeling, as you said, of building, et cetera, are your students or other researchers. What's a typical day at the lab when you're not in the classroom teaching?

Julie Shah: It's all of the above. I run a lab of about 15 grad students and post-docs, and many more undergrads engage in our lab as well. I think over the last 10 years, we've had over 200 undergrads that have partnered with the grad students and post-docs in our lab, and they primarily do the development of the new models, the testing, bringing people in to work with our new robots and see how and whether the systems work effectively with people. We do have many different types of projects and different domains.

But one of the most exciting things about being a professor is that the job description is to envision a future and be able to show people what's possible, 10-plus years down the road. So we're shining a light on, as we advance this technology, here's what it can do and here's the pathway to the ways it can transform work, a decade-plus down. But I am very driven by more immediate, real-world applications. And so many of my students will embed in industry and hospitals and understand work today to help inspire and drive those new directions.

The key is to develop technologies that are useful across a number of different domains. And so that's how it all becomes consistent. Whether it's a robot that's trying to anticipate what a person will do on an assembly line, or a decision support system anticipating the informational needs of nurses on a hospital floor, many of the aspects of what you need to model about a person are consistent across those. And whether it's physical materials being offered or informational materials, there's consistency in how you formulate that planning problem. And so that's the joy of the job, working in that intellectual, creative space to envision what these new models and algorithms will be and how they can be widely useful.

Daniel Serfaty: I'm glad you mentioned the joy of the work, because what comes across when listening to Laura and to you, Julie, is that you really enjoy what you're doing. There is a true joy there, and that's very good to hear. There is also a duality in what both of you described in the work of today. I suppose both as CTO and as professor, you have to envision the future too. Actually, that's your job, as you said. And it's reflected in the book that you just co-wrote, to remind our audience, What To Expect When You're Expecting Robots: The Future of Human-Robot Collaboration. So my question is, what prompted you to collaborate on this book, other than just having more joy in your work, which is going to make a lot of the people in the audience very jealous? But why this book, Laura? What prompted you to collaborate with Julie on this?

Laura Majors: There was a conference that we were both at where I was asked to talk on some of the early commercial use of robots and some of the challenges there, and some of the things we learned in industrial applications that crossed over and may help in commercial applications. And after giving that talk... It was short, it was, I don't know, a 15-minute talk, but it was received really well. I think it sparked some discussion. I was at Draper at the time. And so some of my staff and her students were already working on some projects together. But actually, it was only at this conference that she and I met for the first time in person, when we were working just across the street from each other. We both knew of each other very well.

And so when we got back from the conference, we got together over lunch and we were talking and connecting on many topics, but I think that was the moment where Julie was considering writing a book. After this talk, I had started thinking about writing a book too, and we both felt like we don't have time to write a book. But we thought, "Hey, if we do it together, then we can motivate each other like a gym buddy." And also, we saw that we each had these very different perspectives, from all of the great theoretical work that Julie was working on in the lab, and me from the practical, more industrial, product-oriented work. And so we decided to start pursuing this and we wrote up a proposal and we started working with editors, and a concept came together that became our book.

Daniel Serfaty: Practicing collaboration or writing about collaboration, look at that.

Laura Majors: Yes, and it was a great joy. We kept waiting for the process to get hard and painful and it never was, I think because of that collaboration.

Julie Shah: Everything was good so far, we didn't know what would happen next, and then it just continued to be a joy all the way through to the end.

Daniel Serfaty: Julie, I assume you have fully endorsed Laura's version of events here?

Julie Shah: Yeah, exactly right. It's exactly right. I think the only thing I'd add is that we're bringing very complementary perspectives from industry and academia. But as you can probably also infer just based on the conversation so far, there's a core, there's an orientation towards technology development that we share, coming from the human needs perspective and how these systems need to integrate with people and into society. Laura gave this amazing talk. It's a 15-minute talk. I had been drawing out many of the same themes year after year in a course I teach on human supervisory control of automated systems, where I say, "Look at aerospace, look at the new applications coming and the challenges we're going to sit with." And afterwards I was like, "You captured everything so perfectly." And then a great friend and mentor of ours said in passing, "That would be a really great book."

Daniel Serfaty: You see, both of you are going to be able to retire the day you can design us a robot that will make the same recognition, that recognizes there is an impedance match between itself and the human it's supporting. But let's jump into that, because I want to really dig deeper right now into the topic of human-robot collaboration. And my question, and any one of you can answer, is humans have been working with machines and computers for a while. Actually, Laura, you said at Georgia Tech you walked into a human-machine interaction class a couple of decades ago, or at least about that. So we've been teaching that thing. Isn't human-robot collaboration just a special case of it? And if not, why not? What's unique here? Is there a paradigm change, and because of what? Any one of you can pick up and answer.

Laura Majors: I remember one of my first projects at Draper was to work on an autonomous flight manager for returning to the moon. I was so surprised to find a paper that was written back at the time of Apollo. I think Larry Young, the MIT professor emeritus, was one of the authors. And even back then, they were talking about how much control do you give to the guidance computer versus the astronauts. So you're right, this discussion and debate goes way back. And how is it different now? I think it's only gotten harder because machines and robots have become more intelligent and so they can do more. And so this balance of how do you figure out what it is they should do? How are they going to be designed to be that puzzle piece, as Julie described, to fit with the people that they interact with or interact around?

Julie Shah: I fully agree with that. And maybe the additional thing to add is I don't think human-robot interaction is a special case or a subset of human-computer interaction. There are different and important factors that arise with embodiment and thinking about interaction with an embodied system. Maybe to give two short examples from this, I'm not a social robotics researcher. I started my career working with industrial robots that work alongside people in factories. They are not social creatures, like they don't have eyes, they're not cuddly. You don't look at them and think of them as a person.

But we have this conference in the field, the International Conference on Human-Robot Interaction. And up until lately, when it got too big, it was a single-track conference. There's a foundation of that field that comes from the psychology background. And so at this conference, you'd watch all these different sorts of papers from all different sorts of backgrounds. I remember there was this one paper where they were showing results of differences in behavior when a person would walk by the robot, whether the robot tracked with its head camera as the person walked by, or whether the robot just stared straight ahead as the person walked by. And if the robot tracked the person as the person walked across the room, the person would take this very long and strange arc around the robot.

I just remember looking at that and thinking to myself, "So I'm working on dynamic scheduling." Like on a car assembly line, every half second matters. A half second will make or break the business case for introducing a robot. I say, "Oh, it's all about the task." But if you get these small social cues wrong, if you just say, "Ah, maybe the robot should be social and watch people around it as they're working," that person now takes a second or two longer to get where they're going and you've broken the business case for introducing your robot.

And so these things really matter. You really need to understand these effects, and they show up in other ways too. There is an effect on trust related to embodiment of a system. So the more anthropomorphic a system is, or if you look at a physical robot versus computer decision support, the embodied system and the more anthropomorphic system can engender inappropriate trust in the system. You might engender a high level of trust, but it might not be appropriate to its capabilities. And so while you want to make a robot that looks more human-like and looks more socially capable, you can actually be undermining the ability of that human-machine team to function by engendering an inappropriate level of trust in it. And so that's a really important part of your design space, and embodiment brings additional considerations beyond an HCI context.

Daniel Serfaty: So what you're sending us is a warning: do not... think first before you design a robot or robotic device in a way that looks or sounds or behaves or smells or touches more like a human. It's not always a good thing.

Julie Shah: Yeah. Every design decision needs to be intentional, with an understanding of the effects of those design decisions.

Daniel Serfaty: Now I understand a little more. Is it the fact that robots, unlike classical machines of the '70s, say, have the ability to observe and learn, and as a result of that learning, change? Is that also changing the way we design robots today, or is that something more for the future, this notion of learning in real time?

Julie Shah: So there are a few uses of machine learning in robotics. One category of uses is that you can't fully specify the world or the tasks for your robot in advance. And so you want it to be able to learn to fill in those gaps so that it can plan and act. And a key gap that's hard to specify in advance is, for example, the behavior of people, various aspects of interacting with a person, because a human is like the ultimate uncontrollable entity. And so it's been demonstrated empirically in the lab that when you hard-code the rules for a system to work with a person, or for how it communicates with a person, the team will suffer because of that, versus an approach that's more adaptable, that's able to gather data online and update its model for working with a person.

And so this new ability of machine learning, which has really transformed the field over the last 5 to 10 years, certainly changes the way we think about designing robots. It also changes the way we think about deploying them, and it also introduces new critical challenges in testing and validation of the behavior of those systems, new challenges related to safety. You don't get something for nothing, basically.

Laura Majors: On that point of online learning, machine learning is, I would say, core to most robotic systems today in terms of their development, but online learning and adaptation is something that has to be designed and thought through very carefully because of this issue that most robotic systems are safety-critical systems. And so you need to go through rigorous testing for any major change before fielding that change, for a new software release or software update, for instance. I think some of that online learning and adaptation can also create some unexpected interaction challenges with people. If the system they're using is changing underneath them, then it can have negative impacts on that effective collaboration.

Daniel Serfaty: Yes, that makes total sense. We'll get back to this notion of mutual adaptation a little later, but your book is full of beautiful examples, I find them beautiful, of basically the current state of affairs as well as the desired state of affairs, because many people in the field tend to oversell the capability of robots, not because they're lying, but because they aspire to it and sometimes they confuse what is with what could be or will be. You describe different industries in the book, there are beautiful examples. I would like, Laura, for you to take an example that is particularly good, maybe in the world of transportation in which you live, to show what we have today and what we will have in the future in that particular domain, whether it's autonomous cars that everybody obviously is talking about or any other domain of your choice. And Julie, I'd like you to do the same after that, perhaps in the manufacturing or warehousing domain.

Laura Majors: In our book, we talk a lot about air transportation examples and how, again, some innovation we've seen in that space can also yield some more rapid deployment and improvement for other ground transportation robotics. One example that I really love is what's called TCAS, the Traffic Collision Avoidance System, where the system is able to detect when two aircraft are on a collision course and can recommend an avoidance maneuver. I think the beauty of combining that system with... There's air traffic control, which is also monitoring these aircraft, and then there are, of course, the pilots on board. And when you look at air transportation, there have been these layers of automation that have been added, and not just automation within the cockpit, but automation across... I mean, that's an example of automation across aircraft. That's really enabled us to reduce those risks where errors can happen, catastrophic errors.

And so I think we see some of that happening in ground robotics as well, and in the future, ways for robots to talk to each other. So if you imagine TCAS is a little bit like the aircraft talking to each other, we could imagine future robots talking to each other, to negotiate which one goes first at an intersection, or when it's safe for a robot to cross a crosswalk. When we look into the future, at how we enable robots at scale, it's that type of capability that we'll need to make it a safe endeavor.
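To make the TCAS analogy concrete, here is a minimal sketch, in Python, of the kind of pairwise check two robots (or aircraft) could run before negotiating who yields. The geometry, thresholds, and priority rule are invented for illustration and are not drawn from any production TCAS or Motional system.

```python
import numpy as np

def time_of_closest_approach(p1, v1, p2, v2):
    """Closest point of approach for two agents moving at constant velocity.

    p1, p2: current positions (m); v1, v2: velocities (m/s), as numpy arrays.
    Returns (time of closest approach, miss distance at that time).
    """
    dp, dv = p2 - p1, v2 - v1
    denom = float(np.dot(dv, dv))
    t_star = 0.0 if denom < 1e-9 else max(0.0, -float(np.dot(dp, dv)) / denom)
    miss = float(np.linalg.norm(dp + dv * t_star))
    return t_star, miss

def negotiate(agent_a, agent_b, min_separation=2.0, horizon=10.0):
    """Toy protocol: if a conflict is projected within the horizon,
    the lower-priority agent yields and the other proceeds."""
    t_star, miss = time_of_closest_approach(agent_a["p"], agent_a["v"],
                                            agent_b["p"], agent_b["v"])
    if t_star <= horizon and miss < min_separation:
        yielder = agent_a if agent_a["priority"] < agent_b["priority"] else agent_b
        return {"conflict": True, "t_star": round(t_star, 2), "yields": yielder["id"]}
    return {"conflict": False}

# Two sidewalk robots converging on the same crosswalk (made-up numbers).
a = {"id": "robot_A", "p": np.array([0.0, 0.0]),  "v": np.array([1.0, 0.0]),   "priority": 1}
b = {"id": "robot_B", "p": np.array([10.0, 3.0]), "v": np.array([-1.0, -0.3]), "priority": 2}
print(negotiate(a, b))  # projected conflict, robot_A (lower priority) yields
```

In a real system this check would run continuously over uncertain predicted trajectories rather than straight-line extrapolations, but the core idea is the same: detect the projected conflict early enough that the two machines can agree on who goes first.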

Daniel Serfaty: So you introduced this notion of a progressive introduction of automation and robotics, not a step function but more of a ramp in which the system eventually evolves to something like the one that you described. What's the time horizon for that?

Laura Majors: I think you have to get to a core capability, and then there are improvements beyond that that we learn based on things that happen, not necessarily accidents, but near accidents. That's the way the aviation industry is set up. We have this way of collecting near misses, self-reported incidents that maybe didn't result in an accident, but could inform a future automation improvement or procedure improvement. I think if we just purely look at air transportation as an example, this automation was introduced over decades, really, and so I think that's maybe one of the misconceptions, that it's all or nothing. We can get to a robotic capability that is safe, but maybe has some inefficiencies, or has certain situations it can't handle, where it stops and needs to get help from maybe a remote operator. We learn from those situations and we add in additional... Again, some of this automation may not even be onboard the robot. It may be across a network of robots communicating with each other. These types of capabilities, I think, will continue to enhance the effectiveness of robots.

Daniel Serfaty: So the example that Laura just gave us is maybe not mission critical, but lives are at stake when people are flying if you misdirect them. There are situations that maybe people don't think of as dangerous, but that can become dangerous because of the introduction of robots, perhaps. Julie, you worked a lot on understanding even what happens when I press the Buy Now button on Amazon, or Order Now, what happens in the chain of events that eventually leads the package to show up on your doorstep the morning after, or other situations in a manufacturing plant in which robots on the assembly lines interact with humans. Can you pick one of those examples and do a similar thing? What do we have today, and what will we have once you're done working on it?

Julie Shah: Sure. Yeah. In manufacturing, maybe we can take the example of automotive manufacturing, building a car, because most of us probably think of that as a highly automated process. When we imagine a factory where a car is built, we imagine the big robots manipulating the car, building up our car. But actually, in many cases, much of the work is still done manually in building up your car. It's about half the factory footprint, and half the build schedule is still people mostly doing the final assembly of the car, so the challenging work of installing cabling, insulation, very dexterous work.

So the question is why don't we have robots in that part of the work? And up until very recently, you needed to be able to cage and structure the task for a robot, move the robot away from the person, and put a physical cage around it for safety, because these are dangerous, fast-moving robots. They don't sense people. And honestly, it's hard and a lot of manual work. Same thing with building large commercial airplanes. There are little pieces of work that could be done by a robot today, but it's impractical to carve out those little pieces, take them out, structure them, and then cage a robot around to do it. It's just easier to let a person step a little bit to the right and do that task.

But what's been the game changer over the last few years is the introduction of this new type of robot, a collaborative robot. So it's a robot that you can work right alongside, without a cage, relatively safely. So if it bumps into you, it's not going to permanently harm you in any way. And so what that means is now these systems can be elbow-to-elbow with people on the assembly line. This is a very fast-growing segment of the industrial robotics ecosystem. But what folks, including us as we began to work to deploy these robots a number of years ago, noticed is that just because you have a system that's safe enough to work with people doesn't mean it's smart enough to get the work done and add value, so increase productivity.

And so just as a concrete example, think of a mobile robot maneuvering around a human associate assembling a part of a car, and the person just steps out of their normal work position to talk to someone else for a few moments. And so the robot that's moving around just stops. It just stops and waits until there's a space in front of it for it to continue on to the other side of the line. But everything is on a schedule. So you delay that robot by 10 seconds, the whole line needs to stop because it didn't get to where it needed to be, and you have a really big problem.

So there are two key parts of this. One is making these [inaudible 00:31:52] systems smart enough to work with people, looking at people as more than obstacles, but as entities with intents, being able to model where they'll be and why. A key part of that is modeling people's priorities and preferences in doing work. And another part of that is making the robots predictable to a person. So the robot can beep to tell people they need to move out of the way. Well, actually, sometimes people won't, unless they understand the implication of not doing that. So it can be a more complex challenge than you might initially think as well.

So the key here is not just to make systems that are safe. The way this translates to the real world is that we increasingly have systems that are getting toward safe enough to maneuver around people. There are still mishaps, like security guard robots that make contact with a person when they shouldn't, and that's very problematic. But we're moving towards a phase in which these robots can be safe enough, but making them safe enough does not mean they're smart enough to add value and to integrate without causing more disruption than benefit. And that's where the leading edge of what we're doing in manufacturing is, and some of that can very well translate as these robots escape the factory.
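As a back-of-the-envelope illustration of Julie's point about a 10-second hold breaking the line, here is a small sketch with made-up station names and slack times, showing how a delay propagates down a paced assembly line once the downstream slack runs out. It is not drawn from any real plant's numbers.

```python
def propagate_delay(stations, delay_s):
    """Push an unexpected delay (in seconds) through a paced line.

    Each station can absorb only its spare 'slack' within the takt time;
    whatever is left over forces a line stop.
    """
    remaining = delay_s
    for station in stations:
        absorbed = min(remaining, station["slack"])
        remaining -= absorbed
        print(f"{station['name']}: absorbs {absorbed}s, {remaining}s still outstanding")
        if remaining == 0:
            break
    if remaining > 0:
        print(f"Line stop: {remaining}s of delay could not be absorbed downstream")
    return remaining

# Hypothetical line: a mobile robot waits 10 s for a person to step back into position.
line = [
    {"name": "robot parts delivery", "slack": 2},
    {"name": "door subassembly",     "slack": 3},
    {"name": "wiring installation",  "slack": 1},
]
propagate_delay(line, delay_s=10)
```

With only six seconds of slack across the whole segment, the remaining four seconds show up as a stopped line, which is exactly why a robot that merely waits politely for people is not yet smart enough to add value.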

Daniel Serfaty: These are two excellent examples that show the fallacy of just partitioning the task space into this is what humans do best, this is what robots do best, let's design them and let's hope for the best. I love that at some point in your book you talk about dance, you use the word dance, which I like a lot in this case, because, as the saying goes, it takes two to tango. But the fact is that in order to be a great tango team, you not only have to be an excellent dancer by yourself. And certainly the traditional roles of the two partners in that dance are perhaps different. However, you need to anticipate the moves of your partner to be a great dancer, and in tango especially, that's particularly difficult.

You write about that and you take the reader on a journey to illustrate this notion of harmony, of the collaborative aspect of the behavior. Laura, in your world, is that as important, this notion of a robot having almost an internal mental model of the human behavior, and for the human that is also in the loop, having some kind of internal understanding of what the robot is capable of doing and what it's not capable of doing?

Laura Majors: Yeah, absolutely. We have people who ride in our cars, who take an autonomous ride as passengers. So they have to understand what is the robot doing and why, and how do I change what it's doing if I want it to stop earlier, or I want to know why it got so close to a truck, or does it see that motorcycle up ahead? There are also pedestrians who will need to cross in front of robotaxis and need to know, is that car going to stop or not? So our vehicles have to be able to communicate to pedestrians and other human drivers in ways that they can understand. We have a project we call expressive robotics that's looking at different ways you can communicate to people, and again, using mental models they already have, rather than... You see a lot of research around flashing a bunch of lights or having some display, but is there something that's more intuitive and natural?

In some of our studies, we discovered that people often use the deceleration of the vehicle as one indicator. So should we start the stop a little more abruptly, and maybe a little earlier, to indicate that we see you and we're stopping for you? Another cue people use is sound, the screeching of the brakes. So when we stop, should we actually amplify the screeching sound? That's something that we work on. And then the third class of users, or of people in our integrated system that we think about, are remote operators. So if a car gets stuck, let's say it comes up to an intersection where the traffic light is out and there's a traffic cop directing traffic, a remote operator needs to take over control and have some ability to interface with the car and with the situation. It's definitely an important part of autonomous vehicles.
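One way to see the "expressive robotics" idea Laura describes is to compare two braking profiles. The sketch below uses purely illustrative numbers, not Motional's actual tuning, to show how starting the stop earlier and decelerating harder at the onset changes what a pedestrian observes.

```python
def stopping_distance(v0, decel, onset_delay):
    """Distance (m) travelled before a full stop, for initial speed v0 (m/s),
    constant deceleration (m/s^2), and a delay (s) before braking begins."""
    return v0 * onset_delay + v0 ** 2 / (2.0 * decel)

v0 = 8.0  # roughly 29 km/h approach speed

# Comfort-tuned stop: brakes late and gently; the pedestrian reads the intent late.
comfort = stopping_distance(v0, decel=2.0, onset_delay=1.0)

# Expressive stop: brakes earlier and harder at first, signalling "I see you."
expressive = stopping_distance(v0, decel=3.5, onset_delay=0.3)

print(f"comfort stop distance:    {comfort:5.1f} m")
print(f"expressive stop distance: {expressive:5.1f} m")
```

The earlier, sharper onset is the signal: the vehicle gives up a little ride comfort to make its intent legible to the person on the curb.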

Daniel Serfaty: That's interesting because at first you only imagine the proverbial user, but in a large system, or a system of systems, the way you describe it, there are all kinds of stakeholders here that you have to take into account in the very design of the vehicle itself.

Laura Majors: That's right. Julie and I, in the book, call this other set of people bystanders. These are people who may not even realize whether a car is a human-driven car or a robot. The car may be far enough away, or angled in a way that you can't see if there's a person in the driver's seat or not. And so these people don't necessarily know what are the goals of that robot. Where's it going? What is it doing? How does it work? What are its blind spots? And so I think there's a lot of work there to figure out how you can effectively communicate with those bystanders, again, who know nothing about your system, and be able to interact in a safe way with those bystanders.

Daniel Serfaty: That's fascinating, because it's almost about amplifying an interaction that a car wouldn't normally do, in the sense that because you've adapted in a certain way, you have to exaggerate your signals somehow. We'll be back in just a moment, stick around. Hello, MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS, but don't have time to listen to an entire episode? Then we have a solution for you, MINDWORKS Minis, curated segments from the MINDWORKS Podcast, condensed to under 15 minutes each and designed to work with your busy schedule. You'll find the Minis along with full-length episodes under MINDWORKS on Apple, Spotify, Buzzsprout, or wherever you get your podcasts.

Julie, what do you think are the remaining big scientific or technological hurdles for the next generation of robots? In the sense that, I know you're working with students and you're working in a lab, you have the luxury of slow experimentation and grading semester after semester, maybe a luxury Laura doesn't have in her world. If you had some wishes for the next generation of robots, would they be more socially intelligent, more emotionally intelligent, more culturally aware, more creative? What kind of qualities would you eventually like to be able to design into those robots in the future?

Julie Shah: Well, we definitely need the systems to be more human-aware in various ways. Starting with humans as more than obstacles is a good starting point. And then, once you go down that path, what is the right level at which to model a person? What do you need to know about them? And then, in collections of people, the norms, the conventions really do become important. So that's really just at its beginning. So being able to learn norms and conventions from relatively few demonstrations or observations is challenging, or to be able to update, to start with a scaffold and update a model that the system has in a way that doesn't take thousands or hundreds of thousands or even millions of examples.

And so one of the technical challenges is, as machine learning becomes more critically important to deploying these systems in less structured and more dynamic environments, it's relatively easy to lose sight of what's required to make those systems capable. You look at the advances today, systems that are able to play various games like Go, and how they're able to learn. This requires either collecting vast amounts of labeled data, in which we're structuring the knowledge of the world for the system through those labels, or a high-fidelity emulator to practice in. And our encoding of that emulator never truly mimics the real world. And so you have to figure out what translates and what needs to be fixed up relatively quickly.

Many of our advances in perception, for example, are in fields where it's much easier to collect these vast amounts of data and it's easier to tailor them for different environments. If you look at what's required for deploying these systems in terms of understanding the state of the world and being able to project, we don't have data sets on human behavior. And human behavior changes in ways that are tied to a particular intersection or a particular city, when you're driving or when you're walking as a pedestrian, and so that transfer problem becomes very important for a safety-critical system operating in these environments as well.

And so our own lab has a robust research program in what I call the small data problem. Everybody's working in big data and machine learning. If you work with people, you live in a world of small data, and you begin to work very hard to gain the most you can out of whatever type of data it's easy for people to give. And labels are not easy, but there are other forms of high-level input a person can give to guide the behavior of a system or guide its learning or inference process, paired with small amounts of labeled data.

And so we use techniques like that for being able to back out or infer human mental models, human latent states that affect human behavior. And so as a very concrete example of that, for a very simple system, imagine a subway going up and down a line. Whether it goes up and down the line in Boston or New York, the behavior of the subway is the same. But in Boston, we say it's inbound and outbound from some arbitrary point called Park Street in the middle of the line. And in New York, we say uptown and downtown, based on when it gets to the end of the line and switches. It's sort of a two-state latent model that we hold to describe and understand that system.

But as a person that grew up in New Jersey and then moved to Boston, that switch can be very confusing. But if a person is able to give a machine just the change point in their own mental model, even if they can't use words to describe it, I can say the behavior of the subway switches at this point when it moves through Park Street, but the behavior of the subway in my mental model switches at the end of the line, at this point. That's actually enough for a machine to lock in the two latent states that actually correspond to the human's mental model of the behavior of that system. And so these are real technical challenges, but ones that we can formulate and that we can address, and we can make these systems vastly more capable with relatively little data and with very parsimoniously gathered human input. And so I think there's a really bright future here, but it's about framing the problem the right way.
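Here is a minimal sketch of the kind of parsimonious human input Julie describes: a single user-supplied change point is enough to label the two latent states in the person's mental model of the subway. The stop names and the simple labeling routine are invented for illustration and are not the lab's actual algorithm.

```python
def label_with_change_points(observations, change_points, state_names):
    """Label each observation with a latent state, switching states at the
    indices where the person says 'the behavior changes here'."""
    labels, state = [], 0
    for i, obs in enumerate(observations):
        if state < len(change_points) and i == change_points[state]:
            state += 1  # the human's mental model switches at this point
        labels.append((obs, state_names[state % len(state_names)]))
    return labels

# Stops along a line; the rider says the switch happens at Park Street (index 3).
stops = ["Alewife", "Harvard", "Kendall", "Park Street", "South Station", "Braintree"]
for stop, state in label_with_change_points(stops, change_points=[3],
                                            state_names=["inbound", "outbound"]):
    print(f"{stop:13s} {state}")
```

Even without the words "inbound" or "outbound", that one change point lets a learning system tie its latent states to the distinctions the rider actually uses, which is the kind of small, cheap human input that can substitute for thousands of labeled examples.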

Daniel Serfaty: Laura, in your world, if you had one wish that would simplify, that would create a leap in the systems you are designing, what particular characteristic, and I am hesitant to call it intelligence, but let's say social, cultural, creative, emotional components of the robot side of the equation would you wish for?

Laura Majors: One way I think about it is how do we create intelligence that can be predicted by other humans in the environment? And so I think that's really the leap. We talk about this some in our book. Do you have to change the fundamental decision-making of the robot in order for its behavior to be understood and predicted by the people who encounter it? I think that's a really big gap still today. I think back to some of my early work in autonomous systems, talking with pilots in the military who flew around some of the early drones like Predator and others, and they said the behavior of those systems was just so fundamentally different from a human-piloted vehicle that they would avoid the airspace around those vehicles, give them lots of spacing, and just get out of town.

And then Julie described, in the manufacturing setting, that these industrial robots were safe and could be side-by-side with people, but weren't smart and weren't contributing as well as they could be. So if we have that on our streets and our sidewalks, these systems that behave in ways we don't understand and that aren't able to add value to the tasks that we're doing every day, whether that's delivering food to us or getting us somewhere safely but quickly, I think that's going to be highly disruptive and a nuisance, and it's not going to solve the real problems that these robots are designed or intended to solve. I think there's an element of predictive intelligence.

Daniel Serfaty: I like that, predictive intelligence. It's been said that in our domain, in the domain of human systems, quite often big leaps, big progress, have been made unfortunately after big disasters. The Three Mile Island nuclear accident, for example, in the '70s prompted people to rethink how to design control rooms and human systems. Some accidents with the US Navy prompted the rebirth of the science of teams, and so on. With robots, inevitably in the news, we hear more about robots when they don't work and when there is an accident somewhere. Can you talk about these notions and how perhaps those accidents make us become better designers and better engineers? Laura?

Laura Majors: Yeah. It was a major accident that first led to the creation of the FAA. There was a mid-air collision that occurred prior to that moment in time. Our airspace was mostly controlled by the military. Flying was more recreational. It wasn't as much of a transportation option yet, but there were two aircraft that flew into the same cloud over the Grand Canyon. And so they lost visibility. They couldn't see each other and they had a mid-air collision. And that really sparked this big debate and big discussion around the need for a function like the FAA, and also for major investment in ground infrastructure to be able to safely track aircraft, to be able to see where they are and predict these collision points. That's also when highways in the sky were created, to enable more efficient transportation in our skyways in a way that's safe. So we definitely have seen that play out time and time again.

Another really interesting phenomenon is that as you look at the introduction of new technology into the cockpit, such as the glass cockpit, such as the flight management system, with each introduction of these new generations of capability, there was actually a spike in accidents right after the introduction of the technology, before there was a steep drop-off and an improvement in safety. And so there is this element of, anytime you're trying to do something really new, it's going to change the process, it's going to change the use of the technology. There may be some momentary regression in accidents, in safety, that then is followed by a rapid improvement that is significant. So we have seen this, again, in many other domains. I think that unfortunately it is a little bit inevitable when you're introducing new complex technology that there will be some unexpected behaviors and unexpected interactions that we didn't predict in our testing, through our certification processes and whatnot.

Daniel Serfaty: So that gives a new meaning to the word disruption. I mean, it does disrupt, but out of the disruption, something good comes up. Julie, in your world, do you have examples of that, of basically the introduction of robotic elements or robotic devices causing actually worrisome accidents that eventually led to improvements?

Julie Shah: I can give you two very different examples, but I think they're useful as two points on a spectrum. There are a few people killed every year by industrial robots, and it makes the news and there's an investigation, much like what we talk about in aviation. So a common theme is that a key contributor to accidents is pilot error. But when you do an investigation and understand all of the different factors that lead to an incident or even a fatality, there is something called the Swiss cheese model: many layers with holes in them have to align for you to get to that point where someone is really set up to make that mistake that results in that accident.

And when we look at industrial robots, when something goes wrong, oftentimes you hear the same refrain, and it'll be with standard industrial robots. So, for example, someone enters a space while it's operating and they're harmed in that process. And then you look at it and you say, "Well, they jimmied the door. They worked around the safety mechanism. So that's their fault, right? That's the fault of the person on the factory floor for not following the proper usage of that system."

And then you back up one or two steps and you start to ask questions like, "Why did they jimmy that door?" It's because the system didn't function appropriately and they had to be going in and out in order to be able to reset stock for that robot. And why weren't they going through the process of entirely shutting the robot down? Because there's a very time-consuming process for restarting it, and they're on the clock and their productivity is being monitored and assessed. You put all these factors together and you have the perfect storm that is predictably, with some large enough N, going to result in people dying from it.

It can't just be fixing it at the training level, or fixing the manual, or putting an extra asterisk in the manual, like don't open the cage while the system is in operation. I think this just points to one of the key themes that we bring up in the book, which is the role of designing across these layers, but also the role and opportunity that intelligence in these systems provides you as an additional layer, not just at execution, but at all the steps along the way. A very different example that comes from the research world is related to trust, inappropriate trust or reliance on robot systems. Miscalibrated trust in automation is something that's been studied for decades in other contexts, in aviation and industrial domains. And you might ask, "Does that end up having relevance as we deploy these systems in everyday environments?"

There's this fascinating study done a few years ago at Georgia Tech, where they looked at the deployment of robots to lead people out of a simulated burning building, so a fire in a building. The alarm was going off, they put smoke in the building, they trained the bystanders in the operation of the robot system in advance, and half the participants observed the robot functioning very well. It could navigate, it could do its job. The other half directly observed the system malfunctioning, going in circles, acting strangely. And then when they put people in that building, even the ones that observed the robot malfunctioning moments before followed that robot wherever it took them through the building, including when the robot led them to a dark closet with clearly no exit.

And this might sound funny, but it's not funny, because it's consistent with a long history of studies and analyses of accidents in aviation and other domains, of how easy it is to engender trust in a system inappropriately. This is something that's very important in that particular example of a robot leading you through a building, but also think about cars like Teslas, and being able to calibrate a person's understanding of when they need to take over with that vehicle, what it senses about its environment and what it doesn't. And so these are cautionary tales from the past that I think have direct application to many of the systems we're seeing deployed today.

Daniel Serfaty: Sure. I believe the miscalibrated trust problem has the additional complexity of being very sensitive to other factors like culture, like age, things that people in certain cultures... I'm not talking cultures with [inaudible 00:51:05], but even local cultures may trust the machines more, and maybe to a fault overtrust the machine, more so than other populations. I think that creates a huge challenge for the future designers of these systems, that they have to be adapted to factors for which we usually do not design properly.

Maybe on the other side, I don't want to sound too pessimistic about accidents, even though the lesson, as both of you pointed out, is that those accidents, even those that sometimes involved the unfortunate loss of life, lead to leaps in technology in a positive way. But if you had to choose a domain right now where this teaming of humans and robots has the most impact, whether economic impact or health impact, or by any other measure, what would that be? Healthcare, defense, transportation, that has the good story, not the accident story now. Laura, can you think of one?

Laura Majors: I think if you look at defense and security applications, you can find some great examples where robots help in places where we don't want people to go. So if you think of bomb disposal robots, for example, keeping people out of harm's way so that we can investigate, understand what's happening, and disarm without putting a person in harm's way. There are also other defense applications where we're able to have autonomous parachutes that can very precisely land at a specific location to deliver goods and food to people who need it. There are different drone applications where we can get eyes on a situation, on a fire, to understand hotspots and be able to attack it more precisely.

I think those are some good examples. And that, to me, is one of the reasons why I'm so drawn to autonomous cars, because this is a case where many could argue that people are not very good drivers. There are still a lot of accidents on our roadways, and so there's a great opportunity to improve that safety record. And if we look at what happens in air transportation, it's such a fundamentally different safety track record that we hope to achieve on our roadways through the introduction of automation and robotics.

Daniel Serfaty: What a wonderful reason to invest in that industry. I hadn't thought about it that way; the social impact and the greater good, not just the convenience aspect, is key. Julie, what's bright on your horizon there? What do you see in robotic applications that are already really making an impact, especially when there is a human-robot collaboration dimension?

Julie Shah: Another one that comes to mind is surgical robotics. We've seen this revolution over the past number of years in the introduction of robots in the operating room. But much like the robots used for dismantling explosive devices, these robots are really being directly controlled at a low level by an expert who's actually sitting physically in the same operating room. And nonetheless, you see great gains from that in some contexts. So, for example, rather than doing a laparoscopic surgery, which you can imagine is like surgery with chopsticks, that's going to be very hard. There's a lot of spatial reasoning you have to do to be able to perform that surgery. A lot of training is required to do it. Some people are more naturally capable of that than others, even with significant amounts of training.

For example, a system like the da Vinci robot gives surgeons their wrists back remotely, so they don't have to do chopstick surgery anymore. And so it actually enables many surgeons not fully trained up in laparoscopic surgery to be able to do a surgery that would have otherwise required fully opening a person up. And so you see great gains in recovery time for people. Or surgery on the eye: if you can remove the very fine tremors that any human has to allow for surgical precision, that's very important in that field.

One of the commonalities between some of the applications Laura gave in bomb disposal and the surgical application is that these are systems that are leveraging human expertise and guidance. They're not employing substantial amounts of intelligence. But as someone in the field pointed out to me a number of years ago, what are you doing when you put a surgical robot in the operating room and move the surgeon a little further away in the room? You've put a computer between that surgeon and the patient.

Now, when we put a computer between the pilot and the aircraft, it opened up an entirely new design space, even for the type of aircraft we could design and field. For example, aircraft that have gains in fuel efficiency but are inherently unstable, such that a human on their own, without computer support, couldn't even fly them. This is a very exciting avenue forward as we think about these new options of a computer at the interface: how we can leverage machine learning and data, and how we can employ these to amplify human capability in doing work today.

Daniel Serfaty: Work today, that's a key phrase here, work today. I'm going to ask you a question. I don't even know how I would answer it if people asked me. You are here helping drivers and fighter pilots and surgeons and all these people with robotic devices, changing their lives, in fact, or changing their work. Can you imagine, in your own work as CTO or as professor, a robotic device that could change that work for you in the future? Have you thought about that?

Julie Shah: I have thought about this quite a lot actually, [crosstalk 00:56:32] quite a lot.

Daniel Serfaty: Maybe you thought it might be a fantasy, I don't know, but-

Julie Shah: And with my two- and four-year-old, I spend endless amounts of time picking up and reorganizing toys, just to have to do it all over again. I think one of the exciting things is framing this problem as a problem of enabling better teamwork between humans and machines, or humans and robots. And Daniel, this goes back to your work from a long while ago, which inspired parts of my PhD, on coordination among pilots and aircraft: effective teamwork behaviors, effective coordination behaviors, are critical in the safety-critical contexts where the team absolutely has to perform to succeed in its tasks, but good teamwork is actually good teamwork anywhere. So you remove the time criticality. If you are an effective teammate, if you can anticipate the information needs of others, offer it before it's requested, if you can mesh your actions with theirs, then that good teamwork translates to other settings.

My husband is actually a surgeon. And when I was working on my PhD, I used to point out to him how he was not an effective teammate. He would not anticipate and adapt, and he still makes fun of me for it to this day: you're a surgeon, you need to anticipate and adapt. So good teamwork in cooking together in the kitchen, that same ability translates there, being able to hold a high-quality mental model of your partner, understand their priorities and preferences, and that translates to many other domains. And so by making our teamwork flawless in these time-critical, safety-critical applications, we're really honing the technology to make these systems even more useful to us in everyday life as well.

Laura Majors: Yeah, and as a CTO, a lot of what we strive to do is data-driven decision-making: about our technology, how it's performing, areas where it's not meeting the standards, simulation, testing at scale. There have definitely been many advances in those areas, but when I think about how robotics and automation could help a CTO be better, I think, "Yeah, there are some parts of my job that you could automate. Could you close the loop on finding problems and identifying teams or subsystems that have gaps we may not realize until later in the test cycle? Could we learn those things earlier, identify them, and have a dashboard that shows us where there may be lurking problems so we look at them sooner?"

Daniel Serfaty: No, I agree. Since last year, I've been instilling in my own company, with the new generation, a philosophy called eating your own dog food, basically, which is: let's try those things that we are trying to sell to our customers on ourselves first, so that we can feel that pain before the customer does. But that would be an example. Let's try to help the CTO, the CEO with a dashboard and see whether or not we can actually make a difference. I think it's important that we understand it at that intimate level. Julie, I know that part of your job as associate dean of the school of computing is to consider, or to worry about, the ethical and societal dimensions of introducing automation, artificial intelligence, computing, robotic devices into our lives. What are you worried about? What's the worst thing that can happen with introducing these new forms of intelligences, some of them embodied, into our lives?

Julie Shah: There's a lot to worry about, or at least there's a lot I worry about. I was delighted to take this new role as associate dean of social and ethical responsibilities of computing within MIT's new Schwarzman College of Computing. I was predisposed to step into the role because much of my research has been focused on being intentional about developing computing that augments or enhances human capability rather than replacing it, and thinking about the implications for the future of work: what makes for good work for people? So it's not about deploying robots in factories that replace or supplant people, but how do we leverage and promote the capabilities of people? That's only one narrow slice of what's important when you talk about social and ethical responsibilities.

But the aspects that worry me are the questions that are not asked at the beginning, and the insight, the expertise, the multidisciplinary perspectives that are not brought to the conception and design stage of technologies, in large part because we just don't train our students to be able to do that. And so the vision behind what we're aiming to do is to actively weave social, ethical, and policy considerations into the teaching, research, and implementation of computing. A key part of that is to innovate and figure out how we embed this different way of thinking, this broadening of the languages our students need to speak, into the bread and butter of their education as engineers.

On the teaching side, our strategy is not to give them a standalone ethics class. Instead, we're working with many dozens of faculty across the institute to develop new content, little seeds that we weave into the undergrad courses they're taking in computing, the major machine learning classes, the early classes in algorithms and inference, and show our students that this is not something separate, not an add-on they think about later to check a box, but something that needs to be incorporated into their practice as engineers.

And so it's applied, almost like medical ethics. What is the equivalent of a medical ethics education for a doctor, but for a practicing engineer or computer scientist? And by seeding this content throughout their four years, we essentially make it inescapable for every student that we send out into the world, and we show them through modeling, through the incredibly inspiring efforts of the faculty who, at a different stage in their careers, also work to bridge these fields, how they can do it too. A key part of this is understanding the modes of inquiry and analysis of other disciplines, and building a common language to be able to leverage the insights of others beyond your discipline, to even just ask the right questions at the start.

Daniel Serfaty: I think this is phenomenal. By introducing this concept to our engineers and computer scientists today, we're going to create a new generation of folks who are going to, as you say, ask many questions before jumping into coding or writing equations, and understand the potential consequences or implications of what they're doing. That's great. Rather than worrying like crazy about Skynet and the invasion of the robots, I think it's a much better thing to understand this introduction of new intelligences, in the plural, into our lives and into our work, and to think about it almost like a philosopher or a social scientist would. That's great.

Laura, I want a quick prediction, and then I'm going to ask both of you for some career advice, not for me, though perhaps for me too. Laura, can you share with the audience your prediction? You've been in different labs and companies and you're lecturing all over the world about this field. What does human-robot collaboration look like in three years, and maybe in 15 years?

Laura Majors: That's a big question. I know it's a broad question too, because there are robots in many different applications. We've seen some really tremendous progress in factory and manufacturing settings and in defense settings. I think the next revolution, and really why we wrote the book the way we did and when we did, is going to be in the consumer space. We haven't really seen robots take off there. There are minor examples. There's the Roomba, which is a big example but performs very limited tasks. We're seeing robot lawnmowers, but I think the next big leap is going to be delivery robots, robotaxis, this type of capability starting to become a reality, not everywhere, but I would say in certain cities. I think it's going to start localized and with a lot of support in terms of mapping and the right infrastructure to make it successful.

I think that's the three-year horizon. In the 10-year horizon, you start to see these things scale and become a little more generalizable and applicable to broader settings, and, again, start to be more flexible to changing cities, changing rules, and these types of things that robots struggle with. They do very well with what we program them to do. And so it's us, the designers, who have to learn and evolve and figure out how to program them to be more flexible, and what some of those environmental challenges are that will be especially hard when we move a robot from one city to another, whether it's a sidewalk robot or a robotaxi.

But after the deployment in a few years, when we start to see these things in operation in many locations, then we'll start to see how we pick that robot up and move it to a new city, and how we can better design it to still perform well around people who have different norms, different behaviors, different expectations of the robot, and where there are different rules and other infrastructure changes that may be hard for robots to adapt to without significant technical changes.

Daniel Serfaty: Thank you. That's the future I personally am looking forward to, because I think it will make us change as human beings, as workers, as executives, as passengers, and I'm looking forward to that change. My last question has to do with our audience. Many people in the audience are young people who are maybe finishing high school or in college, and they hear these two super bright and super successful professional engineering women. And you've painted a fascinating domain that many people do not fully understand, that blend of human sciences and engineering sciences. What advice would you have for a young person, man or woman for that matter, maybe just trying to choose which direction to pick in college, or even which college to pick? MIT is not the answer, by the way. Do you mind spending a few seconds each on some career advice? Laura, you want to start?

Laura Majors: Yeah. I think you can't go wrong with following your passion. So find ways, early on, to explore and try out some different areas. If you're in high school and you're trying to figure out what college you want to go to, visit, take tours, look at a range of different options so you can really understand the space and see what you really connect to and where your passion lies. If you're in college, do internships, do research in labs, find ways to get exposed to things to see, again, what's going to spark that interest.

My freshman summer in college, I did a civil engineering internship. I thought I was going to build bridges, and it didn't click for me. It wasn't interesting. I'm glad I did it. It was an interesting experience, but it wasn't something I wanted to do the rest of my life. Try things out, explore early. And then if something clicks, pursue it. Once I found the path I wanted to go down, I never looked back. And so I'd say try to find the intersection of where your passion connects with something that will have an impact. And then if you deliver on that, the sky's the limit, more doors will open than you expect and you'll go far.

Daniel Serfaty: Thank you, Laura. Julie, your wisdom?

Julie Shah: One piece of advice I give to my undergrad, or in the past, my freshman advisees... Just to tell a story, I had one freshman advisee who was trying to think through what classes they should take and actually asked, "What is the one class I take that sets me up to be successful for the rest of my career? I knew what that class was in high school, and I took that class, and I got here. So what is that one class I have to take here?" I think the most important thing to know is that once you get to that point, there's no external metric for success that anyone is defining for you anymore. You define your own metric for success, and you can define it any way that you find to be fulfilling, and that is fulfilling for you. Actually, only you can define it.

And so to Laura's point, the critical part of defining that is having very different experiences to explore that space. In our department, we say we're aiming to train creative engineers and innovators. And "creative" is a really important word there. So where does creativity come from? In the work that I do now, I would say maybe I'm not the traditional roboticist. I approach the work differently, and how I frame the problems is different, and it's because of the very different experiences I've had compared to other people in computer science or robotics. I did my master's degree in human factors engineering. I started my career in aerospace engineering, which is a systems discipline, one that looks to design a full system.

And so you can't optimize each individual component of a spacecraft, say, and expect your overall spacecraft to be an optimized system. There are trade-offs and pulls and pushes. And so those very different experiences set me up on a trajectory to make very different contributions. I think that's the key aspect of following your passion: what are the different experiences you're going to have that you can bring together to have a unique impact? And that's your own path to carve.

Daniel Serfaty: Thank you, Julie, and thank you, Laura, for sharing some of your knowledge, but most importantly, your passion and your wisdom when it comes to all these topics. I remind our audience that Julie and Laura just published a book called What To Expect When You're Expecting Robots: The Future of Human-Robot Collaboration. It's by Laura Majors and Julie Shah. I urge you to read the book and make your own impressions with respect to what you can contribute to the field, but also perhaps the choices you're going to make in your professional life.

Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS Podcast and tweet us @mindworkspodcast or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima Incorporated. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.