MINDWORKS

Intelligent Cognitive Assistants with Valerie Champagne and Sylvain Bruni

June 29, 2021 Daniel Serfaty Season 2 Episode 5

These days, whether we are in our cars, at our desks, or at play, we are increasingly surrounded by automation and so-called “intelligent” devices. Do they help us, or do they make our lives more complicated? Join MINDWORKS host Daniel Serfaty as he talks with Valerie Champagne of Lockheed Martin and Sylvain Bruni of Aptima, experts in human-machine capabilities to support decision-making, as they explore what it will take to build a truly intelligent cognitive assistant—one that could more seamlessly improve human performance in mission-critical environments. 

Daniel Serfaty: Welcome to MINDWORKS. This is your host, Daniel Serfaty. These days, whether we are in our cars, at our desks at work, or at play, we are increasingly surrounded by automation and so-called intelligent devices. Do they help us, or actually make our lives more complicated? My two guests today are coming to the rescue. They're experts in envisioning and designing intelligent cognitive assistants to harness these emerging technologies in order to help us make better decisions, alleviate our workload, and achieve, perhaps, and that is an open question, a higher quality of life in the future.

First, Valerie Champagne is retired from the United States Air Force, where she served in the intelligence field, specializing in all-source analysis, collection management, imagery exploitation, and command and control for more than 20 years. After the Air Force, she worked as director of advanced technology for PatchPlus Consulting, where she first partnered, among others, with my second guest. Currently, she's a lead for the Lockheed Martin Advanced Technology Labs portfolio in command and control, with a focus on human and machine capabilities to support decision-making.

My second guest is Sylvain Bruni, who I'm honored to count as a colleague at Aptima. He is principal engineer at Aptima and the deputy division director for Performance Augmentation Systems. He's an expert in human-automation collaboration. His current work focuses on the design, development, and deployment of, hear that, digital sidekicks that provide cognitive augmentation to mission-critical operators, in both the defense and healthcare domains. Valerie and Sylvain, welcome to MINDWORKS.

Sylvain Bruni: Thank you for having us. 

Daniel Serfaty: Let me ask you first to introduce yourselves and tell us what made you choose this particular domain, an exciting but complicated field of endeavor. Valerie, you could have stayed in the Air Force and been a command and control expert. Sylva, you're an electrical engineer. You could have been an engineer working on complex electrical systems or different kinds of complex systems at MIT or other places. Why did you go into this human field, Valerie?

Valerie Champagne: For me, it's all about making a difference to a warfighter. Let me just expand upon that. You've heard my background. With most of my time spent in the Air Force, I've typically been on the receiving end of technology. I've experienced what we call fly-by-night fielding, I think you've probably heard of that, which basically means capabilities were developed and fielded void of real operator input or training. And so ultimately, and we used to say this at the unit I was in, these capabilities became very expensive paperweights or doorstops, because they either weren't appropriately developed for what we needed, or we didn't know how to use them. And so they weren't used.

Toward the end of my career in the Air Force, I did have an opportunity to lead an acquisition of emerging technologies for command and control. And for me, that was a eureka moment. That is, I was in a position to be able to connect the developers to the operators, so that what we delivered was relevant to what the operator needed, and the training was there so that they could actually put that technology to use. When I moved on from the Air Force, I pursued work in the emerging tech area because that's where I really thought you can make a difference to the warfighter, developing and delivering those capabilities that make their life easier and their life better. That's why I'm in this field.

Daniel Serfaty: Thank you. No, that explains it. I think that's a theme that we'll explore over the next hour or so, this notion of, inadvertently, in an attempt to help the warfighter or the user or the doctor or the surgeon, actually making their lives more complicated. And that happens when engineers design for engineers as opposed to human beings. Talking about engineers, Sylva, you're a bona fide engineer. What made you come to this field?

Sylvain Bruni: Interestingly, it's actually very personal, from a childhood dream of being an astronaut and going to Mars and exploring space. I kept seeing on TV, whether in cartoons or in TV series, all of these space folks going into those advanced spacecraft and exploring. They always had this kind of omniscient automation voice that they could talk to that was doing those incredible things. And to me, I was like, "Well, I wish we could have that. If I'm going to be an astronaut, I want to have that kind of technology to help me out, because certainly I'll never be able to do everything that the characters in those fictional things are doing."

And little by little, learning more about technology, becoming an engineer, what I've come to realize is that there are so many different problems that need to be solved for those types of space exploration, propulsion, radiation, that kind of stuff. But the human aspect, the cognitive and behavioral aspect of the technology that interacts with a human and helps do things that the human otherwise cannot do, wasn't really paid much attention to. And so to me, it was like, "I want to look at, I want to solve those problems." And so little by little, learning and understanding better what this is about, so the field of human factors, cognitive systems engineering, and then getting to actually build some of those technologies, is really what's driven me to this domain and the passion to actually build those types of digital assistants.

Daniel Serfaty: Thank you both for mentioning this kind of dedication and passion, because I think this is a common denominator among people in our field, whether they come from an engineering angle or from the expert or warfighter angle in your case, Valerie, to say there is a way we can do better. And for our audience, this is a mystery, because it sounds like, "Oh my God, this is art. This is not science. This is about the human mind." And yet there is deep science and complex technology that can take that into account. Talking about complex technology for our audience, what do you do today in your day job, Valerie? Can you describe things that you actually do, so that they have a more tangible sense of what the research and engineering in this field look like?

Valerie Champagne: I still work in the emerging technology field, for Lockheed's Advanced Technology Labs. I focus on command and control, as was stated at the beginning, and specifically on the development of artificial intelligence and machine learning for decision-making. A big area for us is AI explainability and how the human will interact with that AI to enable speed to decision. We're very focused right now on Joint All-Domain Operations, and there's a big speed and scale problem with that. We're focused on the development of AI to enable speed and scale, but also to ensure that the human is able to understand what the machine is providing to them as a potential option or operation.

Daniel Serfaty: We'll come back to that notion of the human understanding the technology, and perhaps also the reciprocal design, which is technology understanding humans in order to help them best. But for our audience, you said command and control. Most people don't know what command and control is. Can you tell us what it is? Is that just decision-making in the military, or does it go beyond that?

Valerie Champagne: It does go somewhat beyond that. Ultimately, command and control is having the authority to make decisions. And it consists of strategy development, target development, resource-to-task pairing, and then it, of course, includes the execution of operations, the dynamic replanning that occurs when you are executing operations, and then the assessment of those operations. That's all part of what we call the air tasking order cycle, and basically what we would consider command and control. And now, for Joint All-Domain Operations, we're seeing that tasking order cycle expand to all domains.

Daniel Serfaty: That sounds pretty complex to me, in the sense that it's not just decision-making, like taking one decision and moving on to the next decision, but this notion of the human constantly being in that planning and replanning loop. I can imagine how humans would need assistance from the technology in order to deal with that complexity.

Valerie Champagne: Absolutely. We can talk later about some examples where I wish I had had a cognitive assistant to help me out.

Daniel Serfaty: Yes, of course. Sylva, how about you? You are both a principal engineer, which is a really senior engineer, and also managing, or co-managing, a division called Performance Augmentation Systems. That sounds like space exploration to me. What do you augment?

Sylvain Bruni: That's a very good point. On the daily, I'm a program manager for those digital sidekick efforts. The types of augmentation we do are actually pretty far-ranging in terms of the domains that we work in. We cover a number of things, including intelligence analysis, whether that's for the Army or the Air Force. We cover maintenance and the inspection of maintenance operations for the Navy. We help the Missile Defense Agency with data analysis in their simulation environments, which have a lot of data, the most I've ever seen in any domain. We also help other agencies and commercial partners figure out how technology can serve as a way to extend the cognitive capabilities of humans. That means better exploiting the data that they have, in a way that matches what they want and the time that they have, to actually do the work that they need to do, ultimately yielding the types of outcomes that they want.

For example, if you think about a cognitive assistant in the maintenance environment, one of the big problems that the Navy has, and other services too, is the lack of expert maintainers. Oftentimes there are just not enough people who have the skills and experience to actually do all of the work that is backlogged. What if a cognitive assistant could actually help novice maintainers perform like experts, quickly? We've built some technology where a combination of augmented reality and artificial intelligence models basically helps augment what maintainers know how to do, so the basic skills in maintenance, but perform those skills at a much higher level than they have been trained for, because the technology is helping bridge the gap.

For example, concretely, what that means is if they are wearing augmented reality glasses, they can see information in the context of the operation: highlights on certain parts in a landing gear, for example, the specific instructions that they have to follow, and key advice that other people have told the system before, to say, "Hey, you should stand on this side of the landing gear because in the next step, you're going to be opening this valve. And if you're on that other side, you're going to get sprayed with stuff in your face and you don't want that to happen." All of those things that experts would know, the system could know, the digital sidekick could know, and basically preempt any future problem by delivering the right information at the right time in the right context to a more novice maintainer. Those are the types of things we do in all of the domains I've mentioned.
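To make that pattern concrete, here is a minimal sketch of "right information, right time, right context": expert tips are stored against a procedure step, and the assistant surfaces them when the maintainer reaches that step. All names and the landing-gear example data are invented for illustration; this is not Aptima's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Advice:
    text: str     # a tip captured from an expert maintainer
    source: str   # who contributed it, for provenance

class AdviceStore:
    """Maps (procedure, step) context keys to expert tips."""
    def __init__(self):
        self._tips = {}  # (procedure, step) -> list of Advice

    def record(self, procedure, step, advice):
        # Experts (or past incidents) contribute tips tied to a specific step.
        self._tips.setdefault((procedure, step), []).append(advice)

    def for_context(self, procedure, step):
        # The AR display queries this as the maintainer advances through steps.
        return self._tips.get((procedure, step), [])

store = AdviceStore()
store.record("landing_gear_inspection", 4,
             Advice("Stand clear of the valve side before step 5: it can spray.",
                    "senior_maintainer_03"))

# When the headset detects the maintainer starting step 4:
for tip in store.for_context("landing_gear_inspection", 4):
    print(f"[{tip.source}] {tip.text}")
```

The key design choice is that the lookup key is the context (procedure and step), not a user query: the novice never has to know what to ask for.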

Daniel Serfaty: Both your answers to my question are interesting, because on one hand, you are arguing that through technology, we can increase the level of expertise at which a maintainer, a commander, a surgeon can perform, which is a pretty daring proposition, if you think about it, because sometimes it takes 20 years to develop that expertise. In a sense, you're talking about accelerating that with a piece of AI or a digital sidekick. But are you ever tempted, rather than to augment, to replace? In a sense, to say, "Why should the digital sidekick be just the sidekick? Maybe the digital sidekick can be the main actor." Valerie, are there domains in which we say, "Oh, the heck with it, it's too complicated for a human. Let me just invent a new device that can do that human's job"?

Valerie Champagne: My perspective is that the technology is there to support the human, and not the other way around. I think certainly there are types of tasks that could be delegated to a machine to perform. But in that delegation, just like with people you work with, you delegate a task, but you still follow up. You're still cognizant of what's going on and making sure it's being done the way you want it to be done. And so the human, in my opinion, is always at least on the loop, and at times, maybe in the loop. There are really three reasons that I jotted down related to this. The first is that the AI is there to support the human, not the other way around, which is what I just said. And one of the examples I was thinking of was from this weekend, Memorial Day weekend.

I don't know if you guys travel, but I traveled to Maine to go up to our lake house. There are a lot of tolls going from where I live up to Maine. It used to be in the past that you'd spend hours upon hours waiting to get through all of these tolls. There are like four or five of them. And now, with the E-ZPass system, this is just a simple example, and those overhead cameras, you can just zip right through, so it's a huge time saver. And that's simple AI, but it really helps to save time for the human [inaudible 00:15:16] AI as a force multiplier in this case.

The second reason would be that the human is the ultimate decision maker and, I think, has qualities that, at least for now, don't exist in AI: things like intuition, judgment, creativity. And so for that reason, you wouldn't want to take the human out of the equation. And then finally, I do believe that there are some things that just can't be delegated to AI. These are things where the stakes are very high. For example, the decision to strike a dynamic target. AI can certainly support bringing speed to the decision and assisting the human in identifying options that they may not have thought of. But ultimately, the decision to strike requires at the very least a human in the loop.

Daniel Serfaty: I'm going to play devil's advocate, and I want to hear Sylvain's answer to this challenge too. In your first example, of driving to Maine and not having to stop, saving time, basically adding a few hours to your hard-earned vacation: what happened to that toll booth worker? He doesn't have a job anymore, or she doesn't have a job anymore. This is an example, actually, of elimination rather than augmentation. I know it made the life of the user better, but is there a trade-off there, in a sense? Just a question to ponder. We'll come back to that, because many people are gloom-and-doom predicting that this is it, the robots are coming to take our jobs, and on and on.

I think that you gave two very good examples: one in which the consequences of a decision are so important that you've got to keep a human commander on the loop, because at the end of the day, the responsibility is there, and another example in which we found a technology that totally replaced what used to be a decent-paying job for someone. And so on that continuum, we should talk about this notion of replacement versus augmentation. But I'll plant the seed, and we'll come back to it later. Sylva, on my question: are you ever tempted in your many projects to just say, "Hey, this is one in which we need to replace that human operator"?

Sylvain Bruni: No, never, because as a human systems engineer, if I were to say that, I would be out of a job. No, more seriously, I would completely support what Valerie said, particularly in critical environments where the human is absolutely needed and where technology is nowhere near where it needs to be to be thinking about replacing the human. Valerie mentioned creativity and intuition and judgment. I would add empathy in certain kinds of environments like healthcare. This is a critical aspect of the human contribution that AI will not replace anytime soon.

You also mentioned in the loop and on the loop as the types of relationship that the human has with the system. I would add the perspective of the "with the loop" environment, where there are multiple types of loops that exist in the system and the human needs to understand how those work in relation to one another, things that currently AI models, or the types of technology we are devising, cannot really do.

Even with the exponential availability and capabilities of AI and automation, there are still those types of roles and responsibilities that the human needs to have, because we can't do it otherwise. If I go back to the example of the maintainer, we don't have robots that are good enough to actually change the switches and the valves and the little pieces in the types of environments where we need them to be. There is a degree of nuance and finesse that the human can bring. And by the way, this is both physical and cognitive. The physical finesse of having your fingers in the engine moving things, but also cognitively understanding the shades of gray in complex environments is really critical, going back to the word judgment that Val mentioned.

I think for now, Serf, you're right, there is an entire discourse and argument to be had about the displacement and replacement of jobs. I'm a firm believer that it's not a net negative, that on the contrary, advancing technologies are a net positive in terms of job creation. But you're right, those are different types of jobs for different types of purposes. We always go back to the same quote attributed to Ford: if we had asked people a long time ago what they wanted for transportation, they would have said faster horses. Now we have the car, which did eliminate the job of the horse, and also, when you had carriages, the driver of the carriage, but at the same time, there are tons more jobs that were created in the manufacturing world, in the maintenance world, et cetera.

Daniel Serfaty: Now, these are all very wise remarks. Let's dig a little deeper into the core of today's topic, which is really the intelligent cognitive assistant, very loaded words, each one of them. Humans have been interacting with machines for a while, from the basic manipulation of information through windows, or through clicks, or through the mouse. And so human-computer interaction has been around. What is new? Is there something qualitatively new here in this notion of an intelligent cognitive assistant that is here to help you do your work? Can you unpack that for our audience and tell us what is novel here? Valerie, earlier you mentioned artificial intelligence and explainability. These are, again, very loaded words. Can you tell our audience whether you think it's just a continuation of regular human-computer interaction design, or whether there is something fundamentally novel?

Valerie Champagne: This one is a little tougher for me, because I think we are more in the technical space here. But I really think what's novel about the direction we're going is that with the idea of a cognitive assistant, we're not delivering just a black box to the end user, but a capability that will interact with the operator and serve as a force multiplier for that operator. One of the things that Sylvain said that really resonated with me was the idea of taking a novice person. With that cognitive assistant, they're able to get smarter. They're able to be more productive, and that's the idea of being a force multiplier.

A case in point: when I was doing all-source analysis, we had to build these things, and we had to go through what's called message traffic. There were keyword searches in Boolean logic, very difficult to formulate yourself. And so what would happen is that a senior mentor would always just cut and paste his keyword search over to the new guy, so that they would be able to get the right message traffic to do their job. And so you could imagine that a cognitive assistant could be that senior mentor to that new person, be it a novice or someone new to the job, because in the intelligence field, the second you arrive in theater, you're the expert, even if you've never actually worked that country. It can be a little scary, and having that cognitive assistant would be so helpful.

Daniel Serfaty: Basically, this notion of helping you when you need it most, and knowing when to help you, is really key to the intelligent part of the cognitive assistant. There's a certain degree here, I sense, of the other side taking some initiative in order to help you. Sylvain, can you expand on that from your perspective? Again, what is new? Valerie made a very good point here. Are there other things that our audience needs to know? Are we witnessing some kind of revolution in the way we think of technology to help us?

Sylvain Bruni: I would say so. Traditional HCI design looks at interaction affordances. For example, graphical user interfaces, or tactile interactions, or audio or oral interactions, and sometimes even smell or taste interactions. But all of those have some form of a physical embodiment, the way the interaction between the human and the machine happens. To me, cognitive assistants go a step beyond. I'm going to use a big made-up word here: I consider them to be at the cognosomatic level. That means that they produce interactions that are at the cognitive level, so the cogno, and at the physical totality of the human user, that's the somatic part. They account for both what the human can see, hear, touch, et cetera, but also what they think, what's in that context in the brain of the human: what they want, where they want to go, what their goals and objectives are, and how they go about them.

If you think, for example, about Siri or Alexa, those are called assistants. But to me, those are not cognitive assistants, because if you ask them, for example, what time it is, they will both respond very accurately, or if you set up an alarm in advance, they'll be reactive and ping you when you have requested them to alert you. If you type in a search, for example, in their visual interface, they will give you a series of answers, and oftentimes pretty good answers. They're getting better and better. But none of that actually touches the cognitive level. They have no clue why I am asking for a reminder, or why I am asking certain kinds of questions.

The opportunity lost here is that Siri and Alexa could actually provide me better answers, or very different kinds of answers and support, if they knew the reason why I was asking those questions. When Val talked about force multipliers: Siri and Alexa could multiply my impact in the world by actually giving me better support, maybe slightly different from what I've asked for, but understanding and contextualizing what I've asked for, for better outcomes. In that sense, the human-computer interaction goes beyond the traditional design, which is quite transactional in nature, to focusing, as you said, on the context and the totality of where we are and where we want to go. Does that make sense?

Daniel Serfaty: It makes sense. It also, I'm sure, sounds a little bit like science fiction to our audience, which is good, because we're getting paid to work on the science fiction. But what does the Siri or Alexa of the future need to know about Sylvain in order to understand the why, or to answer why he is asking that question, or even beyond that, to provide you with information before you ask for it, because it understands that you may need that information today? What do they need to know about you?

Sylvain Bruni: That's a great question. And really, that is at the heart of the research we are doing and the prototype development we generate: identifying what pieces of information need to be in that cognitive assistant so it is able to provide that kind of advanced augmentative support. So far, what we are really focusing on is context. Within a wide, encompassing definition of what we mean by context, I can summarize it into three buckets. Number one is what the user is trying to accomplish. What are the goals, the objectives, what is the end state? What do they consider to be success in the situation they are currently in? Not in a generic way, but for the specific situation.

Bucket number two is more about the processes, the missions, the methods that they want to employ to reach those goals. Think about your own work in everyday life; you always have certain ways of doing things. What are those? How do we know that this is what we're using to accomplish those goals? If the cognitive assistant can understand that, it can provide more granular support at every step of the process, every step of the way toward those objectives.

And bucket number three is about the tools and capabilities to accomplish the processes, and the contributions or impacts of those tools and capabilities in reaching the goals. Think of them as the various levers and knobs and things you can parameterize to put your tools and your capabilities in the service of reaching a goal. Once a cognitive assistant has an understanding, even a very basic one, of those three types of things, then we can start actually building the AI, the models, the advanced interfaces, where the system will be able to support you at a very precise and helpful level.
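A minimal sketch of those three context "buckets" as a data structure, assuming a simple key-value representation; the class and field names are illustrative, not a published schema:

```python
from dataclasses import dataclass, field

@dataclass
class UserContext:
    # Bucket 1: what the user is trying to accomplish, and what success means
    goals: list = field(default_factory=list)
    success_criteria: dict = field(default_factory=dict)
    # Bucket 2: the processes/methods the user employs to reach those goals
    workflows: dict = field(default_factory=dict)
    # Bucket 3: tools and their adjustable parameters ("levers and knobs")
    tools: dict = field(default_factory=dict)

ctx = UserContext(
    goals=["produce target brief by 0600"],
    success_criteria={"produce target brief by 0600": "commander approves"},
    workflows={"produce target brief by 0600":
               ["pull message traffic", "filter by keywords", "summarize"]},
    tools={"message_search": {"query": "boolean keyword string"}},
)

# Even this basic model lets an assistant offer step-level support:
goal = ctx.goals[0]
next_step = ctx.workflows[goal][0]
print(f"Assistant: for '{goal}', shall I start '{next_step}' for you?")
```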

Valerie Champagne: I just feel like commenting. I think we need to simplify the cognitive assistant so that we can actually get some technology out there to the warfighter that will provide support, maybe through an incremental development process where, instead of trying to get to that full understanding and reasoning, we just get a cognitive assistant that can help me with some of my mundane tasks. I can think of a time when we were getting ready to brief a general officer, and I'm sitting there trying to plot ranges of ISR capabilities and figure out the placement of orbits and tracks instead of focusing on the message that we needed to provide the general.

I remember thinking, as I was sitting there doing that, "Why can't automation do this for me?" If that cognitive assistant could see me do this task three or four times, why can't it then pop in and say, "Hey, let me take this task over for you, to free you up so you can go think about something a little bit more important." I just think if we could start there, maybe it's not quite as cognitive as what Sylvain is talking about, but it's certainly extremely helpful to the end user who's in the military.

Daniel Serfaty: Thank you for that example and that clarification. I think it's okay for the audience to understand that there is this continuum of sophistication in those cognitive assistants. Some of it is really researchy, in the sense that we are still exploring, and some of it is actually ready for prime time. We'll be back in just a moment, stick around. Hello, MINDWORKS listeners. This is Daniel Serfaty. Do you love MINDWORKS but don't have time to listen to an entire episode? Then we have a solution for you. MINDWORKS Minis, curated segments from the MINDWORKS podcast condensed to under 15 minutes each and designed to work with your busy schedule. You'll find the Minis along with full-length episodes under MINDWORKS on Apple, Spotify, Buzzsprout, or wherever you get your podcasts.

But Valerie, I actually have the mirror image of the question I asked earlier: what does a cognitive assistant need to know about us to actually perform all these wonderful things that we want it to perform, that anticipation and that depth of understanding of my needs as a human, as a user? What about the reciprocal question? In order for us to best use those cognitive assistants, or maybe not use, maybe they would be offended if I said that verb, maybe collaborate with those cognitive assistants, what do we need to know about them? You talked earlier about the ability of those cognitive assistants to explain themselves; in terms of AI, you used the term explainability, I believe. Tell us a little bit about how that works. Does a cognitive assistant need to explain what it does in order for the human to use it better?

Valerie Champagne: On AI explainability, and I'll just talk briefly, we had a project that we were working on with the Air Force Research Lab related to distributed operations. As part of that project, we developed some AI explainability capabilities. What happened on the project was you had distributed nodes that would bid on tasks, to figure out the trade-off of who could do what for resource-task pairing. Some of these nodes were connected with comms and some weren't. And so as a user, you could get the printout of, "Okay, here's your allocation, here's your resource-to-task pairing," basically the answer. But as a user, if I want to go in and understand, "Well, why did you pick this resource instead of that resource," there needed to be this idea of AI explainability.

And so we developed a means to drill down, and there was actual text that would say, "These are the nodes I talked to, and this one couldn't do it for this reason, this one couldn't do it for this reason. So we went with this node's solution." As a user, that was really important, to be able to understand how the machine came to its answer. Now, that's great if you have just a single thing that you're looking at. I think the real difficulty, though, is when you try to scale. And so for the cognitive assistant to be able to render information in a way that highlights where there may be problems, or highlights where I might want to drill down further to that actual text, would be really beneficial, I think.
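A hedged sketch of the bidding-with-explanation idea Valerie describes: distributed nodes bid on a task, an allocator picks one, and a human-readable trace records why every other node was not chosen. The node properties and messages below are invented for clarity, not the actual AFRL project design.

```python
def allocate_with_explanation(task, nodes):
    """nodes: list of dicts with 'name', 'reachable', 'capable', 'cost'."""
    trace, best = [], None
    for n in nodes:
        if not n["reachable"]:
            trace.append(f"{n['name']}: no comms, could not bid")
        elif not n["capable"]:
            trace.append(f"{n['name']}: lacks capability for {task}")
        elif best is None or n["cost"] < best["cost"]:
            if best:
                trace.append(f"{best['name']}: outbid (cost {best['cost']})")
            best = n
        else:
            trace.append(f"{n['name']}: outbid (cost {n['cost']})")
    answer = f"Assigned {task} to {best['name']}" if best else f"No node for {task}"
    return answer, trace  # the answer, plus drill-down text for the operator

answer, why = allocate_with_explanation("sensor coverage, area B", [
    {"name": "node1", "reachable": False, "capable": True, "cost": 5},
    {"name": "node2", "reachable": True, "capable": False, "cost": 2},
    {"name": "node3", "reachable": True, "capable": True, "cost": 4},
])
print(answer)
for line in why:   # the "actual text" a user can drill into
    print(" -", line)
```

Note that the explanation is generated as a by-product of the allocation itself, rather than reconstructed afterward, which is what makes the drill-down trustworthy.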

I think the idea of AI explainability is about building trust in the system. And so as an operator, I want to be able to basically test my cognitive assistant to make sure it's doing what I intended it to do, or maybe it's doing something better, but obviously I want to make sure it's not doing something wrong, because that's going to create a problem for me. I look at the idea of building trust in the system as essential, and that comes from AI explainability, and from three basic characteristics that the cognitive assistant needs to bring to my job.

It needs to be timeless to my workflow. I don't want to be jarred every time there's an update that makes things take longer for me. The cognitive assistant has to fit into my workflow. If I have to pull out a checklist for the buttonology every time I use this cognitive assistant, I don't want it; it's not intuitive. The cognitive assistant has to be intuitive to me. And then, if the cognitive assistant makes work for me, I definitely don't want it. The cognitive assistant has to be productive.

A case in point... This frustrates me to this day, and it happened probably 20 years ago. I was working at DIA, the Defense Intelligence Agency. They gave us a new link analysis tool, and it was supposed to make our job so much easier, identifying what the vulnerabilities were in these... I was a drug analyst working on drug trafficking organizations. And so we would spend hours inputting the relationships, nodes and whatnot, to be able to get the software to work. And ultimately, it would dump our data. All of that time spent entering the data was lost, and that was really painful and nonproductive. For me, I didn't want anything to do with that software. And so for a cognitive assistant to be valuable, I think it has to have those three traits: timeless in my workflow, intuitive, and productive. That spells out TIP; I call it the TIP rule.

Daniel Serfaty: Thank you. This is a great tutorial on understanding what it really takes to engineer those systems so that they are useful. I'm reminded of an old human-machine interaction design principle and the acronym HABA-MABA. Have you ever heard that one? About 20-plus years ago, the principle was "humans are best at," that's HABA, and "machines are best at," that's MABA. This notion that if we can partition the world into two categories, things that machines are good at and things that humans are good at, we can therefore design that world and everybody will be happy.

Obviously, it didn't happen that way, because... and this is my question to you: is there something else when we design a cognitive assistant today, an intelligent cognitive assistant very much along the lines of what we just described? It's not enough just to have a human expert in this and a machine expert in that; we also have to engineer, I'm going to say the word, the team of the human and the assistant together. There is something that needs to be engineered in order for the system to work better. Sylva, what do you think?

Sylvain Bruni: I'm glad you're bringing this up and I'm having flashback to grad school about HABA-MABA and how this [crosstalk 00:37:35]-

Daniel Serfaty: I'm sorry about causing you that kind of thing. Yes.

Sylvain Bruni: No, but to me, it's a very basic dichotomy of task allocation which served its purpose many, many years ago. But it's honestly very outdated, both with respect to what we know now and to what is available in human-automation collaboration and technologies such as AI, advanced interfaces, and things like that. And unfortunately, I have to say that this pops up more than we would think in the research, in what people currently do nowadays. And that's really to my dismay, because I think it should just be abandoned, and we need to move forward following a very simple principle, which is that the whole is greater than the sum of its parts. We use that in everyday life for many things. It applies here just as well.

And from my perspective, moving beyond HABA-MABA means transitioning from task allocation to role, and potentially responsibility, allocation, so a higher level of abstraction in terms of the work being performed. What we are working on, and what the field in general is moving towards, is a dynamic meshing of who does what when, based on the context and the needs that the human has. And when I say who here, it can be one human or multiple humans, one agent or multiple agents. And by agent, I mean algorithms, automated things, automated components in a larger system; it can be robots, things like that.

And to me, it's important to think more in terms of the roles and the responsibilities, which have ties to the outcomes, the products of what we want, and then figure out the right technology and the right allocation of which parts of a task are done by which member of the team. That automatically gives a lot better performance and better system design in general, thinking about the acceptability down the road and, as you said, Val, the explainability. That type of granularity in how things are allocated enables better explainability and transparency in the system.
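One way to picture role-level allocation, as opposed to fixed task lists, is a policy that re-assigns roles whenever context changes. The roles, the judgment flag, and the workload threshold below are all assumptions for illustration, not a fielded design:

```python
ROLES = {
    "monitor_data_feeds":  {"needs_judgment": False},
    "approve_strike":      {"needs_judgment": True},   # stays human (in the loop)
    "draft_target_list":   {"needs_judgment": False},
}

def assign_roles(context):
    """Re-run whenever context (workload, comms, tempo) changes."""
    assignment = {}
    for role, props in ROLES.items():
        if props["needs_judgment"]:
            assignment[role] = "human"            # judgment and accountability
        elif context["human_workload"] > 0.8:
            assignment[role] = "agent"            # offload under high workload
        else:
            assignment[role] = "human_with_agent" # shared, human on the loop
    return assignment

print(assign_roles({"human_workload": 0.9}))  # high tempo: agents take more
print(assign_roles({"human_workload": 0.3}))  # low tempo: human stays engaged
```

The point of the sketch is the re-run: allocation is a function of context evaluated continuously, not a table fixed at design time.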

Daniel Serfaty: That's good, Sylva. Thank you. As you're talking about trust and reliability, probably many members of our audience, those of us old enough to remember, are thinking of that little cartoon character from Microsoft that was on your screen and looked like a paper clip. That was perhaps an early, naive attempt at what an intelligent assistant, in terms of organizing your actions on the screen, was supposed to do. I'm thinking about the keyword that you pronounced, Valerie, the notion of trust. Clippy was trying to infer from your actions what you needed, and it was wrong at least 50% of the time, because it would reason, "If Daniel wanted that, and Daniel moved that window, et cetera, that means he intends to do that." Those connections were not always right.

In fact, they were wrong most of the time. And that became very annoying. It's like that overeager assistant who wants to help but cannot help you at all. People rejected it as a whole. I'm sure it had some good features. But it seems to me that this notion of an assistant that is designed to help you but actually goes the other way destroyed the level of trust people had, in Clippy and probably in others, for years to come. How do we build that trust? How do we build trust into those systems? They need to reach a certain level of reliability that perhaps is beyond the reach of current technology, or not. Let's tackle that notion of building trust in technology.

Valerie Champagne: I'd like to make a comment too about the problem of Clippy giving basically false alerts. From an operator perspective, if you receive too many of those alerts, they do become annoying, like Chicken Little or the Boy Who Cried Wolf. But the real danger here is that you get desensitization. And so instead of heeding the warning and getting out and doing your checklist when the adversary is scrambling their aircraft, maybe you do nothing, because you're like, "Oh, that's happened 50 times and it's never been accurate. My assistant is wrong here." That's one issue.

Another issue is that the alerts can become [inaudible 00:42:24]. And so it can decrease critical thinking, where you don't look beyond the obvious. If you think of, in some part, Pearl Harbor or 9/11, we had some of that going on. And so with a cognitive assistant, it's super important that you do build that trust. Ideally for me, if I was going to have a cognitive assistant, I would want to be able to test it. And ideally, the assistant would co-evolve with me, so I would want to be able to test it as we evolve. You don't want to have to go back through the whole testing cycle. It has to be something where I'm able to generate the tasks myself and execute them on the cognitive assistant to see how it performs. That would be my ideal world.
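Valerie's "test it myself" idea maps naturally onto a regression-test harness that the operator, not the vendor, owns and re-runs after each learning cycle. A minimal sketch, with a stand-in assistant stub and invented scenarios:

```python
def assistant(scenario):
    # Placeholder for the (possibly retrained) assistant under test.
    if scenario == "adversary scrambles aircraft":
        return "alert"
    return "no_alert"

operator_tests = [
    # (scenario, expected behavior) authored by the operator, not the vendor
    ("adversary scrambles aircraft", "alert"),
    ("routine training flight", "no_alert"),   # guards against crying wolf
]

def retest(assistant_fn, tests):
    failures = [(s, e, assistant_fn(s)) for s, e in tests if assistant_fn(s) != e]
    for scenario, expected, got in failures:
        print(f"FAIL: '{scenario}' -> {got}, expected {expected}")
    return not failures

# Run after each co-evolution step; trust is renewed only when all tests pass.
print("trusted" if retest(assistant, operator_tests) else "needs review")
```

The second test case is the important one: it encodes the desensitization concern by checking that the assistant stays quiet when it should.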

Daniel Serfaty: Thank you, Valerie. Sylvain, do you want to chime in on that? And let me add a level of complexity to my own question, based upon what Valerie just said. That notion of co-evolution, that notion of a cognitive assistant that continues to learn after the designers are done with it, for perhaps months or years, by observing what you do, inferring, making conclusions, making inferences, and acting accordingly, means that you have in front of you a piece of technology that will behave differently tomorrow than the way it behaved yesterday, because it learned something in the past 48 hours about you and your habits. How do you trust such a thing? How do you build that trust? Valerie suggests the ability to continuously test and to continually understand the direction of that co-evolution. Any other ideas along those lines before we take a break?

Sylvain Bruni: I agree with the suggestion Valerie made. The testing, to me, is just like the training, or co-training, that humans would do with a team of humans: you are in certain situations, you rehearse things, you work scenarios, you explore, you see how your teammates react, you course correct as needed. That type of principle, I agree, is probably the easiest for the human to understand, and probably also for the AI and the system design side to account for. And creating those opportunities is certainly a great method, from an engineering perspective, to build trust, enabling the trust to grow and the relationship to grow and get better over time.

I will say that trust, transparency, and explainability, though, have really become buzzwords in the last couple of years, so I very much like the way Val has been decomposing it: what exactly does that mean in terms of the engineering aspects we need to focus on? I'd add a couple to that, going back to one word that was mentioned earlier, which was fit: the fit, I would say, in the conversation, so the back and forth between the two, the intent fit, and the role fit between the users and the cognitive assistants. Those, I think, have dimensions, or constraints, or assumptions that really need to be thought about to enable that trust and a good working-together to happen.

In some way, it reminds me of the origin of all of this, which is Licklider's symbiosis between humans and machines. I think this is about encompassing the engineering dimensions that enable that type of symbiosis. To address your question about methods to co-learn and co-evolve, apart from training, I would say it's about providing points of leverage within the system itself, so that every operational use creates some form of learning for the human and the system. You mentioned earlier tool versus teammate; to me, this goes beyond the debate of tool versus teammate. There needs to be key learning happening at every type of interaction, and when you design a system, you have to put that in there.

If I return to the maintenance example, when the novice maintainer is using the system, the cognitive assistant, as they are repairing, let's say, landing gear, there needs to be learning on the human side. The human should learn something that they can reuse somewhere else, at another time in the future, with or without the cognitive assistant. But in reverse, the cognitive assistant should also understand what the human is doing and why they're doing it, because that might modify the way the AI is actually modeling the environment, the task, the system over which the human interacts.

The human may be very creative and find another way to replace that little valve number three over there, a way which is not in the original guidance that the cognitive assistant may have learned from, or may not have been something any of the previous human experts demonstrated, so the cognitive assistant would never have seen it before. And that new person who has this new way of doing things is injecting new data that can be helpful to the AI model to increase performance over time. All of that needs to be designed for and engineered when the system gets created, so that evolution benefits from operational use as well.
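A small sketch of that reverse learning loop, under the assumption that the assistant can compare observed steps against its known procedure and check the outcome; the procedure names and review policy are invented for illustration:

```python
known_procedure = ["drain pressure", "remove cover", "replace valve 3"]

def observe(observed_steps, outcome_ok, candidate_log):
    """Compare observed work against the known procedure."""
    if observed_steps != known_procedure and outcome_ok:
        # A novel-but-successful method: queue it for expert review and,
        # eventually, for retraining the assistant's task model.
        candidate_log.append(observed_steps)
        return "novel method captured for model update"
    return "matches known procedure" if outcome_ok else "flag for review"

log = []
print(observe(["remove cover", "drain pressure", "replace valve 3"], True, log))
print(log)  # the new data the human just injected into the system
```

The design choice worth noting is that deviation plus success is treated as candidate knowledge, not as an error, which is what lets the assistant learn from creative operators.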

Daniel Serfaty: Thank you, Sylva and Valerie, for these explanations. This is complex. It seems that until we get an intelligent cognitive assistant that can collaborate with a human operator in the manner by which, say, a true human intelligent assistant would collaborate with the user, we have to pay special attention to building trust through these learning moments, these co-evolution moments, because without them, we might throw the baby out with the bathwater. In a sense, until we reach a certain level of smoothness, for lack of a better term, in that collaboration, we have to pay particular attention as designers to injecting opportunities for collaboration into the design of the system.

We just talked about the complexity, but also the opportunities, of designing those intelligent cognitive assistants, perhaps in the future with some kind of proliferation of them around us, such that it is very natural for us to work with different intelligent cognitive assistants for different parts of what we do. In what particular domains of work, or play, actually, do you believe cognitive assistants, especially those intelligent cognitive assistants, will have the strongest impact? Is that at home, in defense, in cyber, in healthcare, in gaming? You're welcome to give examples of what's happening today already, where you've already seen an impact, but also of what's coming up. What are the markets, the domains of work, that are ready to welcome those?

Valerie Champagne: I really think there is a high payoff in the defense industry for these cognitive assistants. Like we've said previously, they have an opportunity to be a real force multiplier. I think what the current status is depends on the field you're in. Sylvain, you've been talking about the work you've been doing in the maintenance field. I know for intelligence, we had prototypes like the one that we worked on together, but I don't think they're heavily deployed to the field just yet, from what I have seen in recent times. But I think there's real opportunity there.

My sister is a nurse. Before this call, I went ahead and talked to her: "What do you think about a cognitive assistant?" I was asking her about medication. She said, right now, that's already automated for them: if it's not a narcotic, which has to be handled separately, basic medications are automatically distributed. Basically, that's an assistant. It gives you the pills to provide to the patient and it automatically inventories them. She feels that they have complete trust in it. It's super great, she loves it, and it's very much integrated into their workflows. She's a nurse on the floor.

Daniel Serfaty: You're talking about defense and healthcare as probably ripe for accepting that kind of innovation that's coming out of the labs right now, even if it has not been fully fielded. Is that because those domains are particularly complex, Sylva, particularly variable, or because those domains have grave consequences for errors?

Sylvain Bruni: I would say it's both. To me, defense and healthcare, and I would also add cyber, are the domains or markets where I would see intelligent cognitive assistants having a major role to play. I think that's for a couple of reasons. In those domains, humans need additional support or cognitive augmentation to perform at their best and beyond, and to avoid critical outcomes, namely death. In those domains, if you make a mistake, a human might die, or the wrong human might die in the case of military operations. And that is really a cost that we don't want to bear. Therefore, considering the complexity, the dynamicity, and the uncertainty of the environment, which apply to defense, cyber, and healthcare, the speed, the repeatability, and the potential accuracy that the automation, the AI, can provide, at a pace much greater than the human's, make it a necessity to embed them.

The question is what type of mechanisms we want to embed, and how. I think that's where the crux is, and where it gets really difficult to bridge the gap between the research and the actual deployment of a developed version of a cognitive assistant, because you need to select. Like you said earlier, Val, we have those grand ideas about perfect cognitive assistants and what they need and what they could do. But in the reality of engineering and deploying the systems, you need to focus narrowly on something that is accomplishable right away and demonstrate the value before you can increase that.

I will say that in those three domains of defense, healthcare, and cyber, we are witnessing a widening gap between the amount of data that's available and the human's ability to handle those data. It's only getting worse with the advent of 5G, the Internet of Things, edge computing. All of those new technologies basically multiply the amount of data that humans have to handle. And to me, that's how I identify which domains are ready for this kind of technology.

Daniel Serfaty: It's interesting: complexity, certainly, and mission criticality, in the sense of the consequences of making certain decisions. How about the economics of it, or even the social acceptance aspect? Like, wearing my Fitbit or my smartwatch has become socially acceptable. They don't augment me; they just measure what I do. But having an assistant on my desk... And actually, let me ask you, Valerie and Sylvain. Both of you are highly qualified experts, making decisions all day long about projects and about customers and about collaborations. Are your jobs going to be impacted by intelligent cognitive assistants? Can you imagine a day when you have a cognitive assistant next to you that helps you do your job, eating your own dog food?

Sylvain Bruni: I definitely do. 

Daniel Serfaty: You do?

Sylvain Bruni: Yeah, eating our own dog food, I absolutely do. I would say maybe another characteristic of where a cognitive assistant could take off as a technology is when there is a huge backlog of work and not enough humans to perform it. I see that in my own job, where I have so much work. I would want to clone myself, but since I can't, maybe an intelligent cognitive assistant can help. I always think of that in terms of two lines of work. There is the overload of work, I have a ton of things that I need to do, but then there is the overhead associated with the work. For example, let's say I need to write a proposal.

Well, writing a proposal is not just me taking pen and paper, or a text processing system, and typing the proposal. There is a lot of overhead to that. I need to fill out forms, I need to understand what the proposal is going to be about, what the customer wants, what kinds of capabilities we have to offer, all of that, all the types of additional things that need to be done, but interestingly, they don't provide specific value to the task of writing a proposal.

Along those two threads of overhead and overload, I could see an intelligent cognitive assistant helping me out. In proposal writing, filling out those forms for me using the template, using the processes we currently use that are very well defined: why couldn't I have this cognitive assistant actually do all of those menial tasks, so that I can focus, like Valerie mentioned earlier, on those parts that I really need to focus on? Same thing in your example of writing a report for a general. You want to spend your cognitive abilities on what's going to make the difference and bring value to the general, not on selecting the right font and the right template and the right colors for the report you want to produce.

And that, to me, applies to almost everything I do: customer management, project management, and even the basic research of keeping up to speed with literature reviews or the content of a conference. Across all of those types of things, there is a lot of the actual work I need to perform that could be automated, so that my brain focuses only on what my brain can do.

Daniel Serfaty: In a sense, you're imagining a world in which you will have several cognitive assistants that specialize in different things. You'll have one that helps you write proposals in the way you just described, another that helps you manage your schedule, another that you can send to listen in on a conference that you don't have time to attend and that can summarize the conference or the talk for you.

Sylvain Bruni: Interestingly, I would say I would want one cognitive assistant to manage an army of other cognitive assistants doing those things, because there is an interplay between all of this. Remember when I was talking about the complexity of the context and what the context means. When I'm writing a proposal, I'm also thinking in the back of my head about the work I'm doing on this other project, and about that other customer for whom the proposal isn't intended, but who could potentially be interested in the work from this proposal. All of those things are so interrelated that I would want my cognitive assistant to be aware of absolutely everything, to be able to support me in a way that augments everything and is not just siloed. Does that make sense?

Daniel Serfaty: That makes sense. What about you, Valerie? If you had a dream intelligent cognitive assistant, what would it do for you?

Valerie Champagne: I agree with everything that Sylvain said. That sounds awesome. I will just add: when I was an executive officer for a general officer for a brief period of time, one of the tasks I did for him was, every morning when I came in, I would review his email. I would get rid of all the stuff that wasn't a priority, and then I would highlight those items that he really needed to look at. I think about my email the same way. I don't know about you, but as soon as I get off this call, I'm going to have hundreds of... or at least 100 emails in there that I've got to weed through to figure out what's important and what isn't.

That's a very simple example of how this cognitive assistant could really help us out. I think it's probably doable now, where it can learn the things that are most important to me. I know you can input different rules and things into the email system and make it do this for you, but that also takes time and it's not always intuitive. And so a cognitive assistant that can just make it happen, yeah, that's what I want.
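Valerie's email-triage assistant could start as nothing more than a scorer that learns from which messages she actually acts on. A minimal sketch, with invented senders and a deliberately crude learning rule:

```python
def learn_weights(history):
    """history: list of (sender, was_acted_on). Returns per-sender weights."""
    weights = {}
    for sender, acted in history:
        w = weights.setdefault(sender, 0.0)
        weights[sender] = w + (1.0 if acted else -0.5)  # crude reinforcement
    return weights

def triage(inbox, weights, threshold=0.5):
    # Split the inbox into "review first" and "everything else".
    highlighted = [m for m in inbox if weights.get(m["sender"], 0) > threshold]
    rest = [m for m in inbox if m not in highlighted]
    return highlighted, rest

weights = learn_weights([("general.x", True), ("newsletter", False),
                         ("general.x", True), ("newsletter", False)])
inbox = [{"sender": "general.x", "subject": "Brief at 0800"},
         {"sender": "newsletter", "subject": "Weekly digest"}]
top, rest = triage(inbox, weights)
print("Review first:", [m["subject"] for m in top])
```

Unlike hand-written mail rules, the weights here come from observed behavior, which is exactly the "just make it happen" quality Valerie is asking for.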

Daniel Serfaty: It's interesting. You describe a world where that device, or that collection of devices, or that hierarchy of devices, since Sylvain wants a whole army of them helping him, is developing quite an intimate knowledge about you, about your data, your preferences, but also perhaps your weaknesses and even beyond that. In order to help you very well, like a partner or a life partner, there is that kind of increased intimacy of knowing about each other. That intimacy is really the result of a lot of data that the assistant is going to have about you, the user. Do we have some ethical issues with that? Are there ethical considerations in the design of those systems to protect those data, perhaps beyond the way we protect medical data today through HIPAA compliance or other means? We're talking about getting way inside our psyche at this point. As engineers, how do we conceive of protecting those data?

Valerie Champagne: I have no idea about the design; I'll leave that to Sylvain. But I will say there is a feeling of Big Brother with that cognitive assistant. I know I've seen some sci-fi movies where the cognitive assistant turns on you and does nefarious things. And so I think security... I mean, we've just seen two cyber breaches in the last couple of weeks, one on an oil pipeline and one on a meat processor. Imagine if your personal assistant got hacked; that could be pretty scary. We definitely need to build in security, and then whatever else Sylvain says.

Daniel Serfaty: Yes.

Sylvain Bruni: No pressure.

Daniel Serfaty: Immense stuff. Valerie said this is entirely your responsibility to address. Have you thought about that as you design those data streams, about ways to protect them? Because as you said, if I want to know something about you, it would be very costly for me to spy on you, so to speak, but I could easily hack your assistant, which knows almost everything about you, your cognitive assistant.

Sylvain Bruni: This is a valid concern, just like with any technology that's going to be handling data generated by humans. I think there are two aspects to the problem. The first one, which you went into, is the cybersecurity, the integrity-of-the-system aspect of it. Both in the world of cybersecurity and in the healthcare world, there are a number of protocols and methods in engineering to design systems that counter that as much as possible. Obviously, 100% safety of access does not exist. There is always a risk. A secondary part within that realm of cybersecurity and data integrity is adding layers of security in the way the data are stored or manipulated.

Blockchain is one emerging way of ensuring that the data are better protected through distribution, but you could also imagine certain things about separation of data, a "two and two cannot be put together" kind of process. Anonymization and abstraction of information are other methods we can think of for that aspect of data security. But there is another, bigger problem to me, which is what the assistant could do or reveal with the data it has, beyond its current mandate of supporting the human. Sometimes that latent knowledge can yield opportunities for learning and betterment of the human; we've talked about that a little bit, using the gaps that may be identified by the cognitive assistant as an opportunity for learning.
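Two of those safeguards, pseudonymization and separation of data, can be sketched in a few lines. This is a conceptual illustration only, not a vetted security implementation; a real system would manage the salt in a secrets vault and keep the stores on physically separate systems.

```python
import hashlib

def pseudonymize(user_id: str, salt: str) -> str:
    # Replace identity with a salted hash; the salt lives in a separate store.
    return hashlib.sha256((salt + user_id).encode()).hexdigest()[:16]

# Separation: behavioral data and the identity mapping are kept in distinct
# stores, so a single breach reveals neither identity nor habits alone.
identity_store = {}     # pseudonym -> user (held by a custodian service)
behavior_store = {}     # pseudonym -> assistant's learned observations

salt = "example-salt-kept-elsewhere"   # illustrative only
pid = pseudonymize("valerie", salt)
identity_store[pid] = "valerie"
behavior_store[pid] = ["prefers morning email triage", "ISR plotting tasks"]

# The assistant works from behavior_store alone; re-identification requires
# both stores plus the salt, which raises the bar for an attacker.
print(pid, behavior_store[pid])
```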

That aspect, we were definitely thinking about, and we intentionally tried to put in place mechanisms such that when a gap is revealed, it is not about saying, "Oh, you're bad and you suck at your job," but more, "Hey, here's an opportunity for improvement," and the cognitive assistant could trigger a task, an activity, something for the human to learn and bridge that gap. The problem beyond that is when AI becomes a little more intelligent and can do things that we can't necessarily anticipate just yet. I do not yet have a good answer for that, but a lot of other people are thinking about those types of issues right from the very beginning, because that's where it needs to be thought of: at the design level.

Currently, the gate is really in the interaction modalities. The cognitive assistants are built for a specific mandate, with a narrow interface. All of the latent knowledge that could be used for something else typically would not come out. But who knows? We could have a cognitive assistant say things that are very inappropriate in the language it uses. That has happened. There are some methods to guard against that, but we're discovering what those types of problems may be as we implement and test these kinds of systems.
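As a concrete, deliberately simplistic illustration of the guardrail methods Sylvain alludes to, here is a sketch of screening an assistant's reply before it reaches the user. The blocked-term list is a placeholder, and production systems would use learned classifiers rather than keyword matching.

```python
# A toy output guardrail (illustrative only) that screens an assistant's
# reply before it is spoken or displayed to the user.
BLOCKED_TERMS = {"offensive_term_a", "offensive_term_b"}  # placeholder list

def moderate(reply: str) -> str:
    """Suppress replies containing blocked terms; pass everything else through."""
    lowered = reply.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return "Sorry, I can't phrase it that way."
    return reply

print(moderate("Here is the report summary you asked for."))
```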

Daniel Serfaty: I'm sure Hollywood is going to have a field day with that. I'm waiting for the next big feature where the cognitive assistant goes wrong. It happened before in 2001: A Space Odyssey, one of the first ones.

Sylvain Bruni: Correct.

Daniel Serfaty: My last question for you, notwithstanding the science fiction aspect of it, it's not so much fiction. We're really touching those assistants right now as we speak, and they're becoming more sophisticated, hopefully more helpful, but with the danger of also being nefarious. If you imagine a world 10 years from now, so I'll give you enough time to be right or to be wrong, describe for me a day in the life of a worker, you can pick a doctor, or a nurse, or a command and control operator, or an officer, or a maintenance worker, who lives with the cognitive assistants of the future. How does it work? Can you make a prediction? Are they going to be everywhere? Are they going to be very localized in some specialties? Who wants to start with that wild prediction? Don't worry about making predictions about the future; you cannot be wrong.

Sylvain Bruni: My prediction is that in the next few years, we're going to see incremental evolution of the types of assisting capabilities that we have mentioned throughout the podcast, i.e., the Siri and Alexa types of things getting better, a bit more clever, having some basic understanding of the environment, and the conversation becoming more fluid and multimodal. I think that constant improvement is going to happen. However, further down the road, I would see a really big leap happening when data interoperability in various domains is a lot easier and faster, particularly in the consumer world.

I would imagine that in the future, those cognitive assistants will be everyday work companions that we cannot live without, just like the cell phone. Nowadays, we would not be able to survive without a cell phone. I think down the road, the same will be true for cognitive assistants, because they will have proven their value in removing all of the menial little things that are annoying every day: data searches, data understanding, data overhead. I would really see that as the way this concept is going to get into the hands of everyday people.

Before that, I think research is still needed, and key critical environments like defense and healthcare will be the drivers of technology development, because in those areas the cost of a mistake is so high, the demand on human brain power is so high, and resources are currently so limited that they will have to have that type of tool or teammate, I don't want to reopen Pandora's box on that one, that provides the support to actually do the work that needs to be done. That's my two cents' prediction for the future.

Daniel Serfaty: Thank you. That's very brave and exciting, actually, as a future. Valerie, you want to chime in on this one, 10 years from now?

Valerie Champagne: Sure. Mine is going to be a little more grim than Sylvain's, because I'm taking it from the standpoint of an air operations center. Between where the air operations center was technologically back in the early 2000s and where it is now, it has not advanced very far. And so when I look at the capabilities of a cognitive assistant, I just think that in 10 years, the issue will be not so much developing it as getting it fielded, tested in the field, and integrated fully into the workplace. In 10 years, I think you may get a cognitive assistant that does some of those rudimentary, mundane types of tasks that free up the operators so they have the time to really think. And then once they gain trust in that, I think you could see it leap. But I don't think that will happen in the next 10 years. I think the leap will happen after that.

Daniel Serfaty: Here you have it, audience: both an optimistic and a more cautious prediction about the next 10 years, where artificial intelligence-powered devices, software or hardware, are going to be here to make our lives better, alleviate our workload, and help us make better, wiser decisions. Let's meet again in 10 years and discuss and see where we are at. The MINDWORKS podcast will go on for the rest of the century, so don't worry, we'll be here.

Before we close, I'm going to ask you for advice. Many people in our audience are college students, or maybe high school students thinking about going to college, or graduate students, or people wanting to change careers, and they might be fascinated by this new field that, in a sense, you're fortunate to work in, Valerie and Sylvain, at the intersection of psychology and human performance and computer science and artificial intelligence and systems engineering. They are asking themselves, "How can I be like Valerie or Sylvain one day?" What's your career advice for those young professionals, or about-to-be young professionals? You each have a couple of minutes. Who wants to start?

Valerie Champagne: I'll go first. My background is not computer science. I don't code. I'm not even a cognitive science person, although if I were going back for my bachelor's, that's what I would study, there is no doubt about it, because I love the work. For context, I have a bachelor's in German and a couple of master's degrees related to the intelligence field that I got when I was in the military.

And so what I would say to folks is: if studying the sciences isn't directly your passion, you can still work in this space by, first of all, having a passion for delivering relevant capabilities to the end user, and then gaining experience so that you can be a connector. That's how I look at myself. I don't develop, but I do know how to connect, and I also know a good thing when I see it, and I can help connect the developers to the people they need to talk to so that their product is better and, ideally, has a better chance of transitioning.

Daniel Serfaty: Passion and connection, that's really your advice, Valerie. Thank you very much. Sylvain, what's your advice here?

Sylvain Bruni: I have three pieces of advice. The first one, from an academic and professional development point of view: I would encourage folks not to overly specialize in a specific field, for two reasons. First, the fields that you need to understand to do this kind of work are very dynamic. They change all the time. The tools, the knowledge, the capabilities of what's out there, everything just changes so fast. The technology we are using in 2021 is vastly different from what we were using even three years ago.

So my advice there is: learn the basics in systems engineering, human factors, design, AI, software, a little bit of everything, and, second, see how those fields connect with one another, because you will need those connections to be able to work in the area of cognitive augmentation. Number two would be for folks to be curious and eager to learn, and to tackle things that are completely foreign to them. That's mostly for the field of application.

I started out very passionate about space; little by little, that moved to aviation, then to defense, then to healthcare. When I started, I had no clue about anything related to defense or healthcare. And now, that's where my professional life is and continues to evolve. Being curious about the things you don't know is really going to be an advantage, because then you're going to start asking the questions you need to ask to really understand, to be able to build a technology that's not going to fail at critical times.

And finally, going back to our previous discussion, I would very much encourage everyone to watch and read science fiction. This, to me, is my best source of information for the work I do because, one, I can see what authors have dreamed up as the worst of the worst that could happen. We talked about that. What could go wrong with AI? Well, it turns out there is a huge creative community in the world that is thinking about that all the time and making movies and books and comics out of it. And so, just for your own awareness and understanding of what could go wrong, that can have an influence on your design work.

But also on the good side, not everything is apocalyptic, and so you have some good movies and some good books that will show you a brighter future enabled by robots and AI and all of that. Those types of capabilities and features are always something you want to aspire to in the work that you do in building them and delivering them to the world. I could go on and on about science fiction and how it's actually useful for everyday engineering and design, but I will just encourage people to take a look.

Daniel Serfaty: There you heard it, audience: learn something new, be curious, read science fiction, be passionate, and make connections. Thank you for listening. This is Daniel Serfaty. Please join me again next week for the MINDWORKS podcast and tweet us @mindworkspodcst, or email us at mindworkspodcast@gmail.com. MINDWORKS is a production of Aptima, Inc. My executive producer is Ms. Debra McNeely and my audio editor is Mr. Connor Simmons. To learn more or to find links mentioned during this episode, please visit aptima.com/mindworks. Thank you.