MINDWORKS
Join Aptima CEO Daniel Serfaty as he speaks with scientists, technologists, engineers, other practitioners, and thought leaders to explore how AI, data science, and technology are changing how humans think, learn, and work in the Age of AI.
AI’s Expertise Upheaval: Mastery vs. Growth Roles
AI is reshaping entry-level work and the path to expertise. Host Daniel Serfaty and Prof. Joseph Fuller of the Harvard Business School break down how “mastery” roles shrink as rules-based tasks automate, and “growth” roles expand as AI removes barriers to entry.
Learn what this shift means for career development and the next generation of talent.
Daniel Serfaty: How did your interest in the future of work begin? Was there a particular moment, or a project, that turned your focus in that direction?
Prof. Joseph Fuller: Well, like a lot of life, it was a bit of thought and chance. When I left Monitor and joined the Harvard Business School faculty, the school had inaugurated, several years earlier at the end of the Great Recession, a study of the competitiveness of the US economy. I was invited to meet with the leadership of that project to see if I wanted to get involved, and there was an anomaly in the data: the respondents to a survey we did of all our alumni, the first in the history of our school, felt that the workforce in the United States had been a very important source of competitiveness for the country, but they also assessed that advantage as rapidly declining.
It was actually the starkest data that came out of that survey, and I asked who was studying it. Essentially, the answer was, “Well, no one here feels they know anything about that.” Perhaps foolishly, I said, “Well, why don’t I start looking at that?”, almost treating it as, “This is a consulting project in an area I don’t know, so I’ll learn about it and maybe come up with some insight.” It was tempered also by the fact that I knew that my consulting clients, several of whom continued to work with me after I left Monitor and came to Harvard Business School, were very concerned about this. So I had some confirming data that this wasn’t mass hysteria or overstated, and I began to look into it. Here we are, the better part of 13 years later.
Daniel Serfaty: We talked about the Managing the Future of Work project at Harvard. Is this how the initiative started, or did it start a little later? And how did the mission evolve over time, especially in the last few years because of AI?
Prof. Joseph Fuller: Yes, on both fronts, it did start a little later. Originally, it was a module in the broader project on US competitiveness, but it rapidly became more than half of the ongoing research within that project, and our dean at the time, Dean Nitin Nohria, suggested that it should just become its own project. And very happily, my colleague Professor Bill Kerr, a scholar of global talent flows, immigration, and labor productivity who was already a collaborator of mine in a teaching setting, agreed to become co-head of the project. So we spun it out of the competitiveness project.
Several years later, the president of Harvard at the time, President Larry Bacow, asked me to chair a university-wide faculty task force on workforce issues. At the end of that project, that led me, along with Professor David Deming of the Harvard Kennedy School, now the dean of Harvard College, to found the second project that you mentioned in your introduction, the Project on Workforce, which is more narrowly focused on upward mobility, skills gaps, and income polarization. The Managing the Future of Work project, by contrast, asks, “What are the important questions that decision makers in industry, in organized labor, and in the executive branches of government need to understand, with data and in a way that they find approachable?” So rather than creating a lot of what would be classified as scholarly research for a peer-reviewed journal, we try to present playbooks and analyses that speak directly to the questions that those decision makers have on their minds.
Daniel Serfaty: For me, that's particularly interesting, because as we start focusing on the impact of AI on the workforce and the future of work in general, most people I've talked to, both in my work and in previous podcasts, start with the technology side. They unpack it at different levels, what kind of LLMs, what kind of agents, as opposed to looking at it from the human work side and understanding that it's really transforming the workplace and perhaps even society. I think it's very refreshing that you tackle it from that angle first and then look at AI almost as an independent variable that comes into the workforce. Can you elaborate on that a little bit?
Prof. Joseph Fuller: Well, first of all, I'm delighted that most people are approaching it that way, which means I don't have much head-to-head competition in the way I'm thinking about it, but I think you described it nicely. Economists are very used to looking at technologies and productivity data, but fields like organizational behavior are not really versed in those types of analyses, certainly not the analysis usually done by labor economists. And labor economists start with a phenomenon and then explore it. They don't necessarily start with a problem and seek to interrogate it. And of course, much of the work done by the types of people you're talking about is absolutely brilliant, very, very difficult to do, very well documented. I read all of it, I benefit from the vast majority of it, but it's not actionable.
If I'm the editor of a peer-reviewed journal, it's actionable, but if I'm a secretary of labor and manpower in a state, if I'm a chief human resources officer in a company, if I'm a labor leader, if I'm an entrepreneur in the space or even a large company like the big workforce providers like Randstad, Adecco, Manpower, which all do their own research by the way and some of it quite excellent, I don't know what I'm supposed to do differently. Now, there are some clear lights there. Professor Brynjolfsson at Stanford, formerly at MIT, has done some absolutely seminal work. My co-head of the Project on Workforce professor David Deming at Harvard has done work that is right up on the other side of the line.
If you can stand the Greek letters and the equations and look at the findings, you'll see some important learnings for decision makers, but still not in context, and they often appear in publications that the vast, vast majority of decision makers are absolutely oblivious to. So we try to bridge that gap.
Daniel Serfaty: And I think that's a gap that needs more and more bridging. I've read a few of your published papers and certainly listened to a few of your lectures, and I'm impressed by how you bring the power of an academic institution to very practical advice. In one particular work, as we dig more into the framing of AI and work, you've pointed to the learning curve between junior and senior workers as a key factor in how AI will reshape job structures, that it affects a particular level in the pyramid, or however we want to visualize that hierarchy. Can you explain that argument a little more, and why it is central to the thesis of how jobs are being transformed by AI?
Prof. Joseph Fuller: Well, we have several major workstreams going, and let me start with the first one you referred to, which we called The Expertise Upheaval. I wrote it with Matt Sigelman, the founder of the Burning Glass Institute, who has been my co-author many times and, like the institute, is a tremendously insightful source on these issues, and with Mike Fenlon, formerly the CHRO at PwC, the professional services giant, and now happily an employee of Harvard University. We looked at how AI would affect entry-level jobs and tried to understand where AI would crimp the number of entry-level jobs because AI was significantly more productive at those tasks than an entry-level worker could be.
And those are what we call mastery jobs, where a lot of the tasks in the early years of employment are routine cognitive tasks, where the employee is being asked to apply rules or guidelines provided by the employer to make certain decisions. A really good example of that would be a credit analyst for a bank or for a commercial organization deciding whether or not to extend credit to a buyer of my goods or services. The company will have developed rules by which you make those decisions: how big the account is, where they're located, how much they're buying, at what price, do they have a history with us or not, are we gaining market share through this or is this sustaining an existing account.
Well, generative AI loves rules-based decisions where there's lots of longitudinal data. Unless there's an error, it very, very quickly gets to essentially no likelihood of hallucinating. So a first-year credit analyst will, economically, be dominated by the creation of a bot or an agentic AI to administer that process. But we call these mastery jobs because, over time, that junior credit analyst begins to understand much more complicated transactions, to have the insight to change the rules by which decisions are made, to spot a hallucination in a more complex transaction. That may come three, four, five years into their career, because mastery is gained by traveling down an experience curve, and you can't have a five-year-old unless you had a one-year-old.
So if it takes three to five years to gain that mastery, then if you do not have a supply chain of talent growing into those roles, what's the organization to do? Now, a lot of the discussion about technology and its impact on entry-level jobs stops there, because the question has been a rhetorical one: “Isn't it true that this is going to destroy a lot of entry-level jobs?” But there's a doppelganger of this, which is jobs that will be made more accessible, where more people would qualify to be considered for entry-level jobs because of AI, because the AI is automating a task that has been hard for people to master, one that may have required more difficult and demanding credentials or more experience.
So an AI might be able to very quickly do the basic framing of a website, but not the creative content, understanding the context and the strategy of the website. The AI produces prose and text and photographic and graphic collateral that is conventional, unoriginal, uninteresting, because it's gone and looked at competitive websites and just shot for the mean. So we call these jobs growth positions, and actually, growth positions outnumber mastery positions by about 40%. Mastery positions represent about 12% of jobs in the United States, growth positions about 17%.
Now, I want to be clear about a couple of things. While AI will allow more people to be plausibly considered for those growth positions, hiring is always a relative phenomenon. It's not, “Is this person qualified, yes or no?” It's, “Do I think Daniel is more qualified than Joe?” And so we may still see a bias toward people with those types of backgrounds and credentials. But in a slow-growing labor force, and with the growing importance of social skills to success in work, we do think that more of these growth positions will get broader consideration from a richer, more diverse pool of potential candidates, and employers will benefit from that, as will the workers.
Daniel Serfaty: Thank you. That clarification brings up a plethora of questions. Does it mean that the acquisition of expertise to move from a growth position to a mastery position is going to be affected by the very phenomenon that we are describing? Basically, after three years of being in a growth position augmented by AI, the nature of the expertise needed to get to the mastery position has changed, because now your skill plus AI equals a different kind of skill, a different kind of competence at that level. So in a sense, the experts of tomorrow are going to be different than the experts of today.
Prof. Joseph Fuller: Yes, I think that's very well said, and it prompts a couple of observations. The first is that what is very, very difficult, arguably close to impossible, to sort out at this stage is when the curve starts shallowing, the curve of improvement for the AI, because it's been improving faster than we predicted. And of course, this is really a very big system dynamics problem. Let's say you are my supervisor and my job is a mastery job, but I'm already a bit down the experience curve, so I'm able to keep my position but use AI to become more productive. Now, that may prevent the need to have another one-year-old.
It may allow us to reduce our current size of staff. It may allow you to expand my responsibilities. But at the same time, AI is affecting your job and the type of leverage you need from me, so you have this feedback loop created by AI. The second phenomenon is that AI is unique in the history of technologies inasmuch as it improves itself. We've never seen anything like that. So think of a cost line, a breakeven line, for when a mastery job is suddenly better off being occupied by a human being than by the AI. That line is likely moving up the Y-axis. And similarly, in other areas, innovation in AI will be directed through market signals to hard-to-do tasks, which might create some more growth positions, and we will just be juggling these two balls indefinitely.
Daniel Serfaty: I really respond to this analogy of system dynamics, because that's exactly what it is. You have double learning loops, basically. A lot of folks are critics of AI in a sense. They say, “Well, it's just like every other revolution. We have the automated pilot. We have this.” I believe the big difference is the one that you mentioned: this is a technology that learns, and therefore evolves and adapts, unlike an automated landing system in a cockpit. So in this particular case, you get a co-evolution or co-learning of the human and the AI together, especially as future AI systems will have recall of previous interactions with that particular worker, and therefore a better mental model of the worker. That's why I like to talk about human-AI teaming.