TL;DR: AI makes first drafts cheap, but we still need to build the judgment to tell whether they are good. As AI takes over more of the work that once trained beginners, mentorship may become one of the main ways expertise is passed on. In the past, people learned by doing. In an AI-heavy workplace, they may need to learn more by being exposed to the thought processes of experts.
We talk a lot about AI making work “more productive”. Drafts appear in seconds, code writes itself (or at least seems to); summaries, plans, emails, reports, analyses – the first version of almost anything arrives faster than ever.
But there is a very real problem underneath all this productivity. A lot of the work AI does for us was never just “work”; it was also how people learned. The rough first draft, the boring research task, the spreadsheet checked line by line, the junior report that came back filled with comments – all of that was part of how people learned to recognise quality.
That kind of work was not glamorous. Often, it was dull, repetitive, and frustrating. Nobody romanticised it while doing it. But it served a purpose. It gave people practice reps. It exposed them to the realities of the field, taught them what matters, what can go wrong, what “good enough” looks like, and how different that is from genuinely good.
In that sense, some junior work followed the same logic as older apprenticeship traditions: sweep the floors, repeat the basics, stay with the task long enough for your perception to change. Not because sweeping floors is noble, and not because entry-level workers should suffer through boring work just because that is how things were done “in the old days.” A lot of that model was inefficient and some of it was probably just bad management dressed up as character-building.
But it is still worth asking what function that slow beginning served. It was a filter and a training mechanism at the same time. Before AI, you usually could not get to a polished output without wrestling with the material first. You had to sit with the task for a while and make clumsy attempts. You had to notice where your own understanding fell short. That process selected for those who could tolerate boredom, repetition, and delayed reward. It was less about what you did than about what your mind became while doing it.
Repetition trained attention.
Feedback trained perception.
Small mistakes trained proportion.
Over time, you began to see finer differences: between clear and vague, useful and correct, plausible and true.
AI changes that path. Now we can often skip the slow first attempts. We can produce, post, analyse, design, and perform almost immediately. The boring phase can be compressed or avoided altogether. You can arrive at something that looks good before you have developed the inner standard needed to judge its quality.
So the question is: if AI removes the task, what replaces the learning?
Because judgment does not emerge automatically just because someone has access to a powerful tool. If anything, the tool makes judgment more important. AI can produce something fluent, polished, and confident. It can give you an answer that looks finished and it may even work. But you’ve likely experienced that “works” and “is right” are not the same thing.
This is why the conversation about AI and expertise cannot only be about productivity. It also has to be about how mastery is developed. The point is to preserve the learning that used to be hidden inside slower work.
And this is where mentorship starts to look like an essential part of the infrastructure. Not because mentors have all the answers, which they don’t. And not because junior workers should simply copy senior workers’ preferences. That would only replace one kind of dependency with another.
But because mentorship can make judgment visible.
In an AI-heavy workplace, that may become one of the most valuable things a senior person can contribute: not just answers, but access to their thought process.
The most effective mentoring models are the ones where an expert deliberately explains what they noticed and, just as importantly, why they noticed it. They show the difference between something fluent and something useful; between something that satisfies the brief and something that understands the real problem; between something that sounds confident and something that can survive in the real world.
Much of expert judgment is tacit. A senior person may look at a report, a design, a piece of code, or a strategy and say, “Something is off here.” From the outside, that often looks like intuition. But usually it is compressed experience: years of seeing similar things succeed, fail, confuse, mislead, or break under pressure. The problem is that compressed experience does not automatically transfer. And what is needed is a kind of feedback that trains perception.
That also asks something new of experts. In a world where AI can produce more first drafts, senior people may need to become better at explaining their own judgment. Not performing expertise from a distance, but making the reasoning behind it available. That is not always easy. Experts often skip steps because they have internalised the logic. They may know that something is wrong before they can explain exactly why.
But if judgment is becoming more valuable, then the transmission of judgment becomes more valuable too. And that means mentorship is not only a support role. It becomes part of how an organisation keeps its expertise alive. If that pattern recognition stays locked inside the expert’s head, the next generation may inherit the tools but not the taste.
And that is the part AI does not solve by itself (yet).
Perhaps this is the next version of mentorship: not simply handing down tasks, and not simply correcting mistakes, but exposing the reasoning behind quality. Showing how an experienced person notices, doubts, compares, and decides.
Because if AI removes some of the work that used to build judgment, then we may need to become much more deliberate about passing judgment on to the next generation. Not as a set of rules, but as a way of seeing.