TL;DR: AI is changing what expertise looks like. Producing a draft with AI is getting cheap; knowing whether it’s right is becoming the premium skill. We used to automate by formalizing the path; now we often start with the destination and spend our effort judging the route AI proposes. What counts as expertise shifts accordingly: less typing, more taste, more judgment, more accountability. The bottleneck has moved from making things to validating them.
For a long time, automating something usually meant understanding it first. Really understanding it.
You had to map the process end to end, define the rules, encode the logic, think through the edge cases, and turn the whole thing into a solid structure, an algorithm for a machine to follow. Only then could you automate it. That is still the basic logic of traditional rule-based automation.
AI changes the order. Now you start with a goal, add context, set constraints, maybe add a few examples, and the system gives you a solution. Then you work on that output: push it, correct it, refine it, see where it breaks, and iterate until it behaves the way you need. The work becomes intent-first.
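The contrast is easy to see in code. Below is a minimal Python sketch: the first function is classic rule-based automation, where you spell out every step by hand; the second states the goal and constraints and delegates the path to a model. The ticket-routing task, the `call_llm` callable, and the label set are all hypothetical, there only to illustrate the shape of each approach.

```python
# Rule-based automation: you encode the path yourself, edge cases and all.
def route_ticket(ticket: dict) -> str:
    text = ticket["body"].lower()
    if "refund" in text or "charge" in text:
        return "billing"
    if "crash" in text or "error" in text:
        return "support"
    return "triage"  # every case you didn't anticipate falls through here


# Intent-first: you state the goal and constraints; the model proposes the path.
PROMPT = """You route support tickets.
Goal: assign the ticket to exactly one of: billing, support, triage.
Constraints: reply with the single label only, in lowercase.
Ticket: {body}"""

def route_ticket_ai(ticket: dict, call_llm) -> str:
    label = call_llm(PROMPT.format(body=ticket["body"])).strip().lower()
    # The new work lives here: checking the route the model proposed.
    if label not in {"billing", "support", "triage"}:
        raise ValueError(f"Model drifted outside the allowed labels: {label!r}")
    return label
```

Notice where the effort sits in each version: in the first, it is all upstream, in writing the rules; in the second, it moves downstream, into the validation after the model answers.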
Agents make this even more obvious. The instruction is no longer “follow this fixed procedure.” It becomes “here is the objective; figure out the path.” That is also how recent writing on agentic automation describes the shift – away from scripted systems and toward systems that work from goals, adapt their steps, and need more human oversight around the edges. Anthropic’s 2026 agentic coding report describes engineers increasingly focusing on architecture, strategy, supervision, and validation while orchestrating long-running agents that handle implementation details.
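Stripped to its skeleton, that goal-first loop looks something like the sketch below. It is not any particular framework’s API; `propose_step`, `apply`, and `check_done` are hypothetical placeholders for a model call, a tool execution, and a human-defined stopping criterion.

```python
# A minimal sketch of a goal-first agent loop (hypothetical, framework-agnostic).
def run_agent(goal: str, propose_step, apply, check_done, max_steps: int = 10):
    state = {"goal": goal, "history": []}
    for _ in range(max_steps):          # a fixed budget: oversight, not blind trust
        step = propose_step(state)      # the model decides the next action
        result = apply(step)            # the environment executes it
        state["history"].append((step, result))
        if check_done(state):           # the human defines what "done" means
            return state
    raise RuntimeError("Step budget exhausted before the goal was met")
```

The procedure is no longer written down anywhere; what the human owns is the goal, the budget, and the definition of done.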
So the role starts to change too. It moves away from being the “author of procedures” who manually writes every step, toward being the “author of intent + reviewer of behaviour.”
GitHub’s interviews with advanced AI users describe that shift pretty directly: less “code producer,” more “creative director of code,” with orchestration and verification becoming central parts of the job.
AI didn’t remove the need to understand the work. It changed what understanding is for. Before, you needed to understand the process deeply enough to encode it yourself. Now, you need to understand it well enough to tell whether the AI actually did what you meant. To know what good looks like.
Microsoft Research has a good phrase for this: the intent gap – the distance between informal human intent and the actual behaviour that gets generated. Their point is that AI-generated code can be plausible without being correct. It can look right, sound right, even run – and still miss what you were actually asking for. That gap between “what I meant” and “what it did” becomes one of the central problems.
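A toy example makes the gap concrete. Suppose you ask for code that removes duplicate entries while keeping the first occurrence of each. A model might plausibly hand back the first function below: it runs, it deduplicates, and it quietly ignores the ordering you asked for. (Both functions are illustrative, not taken from the cited research.)

```python
# What was asked: "remove duplicates, keeping the first occurrence of each item."

# A plausible answer: runs fine, looks right, misses the intent.
def dedupe_plausible(items):
    return list(set(items))  # duplicates gone, but order is not preserved

# What was actually meant: order-preserving deduplication.
def dedupe_correct(items):
    seen = set()
    out = []
    for x in items:
        if x not in seen:
            seen.add(x)
            out.append(x)
    return out

print(dedupe_plausible(["b", "a", "b", "c"]))  # order may come back scrambled
print(dedupe_correct(["b", "a", "b", "c"]))    # ['b', 'a', 'c']
```

Nothing in the first version fails loudly. You only catch it if you already know what “good” looks like for this task – which is exactly the point.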
And that is why AI so often feels amazing at first, and frustrating later. Getting a draft, a workflow, or a working piece of code is suddenly much cheaper. But deciding whether it is correct, complete, reliable, or just confidently wrong can become more expensive. Research on generative AI in work settings points to the same pattern: people don’t just hand work off to AI and walk away; they take on new work around monitoring, refining, and managing outputs. Separate 2026 HBR research argues AI can even intensify work rather than reduce it, because it increases pace, scope, and the amount of judgment still needed from the human side.
The deeper transformation is not just that AI is changing how we automate. It is changing what counts as valuable expertise. There is less value in spelling out every step manually, and more value in setting the right constraints, asking better questions, spotting where the system drifted, and knowing what “good” or “complete” actually looks like.
So yes, AI is making execution cheaper. But judgment is now the scarcer skill. And the winners in AI-heavy work won’t be the fastest prompters. They’ll be the best judges.
If AI makes first drafts cheap, the next challenge is not teaching people to produce more. It is teaching them to judge better. But judgment does not appear automatically when someone is given an AI tool. It has to be built through exposure, feedback, consequences, and mentorship. More on that soon.