The sunk cost of being good at something

There is a particular conversational move that has become common in discussions about AI. Someone demonstrates a new capability, shares a use case, or describes how their workflow has changed, and the response arrives like clockwork. What about security? What about governance? What about the hallucination problem? What about my twenty years of experience? Each objection arrives wearing the costume of legitimate concern, and each one contains enough truth to feel reasonable in the moment. But taken together, they form something that looks less like careful analysis and more like a defence mechanism operating at industrial scale.

The pattern is whataboutism in its textbook form. The term originates from Cold War-era Soviet diplomacy, where officials would deflect criticism of human rights abuses by pointing to racial violence in America. The rhetorical structure was never designed to resolve the original issue. It existed to neutralise it. To shift the frame from “is this true” to “but what about that other thing,” and in doing so, to ensure that neither question ever gets properly answered. The AI version of this runs on similar fuel, though the people doing it are rarely aware they’re doing it at all.

The objections are correct and that is beside the point

The uncomfortable thing about AI whataboutism is that the concerns are mostly valid. AI security is genuinely underdeveloped, particularly around Model Context Protocol implementations, where the attack surface is wide and poorly understood. Governance frameworks in most organisations range from nonexistent to laughably outdated. Hallucinations remain a structural feature of large language models, a byproduct of how they generate text rather than a bug that some future update will fix. And twenty years of domain expertise does contain knowledge that no model can replicate, particularly the kind of tacit understanding that comes from watching things break in production over and over again until you develop an instinct for where the next failure will come from.

All true, but none of it is the point.

The point is that these objections are being deployed not as calls to action but as reasons for inaction. There is a significant difference between “AI has security vulnerabilities, so we need to build better guardrails while we adopt it” and “AI has security vulnerabilities, so I’ll wait.” The first is engineering. The second is avoidance dressed up as prudence.

Leon Festinger’s theory of cognitive dissonance, first published in 1957, describes exactly what’s happening here. When a person holds a belief about themselves (I am an expert, my skills are valuable, my experience matters) and encounters information that threatens that belief (this technology can do parts of my job faster and cheaper than I can), the resulting psychological discomfort has to go somewhere. Festinger identified three common escape routes for that discomfort. You can avoid the contradictory information entirely, you can delegitimise its source, or you can minimise its importance by focusing on its flaws. AI whataboutism is all three at once, packaged as due diligence.

The sunk cost

Samuelson and Zeckhauser’s work on status quo bias adds another layer here that is worth sitting with. Their 1988 paper demonstrated that people disproportionately prefer the current state of affairs, even when alternatives are measurably better, and that this preference strengthens as the number of available options increases. The mechanism underneath isn’t stupidity or laziness. It is loss aversion applied to identity.

When you have spent fifteen or twenty years building expertise in a specific domain, that expertise becomes part of how you understand yourself. It is the thing that justifies your salary, your title, your seat at the table. The suggestion that a tool might compress the value of that expertise, or redistribute it, or make parts of it accessible to people who didn’t put in the same years, triggers something that feels like an attack even when it isn’t one. The natural response is to find reasons why the tool can’t possibly do what it appears to be doing. And conveniently, AI provides an inexhaustible supply of such reasons, because it is, in fact, imperfect.

The trap is that imperfect doesn’t mean useless. Imperfect is the condition of every tool that has ever existed. The first commercial aircraft couldn’t fly in bad weather. The early internet went down constantly. The first mobile phones weighed close to a kilogram and dropped calls in buildings. Nobody looked at any of those technologies and concluded that the smart move was to wait until they were perfect before learning how they worked.

Yet that is precisely the position many experienced professionals are taking with AI, and the whataboutism provides them with just enough intellectual cover to feel like they’re being rigorous rather than scared.

The velocity problem

What makes this particular round of technological change different from previous ones, and what makes the coping mechanisms around it more dangerous than usual, is the speed.

Previous disruptions gave people time to adjust. The internet took roughly a decade to move from novelty to necessity for most businesses. Cloud computing crept in over years, first as a weird thing Amazon was doing with spare server capacity, then gradually as the default. Even mobile took the better part of five years to go from “we should probably have an app” to “our mobile experience is our primary channel.”

AI is not operating on that timeline. The gap between GPT-3.5 and GPT-4 was measured in months. The capabilities that seemed like science fiction in 2023 are baseline features in 2026. Agentic systems that were theoretical eighteen months ago are shipping in production today. The window in which “wait and see” was a defensible strategy has already closed for most knowledge work, and many of the people deploying whataboutism as a delaying tactic are burning through competitive advantage while they debate whether the fire is hot enough to worry about.

This is where the coping mechanism becomes actively harmful rather than merely unproductive. If the pace of change were slower, there would be time for the concerns to be addressed sequentially. Fix the security model, then adopt. Build the governance framework, then deploy. But the pace doesn’t allow for sequential anything. The security model has to be built while adopting. The governance framework has to be designed while deploying. The two activities are not opposed to each other and treating them as an either-or is itself a form of denial.

What experience is actually worth now

The most pernicious form of AI whataboutism is the appeal to experience, because it contains the highest concentration of legitimate truth mixed with self-serving reasoning.

Experience matters enormously. The question is which parts of it matter, and for what. The parts that involve pattern recognition accumulated over decades of watching projects succeed and fail, the ability to smell trouble before it shows up in a status report, the judgement to know when a technically correct answer is practically wrong, those parts matter more than ever in a world where AI can generate plausible output at speed. What AI cannot do is evaluate whether the output is appropriate for the specific context, the specific client, the specific political dynamics of a given organisation. That evaluation requires exactly the kind of accumulated wisdom that experienced people possess.

But the parts of experience that involve doing the work that AI can now do faster, the manual production, the research grunt work, the first-draft generation, the template building, those parts are depreciating rapidly. And for many experienced professionals, the manual production was the majority of how they spent their time, which means the shift feels existential even though the most valuable parts of what they know have arguably increased in value. AI is also moving up the value chain, much as Chinese manufacturing moved from cheap toys to highly complex electronics. This creates a creeping dread that even the most valued, intangible skills will eventually come under threat too.

The whataboutism around experience is often an attempt to avoid this sorting exercise entirely. Rather than doing the difficult work of figuring out which parts of twenty years of expertise are now more valuable and which parts need to be released, it is easier to treat the entire bundle as sacred and dismiss the technology that requires the unbundling.

The way out is through the discomfort

Cognitive dissonance resolves in one of two directions. You can change your beliefs to match the new information, which is uncomfortable but productive. Or you can distort the information to match your existing beliefs, which is comfortable and eventually catastrophic. Whataboutism is the distortion path, and the longer you walk down it, the harder it becomes to turn around, because every objection you’ve raised becomes part of the identity you’re now defending.

The alternative isn’t to abandon caution. It is to be honest about the difference between caution that leads to better decisions and caution that functions as a socially acceptable way to avoid making decisions at all. Build the governance framework, but build it while experimenting, not instead of experimenting. Raise the security concerns, but raise them in the context of “how do we solve this” rather than “this proves we should wait.” Lean on your experience, but do the honest accounting of which parts of that experience the world still needs and which parts you’re holding onto because letting go feels like losing a piece of yourself.

The concerns are all valid. The coping mechanisms aren’t.

I am a partner in Better than Good. We help companies make sense of technology and build lasting improvements to their operations. Talk to us today: https://betterthangood.xyz/#contact