Your Org Chart Is Not Your AI Strategy
If you’ve spent any time in enterprise technology over the past two decades, you’ll recognise the pattern immediately. A new category of tool emerges. Employees start using it because it makes their working lives easier. IT discovers this unsanctioned adoption, panics about security and compliance, and responds by trying to lock everything down. A period of organisational friction follows, during which the people who were already getting value from the tool become increasingly frustrated, while IT attempts to build a sanctioned alternative.
This is almost exactly what is happening with AI right now, except the speed of adoption has compressed what used to be a multi-year cycle into months. Harmonic Security’s analysis of 22.4 million enterprise AI prompts during 2025 found that while only 40% of companies had purchased official AI subscriptions, employees at over 90% of organisations were actively using AI tools anyway, mostly through personal accounts that IT never approved. BlackFog’s research from late 2025 found that 49% of employees surveyed admitted to using AI tools not sanctioned by their employer at work. Perhaps most tellingly, 63% of respondents believed it was acceptable to use AI tools without IT oversight if no company-approved option was provided. And even when there is a sanctioned version (typically an enterprise licence for Copilot and/or ChatGPT), implementation seldom goes far beyond simply making licences available to users.
The instinct from many IT departments has been to treat this as a security problem. And in all fairness, it is partly a security problem. IBM’s 2025 Cost of a Data Breach report found that 20% of organisations suffered a breach due to shadow AI, adding roughly $200,000 to average breach costs. That is not nothing. But treating shadow AI purely as a security problem misses the more interesting and more consequential question underneath it, which is about organisational design, capability gaps, and who should actually be responsible for an organisation’s AI strategy.
The ownership reflex
There is a well-documented tendency in organisations for existing power centres to claim ownership of emerging technologies. IT departments in particular have a long history of this behaviour, and it makes a certain amount of institutional sense. New technology involves infrastructure, security considerations, vendor relationships, and integration with existing systems. These are things IT teams understand and have built processes around.
The problem is that AI, particularly generative AI and the emerging wave of agentic AI, does not fit neatly into the traditional IT operating model. It is not a new enterprise application to be procured, deployed, and maintained. It is not an infrastructure upgrade. It is not even, primarily, a technology problem at all. AI adoption is fundamentally a business transformation problem that happens to involve technology.
When IT departments attempt to own AI strategy, the most predictable consequence is that they frame it through the lens they understand best, which means the conversation becomes dominated by questions about security policies, approved vendor lists, data governance frameworks, and integration architecture. These are all legitimate concerns, but they represent perhaps 30% of what makes AI adoption successful.
The capability gap
Effective AI implementation in an organisation needs people who can do several things that don’t appear anywhere on a traditional IT org chart. You need someone who understands the business process being transformed well enough to know where AI adds value and where it introduces risk. You need people who can design prompts and workflows that produce useful outputs, which turns out to be a surprisingly nuanced skill that combines writing ability, logical thinking, and deep familiarity with whatever domain you’re working in.
You need people who can evaluate AI outputs for accuracy and bias, which requires subject matter expertise that sits in the business, not in IT. And you need people who can manage the change process, because asking someone to fundamentally alter how they do their job is never a simple matter of handing them a new login.
This capability gap helps explain why shadow AI is happening in the first place. The people closest to the work are the ones who best understand where AI can help them. A marketing analyst who discovers that Claude can help them write campaign briefs in half the time is not going to stop using it because IT hasn’t approved the tool yet. A financial analyst who finds that an LLM can help them spot patterns in quarterly data is going to keep using it regardless of what the acceptable use policy says. These people are not being reckless. They are being rational, responding to the incentive structure in front of them, which rewards productivity and results over process compliance.
The Gartner prediction that shadow IT will reach 75% of employees by 2027 (up from 41% in 2022) tells you everything about the trajectory. And shadow AI, being even more accessible than traditional shadow IT since all you need is a browser tab and a free account, is accelerating this pattern dramatically.
So if IT cannot own AI strategy alone, and if the business is already adopting AI without waiting for permission, what does the right organisational response look like?
Conway’s law and the automation trap
Before getting to solutions, it is worth understanding the most important conceptual framework for why AI adoption goes wrong in traditional organisations. In 1967, a mathematician named Melvin Conway observed that organisations are constrained to produce designs that mirror their own communication structures. The observation, which became known as Conway’s Law, was originally about software architecture, but it applies with uncomfortable precision to how organisations approach AI.
Conway’s Law predicts that if you let AI adoption emerge organically within existing organisational structures, what you will build is a set of AI solutions that reproduce your existing departmental silos, legacy objectives, internal politics, and traditional power dynamics. You will, in effect, automate the existing org chart.
This is the single most common failure mode I see in enterprise AI adoption, and it is devastatingly easy to fall into. Marketing builds its own AI tools for content generation. Finance builds its own AI tools for forecasting. Customer service builds its own AI chatbot. HR builds its own AI-powered recruiting screener. Each of these projects may individually deliver some efficiency gains, but collectively they create a fragmented ecosystem of AI capabilities that cannot talk to each other, that duplicate effort, that embed existing biases and inefficiencies into automated systems, and that make future integration progressively harder.
As Toby Elwin put it, an enterprise cannot adopt AI faster than it can align decision rights, language, and accountability. If your departments cannot communicate effectively with each other today, your AI implementations will faithfully reproduce that dysfunction. The AI will hedge like committees hedge. It will fragment like silos fragment. It will optimise for departmental metrics rather than organisational outcomes.
FourWeekMBA’s analysis of Conway’s Law made the point vividly by examining Microsoft’s troubled Copilot deployment. If that product feels like three different tools fighting each other, it’s because it was built by three different divisions that were forced to integrate after the fact. This is not bad engineering. It is Conway’s Law doing exactly what Conway’s Law always does.
The temptation to automate the existing org chart is especially strong because it is the path of least resistance. It does not require anyone to give up territory. It does not require difficult conversations about who owns what. It does not require rethinking how work gets done. It simply applies AI to existing processes in existing departmental silos, which delivers enough small wins to create the illusion of progress while actually cementing the structural problems that will prevent the organisation from capturing AI’s larger transformative potential.
The incremental-vs-wholesale question
One of the most contentious questions in AI organisational strategy is whether you can get there incrementally or whether the scale of change required demands a more fundamental restructuring.
The honest answer is that it depends on your starting position and your ambition level. If you are a mid-sized professional services firm that wants to use AI to make your existing teams 20-30% more productive, an incremental approach that adds AI tools to existing workflows, builds capability gradually, and evolves governance frameworks over time is probably sufficient and definitely lower risk.
But if you are a larger organisation in a competitive market where AI is already changing the basis of competition, incrementalism may be dangerously slow. The organisations that are winning with AI right now are not the ones that added ChatGPT or Copilot to their existing processes. They are the ones that redesigned their processes around AI capabilities, which is a fundamentally different thing.
There is a useful distinction from the organisational design literature between “first-order change” (improving existing processes within the current structure) and “second-order change” (fundamentally altering the structure and assumptions themselves). Most organisations default to first-order change because it is more comfortable and less politically fraught. But AI may be one of those rare technological shifts where second-order change is necessary for organisations that want to do more than survive.
Consider a practical example. A mid-sized insurer wants to improve its claims process using AI. Today, a claim passes through four separate teams in sequence. First contact sits with the customer service team, who log it. Assessment and settlement sit with the claims handlers, who evaluate damage, validate the claim against the policy, and calculate what to pay. Investigation sits with a fraud and compliance team, who flag suspicious patterns. And payment authorisation sits with finance, who release the funds. Each handoff introduces delay, each team has its own systems and metrics, and the customer experiences the whole thing as an opaque, slow, and frequently frustrating process. This is Conway’s Law made visible to the policyholder.
The incremental approach would give each of those four teams their own AI tools. Customer service gets a chatbot for first notification of loss. The claims handlers get an AI that pre-populates damage estimates from photos and suggests settlement amounts. The fraud team gets a pattern-matching model. Finance gets automated payment routing. Each team becomes somewhat faster in isolation, but the fundamental structure remains untouched. Four teams, four handoffs, four sets of metrics, and the customer still waits while their claim passes from queue to queue.
The transformative approach would ask why the claim needs to pass through four teams at all. An AI system that can simultaneously assess damage from submitted photos, cross-reference the policy terms, run fraud indicators against historical patterns, calculate the settlement, and trigger payment could collapse most of that chain into a single interaction for straightforward claims. The customer submits their claim, the AI processes it end-to-end, and a human reviewer approves the output. What was a four-team, ten-day process becomes a one-team, same-day process for the 70% of claims that are routine. The complex and contested claims still need human expertise, but even those benefit from the AI having done the preliminary work across all four traditional functions simultaneously.
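To make the contrast concrete, here is a deliberately simplified sketch in Python of what collapsing four handoffs into one pass might look like. Everything in it (the Claim structure, the thresholds, the scores) is a hypothetical placeholder rather than a real assessment or fraud model; the point is the shape of the process, where the policy check, fraud screen, and settlement calculation happen in a single step and only the exceptions leave the automated path.

```python
# Minimal sketch of a straight-through claims pipeline, assuming a hypothetical
# insurer where routine claims are processed end-to-end and only exceptions go
# to specialist teams. All fields, thresholds, and scores are illustrative
# placeholders, not a real underwriting, damage, or fraud model.
from dataclasses import dataclass
from enum import Enum


class Route(Enum):
    AUTO_SETTLE = "same-day settlement, single human approval"
    SPECIALIST_REVIEW = "routed to claims and fraud specialists"


@dataclass
class Claim:
    claim_id: str
    policy_limit: float      # maximum payable under the policy
    estimated_damage: float  # e.g. produced by an image-assessment model
    fraud_score: float       # 0.0 (clean) to 1.0 (highly suspicious)


def assess_claim(claim: Claim,
                 fraud_threshold: float = 0.3,
                 damage_ratio_threshold: float = 0.8) -> tuple[Route, float]:
    """Run every check in one pass instead of four sequential handoffs."""
    # Policy check: never propose more than the policy limit.
    settlement = min(claim.estimated_damage, claim.policy_limit)

    # Fraud and complexity checks decide whether a human reviewer simply
    # approves the output or the claim leaves the automated path entirely.
    is_suspicious = claim.fraud_score >= fraud_threshold
    is_large = claim.estimated_damage >= damage_ratio_threshold * claim.policy_limit

    if is_suspicious or is_large:
        return Route.SPECIALIST_REVIEW, settlement
    return Route.AUTO_SETTLE, settlement


if __name__ == "__main__":
    routine = Claim("CLM-001", policy_limit=10_000, estimated_damage=1_200, fraud_score=0.05)
    contested = Claim("CLM-002", policy_limit=10_000, estimated_damage=9_500, fraud_score=0.60)
    for claim in (routine, contested):
        route, amount = assess_claim(claim)
        print(f"{claim.claim_id}: {route.value}, proposed settlement £{amount:,.0f}")
```

Note what the sketch leaves out: the human reviewer, the escalation path, and the model quality itself. Those are exactly the pieces that require capability and organisational change rather than code, which is the argument of the rest of this piece.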
That second approach is incompatible with the existing org chart. It eliminates handoffs that currently define departmental boundaries. It changes what claims handlers, fraud analysts, and finance teams actually do with their time. It requires new performance metrics, because “claims processed per handler” stops making sense when the AI is doing the initial processing. And it raises uncomfortable questions about headcount in teams whose primary function was moving information from one stage to the next.
Aligning the value chain
So how do you actually make this work? The standard answer from most consultancies and conference speakers is “create a cross-functional AI team,” and while that answer is directionally correct, it is also woefully insufficient. Creating a cross-functional team is a structural intervention, and structural interventions fail when they are not supported by corresponding changes to strategy, capabilities, processes, and incentives. You cannot simply staple people from different departments together, give them an AI mandate, and expect results.
Jonathan Trevor’s strategic alignment research at Oxford’s Saïd Business School provides the most useful framework I’ve found for thinking about this practically. Trevor’s central argument, developed across his books Align and Re:Align and a series of articles in Harvard Business Review, is that organisations are enterprise value chains, and they are only ever as strong as their weakest link. The chain runs from purpose (what we do and why) through business strategy (what we are trying to win at) to organisational capability (what we need to be good at), organisational architecture (the resources and structures that make us good enough), and management systems (the processes that deliver the performance we need).
The power of Trevor’s framework is that it forces you to work through AI adoption as a linked sequence of decisions rather than treating it as an isolated structural question. And it exposes exactly where most organisations’ AI efforts break down.
Start with purpose. Most organisations’ stated purpose does not change because of AI, but AI may fundamentally change what fulfilling that purpose looks like in practice. Our insurer’s purpose is presumably something about protecting policyholders and paying claims fairly and promptly. AI does not alter that purpose, but it radically changes what “promptly” can mean and what “fairly” requires in terms of oversight.
Then business strategy. If AI enables same-day claims settlement for routine cases, that becomes a competitive differentiator. The strategy question is whether the insurer wants to compete on speed and customer experience (which demands the transformative approach) or on cost efficiency within the existing model (which might justify the incremental approach). This is a leadership decision that needs to be made explicitly, because the organisational implications of each choice are completely different.
Then organisational capability. This is where most AI initiatives fall apart, because the capabilities required to execute an AI-driven claims process are different from the capabilities the insurer currently has. You need people who understand insurance claims and policy terms AND who can evaluate AI outputs for accuracy. You need people who can design human-AI workflows where the AI handles routine cases and humans handle exceptions, which is a design skill that barely existed even five years ago.
You need people who can monitor AI systems for drift and bias over time, which is a form of quality assurance that traditional insurance operations have never had to think about. Trevor’s framework makes you ask whether these capabilities exist in the organisation today, whether they can be developed internally, and what the timeline for building them looks like. If the honest answer is that the organisation does not have these capabilities and cannot build them quickly enough, then the strategy needs to account for that through hiring, partnerships, or a phased approach that builds capability as it goes.
Then organisational architecture. This is where the cross-functional team question finally becomes relevant, but now it sits within a much richer context. The architecture question is about what structures, roles, and resources are needed to support the capabilities you have identified. For our insurer, this might mean creating a new “claims intelligence” function that sits alongside the existing claims teams, staffed by people who combine insurance domain knowledge with AI workflow design skills.
It might mean redefining the role of claims handlers from “people who assess claims” to “people who review and improve AI-assisted claim assessments,” which is a different job with different skill requirements and different performance expectations. It almost certainly means changing reporting lines so that the people responsible for AI-driven claims have authority over the end-to-end process rather than being subordinate to any single one of the four existing departmental heads.
The architectural decisions also need to address the political dimension directly. In the insurer example, the head of claims, the head of fraud, and the head of finance all currently control their own domains with their own budgets and their own staff. A transformative AI implementation threatens all three of those power bases simultaneously.
Trevor’s work acknowledges this tension by framing alignment as a leadership responsibility rather than an organisational design exercise. The decision about how to restructure around AI cannot be delegated to the teams whose authority it threatens. It has to come from senior leadership who have the authority and the willingness to make uncomfortable choices about where power and resources should sit.
Then management systems. This is the link that gets forgotten most often and that causes the most damage when it is neglected. Management systems include how people are measured, how they are rewarded, how information flows, and how decisions are made. You can create the perfect cross-functional AI team with the right people and the right mandate, and it will still fail if the management systems around it are pulling in the wrong direction.
Return to the insurer. Suppose you have created your claims intelligence function and staffed it with capable people. If the claims handling team is still measured on “claims assessed per handler per day,” they have no incentive to cooperate with the AI initiative, because the AI threatens to make their metric irrelevant. If the fraud team’s bonus structure is tied to “fraud cases identified,” they will resist an AI system that flags fraud automatically, because it removes the activity their compensation is based on. If the IT department’s budget is allocated based on the number of systems it manages, it will resist an architecture where AI tools are managed by the business, because every tool that sits outside IT reduces IT’s budget justification.
These are not hypothetical objections. They are the exact mechanisms through which well-intentioned AI initiatives get quietly suffocated by the organisations that launched them. Trevor’s value chain framework makes these dynamics visible before they become fatal, because it forces you to ask whether your management systems are aligned with your stated AI strategy or whether they are actively working against it.
The practical implication is that an organisation pursuing transformative AI adoption needs to change its measurement and reward systems at the same time as it changes its structures and capabilities. For the insurer, this might mean replacing team-level productivity metrics with end-to-end outcome metrics like “time from claim submission to resolution” and “customer satisfaction at point of settlement.”
It might mean creating shared incentives that reward the claims intelligence function and the traditional claims teams for collaborative outcomes rather than individual departmental throughput. And it definitely means ensuring that the people whose roles are changing through AI adoption have a visible and credible path to new roles that are at least as valued as their old ones.
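As a trivial illustration of the difference, an end-to-end outcome metric can be computed directly from claim-level timestamps rather than from any one team’s activity. The event log below is entirely made up; the only point is that the measurement spans the whole chain from submission to resolution, which no per-team throughput number can capture.

```python
# Sketch of an end-to-end outcome metric, assuming a hypothetical event log
# with one submission and one resolution timestamp per claim. Contrast with a
# per-team throughput metric, which says nothing about how long the customer
# actually waited.
from datetime import datetime
from statistics import median

claims = [
    {"id": "CLM-001", "submitted": datetime(2025, 3, 1, 9, 0), "resolved": datetime(2025, 3, 1, 16, 30)},
    {"id": "CLM-002", "submitted": datetime(2025, 3, 2, 10, 0), "resolved": datetime(2025, 3, 9, 11, 0)},
    {"id": "CLM-003", "submitted": datetime(2025, 3, 3, 14, 0), "resolved": datetime(2025, 3, 4, 9, 0)},
]

# End-to-end outcome: time from submission to resolution, in days.
durations_days = [(c["resolved"] - c["submitted"]).total_seconds() / 86_400 for c in claims]
print(f"Median time to resolution: {median(durations_days):.1f} days")
```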
What separates success from failure
The patterns on both sides are remarkably consistent. The organisations getting this right have governance frameworks that distinguish between high-risk and low-risk AI use cases rather than applying blanket controls to everything, and they have accepted that some amount of unsanctioned experimentation is healthy and necessary.
SentinelOne offers a good example of this in practice. Rather than threatening consequences for unapproved AI use, they created a coalition of eager participants across the organisation who can test new tools and introduce them for piloting, with multiple fast pathways for getting a tool evaluated and adopted. The data supports this approach. Harmonic Security’s research found 665 different AI tools across enterprise environments, and concluded that blanket blocking was futile and counterproductive.
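The risk-tiered governance described above is less a technology artefact than a policy shape, but it helps to see how little machinery it requires. The sketch below uses hypothetical tier names, criteria, and controls; a real framework would define these with legal, security, and business input, and would be considerably more granular.

```python
# Illustrative sketch of risk-tiered AI governance. Tier names, criteria, and
# controls are assumptions for the example, not a recommended policy.
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # e.g. drafting internal text from non-sensitive inputs
    MEDIUM = "medium"  # e.g. summarising internal documents
    HIGH = "high"      # e.g. personal data, regulated or customer-facing decisions


# Controls scale with risk instead of applying one blanket policy to everything.
CONTROLS = {
    RiskTier.LOW: ["self-service approval", "basic usage logging"],
    RiskTier.MEDIUM: ["approved tools only", "data-handling training", "usage logging"],
    RiskTier.HIGH: ["formal review", "human sign-off on outputs", "full audit trail"],
}


def classify_use_case(handles_personal_data: bool,
                      affects_customers_directly: bool,
                      output_is_internal_draft: bool) -> RiskTier:
    """Toy classifier: a real framework would use a fuller questionnaire."""
    if handles_personal_data or affects_customers_directly:
        return RiskTier.HIGH
    if not output_is_internal_draft:
        return RiskTier.MEDIUM
    return RiskTier.LOW


if __name__ == "__main__":
    tier = classify_use_case(handles_personal_data=False,
                             affects_customers_directly=False,
                             output_is_internal_draft=True)
    print(f"Tier: {tier.value} -> controls: {CONTROLS[tier]}")
```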
The failure modes are the mirror image. Organisations go wrong when they hand AI ownership entirely to the CTO, when they create governance so heavy it prevents adoption altogether (pushing more activity into the shadows), when they mandate a single vendor across the entire organisation, or when they treat AI as a cost-reduction exercise (which produces the “automating the existing org chart” failure mode rather than process transformation).
The most pernicious mistake is treating AI adoption as a single programme with a defined start and end date. AI is not an ERP implementation. It does not have a go-live date. It is a continuous organisational capability, and the Nadler-Tushman Congruence Model helps explain why. When the formal structure says “IT owns AI” but the informal culture says “people are already using AI tools whether IT knows about it or not,” that misalignment will eventually break something. Usually what gives is the formal structure, albeit slowly and painfully.
Making it practical
The frameworks above provide a way to think about the problem, but thinking is not the same as doing. Here is what the sequence of practical actions looks like when you apply Trevor’s value chain logic to AI adoption in a traditional organisation.
Start by pressure-testing your strategy. Before making any structural changes, get your senior leadership team in a room and answer one question honestly. Are you pursuing AI for incremental efficiency within your current operating model, or are you pursuing it to fundamentally change how you compete? Both are valid answers, but they lead to completely different organisational responses.
Most organisations have not answered this question explicitly, which means different parts of the business are operating on different assumptions about what AI is for. That misalignment will express itself as confusion, turf wars, and wasted investment. Trevor and Varcoe’s HBR diagnostic on strategic alignment provides a structured way to surface these gaps.
Map capabilities against ambition. Once you have strategic clarity, audit what capabilities you have today versus what you need. Be honest about this. Most organisations dramatically overestimate their internal AI capability because they conflate IT technical skills with AI implementation skills, which are different things. The capability audit should cover technical AI skills (model selection, integration, monitoring), domain translation skills (people who can bridge between business processes and AI possibilities), workflow design skills (people who can redesign processes around AI rather than bolting AI onto existing processes), and change leadership skills (people who can bring others along). For each capability, you need a frank assessment of whether it exists internally, whether it can be developed on a realistic timeline, or whether it needs to be acquired through hiring or partnerships.
Design architecture around capability, not hierarchy. This is where the cross-functional team becomes relevant, but only if you design it deliberately. The team needs a clear mandate tied to the strategic choice you made in step one. It needs to be staffed with people who collectively cover the capability gaps you identified in step two. It needs reporting lines that give it authority over the processes it is transforming, which almost certainly means it reports to someone senior enough to arbitrate between competing departmental interests. And it needs to be structured in a way that acknowledges the political dynamics honestly. In practice, this means having representatives from the affected business units on the team, giving those representatives genuine influence over decisions, and ensuring that the business units they come from are rewarded for their cooperation rather than penalised for losing headcount or budget.
Redesign management systems in parallel. This is the step that separates organisations that succeed from organisations that create impressive-sounding AI teams that quietly accomplish nothing. Before the cross-functional team starts work, change the metrics and incentives for the business units it will be working with. If you are asking the claims handling team to cooperate with an AI initiative that will change their roles, make sure their performance metrics reflect the new expectations rather than the old ones. If you are asking IT to hand over some responsibilities to the AI function, make sure IT’s budget and headcount are not penalised for doing so. The management system changes do not need to be permanent or perfect at this stage, but they need to exist, because without them you are asking people to act against their own incentive structures, which they will not do for long regardless of how compelling your AI vision is.
Build in public. One of the most effective practical tactics I have seen is to have the cross-functional AI team work visibly and share results (including failures) broadly across the organisation. This serves several purposes simultaneously. It demystifies AI for people who are anxious about it. It creates internal advocates as people see tangible results. It gives the shadow AI users a legitimate channel to contribute their knowledge and experience. And it builds the organisational AI literacy that will be necessary for scaling beyond the initial team. Kotter’s dual operating system concept is relevant here, where the cross-functional AI team operates as a faster-moving network alongside the existing hierarchy, and the visibility of its work gradually shifts organisational norms without requiring a top-down mandate that triggers resistance.
Plan for the second wave. The initial cross-functional team and its first projects will teach you things that no amount of upfront planning can predict. Build explicit review points where you reassess your strategy, capabilities, architecture, and management systems in light of what you have learned. Trevor’s concept of strategic realignment as a continuous leadership competency rather than a one-off transformation is particularly apt for AI, because the technology is evolving so rapidly that any fixed structure will be outdated within a year. The goal is not to design the perfect AI organisation on day one. The goal is to build an organisation that can adapt its AI capabilities continuously as both the technology and your understanding of it evolve.
Conclusion
Most traditional organisations are not structured for the kind of cross-functional, fast-moving, continuously-evolving capability that AI demands. Their hierarchies, incentive structures, decision-making processes, and cultural norms were all designed for a world where technology changed more slowly, where knowledge was more specialised, and where coordination costs were higher.
AI offers the opportunity to do fundamentally different things, and to organise differently to do them. This goes well beyond doing existing things faster. The organisations that recognise this and are willing to make structural changes, even uncomfortable ones, will outperform those that try to bolt AI onto their existing operating model and hope for the best.
Shadow AI is the canary in the coal mine. It is telling you that your people are ready for AI, even if your organisation is not. The question is whether leadership will listen to that signal and respond with genuine organisational adaptation, or whether they will respond with a reflexive control impulse.
The history of technology adoption in enterprises suggests that the control impulse always loses eventually. The people with the tools always outperform the people with the policies. The difference with AI is that “eventually” is measured in months rather than years, and the competitive consequences of being late are proportionally far more severe, perhaps even existential.
I am a partner in Better than Good. We help companies make sense of technology and build lasting improvements to their operations. Talk to us today: https://betterthangood.xyz/#contact