Intent Engineering Is Just Management. Nobody Wants to Hear That.
The AI industry keeps inventing new engineering disciplines to avoid admitting the real problem: most organisations can't articulate what they actually want.

In early 2024, Klarna reported that its AI customer service assistant was doing the work equivalent of 700 full-time agents and projected a $40 million profit improvement. By mid-2025, CEO Sebastian Siemiatkowski was telling Bloomberg that while cost had been the "predominant evaluation factor," the result was "lower quality." Klarna began rehiring the human agents it had gutted months earlier.
The comfortable reading of this story — the one that circulated through LinkedIn like a warm blanket — was that AI can't handle nuance. That human judgment is irreplaceable. That we're all safe for now.
The uncomfortable reading, and the correct one, is that Klarna's AI agent performed brilliantly. It resolved 2.3 million conversations in its first month across 23 markets and 35 languages. Resolution times dropped from 11 minutes to under 2. The agent did exactly what it was told to do: close tickets fast. The problem was that closing tickets fast was never actually what Klarna needed. Klarna needed to build lasting customer relationships in a brutally competitive fintech market. Those are profoundly different goals, and they require profoundly different decision-making at the point of interaction.
A human agent with five years at the company knows this difference intuitively. She knows when to bend a policy, when to spend three extra minutes because a customer's tone signals they're about to churn, when efficiency is the right move versus when generosity is. She knows this not because someone gave her a 47-page prompt, but because she absorbed the company's actual values — not the ones on the website, but the ones encoded in decisions managers make every day, in stories veterans tell new hires, in the unwritten rules about which metrics leadership genuinely cares about when push comes to shove.
The AI agent knew none of it. It had a prompt. It had context. It did not have intent.
The AI industry loves a good taxonomy. It gives the impression of progress without requiring any. So here's where we are in 2026:
Prompt engineering was the first discipline. Individual, synchronous, session-based. You sit in front of a chat window, craft an instruction, iterate the output. It produced a thousand "how to write the perfect prompt" blog posts. Most of them were terrible. The skill was real but personal — it scaled about as well as handwriting.
Context engineering followed. Anthropic published a foundational piece in September 2025 defining it as "the shift from crafting isolated instructions to crafting the entire information state that an AI system operates within." LangChain's Harrison Chase put it more bluntly in a Sequoia Capital interview: "Everything's context engineering. It's such a good term. I wish I came up with it, because it describes everything we'd done at LangChain without knowing the term existed." This is where the action is right now — building RAG pipelines, wiring up MCP servers, structuring organisational knowledge so agents can access it.
And now, in early 2026, a third term is emerging: intent engineering. The practice of encoding organisational purpose into infrastructure — not as prose in a system prompt, but as structured, actionable parameters that shape how agents make decisions autonomously. Context engineering tells agents what to know. Intent engineering tells agents what to want.
It sounds compelling. It sounds like progress. It sounds like exactly the kind of thing that would justify another round of conference talks and another generation of tooling startups.
Here's the problem: intent engineering is not a new discipline. It's management. And the reason nobody wants to hear that is the same reason Klarna shipped an agent without teaching it what the company actually valued — because defining organisational intent is slow, political, unglamorous work that engineers have been trying to automate away since the first management consultant walked through a factory door in 1911.
Deloitte's 2026 State of AI in the Enterprise report surveyed 3,000 leaders across 24 countries and found that 84% of companies have not redesigned jobs around AI capabilities. Only 21% have a mature model for agent governance. Gartner predicts that by 2028, 15% of day-to-day work decisions will be made autonomously by agents.
These numbers are presented as an AI adoption challenge. They are not. They are a management failure that predates AI by decades.
Consider: how many organisations could, right now, hand you a document that clearly articulates their decision-making principles? Not their values statement — every company has one of those, and they're all identical ("integrity, innovation, collaboration" — the corporate equivalent of a dating profile that says "I love to laugh"). I mean an actual operational document that specifies: when quality conflicts with speed, which wins? When customer satisfaction conflicts with unit economics, where's the line? When a team member spots an opportunity that doesn't fit the quarterly plan, what should they do?
Most organisations cannot produce this document because they've never written it. They haven't needed to. Humans are remarkably good at absorbing implicit organisational intent through a hundred informal mechanisms: watching senior people handle ambiguous situations, overhearing hallway conversations, attending all-hands meetings, having a drink with a colleague who's been there for ten years. We call this "culture" and we treat it as if it's a competitive advantage, which it is — right up until the moment you need to transfer it to a system that doesn't drink, doesn't gossip, and doesn't attend happy hours.
The intent gap didn't appear when AI arrived. AI simply made it visible. The gap was always there, papered over by human adaptability. Every time a new hire spends six months "figuring out how things work around here," that's the intent gap. Every time a middle manager translates a vague executive directive into actionable team goals, that's the intent gap being bridged by human labour. Every time an experienced employee makes a judgment call that technically violates policy but serves the company's actual interests, that's the intent gap in action.
We've been running organisations on implicit intent for a century. It mostly worked because humans are extraordinarily good at inferring unstated goals from context. AI agents are not. And rather than acknowledge that the real work is articulating what was always left unsaid, the industry has invented a new engineering discipline to make it sound like a technical problem.
Microsoft poured billions into Copilot. They embedded AI into every Office application. They launched the most aggressive enterprise sales campaign in recent memory. Eighty-five percent of Fortune 500 companies adopted it.
Then adoption stalled hard. Gartner found that only 5% of organisations moved from a Copilot pilot to a larger-scale deployment. Only about 3% of the total Microsoft 365 user base actually adopted Copilot as paid users. Bloomberg reported Microsoft slashing internal sales targets after the majority of salespeople missed their goals. Reddit threads filled with engineers at multi-billion-dollar companies describing their organisations downgrading licences because employees preferred ChatGPT or Claude.
The standard explanation centres on UX problems and model quality. Those are real issues. But they're not the fundamental issue.
Deploying an AI tool across an organisation without organisational intent alignment is like hiring 40,000 new employees and never telling them what the company does, what it values, or how to make decisions. You get lots of activity and not much productivity. You get AI usage metrics on a dashboard and almost no measurable impact on what the organisation is actually trying to accomplish.
That's not a tools problem. That's a management problem wearing a technology mask.
KPMG's Q4 2025 AI pulse survey showed capital flowing, ROI confidence rising, and agents moving from pilots to professionalised platforms. Surveyed companies are spending an average of $700 million on AI; against average revenues of $13 billion, that's roughly 5% of the top line. Yet 24% of companies globally report they have yet to see tangible value from AI, and McKinsey found that only 33% of companies have scaled AI beyond pilots.
These numbers coexist with the investment numbers. There's no contradiction once you understand what's actually happening: organisations have solved "can AI do this task?" and completely failed to solve "can AI do this task in a way that serves our organisational goals across the organisation with appropriate judgment?" The first is an engineering question. The second is a management question. The industry keeps trying to solve the second one with engineering.
If you strip away the jargon, the "intent engineering" challenge operates across three layers. Each one sounds technical but is fundamentally organisational.
Layer one: unified context infrastructure. Right now, every team building agents rolls its own context stack. One team pipes Slack data through a custom RAG pipeline. Another manually exports Google Docs into a vector store. A third builds an MCP server that connects to Salesforce but not to Jira. A fourth team doesn't know the other three exist. This mirrors the shadow IT crisis of the early cloud era, except the stakes are higher because agents don't just access data — they act on it.
The Model Context Protocol, which Anthropic introduced in late 2024 and donated to the Linux Foundation in December 2025, is the most promising attempt at standardisation. OpenAI, Google, Microsoft, and more than 50 enterprise partners have committed to it. Monthly SDK downloads are approaching 100 million. But protocol adoption and organisational implementation are different things entirely. Having a USB-C standard doesn't help if your company hasn't decided which ports to install, who maintains them, or what gets plugged in.
The real questions aren't technical: Which systems become agent-accessible? Who decides what context an agent can see across departments? How do you version organisational knowledge so agents aren't operating on stale information? How do you handle the fact that the sales team's Slack context and the engineering team's Slack context encode completely different institutional assumptions?
Deloitte found that nearly half of organisations cited data searchability and data reusability as top challenges blocking AI automation. The data exists inside corporations. The agents increasingly exist too. The connective tissue between them — the organisational context layer, the structures and safeguards to ensure data is accessed correctly — mostly doesn't.
Layer two: coherent AI worker toolkit. Everyone's rolling out their own AI workflow. One person uses Claude for research and ChatGPT for drafting. Another uses Cursor for code and Perplexity for fact-checking. A third has built a custom agent chain using LangGraph. A fourth is copy-pasting into a chat window. None of these employees can articulate their workflow in a way that's transferable, measurable, or improvable by anybody else.
The difference between individual AI use and organisational AI compounding is enormous. It's the difference between having one good hire and having a system that makes everybody better. Individual AI adoption produces the 30% gains you get from bolting AI onto existing workflows. Organisational AI fluency produces the 300% gains you get from rethinking the workflow itself around AI capabilities. But fluency doesn't scale through training alone. It scales through shared infrastructure.
The Lloyds 2026 report found that workforce access to sanctioned AI tools expanded by 50% in a year. But access is not fluency. Organisations are giving people tools without giving them — or their agents — the organisational context and intent that would allow those tools to deliver real value.
Layer three: intent encoding. This is where the "intent engineering" crowd gets excited, and where I think they go wrong. The proposal is to create "machine-readable expressions of organisational purpose" — structured, actionable parameters that define goals, values, trade-offs, and decision boundaries for autonomous systems.
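To make the proposal concrete, here is a minimal sketch of what such "machine-readable intent" might look like. Every name and field here is my own illustration, not any vendor's schema — a toy showing the shape of the idea, not an implementation of it:

```python
from dataclasses import dataclass

@dataclass
class TradeOff:
    """An explicit ranking between two competing goals."""
    prefer: str          # goal that wins by default
    over: str            # goal that yields
    unless: str = ""     # plain-language condition for escalating instead

@dataclass
class OrganisationalIntent:
    """Structured purpose: goals, trade-off rankings, hard boundaries."""
    goals: list[str]
    trade_offs: list[TradeOff]
    hard_boundaries: list[str]   # actions an agent may never take autonomously

    def resolve(self, goal_a: str, goal_b: str) -> str:
        """Return which goal wins when two conflict, or escalate if unranked."""
        for t in self.trade_offs:
            if {t.prefer, t.over} == {goal_a, goal_b}:
                return t.prefer
        # Unranked conflicts go back to humans — the honest default.
        return "escalate_to_human"

intent = OrganisationalIntent(
    goals=["retention", "handle_time"],
    trade_offs=[TradeOff(prefer="retention", over="handle_time",
                         unless="queue depth exceeds SLA")],
    hard_boundaries=["never promise refunds above policy limits"],
)

print(intent.resolve("handle_time", "retention"))  # -> retention
print(intent.resolve("retention", "compliance"))   # -> escalate_to_human
```

Note how quickly the sketch runs out of road: the `unless` condition is already back to plain language, which is exactly where the hard part lives.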
This is not a bad idea. It's just not a new one. It's what management consultants have been trying to do since Frederick Winslow Taylor published The Principles of Scientific Management in 1911. It's what OKR frameworks attempt. It's what balanced scorecards attempt. It's what every corporate strategy document attempts. The history of management theory is a 115-year-long attempt to make organisational intent explicit, transferable, and actionable.
The track record is not encouraging.
The optimistic case for intent engineering goes something like this: previous attempts to codify organisational intent failed because the consumers of that intent were humans, and humans are good enough at inferring implicit intent that explicit systems felt redundant. AI agents aren't. Therefore, the pressure to codify intent will be irresistible, and organisations will finally do the hard work they've been avoiding for decades.
There's a version of this argument I find persuasive. Agents that run for weeks or months — and we're starting to see those in 2026 — genuinely cannot operate on implicit intent. A human employee who makes a bad judgment call in week three gets corrected by their manager. An agent that makes a bad judgment call in week three might have already executed a thousand downstream decisions based on that call. The blast radius of misaligned intent is categorically different with autonomous systems.
But the pessimistic case is equally compelling: organisations haven't codified their intent not because they didn't need to, but because they can't. Organisational intent is inherently contradictory. Every company simultaneously wants to maximise revenue, minimise costs, delight customers, move fast, maintain quality, innovate boldly, and manage risk conservatively. The genius of human organisations is that humans navigate these contradictions through judgment, politics, and the selective application of different priorities in different contexts. That's not a bug in the system — it's the system.
Try encoding "use your judgment" into a machine-readable parameter. Try encoding "we value quality, but not when it threatens the quarterly target, except when the CEO is paying attention, in which case quality matters more than anything." Try encoding the difference between the company's stated values and its revealed preferences — the gap between what leadership says matters and what leadership actually rewards.
This is why Klarna's story is more instructive than the intent engineering crowd wants it to be. The most unsettling possibility isn't that Klarna's AI agent lacked the company's intent. It's that the agent perfectly reflected the company's actual intent — which was to cut costs — and the stated intent about customer relationships was always aspirational fiction. The customers' backlash didn't reveal a technology failure. It revealed an organisational one.
After 26 years in ecommerce, I've watched every wave of enterprise technology follow the same pattern: vendors sell the tool, buyers skip the organisational work, results disappoint, a new discipline emerges to explain the gap, and a new wave of tooling promises to fix it. Intent engineering fits this pattern perfectly.
What actually works — what has always worked — is boring, political, and deeply human:
Start with decisions, not tools. Before deploying an AI agent, map the decisions it will make. For each decision, identify the trade-offs involved. For each trade-off, get explicit sign-off from someone with authority on which way to lean. This is not engineering. This is the work that every good product manager does before writing a spec, and every good operations leader does before redesigning a process. The fact that it now involves AI doesn't change the work — it just raises the stakes.
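The mapping exercise above can be as simple as a decision register. This is a hypothetical sketch — the decisions, leans, and owners are invented for illustration — but the useful property is real: nothing ships without a named trade-off and a named owner:

```python
# A minimal decision register, as described above: each decision an agent
# will make is mapped to its trade-off, a chosen lean, and an accountable
# owner before deployment. All entries are illustrative.
decision_register = [
    {
        "decision": "offer goodwill credit to an unhappy customer",
        "trade_off": "short-term cost vs. retention",
        "lean": "retention, up to a capped amount",
        "signed_off_by": "Head of Customer Operations",
    },
    {
        "decision": "close a ticket the customer hasn't confirmed resolved",
        "trade_off": "handle time vs. satisfaction",
        "lean": "satisfaction: follow up once before closing",
        "signed_off_by": "VP Support",
    },
]

# The check worth automating is completeness, not cleverness:
# no agent decision goes live without explicit sign-off.
unowned = [d["decision"] for d in decision_register if not d["signed_off_by"]]
assert not unowned, f"Decisions lacking sign-off: {unowned}"
```

The register is deliberately boring. The value is in the conversations required to fill it in, not in the data structure.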
Accept that intent is dynamic, not static. Organisations that try to encode intent as a fixed configuration will fail the same way organisations that tried to capture "business rules" in enterprise software failed in the 2000s. Intent shifts with market conditions, competitive pressure, leadership changes, and a hundred other factors. Any system that treats intent as a deployment-time configuration rather than a continuously updated signal will be wrong within months.
Build feedback loops, not governance frameworks. Deloitte found that only 21% of organisations have a mature model for agent governance. The instinct is to build a governance framework — committees, review boards, approval workflows. This will produce the same results as every other corporate governance framework: bureaucracy that slows everything down without actually improving outcomes. What works is tight feedback loops: agents log their decisions, humans review a sample, disagreements trigger updates to the intent parameters. It's less satisfying than a comprehensive framework, but it actually works.
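The loop described above — agents log decisions, humans review a sample, disagreements update the intent parameters — fits in a few lines. This is a sketch under my own assumptions (deterministic every-Nth sampling, a reviewer callback, a flat parameter dict), not a real framework:

```python
def review_loop(decision_log, intent_params, sample_every=10, reviewer=None):
    """Review every Nth logged agent decision; each human disagreement
    updates the relevant intent parameter rather than spawning a new
    approval workflow."""
    for decision in decision_log[::sample_every]:
        if reviewer and not reviewer(decision):
            # Disagreement: the human's preferred outcome becomes the
            # new default for this trade-off.
            intent_params[decision["trade_off"]] = decision["human_preferred"]
    return intent_params

# Hypothetical usage: an agent has been favouring speed; the sampled
# reviews say the organisation actually wants quality.
log = [{"trade_off": "speed_vs_quality",
        "agent_choice": "speed",
        "human_preferred": "quality"}] * 20
params = {"speed_vs_quality": "speed"}
params = review_loop(
    log, params, sample_every=10,
    reviewer=lambda d: d["agent_choice"] == d["human_preferred"],
)
print(params)  # -> {'speed_vs_quality': 'quality'}
```

The point of the sketch is the shape: sampling keeps review cheap, and the output of review is a parameter change, not a meeting.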
Stop pretending this is a technical problem. The reason 84% of companies haven't redesigned jobs around AI capabilities isn't that they lack the right tools. It's that redesigning jobs requires difficult conversations about power, status, expertise, and value. It requires managers to articulate what they actually want from their teams, which many managers cannot do because they've never had to — they've relied on hiring experienced people and trusting them to figure it out. That approach doesn't transfer to agents.
Here's what I think the intent engineering discourse gets right, even if it wraps it in the wrong packaging: the organisations that win in the agent era will not be the ones with the best models, the most context, or the cleverest prompts. They will be the ones that can articulate what they want clearly enough that an autonomous system can act on it without constant supervision.
That's not an engineering capability. It's an organisational capability. And it's one that most companies don't have — not because the tooling doesn't exist, but because the leadership clarity doesn't exist.
The companies that are genuinely good at this tend to have something in common: they were already good at articulating intent before AI arrived. They had clear decision-making frameworks. They had leaders who could explain not just what they wanted but why they wanted it and what they'd sacrifice to get it. They had cultures where trade-offs were discussed explicitly rather than resolved through political manoeuvring.
These companies didn't need a new engineering discipline. They needed a new interface — a way to express the clarity they already had in a format that machines could act on. For them, "intent engineering" is a translation exercise. For everyone else, it's a mirror being held up to organisational dysfunction that was always there but never mattered this much.
The $700 million average AI investment isn't going to fix that. The 100 million MCP SDK downloads aren't going to fix that. The next generation of agent orchestration platforms isn't going to fix that.
The only thing that fixes it is the unglamorous, deeply human work of deciding what your organisation actually wants — and being honest about the answer, even when it's not what the values statement says.
Klarna's AI agent wasn't broken. It was the most honest employee the company ever had. It did exactly what it was told, with perfect fidelity, at massive scale. The problem was never the agent. The problem was the instructions. And the instructions were a perfect reflection of what the organisation actually prioritised, as opposed to what it claimed to prioritise.
That's not an engineering problem. That's a leadership problem. And no amount of renaming it will change what it takes to fix it.