Comprehension Lock-In: The Enterprise AI Bet Nobody Is Pricing
OpenAI's real play isn't the model. It's becoming your organisation's institutional memory — and the lock-in makes Salesforce look portable.

OpenAI engineers accidentally leaked GPT-5.4's existence by committing internal code to a public GitHub repo. Twice in five days. Prediction markets spiked. Twitter did its thing. The entire AI discourse pivoted to speculation about benchmark improvements and capability jumps.
None of that matters.
What matters is buried in the press release from OpenAI's latest massive fundraise: they're building a "stateful runtime environment" with AWS. That single phrase, which most commentators either ignored or didn't understand, describes the most consequential bet in enterprise software history. Not because of the model. Because of what sits around it.
Consider where your organisation's real knowledge lives. Not the documented kind — the kind that determines whether the business executes well or catastrophically.
Code sits in GitHub. Architectural decisions rot in Confluence pages nobody updates. Customer context lives in Salesforce. Project status hides in Jira. The informal reasoning — the why behind decisions — lives in Slack threads that scrolled past three months ago, in meeting transcripts nobody reads, or in the heads of senior people who are one LinkedIn message away from leaving.
Every one of these systems is a filing cabinet. The information exists in abundance. What doesn't exist is a synthesis layer.
Right now, that synthesis layer is human brains. And human brains are bandwidth-limited, impaired by context-switching, and prone to leaving when they get a better offer. When a senior engineer quits, the filing cabinets stay full. What disappears is the person who knew which cabinets to open and how to connect the contents in ways that created value.
Anyone who's worked in technology has felt this. A key person walks out, and the entire organisation stumbles for months. Not because the data is gone — because the understanding is gone.
In ecommerce, this problem is particularly acute. The head of digital who understood the intricate relationship between your Google Ads bidding strategy, seasonal inventory patterns, and Shopify checkout conversion rates — they held institutional context that no single system captured. They knew that reducing ad spend on Product A during March actually increased overall revenue because it shifted budget to Product B, which had higher margins and better stock availability post-Chinese New Year. That's not data. That's synthesis. And when that person moves to a competitor, it walks out with them.
The SaaS industry has been building solutions to the data storage problem for twenty years. Nobody has been building solutions to the understanding problem. That's the gap OpenAI spotted.
As AI strategist Nate B. Jones laid out in a recent analysis, OpenAI isn't making a single bet. They're making four interlocking bets that must all succeed simultaneously — and the failure of any one collapses the entire play. It's Ocean's Eleven with $600 billion of infrastructure on the line.
Bet 1: Intelligence × Context is multiplicative. Give a mediocre model a million tokens of organisational history and it drowns. It pattern-matches on surface-level similarity, finds a discussion that sounds related but was about a different service in a different context, and synthesises confidently from the wrong source. Coherent, well-sourced, and completely wrong. Long context with weak reasoning isn't just useless — it's actively harmful. A strong reasoning model changes the equation: each increment of reasoning expands the scope of context the model can productively use, generating nonlinear returns. If reasoning plateaus, the context layer degrades from institutional memory (incredibly valuable) to an expensive RAG pipeline that hallucinates organisational knowledge (actively harmful).
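The multiplicative claim can be made concrete with a deliberately crude toy model (all numbers invented for illustration): if a model uses some fraction of retrieved context correctly and confidently mis-applies the rest, then below a reasoning threshold, adding context makes outcomes worse, not better.

```python
# Toy model, illustrative only: net value of retrieved context as a
# function of reasoning quality. All constants are invented.

def context_value(tokens: int, reasoning: float) -> float:
    """reasoning in [0, 1]: fraction of retrieved context the model
    uses correctly; the remainder is confidently mis-synthesised,
    which costs more than having no context at all."""
    correct = reasoning * tokens          # productively used context
    wrong = (1 - reasoning) * tokens      # mis-applied context
    return correct - 2 * wrong            # wrong synthesis is costlier than none

# Weak reasoning: every extra token of context digs the hole deeper.
# Strong reasoning: every extra token compounds the advantage.
for tokens in (1_000, 100_000, 1_000_000):
    print(tokens, context_value(tokens, 0.4), context_value(tokens, 0.9))
```

The exact penalty factor is arbitrary; the shape is the point. Below the break-even reasoning level, the curve slopes down as context grows, which is the "expensive RAG pipeline that hallucinates organisational knowledge" failure mode in miniature.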
Bet 2: Memory that doesn't rot. Today's AI memory is a colleague who remembers your coffee order but forgets the substance of last week's conversation. What OpenAI needs is institutional memory at a depth that has never existed in software. Consider the architect who built the payment service in 2019 and knows — but has never documented — that the retry logic has a specific interaction with the rate limiter that causes cascading failures under a particular load pattern. The only reason this hasn't caused a production incident is that the team manually scales the threshold during peak periods. That knowledge is fragile. Every departure, every reorg, every on-call rotation contributes to continual organisational forgetting. And memory that preserves context without updating it is worse than no memory at all — it's institutional hallucination. The AI equivalent of a ten-year veteran who confidently explains how things work based on how they worked last year.
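One way to see why stale memory is worse than no memory: any serious design would need facts to carry a validation timestamp and be flagged once that window lapses, rather than served as current truth. A minimal sketch, with an entirely hypothetical schema:

```python
# Minimal sketch of a hypothetical design: memory entries expire unless
# revalidated, so the system refuses to serve last year's "how things work".
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class MemoryEntry:
    fact: str
    source: str
    last_validated: datetime

def usable(entry: MemoryEntry, now: datetime, max_age: timedelta) -> bool:
    # Serving a fact past its validation window is institutional
    # hallucination: confidently describing how things used to work.
    return now - entry.last_validated <= max_age

entry = MemoryEntry(
    fact="payment-service retry logic interacts badly with the rate limiter",
    source="incident review, 2019",
    last_validated=datetime(2019, 6, 1),
)
print(usable(entry, datetime(2025, 1, 1), timedelta(days=365)))  # False: stale
```

The hard part, of course, is not the timestamp check but deciding what counts as revalidation, which is exactly where today's AI memory systems have nothing to offer.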
Bet 3: Retrieval at a scale that has never existed. This is the crux. When your agent has trillions of tokens of organisational history, current retrieval paradigms — RAG included — cannot solve the problem. RAG works for factual lookup. It breaks for enterprise-scale organisational context in specific ways. It can't handle relational queries across time ("find the chain of decisions that led to this vulnerability"). It can't distinguish current context from context about systems that no longer exist. And these failures get worse as the corpus grows: more false positives, more near-miss retrievals, more confident synthesis from irrelevant context. A solution probably requires structured indexing that tracks entities and causal chains over time, hierarchical memory at multiple granularity levels, temporal state tracking, and state-space compression for long-horizon context. Nobody currently benchmarks "find 2,000 relevant tokens in 10 trillion when relevance is defined by causal chains across 8 months." The company that solves even something close to this first has a lead competitors can't even assess from outside.
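The difference between similarity search and the relational queries above can be sketched in a few lines. In this toy (all decision records invented), "what led to this vulnerability?" is a walk over explicit causal links, ordered in time, which no amount of embedding similarity reproduces:

```python
# Toy sketch, invented data: retrieval keyed on causal chains rather than
# surface similarity. Each decision records what caused it, so tracing a
# vulnerability back is a graph walk, not a text match.
from datetime import date

decisions = {
    "D1": {"when": date(2024, 3, 1), "text": "disable TLS verification in staging", "caused_by": []},
    "D2": {"when": date(2024, 5, 10), "text": "promote staging config to prod", "caused_by": ["D1"]},
    "D3": {"when": date(2024, 11, 2), "text": "vulnerability: prod skips TLS checks", "caused_by": ["D2"]},
}

def causal_chain(decision_id: str) -> list[str]:
    """Walk caused_by links back in time, return the chain oldest-first.
    (A real corpus would need cycle/duplicate handling; omitted here.)"""
    chain, frontier = [], [decision_id]
    while frontier:
        d = frontier.pop()
        chain.append(d)
        frontier.extend(decisions[d]["caused_by"])
    return sorted(chain, key=lambda d: decisions[d]["when"])

print(causal_chain("D3"))  # ['D1', 'D2', 'D3']
```

At three records this is trivial. At trillions of tokens, building and maintaining those links is the unsolved indexing problem the bet depends on.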
Bet 4: Execution at the speed of trust. When an agent runs autonomously across hundreds of tasks for weeks, even a 5% per-task failure rate compounds into systemic risk. The target for sustained long-running agentic workflows at enterprise context depth is closer to 99.5% accuracy or higher, across diverse tasks including situations where organisational context is ambiguous, contradictory, or incomplete.
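The arithmetic behind that compounding is worth running directly, using the 95% and 99.5% figures above:

```python
# Probability that every task in a long-running workflow succeeds,
# at 95% vs 99.5% per-task reliability.
for per_task in (0.95, 0.995):
    for n in (10, 100, 500):
        clean = per_task ** n
        print(f"{per_task:.3f} per task, {n} tasks -> {clean:.1%} of workflows complete clean")
```

At 95% per task, fewer than 1% of 100-task workflows finish without a failure; at 99.5%, roughly 60% do. That is the gap between a demo and something an enterprise will let run unattended for weeks.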
Every capability reinforces the others. Better retrieval means more relevant context. Better intelligence means more careful reasoning. More coherent memory means context reflects reality. The capabilities compound together, or the whole bet falls apart.
If this compound bet works, what you have isn't a better tool. You have a new layer in the enterprise stack that sits above every existing system and synthesises across all of them.
Think about what a system of record actually is. Salesforce is worth nearly $190 billion for owning customer data. ServiceNow is worth over $150 billion for owning IT workflow data. These companies aren't worth that because they store data well. They're worth it because they're the canonical source the rest of the organisation trusts and builds on.
The AI context platform becomes the system of record for something more valuable than any single data type: organisational understanding. Not customer data, not code, not project status — the synthesised understanding of how all of those relate, how they've changed, and what they imply for current decisions.
Consider a concrete scenario. A product manager asks: "Should we build the real-time analytics feature that Enterprise Customer X has been requesting?" Without institutional context, this is a one-dimensional question. With twelve months of accumulated organisational context and a working synthesis layer, the agent answering this question draws upon: the original conversation where the customer described the need; three other enterprise customers who made similar requests with different constraints; the engineering team's assessment from six months ago that the current pipeline couldn't support real-time processing at the volume required; the infrastructure upgrade last month that removed that constraint; competitive analysis showing two rivals shipped similar features in Q4; and the CFO's directive that new features need payback within two quarters.
No individual person has all of this context. The synthesis — turning fragmented organisational data into a coherent decision basis — currently requires getting all these people in a room, a weeks-long planning process, or just making the decision with incomplete information. The context platform does it in seconds. Not because it's smarter than people, but because it has access to all the filing cabinets simultaneously.
Here's where it gets genuinely uncomfortable for anyone who thinks about vendor dependency.
When an enterprise's organisational understanding lives on a context platform, switching to anything else means losing the synthesis layer that connects every system in the stack. The agent that knows how Salesforce data relates to GitHub decisions relates to the board deck — that understanding can't be exported.
Salesforce's lock-in comes from data. Data is ultimately portable (painfully, expensively, but portable). The context platform's lock-in comes from understanding. A year's worth of synthesised organisational knowledge is not portable. This is the deepest form of technology lock-in that has ever existed in enterprise software.
Think about this in commerce terms. Imagine a platform that has twelve months of context about your brand: every A/B test result, every seasonal pattern, every customer segment that over-indexes on specific product combinations, every pricing experiment, every supplier reliability issue, every customer service pattern that predicts returns. Now imagine switching to a competitor. You can export your product catalogue. You can migrate your customer database. But can you export the understanding that links a spike in returns for Product X to a supplier quality issue that started in September, combined with a misleading product description that was updated but still appears in Google Shopping feeds from cached pages? That understanding isn't in any single system. It's in the synthesis layer. And the synthesis layer doesn't have an export button.
This is what Jones calls "comprehension lock-in" — and it's qualitatively different from any lock-in the software industry has ever seen. Platform lock-in is about migration effort. Data lock-in is about format conversion. Comprehension lock-in is about losing your organisation's accumulated intelligence and starting the learning process from zero.
And it compounds with every day the platform operates. Month one, you have a smart but generic agent — a talented new hire who can read the wiki. Month three, agents have processed hundreds of code reviews and architectural discussions, synthesising across silos. Month six, agents know things no single person knows, connecting decisions across teams that would never surface in normal human workflows. By maturity, you have a network of agents operating as the institutional knowledge layer of the enterprise.
Fast forward to 2028. What does it cost to switch? Not the subscription — the understanding. The months or years of accumulated synthesis, decision histories, cross-team connections, pattern recognition from hundreds of code reviews and incidents. All of that disappears. The enterprise goes back to humans as the integration layer and resets from scratch.
That is institutional capture at a depth enterprise software has never seen. And there is no natural ceiling. The longer you stay, the deeper the understanding, the higher the switching cost.
While OpenAI builds top-down infrastructure for organisational-scale context capture with AWS, Anthropic is accumulating context organically. Claude Code has captured over half of the enterprise coding market. Every day, it generates Claude.md files, workflow patterns, team muscle memories, and project histories — session by session.
That context isn't currently labelled as a strategic asset. It isn't processed that way. But enterprises know it's valuable, and so does Anthropic.
The irony is that context accumulated organically through daily usage may be more valuable than context captured architecturally. It reflects how people actually work. The developer who's been using Claude Code for six months has built workflows deeply integrated into their actual process. A runtime capturing context from day one captures context about workflows that haven't adapted to its existence yet.
OpenAI's approach: sign up CIOs on master service agreements, deploy the stateful runtime, capture everything from the top. Anthropic's approach: developers choose the tool, workflows build organically, context accumulates bottom-up. If adoption flows bottom-up — with developers choosing tools and workflows building naturally — Anthropic potentially has a head start.
But head starts don't matter if OpenAI ships a fully capable stateful runtime environment first. The overwhelming enterprise sales motion — "you've got context, it comes from OpenAI, it runs on AWS" — is enough for most enterprises. Capital buys infrastructure. It doesn't necessarily buy product-market fit, but it buys a lot of CIO meetings. And in enterprise software, CIO meetings have historically been enough to win — regardless of whether the product actually delivers.
For ecommerce businesses, this isn't abstract futurism. The same dynamics apply at a smaller scale, and they're already in motion.
Your organisation's understanding is already fragmenting across tools. Your marketing team is on ChatGPT, your developers are on Claude, your analysts are on Gemini. Each team is building a valuable asset individually, but you're not building common understanding. That fragmentation will cost you.
Three questions worth asking now:
Where is your understanding actually accumulating? Not your data — your understanding. If the answer is "nowhere coherently," you have a problem that will compound monthly. You don't need to wait for OpenAI's context platform. You can build a more primitive version now — structured knowledge bases with proper hierarchical tagging, retrieval across a few hundred thousand documents, team-level context sharing. Getting to even a few million tokens of shared context will accelerate collective understanding dramatically.
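A "more primitive version" of that hierarchical tagging can be surprisingly small. A sketch with an invented schema: documents carry path-style tags, so retrieval can scope by team or system instead of relying on keyword match alone.

```python
# Primitive context-layer sketch, hypothetical schema: hierarchical
# path-style tags let retrieval scope to a team or system subtree.
docs = [
    {"id": 1, "tags": ["eng/payments/retries"], "text": "retry threshold raised during peak"},
    {"id": 2, "tags": ["eng/payments/rate-limiter"], "text": "rate limiter config history"},
    {"id": 3, "tags": ["marketing/q4"], "text": "holiday campaign plan"},
]

def by_tag_prefix(prefix: str) -> list[int]:
    """Return ids of docs whose tags fall under a hierarchical prefix."""
    return [d["id"] for d in docs
            if any(t == prefix or t.startswith(prefix + "/") for t in d["tags"])]

print(by_tag_prefix("eng/payments"))  # [1, 2]
```

Nothing here is clever, and that is the point: a consistent tagging discipline applied now produces context that is structured, queryable, and, crucially, portable when the platforms arrive.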
Are you running a flywheel? Is there compound improvement in your AI systems, or are people just trying things and seeing what works? Is retrieval getting better? Is execution getting more reliable? Are you building agentic systems that grow across teams? If the answer to most of these is "I don't know," that conversation needs to happen — and not just at leadership level. AI champions at every level of the organisation need a voice.
What is your switching cost? If you're capturing 20-30% of your organisation's understanding in a system today, think about portability. If OpenAI or Anthropic offers you a beta in twelve months, how much work would it be to migrate? How portable is your context? Do you want your system of record in OpenAI's infrastructure, or are you in a sensitive industry where that's a non-starter? These questions determine whether you should invest more in your own context layer now or wait for the platforms to mature.
The game hasn't been won. The pieces are on the board, the clock is running, and most of us are staring at the wrong chess piece. Don't be distracted by GPT-5.4 leaks or model release dates. The race that matters is the race for comprehension lock-in. And it's already started.
The businesses that start building their own context layers now — however primitive — will be in a fundamentally stronger position when the platforms mature. They'll know what they need from a context platform because they'll have experienced the value of even partial synthesis. They'll have structured their organisational knowledge in ways that make migration possible. And they'll have built the internal muscle for evaluating which platform actually delivers on the compound bet.
Everyone else will be signing enterprise agreements in 2028 and hoping the sales pitch matches reality. By then, the early movers will be six months ahead on the flywheel — and in the context game, six months of accumulated understanding is a lifetime.