Your Commerce Team Has Amnesia — And It's Costing You More Than You Think

Every ecommerce team re-explains itself to AI daily. The ones building agent-readable memory systems are compounding an advantage that widens every week.

32 min read

Published 2 March 2026

The Expensive Habit Nobody Talks About

There is a ritual happening inside every ecommerce operation in the country right now. It happens dozens of times a day. It costs nothing in direct spend, and it is bleeding teams dry.

Someone opens ChatGPT. They type three paragraphs of context. The brand voice. The product positioning. The seasonal campaign details. The customer segment they're targeting. The constraints — budget ceiling, platform limitations, the fact that their Shopify theme doesn't support X. They get a decent answer. They close the tab. Tomorrow, they'll type it all again.

A Harvard Business Review study found that digital workers toggle between applications roughly 1,200 times per day. Each toggle seems trivial. Collectively, it is devastating. But here's what the productivity discourse misses entirely: this isn't a personal efficiency problem. In commerce, it's a structural competitive disadvantage that compounds in the wrong direction.

The individual productivity community has been obsessing over "second brain" systems for the better part of a decade. Notion databases. Obsidian vaults. Apple Notes. Evernote graveyards. These tools were built for the human web — pages, folders, toggles, cover images. Beautiful for you. Functionally useless for the AI agents that are about to reshape how commerce actually operates.

And that gap — between human-readable notes and agent-readable knowledge infrastructure — is where the real money is being made and lost right now.

The Walled Garden Memory Tax

Let's talk about what the major AI platforms have actually built when they say "memory." Claude has memory. ChatGPT has memory. Grok has memory. Google has memory. Every one of them is a walled garden designed to create lock-in, and they're not shy about it.

Claude's memory doesn't know what you told ChatGPT. ChatGPT's memory doesn't follow you into Cursor. Your phone app doesn't share context with your coding agent. You haven't got a memory system. You've got five separate piles of sticky notes on five separate desks.

For individual users, this is annoying. For commerce teams, it's catastrophic. Think about the knowledge that lives inside an ecommerce operation of even modest size:

  • Product taxonomy decisions made six months ago that nobody documented

  • The reason you chose a specific shipping carrier for oversized items

  • Customer complaint patterns that informed your returns policy

  • Why the last Black Friday campaign underperformed (the real reason, not the deck)

  • Which supplier lead times are lies and which are accurate

  • The specific Klaviyo flow that drove 34% of Q4 email revenue

All of that knowledge exists. Some of it is in people's heads. Some is in Slack threads from 2024. Some is in a Google Doc that three people have access to and nobody remembers the title of. Almost none of it is in a format that an AI agent could access, search by meaning, and use to make a genuinely informed recommendation.

The memory problem isn't that AI forgets. The memory problem is that your organisation never made its knowledge agent-readable in the first place.
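
To make the distinction concrete, here is a minimal sketch of what an agent-readable knowledge record could look like, as opposed to a sentence buried in a Slack thread. The `make_note` helper and every field name in it are hypothetical; the point is structured, queryable fields plus a slot for the embedding, not this exact shape.

```python
import json
from datetime import datetime, timezone

def make_note(text: str, tags: list[str], author: str) -> dict:
    # Hypothetical record shape: structured fields a machine can filter
    # and search, rather than free-form prose buried in a chat thread.
    return {
        "text": text,
        "tags": tags,
        "author": author,
        "created_at": datetime.now(timezone.utc).isoformat(),
        "embedding": None,  # filled in later by an embedding model
    }

note = make_note(
    "Moved oversized items to a different carrier after repeated damage claims",
    tags=["shipping", "decision"],
    author="ops",
)
print(json.dumps(note, indent=2))
```

Once decisions are captured in a shape like this, "search by meaning" and "load into any agent" become engineering problems rather than archaeology.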

There's already a whole new category of products emerging in early 2026 specifically because platforms refuse to solve the cross-tool memory problem. Mem0, MemSync, OneContact — the problem is real enough to spawn an entire VC-backed industry. But handing your organisational knowledge to yet another SaaS platform that can reprice, pivot, or disappear? That's the same trap wearing a different hat.

Why Note-Taking Tools Were Never Going to Cut It

The internet is forking. There's the human web — fonts, layouts, the things you're reading right now. And there's the emerging agent web — APIs, structured data, machine-to-machine readability. That fork is happening to your internal knowledge systems too, whether you've noticed or not.

Your Notion workspace is built for human eyes. Pages, databases, views, toggles. It's beautiful to navigate. It's also nearly useless for an AI agent that needs to search by semantic meaning rather than folder structure. Your Confluence wiki is worse — a graveyard of outdated process documents that nobody trusts and nobody updates. Your Slack history is a firehose of context with zero semantic structure.

These tools were designed in the 2010s for humans to browse, organise, and read. They were never designed with the expectation that AI agents would query them. The AI features being bolted on now — "chat with your docs," "AI search" — are plasters on a structural wound. You get one AI that can kind of search one app. What about the other twelve tools your commerce team uses every week?

The entire "second brain" movement, for all its merits, was reaching for something that required a fundamentally different layer: infrastructure built for the agent web, not the human web.

Commerce teams that understand this distinction early are going to be the ones that survive the next three years. The rest will keep re-explaining themselves to every new tool, every new hire, and every new AI model that comes along.

What Agent-Readable Actually Means for Commerce

Here's the architecture that matters, stripped of the productivity-guru aesthetics. Your organisational knowledge needs to live in a proper database — not a pretty app, a database. Postgres. The most boring, battle-tested technology you can imagine. Postgres isn't chasing a growth metric. It isn't VC-backed or racing toward a unicorn valuation. It just stores data reliably, and it has been doing so for decades.
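
As a sketch, the Postgres side of such a system can be surprisingly small. The schema below assumes the open-source pgvector extension for storing and searching embeddings; the table and column names are illustrative, not a prescribed design.

```sql
-- Illustrative only; assumes the pgvector extension is available.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE team_notes (
    id         bigserial PRIMARY KEY,
    content    text NOT NULL,
    tags       text[] NOT NULL DEFAULT '{}',
    created_at timestamptz NOT NULL DEFAULT now(),
    embedding  vector(1536)  -- dimensionality depends on your embedding model
);

-- Semantic retrieval: rank notes by distance to the query's embedding ($1),
-- using pgvector's nearest-neighbour operator.
SELECT content
FROM team_notes
ORDER BY embedding <-> $1
LIMIT 5;
```

That is essentially the whole storage layer: one table, one extension, one query shape.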

Every piece of knowledge your team captures gets converted into a vector embedding — a mathematical representation of what it means. This is immediately, natively AI-readable. When someone asks "what was our strategy for reducing returns on fragile items last quarter," the system can find the relevant decision even if nobody ever used the word "strategy" or "fragile" in the original note. That's semantic search. It's a fundamentally different universe from keyword matching.
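
Semantic search is easier to demystify with a toy. The sketch below substitutes a hash-based bag-of-words "embedding" for a real embedding model, so it only matches on shared vocabulary rather than meaning, but the retrieval mechanics are the same shape: embed everything once, embed the query, rank by cosine similarity. The notes and queries are invented examples.

```python
import hashlib
import math

def toy_embed(text: str, dims: int = 64) -> list[float]:
    # Toy stand-in for a real embedding model: each word is hashed into
    # one of `dims` buckets. A real model would capture meaning, not
    # just vocabulary overlap.
    vec = [0.0] * dims
    for word in text.lower().split():
        bucket = int(hashlib.md5(word.encode()).hexdigest(), 16) % dims
        vec[bucket] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# A tiny "knowledge base" of operational notes.
notes = [
    "Chose a specialist carrier for oversized items after damage claims",
    "Black Friday campaign underperformed because inventory synced late",
    "The abandoned-cart email flow drove a third of Q4 email revenue",
]
index = [(note, toy_embed(note)) for note in notes]

def search(query: str, top_k: int = 1) -> list[str]:
    # Embed the query with the same function, then rank notes by similarity.
    q = toy_embed(query)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [note for note, _ in ranked[:top_k]]
```

Swap `toy_embed` for a real embedding model and the carrier note would surface even for a query like "returns on fragile goods" that shares no words with it; that leap from lexical to semantic matching is the entire point.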

Layer MCP (Model Context Protocol) on top, and suddenly every AI tool your team touches can read from and write to the same knowledge base. Anthropic's open-source protocol has become the USB-C of AI connectivity — one protocol, every tool, your data stays yours. Claude, ChatGPT, Cursor, whatever ships next month — they all speak MCP. OpenClaw alone has passed 190,000 GitHub stars and spawned over 1.5 million autonomous agents in a matter of weeks. The infrastructure is moving fast.
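
Wiring a knowledge base into an MCP-capable client is mostly configuration. As one example, Claude Desktop registers local MCP servers in a JSON config file; the server name and script path below are hypothetical placeholders for whatever process exposes your knowledge base as MCP tools.

```json
{
  "mcpServers": {
    "commerce-memory": {
      "command": "python",
      "args": ["memory_server.py"]
    }
  }
}
```

Once registered, any tool that server exposes (say, a semantic search over team notes) becomes callable from inside the chat, and any other MCP client can point at the same server. That is what "one brain, many tools" looks like in practice.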

The total infrastructure cost for this kind of system? Roughly 10 to 30 pence a month on free tiers. You'll spend more on the team's morning coffee run than on the system that could transform how your entire operation works with AI.

But the cost isn't the point. The point is what this enables. When every member of your commerce team — from the merchandiser adjusting product descriptions to the operations manager renegotiating supplier terms — can query the same living knowledge base from whichever AI tool they prefer, you stop losing decisions to the void. Nothing falls through the cracks because the cracks don't exist anymore.

The Compounding Advantage That Actually Matters

Consider two competing ecommerce operations. Both sell similar products. Both have access to the same AI models. Both employ smart people.

Team A opens Claude every morning, spends fifteen minutes explaining their brand, their current campaign, their inventory constraints, and the customer feedback pattern they noticed last week. They get reasonable outputs. They close the tab. Tomorrow, they start over. When they want to try GPT-4.5 for a different perspective, they lose all that context — not because the new model is worse, but because their context is trapped in the old one.

Team B opens Claude, and it already knows their brand guidelines, their active campaigns, their inventory position, their customer feedback trends, and the decisions they made last week. All of it lives in their agent-readable knowledge base, loaded via MCP before anyone types a word. When they switch to a different model for a second opinion, they get a different model but the same brain. Same context. Same answer quality. Every AI tool has the full picture.

Now compound that over six months. Every product decision Team B logs, every campaign post-mortem, every supplier negotiation note, every customer insight — it all becomes another node in a growing knowledge graph that every AI in the system can access. Team B's AI gets genuinely smarter over time because it has more context to work with. Team A's AI is resetting to zero every single session.

That's not a marginal efficiency gain. That's a structural competitive moat that widens every week. And it has nothing to do with which model is better or which platform is trendier. It comes down entirely to knowledge infrastructure.

According to the Financial Times, US productivity grew roughly 2.7% in 2025 — double the decade average, with a meaningful chunk attributed to AI adoption. But the organisations capturing that productivity aren't the ones with the fanciest subscriptions. They're the ones that restructured how they work with AI as a primary collaborator. You cannot collaborate with something that has no memory of you.

The Agency Angle Nobody's Discussing

If this matters for in-house commerce teams, it matters ten times more for agencies. And almost nobody in the agency world is talking about it.

The typical ecommerce agency manages a dozen or more client accounts simultaneously. Each client has their own brand voice, product catalogue, seasonal calendar, platform quirks, historical performance data, and accumulated institutional knowledge. Every time a strategist switches from Client A to Client B, they're performing a full context swap — reloading an entirely different world into their working memory.

Now multiply that by AI tools. Every time an agency strategist opens ChatGPT to draft copy for a client, they're re-explaining that client from scratch. Every. Single. Time. The cognitive overhead is staggering, and it's invisible in every timesheet and every scope of work.

An agency that builds agent-readable knowledge bases for each client account — accessible via MCP to every AI tool their team uses — fundamentally changes the economics of client service. A junior strategist with access to a well-structured knowledge base can produce work that previously required a senior account director's institutional memory. Not because the junior is suddenly more experienced, but because the context infrastructure does the heavy lifting that used to live exclusively in senior people's heads. The onboarding cost for new team members drops dramatically. The risk of losing institutional knowledge when someone leaves the business drops even further. The entire knowledge dependency chain shifts from fragile human memory to durable, queryable infrastructure.

That's not a nice operational improvement. That's a restructuring of the entire agency margin model. And the agencies that build this infrastructure first will be able to serve more clients at higher quality with leaner teams — which is exactly the competitive pressure that's about to reshape the agency world anyway.

The irony is thick: agencies are supposed to be the ones advising their clients on digital transformation. Meanwhile, most of them are running their own AI operations with the same sticky-note-on-five-desks approach as everyone else.

The Real Bottleneck Was Never the Model

There's a persistent belief in commerce technology circles that better AI models will solve operational problems. They won't. Model selection matters far less than memory architecture, and the gap between those two factors is growing, not shrinking.

Opus 4.6 shipped a few weeks ago. It's remarkable. But the best model in the world cannot compensate for an AI that doesn't know what your team has been working on, what you've already tried, what your constraints are, who your key suppliers are, or what you decided last Tuesday about the returns policy.

Tobi Lütke — the CEO of Shopify, a man who knows a thing or two about commerce operations — recently said that he thinks a lot of corporate politics amount to bad human context engineering. It's a provocative take, and it's more relevant here than he probably intended. When organisations build clean, accessible knowledge infrastructure, they reduce the politics that come from information asymmetry. When everyone — human and AI alike — has access to the same well-structured context, the information hoarding that powers office politics loses its currency.

Good context engineering for agents happens to produce good context engineering for people. The clarity that AI demands from your knowledge systems creates clarity for your human team too. That's not a side benefit. For many commerce operations drowning in Slack channels and undocumented tribal knowledge, it might be the primary benefit.

The career gap of this decade isn't going to be "uses AI" versus "doesn't use AI." It's going to be "has built persistent, searchable, agent-accessible knowledge infrastructure" versus "keeps re-explaining themselves in every chat window and wondering why AI still feels like a party trick." Same technology. Wildly different outcomes. The variable is your infrastructure.

Start Before It's Comfortable

If you're running a commerce operation or an agency and you're reading this thinking "we'll get to it when the tools mature," you're making the same mistake that companies made about mobile in 2010 and about social commerce in 2015. The tools are mature enough. Postgres has been mature for twenty years. Vector embeddings work. MCP is an open standard with broad adoption. The infrastructure layer is ready.

What isn't ready is most organisations' willingness to treat knowledge management as genuine infrastructure rather than a nice-to-have. The teams that start capturing, structuring, and making their operational knowledge agent-readable this quarter will have six months of compounding advantage by the time their competitors start thinking about it.

And that advantage is real. Every product decision documented. Every campaign post-mortem structured. Every supplier negotiation noted. Every customer insight captured. Each one makes the next AI interaction smarter, the next strategic recommendation more informed, the next new hire's onboarding faster. It compounds. It compounds in a way that no amount of model upgrading or prompt engineering can replicate.

Commerce teams have spent the last decade learning this lesson with customer data — the move from third-party cookies to first-party data, the importance of owning your customer relationship rather than renting it from Facebook. The same principle applies to organisational knowledge. Own it. Structure it. Make it agent-readable. Don't hand it to another middleman who will charge you for access to your own thinking.

The SaaS model for knowledge management has a fundamental misalignment of incentives. These companies need you to stay. They need your data to be sticky. They need switching costs to be high. That's not a conspiracy theory — it's their business model, and it's the same one that got you locked into separate memory silos in the first place.

The alternative is infrastructure you own outright. A Postgres database. Vector embeddings. An MCP server. No middlemen that can break or reprice or vanish. One brain that every AI you use can plug into. The technology is boring. The monthly cost is negligible. The competitive advantage is enormous.

The people who solve the memory problem — for themselves, for their teams, and for the agents that are rapidly becoming their most productive colleagues — will have a compounding advantage that widens every single week. The people who keep starting from zero will keep wondering what they're missing.

They're missing the infrastructure. They always were.
