Your Company Has the Same Problem as a Kid Who Can't Do Long Division
Schools skipping the foundation create dependent learners. Businesses skipping it with AI create dependent organisations. The psychology is identical.

Psychologists use the term learned helplessness for what happens when someone's efforts repeatedly make no difference, so they stop trying. A close cousin sets in when someone repeatedly has tasks handled for them: the neural pathways that would have managed those tasks don't develop, or if they existed, they weaken. The offloading becomes dependence. The dependence becomes helplessness. It happens gradually. Not a dramatic collapse, but a quiet erosion of capability from never needing to exercise a skill.
That paragraph isn't about children. It's about your merchandising team.
In the 1970s, electronic calculators became affordable and the education establishment panicked. Calculators in classrooms were considered cheating, full stop. They would destroy children's ability to do arithmetic. They would produce a generation incapable of mathematical thought. Schools banned them. Parents protested them. The debate consumed education policy for over a decade.
We know how that ended. Calculators didn't destroy mathematical thinking. They changed what mathematical thinking meant. When a student didn't need to spend twenty minutes on long division, they could spend that time on proportional reasoning, algebraic thinking, problem decomposition — the concepts long division was supposed to serve. The tool freed the learner from the mechanical to engage with the meaningful.
But here's the part of that story that gets conveniently omitted by everyone selling AI tools to businesses today: the transition worked because students still learned the mechanics first. They understood what the calculator was doing. They could estimate whether an answer was reasonable. They could catch errors. They had the foundation, and the tool extended it.
The parents who said calculators would make kids stupid were wrong. But the parents who might have said "just give them calculators and skip the maths" would have been catastrophically wrong too. The right answer turned out to be both: build the foundation, then give them the tool.
Commerce is in that calculator moment right now, and most organisations are making one of the two wrong choices. They're either banning AI from workflows — the 1975 school board approach — or they're deploying it everywhere without ensuring anyone understands the fundamentals underneath. Both paths lead to the same place: organisations that can't function properly.
There's an insight from the AI agent space that should be tattooed on every ecommerce director's forearm: the quality of an AI's output is determined, more than anything else, by the quality of the human specification that directs it.
An autonomous agent recently discussed by Nate B Jones negotiated $4,200 off a car purchase while its owner was in a meeting. A different agent, built on the same technology with the same architecture, sent 500 unsolicited messages to a developer's friends, family, and wife. Same tools. Same capabilities. The difference was the human's ability to specify: clear objectives, defined constraints, bounded communication channels versus broad access and vague boundaries.
This maps onto ecommerce operations with uncomfortable precision. Consider what happens when a merchandising team deploys an AI-powered product description generator. The team that understands brand voice, conversion psychology, SEO mechanics, and customer intent — the team that could write decent product descriptions without AI — produces specifications that generate excellent output. They know what good looks like. They catch when the AI drifts off-brand. They can articulate exactly what needs fixing when the output isn't right.
The team that never properly learned those fundamentals, that was hired into a world where "the AI handles copy," produces specifications that are vague, contradictory, or simply absent. The AI dutifully generates mediocre descriptions that nobody on the team recognises as mediocre, because nobody has the foundation to evaluate the output. They can't catch errors they don't understand. They can't improve specifications they don't know are weak.
This isn't hypothetical. It's happening across thousands of ecommerce operations right now, and it extends far beyond copywriting into pricing, inventory forecasting, customer segmentation, campaign management, and every other function where AI is being deployed without foundational competence.
A Harvard study published in Nature found that students using AI tutors learned more than twice as much material in less time than students in traditional lecture settings. Combine human teachers with AI tutoring and knowledge transfer doubles. That's a massive effect.
But notice the structure of that finding. The best results came from human-AI collaboration, not from replacing the human with AI. The human needed to bring something to the collaboration. That something was the foundation — the domain knowledge, the ability to evaluate, the capacity to direct.
Now apply that to agencies. The agencies seeing genuine productivity gains from AI — 30%, 40%, in some cases dramatically more — are not the ones that bought Copilot licenses for everyone and held a lunch-and-learn. They're the agencies where every person deploying AI already understands the discipline they're automating. The senior developer who knows architectural patterns well enough to spot when Claude produces a subtly flawed component hierarchy. The strategist who understands customer psychology well enough to recognise when an AI-generated campaign targets the wrong emotional trigger. The analyst who grasps statistical methodology well enough to catch when an AI draws a conclusion from insufficient data.
The agencies that are struggling — and quietly, many are — deployed AI tools to people who lacked that foundation. Not because those people are incompetent, but because the agency never invested in building the foundation first. They went straight to the calculator without teaching the arithmetic. And now they're producing work that's faster but worse, delivered with high confidence and low quality, and the clients are starting to notice.
McKinsey's latest State of AI report shows that 72% of organisations have now adopted AI in at least one business function, up from 55% just a year earlier. But here's the figure that doesn't make the headline: only 26% of those organisations report capturing significant value from their AI investments. The gap between adoption and value isn't a technology problem. It's a foundation problem.
College professors are describing a phenomenon they've never seen before. Students are arriving who can no longer read a full chapter. Who can no longer synthesise an argument from multiple sources. Who can't sit with a difficult text long enough to extract meaning from it. Writing quality hasn't just declined — it's collapsed. Not solely because students submit AI-generated work, though many do, but because even students not actively using AI have lost the habit of struggling through a draft. The muscle has atrophied before anyone noticed it was weakening.
If you think this is limited to universities, walk through your marketing department. How many people on your team can write a compelling product description from scratch, without AI assistance, right now? How many can build a coherent campaign strategy on a whiteboard without reaching for a prompt? How many can diagnose why a landing page isn't converting by reading the page rather than feeding it into a tool?
The honest answer, for most ecommerce operations, is fewer than last year. And fewer last year than the year before. This isn't because you hired worse people. It's because the environment systematically discourages the effortful practice that builds expertise. Why spend an hour crafting a product description when you can generate forty in ten minutes? Why learn to read analytics dashboards when you can ask the AI to summarise them? Why develop an intuition for customer behaviour when the model will segment your audience for you?
Each individual shortcut is rational. Collectively, they're hollowing out the organisational capability that makes those tools work properly. You're creating a team that depends on AI but can't direct AI, because directing AI requires the very expertise that AI made it easy to skip developing.
It's a dependency spiral, and it's silent. Nobody sends a memo announcing that the team has lost the ability to evaluate AI output. It manifests as slowly declining quality that everyone attributes to other causes — "the market's saturated," "customers are more demanding," "the algorithm changed" — because nobody can identify what's actually happening: the humans in the loop are losing the judgment that made them useful.
There's a beautiful example of what happens when children learn to code with AI. A child building a video game typed "add enemies." The AI added enemies — enemies that spawned off-screen, moved in the wrong direction, and couldn't be hit. "It doesn't work," the child said. After a conversation about what she actually wanted the enemies to do, she typed: "Add three enemies that spawn from the right side of the screen, move them left at medium speed, and make them disappear when the player touches them." Suddenly she got the behaviour she was looking for.
That child wasn't debugging code. She was debugging her own intent. And that little interaction taught her more about specification quality than any lesson that could have been scripted.
This is precisely the skill most ecommerce teams lack, and it's the skill that separates organisations that extract value from AI from organisations that just generate volume. When your merchandising team prompts an AI to "create holiday campaign copy," they're the child typing "add enemies." When they prompt it to "write three email subject lines for our Boxing Day sale targeting repeat customers who purchased outerwear in Q3, emphasising urgency without discount dependency, maintaining our brand voice of understated confidence," they're the child who figured out what she actually wanted.
The difference between those two prompts isn't technical sophistication. It's domain expertise. The second prompt can only be written by someone who understands customer segmentation, campaign psychology, brand positioning, and email marketing mechanics. Someone who has the foundation.
You cannot shortcut your way to that kind of specification. It comes from doing the work — manually, tediously, with all the friction and failure that builds genuine understanding. It comes from writing a hundred product descriptions by hand before you ever ask an AI to write one. It comes from building campaigns that fail and understanding why they failed. It comes from reading your own analytics until the patterns become intuitive, not just legible.
The struggle is the point. That's the sentence that should be plastered across every AI implementation deck in every boardroom in the country. The struggle is the point, because the struggle is where expertise is forged, and expertise is what makes AI useful rather than dangerous.
Schools are spending millions on AI detection software — tools that claim to identify whether a student's work was written by AI. Andrej Karpathy, Tesla's former head of AI and one of the architects of the deep learning revolution, has said plainly: you will never be able to detect the use of AI in homework, full stop. The arms race between AI writing detection and AI writing generation was over before it started. The tools being sold to schools are mathematically incapable of delivering what they promise, and students are being expelled based on heuristics that don't work.
Commerce has its own version of this futile detection game. Managers trying to monitor whether teams are "using AI properly" or "relying on it too much." Quality gates designed to catch AI-generated content without addressing why the content needs catching in the first place. Process controls layered on top of process controls, each one attempting to solve a problem created by the previous one.
The educational answer isn't better detection — it's a fundamental rethinking of what you're measuring and why. The same is true in business. Instead of asking "did the team use AI for this campaign," ask "does the team understand why this campaign works?" Instead of checking whether the product descriptions were AI-generated, check whether anyone on the team can explain the conversion psychology embedded in them. Instead of monitoring tool usage, evaluate outputs — and evaluate them with people who have enough domain expertise to tell the difference between good and good-enough.
The organisations that get this right aren't investing in detection. They're investing in foundation. They're making sure every person who deploys AI in a customer-facing function can do that job adequately without AI first. Not because they'll ever need to work without AI — they won't — but because the ability to work without it is exactly what makes working with it effective.
Karpathy founded Eureka Labs with a stated goal that should become the standard for every ecommerce organisation building AI capability: raise people who are proficient in the use of AI but can also exist without it. Not one or the other. Both.
That formulation is deceptively precise. "Proficient" means genuinely skilled with the tools — not afraid of them, not tentative, not using them for party tricks. "Can also exist without" means the foundation is there. The person understands the domain deeply enough that AI extends their capability rather than substituting for it.
In practice, for an ecommerce operation, this means your buyer should be able to construct a competitive pricing strategy on a spreadsheet before you give them AI-powered pricing optimisation. Your content team should be able to write conversion copy that performs before you hand them generation tools. Your analysts should be able to build a customer segmentation model manually before you deploy algorithmic clustering.
Not because the manual approach is better — it isn't, and nobody should use it in production. But because the manual approach is where domain expertise is built, and domain expertise is the only thing that makes the automated approach work properly. You need to know what long division is doing before the calculator becomes a power tool instead of a crutch.
The organisations that will dominate ecommerce in the next five years won't be the ones with the most advanced AI. They'll be the ones with the deepest human foundations — teams that understand commerce, customers, and conversion at a level that allows them to specify, direct, and evaluate AI rather than being directed by it. The AI is the amplifier. The human foundation is the signal. An amplifier with no signal produces noise. Impressive, expensive noise — but noise.
There's a formulation gaining traction among researchers that describes the defining competence of the AI age: not what you know, not what the machine knows, but your capacity to move between the two. That capacity means strategically allocating cognitive effort, coordinating AI-assisted tasks, and evaluating results against your own understanding.
In practice, it's the difference between a merchandiser who asks AI to generate a product taxonomy and accepts whatever comes back, and a merchandiser who drafts a taxonomy structure, uses AI to identify gaps and inconsistencies, strengthens the weak categories with her own product knowledge, and produces something neither she nor the AI would have created alone. The first merchandiser completed a task. The second one created something genuinely better and learned something in the process.
Same tool. Different metacognitive skill. The difference isn't taught in an onboarding deck or a prompt engineering workshop. It's taught through years of doing the work, building the judgment, developing the taste that allows you to know what good looks like when the machine can produce infinite mediocrity at zero cost.
A METR study found experienced developers using AI tools completed tasks 19% slower than those working without them — while believing they were 24% faster. They weren't just wrong about the magnitude; they were wrong about the direction. That's not a technology failure. That's a foundation failure. The developers hadn't rebuilt their workflows around the tools; they'd bolted AI onto existing processes and assumed the resulting friction was productivity.
Agency, taste, and specification quality — these are the human skills that determine whether AI makes your organisation smarter or just faster at producing average work. And none of them can be developed by people who never built the foundation.
According to Harvard Business Review's research on AI in customer service, the organisations seeing genuine returns aren't the ones with the most sophisticated AI — they're the ones where human agents understand their domain well enough to supervise, correct, and enhance AI outputs. The pattern is consistent across every industry vertical: AI extends expertise. It doesn't replace it.
The calculator didn't ruin mathematics. But if we'd given every child a calculator in Year 1 and never taught them what multiplication means, we'd have produced a generation that could press buttons without understanding what the buttons did. That's what most ecommerce organisations are building right now: teams that can press buttons. The ones that invest in the foundation — that insist on understanding before automation, expertise before efficiency, judgment before speed — will be the ones that survive. The rest will produce faster, cheaper, and increasingly irrelevant work until the market stops pretending not to notice.
The struggle is the point. For your children. For your teams. For your business. Build the foundation first, or don't bother building at all.