The VC Playbook Doesn't Work When the Product Improves Every 90 Days

Traditional startup investing assumes stable technology. AI breaks that assumption completely.

9 min read

The 90-Day Obsolescence Cycle

A venture capitalist told me last week that his fund passed on an AI startup in November. By January, the startup had pivoted twice and was now building something entirely different — using capabilities that didn't exist when the pitch deck was written.

This is the new normal. The product your portfolio company ships today will be architecturally obsolete by the time the board meets next quarter. Not because the team is bad. Because the foundation they're building on literally changes underneath them every 90 days.

Consider the timeline: GPT-4 launched in March 2023. Claude 3 followed in March 2024. By February 2025, each had been superseded several times over. A startup that raised on a GPT-4 wrapper in April 2023 had its core value proposition commoditised before its Series A closed.

Why Pattern Matching Fails

VC pattern matching works when technology is stable. You see a team that looks like the last successful team, building in a market that looks like the last successful market. The playbook is proven: find product-market fit, scale, capture margin.

In AI, the "market" is a moving target. A feature that required a 50-person engineering team six months ago now ships as an API call. The moat you funded — proprietary data, custom models, domain expertise — gets commoditised between funding rounds.

The pattern matching problem runs deeper than just speed. VCs are trained to evaluate markets by TAM, competition, and timing. In AI, the TAM for any specific application is unknowable because the capabilities that define the market didn't exist last quarter. The competition includes foundation model providers who can build any vertical app as a feature. And the timing is impossible to judge because "too early" and "too late" are separated by weeks, not years.

Traditional due diligence asks: "Is this a $10 billion market?" In AI, the honest answer is: "It was last month. No idea about next month."

The Benchmark Trap

One of the more dangerous patterns in AI investing is benchmark worship. A startup raises because its fine-tuned model scores 5% better than GPT-4 on a specific benchmark. Six months later, the next frontier model beats it by 20% out of the box.

The model performance benchmarks that VCs love — MMLU, HumanEval, whatever the flavour of the month is — are the wrong thing to optimise for. They measure a snapshot of capability in a domain where capability doubles annually. Investing based on benchmark scores is like investing in a car company because they have the fastest car today, in a market where engine technology improves every quarter.

The startups that survive aren't the ones with the best benchmarks. They're the ones with the best feedback loops from actual users doing actual work.

The New Evaluation Framework

The investors who are winning aren't the ones picking the best current product. They're the ones evaluating adaptability. Can this team rebuild their core product in two weeks when the next frontier model drops? Can they maintain margins when their key differentiator becomes a checkbox feature?

The answers to these questions have nothing to do with traditional due diligence. They're about engineering culture, architectural decisions, and the founder's willingness to kill their own product before someone else does.

Specifically, smart investors now look for three things (a sketch of the first two follows this list):

- Model-agnostic architecture: the team can swap providers in days, not months.
- Thin wrapper, thick workflow: the value lives in the orchestration, not the model call.
- Usage data that compounds: every customer interaction makes the product better in ways a foundation model can't replicate.
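To make the first two criteria concrete, here's a minimal sketch of what that boundary looks like, assuming a Python stack. Every name here (CompletionProvider, OpenAIAdapter, run_workflow) is illustrative, not any particular company's code, and the vendor calls are stubbed rather than real SDK invocations.

```python
# A minimal sketch of a model-agnostic boundary: the application depends
# on one narrow interface, and each vendor hides behind an adapter.
# All names are hypothetical; real vendor calls are stubbed out.
from typing import Protocol


class CompletionProvider(Protocol):
    """The only surface the rest of the product is allowed to touch."""

    def complete(self, prompt: str) -> str:
        ...


class OpenAIAdapter:
    def complete(self, prompt: str) -> str:
        # Real code would call the vendor SDK here; stubbed for the sketch.
        return f"[openai] {prompt}"


class AnthropicAdapter:
    def complete(self, prompt: str) -> str:
        return f"[anthropic] {prompt}"


# Swapping providers becomes a config change, not a rewrite.
PROVIDERS: dict[str, CompletionProvider] = {
    "openai": OpenAIAdapter(),
    "anthropic": AnthropicAdapter(),
}


def run_workflow(provider_name: str, user_input: str) -> str:
    # The "thick workflow" lives here: retrieval, validation, retries,
    # and the usage logging that compounds. The model call stays thin.
    provider = PROVIDERS[provider_name]
    return provider.complete(user_input)


if __name__ == "__main__":
    print(run_workflow("anthropic", "Summarise this contract."))
```

The point of the sketch isn't the code, it's the shape: when the next frontier model drops, a team built this way writes one adapter and changes one config line, which is exactly the two-week rebuild capability described above.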

Related: The VC Playbook for AI Is Broken

Related: Your Favourite AI Startup Will Be Dead in 18 Months

What Smart Money Is Actually Doing

The best AI investors I know have stopped making 18-month bets. They're writing smaller cheques, more often, with shorter decision cycles. They're treating AI investments like options, not equity. And they're spending more time in GitHub repos than in pitch meetings.

One prominent fund now requires every AI portfolio company to demonstrate a "90-day pivot capability" — the ability to fundamentally rebuild their product within a single quarter if the underlying technology shifts. Companies that can't demonstrate this don't get follow-on funding. Period.

The old playbook said: find a market, build a moat, scale. The new playbook says: find a team that can surf a wave that hasn't formed yet. Good luck pattern-matching that.

Related: The Solo Founder With 12 AI Employees Just Raised at $50M. Here's Why That's Normal Now.
