
The Capability Overhang: AI Can Already Do More Than You Think


There is a growing gap between what AI systems can do and what most people believe they can do. Sam Altman has called this "capability overhang" — the idea that deployed models already possess abilities that the majority of their users have never tested, never imagined, and in many cases wouldn't believe if you showed them.

This isn't a future problem. It's a present one. And it may be the single largest bottleneck to AI adoption in the enterprise.

The Gap Is Not Technical


The standard narrative about AI adoption goes something like this: models need to get better, hallucination rates need to drop, trust needs to build, and then — gradually — organizations will start using AI for real work.


That narrative is wrong, or at least incomplete. The models are already capable of far more than the tasks most organizations ask of them. The bottleneck isn't capability. It's imagination. It's the human side of the equation — people not knowing what to ask for, not knowing what's possible, and defaulting to using a frontier model as a slightly faster search engine.

What Overhang Looks Like in Practice


I see this constantly in enterprise software. A team will adopt an AI tool, use it for the most obvious task — summarizing documents, drafting emails, answering FAQ-style questions — and then plateau. They never discover that the same model can analyze a complex contract against a set of business rules, generate a working prototype from a verbal description, or restructure an entire data pipeline.


The capability was always there. Nobody asked.


Andrej Karpathy has made a similar observation about LLMs specifically: people dramatically underestimate what these models can do because they approach them with the mental model of previous software. They expect rigid input/output patterns. They don't realize they're interacting with something that can reason, plan, and adapt — within limits, but far beyond what most users ever test.

Why This Matters for Enterprises


In an enterprise context, capability overhang creates a specific kind of risk: your competitors might figure out what these tools can actually do before you do. Not because they have better models — everyone has access to roughly the same frontier models — but because someone on their team had the curiosity or the background to ask the right question.


David Sacks has argued that AI is compressing the timeline between "possible" and "deployed" in ways that favor organizations with strong technical taste — people who can look at a new capability and immediately see how it maps to a real business problem. That's a human skill, not a technical one. And it's in short supply.

The Role of the Solutions Engineer


This is where I think solutions engineering becomes unexpectedly relevant to the AI conversation. The core skill of an SE — understanding a customer's problem deeply enough to map it to a technical capability they didn't know existed — is precisely the skill that closes the capability overhang.


Every discovery call I've ever run is, at its core, an exercise in bridging a gap between what someone thinks is possible and what actually is. The technology has changed. The human dynamic hasn't.

Closing the Gap


There are a few things that help:


  • Hands-on experimentation. Not watching demos — actually using the tools on real problems. The overhang shrinks fastest when people experience capabilities firsthand rather than hearing about them secondhand.

  • Cross-functional exposure. The person most likely to discover a novel AI application is often not the one you'd expect. A product manager who understands workflow pain points may see an application that a data scientist focused on model architecture would miss entirely.

  • Intellectual honesty about defaults. Most people, when they encounter a new tool, try the most conservative possible use case first. That's rational — but it means the most valuable capabilities are systematically the last ones discovered.
The Overhang Is the Opportunity


The uncomfortable truth is that the most transformative applications of current AI models probably haven't been built yet — not because the models aren't ready, but because the right person hasn't yet asked the right question.


That's not a technology problem. It's a human capital problem. And it suggests that the organizations best positioned to benefit from AI aren't necessarily the ones with the biggest data teams or the most GPUs. They're the ones with people who are deeply curious, technically literate enough to prototype, and close enough to real problems to see where capability meets need.


The overhang is real. The question is whether you're the one closing it or the one being left behind by it.