The Equation
One thesis. Three levels. Pick yours.
What Makes an Expert Human
The $50 billion blind spot in artificial intelligence
The equation everyone gets wrong
Every AI interaction on Earth is governed by one equation:
Expert Human + Expert AI = Outcomes
The entire AI industry is focused on the right side. Better reasoning. Larger context windows. Multi-modal understanding. Over $50 billion in venture capital aimed at making AI more expert.
The left side has received almost no systematic investment. There is no infrastructure for making humans more expert at using AI. There is no venture category for it. There is no research agenda.
And yet the human side is the binding constraint.
Model intelligence is advancing on a predictable trajectory. Human expertise in using these models is not advancing at all. Most people interact with Claude or GPT the same way they interacted with the first version: shallow prompts, no context, no domain grounding. They get polished text. They do not get results.
The bottleneck is no longer the model. It is everything that happens before the model receives its input.
What "Expert Human" actually means
An Expert Human is not someone who writes better prompts. It is someone who has engineered three things before they ever touch a model.
Layer 1: The hidden playbook. Know the game.
Every professional domain runs on unwritten rules. Things that experienced practitioners know instinctively but that are never documented anywhere.
In fundraising: warm introductions convert at 5-10x the rate of cold emails. Investors who say "let me think about it" are usually saying no. You never send a pitch deck before establishing interest. Valuation conversations happen after traction conversations, not before. Angels decide in 48 hours. Institutional VCs take 4 weeks and need committee approval. Associating with the wrong early investor creates a signalling problem that poisons your next round.
A first-time founder doesn't know these rules. Neither does the AI.
When someone asks Claude to "write an investor email," the model produces a competent email. But competent by what standard? Without the domain rules loaded, the email might violate half a dozen norms that any experienced fundraiser knows in their sleep.
The hidden playbook covers:
- The rules nobody writes down. Don't send decks before establishing interest. Don't ask for feedback when you mean investment. Portfolio conflicts are an automatic disqualifier.
- The players and how they behave differently. Angels evaluate on personal chemistry. Micro-VCs on thesis alignment. Institutional funds run formal 8-week processes.
- The timing that kills deals. VC deployment cycles. End-of-fund pressure. The difference between January outreach and August outreach.
- The traps that look like opportunities. Premature valuation anchoring. Over-optimising for terms at the expense of partner quality.
This is what separates a $500-per-hour advisor from a generic AI assistant. The advisor has the playbook. The AI doesn't. Until you give it one.
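To make the idea concrete, here is a minimal sketch of how a hidden playbook could be encoded as structured data instead of living only in an expert's head. The `PlaybookRule` and `DomainPlaybook` classes and the example rules are hypothetical and illustrative, not a description of any particular product.

```python
from dataclasses import dataclass, field


@dataclass
class PlaybookRule:
    """One unwritten rule of the domain, made explicit for the model."""
    rule: str            # the norm itself
    violation_cost: str  # what happens when it is broken


@dataclass
class DomainPlaybook:
    """Hypothetical container for a domain's hidden playbook."""
    domain: str
    rules: list[PlaybookRule] = field(default_factory=list)

    def to_context(self) -> str:
        """Render the rules as grounding context for a model call."""
        lines = [f"Domain: {self.domain}. Non-negotiable norms:"]
        lines += [f"- {r.rule} (if broken: {r.violation_cost})" for r in self.rules]
        return "\n".join(lines)


fundraising = DomainPlaybook(
    domain="early-stage fundraising",
    rules=[
        PlaybookRule("Never send a deck before establishing interest", "silent pass"),
        PlaybookRule("Traction conversations come before valuation conversations",
                     "premature valuation anchoring"),
    ],
)
print(fundraising.to_context())
```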
Layer 2: The living profile. Know the player.
The playbook is shared across everyone in a domain. Your context is unique to you.
Who are you? What's your company's traction? What's your communication style? What are your strengths and weaknesses? Who do you know? What have you already tried? What does your voice actually sound like when you're not performing?
Here's a concrete example. Two founders both ask AI to draft an investor email. Founder A has $1.2M in ARR and three warm connections. Founder B has zero revenue and zero connections. The domain rules are identical. But the correct email for each founder is completely different. Proof points, tone, sequencing: all divergent. Same playbook, different player.
This is not a bio or a preferences file. It is a living, evolving map of who you are.
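As a sketch, assuming the profile is simply structured data the system keeps revising, it might look something like this. The `FounderProfile` fields and example values are invented for illustration.

```python
from dataclasses import dataclass, field


@dataclass
class FounderProfile:
    """Illustrative living profile: facts the system keeps learning and revising."""
    name: str
    traction: dict[str, str] = field(default_factory=dict)  # e.g. {"ARR": "$1.2M"}
    warm_connections: list[str] = field(default_factory=list)
    voice_notes: list[str] = field(default_factory=list)     # observed style, not a bio

    def learn(self, note: str) -> None:
        """Every interaction can add to the map of who this founder is."""
        self.voice_notes.append(note)


founder_a = FounderProfile(
    name="Founder A",
    traction={"ARR": "$1.2M"},
    warm_connections=["three warm investor introductions"],
)
founder_a.learn("prefers direct, numbers-first phrasing; softens aggressive language")
```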
Layer 3: The ask. Press go.
Only after the playbook and the profile are loaded does the prompt matter.
And here's the counterintuitive finding: when the first two layers are deep enough, the prompt barely matters. You can type six words and get expert output.
"Prep me for the Pale Blue Dot meeting."
Prompt engineering is the tip of the iceberg. The playbook and the profile are the 90% below the waterline. When they're deep enough, prompt engineering gets absorbed by the infrastructure. It disappears.
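In practice, the three layers could compose into a single model request along the lines of the sketch below, where the playbook and profile strings stand in for the deep context built above. The `build_request` helper and the message format are assumptions for illustration, not a specific vendor's API.

```python
def build_request(playbook_context: str, profile_context: str, ask: str) -> list[dict]:
    """Minimal sketch: a six-word ask riding on top of two deep context layers."""
    return [
        {"role": "system", "content": playbook_context},  # Layer 1: the hidden playbook
        {"role": "system", "content": profile_context},   # Layer 2: the living profile
        {"role": "user", "content": ask},                  # Layer 3: the ask itself
    ]


messages = build_request(
    playbook_context="Domain: early-stage fundraising. Never send a deck cold. ...",
    profile_context="Founder A: $1.2M ARR, three warm connections, direct voice.",
    ask="Prep me for the Pale Blue Dot meeting.",
)
```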
What breaks when you skip layers
Each missing layer produces a specific failure with a recognisable personality.
The PR Agency (Prompt only). The AI produces polished, professional, utterly generic text. Sounds like a press release generated by a content mill. Technically flawless. Strategically empty. Deleted in three seconds.

The Authentic Fool (Profile + Prompt). Genuinely sounds like the founder, but violates the unwritten rules of the game: discloses valuation before establishing interest, targets the wrong archetype. Ignored politely.

The $500/hr Advisor (Playbook + Profile + Prompt). Output that is personally authentic AND operationally lethal. Sounds like the founder, follows the domain's rules, targets the right investor at the right time. Gets a response.
From output to outcome
Stage 1: Output
AI generates technically competent text. Requires only model capability.
Frontier Model = Output

Stage 2: Contextualised output
AI generates text grounded in who you are. Your voice, your facts, your style.
Profile + Model = Better Output

Stage 3: Aligned decision
A recommendation grounded in you AND the domain. The right email, directed to the right person, at the right time.
Profile + Playbook + Model = Aligned Decisions

Stage 4: Business outcome
The recommendation is executed. Meeting booked. Term sheet arrives.
Decisions + Execution + Feedback = Outcomes

Most AI products live at Stage 1 or 2.
The leap from Stage 2 to Stage 3 is the playbook. That is the gap nobody is filling.
The flywheel
The equation produces the first output. Maybe 70% right. What converts a single good output into compounding results is the loop that wraps around the equation.
Human expertise is the side we control. AI expertise is the side they control. Outcomes are the metric we use to tune both.

Outcomes are never 100% perfect on the first try. That's why we have a feedback loop. Each cycle does three things:
1. Update the domain playbook.
2. Enrich the human graph.
3. Tune the negative constraints.
The playbook sharpens: if 80% of successful raises use a specific outreach sequence, that pattern strengthens. The profile deepens: if the founder softens aggressive language, the voice profile adjusts.
First cycle: 70% right. After 10 cycles: 85%. After 50: 95%.
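One way to picture the loop in code: each observed outcome either strengthens a playbook pattern, adds a negative constraint, or deepens the voice profile. The `FeedbackLoop` class and its fields are a hypothetical sketch of that bookkeeping, not the system itself.

```python
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class FeedbackLoop:
    """Illustrative loop: every outcome tunes the playbook, the profile, or both."""
    pattern_wins: dict[str, int] = field(default_factory=dict)   # playbook patterns that worked
    negative_constraints: set[str] = field(default_factory=set)  # things to stop doing
    voice_adjustments: list[str] = field(default_factory=list)   # edits the founder made

    def record(self, pattern: str, won: bool, human_edit: Optional[str] = None) -> None:
        if won:
            # 1. Update the domain playbook: successful patterns strengthen
            self.pattern_wins[pattern] = self.pattern_wins.get(pattern, 0) + 1
        else:
            # 3. Tune the negative constraints: failed patterns become explicit "don'ts"
            self.negative_constraints.add(pattern)
        if human_edit:
            # 2. Enrich the human graph: the founder's edits deepen the voice profile
            self.voice_adjustments.append(human_edit)


loop = FeedbackLoop()
loop.record("warm-intro-first outreach sequence", won=True,
            human_edit="softened aggressive closing line")
```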
Two graphs, two moats
The implementation is clean. Two knowledge graphs on a shared memory layer.
Graph 1: The Human Graph
Captures who you are. Your identity, expertise, network, goals, traction, voice, constraints. Private. Portable. Compounds with every interaction.
It is your data. Switching to a competitor means starting from zero: rebuilding everything the system has learned about you.
Graph 2: The Job Graph
Captures how the domain actually works. The stages, the players, the unwritten rules, the timing, the traps. Shared across every user in that domain.
Every user operating within a domain generates outcome data that makes the playbook sharper for everyone.
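A minimal sketch of that split, assuming the memory layer is simply a store that serves both graphs on every call. The class names and the `context_for` method are illustrative assumptions, not the actual implementation.

```python
from dataclasses import dataclass, field


@dataclass
class HumanGraph:
    """Private and per-user: identity, network, goals, voice. Compounds over time."""
    user_id: str
    facts: dict[str, str] = field(default_factory=dict)


@dataclass
class JobGraph:
    """Shared and per-domain: stages, players, unwritten rules, timing, traps."""
    domain: str
    rules: list[str] = field(default_factory=list)


class MemoryLayer:
    """Hypothetical shared memory layer that grounds every model call in both graphs."""

    def __init__(self) -> None:
        self.human_graphs: dict[str, HumanGraph] = {}
        self.job_graphs: dict[str, JobGraph] = {}

    def context_for(self, user_id: str, domain: str) -> str:
        human = self.human_graphs[user_id]   # who the user is (private)
        job = self.job_graphs[domain]        # how the domain works (shared)
        facts = "; ".join(f"{k}: {v}" for k, v in human.facts.items())
        return facts + "\n" + "\n".join(f"- {r}" for r in job.rules)
```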
"OpenAI and Anthropic are spending billions fighting over the reasoning layer. We are building the reality layer. They cannot copy our map without our users."
Independent validation
Three independent builders arrived at this same architecture in early 2026: Huntley for software engineering, Karpathy for ML research, Yegge for multi-agent code generation. Three different domains. Zero coordination. One identical architecture.
Garry Tan's analysis concluded: "The best program.md wins."
In our language: the best playbook wins. The model is commodity. The playbook is the moat.
The opportunity
$50 billion invested in making models smarter. Nearly zero invested in making the human side of the equation smarter.
Human-Edge.AI is building the infrastructure for the other side of the equation.