They're Measuring the Wrong Thing.

Anthropic's New Labor Market Report Tracks AI Displacement. The Real Crisis Is Context Erosion.

A close reading of Anthropic's labor market impact research—and why task-based measurement misses the deeper transformation already underway.

Mohamed Anis

Founder, Human Edge

18 min read · Labor Markets • Context Engineering
01

The Report Measures Tasks. Work Is Not Tasks.

Anthropic just released a detailed labor market analysis. It's methodologically sound, impressively scoped, and asks important questions about which jobs are most “exposed” to AI. The report maps task-level automation potential across occupations, measures adoption curves, and draws conclusions about which industries will be most affected.

But there is a problem with the framework. It treats work as a collection of tasks. And work is not a collection of tasks.

When a senior marketing director writes a campaign brief, the report sees “writing.” What it does not see is the 15 years of pattern recognition that shape the brief. The fact that last quarter's campaign underperformed because the messaging conflicted with a partner announcement. The awareness that the CFO is skeptical of brand spend and needs ROI framing. The instinct that this particular product launch needs a different tone because the competitive landscape shifted after a recent acquisition.

None of that is “writing.” All of it is context. And context is exactly what current AI measurement frameworks cannot see.

Anthropic's report measures the task. But the value of the task lives in the context that shapes it. When you strip context from measurement, you get clean data about the wrong thing.

02

The 14% Number Is Not a Labor Market Statistic. It's a Context Crisis.

The most cited finding from the report: hiring for roles with high AI exposure has declined by approximately 14% relative to roles with low exposure. The report frames this as a labor market signal, and it is. But it is also something more alarming.

14%: decline in hiring for AI-exposed roles (Anthropic finding)
37%: high-exposure roles in knowledge work (report estimate)
0: metrics measuring context loss (current frameworks)

Every unfilled role is a context gap. When a company decides not to hire a mid-level analyst because “AI can handle the analysis,” they are not just eliminating tasks. They are eliminating:

Institutional memory: knowledge of why decisions were made, not just what decisions were made.
Cross-functional awareness: understanding of how their work connects to other teams' priorities.
Judgment calibration: the ability to know when a standard process should be broken.
Relationship capital: the informal networks that allow organizations to actually function.
Training pipeline: the mechanism by which the next generation of senior employees learns context.

The 14% decline is not just fewer jobs. It is less context. And less context means worse decisions, even if the remaining tasks are executed more efficiently.

The 14% decline is not a labor market adjustment. It is an organizational lobotomy performed one unfilled role at a time.

03

The Exposure Metric Has a Blind Spot

The report introduces an “AI exposure” metric that evaluates how much of a role's tasks can be performed or augmented by AI. This is useful. But it has a critical blind spot: it measures task substitutability without measuring context dependency.

Consider two roles with identical “exposure” scores:

Role A: Junior data entry clerk (LOW CONTEXT)

Tasks are highly standardized. Context dependency is low. AI can substitute most tasks without significant loss of quality.

Role B: Senior account manager (HIGH CONTEXT)

Tasks look similar on paper (emails, reports, scheduling). But each task is shaped by 8 years of client relationship context, political awareness, and strategic judgment.

The exposure metric would rate them similarly. But the organizational impact of replacing them with AI is wildly different. The data entry clerk can be replaced with minimal context loss. The account manager cannot—but the metric does not capture why.

This is not a flaw in Anthropic's methodology. It is a flaw in the entire paradigm. We are measuring AI's impact on labor as if labor is a set of tasks, when in reality, labor is a set of context-rich relationships, judgments, and accumulated knowledge that happen to produce tasks as outputs.
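The blind spot can be made concrete with a toy model. All scores, weights, and role names below are illustrative assumptions, not figures from the report: a bare exposure metric returns the same number for both roles, while a context-adjusted version separates them.

```python
from dataclasses import dataclass

@dataclass
class Role:
    name: str
    exposure: float            # fraction of tasks AI could substitute (0-1)
    context_dependency: float  # fraction of the role's value carried by context (0-1)

def context_adjusted_risk(role: Role) -> float:
    """Displacement risk discounted by the value that lives in context.

    A pure exposure metric would report role.exposure for both roles;
    weighting by (1 - context_dependency) distinguishes them.
    """
    return role.exposure * (1 - role.context_dependency)

# Hypothetical scores for the two roles in the comparison above:
clerk = Role("Junior data entry clerk", exposure=0.8, context_dependency=0.1)
manager = Role("Senior account manager", exposure=0.8, context_dependency=0.7)

# Identical exposure, very different context-adjusted risk:
print(round(context_adjusted_risk(clerk), 2))    # 0.72
print(round(context_adjusted_risk(manager), 2))  # 0.24
```

The point of the sketch is not the specific weighting, which is invented here, but that any risk estimate ignoring the context term collapses these two roles into one number.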

04

Who's Most Exposed? The People With the Most Context.

Here is the irony the report reveals without naming it: the roles most “exposed” to AI are often the ones that carry the most organizational context. Knowledge workers. Middle managers. Analysts. Strategists. Coordinators.

These are not people who do simple, repetitive tasks. These are people whose tasks are relatively straightforward on paper but whose real value is the context that shapes how those tasks are executed. A financial analyst does not just “analyze numbers.” She knows which numbers the CEO actually cares about, which assumptions the board will challenge, and which formatting choices signal confidence versus uncertainty.

Marketing Directors: brand history, competitive positioning, stakeholder preferences, campaign learnings.
Product Managers: technical constraints, customer pain points, team capabilities, strategic priorities.
Financial Analysts: board expectations, historical assumptions, executive communication styles.
Account Managers: client relationship dynamics, deal history, political landscape, trust patterns.
Project Coordinators: cross-team dependencies, informal escalation paths, resource constraints.
Executive Assistants: leadership preferences, organizational politics, schedule optimization patterns.

When AI replaces the task layer of these roles without capturing the context layer, organizations do not get more efficient. They get more brittle. The decisions still get made, but they get made without the contextual intelligence that prevented expensive mistakes.

We are not automating tasks. We are amputating context. And the patient will not feel the pain until the next crisis reveals what was lost.

05

What Should Actually Be Measured

If we are serious about understanding AI's impact on the labor market, we need to measure what is actually at stake. Task completion rates and job posting volumes are trailing indicators. The leading indicators are about context:

01

Context Dependency Score

For each role, how much of the work's value comes from accumulated context versus task execution? A role where 80% of value comes from context and 20% from task execution has a fundamentally different AI displacement risk profile than a role where the ratio is reversed.

02

Context Transfer Rate

When a role is automated or eliminated, what percentage of the contextual knowledge that role carried is preserved in the organization? Currently, this is probably close to zero. When a senior employee leaves and AI takes over their tasks, the tasks continue but the context evaporates.

03

Organizational Context Density

What is the ratio of context-carrying humans to total workforce? As this ratio declines, organizations become increasingly dependent on explicit, documented knowledge — which is always a fraction of the total knowledge that drives quality decisions.

These metrics do not replace task-based analysis. They complement it. And they reveal something task-based analysis cannot: the hidden cost of displacement that does not show up until it is too late.
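A minimal sketch of how these three metrics could be operationalized. The formulas and inputs are assumptions for illustration; the essay proposes the metrics but does not prescribe how to compute them:

```python
def context_dependency_score(context_value: float, task_value: float) -> float:
    """Fraction of a role's value that comes from accumulated context
    rather than task execution (metric 01)."""
    return context_value / (context_value + task_value)

def context_transfer_rate(items_preserved: int, items_total: int) -> float:
    """Share of a departing role's contextual knowledge retained by the
    organization (metric 02)."""
    if items_total == 0:
        return 0.0
    return items_preserved / items_total

def org_context_density(context_carriers: int, total_headcount: int) -> float:
    """Ratio of context-carrying humans to total workforce (metric 03)."""
    return context_carriers / total_headcount

# Illustrative numbers only (hypothetical organization):
print(context_dependency_score(context_value=80, task_value=20))       # 0.8
print(context_transfer_rate(items_preserved=2, items_total=40))        # 0.05
print(org_context_density(context_carriers=120, total_headcount=400))  # 0.3
```

Even this crude version makes the essay's 80/20 example computable, and a transfer rate of 0.05 would quantify the claim that preserved context is "probably close to zero" after an exit.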

06

This Is Why We're Building What We're Building

At Human-Edge.AI, our Context Engine does not start with task automation. It starts with context preservation. Before we automate anything, we ask: what context does this person carry, and how do we ensure it survives?

Our seven-layer context architecture—from individual identity to global knowledge—is specifically designed to capture what task-based frameworks miss. When we build a Digital Twin of a professional, we are not modeling their tasks. We are modeling their judgment. Their patterns. Their relationships. The accumulated wisdom that makes their task execution valuable.

This is not just a technology difference. It is a philosophical difference. We believe the purpose of AI in the workplace is not to replace human context but to preserve and amplify it. When context is captured first, task automation becomes dramatically more valuable—because the automated tasks carry the contextual intelligence that makes them actually useful.

Context first. Tasks second. This is not a slogan. It is an architecture decision that shapes everything we build.

07

The StepUp Question

The report's findings also speak directly to a question we are addressing through StepUp.one: what happens to the people whose roles are being displaced?

The standard answer is “reskilling.” Learn to use AI tools. Acquire new technical skills. Pivot to roles that are less exposed.

But this advice misses the point. The people most at risk are not people who lack skills. They are people whose skills are deeply contextual and therefore hard to measure, hard to transfer, and hard to advocate for in a hiring market that increasingly values task-level productivity metrics.

StepUp.one: The Context-Aware Approach

StepUp approaches this differently. Instead of asking “what skills do you need to learn?” it asks “what context do you already carry, and how do we make that context visible and valuable?”

Context Mapping: making invisible expertise visible and transferable.
Value Translation: expressing contextual knowledge in market-legible terms.
Pathway Design: building bridges from context-rich roles to context-amplified roles.

This is not charity. It is recognition that context is valuable, and that a labor market that cannot see context will systematically undervalue the people who carry the most of it.

08

The Real Chart Anthropic Should Draw

The report includes helpful visualizations of task exposure by industry and occupation. But the most important chart is the one it does not include: a visualization of what is actually lost when AI replaces context-carrying roles.

Imagine a radar chart with three dimensions:

The Missing Visualization

Task Completion (what Anthropic measures): AI high, human high.
Context Preservation (what we measure): AI low, human high.
Organizational Health (what matters most): AI alone, declining.

Current AI adoption optimizes the task axis while unknowingly degrading the context axis. Task completion goes up. Context preservation goes down. And organizational health, which depends on both, follows the weaker signal.
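The "follows the weaker signal" intuition can be written as organizational health taking the minimum of the two axes. This is a deliberate toy model with invented numbers, not a formula from the report:

```python
def organizational_health(task_completion: float, context_preservation: float) -> float:
    """Toy model: health follows the weaker of the two signals."""
    return min(task_completion, context_preservation)

# Hypothetical trajectory: AI adoption raises task completion
# while context preservation erodes.
before = organizational_health(task_completion=0.6, context_preservation=0.8)
after = organizational_health(task_completion=0.9, context_preservation=0.3)

print(before, after)  # 0.6 0.3
```

Under this model, improving the task axis alone can never raise organizational health once context becomes the binding constraint, which is the essay's core claim in miniature.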

09

One Last Number

The report notes that the 14% hiring decline occurred in the most recent period measured. It is accelerating. And with each cycle, the context gap widens. Organizations that lose context-carrying employees today will not feel the full impact for 12-18 months, when the downstream effects of degraded institutional knowledge begin to compound.

By then, the context will be gone. And no amount of task automation will bring it back.

Unless we build systems that preserve it first.

That is the bet we are making at Human Edge. That is the gap we are trying to fill with our Context Engine. And that is the question we wish Anthropic had asked in their otherwise excellent report:

“When AI can do the task, what happens to the context that made the task worth doing?”

The answer to that question will determine whether AI makes organizations smarter or just faster. Right now, we are getting faster. The smarter part requires us to measure—and build for—what we are about to lose.

Mohamed Anis

Founder of Human Edge and StepUp.one. Building context infrastructure for the AI era—preserving and amplifying the human judgment, relationships, and accumulated wisdom that make work valuable. We believe measuring tasks misses the point. The future of AI requires measuring context.