Essay 9

The dangerous gap in the AI debate

The illusion of "the tool" and why the timeline is a trap

A debate essay on why AI cannot be understood merely as a productivity tool, why historical comparisons become misleading when thinking itself is being automated, and why even a 10–15 year timeline requires immediate societal preparation.

"AI will not take your job. A human using AI will."

Count how many times you have heard that sentence in the past year. It is repeated as a mantra by politicians, economists and tech commentators — and it sounds reassuring, because it places the technology safely in the role of tool and humans safely in the driver's seat. The implication: we have seen this before, it worked out with the steam engine and it worked out with computers. AI is just another upgrade.

But that very reassurance should worry us.

The mantra only works if we accept a silent premise: that AI is fundamentally the same kind of technology as the machines we already know. That it amplifies human capacity without ever replacing it. That premise feels reasonable as long as you look at today's systems, with their hallucinations and limitations. But it does not hold if you step back and ask: what happens if that premise falls?

There is a dangerous gap between fixing your gaze on AI's current limitations and daring to see the technology's possible endpoint. When we step back, what emerges is something entirely different from a routine labour-market upgrade. Then we see the contours of a system shift that redraws the very premise of human work — and by extension, how society functions.

The history trap and the horse's fate

When the industrial revolution came, machines replaced our muscle power. That turned out largely in our favour, because we could take a step up the value chain and sell our cognitive capacity instead. The brain became our primary economic asset.

But what happens when machines now develop the capacity to replace precisely that brain power?

Biologically, there is no obvious "third level" for us to move up to. If artificial general intelligence — or systems close enough to have the same economic effect — can perform cognitive work faster, cheaper and better than we can, what are we supposed to sell on the labour market then?

Economist Wassily Leontief drew a painfully precise parallel back in the 1980s. For thousands of years the horse was a central productive resource in the human economy. But when the tractor and the internal combustion engine took over, no "new, more skilled jobs" were created for horses. They simply became redundant as a workforce, because the machine was objectively better at the work that previously required their bodies.

If we stubbornly continue to believe that AI is merely "a new tractor," we miss the crucial point: in this revolution, it is not the tractor we are inventing for ourselves. It is the tractor for thought. And then there is a risk that we ourselves become the horse.

When costs approach zero

What many still underestimate is what happens when the technologies begin to converge.

Imagine a world — which we are now approaching rapidly — where advanced AI generates the ideas, does the research, writes the code, handles the administration and drives the analysis. Add to that physical, autonomous robots that extract raw materials, build factories, provide care, handle warehouses and run transport. Then imagine that all of this is powered by cheaper energy, massive computing power and ever better automated infrastructure.

When a machine can think out a product and another machine can build and deliver it — what happens then?

What happens is that marginal cost is pressed down. Not to exactly zero — the laws of physics, energy infrastructure and raw material availability set real limits, and the path there will be uneven and full of bottlenecks. But the trend is clear, and it does not need to reach the endpoint to have dramatic consequences. It is enough that marginal cost falls far enough to start undermining the logic of the old system.

This is the core of the idea of a technological abundance society. And it is also a direct collision with the capitalist model we have built over centuries. Our economy is built on people selling their time for wages and then using that wage to consume what is produced. If human labour loses its economic value, how then is wealth to be distributed? If people no longer have a wage to spend, it ultimately makes no difference how cheap it has become to produce goods and services. The system short-circuits itself.

The time trap: why 15 years is tomorrow

This is where I most often meet resistance. When you paint this full picture, you almost always get the same objection:

"You are exaggerating. That is far in the future. Robots are still clumsy and AI makes logical mistakes. It probably takes at least 15 years before we are there."

And right there is perhaps the debate's biggest blind spot.

Let me take the objection seriously, because it deserves it. It is entirely possible that AGI in its strict sense is never achieved, or that it takes significantly longer than the optimists believe. It is also possible that the economy — as so many times before — proves more adaptable than the forecasts suggest, and that new forms of meaningful work emerge in ways we cannot foresee today. It would be intellectually dishonest to dismiss that possibility.

But it does not change the basic calculation. We do not need full AGI to get massive labour market effects — it takes only systems that are "good enough" to replace large portions of the cognitive work that today employs millions of people. And we do not need to assume the economy cannot adapt; we only need to ask: what happens if it does not have time to?

For me, it makes very little difference at a fundamental level whether we reach that point in two years, seven years or fifteen years. In historical and societal terms, fifteen years is almost nothing. It is shorter than it takes a child to go from preschool to university. We are right now educating hundreds of thousands of young people for a labour market that very likely will not look like today's when they actually enter it.

Our societal systems — how we tax work, how we fund welfare, how the pension system works and how we create status, identity and meaning — still depend almost entirely on wage work. Rebuilding that foundation is not something you do overnight. Not even in one term of government. It requires years of political debate, institutional restructuring, experiments, mistakes and corrections.

If we know we are moving towards a point where human wage work loses its fundamental value — regardless of whether the turning point comes in 2028 or 2041 — then it is deeply irresponsible to wait until the technology is "completely finished" before we start thinking differently.

We should not wait for proof in the form of full collapse before we react. We should build the new societal structures while we still have time.

Time to wake up

Pointing out these consequences is not being a doomsayer. Quite the opposite.

If we play our cards right, this could be the beginning of humanity's most liberating era. We could get the chance to gradually free ourselves from the compulsion of wage labour and spend more of our lives on things that actually make us human: relationships, culture, creativity, research, care and exploration.

But we will never land softly in such a society if we continue to reduce AI to "just a tool." We cannot meet the 21st century's most disruptive technology with 20th century economic reflexes.

What is needed is not a finished programme — it would be naive to claim anyone has all the answers. But the direction is clear enough to begin. We need pilots with new income models that are not tied to wage work. We need a tax base that gradually shifts from taxing human labour to taxing the value that automated systems produce. We need transition support that is scaled for structural shifts, not cyclical downturns. And we need a political debate that stops treating this as science fiction and starts treating it as societal planning.

It is time to stop asking the comfortable question:

"What new jobs will AI create?"

And instead dare to ask the real question:

How do we build a functioning, meaningful and just society when wage labour is no longer the engine?

The real gap in the AI debate is not about technology.

It is about how long we plan to pretend that society's rules can stand still.