The most common mistake in the debate about AI is to treat the technology as just another digital tool. Like Excel, but faster. Like a more efficient layer on top of the same work logic as before.

The comparison is partly understandable but fundamentally insufficient. Traditional software mainly automated fixed rules, defined work steps and well-structured data flows. Generative AI and related models instead automate a growing share of work that previously required interpretation, language understanding, summarisation, comparison, code support and other applied cognitive processing.[1]

From rule automation to interpretation

Spreadsheets, databases and business systems were enormously important because they made calculation, record-keeping and tracking faster and more scalable. But the actual interpretive work still rested squarely with humans: weighing alternatives, writing first drafts, reading between the lines, formulating hypotheses and translating ill-defined problems into actionable next steps.

This is precisely where AI shifts the boundary. Modern models can write summaries, produce working first drafts of code, compare alternatives, propose experiments, structure unstructured material and adapt their responses to goals and context. They do not do this flawlessly, but they often do it well enough to change how work is divided between people and systems.[2]

Function before philosophy

A common objection runs: "But the model does not really understand." Philosophically, that may be an important question. Practically, it is not decisive for an organisation that has to decide how work gets done.

If a system can usefully work through a specification, find ambiguities, suggest reformulations, write test cases and help an experienced person reach a decision faster, then the task has already changed. That change does not become less real because we keep debating whether the system "really" understands or merely approximates patterns.[3]

That does not mean philosophical questions lack value. It means they do not need to be settled for the technology's societal effects to be large. For planning purposes, it is enough to ask: which tasks can these systems handle well enough to change role allocation, productivity and quality-assurance requirements?

What this changes in organisations

When an organisation buys a spreadsheet programme, it buys a tool. When it implements AI, it often introduces something more like a qualified first-pass worker: a system that needs instructions, scoping, verification, feedback and clear accountability.

This has consequences. Junior tasks can be thinned out or compressed. Senior people can gain greater leverage. Demands on domain knowledge, data quality and review capacity rise even as certain bottlenecks disappear. AI is therefore not just an IT question. It is an organisational question and, ultimately, a labour-market question.

What the Excel comparison still teaches us

There is, however, something useful in the Excel comparison: even a very powerful tool only becomes truly valuable when the organisation learns how to use it. AI does not automatically create good decisions. Wrong instructions, poor data, weak domain knowledge and unclear responsibility still produce poor outcomes.

The important difference is therefore not that old tools were simple and AI is magic. The difference is that AI operates higher up in the chain of work, closer to language, judgement and problem-solving. That is why the consequences are broader when the technology works, and the costs higher when it is used carelessly.

The planning question

If AI were just another office tool, the task would be limited: some training, some new routines and gradual adjustment. But when the technology starts to affect the actual content of qualified work, the planning effort must grow with it.

That is why I say that AI is not Excel. Not because the comparison is stupid, but because it makes the shift seem smaller than it is. Whoever plans as if this were ordinary digitisation will likely build too little, train too narrowly and react too late.

Source notes

The essay is primarily conceptual. The sources below support the description of how AI is used in tasks and how its economic significance is discussed in research and policy materials.

  1. For the use of generative AI in work tasks, see the Anthropic Economic Index and Economic primitives.
  2. For overviews of capability development and diffusion, see Stanford HAI, AI Index Report 2025, and PwC, Global AI Jobs Barometer 2025.
  3. For labour-market and task perspectives, see the ILO 2025 update, WEF Future of Jobs 2025 and the IMF.

Rolf Skogling writes AI-skiftet from a practical, industry-grounded perspective, based on hands-on work with AI in real operations.