The term AGI is often used as if everyone meant the same thing by it. They don't. For some, AGI is a science-fiction moment when a machine "wakes up". For others it is simply a marketing label for the next model generation. Both views make the discussion worse.
It is more useful to talk about AGI as a performance profile: systems that can handle a broad spectrum of cognitive tasks at a high level, across many domains, with relatively little specialisation and with growing degrees of autonomous problem-solving.[1]
A working concept, not a magical moment
This approach has a clear advantage. It shifts the discussion from mystique to capability, and it invites concrete questions: how general is the system, how robust is it, how much human oversight does it require, how well does it handle new environments, and how dangerous does it become if given greater autonomy?
It also means that AGI does not need to be a sharply defined moment in time. It can be a transition in which systems gradually become broader, more agentic and more useful in more kinds of work. That makes the question less dramatic in form, but no less important in practice.
The case for faster progress
There are good reasons to take faster development seriously. In recent years, several lines of progress have begun to reinforce one another: stronger base models, better tool use, longer context windows, multimodality, better fine-tuning, cheaper access and more agentic workflows. Together they make systems more general than they were only a short time ago.[2]
It is also reasonable to assume that AI can, to some extent, accelerate its own development through better coding assistance, research support and faster experimental cycles. This mechanism should not be romanticised, but neither should it be ignored.
What still holds back progress
At the same time there are clear limitations. Today's models can be fragile, overconfident in the wrong places, poor at robust long-horizon planning and sensitive to shifts in their environment. In many real workflows they still depend on human scoping, review and accountability.
That is why false precision is dangerous. Saying that AGI "arrives in 2028" or "not for decades" sounds clear but often rests on more certainty than the evidence warrants. The more reasonable stance is that development is uncertain, but that the range of serious scenarios begins near enough to make this a planning question today.[3]
Timelines without false certainty
I therefore believe that AGI should be treated roughly like other high-impact uncertainties. You don't need to know exactly when a shift will occur to prepare for it. You don't plan electricity grids, defence or healthcare only for the normal case. You also plan for variation, failure modes and scenarios that could become decisive if they occur.
Even the weaker scenario, in which AGI is delayed but models still become steadily more general and more useful, is enough on its own to justify serious preparedness in companies, education and government.
Consciousness is a separate question
Much of the public fascination with AGI quickly slides into the question of consciousness. Can advanced AI systems have subjective experience? That is a legitimate research question, but it should be kept separate from the more practical questions of capability and societal impact.
There is serious research suggesting the question should not be dismissed casually. But there is no robust reason to build today's policy or corporate planning on the assumption that machine consciousness is already established. For practical public debate, it is enough to note that systems can become very powerful without the consciousness question needing to be settled.[4]
Why AGI matters now
The reason to write about AGI now is therefore not to claim that the goal has already been reached. The reason is that AGI as a possibility shapes how we should think about security, governance, labour markets, geopolitics and accountability.
Anyone who wants to be serious about this needs to hold two thoughts at once: we don't know when, or whether, a particular threshold will be crossed, but the uncertainty is large enough and the consequences significant enough to justify preparation.
Source notes
The sources below, keyed to the bracketed references in the text, cover definitions, uncertainty, safety and the separate consciousness question.
- [1] Google DeepMind: Levels of AGI for Operationalizing Progress on the Path to AGI.
- [2] Stanford HAI: AI Index 2025, on capability trends; Google DeepMind: Taking a responsible path to AGI, for an industry perspective on development and risk.
- [3] RAND: Pivots and Pathways on the Road to AGI Futures, and the International AI Safety Report 2026, on scenario thinking and uncertainty.
- [4] Butlin et al.: Identifying indicators of consciousness in AI systems, on the research discussion of AI and consciousness.