On April 6, OpenAI published a 13-page policy paper on how society should be restructured to handle superintelligence. Robot tax. Public wealth funds. Four-day workweek. The right to AI as a basic public utility, on a par with electricity.
On April 7, Anthropic announced that it refuses to release its best model. Claude Mythos Preview — their most capable system ever — finds zero-day vulnerabilities across every major operating system and browser. Autonomously. It chains together four vulnerabilities into working exploits. It finds bugs that have eluded human security experts for 27 years. Instead of a product launch, they gave 100 million dollars in credits to a dozen tech giants to scan their own code — defensively.
Two companies. 24 hours. The same message, delivered in completely different ways.
OpenAI said it in words: the socioeconomic contract may have to be rewritten.
Anthropic said it through an action: we do not dare release what we have built.
The curve that does not flatten
There is a pattern in how technological shifts are experienced. For a long time nothing noticeable happens. Then everything happens. The hockey stick — the exponential curve — looks flat right up until it doesn't.
We have been talking about that curve for years. But this week we got two concrete data points that show where we are on it.
Anthropic's previous top model, Opus 4.6, had close to zero percent success on autonomous exploit development. Mythos Preview sits at 83 percent. That is not an incremental improvement. It is a leap that happened within a single model generation. And Anthropic assesses that other labs — OpenAI, Google, DeepSeek — will have comparable capabilities within 6 to 18 months.
OpenAI, for its part, writes that it expects AI to make small scientific discoveries as early as this year, and more significant breakthroughs by 2028. They present two scenarios: either AI is "normal technology" and society has time to adapt, or superintelligence develops at a speed humanity has never seen before. They are clearly preparing for the second scenario.
None of this is speculative science fiction. It is what the two leading AI companies in the world are communicating — publicly, officially, in the same week.
The warning no one asked for
It is worth pausing on Anthropic's choice. They have built something extraordinary. Commercially, they should release it. Instead they create Project Glasswing — a partnership with Amazon, Apple, Microsoft, Google, Nvidia, CrowdStrike and others — with the stated purpose of giving defenders a head start.
Read that again. Give defenders a head start. That is the language used before a war, not a product launch.
Anthropic is spending 100 million dollars so that other companies can find their own vulnerabilities — before models with the same capability spread uncontrollably. They are briefing CISA, the US Department of Commerce and what they describe as "a broader set of actors." That is not marketing. It is a warning packaged as an initiative.
The blind spot we live in
Most people — and most organisations — still live as if the curve is flat. AI is something you "maybe should try." A chatbot you once asked for a recipe. Something the IT department is looking into.
Meanwhile OpenAI is writing about how the tax system needs to be rebuilt from the ground up, and Anthropic refuses to give the world access to its own product for safety reasons.
The gap between what the people building AI see and what the rest of society experiences has never been wider. That is the real risk — not that AI destroys the world tomorrow, but that society's institutions, companies and individuals wake up too late to shape how the change unfolds.
OpenAI puts it themselves: "Without deliberate policy, the risks are structural and irreversible."
What does it mean for you?
If you lead an organisation: stop treating AI as an IT project. It is a strategic upheaval that affects every function, every role, every business model.
If you work in security: your industry has just been handed a completely new threat landscape. Models like Mythos redraw the playing field in months, not years.
If you are a policymaker: OpenAI's document is not routine lobbying. It is a wish list from a company valued at 850 billion dollars, urging governments to act before the change becomes uncontrollable.
And if you are an ordinary person wondering what any of this has to do with you: everything. The two most advanced AI labs in the world spent the same week saying, in their own ways, that the world you live in is being fundamentally changed.
24 hours in April. Two signals. The same message.
The curve does not flatten.