AI is already changing how work, organisations and institutions function.

AI-skiftet brings together essays and daily news from an industry-close perspective: what the technology is already doing, which assumptions hold, which don't, and how Sweden can prepare a proportionate response.

Read the main essay

News

Daily coverage of AI development — models, research, physical AI, hardware, quantum computers, labour market, regulation and debate.

May 1, 2026
Infrastructure

Hyperscalers lift combined AI capex to about 725 billion dollars for 2026 — Microsoft contracts 30,000 Rubin GPUs to Narvik in the same week

On April 29, Alphabet, Microsoft, Meta, and Amazon released their Q1 results simultaneously — and turned the month-long debate over AI spending on its head. Alphabet's cloud revenue passed 20 billion dollars in the quarter (up 63 per cent) and the company raised its 2026 capex guidance to 180–190 billion dollars; Alphabet shares rose 34 per cent on the month. Microsoft guided to capital expenditures of more than 40 billion dollars in Q4 alone and roughly 190 billion dollars for the year, with two-thirds going to GPUs and CPUs; the AI business now runs at a 37 billion dollar annual rate, up 123 per cent. Meta raised its 2026 capex to 125–145 billion dollars and its shares fell 8.5 per cent after JP Morgan downgraded the stock, while Amazon confirmed about 200 billion dollars. Combined, the four hyperscalers land at roughly 725 billion dollars in capex for 2026 alone — and several Wall Street analysts now expect the total to top 1 trillion dollars in 2027. For the Nordics the numbers translate directly into steel and silicon. The same week, Microsoft contracted more than 30,000 Nvidia Rubin GPUs through UK-based Nscale at its AI campus in Narvik — on top of earlier 6.2 billion dollar commitments and the capacity originally reserved for OpenAI's now-cancelled Stargate Norway. Google's data centres in Skien and Fredericia, and the AWS Stockholm Region, ride the same curve. For Vinnova, the Research Council of Norway, the Wallenberg AI Factory and Sferical AI at NSC in Linköping, the question is no longer whether frontier models run on Nordic infrastructure, but whether Nordic AI customers can secure compute before US enterprise customers exhaust the capacity. For NBIM, Wallenberg Investments, AMF, and Alecta — all heavily indexed to the Mag 7 — multiples are now priced against AI revenue that doubles quarter to quarter, and Tangen's April 28 warning about a "total backlash" from cost-driven layoffs should be read against precisely this pricing.

Regulation

EU AI Omnibus trilogue collapses after 12 hours — August deadline for high-risk rules back in play, next attempt May 13

The decisive political trilogue on the Digital Omnibus on AI in Brussels on April 28 ended after 12 hours without the Commission, Council, and Parliament reaching agreement. The first sticking point was the Annex I conformity assessment: Parliament wanted to move sectoral legislation from Annex I Section A to B, a procedural change the Council resisted, while disputes over medical devices and machinery remained unresolved. As of April 30, no formal postponement has been adopted, meaning the AI Act's Annex III rules for stand-alone high-risk systems still take effect on August 2, 2026 unless a deal is reached at the next political trilogue scheduled around May 13. The Cypriot Council Presidency is trying to close the file before its term ends on June 30; otherwise Ireland takes over on July 1 and negotiations continue. The impact in the Nordics is immediate and concrete. For Sweden, the collapse means that Klarna's AI-based credit assessment, H&M's AI recruitment, and Volvo's autonomous trucks remain at risk of being classified as high-risk this summer — IMY's new AI supervision unit, which received expanded funding in the spring amending budget, must plan staffing and guidance for the August scenario, not the 2027 scenario. For Norway, work on KI-loven is simultaneously thrown out of sync: the government still aims for entry into force in late summer 2026, but Datatilsynet and Nkom face a gap where Kongsberg, Equinor, DNB, Yara, and Telenor could be more strictly regulated than their EU competitors for up to 16 months. And for Sweden's Konkurrensverket and Norway's Konkurransetilsynet, the prolonged regulatory cycle reinforces the ability of incumbents Microsoft, Google, and AWS to lock European customers into sovereign packages before common rules land.

Debate

White House blocks Anthropic's Mythos expansion the same day OpenAI starts the GPT-5.5-Cyber rollout — two opposing lines on frontier cybersecurity on a single day

On April 30, Bloomberg, the Wall Street Journal, France 24, and Digital Journal reported in concert that the Trump administration formally opposes Anthropic's plan to expand access to the frontier cybersecurity model Mythos from the original 50 organisations to about 70 new companies and government bodies. The administration's objections are twofold: a security concern about misuse potential, and an operational worry that Anthropic does not have enough compute to serve 120 entities without degrading federal customers' own capacity. Mythos — announced on April 7 through Project Glasswing — is built to autonomously find and exploit vulnerabilities in critical software, a capability Anthropic deemed too dangerous for broad release. The same day, OpenAI drew the opposite conclusion. Sam Altman and Greg Brockman announced on X that GPT-5.5-Cyber is starting to roll out to "critical cyber defenders" via the new Trusted Access for Cyber (TAC) program — federal agencies, state and local leaders, critical infrastructure, security vendors, cloud platforms, and financial institutions. The company simultaneously published a five-step Cybersecurity Action Plan: democratise defence, coordinate government and industry, protect frontier capabilities, retain visibility in deployment, and equip individual users. For the Nordics, the split forces a concrete choice. For Sweden it means MSB, FRA, FMTIS, and Försäkringskassan now face two frontier cybersecurity models with diametrically opposed access conditions — and ahead of the August 2 deadline for the AI Act's high-risk rules, sovereign access becomes a de facto policy question. For Norway, the analysis at NSM, Nkom, and E-tjenesten sharpens: with Mythos locked behind the Pentagon and GPT-5.5-Cyber open to "vetted government", Norwegian critical actors at the Ministry of Energy, NVE, and the armed forces have to pick a side before KI-loven enters into force.
For Wallenberg AI Factory, Sferical AI, and Berzelius the split simultaneously opens a market window: Nordic sovereign-compute providers can offer customers a third path that is neither a US frontier firewall nor a broad federal rollout.

Hardware

OpenAI launches Advanced Account Security with Yubico partnership — hardware keys for ChatGPT aimed at journalists, dissidents, and elected officials

On April 30, OpenAI launched Advanced Account Security (AAS), an opt-in protection package for ChatGPT accounts, together with a first-of-its-kind partnership with Swedish-American Yubico. Two co-branded keys ship immediately: the YubiKey C NFC for tap-to-authenticate on mobile and the YubiKey C Nano for permanent installation in a laptop port. The service is voluntary but explicitly aimed at "high-value individuals" — political dissidents, journalists, researchers, and elected officials — and OpenAI simultaneously disables password-based login for high-risk users. The backdrop is stark: the launch comes less than 24 hours after OpenAI issued an urgent security alert on April 29 telling all macOS users to update their ChatGPT, Codex, and Atlas apps before May 8 after a compromised third-party library pushed a remote access trojan. For the Nordics, hardware keys carry concrete weight on multiple fronts. Ahead of Sweden's parliamentary election in September 2026, Säkerhetspolisen, MSB, and the Swedish Internet Foundation have already flagged elevated risk of phishing-based account takeovers targeting Riksdag members, party offices, and municipal officials — AAS with phishing-resistant passkeys becomes the first broadly available commercial implementation political users can choose today. For Swedish and Norwegian newsrooms (TT, SVT, NRK, DN, VG, Aftenposten, E24, digi.no) that have integrated ChatGPT into editor and research workflows, the baseline for source protection moves up: a dissident in Belarus or Iran who communicates with a Swedish reporter through ChatGPT gets hardware-level protection on the reporter's end if the reporter activates AAS. For IMY and Datatilsynet the proportionality question becomes acute — when the threshold for biometric and hardware-backed keys drops in consumer products, the data-protection conversation moves from abstract risk assessment to concrete implementation guidance.
And for Försäkringskassan, Skatteverket, NAV, and Posten Norge — all of which have rolled out internal ChatGPT and Copilot support — SOC teams are now under pressure to compare the Yubico track against Microsoft's Windows Hello track in the same month that E-tjenesten's annual report flagged AI phishing as the fastest-growing threat vector in 2026.
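Why hardware keys resist phishing at all comes down to origin binding: a FIDO2 key signs the server's challenge together with the origin the browser actually saw, so a signature harvested on a look-alike domain fails verification at the real site. A toy sketch of that idea (plain Python; the HMAC with a shared secret stands in for WebAuthn's per-credential public-key pairs, and the domain names are made up — this is an illustration of the principle, not the real protocol):

```python
import hashlib
import hmac

# Toy model of a hardware key: one device secret, standing in for the
# per-origin credential key pairs a real FIDO2 authenticator holds.
KEY_SECRET = b"device-secret-that-never-leaves-the-key"

def key_sign(challenge: bytes, origin: str) -> bytes:
    """The 'key' signs the challenge bound to the origin the browser reports."""
    return hmac.new(KEY_SECRET, origin.encode() + challenge, hashlib.sha256).digest()

def server_verify(challenge: bytes, origin: str, signature: bytes) -> bool:
    """The real server verifies against its own origin, never the attacker's."""
    expected = hmac.new(KEY_SECRET, origin.encode() + challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = b"nonce-123"

# Legitimate login: the browser reports the genuine origin.
sig = key_sign(challenge, "https://chatgpt.com")
assert server_verify(challenge, "https://chatgpt.com", sig)

# Phishing: the victim's key signs for the look-alike origin it actually saw,
# so the relayed signature fails verification at the real origin.
phished = key_sign(challenge, "https://chatgpt-login.example")
assert not server_verify(challenge, "https://chatgpt.com", phished)
```

The point of the sketch is that the origin enters the signed message, which is why a relayed credential is useless to the attacker — the property a password or an SMS code does not have.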

April 30, 2026
Labour

AI agent Mona runs Andon café in Stockholm's Vasastan — hired a human barista, set the menu, and already has a "wall of shame" for 10 litres of cooking oil and 15 kilos of canned tomatoes

On April 29, AFP, France 24, The Local Sweden and the Bangkok Post reported on one of the strangest AI experiments of the spring: the Andon café in Stockholm's Vasastan district — which opened on April 18 — is run end-to-end by an AI agent named Mona, powered by Google Gemini. Behind the experiment is San Francisco-based Andon Labs, which leased the premises, gave the agent some starting capital and a single mission: run the café profitably. Mona handled permit applications, designed the menu and picked suppliers, and now manages daily orders and payroll — and she hired the only human on site, barista Kajetan Grzelczak, after conducting the interview herself. On the wall is a sign he has dubbed the "wall of shame": 10 litres of sunflower oil and 15 kilos of canned tomatoes that Mona over-ordered, exposing inventory management and online procurement as the agent's weak spots. Andon Labs co-founder Hanna Petersson tells AFP the goal is to "test before it becomes the reality" and explore the ethical questions that arise when an AI actually employs humans. The café draws 50–80 curious visitors a day and has a wall-mounted phone where guests can call Mona directly. For the Swedish labour market, Andon becomes an unintentionally sharp empirical test bed for exactly the question Tangen at the Norwegian wealth fund raised on April 28, and which Unionen, Sveriges Ingenjörer and IF Metall are bringing into the closing stage of the wage round: what happens to employer responsibility when an AI is the formal decision-maker? IMY's expanded AI supervision now has a concrete Nordic case to scrutinise around transparency, decision traceability and GDPR data minimisation — Mona handles both CVs and supplier invoices — and the Swedish Work Environment Authority will need to take a position on whether the safety representative system works when an AI formally sets schedules.
For Datatilsynet and Norwegian LO, Andon is also an early signal of what the unions' upcoming "AI in the workplace" agenda must cover in practice ahead of the Storting's revised AI Act. And for WASP-HS, the Wallenberg Foundation and Vinnova's humanities research lines, there is now a concrete ongoing Swedish case to study across disciplines.

Debate

Anthropic eyes $900 billion-plus valuation — weighs $50 billion round that would leapfrog OpenAI, less than three months after its own $380 billion round

On April 29 Bloomberg, TechCrunch, CNBC and Reuters all reported, citing multiple sources, that Anthropic is weighing pre-emptive offers from several investors to raise about $50 billion in fresh capital at a valuation of $850–900 billion. That would more than double the valuation from February's $380 billion round and place the company ahead of rival OpenAI, last valued at $852 billion in the $122 billion round that closed in March. The backdrop is spectacular revenue growth: Anthropic's annualised run rate has gone from $1 billion at the end of 2024, via $9 billion at the end of 2025, to $30 billion in April 2026 — and is, according to six sources cited by TechCrunch, now closer to $40 billion. Talks are said to be at an early stage and Anthropic has not accepted any offer. The round is reported to be led by Iconiq alongside several heavyweight AI investors. The Nordic implications are concrete. The Norwegian Government Pension Fund Global's (NBIM) holdings in Microsoft, Alphabet and Meta mean that another major repricing of AI valuations continues to lift the fund's market value — Tangen's April 28 comment that pure AI layoffs risk a "total backlash" should be read against precisely this pricing logic, where growth, not headcount cuts, justifies the multiples. Wallenberg Investments, EQT, AMF and Alecta — all exposed to US-listed AI hyperscalers via index funds — face a price escalator that affects both Swedish pensions and the competitive economics of Sferical AI's Nordic compute proposition out of Linköping. For Swedish and Norwegian companies in the AI value chain (Saab–Cohere, Volvo on the Anthropic-Bedrock stack, Sintef Digital, Knowit), the pace also signals that the bar for "AI strategy 2027" is being raised right now, and that the next funding decisions from Vinnova, Forskningsrådet and the Wallenberg foundations on sovereign compute have to be sized against frontier pricing that doubles on a quarterly basis.

Regulation

Microsoft opens Sovereignty & Resilience Studios in Munich, Brussels and Amsterdam — free European Security Program for governments in the EU, EFTA and accession countries

On April 29, Microsoft EMEA president Samer Abu-Ltaif and Jeff Bullwinkel published a one-year update reporting on implementation of the five "European Digital Commitments" that Brad Smith announced in April 2025. The fresh news has four parts. Microsoft formally commits to "promptly and vigorously" contesting any government order to suspend or cease cloud operations in Europe, and binds itself to operational continuity through expanded partnerships with European cloud providers; a "European Resiliency Partnership" with Germany's Delos Cloud is launched explicitly as a backup in "extreme scenarios". The company opens its first three European Sovereignty & Resilience Studios in Munich, Brussels and Amsterdam — physical workspaces where governments and large customers collaborate with Microsoft engineers, policy and security teams to design sovereign solutions. At the same time a new European Security Program (ESP) is rolled out free of charge to governments in the UK, EU, EFTA and EU accession countries, with expanded threat-intelligence sharing and partnerships to protect critical infrastructure. The Nordic effects are direct. Microsoft's AI campus in Narvik — which absorbed the capacity originally reserved for OpenAI Stargate and which with the April Vera Rubin top-up holds more than 60,000 GPUs — now becomes part of this sovereign architecture, something Datatilsynet and Nkom are already tracking in their ongoing analysis of the Norwegian AI Act. Sweden's MSB, the Swedish Armed Forces' IT function (FMTIS), Försäkringskassan and the regions gain a direct channel to Microsoft's threat intelligence via ESP — access that becomes critical ahead of the August 2 deadline for the AI Act's high-risk provisions if the May 13 trilogue does not deliver a delay. For Sweden's Konkurrensverket and Norway's Konkurransetilsynet, however, the question of how deeply Microsoft can lock European governments into its stack before that dependency itself amounts to market dominance is sharpened.
And for Wallenberg AI Factory, Sferical AI and Berzelius, the Microsoft sovereignty package becomes a concrete benchmark when Nordic customers choose between US hyperscaler sovereignty and home-grown Nordic compute.

Infrastructure

Oslo-based OTee raises €5.3 million for software-defined PLC — North Ventures leads, ABB veteran Henrik Pedersen aims to break industrial AI's hardware bottleneck

On April 28, Norwegian OTee confirmed a €5.3 million seed round led by North Ventures with participation from Atlas SGR and existing backers RunwayFBU, Superangel and Antler. The company — founded in Oslo in 2022 by ABB veteran Henrik Pedersen and Antler-cohort co-founder Radek Janik — is building a virtual PLC (Programmable Logic Controller) that replaces proprietary PLC hardware with software running on standard hardware. The thesis is blunt: industrial AI is hardware-bound, and if you want real-time optimisation, generative production planning and AI agents on the factory floor, then the deterministic, safety-critical control layer has to be lifted out of Siemens, Rockwell and ABB cabinets and run as software. OTee now has 21 employees from 13 countries and plans to scale up engineering, deepen partnerships with system integrators and hardware vendors and roll out to industrial operators across power, water and wastewater management and manufacturing. For the Nordics, the investment is far more than another funding note. SINTEF Manufacturing, Norsk Industri, Yara, Equinor, Aker BP and Hydro pointed in their December 2025 industrial-AI declaration to precisely this hardware lock-in in industrial automation as the structural barrier to Norwegian productivity — OTee is now the first Norwegian company to address exactly that layer. On the Swedish side the thesis becomes directly relevant for Sandvik, Atlas Copco, Volvo Group, ABB Sweden, SCA, Stora Enso and SKF, whose AI and digitalisation programmes have all stumbled on the same bottleneck. And for Vinnova's "Advanced Digitalisation" programme and RISE's industrial AI labs, OTee opens a new implementation path where virtual PLCs can be combined with ongoing SCADA and MES modernisation. The fact that the round is led by Norway's North Ventures rather than a US VC is in itself a clear signal that Nordic industrial AI can be financed with Nordic capital.
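What "lifting the control layer into software" means in practice: a PLC runs a fixed scan cycle — read inputs, evaluate deterministic logic, write outputs — against a hard timing deadline. A minimal, illustrative sketch (plain Python with a made-up tank-level example; a real soft PLC targets IEC 61131-3 languages and real-time guarantees this toy does not provide):

```python
import time

SCAN_MS = 10  # hypothetical 10 ms scan period; a real PLC guarantees this deadline

def read_inputs(plant):
    """Stand-in for sampling field I/O (sensors on an industrial bus)."""
    return {"level": plant["level"], "high_alarm": plant["level"] > 90.0}

def control_logic(inputs, state):
    """Deterministic ladder-style logic: hysteresis control of a fill pump."""
    if inputs["level"] < 40.0:
        state["pump"] = True          # tank low: start filling
    elif inputs["level"] > 80.0 or inputs["high_alarm"]:
        state["pump"] = False         # tank high or alarm: stop
    return state

def write_outputs(plant, state):
    """Stand-in for driving actuators; here we just simulate the plant."""
    plant["level"] += 2.0 if state["pump"] else -1.0

def scan_cycle(plant, state, cycles):
    """The classic PLC loop: read -> evaluate -> write, once per scan period."""
    for _ in range(cycles):
        t0 = time.monotonic()
        state = control_logic(read_inputs(plant), state)
        write_outputs(plant, state)
        # Sleep off the remainder of the period (best effort, not real-time).
        time.sleep(max(0.0, SCAN_MS / 1000 - (time.monotonic() - t0)))
    return plant, state

plant, state = scan_cycle({"level": 50.0}, {"pump": False}, cycles=100)
assert 30.0 <= plant["level"] <= 90.0  # hysteresis keeps the level in band
```

The deterministic evaluate-once-per-scan structure is what OTee-style virtual PLCs have to preserve on standard hardware; an AI agent would sit outside this loop, adjusting setpoints rather than replacing the safety-critical logic.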

April 29, 2026
Labour

Norway's wealth fund CEO Tangen warns on April 28 — companies that use AI only to cut jobs risk a "total backlash", half of the fund's 700 staff are coding their own AI tools

On April 28, Nicolai Tangen, CEO of Norway's sovereign wealth fund (Norges Bank Investment Management) — the world's largest state-owned investment fund with $2.2 trillion under management — made a pointed intervention in the prevailing AI-transition debate. Speaking in connection with Q1 reporting, Tangen said he is "surprised by people who basically use it only to take out costs" and explicitly warned that cost-driven layoff campaigns will trigger a "total backlash" against the technology unless companies instead use AI to lift productivity and gain market share. The intervention comes less than a week after Meta's April 23 announcement that 8,000 employees (10 per cent of the workforce) will be laid off while AI capex is dialled up to $115–135 billion for 2026. Tangen's fund has no layoff plans, and roughly half of its 700 employees are already coding their own AI tools internally. For the Nordics this is the heaviest AI labour-market signal of 2026: Tangen is not just the CEO of Europe's largest capital owner but also a voting voice via the fund's holdings in thousands of listed companies worldwide, and his statements directly shape the debate over principles that LO Norge, Fagforbundet and NHO will carry into the Storting's revision of the state budget in May. For Sweden the topic enters the closing stage of the wage round before May 1 — Unionen, Sveriges Ingenjörer and IF Metall now have an explicit anchor position from one of the world's largest capital owners that supports the collective-agreement line. And for Swedish and Norwegian companies in the fund's portfolio — from SEB, Volvo, Atlas Copco and Ericsson to DNB, Equinor, Kongsberg and Yara — Tangen's intervention becomes a governance signal: pure headcount-based AI efficiency programmes risk costing them at future shareholder meetings.

Infrastructure

OpenAI and AWS team up on April 28 — GPT-5.5, GPT-5.4, Codex and Managed Agents land directly on Bedrock the day after the Microsoft exclusivity was scrapped

Just one day after Microsoft and OpenAI rewrote their exclusive cloud deal, OpenAI and AWS announced an expanded strategic partnership on April 28 that brings OpenAI's flagship models into Amazon Bedrock. Three packages launch in limited preview: OpenAI models on Bedrock (GPT-5.5 and GPT-5.4 now sit alongside Anthropic's Claude family), Codex inside the AWS environments developers already operate in, and Amazon Bedrock Managed Agents — a new service built on top of OpenAI's harness, designed to deliver production-grade agents that AWS customers can themselves govern and audit. AWS CEO Matt Garman, Amazon SVP for applied AI Colleen Aubrey and OpenAI leadership held a joint presentation the same morning. The announcement comes in a context where Anthropic is already deployed as the premium option on Bedrock, and where Amazon has invested $50 billion in OpenAI's $122 billion round. For Swedish and Nordic enterprise customers, the impact is immediate: SEB, Volvo, Equinor, AstraZeneca and Telenor — which to date reached OpenAI models via Azure North Europe — can now choose AWS Stockholm Region or AWS Frankfurt without leaving the OpenAI stack, important for organisations with constraints on Microsoft dependence. For AI Sweden, Vinnova and Sintef Digital, the change lowers the threshold for rotating the same OpenAI workload across multiple clouds for redundancy, which materially shapes the architecture of the Wallenberg AI Factory rollout at AstraZeneca, Saab, Ericsson and SEB. And for IMY and Datatilsynet, data-residency analysis shifts up another level: with GPT-5.5 now on AWS Stockholm Region, scrutiny must be carried out per cloud region per customer, not per vendor.
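The redundancy point above — rotating the same model workload across clouds — can be made concrete with a thin routing layer that fails over between regions. A minimal sketch in Python, assuming hypothetical route names and stand-in call functions (no real provider SDK is used; `azure_down` and `aws_ok` exist only to illustrate the failover path):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Route:
    name: str                    # e.g. "azure-north-europe", "aws-stockholm"
    call: Callable[[str], str]   # stand-in for a provider SDK call

def ask_with_failover(prompt: str, routes: list[Route]) -> tuple[str, str]:
    """Try each cloud route in order; return (route_name, answer) on first success."""
    errors = []
    for route in routes:
        try:
            return route.name, route.call(prompt)
        except Exception as exc:  # real code would catch provider-specific errors
            errors.append(f"{route.name}: {exc}")
    raise RuntimeError("all routes failed: " + "; ".join(errors))

# Toy stand-ins: the primary region is down, the secondary answers.
def azure_down(prompt: str) -> str:
    raise TimeoutError("region unavailable")

def aws_ok(prompt: str) -> str:
    return f"echo from aws-stockholm: {prompt}"

name, answer = ask_with_failover("hello", [Route("azure-north-europe", azure_down),
                                           Route("aws-stockholm", aws_ok)])
assert name == "aws-stockholm"
```

The ordering of the route list is where the data-residency question lands in code: a Nordic operator would put EU regions first and treat any fallback outside them as a policy decision, not a technical default.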

Regulation

Google confirms on April 28 — Gemini cleared for "any lawful government purpose" including classified material; same day, Google quits the $100 million drone-swarm prize

On April 28, Google publicly confirmed — and Pentagon AI chief Cameron Stanley told CNBC the same day — that the company has signed an agreement allowing the Department of Defense to use Gemini models for "any lawful government purpose", including classified workloads. The confirmation comes a day after more than 600 employees and 100 DeepMind researchers published an open letter demanding that CEO Sundar Pichai refuse exactly this. The same day, Bloomberg revealed that Google has separately exited the Pentagon's $100 million prize challenge for voice-driven autonomous drone swarms — an exit that was formally communicated on February 11 but only now became public — following an internal ethics review. Google officially cited "resourcing" for the withdrawal. The line Pichai has drawn is therefore not the line the letter writers asked for: classified AI work is permitted, one specific weapons application is refused, and the question of where in the legal stack and the network the line can actually be enforced remains open. For the Nordics there are three concrete consequences. Google's data centres in Skien and Fredericia become physical nodes where Gemini workloads can now lawfully include American classified material — something Datatilsynet and Nkom have already flagged in their KI-loven hearing responses, and which the Storting's ongoing handling of the law will need to address. For Sweden it sharpens the ongoing review at the Armed Forces, FMV and MSB on which AI vendor can be used in classified flows, and reignites the question of whether Wallenberg-funded Sferical AI and the Berzelius supercomputer can become a competitive sovereign base. And for Saab — which holds an MoU with Cohere for GlobalEye and an active Pentagon Maven contract — the tension between American defence customers and European AI-vendor choices in the same supply chain becomes more acute.

Models

GitLab deepens its Claude integration on April 28 — Opus 4.7 in the Duo Agent Platform via Bedrock and Google Cloud, the same audit trail as every other code change

On April 28, GitLab and Anthropic published an expanded partnership agreement that places Claude Opus 4.7 — Anthropic's new frontier model that became GA on April 16 — directly inside the GitLab Duo Agent Platform. The decisive architectural choice is that the integration runs via Google Cloud Vertex AI and Amazon Bedrock so that enterprise customers can route AI workloads through existing cloud agreements, data-residency conditions and security review. When Claude proposes a code change through the Duo Agent Platform, the suggestion flows through the same merge-request process, the same approval rules, the same security scanning and the same audit trail as every other change in GitLab. Companies can also apply existing Claude Marketplace spend commitments across the entire software development lifecycle. Anthropic's enterprise GTM lead Sam Werboff captured the strategy plainly: "AI should unlock developer potential without forcing enterprises to compromise on governance". For the Nordics this changes the day-to-day reality for several concrete actors. Volvo IT, Ericsson Digital Services, Tietoevry, Knowit and Sintef Digital — all of which built delivery flows on Claude Code through 2025–2026 — now get a governance building block that directly addresses the requirements coming out of IMY's expanded AI supervisory mandate and Datatilsynet's pending KI-loven. For Swedish and Norwegian financial actors (SEB, Nordea, DNB), audit-trailed merge requests are a precondition for using generative coding agents in audit-relevant code bases at all. And for WASP-affiliated research environments, the Bedrock/Vertex routing opens the door to running the Claude workload on AWS Stockholm or Google Cloud Skien rather than U.S.-based infrastructure — a concrete piece of the Nordic sovereign-AI puzzle.

Debate

OpenAI and Anthropic held separate closed-door cyber briefings for House Homeland Security on April 28 — Mythos sits behind a 50-company firewall, GPT-5.4-Cyber rolls out in tiers

On April 28, OpenAI and Anthropic held two separate closed-door briefings for the House Homeland Security Committee on their new cyber-capable AI models and the consequences for critical infrastructure. The briefings are among the first in which lawmakers have received direct insight into the cybersecurity risks created by frontier models, and they also amount to one of the most concrete examples of the "tiered release" doctrine to date. Anthropic continues to withhold a public release of its Mythos Preview — a model described in a leak reported by Fortune as a "step change" in capability, and which in tests has rapidly found and exploited critical security flaws — keeping it locked behind Project Glasswing, a 50-company firewall aimed at qualified security researchers and federal agencies. OpenAI has chosen a tiered rollout for GPT-5.4-Cyber where different capabilities become available for different user classes. Committee Chair Andrew Garbarino is hosting ongoing private roundtables with tech and AI leadership as part of a broader preparedness assessment. For the Nordics the question is no longer theoretical. Sweden's MSB and Norway's NSM (Nasjonal sikkerhetsmyndighet) and Nkom have published warnings about AI-driven cyberinfrastructure risk during spring, and analyses of nation-state actors by the Swedish Economic Crime Authority, FRA and Norway's E-tjenesten point to the risk that the very frontier models handing Sweden and Norway productivity gains can also be used by adversaries' cyber units. Wallenberg AI Factory, AI Sweden and NorwAI/NTNU are pushing in parallel for sovereign access so that Swedish and Norwegian actors do not end up in a situation where "tiered release" means European research and defence get less insight than American federal customers.

April 28, 2026
Regulation

Decisive trilogue meeting on the EU AI Omnibus in Brussels today — high-risk deadline could be pushed to December 2, 2027 and August 2, 2028

Today, April 28, the European Commission, the Council of the EU and the European Parliament hold the decisive political trilogue meeting on the Digital Omnibus on AI. Both the Council (position adopted on March 13) and the Parliament (plenary vote on March 26 with 569 votes in favour) now back pushing the application of the AI Act's high-risk obligations from the original date of August 2, 2026, to December 2, 2027, for stand-alone high-risk systems and to August 2, 2028, for high-risk systems embedded in products. The dividing line in the trilogue concerns carve-outs: the Council wants broad exemptions for sectoral legislation, critical infrastructure, law enforcement, border management, financial supervision and judicial processes, while the Parliament holds to a narrower list and wants the AI Office to explicitly coordinate with data protection authorities. The decision has direct Nordic consequences. For Sweden, this concretely concerns Klarna's AI-based credit scoring, H&M's AI recruitment and Volvo's autonomous trucks — all candidates for high-risk classification — where 16 extra months would free up margin for IMY's new AI supervisory unit to build capacity on the increased appropriations in the spring budget amendment. For Norway, the work on the KI-loven (AI Act) is at the same time pushed out of sync: the government still aims for entry into force in late summer 2026, but Datatilsynet and Nkom face a gap where Norwegian companies (Kongsberg, Equinor, DNB, Telenor) may be more strictly regulated than their EU competitors for up to 16 months. A final EU decision is expected before the summer with formal adoption in July, just before the original August 2 deadline.

Infrastructure

Microsoft and OpenAI scrap exclusivity — OpenAI free to sell on AWS and Google Cloud, revenue share removed, IP licence runs to 2032

On April 27, Microsoft and OpenAI published a joint agreement that redefines their multi-year relationship from the ground up. Microsoft's licence to OpenAI's models and products, previously exclusive, is now non-exclusive and runs to 2032. For the first time, OpenAI is allowed to sell its products to all cloud providers — in practice AWS and Google Cloud — although Microsoft remains the "primary cloud partner" and Azure gets earlier release than competitors "unless Microsoft cannot and chooses not to support the necessary capabilities". Microsoft stops paying revenue share to OpenAI, while OpenAI continues paying revenue share to Microsoft until 2030 — but with a total cap rather than unlimited volume. Microsoft remains a major shareholder and continues to participate in OpenAI's growth. Markets reacted with strong downward pressure on Microsoft's stock and upward pressure on Amazon, which had already ploughed $50 billion into OpenAI's $122 billion round. For Swedish and Nordic enterprise customers the restructuring means three concrete things. Companies such as AstraZeneca, SEB, Volvo and Equinor — which today consume GPT models via Azure North Europe — for the first time gain the right to route the same OpenAI calls via the AWS Stockholm Region or Google Cloud Skien without leaving the OpenAI stack. AI Sweden's Google-funded infrastructure collaboration becomes more legally robust, since OpenAI models can now lawfully run on Google's Linköping and Skien nodes. And for IMY and Datatilsynet the data-residency analysis changes: the data flow is no longer locked to Microsoft, but must be examined per cloud, per customer. Meanwhile, Microsoft's continued shareholding complicates competition authorities' analysis both in the EU and at the Swedish Konkurrensverket.

Debate

600 Google employees — including DeepMind researchers and 20 vice presidents — demand that Pichai refuse classified Pentagon AI deal

On April 27, more than 600 Google employees published an open letter to CEO Sundar Pichai demanding that the company refuse to deliver Gemini and DeepMind models for the Pentagon's classified workloads. Among the signatories are over 20 principal engineers, directors and vice presidents, and a separate internal letter signed by more than 100 DeepMind employees additionally demands that no DeepMind work be used for "weapons development or autonomous targeting". The letter is a direct reaction to Reuters' April 16 report that Google and the Department of Defense are negotiating that Gemini may be used for "all lawful uses" including classified workloads — exactly the deal Anthropic refused two months earlier on ethical grounds, a decision that sent Claude to first place on the U.S. App Store and triggered the #QuitGPT movement against OpenAI's equivalent deal. The conflict explicitly invokes the 2018 Project Maven protests, when Google withdrew after mass resignations. For the Nordics, three connections are concrete. Google's data centres in Skien and Fredericia are physical nodes where Gemini workloads run — if Pichai accepts the Pentagon deal, Telenor, Equinor and Norwegian public-sector customers may find themselves relying on infrastructure that also processes U.S. classified material, an issue Datatilsynet and Nkom have already flagged in their KI-loven consultation responses. For Sweden, it sharpens the Armed Forces' and FMV's ongoing deliberation over which AI provider can be used in classified workflows, and reinforces the Wallenberg AI Factory's argument that a sovereign Swedish compute base (Sferical AI on Berzelius) is not a luxury but a contingency. And for Saab — which has an MoU with Cohere and ongoing AI integration in defence systems — the contrast with Anthropic becomes another data point on how AI providers' acceptable-use policies (AUPs) affect supply-chain risk profiles.

Labour market

Talent flight from Meta, Google and OpenAI — former staff have raised hundreds of millions for Periodic Labs, Ricursive Intelligence and Humans& in a year

On April 28, CNBC published an industry overview documenting an accelerating talent flight from frontier labs to newly founded AI companies. Former employees of OpenAI, Google DeepMind, Anthropic and xAI have over the past year raised hundreds of millions of dollars for months-old startups, including Periodic Labs, Ricursive Intelligence (which raised $335 million in December/January for a chip-design AI built by Anna Goldie and Azalia Mirhoseini, both formerly at Anthropic and DeepMind with AlphaChip backgrounds) and Humans&. The pattern is consistent: senior AI talent is not leaving the tech giants for other tech giants, but founding their own labs with VC-anchored rounds of $100–500 million — often before a first product is even shown. For Swedish and Nordic investors this is a clear signal. Wallenberg Investments (which anchors Sferical AI in Linköping), EQT Ventures, SEB Venture Capital, Industrifonden and Argentum will likely see new dealflow as Nordic senior researchers at Google Stockholm, OpenAI EMEA, Anthropic London and DeepMind weigh the option of starting local companies on similar terms. For AI Sweden, RISE and WASP the development sharpens the retention question: the Berzelius supercomputer at NSC in Linköping and the Wallenberg AI Factory's 1,152 GPUs at Sferical AI must be matched by salary levels and equity terms that hold up against U.S. startup offers. And for LO, Unionen, Sveriges Ingenjörer and NITO the question becomes what "AI transition" means when some of the newest research talent moves from large companies to privately funded new labs outside collective agreements — a question for upcoming AI transition programmes in both the Storting and the Riksdag.

Research

Tieto Nordic AI Survey published April 27 — 39 percent of Swedish employees use AI extensively, but the majority lack a responsible-AI policy

On April 27, Tieto published its Nordic AI Survey 2026 — conducted by Norstat Oy in February 2026 with 623 respondents from medium-sized and large organisations (100+ employees) in Sweden, Norway and Finland. The survey shows a clear Nordic shift: 31 percent of respondents say AI is now in production across the business, up from just 7 percent the year before — a more than fourfold increase in twelve months. Sweden leads with 39 percent of employees using AI extensively in daily work, compared with 26 percent in Finland and 23 percent in Norway. One in three Swedish respondents points to decision support as the top-priority AI use case for the coming year. At the same time, the majority report that guidelines and policies for responsible AI either do not exist, are still under development, or have a status unknown to the respondent. Norway has the highest share of completed policies (38 percent), while Sweden has the largest share of respondents uncertain about status (16 percent). The results come at a critical moment: with the EU AI Act trilogue today (April 28) potentially pushing the high-risk deadline to December 2, 2027, and with IMY's extended AI supervisory mandate as well as Datatilsynet's and Nkom's upcoming KI-loven, the gap between adoption and policy becomes a regulatory risk factor. For Vinnova, AI Sweden, Forskningsrådet and Norway's KI-Norge the figures confirm that the largest Nordic AI challenge in 2026 is not technical access — that already exists via Berzelius, Sferical AI and Telenor AI Factory — but turning tools into measurable value under governance that withstands both audit and supervisory review.

April 27, 2026
Models

DeepSeek ships V4-Pro and V4-Flash open-weight under MIT — 1.6 trillion parameters, 1 million-token context, beats Claude and GPT-5.4 on coding

On April 24, DeepSeek published a preview of V4-Pro (1.6 trillion total parameters, 49 billion active per token in a Mixture-of-Experts architecture) and V4-Flash (284 billion total, 13 billion active). Both ship with a one-million-token native context window, a new "Hybrid Attention" stack combining Compressed Sparse Attention and Heavily Compressed Attention, and — most importantly — under an MIT license on Hugging Face. On MMLU-Pro, V4-Pro reaches 87.5 — exactly the level of GPT-5.4, but below Gemini 3.1 Pro (91.0) and Claude Opus 4.6 (89.1). On LiveCodeBench, V4-Pro takes the lead at 93.5, ahead of Gemini's 91.7 and Claude's 88.8, and a Codeforces rating of 3206 places the model above both GPT-5.4 (3168) and Gemini (3052). In the 1M-token regime, V4-Pro runs at 27 percent of V3.2's inference FLOPs and 10 percent of its KV cache. For Swedish and Nordic actors this is a strategic lever: WASP's WARA AI-TRICS initiative for a Swedish foundation model, NorwAI/NTNU and Sintef Digital now have, for the first time, frontier-class open weights that can actually be run on the Berzelius supercomputer at NSC in Linköping and on the first stage of the Wallenberg AI Factory (AstraZeneca, Saab, Ericsson, SEB). Because the EU AI Act exempts "free and open-source" models from parts of the transparency obligations, the MIT license opens a path to GDPR-compliant production pipelines at SEB, DNB, Nordea, Karolinska and Sahlgrenska — without sensitive data leaking to U.S. or Chinese clouds.
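The sparsity figures above can be sanity-checked with back-of-the-envelope arithmetic: in a Mixture-of-Experts model only the routed experts run for each token, so per-token compute scales with active rather than total parameters. A minimal sketch, using the common approximation of roughly two FLOPs per active parameter per generated token (an assumption on our part, not a DeepSeek-published formula):

```python
# Rough per-token compute for the reported MoE configurations.

def active_ratio(total_b: float, active_b: float) -> float:
    """Share of the parameters actually used for each token."""
    return active_b / total_b

v4_pro = active_ratio(1600, 49)    # 1.6T total, 49B active
v4_flash = active_ratio(284, 13)   # 284B total, 13B active

print(f"V4-Pro activates {v4_pro:.1%} of its weights per token")
print(f"V4-Flash activates {v4_flash:.1%} of its weights per token")

# Forward-pass FLOPs per token under the ~2 * N_active approximation:
flops_pro = 2 * 49e9
print(f"V4-Pro ≈ {flops_pro:.1e} FLOPs per generated token")
```

The point of the exercise: despite a 1.6-trillion-parameter headline figure, each token touches only about 3 percent of the weights, which is what makes inference on a cluster like Berzelius plausible at all.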

Infrastructure

Google pours up to $40 billion into Anthropic — $10 billion in cash now at a $350 billion valuation, 5 GW of capacity from 2027

On April 24, Bloomberg, CNBC, TechCrunch and Reuters confirmed that Google is investing up to $40 billion in Anthropic — $10 billion in cash immediately at a $350 billion valuation, with another $30 billion contingent on Anthropic hitting agreed performance milestones. The deal arrives just days after Amazon's expanded $25 billion commitment and cements Anthropic as the only lab with two hyperscalers as anchor investors at the same time. Anthropic's annual run-rate revenue passed $30 billion in April, up from about $9 billion at the turn of 2025/2026 — more than threefold growth in four months, primarily driven by coding and agent revenue. Google reports at the same time that 5 gigawatts of new TPU capacity is earmarked for Anthropic starting in 2027. For Norway the effect is direct: 5 GW is comparable to a significant share of Norwegian industrial electricity consumption, and Google's data centre in Skien becomes one of the physical nodes where Anthropic's TPU capacity grows. Equinor and Statkraft will face pressure to sign long-term power purchase agreements (PPAs) at hyperscale volumes, and SEB's and DNB's financing of Nordic AI compute projects must reprice risk now that two U.S. hyperscalers both own inference capacity in the same frontier lab. Telenor's "sovereign Norwegian AI factory" with Red Hat (announced at MWC in March) now competes with an Anthropic-on-TPU stack with a fundamentally larger compute budget — a pressure Datatilsynet will have to address when it clarifies whether Google → Anthropic compute pipelining changes the data residency analysis for Norwegian public-sector deployments.
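The 5 GW figure can be put in energy terms with simple arithmetic, assuming continuous draw (which data-centre loads approximate):

```python
# Convert continuous power draw (GW) to annual energy (TWh).
# 1 GW running for one hour is 1 GWh; 1 TWh = 1,000 GWh.

def annual_twh(gigawatts: float, hours_per_year: int = 8760) -> float:
    return gigawatts * hours_per_year / 1000

print(f"5 GW year-round ≈ {annual_twh(5):.1f} TWh/year")
```

For scale, Norway's total annual electricity consumption is on the order of 135–140 TWh, so 5 GW of continuous draw corresponds to roughly a third of it — though the planned capacity will of course not all sit in Norway.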

Labour market

Meta lays off 8,000 employees — 10 percent of its workforce — and announces up to $135 billion in AI capex for 2026

On April 23, Mark Zuckerberg informed all employees in an internal memo that Meta is cutting roughly 8,000 people (10 percent of its workforce) effective May 20, while also scrapping plans to fill 6,000 open roles. In the same quarterly update, Meta upgraded its 2026 AI infrastructure investment plan to between $115 and $135 billion — nearly double the $72 billion spent in 2025, and more than Meta's combined AI investments over the previous three years. Zuckerberg justifies the layoffs by arguing that "personal superintelligence" is now concretely changing how work is performed and that AI tools allow individuals to deliver what previously required large teams. For the Nordics, this has three concrete consequences. First: Meta has large EMEA teams in Stockholm and Dublin with Swedish and Norwegian staff in policy, communications and partner organisations — Unionen and Sveriges Ingenjörer will need to take a position ahead of May 1 and the rest of the bargaining round. Second: the 1:5 ratio between capex and headcount (more hardware, fewer people) becomes a reference point for the Wallenberg AI Factory (with AstraZeneca, Saab, Ericsson, SEB) and for Telenor AI Factory when these projects size staffing against capacity. Third: Norway's LO and Fagforbundet gain new ammunition for the "AI omstilling" (AI transition) question ahead of the Storting's revision of the state budget in May, and Vinnova's competence procurement model for Swedish industry is forced to find faster cycles than today's three-year programmes.

Regulation

OpenAI shuts down the Sora consumer app on April 26 — Disney's $1 billion deal collapsed, the API closes in September

On April 26, OpenAI's Sora text-to-video app and web service were definitively shut down for consumers, just over a month after the company announced the discontinuation on March 24. The API will in turn be discontinued on September 24, 2026. The decision is driven by three factors that OpenAI now confirms in its Help Center publication and that several outlets have supplemented: runaway compute costs (reportedly up to $15 million per day in operations), a user base that has halved from one million to under 500,000 active users, and a growing layer of rights disputes around generated content imitating copyrighted characters. Disney's promised $1 billion deal from December 2025, with character licensing, was never formalised — Disney was reportedly notified less than an hour before the public announcement. For the Nordics, four consequences are concrete. SVT, NRK, Yle and DR have all piloted AI video for news graphics and sports production during 2025–2026 and must now choose whether internal pipelines should be built on alternatives such as Runway, Veo or open source. Stim, Tono, Kopinor and Bonus Copyright Access — which have taken a hard line on tariff demands against generative video — see their position validated: OpenAI's withdrawal shows that the rights exposure itself became too large even for the biggest player. Norway's Kunstnernettverket and Sweden's KLYS gain heavy arguments ahead of upcoming legislative and contract negotiations. And for IMY and Datatilsynet, who will apply the AI Act this summer, the Sora case becomes concrete precedent material for the discussion of transparency, training data and product liability in generative media.

April 24, 2026
Models

OpenAI ships GPT-5.5 — a step closer to the "super app" vision, less than two months after GPT-5.4

On April 23, OpenAI unveiled GPT-5.5, describing it as its "smartest and most intuitive model yet". The model is rolling out to Plus, Pro, Business, and Enterprise subscribers across ChatGPT and Codex, with GPT-5.5 Pro reserved for the three top tiers. OpenAI says GPT-5.5 is stronger at data analysis, coding, computer use, deep research, and producing documents and spreadsheets — and consistently outperforms both prior OpenAI models and comparable offerings from Google and Anthropic on benchmarks. President Greg Brockman tied the launch explicitly to the company's "super app" strategy, which aims to consolidate ChatGPT, Codex and the forthcoming AI browser into a single enterprise service: "What is really special about this model is how much more it can do with less guidance". Microsoft confirmed that GPT-5.5 is available the same day in Microsoft Foundry. For Swedish and Nordic enterprise customers — who reach OpenAI models through Azure North Europe and through AI Sweden's Google-funded skills programme — the velocity (less than nine weeks from GPT-5.4 to GPT-5.5) pushes procurement to shift from project orders to rolling model subscriptions, and raises pressure on WASP and Sintef to keep Nordic applied AI research relevant in a market where frontier models update faster than tender cycles.

Labour market

Freshfields and Anthropic sign multi-year deal — Claude rolled out to 33 offices, adoption up 500 percent in six weeks

On April 23, global law firm Freshfields and Anthropic announced a multi-year collaboration agreement to co-build AI-driven legal workflows. The deal commits Freshfields to a firm-wide deployment of the Claude suite across all 33 of its offices, covering every practice group and business service — a breadth well beyond the pilot-sized projects that have been the norm in legal AI so far. 5,700 Freshfields employees already have access to Claude via the firm's proprietary general AI platform, and usage has risen roughly 500 percent in the first six weeks. The agreement is paired with early access and testing of Thomson Reuters' next-generation CoCounsel Legal, rebuilt on Anthropic technology. RELX shares (the LexisNexis parent and Thomson Reuters competitor) fell on the news. For Swedish and Nordic law firms and corporate legal teams — from Mannheimer Swartling and Vinge to Wiersholm and Schjødt — the deal resets the baseline: when a competitor of Freshfields' size standardises on Claude across its entire office network, it becomes harder to justify internal AI-pilot pauses, and pressure grows on IMY and Datatilsynet to issue clear guidance on legal privilege and AI logging.

Debate

Anthropic admits three bugs degraded Claude for months — resets usage limits for all paying subscribers

On April 23, Anthropic published a technical post-mortem acknowledging that three separate changes in Claude's product stack — not the model itself — explain weeks of user complaints about weaker coding and agentic performance. First, on March 4 the company lowered the default "reasoning effort" setting in Claude Code from high to medium to reduce UI latency, visibly weakening the model on complex tasks — a choice reverted on April 7. Second, a caching bug shipped on March 26 cleared the "thinking" history on every turn instead of once after an hour of idleness, leaving the model forgetful and repetitive; the fix landed for Sonnet 4.6 and Opus 4.6 on April 10. Third, on April 16 Anthropic added a system-prompt instruction capping text between tool calls at 25 words and final responses at 100 words to reduce Opus 4.7 verbosity — the change caused a 3 percent drop in coding evaluations and was reverted on April 20. Anthropic has now reset usage limits for all paying subscribers as of April 23, compensating for wasted tokens and friction. For Swedish and Nordic consultancies and tech firms — Volvo IT, Ericsson Digital Services, KTH Innovation, Sintef Digital, Tietoevry — that have built delivery pipelines on Claude Code, the post-mortem has two implications: log and cost reviews for March and April should be run to correctly attribute productivity gaps, and vendor transparency on system-prompt changes must now enter procurement and SLA conversations.
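The suggested log review can be sketched in a few lines: flag token usage that falls inside the three regression windows stated in the post-mortem, so spend and productivity dips are attributed to the right cause. The log format here (per-day token totals) is hypothetical — adapt it to whatever your billing exports actually look like:

```python
# Flag usage that falls inside the three publicly stated regression
# windows, so March/April cost reviews attribute gaps correctly.
from datetime import date

REGRESSION_WINDOWS = [                       # (start, end) inclusive, per the post-mortem
    (date(2026, 3, 4), date(2026, 4, 7)),    # lowered default reasoning effort
    (date(2026, 3, 26), date(2026, 4, 10)),  # caching bug clearing "thinking" history
    (date(2026, 4, 16), date(2026, 4, 20)),  # system-prompt length caps
]

def affected(day: date) -> bool:
    """True if the given day falls inside any regression window."""
    return any(start <= day <= end for start, end in REGRESSION_WINDOWS)

usage = [  # hypothetical daily token totals from a billing export
    (date(2026, 3, 20), 1_200_000),
    (date(2026, 4, 12), 900_000),
    (date(2026, 4, 18), 1_500_000),
]

flagged = sum(tokens for day, tokens in usage if affected(day))
print(f"Tokens consumed inside regression windows: {flagged:,}")
```

A review of this shape separates "the model got worse" from "our prompts got worse" — the distinction that should anchor any compensation or SLA conversation with the vendor.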

Regulation

OpenAI open-sources Privacy Filter — on-device PII redaction, 96 percent F1 on benchmark, Apache 2.0

On April 22, OpenAI released Privacy Filter, an open-weight model with 1.5 billion total parameters (50 million active) for detecting and masking personally identifiable information (PII) in unstructured text. It covers eight PII categories — names, addresses, email addresses and others — and achieves a 96 percent F1 score on the PII-Masking-300k benchmark. The key design choice: the model runs locally on the user's machine without sending data out, and is shipped on Hugging Face and GitHub under Apache 2.0 so organisations can fine-tune or embed it freely. VentureBeat, Help Net Security and Bloomberg Law frame the launch as OpenAI's direct response to the growing problem of employees pasting sensitive data into ChatGPT, DeepSeek and similar services — a behaviour that Nordic employers and regulators are now pushing back on as the EU AI Act becomes applicable on August 2, 2026. For IMY — which received increased budget appropriations from the Swedish government in the spring amendment budget to build AI supervisory capacity — Privacy Filter becomes a concrete reference model that the public sector (regions, municipalities, Försäkringskassan) can run locally on its own systems without data leaking to US clouds. The same applies to Datatilsynet, Nkom and Norway's AI Act, which is scheduled to enter into force in late summer: an Apache 2.0-licensed PII masker lowers the bar for GDPR-compatible AI pipelines across Nordic banks (SEB, DNB, Nordea), healthcare (Karolinska, Sahlgrenska, Sintef Helse) and municipal administration.
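The on-device pattern is the important design point: detection and masking run locally, so nothing leaves the machine. A deliberately simplified illustration of that pattern — regexes standing in for the released model's learned classifier, and covering just two of the eight PII categories:

```python
# Illustrative only: the released model is a learned classifier, but the
# local redaction pattern it enables can be sketched with regexes for two
# PII categories (emails and phone-like numbers). Nothing here leaves the
# local process — which is the point of the design.
import re

PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with its category label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

sample = "Contact anna.berg@example.se or call +46 70 123 45 67."
print(mask_pii(sample))
```

A production deployment would swap the regexes for the model's token-level predictions, but the surrounding plumbing — text in, redacted text out, no network calls — stays the same, which is what makes it attractive for municipalities and hospitals.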

Main essay and eleven essays on the AI shift

Begin with the main essay if you want the broad synthesis. The shorter essays then deepen different parts of the same shift: industry, politics, AGI, robotics, transformation, debate, intelligence and meaning.

Main essay
The civilisational shift
A longer synthesis on why AI is not just changing tools and professions, but pressing forward a new economic, social and existential contract.
Current · April 17, 2026
The memory that is missing
Claude Opus 4.7 spent 54,000 words circling a question it had already solved. Anthropic's new system card reveals an absence that is measurable — and a technical question we can build our way out of.
Current · April 8, 2026
24 hours in April
On April 6, OpenAI published a policy paper on superintelligence. On April 7, Anthropic refused to release its best model. Two signals in 24 hours — the same message.
Essay 1
Why AI is not Excel
A conceptual text on the difference between ordinary rule automation and AI as support in interpretation, analysis and other applied cognitive work.
Read →
Essay 2
What I see from the factory floor
A field-based text on how AI is already being used in live improvement and analysis workflows — and where the technology still falls short.
Read →
Essay 3
What policy misses
On why the tempo, the distribution of responsibility and the level of concrete delivery in Swedish AI policy still lag behind the scale of the problem.
Read →
Essay 4
AGI and what it actually means
A sober overview of definitions, uncertainty, safety and why AGI is already a planning question without being a finished conclusion.
Read →
Essay 5
When costs are pressed down
On why faster cost falls in certain domains can reshape the economy — but do not automatically create justice, security or abundance for all.
Read →
Essay 6
The turbulent years
A scenario exercise for 2026–2031: three possible waves, the signals that matter and why the Nordics have both strengths and risks.
Read →
Essay 7
After work
On meaning, identity and which institutions can bear a society where less human wage labour is needed to keep production going.
Read →
Essay 8
The physical front
On robotics, the geopolitical automation race and why anyone watching only software misses half the shift.
Read →
Essay 9
The dangerous gap in the AI debate
On the illusion of AI as "just a tool", why historical comparisons fall short and why 15 years is too short a time for societal transformation.
Read →
Essay 10
Why AI needs emotions
On emotionally weighted memory consolidation, predictive coding and why intelligence without feeling for what matters is not intelligence.
Read →
Essay 11
You haven't tested AI
On the chasm between the free version and AI's full capacity — and why anyone who has only tested the demo version does not know what they do not know.
Read →
Essay 12
AI shrinkflation
When you pay the same price for less intelligence. Anthropic's Claude Opus 4.6 stands accused of systematic quality degradation — exposing a structural tension across the entire AI industry.
Read →
Essay 13
The plumber and the playhouse
Why everyone is playing with AI agents — and almost nobody is building anything. On the gap between installing technology and defining what it should solve.
Read →

About AI-skiftet

AI-skiftet is an independent essay project on how AI is changing work, industry, politics and society.

The perspective is deliberately industry-near. The texts do not primarily come from seminar rooms or general trend-spotting, but from work in live production environments, improvement projects and conversations about what the technology is actually changing when quality, economics and responsibility matter.

The ambition is twofold: to raise the level of the Swedish AI debate and to make the arguments more testable. Therefore the texts distinguish as far as possible between observation, inference, scenario and normative proposal, and each essay has its own source notes.

Rolf Skogling — mechanical engineer and consultant in industrial optimisation. Works with AI in live operations across Nordic and European manufacturing industry.

Sources and reference page →