Glossary

AI concepts explained in plain English β€” from the shop floor, not the boardroom.

This glossary exists so you can read the essays on AI‑skiftet without tripping over jargon. The explanations are deliberately down-to-earth β€” the goal is for you to understand what the concepts mean in practice, not to memorize a textbook. The list is updated continuously.

Basics

AI (Artificial Intelligence)
Basics
Umbrella term for systems that perform tasks traditionally requiring human thought β€” interpreting text, making decisions, identifying patterns. In today's debate it almost always means generative AI, but the concept is broader and includes everything from spam filters to self-driving cars.
Generative AI (GenAI)
Basics
AI that creates new content β€” text, images, code, audio, video β€” instead of just classifying or sorting. ChatGPT, Claude, Midjourney and Suno are all generative. The key point: they don't automate computation β€” they automate cognition. That's a qualitative difference.
Cognitive automation
Basics
The core of the shift: machines that do thinking, not just manual or computational work. Interpreting reports, writing analyses, making judgments, drafting documents. What makes AI fundamentally different from Excel, welding robots or ERP systems. Earlier automation replaced hands and spreadsheets β€” this one replaces parts of the head.
Neural network / Deep learning
Basics
The mathematical architecture behind modern AI β€” layer upon layer of computational nodes loosely inspired by the brain's synapses. "Deep learning" means the network has many layers. All large AI systems today β€” language models, image generators, robotic control β€” are built on neural networks.
Transformer / Transformer architecture
Basics
The architecture behind essentially all modern AI models. Introduced by Google in 2017, it solved a fundamental problem: how a machine should grasp context in long texts. The foundation for GPT ("Generative Pre-trained Transformer"), Claude, Gemini and others.
LLM (Large Language Model)
Basics
A large language model — the core of systems like Claude and GPT. Trained on vast amounts of text to understand and produce language. "Large" refers to billions of learned parameters. Like a person who has read the whole internet and can express themselves on most things — but who has never seen your specific factory. That's where RAG and fine-tuning come in.
SLM (Small Language Model)
Basics
Smaller language models β€” typically 1–10 billion parameters. Faster, cheaper, runnable locally. Often specialized via fine-tuning or distillation. Google's Gemma, Microsoft's Phi and Meta's Llama in smaller sizes are examples. These are what make AI on the shop floor practically feasible without a cloud service.
Token
Basics
The smallest unit a language model works with β€” roughly ΒΎ of an English word, slightly less in Swedish. When a model is said to have a "context window of 200,000 tokens", that's how much text it can keep in working memory at once: around 150,000 words, or a thick book.
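The arithmetic above can be sketched in a few lines of Python, using the rough three-quarters rule. This is a simplification: real tokenizers vary by language and by the text itself.

```python
# Rough token arithmetic, assuming ~3/4 of a word per token in English.
# The ratio is an approximation; real tokenizers vary.

def estimate_tokens(word_count: int, words_per_token: float = 0.75) -> int:
    """Estimate how many tokens a text of `word_count` words will use."""
    return round(word_count / words_per_token)

def fits_in_context(word_count: int, context_window: int = 200_000) -> bool:
    """Check whether a text fits in a model's context window."""
    return estimate_tokens(word_count) <= context_window

# A 150,000-word book is about 200,000 tokens -- right at the limit.
book_tokens = estimate_tokens(150_000)
```

For Swedish text, the words-per-token ratio would be somewhat lower, so the same book would cost more tokens.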
Prompt
Basics
The instruction you give an AI. Can be a simple question or a detailed brief with context, role, format and constraints. The quality of your prompt determines the quality of the response β€” just as a good work instruction produces better results than "do something good".
Hallucination
Basics
When an AI presents information that isn't true, in convincing language. It "makes things up" β€” not out of malice, but because it optimizes for coherent text, not for truth. Serious in professional use. You have to be able to verify.
Multimodal
Basics
A model that understands several types of input β€” text, image, audio, video, code β€” in the same conversation. Claude can, for example, analyze a photo of a machine's control panel, read the values, and suggest adjustments. That's multimodality in practice.
Deepfake
Basics
AI-generated audio, image or video that realistically mimics real people. Can be a politician's voice saying things they never said, or a video call with a "boss" who doesn't exist. The EU AI Act requires labelling of such content.

Models & training

Foundation model
Models
A large, generally trained model that can then be adapted to specific tasks β€” via fine-tuning, RAG or prompt engineering. GPT-4, Claude, Gemini and Llama are foundation models. Think of it as raw material: enormously capable, but only truly useful once it's shaped for a purpose.
Parameters / Weights
Models
The learned values in a model β€” the "dials" adjusted during training. A model with 70 billion parameters has 70 billion adjustable values. More parameters generally means more capacity but requires more memory and compute.
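As a back-of-the-envelope check of what those numbers mean in hardware terms (assuming 16-bit weights, i.e. two bytes per parameter):

```python
def model_memory_gb(parameters: float, bytes_per_parameter: int = 2) -> float:
    """Memory needed just to hold the weights (16-bit = 2 bytes each)."""
    return parameters * bytes_per_parameter / 1e9

# A 70-billion-parameter model at 16-bit precision:
memory = model_memory_gb(70e9)  # 140.0 GB -- far beyond a normal workstation
```

That is weights only; running the model needs additional memory on top.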
Fine-tuning
Models
You take a general model and specialize-train it on a defined body of material — an industry's documentation, a company's processes, a technical vocabulary. Like taking a generalist engineer and giving them six months of deep immersion in your specific process. They lose none of their breadth, but gain sharp depth.
LoRA (Low-Rank Adaptation)
Models
A technique for fine-tuning large models without retraining the whole thing. Instead, you adjust a small portion of the parameters. Makes fine-tuning feasible with limited hardware. Like swapping the lens on a camera instead of building a new camera.
RAG (Retrieval-Augmented Generation)
Models
The AI retrieves relevant documents before answering, instead of guessing from memory. You connect the model to your own data sources — manuals, drawings, process sheets — and it searches them for each query. The difference between answering an exam question open-book vs. closed-book.
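A minimal sketch of the idea in Python. The documents, the word-overlap search and the prompt format are all invented stand-ins; real systems use vector embeddings for the retrieval step:

```python
# A toy RAG pipeline: retrieve the most relevant document, then put it in
# front of the question so the model answers "open-book".

documents = [
    "Pump P-104: normal operating pressure 4.2 bar, alarm at 5.0 bar.",
    "Conveyor C-2 maintenance: lubricate bearings every 500 hours.",
    "Welding cell W-7: argon flow rate 12-15 liters per minute.",
]

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by how many query words they contain."""
    def score(doc: str) -> int:
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:top_k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Put the retrieved context in front of the question."""
    context = "\n".join(retrieve(query, docs))
    return f"Use only this context:\n{context}\n\nQuestion: {query}"

prompt = build_prompt("what is the alarm pressure for pump P-104?", documents)
```

The model then answers from the retrieved text instead of from memory, which is what makes the answer verifiable.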
Embedding / Vector embedding
Models
The technique of converting text (or images, audio) into numeric vectors that capture meaning, not just word order. "Dog" and "puppy" get similar vectors, while "dog" and "tractor" land far apart. The cornerstone of RAG systems β€” it's how the AI finds the right documents.
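A toy illustration in Python. The three-dimensional vectors are invented (real embeddings have hundreds or thousands of dimensions), but the geometry is the same: similar meaning gives similar vectors.

```python
import math

# Invented toy "embeddings" in 3 dimensions for illustration.
vectors = {
    "dog":     [0.9, 0.8, 0.1],
    "puppy":   [0.8, 0.9, 0.2],
    "tractor": [0.1, 0.2, 0.9],
}

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """1.0 = pointing the same way (similar meaning), near 0 = unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

dog_puppy = cosine_similarity(vectors["dog"], vectors["puppy"])
dog_tractor = cosine_similarity(vectors["dog"], vectors["tractor"])
# dog_puppy comes out much higher than dog_tractor
```

A RAG system does exactly this comparison between your question and every stored document, then hands the closest matches to the model.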
MoE (Mixture of Experts)
Models
A model architecture with several specialized "expert modules" of which only the relevant ones activate per query. DeepSeek V4 has a trillion parameters but activates only 37 billion at a time. Like a hospital with specialists — you see the orthopedist, not the entire staff.
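A toy router illustrates the principle. Real MoE models learn the routing during training; here a simple keyword rule and invented expert names stand in:

```python
# Invented "experts" and keyword lists; real routing is learned, not rule-based.
EXPERTS = {
    "maintenance": ["bearing", "vibration", "lubricate"],
    "quality": ["defect", "tolerance", "inspection"],
    "logistics": ["shipment", "pallet", "inventory"],
}

def route(query: str, top_k: int = 1) -> list[str]:
    """Activate only the expert(s) whose vocabulary matches the query."""
    def score(item):
        name, keywords = item
        return sum(word in query.lower() for word in keywords)
    ranked = sorted(EXPERTS.items(), key=score, reverse=True)
    return [name for name, _ in ranked[:top_k]]

active = route("Vibration readings suggest bearing wear on motor M-3.")
```

Only the activated expert does any work per query, which is what keeps a huge model cheap to run.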
Distillation
Models
A large, capable model teaches a smaller model to mimic its behaviour. The teacher generates examples, the student learns the patterns. Result: a small model that performs near the large one but needs a fraction of the resources. This is how the best SLMs are created.
Quantization
Models
Reducing the precision of a model's parameters β€” e.g. from 16-bit to 4-bit numbers β€” so it fits in less memory and runs faster. Some quality loss, but often surprisingly little. A crucial technique for running large models locally.
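The memory arithmetic is easy to sketch (the 70-billion-parameter example is illustrative):

```python
def quantized_memory_gb(parameters: float, bits: int) -> float:
    """Memory needed for the weights at a given precision."""
    return parameters * bits / 8 / 1e9

full = quantized_memory_gb(70e9, 16)      # 140.0 GB at 16-bit
quantized = quantized_memory_gb(70e9, 4)  # 35.0 GB at 4-bit
```

Going from 16-bit to 4-bit cuts memory to a quarter, which is the difference between "needs a server rack" and "runs on a well-equipped workstation".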
Diffusion model
Models
The architecture behind most AI image generators (Midjourney, DALL-E, Stable Diffusion), and now video as well. The model learns to gradually "denoise" a random image until it matches your description.
Synthetic data
Models
Data generated by AI rather than collected from reality. Used to train models when real data is expensive, sensitive or insufficient. One AI generates thousands of variants of, say, x-ray images or factory scenarios β€” and another AI is trained on them.
RLHFReinforcement Learning from Human Feedback
Models
Method for shaping a model's behaviour based on human feedback. People rate the responses β€” "this was good, this was bad" β€” and the model is adjusted. That's why Claude and GPT feel helpful: they have been trained to behave like good conversation partners.
Reinforcement learning
Models
Learning method where a model learns by trying and receiving reward or penalty. The basis of RLHF, but also of robotics, game AI (AlphaGo) and autonomous systems. Especially relevant for physical AI, where the model must learn to navigate real environments.
Emergent abilities
Models
Abilities that appear unexpectedly in a model once it becomes large enough — without having been explicitly trained for them. A model trained on text suddenly begins to reason about mathematics or solve logic problems it has never seen. Central to the debate about the path to AGI.
Benchmark
Models
Standardized test for comparing AI models' performance. MMLU tests broad knowledge, HumanEval tests coding, ARC tests reasoning. Useful as a rough measure β€” but no benchmark captures how well a model performs in your specific application.
Open-weight model
Models
A model whose weights are freely available to download and run yourself. Llama (Meta), Gemma (Google) and Mistral are examples. "Open" doesn't always mean completely free β€” licence terms vary. The opposite: closed models like GPT and Claude, accessed only via API or chat.

Use

AI agent
Use
An AI system that doesn't just answer questions but acts autonomously — plans, performs tasks, uses tools, searches for information, and iterates until the job is done. The difference from an ordinary chatbot is like the difference between asking someone for directions and asking someone to drive you there.
Agentic workflow
Use
A workflow in which AI agents collaborate β€” one gathers data, one analyzes, one writes the report, one quality-checks. Systems like Anthropic's "Managed Agents" automate whole chains of cognitive work, not just individual steps. The shift from "model as a service" to "agent as a service".
MCP (Model Context Protocol)
Use
Open protocol (developed by Anthropic) that lets AI models plug into external systems — databases, APIs, file systems, tools. It lets an agent query your ERP, read email or update a database. Think of it as a USB standard, but for AI integrations. A connector that fits everywhere.
Prompt engineering
Use
The craft of formulating instructions that yield the best possible result. Includes techniques like role-setting ("you are an experienced process engineer"), few-shot examples (showing what you want), chain-of-thought (asking the model to think step by step), and structured output. Not magic β€” structured communication.
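A sketch of how those techniques combine into one prompt. The role, the example and all the wording are invented for illustration:

```python
# Role + few-shot example + chain-of-thought + format request, in one string.

def compose_prompt(task: str) -> str:
    role = "You are an experienced process engineer."
    example = (
        "Example:\n"
        "Input: Vibration on motor M-3 has doubled since Tuesday.\n"
        "Output: Probable cause: bearing wear. Action: schedule inspection."
    )
    instructions = "Think step by step, then answer in the Output format above."
    return f"{role}\n\n{example}\n\n{instructions}\n\nInput: {task}"

engineered = compose_prompt("Coolant temperature on line 2 trending upward.")
```

Nothing clever is happening here; it is the same structured communication you would give a new colleague, written down.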
Context window
Use
The amount of information a model can "hold in its head" in a conversation. Measured in tokens. Claude has 200,000 tokens (about a full book); DeepSeek V4 handles 1 million. Larger window = more material without losing the thread.
Reasoning / thinking models
Use
Models that "think" step by step before answering β€” reasoning through the problem instead of giving a direct reply. OpenAI's o-series and Claude's extended thinking are examples. Better on complex problems, but slower and more expensive per query.
Physical AI / Embodied AI
Use
AI that operates in the physical world β€” robots, drones, autonomous vehicles, cobots on the shop floor. Models that understand the laws of physics well enough to control motion, grasp objects and navigate real environments. See the essay The physical front.
Digital twin
Use
A virtual copy of a physical asset β€” machine, production line or whole factory β€” updated in real time with sensor data. With AI, the twin can predict failures, simulate process changes and optimize operations without touching the physical equipment.
Vibe coding
Use
Creating software by describing what you want in natural language and letting AI write the code. No traditional programming required. The term was coined by Andrej Karpathy (co-founder, OpenAI) in February 2025. The line between "user" and "developer" is blurring.
Guardrails
Use
Built-in or externally imposed constraints that keep an AI from producing harmful, incorrect or unwanted content. Can be technical filters, policy instructions in the prompt, or separate monitoring systems. You don't want your customer agent promising discounts that don't exist.
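A minimal sketch of a rule-based guardrail. Real systems layer such filters with policy instructions and monitoring models; the forbidden phrases and the fallback reply here are invented:

```python
# Block replies a customer agent should never send, with a safe fallback.
FORBIDDEN_PHRASES = ["discount", "refund guaranteed", "free of charge"]

def check_reply(reply: str) -> tuple[bool, str]:
    """Return (allowed, reply-or-fallback)."""
    lowered = reply.lower()
    for phrase in FORBIDDEN_PHRASES:
        if phrase in lowered:
            return False, "Let me connect you with a colleague who can help."
    return True, reply

ok, text = check_reply("We can offer you a 40% discount right away!")
```

The filter runs after the model, so even a hallucinated promise never reaches the customer.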

Infrastructure

Cloud AI
Infrastructure
You use an AI running on someone else's servers (Anthropic, OpenAI, Google). Your data is sent there, processed, and the reply comes back. Upsides: always the latest model, no hardware of your own. Downside: your data leaves the building.
Local AI / On-premise AI
Infrastructure
The model runs on your own hardware — your computer, your server, your network. No data leaves the building. Requires enough memory (usually 32+ GB of RAM). Tools like Ollama and LM Studio make it accessible today. The cloud is renting a workshop. Local is owning your own. Full control, but you maintain it yourself.
Edge AI
Infrastructure
AI that runs directly in the device β€” a camera, a sensor, a robot β€” instead of sending data to the cloud. Faster response, no network dependency. Critical for real-time use: a quality camera on the line rejecting defective parts in milliseconds can't wait for a cloud reply.
GPU (Graphics Processing Unit)
Infrastructure
The graphics processor β€” originally designed for games, but perfect for AI computation because it performs thousands of parallel operations. NVIDIA dominates. Access to GPUs is one of the biggest bottlenecks in AI development β€” TSMC's chip capacity and ASML's lithography capacity set the pace.
Compute
Infrastructure
Umbrella term for the computational power required to train and run AI models. Compute is the AI world's oil: Amazon, Microsoft and Google are investing hundreds of billions of dollars in data centers. Access to compute determines who can build frontier models β€” and who becomes dependent.
API (Application Programming Interface)
Infrastructure
Programming interface — how your code or your system talks to an AI model. Instead of using a chat website, you send requests directly and automatically. That's how companies integrate AI into their systems and workflows.
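A sketch of what such a request looks like. The URL, model name and field names are placeholders, not any real provider's API:

```python
import json

# Build (but don't send) a request to a hypothetical AI endpoint.
def build_request(question: str) -> dict:
    return {
        "url": "https://api.example.com/v1/messages",
        "headers": {"Authorization": "Bearer YOUR_API_KEY"},
        "body": json.dumps({
            "model": "some-model-name",
            "messages": [{"role": "user", "content": question}],
        }),
    }

request = build_request("Summarize this maintenance report.")
# In real code you would POST this with an HTTP library and parse the
# JSON reply -- no chat website involved.
```

Each provider documents its own exact fields, but the shape is always roughly this: authenticate, name a model, send your content, get structured text back.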
Inference
Infrastructure
When the model is actually used β€” generating a reply from your input. Training happens once (and costs hundreds of millions). Inference happens every time you ask a question. Inference costs are falling dramatically β€” that's why the marginal cost of cognition is approaching zero.
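The economics in miniature, with invented prices:

```python
# Back-of-the-envelope inference cost. Prices are made-up examples;
# real per-token prices vary by provider and fall over time.

def query_cost(input_tokens: int, output_tokens: int,
               usd_per_million_in: float, usd_per_million_out: float) -> float:
    """Cost in dollars for one question-and-answer round trip."""
    return (input_tokens * usd_per_million_in
            + output_tokens * usd_per_million_out) / 1e6

# A 2,000-token question with a 500-token answer at $3 / $15 per million:
cost = query_cost(2_000, 500, 3.0, 15.0)  # about one and a half cents
```

Training is paid once; this tiny per-query cost is paid every time, and it is the number that keeps falling.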
Latency
Infrastructure
The time from when you send a request until the reply starts arriving. Cloud services have latency that depends on server distance. Local and edge models have lower latency. In industrial applications, latency can be the difference between catching a defect and missing it.

Society & future

AGI (Artificial General Intelligence)
Society
An AI able to perform any intellectual task at human level. The definition is contested and has become a political marker. Whatever the exact definition, we are moving toward systems with ever broader abilities β€” it's that movement that matters. See the essay AGI and what it actually means.
ASI (Artificial Superintelligence)
Society
An AI that surpasses human intelligence in every respect β€” including scientific creativity, social skill and strategic thinking. Still hypothetical, but timelines are shortening. The question is no longer if but when β€” and above all: who controls it.
Scaling laws
Society
The observation that AI models become predictably better as you increase data, compute and model size. That's why billions flow into data centers. The debate question: do the scaling laws hold, or will we hit a ceiling?
Exponential progress
Society
AI capabilities are accelerating. What took a year in 2023 takes months in 2025 and weeks in 2026. Humans think linearly by nature: we extrapolate yesterday's pace forward. Exponential progress means reality consistently outruns our expectations.
Zero marginal cost
Society
The cost of each additional unit of cognitive work approaches zero β€” just as the internet did for distributing information. Concretely: an ICCM focus diagram that normally takes 16 hours of expert work can be delivered in 40 minutes. See the essay When costs are driven down.
AI alignment / AI safety
Society
The research field working to make AI systems do what we actually want β€” not just what we happened to ask for. Increasingly important as systems become more capable. Anthropic was founded specifically with alignment as its core mission.
Existential risk / x-risk
Society
The risk that AI at AGI or ASI level causes irreversible harm. The leaders of Anthropic, OpenAI and DeepMind have all compared AI risk to pandemics and nuclear weapons. Split into misalignment (the AI doesn't do what we want) and misuse (the AI is deliberately used for harm).
Red teaming
Society
Systematically testing an AI by trying to make it behave dangerously or incorrectly β€” before it is released. Like penetration testing of software, but for intelligent systems. A cornerstone of responsible AI development.
Digital feudalism
Society
A scenario where a handful of tech companies control the cognitive infrastructure β€” the models, the data, the compute β€” and the rest of society becomes dependent. Medieval feudalism, but with servers instead of farmland. The counter: open models, local AI, political awareness.
Post-labor economy
Society
An economy where ever less human paid work is needed to keep production going. Not necessarily a utopia β€” it demands new answers about identity, meaning, distribution and social structure. See the essay After the job.
AI Act (EU AI Regulation)
Society
The EU legislation that classifies AI systems by risk level and regulates them accordingly. Bans some applications (social scoring), and demands transparency and documentation. An important framework β€” but designed for a reality that has already moved on. Main deadline: 2 August 2026.