Glossary of terms

This glossary covers the key terms you'll encounter throughout the programme. You don't need to memorise them before you start; they're explained in context as they come up. This page is here as a reference if you want to look something up along the way.

A

Agentic AI
AI systems that go beyond generating text or images and take actions on your behalf. This might mean browsing the web, clicking buttons, filling in forms, or completing multi-step tasks across different applications. Agentic AI is still relatively early-stage and works best for straightforward, supervised tasks. Covered in Thing 21 and Thing 22.
AI model
The trained system that powers an AI tool. When you use ChatGPT, the model behind it might be GPT-4o or GPT-5. When you use Claude, the model is one of Anthropic's Claude models. Think of the model as the engine and the product as the car: you interact with the product, but the model does the thinking.

B

Bias (in AI)
The tendency of AI systems to reflect and sometimes amplify the biases present in their training data. Because models learn from content created by humans, they may default to stereotypical depictions, underrepresent certain groups, or make assumptions that reflect historical prejudices rather than reality. Covered in Thing 16.

C

C2PA
Coalition for Content Provenance and Authenticity. A technical standard for attaching secure, tamper-evident metadata to AI-generated images (and other media) that records how they were created. Think of it as a digital provenance label that helps establish whether content is real or synthetic. Covered in Thing 9.
Chain-of-thought prompting
A technique where you ask the AI to work through a problem step by step, showing its reasoning as it goes. This tends to improve accuracy on complex tasks because it forces the model to build its answer logically rather than jumping straight to a conclusion. Covered in Thing 3.
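The idea can be sketched in a few lines. This is an illustration only: the question is made up, and in practice you would paste the prompt into a chatbot rather than build it in code.

```python
# Sketch of chain-of-thought prompting: the same question, asked two ways.
question = "A jacket costs £60 after a 25% discount. What was the original price?"

# Zero-shot version: just the question.
plain_prompt = question

# Chain-of-thought version: ask the model to show its reasoning first.
cot_prompt = (
    f"{question}\n\n"
    "Work through this step by step, showing your reasoning, "
    "and only then state the final answer."
)

print(cot_prompt)
```

The only difference is the added instruction, but on multi-step problems it tends to produce noticeably more reliable answers.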
Chatbot
An AI tool you interact with through conversation, typing messages and receiving text responses. ChatGPT, Claude, and Gemini are all chatbots, though they can do much more than just chat.
Closed-source model
An AI model whose internal workings are not publicly available. You can use it through a product or API, but you cannot download, inspect, or modify the model itself. OpenAI's GPT models and Anthropic's Claude models are closed-source. The opposite is an open-source model.
Context window
The amount of text an AI model can process at once, measured in tokens. A larger context window means the model can work with longer documents or remember more of your conversation. If you hit the limit, the model starts forgetting the earliest parts of your conversation.
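A rough sense of the numbers involved: a common rule of thumb for English is about four characters per token, so you can estimate whether a document will fit. Both figures below are illustrative, not exact; real limits vary by model.

```python
# Rough illustration of the context window. "4 characters per token" is a
# rule of thumb for English, and the 128,000-token limit is just an example.
CHARS_PER_TOKEN = 4
CONTEXT_WINDOW = 128_000

def estimate_tokens(text: str) -> int:
    """Very rough token estimate for English text."""
    return max(1, len(text) // CHARS_PER_TOKEN)

document = "word " * 150_000        # a very long document: 750,000 characters
needed = estimate_tokens(document)

print(f"Estimated tokens: {needed:,}")
if needed > CONTEXT_WINDOW:
    print("Too long: the model would start losing the earliest parts.")
else:
    print("Fits within the context window.")
```

Here the document comes out at roughly 187,000 tokens, well over the example limit, which is why very long conversations or documents get truncated from the beginning.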

D

Deepfake
Synthetic media, typically images, video, or audio, created using AI to convincingly depict someone doing or saying something they never did. The term now covers any AI-generated media designed to look or sound like a real person. Covered in Thing 9 and Thing 10.
Deep research
A category of AI tool that goes beyond simple search to produce structured, multi-page reports with citations. Unlike a standard chatbot response, deep research tools browse multiple sources, follow links, cross-reference findings, and take several minutes to produce their output. Covered in Thing 8.
Diffusion model
The type of AI architecture behind most current image generators. Diffusion models learn to create images by being trained on a process of adding noise to images and then reversing it. When you give one a text prompt, it starts with random noise and gradually refines it into an image guided by your description.

F

Few-shot prompting
A prompting technique where you provide the AI with one or more examples of the kind of output you want, so it can follow the pattern. "Here are three examples of how I'd like this formatted; now do a fourth one in the same style." Covered in Thing 3.
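The pattern is easy to see laid out. This sketch builds a few-shot prompt from two made-up examples; the task and wording are invented for illustration.

```python
# Sketch of a few-shot prompt: worked examples first, then the new item.
examples = [
    ("budget meeting moved to Thursday", "Update: budget meeting is now on Thursday."),
    ("server down for maintenance tonight", "Notice: server maintenance this evening."),
]
new_item = "office closed on Friday for the holiday"

lines = ["Rewrite each note as a short announcement, following the examples.", ""]
for note, announcement in examples:
    lines.append(f"Note: {note}")
    lines.append(f"Announcement: {announcement}")
    lines.append("")
lines.append(f"Note: {new_item}")
lines.append("Announcement:")

prompt = "\n".join(lines)
print(prompt)
```

Ending the prompt at "Announcement:" invites the model to continue the established pattern, which is the whole trick.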
Fine-tuning
The process of taking an existing AI model and training it further on a specific dataset to make it better at a particular task or domain. This is how a general-purpose model can be adapted for medical, legal, or other specialised uses.
Free tier
The basic, no-cost level of access that many AI tools offer. Free tiers typically come with usage limits and may use your data for model training. Throughout this programme, every recommended tool has a free tier sufficient to complete the activities.

H

Hallucination
When an AI generates content that sounds confident and plausible but is partially or entirely fabricated. This might be an invented statistic, a non-existent source, or a factual claim that's simply wrong. Hallucinations are a fundamental feature of how current language models work, not a bug that's about to be fixed. Covered in Thing 15.

L

Large language model (LLM)
The type of AI that powers tools like ChatGPT, Claude, and Gemini. LLMs are trained on enormous amounts of text data and generate responses by predicting which word (strictly, which token) is most likely to come next, based on patterns in their training data. They don't "know" things the way people do; they construct responses that sound like plausible answers.
Local AI
Running an AI model on your own computer rather than using a cloud-based service. With local AI, your data never leaves your machine. The trade-off is that local models are generally less capable than the best cloud-based ones, because your laptop has less computing power than a data centre. Covered in Thing 22.

M

MCP (Model Context Protocol)
An open standard, introduced by Anthropic, for connecting AI tools to external applications and data sources. MCP allows a chatbot like Claude to interact with other software through conversational commands rather than requiring you to switch between applications. Covered in Thing 21.

N

Negative prompt
In image generation, an instruction telling the AI what not to include in the image. Not all tools handle negative prompts in the same way, but they're a useful way to refine your results. Covered in Thing 9.
No-code / vibe coding
Building functional applications, tools, or websites by describing what you want in plain language, with AI writing the code for you. You don't need to understand or even see the code. The term "vibe coding" was coined by AI researcher Andrej Karpathy in 2025. Covered in Thing 20.

O

Open Badge
A digital credential that provides verified evidence of learning or achievement. Each Thing in this programme has an associated Open Badge, which you can earn by submitting your activity output via cred.scot. Open Badges can be shared on LinkedIn, included in a portfolio, or referenced in professional reviews.
Open-source model
An AI model whose code and trained weights have been released publicly, allowing anyone to download, run, inspect, and modify it. Meta's Llama, Mistral, and DeepSeek are prominent open-source models. The opposite is a closed-source model. Covered in Thing 22.

P

Parameters
The internal learned connections in a neural network; roughly, the numbers that encode everything the model has learned during training. Model size is measured in parameters: "7B" means 7 billion parameters. More parameters generally means a more capable model, but also one that is more demanding on hardware.
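The hardware demand is simple arithmetic. This back-of-envelope sketch assumes 2 bytes per parameter (16-bit weights); actual requirements vary with the storage format.

```python
# Back-of-envelope memory estimate: parameters x bytes per parameter.
# Assumes 16-bit (2-byte) weights; real requirements vary by format.
def memory_gb(num_params: float, bytes_per_param: int = 2) -> float:
    return num_params * bytes_per_param / 1e9

print(f"7B model at 16-bit:  ~{memory_gb(7e9):.0f} GB")   # ~14 GB
print(f"70B model at 16-bit: ~{memory_gb(70e9):.0f} GB")  # ~140 GB
```

This is why a 7B model is plausible on a well-specced laptop while a 70B model generally is not, and why quantisation (below) matters.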
Prompt
The text you type into an AI tool to tell it what you want. A prompt can be anything from a single sentence to several paragraphs of detailed instructions. The quality of your prompt has a significant effect on the quality of the response. Covered in Thing 3.
Prompt engineering
The skill of writing clear, well-structured prompts to get better results from AI tools. Despite the name, it's not technical; it's closer to the skill of writing a good brief or giving clear instructions to a colleague. Covered in Thing 3.

Q

Quantisation
A technique for compressing an AI model to use less memory, with a modest trade-off in quality. A quantised version of a large model might run on hardware that couldn't handle the full version. Covered in Thing 22.
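A toy illustration of the idea: mapping 32-bit floating-point weights to 8-bit integers with a shared scale. Real quantisation schemes are considerably more sophisticated, but the memory saving (four bytes down to one per weight) and the small loss of precision are the point.

```python
# Toy quantisation: squeeze floats into 8-bit integers via a shared scale.
weights = [0.42, -1.30, 0.07, 2.15, -0.88]

scale = max(abs(w) for w in weights) / 127        # map the range onto int8
quantised = [round(w / scale) for w in weights]   # small integers, 1 byte each
restored = [q * scale for q in quantised]         # approximate originals

for w, r in zip(weights, restored):
    print(f"{w:+.2f} -> {r:+.4f}")
```

The restored values are close to, but not exactly, the originals: that small gap is the "modest trade-off in quality".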

R

RAG (Retrieval-Augmented Generation)
A technique that reduces hallucinations by giving the AI access to a specific set of source documents to draw from, rather than relying purely on its training data. When an AI tool searches the web before answering, or when you upload a document for it to analyse, that's a form of RAG. Covered in Thing 15.
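The structure of RAG can be sketched very simply: find the most relevant source, then put it in the prompt. This example uses crude word overlap on three made-up documents; real systems use semantic search over far larger collections, but the shape is the same.

```python
# Minimal RAG sketch: retrieve the best-matching source, then build a
# prompt that tells the model to answer from that source only.
documents = {
    "holiday policy": "Staff receive 28 days of annual leave per year.",
    "expenses": "Claims must be submitted within 30 days with receipts.",
    "remote work": "Employees may work remotely up to three days a week.",
}

def retrieve(question: str) -> str:
    """Pick the document sharing the most words with the question."""
    q_words = set(question.lower().split())
    return max(documents.values(),
               key=lambda text: len(q_words & set(text.lower().split())))

question = "How many days of annual leave do staff get?"
source = retrieve(question)

prompt = (f"Answer using only this source:\n{source}\n\n"
          f"Question: {question}")
print(prompt)
```

Grounding the answer in a retrieved source is what makes the model far less likely to invent a figure.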

S

Speech-to-text (transcription)
AI that converts spoken words into written text. Modern speech-to-text systems can handle accents, background noise, and natural speech patterns with accuracy rates above 95% in good conditions. Covered in Thing 10.

T

Text-to-speech
AI that converts written text into spoken audio. Modern AI text-to-speech generates speech from scratch rather than stitching together recorded fragments, producing voices that are often indistinguishable from human recordings. Covered in Thing 10.
Token
The unit AI models use to process text. A token is roughly three-quarters of a word in English. Tokens matter because they determine how much text a model can process at once (context window) and are often used for billing on paid tiers.
Training data
The enormous dataset of text, images, audio, or other content that an AI model learns from during its training process. The content and biases of training data directly shape what the model can and cannot do, and what assumptions it makes. Covered in Thing 9, Thing 15 and Thing 16.

V

Voice cloning
AI technology that can create a convincing replica of a specific person's voice from a relatively short audio sample. The cloned voice can then be used to generate new speech that sounds like that person. Covered in Thing 10.

Z

Zero-shot prompting
Giving an AI a direct instruction without any examples; just telling it what you want. This is how most people use chatbots by default. It works well for clear, straightforward tasks but can struggle with more nuanced or specific requirements. Covered in Thing 3.