Thing 3

Prompt engineering — how to talk to AI

Last reviewed: March 2026 · 30–45 minutes

The name makes it sound more technical than it is. Prompt engineering is really just the skill of giving AI clear, well-structured instructions, the same skill you'd use when briefing a colleague or leaving a note for someone covering your work. The difference is that AI is unusually literal. It does exactly what you ask, which means the quality of what you ask matters enormously.

In Thing 2, you discovered that the conversation after your first message is where much of the value lives. That's still true; iteration is one of the most important habits you can build. But there's a natural follow-up question: what if the first message were better to begin with? That's what this Thing is about.

Most people, when they first use a chatbot, type something short and vague, the same way they'd type a Google search. And they get back something generic in return. That's not because the tool is limited. It's because the tool had almost nothing to work with. Give it more, and the results change dramatically.

This Thing will give you a small number of practical techniques that improve almost every interaction with AI. None of them are complicated. All of them are things you can start using immediately and keep using throughout the rest of this programme.


Why the way you ask matters

[Image: a person typing a prompt into an AI chatbot on a laptop screen]
The quality of your prompt determines the quality of the response you get back.

Here's a quick demonstration. Imagine you need help writing an email to a client about a project delay. You could type:

Write me an email about a project delay.

You'll get something back. It'll be competent, generic, and almost certainly not what you'd actually send. The AI has no idea who you are, who the client is, what the project is, why it's delayed, or what tone you need to strike. So it guesses, and it guesses in the safest, most generic way possible.

Now compare that with:

I'm a project manager at a small marketing agency. I need to email a long-standing client to let them know their website redesign will be delivered two weeks late. The delay is because of a supplier issue, not our fault, but I don't want to sound like I'm making excuses. The client is generally reasonable but has been pushing hard on this deadline. I'd like the tone to be professional, honest, and reassuring. Acknowledge the delay, explain briefly, and focus on what we're doing to minimise the impact. Keep it to about 150 words.

That second prompt gives the AI everything it needs to produce something genuinely useful on the first try. And if it's not quite right, you're starting from a much better place for your follow-up conversation.

The difference isn't magic. It's just clarity.


Four things that make a prompt work

You don't need a rigid formula to write good prompts, and you'll develop your own style over time. But when you're getting started, it helps to think about four elements: context (who you are and what situation the output is for), the task itself (stated specifically rather than vaguely), constraints (tone, length, audience, format), and an example of the kind of output you want, when you have one. Not every prompt needs all four, but the more of them you include, the better your results tend to be.


Some prompting habits worth building

Beyond the four elements, a few general habits make a noticeable difference.

Start a new conversation for a new task. AI tools carry context through a conversation, which is useful when you're iterating on something. But leftover context from a previous task can bleed into a new one and produce odd results. When you're moving to a genuinely different task, start fresh.
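
If you're curious what "carrying context" means under the hood: in tools that call a model programmatically, the conversation is just a list of previous messages sent along with each new request, and starting a new conversation simply means starting a new, empty list. A minimal sketch, assuming the anthropic Python package and an API key in your environment (the model name is a placeholder):

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# The "conversation" is nothing more than this growing list of messages.
history = [{"role": "user", "content": "Draft a short email to a client about a two-week project delay."}]
reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: substitute a current model name
    max_tokens=500,
    messages=history,
)
history.append({"role": "assistant", "content": reply.content[0].text})
# Any follow-up you send includes this history, which is why earlier context bleeds forward.

# A genuinely new task gets a fresh, empty history.
history = [{"role": "user", "content": "Outline an agenda for a half-day team away-day."}]
```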

Break complex tasks into steps. If you need a 2,000-word report with specific sections, don't ask for the whole thing in one go. Ask for an outline first, refine it, then ask for each section. You'll get better results and more control over the final output. This approach (sometimes called "prompt chaining") works because the AI can focus its attention on one thing at a time rather than trying to juggle everything at once.
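
If you want to see the shape of prompt chaining, here is a hedged sketch under the same assumptions as above (the anthropic package, an API key, and a placeholder model name; the ask helper is just an illustrative convenience). The point is that the outline from the first call becomes part of the prompt for the second:

```python
import anthropic

client = anthropic.Anthropic()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's text reply."""
    reply = client.messages.create(
        model="claude-sonnet-4-20250514",  # placeholder: substitute a current model name
        max_tokens=1000,
        messages=[{"role": "user", "content": prompt}],
    )
    return reply.content[0].text

# Step 1: ask only for the outline, then refine it before going further.
outline = ask(
    "Outline a 2,000-word report on flexible working for a non-specialist audience. "
    "Give me section headings with one sentence on what each section will cover."
)

# Step 2 onwards: feed the agreed outline back and ask for one section at a time.
introduction = ask(
    f"Using this outline:\n\n{outline}\n\n"
    "Write the introduction section, about 250 words, in plain English."
)
```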

Tell the AI what role to play. Starting a prompt with something like "You are an experienced HR manager" or "Act as a plain-English editor" isn't just a gimmick. It shifts the vocabulary, assumptions, and framing the AI uses throughout its response. This is particularly useful when you want specialist knowledge presented in a particular way.
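
In the chat window you can simply open your message with the role; in API-based tools the same instruction usually goes in a dedicated system prompt that applies to the whole conversation. A small sketch under the same assumptions as above (the role text is just an example):

```python
import anthropic

client = anthropic.Anthropic()

reply = client.messages.create(
    model="claude-sonnet-4-20250514",  # placeholder: substitute a current model name
    max_tokens=500,
    # The system prompt sets the role, vocabulary, and framing for every reply.
    system="You are an experienced HR manager who explains policy in plain English.",
    messages=[{"role": "user", "content": "Explain our new flexible-working policy to a sceptical team."}],
)
print(reply.content[0].text)
```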

Don't be afraid to be direct. AI doesn't have feelings. You don't need to soften your instructions or say please (though you can if you want; it won't make a difference either way). "This is too long. Cut it in half and make the language less formal" is a perfectly good follow-up message. Clarity is kindness when you're working with AI.

Say what you'll use the output for. There's a meaningful difference between "summarise this document" and "summarise this document; I need to brief my manager in a two-minute conversation." The intended use shapes what a good response looks like, and the AI can only take that into account if you tell it.


What prompt engineering isn't

It's worth clearing up a few misconceptions.

Prompt engineering isn't about finding secret magic words that unlock hidden capabilities. There's no cheat code. It's about clear communication, the same skill that makes you good at writing briefs, running meetings, or explaining things to colleagues.

It also isn't something you need to perfect before you start using AI. The iterative approach from Thing 2 still applies. A decent first prompt followed by a good conversation will almost always beat an agonised-over "perfect" prompt with no follow-up. The techniques in this Thing simply give you a better starting point.

And it isn't a fixed discipline. The specific techniques that work best shift as models improve. A year ago, many prompt engineering guides recommended elaborate workarounds for limitations that no longer exist. The fundamentals (context, clarity, specificity, examples) are durable. The specific tricks are not. Focus on the fundamentals.


Resources to explore

Anthropic's prompt engineering guide

Clear, practical, and written for non-technical readers. Covers everything from basic structure to advanced techniques. Most of the advice transfers well to other AI tools.

OpenAI's prompt engineering guide

Similar ground from OpenAI's perspective. Worth comparing with Anthropic's guide to see where the advice overlaps (most of it) and where it diverges.

Prompt engineering best practices (DigitalOcean)

A vendor-neutral overview of the core techniques including zero-shot, few-shot, and chain-of-thought prompting, with clear explanations of when to use each.


Activity: the same task, three ways

30–45 minutes · Any chatbot (ChatGPT, Claude, or Gemini)

You're going to write three versions of the same prompt, each one better than the last, and see how the quality of the response changes. This is the best way to internalise what makes a good prompt, because you'll see the difference with your own eyes.

  1. Choose a task. Pick something you could genuinely use help with, whether from your personal life or a made-up scenario. Ideas include drafting an email you've been putting off, writing a summary for colleagues, creating an outline for a presentation, or writing a social media post.
  2. Write a lazy prompt. Deliberately vague, the kind of thing most people type in a hurry. Something like "Write me a job description for a project officer" or "Help me write an email to my team about the new policy." Send it and save the response.
  3. Write a better prompt. Add context about who you are and what this is for. Be specific about what you want. Include at least one constraint (tone, length, audience, format). Send it and save the response.
  4. Write your best prompt. Go further. Add everything you think is relevant: context, specific task, constraints, audience, format, and if it makes sense, an example of the kind of output you're looking for. Don't hold back on detail. Send it and save the response.
  5. Compare and annotate. Put all three prompts and responses side by side. Note what the AI got right, what was generic, and what specific change in the prompt caused the biggest improvement in the response.

Privacy reminder: use personal examples or fictional scenarios, never actual work materials or confidential documents.

Your output: a document containing all three prompts, all three responses, and your annotations. This can be as simple as copying and pasting from the chatbot with your notes added in a different colour or between the sections.

The most useful part is identifying what you changed in the prompt and what effect it had on the output. That's the insight that carries forward to every future interaction with AI.

Bonus: iterate on your best version

If you have time, take the response from your best prompt and have a short conversation to refine it further, just as you practised in Thing 2. See how close you can get to something you'd actually use.


Claim your Open Badge

Submit your three-prompt comparison with annotations as evidence for your Thing 3 badge via cred.scot.


What's next

You now have two key skills: productive conversation with AI (Thing 2) and clear prompt writing (Thing 3). In Thing 4, we'll move beyond the chatbot interface and look at how AI is changing the way we search for and make sense of information.