Thing 2

How AI actually works (and how to work with it)

Last reviewed: March 2026 · 30–45 minutes

In Thing 1 you had your first real conversations with AI tools and started to get a feel for how they respond. You probably noticed that they're impressive, surprisingly fluent, and occasionally a bit odd. Before we go further, it's worth understanding what's actually going on behind the scenes.

This isn't a computer science lecture. You don't need to understand the engineering to use these tools well, any more than you need to understand internal combustion to drive a car. But a basic mental model of what AI is doing, and what it isn't doing, will change how you use it. It explains why AI sometimes gets things confidently wrong, why the way you phrase things matters so much, and why most beginners leave enormous value on the table by treating it like a search engine.

That last point is where this Thing gets practical. Once you understand that AI is a conversation, not a one-shot command, your results get noticeably better.


What's happening when you talk to a chatbot

[Image: an abstract illustration representing how AI language models process and generate text]
AI chatbots generate responses by predicting what text is most likely to come next, one word at a time.

When you type something into ChatGPT, Claude, or Gemini, it feels like you're talking to something that understands you. The reality is both less and more interesting than that.

Large language models (LLMs) work by prediction. They've been trained on enormous amounts of text and they've learned patterns in how language works. When you give them a prompt, they generate a response by predicting, one word at a time, what's most likely to come next given everything that's come before.
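If you're curious what "predicting one word at a time" looks like in practice, here's a deliberately tiny sketch in Python. It's an illustration, not a real language model: actual LLMs use neural networks trained on billions of documents, not a little word-count table like this. But the generation loop is the same basic idea: pick a likely next word, append it, repeat. (Skip this if code isn't your thing; it isn't required for anything that follows.)

```python
from collections import Counter, defaultdict

# A toy "language model": it only knows which word tends to follow
# which word in a tiny training text. Real LLMs learn far richer
# patterns, but the one-word-at-a-time generation loop is the same.
training_text = (
    "the cat sat on the mat . the dog sat on the rug . the cat saw the dog ."
)
words = training_text.split()

# Count which word follows which (a "bigram" table).
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def generate(start, length=6):
    """Generate text by repeatedly choosing the most likely next word."""
    out = [start]
    for _ in range(length):
        candidates = next_word_counts.get(out[-1])
        if not candidates:
            break  # no known continuation; stop
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

print(generate("the"))
```

Notice that this produces grammatical-looking word sequences without any idea what a cat or a mat actually is. That, in miniature, is why fluency and understanding are not the same thing.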

That's a simplification, but a useful one. It explains several things you'll have already noticed:

  • Why the writing is so fluent: the model has absorbed the patterns of vast amounts of well-written text.
  • Why it can be confidently wrong: it produces text that sounds plausible, with no built-in check that it's true.
  • Why your phrasing matters so much: different wording steers the prediction in different directions.


The most important thing most beginners miss

Most people starting out never discover this: you're not meant to stop at the first response.

Beginners tend to treat a chatbot like a search engine: type a question, read the answer, and either accept it or move on. But that's not how these tools work best. The first response is a starting point, not a final answer.

You can, and should, push back, redirect, and refine. Some examples:

  • "That's too formal. Can you make it more conversational?"
  • "Good structure, but point three isn't relevant to my context. Can you replace it with something about remote teams?"
  • "Can you make this shorter, about half the length?"
  • "I liked the second suggestion best. Can you develop that one further?"
  • "That's not quite what I meant. I'm looking for something more like [explanation]."

Each follow-up gives the AI more information about what you actually want, and the responses get better as a result. A five-message conversation almost always produces a better outcome than a single prompt, no matter how carefully you crafted it.

Think of it less like typing a query into Google and more like briefing a colleague. If you asked a colleague to draft something and the first version wasn't quite right, you wouldn't throw it away and start again with someone else. You'd say "this is good, but can you change this bit." That's exactly how to work with AI.


What AI is good at (and where it falls apart)

Having a rough sense of where these tools are strong and where they struggle saves a lot of frustration. This isn't an exhaustive list, but it covers the most useful territory for getting started.

Generally strong

  • Drafting and rewriting text (emails, reports, summaries, job descriptions)
  • Explaining complex topics in accessible language
  • Brainstorming and generating ideas
  • Structuring and organising information
  • Summarising long documents
  • Working with code (even if you're not a developer; more on this in later Things)

Generally weak

  • Complex reasoning and multi-step logic (it can lose the thread on problems that need sustained, careful thinking)
  • Very recent events (unless web search is enabled)
  • Anything about you personally (it doesn't know who you are)
  • Providing reliable sources (it may invent citations)
  • Counting things (letters in words, items in lists, all surprisingly unreliable)
  • Tasks requiring genuine understanding or judgment (it's pattern-matching, not thinking)

The sweet spot for professional use is usually in the "strong" column: tasks where you need well-structured text or ideas generated quickly, and where you can check and refine the output yourself. The more you use these tools, the more intuitive this becomes.


Resources to explore

These go a bit deeper into how AI works, all pitched at a non-technical audience.

How large language models work (YouTube, 5 min)

A short, visual explanation of what's happening inside an LLM. No jargon required.

Anthropic's guide to prompting Claude

Written for a general audience, and much of the advice applies to any chatbot. Worth bookmarking for Thing 3.

ChatGPT is bullshit (Ethics and Information Technology, 2024)

A provocative but readable academic paper arguing that what AI does is closer to "bullshitting" (speaking without concern for truth) than lying. A useful lens for understanding why verification matters.


Activity: push it, test it, break it

30–45 minutes · Any chatbot (ChatGPT, Claude, or Gemini)

This activity has two parts. In the first, you'll see how iterative conversation turns a generic AI response into something genuinely useful. In the second, you'll deliberately test where the AI stumbles.

Part 1: the conversation is the point

  1. Start a new conversation with your chosen chatbot and give it this prompt:

    Write me a short paragraph I could use at the start of a presentation to my team. The presentation is about why our team should start experimenting with AI tools in our daily work.

  2. Read what comes back. It'll probably be fine: competent, reasonable, a bit generic.
  3. Now have a conversation. Instead of accepting it and moving on, send at least four or five follow-up messages to shape the output:
    • Ask it to change the tone (more informal, more confident, more cautious)
    • Tell it something it got wrong or that doesn't fit your context
    • Ask it to make it shorter, or longer, or to emphasise a different point
    • Tell it what you liked and ask it to do more of that
    • Ask for three different versions to choose from
  4. Pay attention to how the output changes with each turn. Notice how much more useful the final version is compared to the first.

Part 2: find the edges

In the same conversation or a new one, deliberately test where the AI struggles. Try some of these:

  • Ask about yourself. "What do you know about me?" or "Where do I work?" See how it handles not knowing.
  • Ask about something very recent. "What happened in the news today?" Does it have web search, or does it tell you its knowledge has a cutoff?
  • Ask it to count letters. "How many letter Rs are in the word 'strawberry'?" is a famous test that AI has historically struggled with. Try it, and try a few of your own. Words like "nevertheless" or "accommodation" work well.
  • Ask for a source. "Can you recommend three academic papers about [a topic you know well]?" Then check whether those papers actually exist.
  • Ask it something niche that you're an expert on. This is the best test. You'll immediately see where it's confident but wrong.
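As an aside, the counting test is telling precisely because counting is trivial for ordinary code, which computes an exact answer rather than predicting a plausible-sounding one. If you want to check the chatbot's answers yourself, a line of Python per word does it (these are just the example words from above):

```python
# Ordinary code counts deterministically; a chatbot predicts a
# plausible-sounding answer instead of actually counting.
for word, letter in [("strawberry", "r"),
                     ("nevertheless", "e"),
                     ("accommodation", "o")]:
    print(word, "contains", word.count(letter), repr(letter))

# strawberry contains 3 'r'
# nevertheless contains 4 'e'
# accommodation contains 3 'o'
```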

Privacy reminder: use personal examples or fictional scenarios, never actual work materials or confidential documents.

Your output: copy or screenshot the conversation from both parts. Annotate it with your own observations: what surprised you, where the iterative conversation made a real difference, where the AI fell down, and anything that changed how you think about using these tools. This doesn't need to be polished. Bullet points and honest notes are fine.

Why this matters

This activity builds two habits. The first is iteration: learning to have a conversation rather than accepting the first thing that comes back. The second is healthy scepticism: knowing that AI will sometimes get things wrong, and developing an instinct for when to check.

Thing 3 will give you structured techniques for writing better prompts, but the mindset you develop here, that AI is a collaboration rather than a vending machine, is arguably more important.


Claim your Open Badge

Submit your annotated conversation from both parts of the activity as evidence for your Thing 2 badge via cred.scot.


What's next

In Thing 3, we'll build on what you've learned here with specific prompting techniques: practical, repeatable methods for getting better results from any AI tool. You've already seen that follow-up messages make a difference; now we'll look at how to get the first message right more often too.