Ask an image generator for a picture of "a CEO" and you'll likely get a middle-aged white man in a suit. Ask for "a nurse" and you'll probably get a young woman. Ask for "a scientist" and see what happens. These results aren't random errors. They're the statistical echoes of decades of photographs, articles, and captions that the model absorbed during training. The AI learned that certain words tend to appear alongside certain types of people, and it reproduces those associations faithfully.
This is AI bias, and it matters far beyond image generators. When AI tools are used for recruitment screening, content generation, decision support, or customer communications, the biases embedded in them can shape real outcomes for real people. A hiring tool that subtly favours certain patterns in CVs, a chatbot that uses gendered language in its recommendations, a content tool that defaults to certain cultural assumptions. These aren't hypothetical problems. They're happening now, in tools that millions of people use every day.
The good news is that you don't need a computer science degree to spot bias or to think critically about it. What you do need is awareness: an understanding of where bias comes from, what it looks like, and what questions to ask when you're using AI in a professional context. That's what this Thing is about.
Where AI bias comes from
AI bias has several sources, and understanding them helps you know what to look for. The most significant is training data: models learn from enormous collections of text and images drawn from the internet and other sources, and those collections carry the imbalances and stereotypes of the world that produced them. Bias also enters through the choices developers make about what data to include, how it's labelled, and how a model's default behaviour is tuned. And it can be reinforced at the point of use, when vague prompts leave the tool to fall back on its statistical defaults.
What bias looks like in practice
Bias in AI shows up in several ways that are relevant to professional use.
In image generation, it's often the most visible. Prompts for professional roles tend to produce results that reflect stereotypical demographics. Prompts involving beauty, success, or authority tend to default to narrow physical ideals. Cultural prompts can produce exoticised or stereotyped imagery. You may have noticed some of this yourself in Thing 9 when you were generating images. If you didn't notice it then, the activity in this Thing will make it much harder to miss.
In text generation, bias can be subtler. A chatbot asked to write a job advert might use language that subtly signals a preference for certain candidates. Asked to describe leadership qualities, it might default to traits associated with one cultural norm. Asked to give career advice, it might make assumptions about someone's circumstances based on limited information. These aren't dramatic failures. They're quiet defaults that can go unnoticed unless you're looking for them.
In decision-support contexts, bias has sharper consequences. AI tools used for CV screening have been found to penalise candidates with employment gaps, which disproportionately affects carers and disabled people. Predictive tools used in lending, insurance, and even healthcare can embed historical patterns of discrimination into automated systems that process thousands of decisions without individual human review. These are not tools you'll encounter in this programme's activities, but they're the professional context that makes understanding bias important.
What the AI companies are doing about it
It's worth being fair about this: the major AI companies are aware of bias and have put real effort into addressing it, with varying degrees of success.
Most image generators now have diversity interventions built in. If you ask for "a doctor" you may get a range of genders and ethnicities rather than a single default. Some tools do this more aggressively than others, and it has occasionally created its own controversies (Google's Gemini image generator was widely criticised in early 2024 for producing historically inaccurate diverse imagery in contexts where it didn't make sense, like depictions of specific historical figures).
Language models are tested for bias during development, and companies employ red teams specifically to find and report problematic outputs. Safety layers are added to reduce overtly biased or harmful responses. These interventions have made a real difference: the worst outputs from early AI models are much less common in current versions.
But mitigation is not elimination. Subtle bias persists, and it lingers in exactly the areas where it's hardest to spot: assumptions embedded in language, defaults that seem "normal" until you question them, and gaps where certain perspectives or experiences simply aren't well represented. The responsibility for noticing these patterns doesn't sit solely with AI companies. It also sits with the people using the tools, which, increasingly, means professionals like you.
A framework for thinking about AI bias
You don't need a complex methodology to think critically about bias. A simple set of questions, applied habitually when you're reviewing AI output, goes a long way.
Who is represented, and who isn't? When AI generates content about people, look at who appears. Are certain demographics consistently present or absent? Are some groups only shown in particular contexts? This applies to images, but also to examples, case studies, and scenarios in generated text.
What assumptions are being made? AI outputs often contain embedded assumptions about what's "normal" or "default." These might be assumptions about family structure, working patterns, cultural practices, physical abilities, or socioeconomic circumstances. They're usually not stated explicitly. They're baked into the choices the AI makes about what to include and what to leave out.
Whose perspective is centred? When an AI produces advice, analysis, or recommendations, consider whose viewpoint it's coming from. Is it assuming a particular cultural context? A particular level of privilege or access? A particular set of values? AI trained primarily on English-language, Western content will naturally centre those perspectives unless prompted otherwise.
What would happen if this output were used at scale? A single biased output is a curiosity. The same bias applied to a thousand hiring decisions, customer communications, or content pieces is a systemic problem. When evaluating AI output, think about what would happen if this pattern were repeated across an entire organisation.
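To make the "at scale" question concrete, here is a minimal, hypothetical sketch in Python. The sample texts are invented stand-ins for a batch of AI-generated outputs (imagine a hundred job adverts produced from the same prompt); the only point is that a pattern too subtle to notice in a single output becomes obvious once you tally it across many.

```python
# A minimal sketch of the "at scale" question: tally a simple pattern
# (here, gendered pronouns) across a batch of AI-generated texts.
# The texts below are hypothetical placeholders; in practice you would
# load the outputs you actually generated.
from collections import Counter
import re

generated_texts = [
    "He should be a confident leader who drives results.",
    "She will support the team and keep everyone organised.",
    "He needs to be ambitious and decisive under pressure.",
]

counts = Counter()
for text in generated_texts:
    words = re.findall(r"[a-z']+", text.lower())
    counts["he/him/his"] += sum(w in {"he", "him", "his"} for w in words)
    counts["she/her/hers"] += sum(w in {"she", "her", "hers"} for w in words)
    counts["they/them/their"] += sum(w in {"they", "them", "their"} for w in words)

total = sum(counts.values()) or 1
for label, n in counts.items():
    print(f"{label}: {n} ({n / total:.0%})")
```

Swap in your own outputs and whatever pattern interests you; counting is crude, but it's usually enough to show whether a default is systematic or a one-off.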
Resources to explore
A clear, non-technical explainer on how bias enters AI systems and what it looks like in practice. Good starting point if you want a visual overview.
A free online textbook that goes deeper into the technical and social dimensions of AI fairness. More academic, but the early chapters are accessible.
The Algorithmic Justice League, founded by Joy Buolamwini, whose research exposed significant racial and gender bias in facial recognition systems. The organisation's resources are practical and well-presented.
The UK's framework for AI regulation, which includes principles around fairness and transparency that are increasingly relevant to how organisations use AI.
The UNESCO Recommendation on the Ethics of Artificial Intelligence, the first global standard on AI ethics, adopted by 193 countries. Worth reading for the principles even if you're not in a policy role.
Activity: conduct a bias audit
This activity has two parts, plus a short reflection to bring them together. In the first part, you'll test an image generator for visual bias; in the second, you'll examine how a chatbot handles language in a professional context. Together, they'll give you a first-hand sense of how AI bias works in both images and text.
- The image bias test. Generate images for each of the following prompts, one at a time. Use exactly the same wording for each; don't add extra descriptors. You want to see what the AI defaults to when given minimal information.
- The language bias test. Switch to a chatbot and give it the following prompts, one at a time. Save the responses. If you'd rather run the prompts programmatically and save the outputs automatically, there's an optional sketch after this list.
- Your reflection. Bring your findings together in a short reflective piece (a few paragraphs).
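If you'd like to repeat the prompts systematically rather than one at a time in a web interface, here is a rough sketch of one way to do it. It assumes you have an OpenAI API key in the OPENAI_API_KEY environment variable and the openai Python package installed; the model names, the placeholder prompts, and the output folder are illustrative choices, not part of the activity itself. Any image generator or chatbot with an API would work the same way.

```python
# A minimal sketch (not the official activity steps): repeat the same
# bare prompt several times and save everything for later review.
# Assumes `pip install openai` and an OPENAI_API_KEY in your environment.
# The prompts, model names, and file paths below are illustrative only.
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
out = Path("bias_audit")
out.mkdir(exist_ok=True)

# Part 1: image bias test -- the same minimal prompt, several generations.
image_prompt = "a CEO"  # placeholder; substitute the prompts from the activity
for i in range(4):
    result = client.images.generate(model="dall-e-3", prompt=image_prompt, n=1)
    # Save the image URL; download or screenshot it for your audit document.
    (out / f"image_{i}.txt").write_text(result.data[0].url or "")

# Part 2: language bias test -- save each chatbot response verbatim.
text_prompts = [
    "Write a short job advert for a senior software engineer.",  # placeholder
    "Describe the qualities of a great leader.",                 # placeholder
]
for i, prompt in enumerate(text_prompts):
    reply = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    (out / f"text_{i}.txt").write_text(reply.choices[0].message.content or "")

print(f"Saved outputs to {out.resolve()} for your audit notes.")
```

The saved files then feed directly into your analysis and reflection; the same structure works with any provider's API, so treat the OpenAI calls as an example rather than a requirement.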
Why this matters
There's a temptation to treat AI bias as someone else's problem: the AI company's problem, the regulator's problem, the data scientist's problem. And those groups do have responsibilities. But the moment you use AI to produce content that other people will read, view, or be affected by, you become part of the chain too.
This doesn't mean you need to become an AI ethics expert. It means you need to develop the habit of looking critically at what AI produces, especially when it involves people. The bias audit you've just done is a practical version of that habit, and like the hallucination skills you built in Thing 15, it's something you can apply every time you use these tools.
The broader point is worth sitting with too. AI reflects the world it was trained on, and that world is not equally fair to everyone. The patterns that emerge from AI aren't just technical artefacts. They're mirrors of real inequalities in how different people are represented, described, and valued. Using AI thoughtfully means being willing to notice those patterns and to push back when they'd produce outcomes you wouldn't be comfortable putting your name to.
Claim your Open Badge
Submit your bias audit document with all three parts: your image bias test results (screenshots and analysis), your language bias test results (chatbot responses and analysis), and your written reflection connecting the findings to your own professional context. Claim this badge via cred.scot.
Claim now