You've described your interests, your work context, your creative ideas. You may have uploaded documents, images, or audio files. You've had conversations that, taken together, paint a surprisingly detailed picture of who you are and what you're thinking about.
What happened to that data (whether it was stored, how long it was kept, whether it was used to train future AI models, and who could access it) depends entirely on the specific tool you used, the tier you were on, and the settings you had enabled at the time.
If you haven't thought much about this until now, you're in good company. Most people don't. The sign-up process for AI tools is designed to be frictionless: create an account, start chatting. The privacy implications are buried in terms of service that almost nobody reads.
This isn't about being paranoid or avoiding AI tools altogether. It's about understanding the trade-offs so you can make deliberate decisions rather than default ones.
How AI tools handle your data
To understand AI privacy, you need to understand a basic tension in how these tools are built and funded.
Training an AI model requires enormous amounts of data and computing power, costing hundreds of millions of pounds in infrastructure. When you use a free tier, you're not paying for that with money. You may, depending on the tool, be paying for it with your data. Your conversations, prompts, and uploads can be used as training material to improve the next version of the model. This isn't a secret (it's in the terms of service), but it's not exactly advertised on the sign-up page either.
The picture varies significantly by provider and by tier. Here's a simplified overview of where the major platforms stood as of early 2026.
There are a few important things to notice about this picture.
First, the details matter more than the brand. "I use Claude because it's more private" was a reasonable shorthand in 2024. By 2026, it's an oversimplification. Every major provider has shifted policies at least once, and the direction of travel for most has been towards using more data, not less.
Second, there's a consistent pattern: free tiers offer the weakest privacy protections, paid consumer tiers are somewhat better, and enterprise or team plans offer the strongest guarantees. This is not a coincidence. When you're not paying with money, data is part of the business model.
Third (and this is easy to miss), opting out of training doesn't mean your data disappears. Most providers still retain conversations for some period, for safety monitoring, abuse detection, or legal compliance. "Not used for training" and "not stored" are very different things.
What data you're sharing
What exactly are you sharing when you use an AI tool? Often more than you think.
Your prompts and conversations are the obvious one. Everything you type is sent to the provider's servers for processing. This includes the context you provide (descriptions of your work, your interests, your situation), not just the question you're asking.
Uploaded files (documents, images, spreadsheets, audio files) are sent to the provider's servers when you use features like document analysis or image editing. Depending on the tool and tier, these may be retained after your conversation ends.
Account information (your email address, name, and payment details if you're on a paid tier). This is standard for any online service but is worth noting in the context of data that could be linked to your conversations.
Usage patterns (when you use the tool, how often, what features you use, how long your sessions are). This metadata is routinely collected and is often used for product improvement even when conversation content isn't.
Connected services: if you've enabled integrations (Gemini accessing your Google Drive, Copilot connected to your Microsoft 365 account), the AI may have access to data well beyond your direct conversations. When a chatbot can see your calendar to help schedule a meeting, it can also see everything else on your calendar.
For the activities in this programme, you've been working with personal examples and publicly available content, so the risk is modest. But in day-to-day professional use, the stakes get higher quickly. A prompt like "Here's our draft strategy document; can you summarise it?" sends that entire strategy document to the AI provider's servers. Whether that's acceptable depends on your organisation's policies, the sensitivity of the document, and the specific data handling commitments of the tool you're using.
The UK context: data protection and AI
If you're working in the UK, your data is protected by the UK GDPR and the Data Protection Act 2018, as amended by the Data (Use and Access) Act 2025, which became law in June 2025. These give you rights over your personal data, including data processed by AI systems.
In practical terms, this means AI providers operating in the UK (or processing UK residents' data) are required to have a lawful basis for processing your data, be transparent about what they collect and how they use it, allow you to access and delete your personal data, and implement appropriate security measures.
The Information Commissioner's Office (ICO) has been pushing harder on AI. In June 2025, the ICO launched a dedicated AI and Biometrics Strategy, focusing on transparency, bias, and people's rights when AI is involved in decisions that affect them. The ICO has also signalled that it's paying particular attention to agentic AI (tools that take actions on your behalf), because these create new data protection challenges around accountability and consent.
In practice, enforcement is still catching up with the technology. Most AI providers include GDPR-compliant language in their privacy policies, but the practical implementation varies. Exercising your right to delete data from a trained AI model, for instance, is much harder than deleting an email. Once data has been used in training, it's baked into the model's weights and can't easily be extracted or removed.
The regulatory picture is changing fast. The ICO is currently developing a statutory code of practice on AI and automated decision-making, expected in 2026, which should provide clearer guidance for organisations using AI tools. For now, the practical advice is to treat AI privacy as your responsibility rather than relying on regulation to protect you.
Practical habits for AI privacy
You don't need to become a privacy expert to use AI tools sensibly. A handful of habits go a long way.
Check your settings. Every major AI platform has privacy and data settings, and the defaults are rarely the most private option. Spend five minutes reviewing what's turned on and whether you're comfortable with it. This is the minimum, and it's the core of this Thing's activity.
Be deliberate about what you type. Before entering anything into an AI tool, ask yourself: would I be comfortable if this text appeared in a data breach, was read by a human reviewer, or was used to train a model that anyone could query? If the answer is no, rephrase to remove the sensitive details, use a tool with stronger privacy protections, or don't use AI for that task.
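One lightweight way to build this habit is to run text through a quick scrubber before pasting it anywhere. Here's a minimal sketch in Python; the patterns (emails, UK-style phone numbers, UK postcodes) are illustrative assumptions, a starting point rather than a complete solution:

```python
import re

# Illustrative patterns only; real sensitive data takes many more forms.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "uk_phone": re.compile(r"(?:\+44\s?|0)\d{2,4}[\s-]?\d{3,4}[\s-]?\d{3,4}"),
    "uk_postcode": re.compile(r"\b[A-Z]{1,2}\d[A-Z\d]?\s*\d[A-Z]{2}\b"),
}

def scrub(text: str) -> str:
    """Replace obvious identifiers with placeholders before pasting into an AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

if __name__ == "__main__":
    draft = "Contact Jo Bloggs on 07700 900123 or jo.bloggs@example.org, EH1 1AA."
    print(scrub(draft))
    # Contact Jo Bloggs on [UK_PHONE REDACTED] or [EMAIL REDACTED], [UK_POSTCODE REDACTED].
```

Nothing like this catches everything (names, project titles, and surrounding context are far harder to detect), so treat it as a supplement to judgement, not a substitute for it.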
Understand the tier you're on. If you're using a free tier, assume your data is being used to improve the model unless you've explicitly opted out, and even then, it's being retained for some period. If privacy matters for a particular task, a paid tier with clearer data handling commitments may be worth the cost.
Don't upload sensitive work documents to free tiers. This bears repeating because it's one of the most common mistakes. Uploading a confidential report to a free-tier chatbot for summarisation is convenient, but it sends that document to servers you don't control, under terms that may allow it to be used for training. If your organisation has an enterprise AI agreement, use that instead.
Check your organisation's AI policy. Many organisations now have policies on which AI tools staff can use and what data can be shared with them. If yours doesn't, that's worth raising, not as a complaint, but as a genuinely useful thing for the organisation to clarify.
Review and clear your history periodically. Most AI tools let you delete individual conversations or your entire history. If you've been using a tool for months, your conversation history contains a substantial amount of information about you. Periodic cleanup is good hygiene.
Resources to explore
The UK Information Commissioner's Office guidance on how data protection law applies to AI systems. Currently under review following the Data (Use and Access) Act 2025, but the existing guidance remains a useful starting point.
A practical walkthrough of the opt-out settings for ChatGPT, Claude, Gemini, and other major tools. Includes step-by-step instructions with screenshots.
A regularly updated comparison of privacy policies and data handling across the major AI platforms. Useful for checking the current state of play.
A more technical ranking of AI tools by privacy practices, focusing on what happens to your data in practice rather than what the privacy policy promises.
The ICO's 2025 strategy document setting out its priorities for AI regulation, including transparency, bias, and rights. Worth reading if you want to understand the direction UK regulation is heading.
Activity: conduct a privacy audit
This activity asks you to do something most people never do: actually look at the privacy settings and data policies of the AI tools you've been using. It's not complicated, but what you find may surprise you.
- Find the privacy and data settings. Choose three AI tools you've used during this programme. For each, log in and navigate to its privacy or data settings.
- Answer the key questions. For each tool, find out: are your conversations used for training by default? Can you opt out, and where? How long is your data retained after you delete it? Can human reviewers read your conversations? You'll likely need to check both the settings page and the tool's privacy policy or help documentation.
- Build your comparison table. Create a simple table comparing your three tools across those questions (see the example template after this list).
- Check and change your settings. Now that you know what's switched on, decide whether you want to change anything. There are no right answers here; it depends on how you use each tool and what you're comfortable with. But if you find that a training toggle has been on by default and you'd rather it wasn't, now is the time to switch it off. Note any changes you make.
- Write your personal privacy checklist. Based on what you've learned, write a short checklist of things you'll do differently going forward.
- Write your reflection. Write a few paragraphs reflecting on what you found and what it means for how you use AI tools.
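If a blank template helps, here's one possible layout for the comparison table. The questions and column headings below are one suggested format, not a required one; swap the placeholder names for the tools you actually audited.

| Question | Tool 1 | Tool 2 | Tool 3 |
|---|---|---|---|
| Conversations used for training by default? | | | |
| Opt-out available (and where in settings)? | | | |
| How long is data retained after deletion? | | | |
| Can human reviewers read conversations? | | | |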
Why this matters
Privacy in AI isn't an abstract concern. It's a practical issue that affects every interaction you have with these tools. And unlike hallucinations (which are obvious once you know to look for them) or bias (which reveals itself through careful observation), privacy risks are invisible by design. You can't see your data being retained or used for training. The only way to know what's happening is to look.
The settings exist, the policies are published, and the opt-out mechanisms (while not always prominent) are available. What's been missing for most people is the prompt to actually go and check. That's what this activity provides.
Throughout this programme, we've emphasised that AI tools are powerful and useful, and they are. But power and usefulness don't exempt a tool from scrutiny. The same critical mindset you applied to hallucinations in Thing 15 and bias in Thing 16 applies to privacy too. The question isn't "should I use AI?" It's "am I making informed choices about how I use it?" After this Thing, you should be much better placed to answer that honestly.
Claim your Open Badge
Submit your privacy audit document with all components (your settings screenshots, your completed comparison table, your list of any settings changes, your personal privacy checklist, and your written reflection) to claim this badge via cred.scot.
Claim now