In Thing 17, you explored what happens to your data when you use cloud AI: how it might be stored, used for training, or retained under terms you didn't read closely enough. You made informed choices about what to share. But there's a more fundamental option: running AI on your own computer, where nothing ever leaves your machine. A few years ago, this required serious technical knowledge and expensive equipment. Today, you can download an application, install a model with a couple of clicks, and have a conversation with it entirely offline.
The models you'll be running are open source: AI systems whose code and trained weights have been released publicly by companies like Meta, Mistral, and others. Anyone can download, run, or modify them. This is a fundamentally different approach from the closed, proprietary models you've been using, with real implications for privacy, cost, and who controls the technology.
There are trade-offs, of course. Local models are generally less capable than the best cloud-based models. They can't browse the internet, and the smaller models you'll run comfortably will sometimes struggle with complex reasoning. But for many everyday purposes, they work surprisingly well: drafting text, answering questions, brainstorming, summarising, and handling data you don't want leaving your machine.
This Thing is more technically hands-on than most in the programme. You're going to install software, download a model, and run AI on your own hardware. It's not difficult (if you can install a regular desktop application, you can do this), but it is different from the browser-based tools you've been using. And once you've done it, you'll have a genuinely different understanding of what AI is and where it can live.
Open source versus closed source: why it matters
Throughout this programme, you've used tools like ChatGPT, Claude, and Gemini. These are closed-source (sometimes called proprietary) AI systems. You can use them through their websites or apps, but you can't see how they work internally, you can't download the model, and you can't run them independently. You're a customer of a service.
Open-source AI is different. When a company releases an AI model as open source, they're making the model itself (the trained neural network with all its learned patterns) available for anyone to download and use. Meta's Llama, Mistral's models, and DeepSeek are among the most prominent examples. You don't need to pay for a subscription, you don't need to agree to usage monitoring, and you don't need an internet connection once you've downloaded the model.
This distinction matters for several reasons.
Privacy is the most immediately practical one. When you run a model locally, your data never leaves your computer. There's no server logging your conversations, no possibility of your input being used for training, and no privacy policy to scrutinise. The conversation exists only on your machine. For anyone working with sensitive information (personal data, confidential documents, client details, health records), this matters.
Cost is another factor. Cloud-based AI tools charge subscription fees or limit free tiers with usage caps. Local AI is free to run once you've downloaded the model. There are no monthly charges, no per-message limits, and no pricing tiers. Your only cost is the electricity to run your computer.
Then there's control and longevity. Cloud services can change their terms, raise their prices, discontinue features, or shut down entirely. A model running on your own hardware doesn't depend on anyone else's business decisions. It works today, and it'll work tomorrow, regardless of what any company does.
The trade-off is capability. The best closed-source models, running in massive data centres with cutting-edge hardware, are still more capable than what you can run on a laptop. They have more parameters (the internal connections that store learned knowledge), more training data, and more computing power behind them. But the gap has been narrowing steadily, and for many everyday tasks, the difference is smaller than you might expect.
The tools: LM Studio and Ollama
Two tools have emerged as the main ways for non-technical users to run AI locally. LM Studio is a free desktop application with a point-and-click interface: you browse models, download them, and chat with them, all within the app. Ollama takes a command-line approach: you install it, pull a model with a single command, and talk to it in your terminal (or through an optional web interface such as Open WebUI).
For this programme, LM Studio is the recommended starting point. If you're comfortable with command-line tools, or if you want to try both for comparison, Ollama is a good alternative.
Understanding model sizes
When you open LM Studio and browse available models, you'll see a lot of numbers. Terms like "7B", "8B", "13B", and "70B" refer to the number of parameters in the model: roughly speaking, the number of learned connections in the neural network. More parameters generally mean a more capable model, but also one that is more demanding on your hardware.
Here's a very rough guide to what will run comfortably on different machines:
- 8 GB of RAM: small models (roughly 3B to 7B parameters, in quantised form)
- 16 GB of RAM: 7B to 13B models, quantised, with room to spare for other applications
- 32 GB of RAM or more: larger quantised models; the biggest open models (70B and up) generally need specialist hardware or a very well-equipped machine
Don't worry too much about these numbers for the activity. LM Studio will show you which models are compatible with your hardware and will warn you if something is likely to be too large. Start small. A 7B or 8B model is the right place to begin, and you might be surprised at how capable it is.
You'll also see references to "quantised" versions of models. Quantisation is a technique that compresses a model to use less memory, with a modest trade-off in quality. A quantised version of a 13B model might run on hardware that couldn't handle the full version. LM Studio handles this for you: when you browse models, it shows the file size and memory requirements, so you can pick one that fits your machine.
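To get a feel for why quantisation matters, here's the back-of-the-envelope arithmetic as a small Python sketch. It estimates only the memory needed to hold the weights; actual usage is somewhat higher once the context window and runtime overhead are counted.

```python
# Rough estimate of the memory needed to hold a model's weights.
# Real usage is higher: context window, caches, and runtime overhead add more.

def weight_memory_gb(parameters_billions: float, bits_per_parameter: int) -> float:
    """Approximate gigabytes needed just for the model weights."""
    total_bytes = parameters_billions * 1e9 * bits_per_parameter / 8
    return total_bytes / 1e9

for bits in (16, 8, 4):
    print(f"7B model at {bits}-bit: ~{weight_memory_gb(7, bits):.1f} GB")

# Prints roughly:
#   7B model at 16-bit: ~14.0 GB
#   7B model at 8-bit: ~7.0 GB
#   7B model at 4-bit: ~3.5 GB
```

This is why a 4-bit quantised 7B model fits comfortably in 8 GB of RAM while the full-precision version doesn't.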
Resources to explore
- LM Studio (lmstudio.ai): the main tool for this Thing. Download, install, and browse the model library directly from the application. Free for Windows, Mac, and Linux.
- Ollama (ollama.com): the command-line alternative. The website includes clear installation instructions and a list of available models. If you prefer a visual interface, search for "Open WebUI", a popular web-based frontend that gives Ollama a chat-style look. If you're curious what talking to Ollama looks like from code, see the sketch after this list.
- Hugging Face (huggingface.co): the largest repository of open-source AI models. You don't need it for LM Studio (which has its own model browser), but Hugging Face is where most open-source models are published and discussed. Worth bookmarking.
- A thorough overview of the local AI landscape, including hardware recommendations and model comparisons. More technical than this Thing, but useful if you want to go deeper.
- A comparison of the two main local AI tools, covering their different strengths and use cases.
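If you do try Ollama, it runs a small server on your own machine (by default at localhost, port 11434) with a simple HTTP API. The snippet below is a minimal sketch of talking to it from Python. It assumes Ollama is running and that you've already pulled a model; the model name "llama3.1" is just an example, so substitute whichever model you downloaded. The request goes to your own machine, not over the internet.

```python
# Minimal sketch: chat with a local Ollama model over its HTTP API.
# Assumes Ollama is running locally (default port 11434) and that a
# model has already been pulled, e.g. with `ollama pull llama3.1`.
import json
import urllib.request

def ask_local_model(prompt: str, model: str = "llama3.1") -> str:
    """Send one prompt to the local Ollama server and return the reply."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for the whole reply in one response
    }).encode("utf-8")
    request = urllib.request.Request(
        "http://localhost:11434/api/generate",  # your machine, not the cloud
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())["response"]

print(ask_local_model("Explain quantisation in one sentence."))
```

Only the Python standard library is used here, so there's nothing extra to install beyond Ollama itself.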
Activity: run AI on your own computer
In this activity, you're going to install LM Studio, download an open-source AI model, have a conversation with it entirely offline, and compare the experience to the cloud-based tools you've been using throughout the programme. You'll need a computer made in the last five years or so, with at least 8 GB of RAM (16 GB is more comfortable), and around 4–8 GB of free disk space.
- Install LM Studio. Go to lmstudio.ai and download the version for your operating system. Install it as you would any other application.
- Download a model. Go to the model browser and look for a recommended starting point: Llama 3.1 8B, Mistral 7B, or Phi-3 Mini.
- Have a conversation. Open the chat interface, select the model you downloaded, and start talking to it. (If you'd like to go a step further, there's an optional code sketch after these steps showing how to query the same model programmatically.)
- Go offline. Disconnect your computer from the internet (turn off Wi-Fi or switch to aeroplane mode) and try another conversation.
- Compare with a cloud model. Open one of the cloud-based chatbots you've used earlier in the programme and ask it the same questions you asked the local model.
- Reflect. Write a short reflection covering your experience.
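Optional extra, for the curious: LM Studio can also expose the model you've loaded through a local server that speaks the same API format as OpenAI's cloud service. The sketch below assumes you've started that server from within LM Studio (it listens on localhost, port 1234 by default); the "local-model" name is a placeholder, since LM Studio answers with whichever model you currently have loaded. As with everything in this Thing, the request never leaves your computer.

```python
# Optional sketch: query LM Studio's local, OpenAI-compatible server.
# Assumes the server has been started from within LM Studio
# (default address: http://localhost:1234).
import json
import urllib.request

payload = json.dumps({
    "model": "local-model",  # placeholder: LM Studio uses the loaded model
    "messages": [
        {"role": "user", "content": "Summarise why local AI protects privacy."}
    ],
}).encode("utf-8")

request = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",  # local address only
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(request) as response:
    reply = json.loads(response.read())
    print(reply["choices"][0]["message"]["content"])
```

This matters beyond curiosity: because the local server mimics a widely used API format, many existing tools can be pointed at your own machine instead of the cloud.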
Your output
A document or journal entry containing:
- A brief note on your hardware (what computer you used, how much RAM it has) and which model you downloaded
- A screenshot of your local AI conversation (taken while offline, if possible)
- Your comparison of local versus cloud AI across at least three prompts, noting where each performed better or worse
- Your reflection on the experience, including scenarios where local AI would be most useful and how this has affected your understanding of AI
Why this matters
There's a tendency to think of AI as something that lives "out there", in the cloud, on someone else's servers, behind a login screen and a subscription fee. Local AI challenges that assumption.
The privacy argument is the most obvious. If you work in any context where data sensitivity matters (and most professionals do), knowing that a completely private AI option exists is useful. You can use cloud AI for general tasks and switch to local AI when you're working with anything sensitive. That's a level of control that wasn't available to non-technical users a couple of years ago.
Cost matters too, particularly over time. Subscriptions add up. A local model, once downloaded, costs nothing to run beyond your electricity bill.
But the most important shift is probably conceptual. You've seen that AI isn't just a service provided by a handful of large companies. It's a technology that can run on your own hardware, built on models that anyone can access and use. The open-source AI ecosystem is large, active, and growing. The models are improving with every release. And the tools for running them keep getting easier to use. You're no longer just a consumer of AI services. You understand the technology well enough to run it yourself.
Claim your Open Badge
Submit your hardware description and model choice, your screenshot of a local AI conversation, your comparison of local versus cloud AI responses, and your reflection on the experience to claim this badge via cred.scot.
Claim now