You've had real conversations with AI, you understand a bit about how it works under the bonnet, and you've started writing prompts that get useful results. Now we're going beyond the chatbot interface.
Traditional search engines give you a list of links. You type in some keywords, you get ten blue links, and then you spend twenty minutes clicking through them, scanning for the bit that actually answers your question, closing tabs full of ads, and trying to piece together a picture from fragments scattered across multiple sites. It works. It's worked for twenty-five years. But it's slow, and a surprising amount of the effort falls on you.
AI-powered search tools work differently. Instead of handing you a list of links, they read those pages themselves, pull out the relevant information, and give you a synthesised answer with references. The best of them tell you exactly where each claim came from, so you can check for yourself. Some go further still: "deep research" tools that spend several minutes browsing dozens of sources, following leads, and producing structured reports on complex questions.
If any part of your work involves finding things out, understanding a topic, or pulling together information from multiple sources, the tools in this Thing will change how you do it.
From search engine to answer engine
The shift is subtle but important. Traditional search engines are really finding engines. They find pages that might contain your answer and leave you to do the reading. AI search tools are closer to answer engines. They do the reading for you and present what they found.
The most established tool in this space is Perplexity AI. If you haven't come across it, think of it as what would happen if a search engine and a chatbot had a very well-organised child. You ask a question in natural language, it searches the web, reads multiple sources, and gives you a clear answer with numbered citations. You can see exactly which source each piece of information came from, and click through to read the original if you want to verify something or go deeper.
This matters because it addresses one of the biggest problems with using a standard chatbot for factual questions: hallucinations. As you saw in Thing 2, chatbots can confidently make things up. Perplexity and tools like it reduce this risk by grounding their answers in actual web sources. Instead of predicting plausible-sounding text, they're summarising what they've actually found. The citations aren't a guarantee of accuracy (the sources themselves might be wrong, or the AI might misinterpret them), but they give you something to check against.
Google has responded to this shift by building AI-generated summaries directly into its search results. If you've noticed a block of text appearing above the normal search results when you Google something, that's an AI Overview, Google's attempt to give you an answer without making you click through to a website. Microsoft has done something similar with Copilot in Bing. These features are useful in a casual way, but they tend to be less thorough and less transparent about sources than a dedicated tool like Perplexity.
The key difference to understand: a standard chatbot gives you an answer from memory. An AI search tool gives you an answer from the web, right now, with receipts.
Deep research: when you need more than a quick answer
Beyond AI-powered search, there's a newer category worth knowing about if you do any research-heavy work: deep research tools.
These are different from both chatbots and AI search in an important way. When you give a deep research tool a question, it doesn't just search once and summarise the results. It creates a research plan, runs multiple searches, follows links from one source to another, reads full pages rather than snippets, cross-references what it finds, and then writes up a structured report. The whole process might take three to ten minutes. That feels slow compared to a chatbot's instant response, but compare it to how long the same research would take you by hand.
Several of the major providers now offer deep research features: you'll find versions in ChatGPT, Google's Gemini, and Perplexity, under similar names.
The outputs from these tools aren't perfect. They can still include errors, they sometimes miss important sources, and they occasionally overweight one perspective. But as a starting point for research (a first pass that would have taken you an hour or more by hand) they're very good. The skill is learning to use them as a starting point rather than a final word.
The supervisor mindset
This is a good moment to introduce a concept that runs through this whole programme: the supervisor mindset.
Think of it this way. When you use an AI research tool, you're not the person doing the research. You're the person who delegated it. The AI is a fast, tireless member of your team who you've asked to go away and pull something together. They've come back with a first draft. Now it's your job to review it.
If you've ever managed or supervised anyone, you know what this feels like. The person you delegated to might be capable and enthusiastic, but you still need to check the work before it goes anywhere. Does it actually answer the question you asked? Are the facts right? Has anything important been missed? Is it fit for the audience it's going to? You wouldn't forward a junior colleague's first draft to your director without reading it, and you shouldn't treat an AI-generated research summary any differently.
This is a different skill from doing the research yourself, and in some ways it's harder. When you research something from scratch, you build understanding as you go. When you're reviewing an AI-generated summary, you need to bring enough knowledge and critical judgement to spot problems in a polished, confident-sounding document. It's closer to editing than writing, and closer to supervision than doing.
This mindset develops quickly with practice, and the activities in this programme are designed to build it. Every time you use an AI tool and check its output (which you've already been doing in Things 1 to 3) you're strengthening exactly this skill.
For AI search specifically, the most important habits are:
- Check the citations. Click through to the actual source. Does it say what the AI claims it says? Is the source itself credible?
- Look for what's missing. AI search tools summarise what they find, but they can only find what's publicly available and well-indexed. Important perspectives, paywalled research, or very recent developments might not appear.
- Cross-reference important claims. If a fact or statistic matters for your work, verify it independently. One AI-generated summary is a starting point, not a source you'd cite.
- Notice the confidence level. AI tools tend to present everything with the same level of certainty. Learn to spot the difference between claims backed by multiple strong sources and claims resting on a single blog post.
Resources to explore
- Perplexity AI. The tool you'll use in the activity. Free to use with a limited number of Pro searches per day. No account required for basic use, though signing up gives you more features.
- Gemini Deep Research. Available through the Gemini interface. You'll need a Google account. The deep research feature is on paid tiers, but standard Gemini search is free and still useful for comparison.
- A practical comparison of the three main deep research tools, with side-by-side examples. Useful for understanding how each approach differs.
- Another comparison focused on output quality and sourcing. Worth a skim if you want to see how the tools handle the same question differently.
Activity: the research showdown
You're going to research a real question using two different approaches (an AI search tool and a standard chatbot) and then evaluate the results like a supervisor reviewing work you've delegated. This builds a core information literacy skill: judging whether an answer is trustworthy, not just whether it sounds right.
- Choose a research question. Pick something genuinely relevant to your work or professional interests. The more you already know about the topic, the better, because you'll be in a stronger position to judge what comes back. Good questions require pulling together information from multiple sources. Some examples:
- "What are the current best practices for [something in your field]?"
- "What are the main arguments for and against [a policy or approach relevant to your sector]?"
- "What has changed in [your area of work] in the last two years?"
Avoid questions with a single factual answer ("What year did X happen?"). Those don't give the tools enough room to show their strengths and weaknesses.
- Ask the chatbot first. Open your preferred chatbot (ChatGPT, Claude, or Gemini) and ask your research question. Use the prompting skills you developed in Thing 3: give it context about why you're asking and what kind of answer would be useful. Save the response and note whether it cites any sources, how specific the information is, and whether anything sounds plausible but uncertain.
- Ask Perplexity. Go to perplexity.ai and ask the same question. If you're prompted to choose between Pro Search and Quick Search, try Pro Search if you have free uses available. Save the response and pay attention to how it compares structurally, how many sources are cited, and whether those sources look credible. Click on at least two or three citations and check whether the original source actually supports what Perplexity claims.
- Evaluate like a supervisor. Put both responses side by side and write your evaluation. Consider:
- Accuracy: based on what you already know, which response was more accurate? Did either include anything you think is wrong?
- Sources: which approach gave you more confidence in the information? How reliable were Perplexity's citations?
- Completeness: did one approach cover important aspects that the other missed?
- Usefulness: if you needed to brief a colleague tomorrow, which response gives you a better starting point?
- Effort: how much additional checking would each response need before you'd be comfortable using it?
Your output: a document containing your research question, both responses (copied or screenshotted), and your evaluation covering at least the points above. Add a few sentences on what this exercise taught you about when to use AI search versus a standard chatbot.
Bonus: try deep research
If you have access to a deep research feature (Gemini Deep Research, or Perplexity Pro Search with free uses remaining), try running the same question through it. These tools take longer, so give them a few minutes to work. The depth of output is noticeably different. If you do this, add a brief note to your evaluation about how the deep research output compared to the other two approaches.
Why this matters
For many professionals, AI search ends up being the AI feature they use most. Getting a grounded, cited answer to a complex question quickly, and knowing how much to trust it, is a skill you'll use constantly. This activity gives you your first structured experience of that process.
It also reinforces something that will come up again and again: different tools are good for different things. A chatbot is great when you need ideas, drafts, or explanations. An AI search tool is better when you need facts, evidence, and references. Knowing which to reach for, and when to use both, develops with practice. It starts here.
Claim your Open Badge
Submit your research question, both AI responses, and your comparative evaluation as evidence for your Thing 4 badge via cred.scot.