Deep research tools are different from anything you've used so far. When you give a chatbot a question, it answers from memory. When you use an AI search tool like Perplexity, it searches the web once and summarises what it finds. Deep research tools do something more ambitious: they create a research plan, run multiple searches, follow links from one source to another, read full pages rather than snippets, cross-reference their findings, and then write up a structured report with citations. The whole process might take three to ten minutes, which feels slow compared to a chatbot's instant reply, but remarkably fast compared to doing the same work yourself.
If Thing 7 gave you a personal research assistant who could work with sources you'd already found, Thing 8 is about giving that assistant the ability to go to the library on your behalf: browsing the shelves, pulling out relevant material, and coming back with a written briefing.
This matters for anyone whose work involves understanding complex topics, making evidence-based decisions, or staying current in a fast-moving field. The question isn't whether these tools are perfect (they aren't), but whether they can give you a better starting point than doing everything manually. For most people, the answer is yes, with some important caveats.
How deep research works
To understand what makes deep research tools different, it helps to think about what happens when a person does thorough research on an unfamiliar topic.
You'd start by searching for the basics, reading a few overview articles, and noting which sources keep getting referenced. You'd follow those references to primary sources. As you read, you'd spot gaps in your understanding and search for those specifically. You'd notice when two sources disagree and look for a third to help resolve the question. Eventually, you'd pull everything together into something coherent. The whole process might take an afternoon, a day, or a week depending on the complexity.
Deep research tools compress this process into minutes. They don't just search once; they search iteratively, adjusting their approach based on what they find. They read dozens of pages, not just snippets. They construct a research plan at the outset and show it to you, often letting you refine it before they begin. And when they're done, they produce a structured report, typically several pages long, with citations you can trace back to the original sources.
The experience of watching one work is genuinely interesting. You submit your question, and instead of getting an instant response, you watch the tool thinking. It might show you a list of the sub-questions it's planning to investigate, or a live feed of the sources it's reading. Then, a few minutes later, it delivers something that looks very much like the kind of briefing document you might get from a diligent colleague.
The tools
Several of the major AI providers now offer deep research features. They take noticeably different approaches, and the one that works best for you will depend on what you're trying to do and which tools you already use.
These tools all tackle the same problem (automating the kind of multi-source research that takes a human hours), but they make different trade-offs between speed, depth, and citation quality. Knowing which to reach for in a given situation is more useful than deciding which one is "best."
What deep research is good at (and what it isn't)
Deep research tools are impressive, but it's important to be clear-eyed about what they can and can't do.
They're very good at getting you up to speed on an unfamiliar topic quickly. If you need to understand the current state of a field, the main arguments in a policy debate, or the range of options for a particular decision, a deep research report can save you hours of reading and give you a solid foundation to build on.
They're also good at finding sources you might not have discovered on your own. Because they follow chains of references rather than just searching once, they often surface material that wouldn't appear in a standard search: specialist reports, government publications, academic papers, and niche industry sources.
Where they fall short is more nuanced. They can only find what's publicly available and well-indexed online. Paywalled academic papers, internal reports, very recent developments, and perspectives from communities that aren't well-represented online will be missing. The reports they produce can also give a false sense of completeness. The polished structure and confident tone make it easy to forget that there might be important perspectives or evidence they simply didn't encounter.
Citation quality varies between tools, and even when citations are provided, they need checking. A citation might support the claim it's attached to, or the tool might have misinterpreted the source, taken a quote out of context, or attributed a finding to the wrong paper. The supervisor mindset from Thing 4, treating AI output as delegated work that needs review, applies even more strongly here. The reports are longer and more detailed, which makes it tempting to trust them wholesale.
Deep research tools also work best with questions that have factual, researchable answers. They're excellent for "What are the current best practices for X?" or "What does the evidence say about Y?" They're less useful for questions that require original analysis, creative thinking, or synthesis of information that doesn't exist yet in published form. They find and organise what's already out there; they don't create new insight.
Resources to explore
- Gemini Deep Research (gemini.google.com): your primary tool for the activity. Select "Deep Research" from the prompt bar. Requires a free Google account.
- Perplexity (perplexity.ai): your comparison tool. Look for the "Research" mode when entering your query. No account required for basic use.
- A practical comparison of the three main deep research tools, useful for understanding the different approaches.
- Another comparison focused on output quality and sourcing, with side-by-side examples.
- Google's own explanation of how Deep Research works, including the planning and browsing process.
Activity: the deep research showdown
You're going to ask two different deep research tools the same question and compare what comes back: not just the answers, but the process, the sources, and how useful each report turns out to be. This builds directly on the comparison work you did in Thing 4, but the outputs here will be much longer and more detailed, which gives you more to evaluate.
- Choose a research question on a topic you already know something about.
- Run Gemini Deep Research and save the report it produces.
- Run Perplexity deep research on the same question and save that report too.
- Compare and evaluate both reports across several dimensions.
Choose your research question
Pick a question that genuinely interests you, something connected to a personal hobby, a community you're part of, or a topic you've been curious about. The more you already know about the subject, the better positioned you'll be to judge the quality of what comes back.
Good questions for this activity require pulling together information from multiple sources and have enough complexity to give the tools something to work with. For example:
- "What are the current best approaches to training for a first marathon for someone over 40?"
- "What is the evidence for and against four-day working weeks, and which countries or organisations have trialled them?"
- "What are the most effective strategies for reducing food waste at a household level, and what does the research say about which ones actually work?"
- "What are the main arguments in the current debate about screen time guidelines for children under 12?"
Run Gemini Deep Research
Go to gemini.google.com and look for the Deep Research option (it's typically available as a mode selector in the prompt bar). Enter your question and submit it.
Gemini will present a research plan, a list of sub-topics or angles it intends to investigate. Take a moment to read this. Does it cover the areas you'd expect? Is anything obviously missing? You can adjust the plan before Gemini begins its research if you want to, though for this activity it's also fine to let it run as proposed.
Then wait. Deep research takes a few minutes. You'll see Gemini working through its plan, and it may show you which sources it's reading. When it's done, you'll get a structured report, typically several pages long, with citations throughout.
Save or copy the report. If Gemini offers to export it to Google Docs, that's a convenient option.
Run Perplexity deep research
Go to perplexity.ai and enter the same question. Look for the "Research" mode; this is Perplexity's deep research feature, distinct from its standard search. Select it and submit your query.
Perplexity's process is typically faster than Gemini's. It will search across multiple sources, compile its findings, and produce a report with inline citations. The output may be shorter and more focused than Gemini's, which is part of what you're comparing.
Save or copy this report too.
Compare and evaluate
Now comes the important part. Read both reports carefully and compare them across these dimensions:
- Coverage: did each report address the angles you'd expect, and what did it miss?
- Accuracy: on the parts of the topic you already know, did each report get things right?
- Sources and citations: what kinds of sources did each tool draw on, and do the citations actually support the claims they're attached to?
- Structure and readability: how usable is each report as a briefing document?
- Process: how long did each tool take, and how clear was its research plan along the way?
- Overall usefulness: which report would give you the better starting point, and why?
Your output
A document containing:
- Your research question
- Both reports (saved, copied, or screenshotted)
- Your comparative evaluation covering at least the dimensions above
- A short reflection (a few paragraphs) on where deep research fits in your toolkit: when would you use it, when would you stick with a standard search or chatbot, and what would you always want to check before relying on a deep research report?
Why this matters
Deep research tools have changed how people can gather and make sense of information. For many professionals, research is a regular part of the job, whether it's staying current in a field, preparing for a meeting, writing a report, or making a decision that requires evidence. These tools can compress hours of work into minutes.
But knowing the tools exist is only the beginning. The real skill is learning to evaluate what they produce, understanding their limitations, and developing the judgement to know when a deep research report is a good enough starting point and when you need to dig further yourself. That's what this activity is really about.
Claim your Open Badge
Submit your research question, both reports, and your comparative evaluation with reflection as evidence for your Thing 8 badge via cred.scot.