When Not to Use AI: The Wisdom of Knowing When to Put a Tool Down

by Stélio Inácio, Founder at Jon AI and AI Specialist

We've just seen how AI can be a powerful tool for our minds. But as with any powerful tool, from a chainsaw to a sports car, wisdom lies not just in knowing how to use it, but in knowing when not to. Every new technology brings with it a debate. Decades ago, many worried that calculators would make children unable to do basic math. Did that happen? Not really. Calculators just freed up children's minds to tackle more complex problems, moving from tedious calculation to higher-level thinking. AI presents a similar, but much more profound, set of questions.

The key is to think about the trade-offs. What are we gaining, and what might we be losing? The goal isn't to fear technology, but to use it intentionally, ensuring it serves our best interests as thinking, feeling human beings.

The Knowledge: Exercising Your Brain's Muscles

To become a licensed black-cab driver in London, candidates must pass a legendary test called "The Knowledge." It requires them to memorize, without any notes, a labyrinth of over 25,000 streets and thousands of points of interest. It's an incredible feat of memory. Neuroscientists studying these drivers found that the part of their brain responsible for spatial memory, the hippocampus, was significantly larger than in the average person. Their brains had physically changed and grown stronger to meet the demand.

This is a stunning example of neuroplasticity: our brains are like muscles; they grow with use and atrophy with disuse. Now, we can all pull out our phones and get perfect, turn-by-turn directions. This is incredibly convenient, but it's worth asking: what mental muscles are we choosing not to exercise? If we outsource all our writing, do we risk losing our ability to form a compelling argument? If we outsource all our planning, do we weaken our ability to think ahead? The danger isn't in using AI to help, but in letting it become a crutch for cognitive skills that are essential to a rich life.

Key Concept: AI should not make decisions about anyone's life

AI should not be used to make decisions about anyone's life, because it lacks the moral capacity to do so. A tool should not make decisions on behalf of a human being, especially significant life choices that affect well-being, happiness, and personal growth.

It is one thing to use AI to assist in decision-making; it is quite another to let AI take control of decisions that should be made by individuals. That surrender can lead to a loss of autonomy and responsibility.

A Practical Guide: When to Think Twice Before Using AI

Here are some situations where you should pause and consider if AI is the right tool for the job.

  • Don't: Rely on unchecked AI output for tasks where factual accuracy is non-negotiable.
  • Do: Use AI to get a first draft or summary, but always fact-check any statistics, dates, medical information, or legal advice. AI can "hallucinate" and invent facts.
  • Don't: Use AI to handle deeply personal or emotionally sensitive conversations.
  • Do: Write your own condolence letters, apologies, or heartfelt messages. AI simulates empathy; it doesn't feel it. Your genuine (even if imperfect) words are more meaningful.
  • Don't: Outsource tasks that are designed to help you learn and build a core skill.
  • Do: Use AI as a study partner to explain a concept you don't understand, but do the homework yourself. The struggle is part of the learning process.
  • Don't: Ask AI to make a final, significant life choice for you.
  • Do: Ask it to help you explore the choice. Instead of "Should I quit my job?" ask "Help me make a list of pros and cons for quitting my job to start a new business." You own the final decision.

Study 1: What Happens in Your Brain When You Use AI?

We often think about AI in terms of what it produces—an essay, a piece of code, an email. But what if we could look directly at what’s happening inside our own brains while we use it? A fascinating 2025 study from MIT, titled "Your Brain on ChatGPT," did just that, giving us a remarkable glimpse into the cognitive cost of relying on AI.

The Experiment: Brains, Search Engines, and AI

Researchers assembled three groups of people to write essays. One group could only use their own knowledge (the "Brain-only" group). Another could use a search engine like Google (the "Search" group). The third group used an AI assistant (the "LLM" group). While they worked, their brain activity was measured using an EEG, which tracks the brain's electrical signals.

What they found was striking. Brain connectivity—how different parts of the brain talk to each other—scaled down dramatically with the amount of external help.

  • The Brain-only group showed the strongest and most widespread brain networks. Their brains were firing on all cylinders, orchestrating memory, language, and planning.
  • The Search Engine group showed intermediate activity. Their brains were working hard, but also integrating visual information from the screen.
  • The LLM group showed the weakest overall brain coupling. Their brains were significantly less engaged, especially in the networks tied to deep thinking and memory.

The Debt of "Cognitive Offloading"

The researchers called this phenomenon the accumulation of "cognitive debt." When we "offload" the hard work of thinking onto an AI, we might get the task done faster, but we skip the mental workout. This has a few key consequences:

  1. Memory Suffers: The LLM group performed significantly worse when asked to quote from the essays they had just written. In the first session, 83% of them couldn't recall a single sentence accurately, compared to only 11% in the other groups. Their brains hadn't done the deep encoding needed to form a lasting memory.
  2. Ownership is Lost: Participants in the LLM group felt a weaker sense of ownership over their work. They saw it less as "their" essay and more as something the tool produced. In contrast, the Brain-only group felt strong ownership.
  3. Critical Thinking Atrophies: The study suggests that, over time, reliance on AI can weaken the neural pathways needed for critical thinking and independent problem-solving. When former LLM users were asked to write without AI, their brain connectivity was weaker than that of participants who had practiced without it all along. They had become accustomed to the AI doing the heavy lifting.

This study doesn’t say AI is bad. It reveals a trade-off. AI offers incredible convenience, but that convenience comes at the cost of cognitive engagement. As the researchers put it, AI might streamline the process, but "the user's brain may engage less deeply in the creative process." Just like the London cab drivers, we must consciously choose to keep our mental muscles strong, even when a powerful tool is ready to do the work for us.

Study 2: How AI Changes the *Work* of Thinking

Generative AI isn't just changing the tools we use; it's changing the very nature of our work. A large-scale 2025 survey of knowledge workers from Microsoft Research, "The Impact of Generative AI on Critical Thinking," provides a clear picture of this transformation. The study found that AI fundamentally shifts our cognitive effort. We spend less time creating from scratch and more time directing, verifying, and integrating what the AI produces.

The Great Shift: From Gathering to Verifying

The researchers identified three major shifts in how we use our critical thinking when working with AI:

  • From Information Gathering to Information Verification: Before, a huge part of knowledge work was finding information. With AI, that part becomes almost effortless. But a new, more critical task emerges: verifying that the AI's output is correct. As one lawyer in the study noted, "AI tends to make up information to agree with whatever points you are trying to make, so it takes valuable time to manually verify."
  • From Problem-Solving to Response Integration: AI is excellent at generating solutions. The effort for workers now lies in integrating that solution—editing it, aligning it with specific goals, and ensuring it fits the context. The AI provides the raw material; the human provides the critical judgment to make it useful.
  • From Task Execution to Task Stewardship: Perhaps the biggest change is the move from "doing the task" to "overseeing the task." The human becomes a "steward," responsible for guiding the AI, setting quality standards, and remaining accountable for the final outcome, even though the AI did much of the production.

The Confidence Paradox: Who Thinks Critically?

The study uncovered a fascinating paradox about confidence. You might think that people who are less confident in their own abilities would be *more* critical of AI output. The opposite was true.

  • Higher confidence in the AI was associated with *less* critical thinking. When workers trusted the tool, they tended to accept its output with less scrutiny.
  • Higher self-confidence in their own skills was associated with *more* critical thinking. Experts used AI as a partner but didn't blindly trust it. They invested more effort in evaluating and applying the AI's responses because they had the knowledge to do so.

This tells us something vital: expertise doesn't become obsolete with AI; it becomes more important. The ability to critically evaluate an AI's output is itself a high-level skill. The risk is that over-reliance on AI could prevent us from developing that very expertise in the first place, leaving us unable to spot errors or biases. As the study notes, this creates "new challenges for critical thinking" that we must learn to navigate.

Quick Check

What is the main idea behind the "London cab driver" anecdote in the context of AI?

Recap: When not to use AI

What we covered:
  • The importance of using AI intentionally, considering the trade-offs between convenience and skill development.
  • The "use it or lose it" principle for our brains (neuroplasticity) and the risk of cognitive atrophy.
  • Evidence from two 2025 studies: offloading thinking to AI reduces brain engagement and memory ("cognitive debt"), and shifts knowledge work toward verifying, integrating, and overseeing AI output.
  • The critical moral boundary: AI should assist, but never make, significant life decisions for us.

Why it matters:
  • Wisdom isn't just about using a tool, but knowing its limits. By understanding when not to use AI, we ensure that it remains a true assistant that enriches our lives, rather than one that diminishes our own capabilities and autonomy.

Next up:
  • We've talked a lot about what AI does. Now we're going to get more specific. In the next lesson, we'll dive into the technology that powers tools like ChatGPT: What is a Large Language Model?