AI Isn’t Killing Our Ability to Think. But Fear-Mongering Might
- Merrill Keating

- Jan 9
- 2 min read
Let’s replace scare tactics with strategy, and start building the frameworks people need.

Who else is tired of the headlines?
"AI is destroying our thinking."
"Students can’t think for themselves anymore."
"ChatGPT is the end of human intelligence."
I’ve seen more of these in the past six months than actual conversations about how people learn to think with AI. And that’s the problem.
The real danger isn’t that AI is killing our cognition—it’s that we’re failing to teach people how to use it well.
When you look at the research, it doesn’t point to some inevitable erosion of intelligence. It points to a gap in guidance, pedagogy, and reflective practice.
This is not an apocalypse. It’s a call to grow.
What the Research Actually Shows
🔹 AI enhances reasoning when used collaboratively.
A 2023 study by Stanford’s Human-Centered AI group and Microsoft found that people working alongside AI performed better on complex reasoning tasks than those working alone or using AI as a shortcut. Source → Stanford HAI & Microsoft (2023)
🔹 AI strengthens metacognition and self-reflection.
OECD’s 2024 report highlights that when students learn to prompt effectively, they also improve in metacognitive areas like self-correction, reasoning, and the ability to evaluate their thinking. Source → OECD (2024)
🔹 AI tutoring scaffolds—not replaces—student effort.
A 2023 paper in Computers & Education found that intelligent tutoring systems don’t make students passive. They encourage learners to explain, reflect, and build stronger reasoning pathways. Source → Woolf et al., 2023
This isn’t the AI boogeyman. It’s a mirror. Garbage in, garbage out. Better inputs → better thinking.
What We’re Missing Is a Framework
We don’t panic about calculators anymore. We learned to teach math differently.
Spellcheck didn’t kill literacy. But it forced us to reframe how we teach writing.
Google didn’t end research. But it exposed how shallow our critical inquiry skills were.
AI is no different. What’s missing isn’t intelligence—it’s infrastructure.
We need cognitive co-design: a collaborative approach where humans and AI build thinking together, not separately. That means teaching:
- Prompt literacy
- Metacognitive awareness
- Critical evaluation of AI output
- Ethical reasoning about when and how to engage AI
Because better inputs don't just mean better prompts. They mean better questions, better judgment, better values brought to the interaction.
Stop Fear. Start Frameworks.
The next time you see an image of a robot clutching its head like it’s having an existential crisis, ask yourself:
Who benefits from this fear?
Because it’s not learners. It’s not educators. And it’s not society.
We don’t need another scare graphic.
We need training that helps humans and AI grow together.
Final Thought
AI doesn’t diminish human potential. It mirrors our habits, our systems, and our questions.
Let’s stop yelling “don’t use it” and start showing people how to use it well.


