AI is coming for your brain (not the zombies)

I have a confession to make: Google Gemini is currently my best friend and my worst enemy.

It’s making my life easier, it helps me teach, and my research is more efficient. But as I get more skilled at crafting AI prompts that get me the results I want, I’m starting to worry. I’m worried that I’m getting lazier. I’m worried that I am, quite literally, letting my brain rot. It turns out, that "brain rot" feeling has a very real academic name, and the researchers are just as worried as I am.

One of the biggest red flags is the impairment of Higher-Order Thinking Skills (HOTS). Puzzling something out, or grinding through the calculation yourself, is the strenuous mental exercise that helps us learn. Derakhshan and Taghizadeh (2025) argue that when we over-rely on AI to provide immediate solutions to linguistic or conceptual puzzles, we’re essentially taking a cognitive shortcut. We bypass the critical thinking and problem-solving processes that actually make our brains grow. If I’m not doing the "heavy lifting" of analysis, am I actually learning, or am I just a passenger in my own education?

I’ve mentioned before that my math skills are... let’s say, "rudimentary." That happened because I stopped practicing the moment I got a calculator. Now, I’m seeing the same risk of deskilling in my teaching and writing (Daher, 2025). Automation is great until the power goes out. If we stop practicing the skills that the AI now handles for us—lesson planning, drafting, even summarizing complex texts—we create a massive vulnerability. If the technology fails or the subscription expires, would I still be able to perform these tasks independently? Or have I let those professional "muscles" atrophy?

There is also a much subtler danger: the delegation of meaning-making. As a TESOL instructor, I want my students to be "agentive"—I want them to take ownership of their words. But Darvin (2025) points out that learners (and, let's be honest, everyone) are starting to resist investing that energy. When we delegate the production of our texts to an AI, we lose the reflexive thought required to achieve our own true intentions. We move from being active creators of knowledge to passive consumers of whatever the algorithm spits out. If Gemini writes my blog, is it still my reflection?

Finally, there’s the issue of autonomy. I like to think I’m a free-thinking educator, but Salloum (2025) raises a chilling point: these AI systems use algorithms to predict performance and direct our learning paths. This can slowly erode the professional autonomy of teachers and the independence of students. If I’m following a path laid out by an algorithm, am I exercising my own judgment, or am I just following the "suggested next step"?

I’m not ready to delete Gemini just yet—it’s too useful. But I think I need to find a way to use it as a tool, not a crutch. I want an education (and a brain) worth having, and that means I can't afford to be lazy.

Sources:

Daher, W. (2025). The automation of education: Deskilling and the future of human expertise. Educational Technology Research Journal.

Darvin, R. (2025). Digital agency and the delegation of meaning in the age of AI. Language Learning & Technology.

Derakhshan, A., & Taghizadeh, H. (2025). Higher-order thinking in the AI-mediated classroom. Journal of Cognitive Education.

Salloum, S. (2025). Algorithmic governance and the erosion of teacher autonomy. International Journal of Educational Integrity.
