The first time I spent an entire afternoon working with AI, I closed my laptop with that strangely satisfying feeling of having done hard intellectual work. My brain felt cooked. I had compared models, refined prompts, rewritten outputs, tested workflows, chased better phrasing, discarded entire approaches. It felt intense. Dense. Productive.
But later that evening, an uncomfortable thought appeared. What exactly had I been working so hard on? Not the actual text, at least not in the way I used to. Not the slow process of building an argument sentence by sentence, wrestling vague intuitions into something coherent, discovering what I actually think while writing. A large part of the effort had moved elsewhere. Into steering the machine.
Recently I came across a clip of Cleo Abram talking about "time under tension" in weightlifting as a metaphor for intellectual growth. Her point was simple and powerful: muscles grow under resistance, and maybe thinking works the same way. Writing, editing, struggling with ideas, staying inside the tension of not yet knowing where a thought leads. That process is not just the path to a result. In many cases, it is the thinking.
What struck me was the strange psychological territory AI creates around that idea. Because using AI can feel exhausting.
We’re Not Lazy, and We Don’t Want to Cheat
Learning these tools costs real mental energy. You have to understand interfaces, prompting logic, context windows, model behavior, workflow design, iteration strategies. You experiment constantly. You make decisions. You troubleshoot. Especially in the beginning, it can feel like learning a new instrument while rebuilding your entire relationship with work, creativity and attention. And that is exactly where things get interesting.
The effort of learning and operating AI can mask the fact that we may be outsourcing parts of the intellectual struggle that certain kinds of work used to require. Not because we are lazy. Not because we consciously want to cheat. But because the new form of effort feels real enough to compensate for the old one disappearing.
This is not the argument that AI users are not "really working." Anyone seriously integrating these tools into daily workflows knows how demanding it can be. The cognitive load is real. But not every form of cognitive load produces the same kind of intellectual development. There is a difference between struggling with a problem and struggling with the orchestration of tools around a problem. And because both experiences feel mentally effortful, they can become psychologically interchangeable.
It Would Be Too Easy to Romanticize the Old Struggle

That interchangeability may be one reason why the current AI transition feels morally smoother than it otherwise would. If a calculator solves a complex equation for me, the delegation is obvious. The machine did the math. But generative AI creates a far more ambiguous experience. You prompt, edit, curate, redirect, reject, refine. You stay involved. The process still contains friction. Which means it still feels earned. That feeling might be doing more psychological work than we realize.
Psychology has names for nearby mechanisms: effort justification, moral licensing, cognitive offloading. We tend to value what costs us effort, and effort often becomes internal proof that what we are doing is legitimate. AI complicates this because the difficulty increasingly moves from doing the cognitive work to managing the systems that do parts of it for us.
At the same time, it would be too easy to romanticize the old struggle. Not every hour staring at a blank page was sacred. Not every painful writing process produced insight. Some friction is just friction. Some cognitive burdens deserve to disappear. The history of human progress is also the history of building tools that reduce unnecessary effort and distribute cognition beyond the individual mind.
But generative AI is not just another efficiency tool. It increasingly participates in formulation, synthesis and expression, the very processes through which many people discover what they think. That matters because self-generated material is not psychologically identical to material we merely receive, select or refine. That changes the question. Not: Is AI good or bad? Not: Does using it still count as real work? But: Where do I still want genuine desirable difficulty, genuine time under tension?
Because maybe the danger is not that AI makes thinking effortless. Often, it does not. The danger is that AI can make the avoidance of certain kinds of thinking feel effortful enough to remain invisible. The machine does not have to replace our minds entirely. It only has to make the replacement feel like work.
