The Cognitive Debt Collectors

There's a new kind of debt piling up, and it isn't financial; it's cognitive. Every time we let a system handle our remembering, thinking, or writing, we take out a loan of clarity against our future selves. It feels good right away, but the cost usually lands later, when the stakes are high and time is short.

Let's call it cognitive debt. The principal is all the thinking we just didn't do. The interest adds up in sneaky ways: we get fewer hooks for our memory, our mental models become thinner, and it takes us longer to realize when an answer sounds right but isn't. By themselves, these costs are small. But all together, they really change how we deal with complexity.

An illustration of a person looking at a complex, fragmented puzzle with a confused expression.

What the data is starting to show

It turns out this isn't just a guess. Gerlich's 2025 study of 666 participants found a clear negative relationship between AI tool use and critical thinking scores on the Halpern Assessment (r = -0.68). At the same time, AI use was strongly and positively linked to cognitive offloading, the habit of letting the tool do the thinking (r = +0.72), and that offloading was in turn strongly tied to lower critical thinking scores (r = -0.75). The analysis suggests offloading is a big part of why heavier reliance on AI tracks with lower scores. Younger users appeared more dependent and scored lower than older groups, though more education seemed to soften the effect. Several summaries also describe diminishing returns: light AI use changes little, but past a certain point engagement with the task drops, and the scores drop with it. (Sources listed below.)
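
To make the statistics concrete, here's a minimal sketch of how numbers like these could be computed: Pearson correlations plus a rough product-of-coefficients mediation check. It runs on synthetic data invented for illustration, so the values it prints are not the study's results, and this is only one plausible way to run such an analysis, not the study's actual method.

```python
# Illustrative only: synthetic data, not the study's dataset or method.
import numpy as np

rng = np.random.default_rng(0)
n = 666  # sample size borrowed from the study purely for flavor

ai_use = rng.normal(size=n)                                    # hypothetical AI-use score
offloading = 0.7 * ai_use + rng.normal(scale=0.7, size=n)      # cognitive offloading
critical = -0.75 * offloading + rng.normal(scale=0.6, size=n)  # critical-thinking score

def r(x, y):
    """Pearson correlation coefficient between two samples."""
    return np.corrcoef(x, y)[0, 1]

print("r(ai_use, critical)     =", round(r(ai_use, critical), 2))
print("r(ai_use, offloading)   =", round(r(ai_use, offloading), 2))
print("r(offloading, critical) =", round(r(offloading, critical), 2))

# Product-of-coefficients mediation sketch:
# a: AI use -> offloading; b: offloading -> critical thinking, holding AI use constant.
a = np.polyfit(ai_use, offloading, 1)[0]
X = np.column_stack([np.ones(n), ai_use, offloading])
b = np.linalg.lstsq(X, critical, rcond=None)[0][2]
print("indirect effect a*b     =", round(a * b, 2))
```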

Why this becomes a loop

It's less about how good the tools are, actually. The big change comes from the feedback loop they set up: the more thinking you hand over, the weaker the relevant skills get, and the weaker those skills get, the more tempting it is to hand over the next task too.

An illustration of a person using an AI tool, with an arrow looping from the person to the AI and back, signifying a feedback loop of increasing reliance.

When those collectors turn up

You don't really notice cognitive debt during your routine tasks. Instead, it shows up in trickier situations: a brand-new problem with no existing solutions, an unexpected input causing issues, or when a fragile system breaks after an update. It also appears when you're making a strategic decision, but the core question isn't even clearly formed. These are precisely the moments that demand slow, careful thought. And if you haven't been exercising that mental 'muscle,' these are also the times when you'll be least prepared.

A useful way to think about AI

Consider three separate ways you could use AI: autopilot, accelerator, and airlock. It's important to tell them apart, because each one builds a different kind of habit.

Autopilot means the system does your first draft. You'll probably accept most of what it gives you, with just a few small edits. This helps you get a lot done quickly and takes the mental load off. It trains you to catch surface errors, not to truly evaluate the content.

Accelerator means you get your initial thoughts or a draft down first, then use AI to challenge, expand, or condense them. Your original thinking stays central, and the model just helps make it stronger. This trains you to dig deeper, instead of just taking what's given.

Airlock means the AI guards your focus instead of doing your thinking: it blocks noisy feeds, batches notifications, surfaces specific things to study, carves out focused work blocks, or filters out everything irrelevant. All of that preserves attention, so you can actually put your mind to work.

Most people blend these modes without really thinking about it. The important thing is to order them deliberately: airlock first, then accelerator, and only then autopilot. That sequence protects the parts of your thinking you don't want to hand over by default.
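
As a toy illustration of that ordering, here's a short sketch that routes a task to one of the three modes. Every name, field, and rule in it is an assumption of mine rather than something the piece prescribes; it just makes "airlock first, accelerator second, autopilot last" explicit.

```python
# Toy sketch of the airlock -> accelerator -> autopilot ordering.
# All names and rules here are assumptions made for illustration.
from dataclasses import dataclass
from enum import Enum

class Mode(Enum):
    AIRLOCK = 1      # protect attention: filter, batch, block
    ACCELERATOR = 2  # draft yourself first, then let AI challenge and extend
    AUTOPILOT = 3    # let the system draft; you only skim and edit

@dataclass
class Task:
    high_stakes: bool  # is the cost of getting it wrong significant?
    novel: bool        # is there no existing playbook for this problem?
    drafted: bool      # have you written your own first pass yet?

def choose_mode(task: Task) -> Mode:
    if task.high_stakes or task.novel:
        # Keep the thinking yourself; use AI only to guard your focus.
        return Mode.AIRLOCK
    if not task.drafted:
        # Routine but worth owning: your draft first, the model as a sparring partner.
        return Mode.ACCELERATOR
    # Low stakes, well understood, already framed by you: delegation is cheap here.
    return Mode.AUTOPILOT

print(choose_mode(Task(high_stakes=True, novel=True, drafted=False)))   # Mode.AIRLOCK
print(choose_mode(Task(high_stakes=False, novel=False, drafted=True)))  # Mode.AUTOPILOT
```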

The quiet counter-movement

A small but growing group is already using AI in a different way. Instead of having models write things, they're asking models to help them *not* read. These tools filter the information overload, shape initial thoughts into better questions, and even enforce periods of deep focus. Reports from 2025 on AI-assisted digital minimalism describe tools that act more like a gatekeeper than a wish-granter: scheduling deep work, screening sources, and cutting down the general mental noise. That leaves people with enough energy for the tasks that still genuinely need a human.

This isn't about being against technology. It's about how we use our attention. When your mental capacity is limited, the real win isn't a better answer; it's just fewer pointless questions coming your way to begin with. It's easy to miss the downsides when everything feels so simple.

The idea that offloading tasks is just about efficiency often hides a different truth. What you're really doing is trading immediate speed for your ability to handle things in the future. It's a bit like a big loan: everything seems fine until something unexpected happens. The real problem isn't that people will stop thinking entirely; it's that they'll do less of the crucial thinking needed to prevent small mistakes from becoming huge problems.

Here's another problem, call it calibration drift. When you spend less time dealing with real, messy data, your instincts start to come from what the model tells you. After a while, your sense of what's normal, rare, or risky matches the tool's version of the world, not the actual world. You're fast when the tool is in hand, and less reliable when it isn't.

Focus on monitoring, rather than just optimizing

When you focus heavily on optimizing, you drift into autopilot; the dashboards make it feel like progress. A more useful stance is diagnostic: look for where you're exposed and how resilient you actually are.

Exposure. Where are you relying on models for decisions that could really go wrong? How often do you get to see the actual problem before a model interprets it for you? What percentage of your day starts with a fresh, human look?

Resilience. Where in your work do you have to get it right on the first try, because the cost of failure is too high? What quick checks can you run without the full tooling to keep those skills sharp? And when do you deliberately operate without any assistance?

Escalation. What's your guideline for when you shift from manual control to automated? Are there specific situations where you'd still refuse to switch, even if the model performs very well? And who bears the responsibility when the outcome is incorrect?
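
If it helps to make those three prompts a recurring habit rather than a one-off reflection, here's a throwaway self-audit sketch. The question wording paraphrases the prompts above, and the 0-2 scoring scale is an arbitrary assumption, not anything the piece specifies.

```python
# A throwaway self-audit: questions paraphrase the three prompts above;
# the 0-2 scale (0 = weak, 2 = solid) is an assumed convention.
AUDIT = {
    "exposure": [
        "How often do you see the raw problem before a model interprets it?",
        "What share of your day starts with a fresh, unassisted look?",
    ],
    "resilience": [
        "Can you still do the must-not-fail steps without the tool?",
        "When did you last deliberately work a task unassisted?",
    ],
    "escalation": [
        "Is there a written rule for when manual control becomes automated?",
        "Is it clear who owns the outcome when the model is wrong?",
    ],
}

def run_audit(answers: dict) -> None:
    """Print a per-area score; the lowest totals are the areas to shore up first."""
    for area, questions in AUDIT.items():
        scores = answers.get(area, [0] * len(questions))
        print(f"{area:<12} {sum(scores)}/{len(questions) * 2}")

# Example: one 0-2 answer per question in each area.
run_audit({"exposure": [1, 0], "resilience": [2, 1], "escalation": [0, 0]})
```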

Big companies run into the same issues, just at a larger scale. If one model handles everything, then training, onboarding, and even the quality of decisions all depend on a tool the organization doesn't control, can't inspect, and may not even know has been updated. That can look like a win in the short term; the long-term danger is a single point of dependence with no fallback.

An illustration of a person transferring a glowing thought from their mind to a computer screen with an AI interface.

What moderation actually looks like

Research on thresholds and diminishing returns is useful because it points to a qualitative shift, not a straight line. Moderate, thoughtful use can help your thinking, especially when it keeps you engaged with the problem; heavy reliance pulls you away from the very information you need to make sound judgments. Practically, moderation isn't a time limit so much as a sequence: engage with the problem, then consult the model, then decide. If you start by consulting the model, you'll likely just defer to it. If the model merely informs a decision you still make yourself, that's how discernment gets built.

This doesn't mean we should ignore AI. It means being deliberate about how and when we use it. Decide which parts of your thinking you want to keep doing yourself, then use these tools to protect them. Let the AI handle tasks where the cost of failure is low and speed is the priority. Rely on your own judgment when things are complicated, when people have conflicting interests, or when the consequences of a mistake are significant.

Debt collectors rarely make a big scene. Cognitive debt shows up as small mistakes at crucial moments: a missed signal, a false alarm, a plan with no slack. You can pay the small costs now, or face the full bill at a moment you don't choose. Either way, someone settles up.