Cognitive debt: the work behind the work

The scene is now familiar.

A team presents a strategic report during a meeting. The document is polished, structured, and coherent. It was produced quickly. Very quickly. Thanks to ChatGPT or any other LLM (Large Language Model: an AI that generates text by predicting the most probable next words). In the room, the atmosphere is one of satisfaction: efficiency, time saved, a feeling of having worked well.

Then management asks a question not found in the document.

Not a technical question. Not a request for clarification.

A question about a second-order implication. About the why behind a key recommendation.

A silence settles. The team hesitates.

The deliverable is there, the conclusions too, but the reasoning that led to this position is no longer truly accessible. This scene reveals a tension that has become structural. Under the pressure of speed and productivity, AI takes on an increasing share of cognitive work, and something essential quietly shifts.

The question is therefore not just what we produce faster.

But what we stop building along the way.

1. Attention fragmentation is not an individual flaw

The background noise of our work is no longer silence. It’s cognitive noise.

Our capacities are limited. George Miller, an American cognitive psychologist, showed in the 1950s that our working memory can only hold a limited number of items at once, often summarized as “7 ± 2.” This formula describes not a fixed limit but our ability to organize information into meaningful units: what we can think depends less on the quantity of information than on how it is structured and contextualized. Yet our professional environment now relies on a continuous flow of solicitations, alerts, and fragmented content.

Graham Burnett, a historian of science specializing in the transformations of attention, describes this phenomenon as attention fragmentation. It is not an individual failure, nor a lack of personal discipline. It is a systemic logic, embedded in digital economic models: capturing attention, fragmenting it, and keeping it under permanent tension.

This model is extractive by nature. It transforms our capacity for concentration into an exploitable resource.

“A wealth of information creates a poverty of attention.” — Herbert Simon, cognitive scientist and Nobel laureate in economics.

Generative AI acts as an accelerator of this dynamic. Its ability to produce content at high speed far exceeds our capacity for assimilation. It further densifies the flow, reduces downtime, and leaves little room for thought maturation.

It’s no longer just a one-off distraction.
It’s a fundamental cognitive condition.

2. When we delegate too much, the brain withdraws

A recent MIT study, Your Brain on ChatGPT, allows us to observe this phenomenon very concretely.

Three groups are compared. All must write an essay. The first uses an LLM, the second a search engine, the third no external tool. Their neural activity is measured during the task.

The result is unambiguous: the greater the external support, the lower the overall brain connectivity. The group using the LLM shows the weakest neural engagement.

In other words, the brain delegates.
And when it delegates too much, it withdraws.

This neurological observation translates into measurable effects. Participants in the LLM group show almost no recall of the content they have just produced. Their sense of intellectual ownership is weaker, more diffuse. And the generated texts are more homogeneous, less distinctive.

It’s not a sudden collapse of cognitive abilities.
But a gradual deactivation of the internal structuring effort.

And this withdrawal is not temporary.

3. Cognitive debt: a cost that comes later

Cognitive debt is what accumulates when we outsource the effort of thinking: we save time now, we lose autonomy later.

Each time a central cognitive task is outsourced, such as formulating an argument, structuring an idea, or synthesizing information, a small debt is incurred. An invisible but real debt: less engagement, less internal consolidation, less ability to take back control without assistance.

The study clearly shows this. When the LLM is removed from the group that systematically used it, performance drops. The brain remains in a low-power state. The debt comes due.

This phenomenon extends far beyond the individual.

Organizations, and then states, enter a competitive race where speed of adoption takes precedence over the evaluation of long-term effects. Little time is devoted to the cognitive, cultural, and educational costs of these technologies. Everyone has an interest in accelerating to avoid being left behind, even if collectively everyone would benefit from slowing down and establishing common rules. This discrepancy between what is individually rational and what would be collectively preferable is at the heart of what is called a prisoner’s dilemma.

And this movement remains understandable. In many organizations, speed, standardization, and “proof by deliverable” are rational responses to pressure, risk, and uncertainty. The problem does not appear when this logic exists, but when it becomes hegemonic: when it gradually replaces thinking time with delivery time, and when quality is confused with form. At that point, we don’t just gain efficiency. We change the type of minds the organization produces.

The question then becomes: who decides to delegate these cognitive uses to AI, within what frameworks, with what explicit or implicit rules?

The paradox is that technology is not the problem in itself. The same tools can be used to manipulate or to emancipate, to impoverish thought or to strengthen it. Everything depends on the context of use and the mode of engagement.

The real question then becomes strategic.

Are we training augmented professionals, or AI operators?

In one case, the human masters the tool. In the other, the tool masters the domain, and the human becomes its interface.

This choice is not technological. It is cultural.

Thinking is increasingly becoming a distributed, fragmented, assisted, and sometimes unconsciously delegated activity. Yet human thought is not just a capacity for calculation or producing answers; it is also a slow, embodied process, traversed by doubt, hesitation, and internal conflict. It is precisely this time that disappears when everything is optimized for speed.

The risk is not to become less intelligent, but to become less capable of sustaining autonomous, continuous, and responsible thought. Thought capable of connecting heterogeneous elements, resisting immediate evidence, and formulating judgment where no optimal answer exists. In other words, what is at stake is not momentary cognitive performance, but the collective capacity to maintain minds capable of decision, discernment, and disagreement.

When a capacity becomes rare, it becomes strategic. And when it becomes strategic, it becomes attackable.

4. Cognitive sovereignty becomes a strategic issue

A shift is underway.

Subtle but structural.

The human mind is now recognized as a field of conflict in its own right. After land, sea, air, space, and cyberspace, cognition has become the sixth domain of warfare. This is not a metaphor but a deliberate doctrinal evolution, and contemporary armed forces explicitly integrate it: in France, with structures like Viginum; in Switzerland as well, through cognitive-protection missions. The stakes of current conflicts are no longer limited to the destruction of infrastructure or classic military superiority. They are shifting.

To act on perceptions.

On mental frameworks.

On the ability to decide.

This war does not aim so much to impose ideas as to exploit the conditions already described: fragmented attention, accelerated flows, thought under pressure.

At this stage, cognitive debt changes its nature.

It is no longer just individual, nor solely organizational.

It becomes an issue of collective resilience. A society that massively outsources the structuring of thought, memory, and the capacity for synthesis weakens its own autonomy of judgment. Not due to a lack of intelligence, but due to a weakening of cognitive continuity, with direct consequences on the ability to debate, to decide collectively, and to resist influence, manipulation, and the oversimplification of complex issues.

5. The last bastion

Faced with the massive outsourcing of cognition, the question is not to reject the tool. It is to define what cannot be delegated.

What remains are distinctive human capacities: originality, critical thinking, metacognition, the ability to contextualize, to connect, to judge in uncertainty.

These are no longer “soft skills.”

They become the core of high-value work.
The work the tool cannot do for us. The true work behind the work: the moments when we must formulate a position, connect contradictory elements, explain without support why a decision holds.

“Once men turned their thinking over to machines in the hope that this would set them free. But that only permitted other men with machines to enslave them.” — Frank Herbert, Dune, 1965

The question is therefore not to reject AI.

It is to decide what we agree to no longer train.

Conclusion: the non-delegable part

The most important question AI poses to us is not about the machine.
It is directly about us.

In a world where the production of content, arguments, and reasoning can be delegated in a few seconds, the true scarcity is no longer information, nor even technical skill. It is the ability to think for oneself, to connect, to doubt, to judge, to take a stand.

Defining the non-delegable part of work is therefore not a defensive or nostalgic exercise. It is a strategic act. For individuals. For organizations. For democratic societies.

Because what we choose to delegate today directly shapes the kind of minds we cultivate for tomorrow.

And the question, ultimately, remains open:

what do we want to remain responsible for, cognitively, collectively, and humanly?


Note on this article and the use of AI:
For this article, NotebookLM was used to process 39 different sources (academic articles, podcasts, YouTube videos, etc.) and extract the main themes to develop. ChatGPT was then used to refine the style and structure with a complex prompt developed by the author. Two human reviewers were consulted (thanks Marc and Lorain), and a two-week maturation period with final corrections led to the published version.

Consultant, Trainer and Coach

I joined Paradigm21 in 2019 after writing a book on new organizational paradigms in Switzerland. My ambition is to build more conscious organizations and inspired, organic leadership.

As a specialist in personal and organizational transformation, I make the link between economic performance, personal well-being and organizational effectiveness.
