Facing AI
When technology reveals how we decide.
Introduction — What are we really facing?
“The system is designed this way.”
How many times have you heard this sentence? You want to complete an administrative procedure, change a booking, ask a question that falls outside the predefined framework — and the system sends you back to its own rules, without any room for discussion.
With artificial intelligence, this logic crosses a new threshold. These are no longer just rigid pathways, but automated choices, calculated priorities, decisions made according to criteria that are rarely made explicit.
AI is everywhere: in strategic narratives, executive committees, promises of productivity. It is often presented as an unavoidable lever — saving time, automating, optimizing, deciding faster.
But behind the technology, something else is at play.
AI acts as a systemic revealer: it sheds light on how we decide, how we exercise power, how we distribute responsibility. It forces us to look at our own modes of operation — including what we would rather no longer see.
A poorly framed question
Most debates about AI begin with the same questions:
What will it replace? Which jobs will disappear? How much time will we gain?
These questions are legitimate. But they rest on an implicit assumption: that AI is primarily a neutral tool, something we can simply add to organizations the way we once added ERP systems or spreadsheets.
In reality, things are rarely that simple. The real question is not only “What can AI do?”
It is rather: “What are we willing to let it do in our place — and why?”
In many organizations, these trade-offs remain implicit. We automate because “it works,” because “it’s more efficient,” because “everyone is doing it.” The discussion about meaning, responsibility, or long-term effects comes later — when it comes at all.
AI amplifies what we already struggle to face: the way we decide, and sometimes the way we arrange things so that we no longer really have to decide.
AI as a systemic revealer
AI does not primarily transform organizations through what it does, but through what it reveals.
In most companies today, it is not an “autonomous brain” taking over decision-making. It operates in much more ordinary ways: writing assistants, recommendation engines, sorting systems, dashboards, prioritization algorithms.
In other words, AI inserts itself at the heart of our everyday cognitive processes.
This is where its impact becomes systemic. What it infiltrates is not just execution — it is the way we think, prioritize, and arbitrate.
Gradually, it suggests instead of deliberating, prioritizes instead of choosing, recommends instead of discussing, accelerates where slowing down might sometimes be necessary.
In the short term, this infiltration is largely invisible. It relieves, streamlines, “helps.”
In the medium term, it reveals something else: a silent displacement of discernment.
What AI brings to light are not its own technical limitations, but the limitations we have already embedded in our organizations: weakly debated decision processes, implicit criteria never revisited, fragmented responsibilities, constant pressure to “go faster,” “do better,” “decide without discussion.”
AI does not create these dynamics. It accelerates them. It makes them structural. It turns habits into rules, and discreet renunciations into normal functioning.
Where does the power to decide still lie?
Behind every automation lies a prior decision. And behind every decision lies a political choice, in the noblest sense of the term: who decides, according to which criteria, with what possibility of challenge.
What AI reveals particularly clearly is that the power to decide is becoming increasingly difficult to locate.
Not because it has disappeared. But because it has shifted — into technical parameters, prioritization criteria, models trained on historical data, processes that no one really questions anymore.
Gradually, decisions are no longer carried by identifiable people, nor debated as such. They become the “normal” outcome of how the system works.
And when decisions are no longer clearly assumed, collective discernment erodes.
We no longer follow an orientation because it has been understood or co-constructed, but because it appears as the logical — even inevitable — consequence of the existing setup.
This is not imposed obedience. It is a functional, gentle, often unconscious submission: “It’s not me deciding, it’s the tool.” “It’s not a choice, it’s the system.” “We don’t have time to do it differently.”
Re-instituting human decision-making: concrete practices
If AI acts as a revealer, then the central question becomes institutional: how can we preserve — or rebuild — spaces where decision-making remains alive, debatable, and open to revision?
Contrary to what one might think, this does not require inventing models from scratch. Many organizations are already experimenting with approaches that restore human discernment to a central place.
Explicit decision-making processes
Clearly distinguish what falls under automation and what falls under human judgment. Before “coding” criteria into tools, ask: who decided on these criteria? Are they still relevant? Can they be challenged?
Distributed governance models
Rather than centralizing or diluting power, clarify who holds authority over what. Each person can then exercise contextual judgment in their domain, without waiting for hierarchical validation or hiding behind “the system.”
Decisions by consent
Accept decisions that are imperfect but adjustable, enabling action without waiting for an absolute truth. Consent is not unanimity—it is the absence of a blocking objection. This preserves the ability to iterate.
Spaces for deliberation on criteria
Before entrusting trade-offs to automated systems, take the time to debate collectively: which criteria? Which priorities? What possible unintended effects?
These practices share one thing: they recognize that decision-making is a process, not an outcome. And that this process deserves to be cultivated as a collective capability—especially in a world where the temptation to delegate it has never been stronger.
Conclusion — Taking back control of our decisions
Ultimately, facing AI is not about deciding whether to adopt it or reject it.
It is about deciding how we want to keep deciding.
What AI reveals—sometimes brutally, sometimes insidiously—is not only our relationship to technology, but our relationship to power, discernment, and responsibility.
So the question is not “how do we prevent automation?” but rather: which decisions do we still want to fully inhabit as human beings—individually and collectively?
The point is not to slow progress, nor to sanctify humans against machines. It is to avoid letting technology decide, on its own, what we stop questioning.
The real work ahead is not only technological—it is organizational, political, and deeply human.
And AI, if it acts as a revealer, also offers us a rare opportunity: to consciously rethink how we decide, before those ways harden elsewhere, without us.
Further reading:
I joined Paradigm21 in 2019.
In the Paradigm21 project, I found a very open and empowering path for growth. At the same time, it is a path toward ever greater courage: offering the world a radically different way of organizing human ties and power relations in organizations.