Faut-il encore décider ?
Hazan, E., & Sibony, O. (2026). Faut-il encore décider ? La décision humaine à l’ère de l’intelligence artificielle. Flammarion.
Our opinion
This month, we have selected a book that, although it does not come from the safety science research community, raises a question that is likely already having an impact on risk management: what happens when an algorithm proposes a choice and the human expert disagrees? Who should make the final decision, and according to which rules? The subject of this book is not artificial intelligence, but decision making. The authors explore the transformation of decision making in a world where algorithms outperform humans in many fields. They show that the delegation of decision making is already under way and that it raises new ethical, political, organizational, and democratic issues. They call for a clear distinction between what can be entrusted to machines and what must remain a non-negotiable human prerogative.
The authors
Éric Hazan runs an investment fund and teaches digital strategy and artificial intelligence at HEC and Sciences Po.
Olivier Sibony is a professor at HEC and at Oxford.
Our Summary

The book by Éric Hazan and Olivier Sibony is structured in three parts.
The first part explores the history of decision making, showing that the history of AI and that of decision sciences are closely intertwined. The questions raised today by artificial decision making are not as new as one might imagine.
The second part presents a mapping of ways of making decisions, with or without AI.
In the third part, the authors attempt to envision what tomorrow’s society might look like, depending on the place that artificial decisions will occupy within it.
Part 1: Decision making and AI – intertwined paths
Decision science and artificial intelligence emerged at the same time, during the cognitive revolution of the 1950s. Among the first “cognitivists” were pioneers of a new discipline: artificial intelligence. Early proponents of AI set out to model mental processes using symbolic machines: to understand how thought works, one must decode its rules, much like the rules that enable a machine to operate. Both cognitive science and AI thus rest on the same functional analogy between the brain and the machine.
Among today’s AI systems, a distinction is often made between the expert systems of the 1980s, built on symbols and rules (rule-based AI, the lineage that made it possible to defeat Garry Kasparov at chess), and connectionist AI, which seeks to reproduce the brain’s neural networks. From the second half of the 2010s onward, AI reached a new milestone with the invention of the Transformer architecture, ushering in the era of large language models (LLMs).
Many researchers have highlighted the analogy between the two-system model developed by psychologist Daniel Kahneman (Thinking, Fast and Slow) and the two major families of AI models. Symbolic AI mimics the functioning of our “conscious” thinking, or System 2. In their own way, neural networks draw inspiration from System 1: they “learn” quickly and associatively, without the need for explicit rules. The analogy is not perfect, but it remains very useful for understanding the strengths and weaknesses of the two types of AI. The strength of symbolic AI lies in its reliability and the transparency of its reasoning, but its rigid rules cannot grasp the ambiguities of the real world. By contrast, neural networks are flexible but opaque: very often, we do not understand how they arrive at their conclusions. When they make mistakes (the famous “hallucinations”), these errors resemble the mistakes made by inattentive humans, the ones we commit when our System 1 is in control and the supervision of our System 2 is at rest. It is therefore up to us, as humans, to understand how AI systems work, to know their limits, and to adapt our decision-making modes to their strengths and weaknesses.
Part 2: AI or me – who decides?
What makes AI powerful is its capacity for pattern recognition: the ability to identify regularities within vast datasets.
The authors define three key conditions:
- clearly bounded environments,
- clear objectives,
- massive datasets.
Together, these three conditions determine technical feasibility.
The authors develop a mapping of the application domains of decision-making AI, based on two dimensions: technical feasibility and acceptance by humans. This mapping distinguishes five zones: the “conquered territories,” the “swamp of concern,” the “forbidden zone,” the “valley of temptation,” and the fields of “co-decision.”
The “conquered territories”
These are sectors that already make extensive use of AI (financial markets, logistics, cybersecurity, etc.).
The “swamp of concern”
Many of us feel a shiver at the idea that crucial decisions could be artificial, mechanical, and dehumanized. The European AI Act defines “high-risk” applications in which human oversight is mandatory (justice, border control, education, etc.).
However, algorithmic decision making also gives rise to concerns that are not well-founded. The authors identify several criticisms that are partly unjustified. Here are two well-known examples:
AI makes mistakes, so humans must supervise
This is the idea of keeping a human in the loop. This reassuring narrative comforts decision makers in the belief that they will remain irreplaceable forever. Yet if AI is, on average, more effective than humans, it is precisely because, when AI and a human disagree, it is more often the AI that is right and the human who is wrong.
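The arithmetic behind this claim is worth making explicit. The numbers below are illustrative, not taken from the book: since the cases where the two agree count equally toward both accuracy scores, a higher overall accuracy can only come from the disputed cases.

```python
# Illustrative numbers (not from the book): if AI is more accurate overall,
# it must be right more often than the human on the cases where they disagree.

cases = 100
agreements = 70            # same answer from both; assume both right on these
ai_correct_total = 90      # AI accuracy: 90 %
human_correct_total = 80   # human accuracy: 80 %

# On each disagreement, exactly one of the two is right.
disagreements = cases - agreements                            # 30
ai_right_when_disputed = ai_correct_total - agreements        # 20
human_right_when_disputed = human_correct_total - agreements  # 10

assert ai_right_when_disputed + human_right_when_disputed == disagreements
# On disputed cases, the AI is right twice as often as the human expert.
assert ai_right_when_disputed > human_right_when_disputed
```

With these (invented) figures, overruling the AI on every disagreement would lower the joint accuracy from 90 % to 80 %, which is the authors' point.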
Treating humans in an algorithmic way is dehumanizing
With or without AI, bureaucratic decision making resembles algorithmic decision making. The defining feature of bureaucracies is that they follow rules, in order to avoid judgments based on personal bias or favoritism.
The “forbidden zone”
There are several types of decisions for which relying on AI raises insurmountable problems because it is technically impossible: this is the “forbidden zone.” As soon as a crisis is unique (an international crisis, the triggering of a nuclear weapon, etc.), a rule-based AI cannot handle it with the required level of nuance. The authors suggest that it might be possible to draw up an international “charter of decisions that cannot be delegated to AI.” In a different vein, in the artistic domain, a human artist’s work expresses an intention and implies the capacity to feel emotion; AI, by contrast, feels nothing. Another family of decisions that must remain human includes choices with a significant emotional component and a long-term commitment (for example, choosing a romantic partner).
The “valley of temptation”
There are a number of situations in which an AI could make a decision (because sufficient data are available and objectives are clear enough) and yet, the authors suggest, we should resist the temptation to use it. This is the case, for example, for decisions that would be ethically shocking to entrust to a machine (autonomous lethal weapons, judicial decisions, end-of-life decisions, etc.).
The fields of “co-decision”
There is a wide range of situations in which data are insufficient or objectives are not clear enough to train an AI. In such cases, AI does not replace human decisions, but it can contribute to them by playing a role within a broader method (complex medical diagnosis or recruitment, for example).
Part 3: What kind of society do we want?
Imagine a list of patients waiting for a transplant. How should we choose the one who receives the kidney, liver, or heart that could save their life? Today, the allocation of organs is determined using an algorithm. But defining an algorithm requires making trade-offs explicit. That is why there is no such thing as a neutral algorithm: there is no algorithm without priorities. Whether we like it or not, every parameter setting is an ethical, and sometimes a political, decision. Social acceptability depends on two essential variables: risk perception and trust.
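A toy example makes the point concrete. The scoring function below is purely hypothetical (real allocation systems are far more elaborate); its only purpose is to show that each weight is a value judgment, not a technical detail.

```python
# Hypothetical priority score for transplant candidates; NOT a real
# allocation algorithm. Every weight encodes an ethical trade-off:
# changing it changes who receives the organ first.

def priority_score(waiting_years, medical_urgency, compatibility,
                   w_wait=1.0, w_urgency=2.0, w_match=3.0):
    """Higher score means higher priority. The weights are policy choices."""
    return (w_wait * waiting_years
            + w_urgency * medical_urgency
            + w_match * compatibility)

# Candidate A has waited longer; candidate B is a better biological match.
a = priority_score(waiting_years=6, medical_urgency=2, compatibility=0.4)
b = priority_score(waiting_years=1, medical_urgency=2, compatibility=0.9)
assert a > b   # with these weights, the long waiter ranks first

# Raise the weight on compatibility and the ranking reverses:
a2 = priority_score(6, 2, 0.4, w_match=20.0)   # 6 + 4 + 8  = 18
b2 = priority_score(1, 2, 0.9, w_match=20.0)   # 1 + 4 + 18 = 23
assert b2 > a2   # same patients, different priorities: an ethical choice
```

The code settles nothing about who should receive the organ; it only shows that the answer is hidden in the weights, which is exactly why the authors insist that no algorithm is neutral.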
Building on this observation, the authors project themselves into 2036.
They imagine a range of possible scenarios along two dimensions:
• the speed and scale of the deployment of AI systems across society (the AI deployment dimension);
• the degree of humanity and democracy brought to bear to guide AI and make it politically acceptable (the democratic governance of AI dimension).
Four future scenarios emerge: “Technophob.IA,” “Technocrat.IA,” “Nostalg.IA,” and “Democrat.IA.”
- “Technocrat.IA”: the society of algorithms, or hyper-optimized technocracy (AI deployment +, democratic governance of AI −)
- “Democrat.IA”: AI in the service of democracy, and vice versa (AI deployment +, democratic governance of AI +)
- “Technophob.IA”: the status quo produced by maximum resistance to change (AI deployment −, democratic governance of AI −)
- “Nostalg.IA”: a return to artisanal decision making (AI deployment −, democratic governance of AI +)
“Technophob.IA”: distrust
Political, intellectual, and administrative counterpowers have banded together in the service of the status quo. Innovation has remained confined to private or peripheral applications. The functioning of institutions has not changed: decisions remain in the hands of traditional elites and continue to produce outdated public policies. Many citizens, frustrated by the inefficiency of current systems, lose trust in institutions. The country fragments: on one side, technophiles form autonomous communities and widen the gap between the state and new forms of social organization; on the other, crises periodically ignite the streets and fuel the rise of populism. One of the reasons for this malaise is chronic economic stagnation, as the country struggles to take advantage of AI’s capabilities. The prosperity gap widens with nations that adopt AI proactively; purchasing power stagnates while social frustrations and a sense of downward mobility grow.
“Technocrat.IA”: a hyper optimized world
Society has chosen efficiency above all else. Political legitimacy no longer comes from voting, but from calculation. Efficiency replaces justice as the cardinal value. Society loses in freedom what it gains in order. Human errors have disappeared – but so have emotions, hesitations, and exceptions, which are what make human judgment rich. The majority of citizens, freed from repetitive tasks, invent new forms of symbolic, aesthetic, or relational contribution. This technocracy also reveals major flaws. Everywhere, perfectly rational algorithmic tradeoffs exclude what makes us human. AI systems, which excel at optimizing what already exists, struggle to imagine breakthroughs. Any radical innovation becomes impossible, gradually leading to a form of stagnation. Moreover, dependence on AI systems creates considerable systemic vulnerabilities. Technocratic society becomes fragile in the face of cyberattacks, technical failures, or hidden biases that can propagate throughout the entire system. In addition, AI integrates a “social score” into its decisions, modeled on the Chinese “social credit” system.
The “Technocrat.IA” scenario highlights the tension between systemic optimization and citizen involvement. Where citizens could once challenge an embodied public servant, they now confront abstract and indisputable logics deemed “rational.” This scenario raises a fundamental question: is a rationally optimized society still democratically legitimate?
“Nostalg.IA”: preserving the human at all costs
This scenario takes up the challenge of re-humanizing decision making and of democratically reclaiming governance. Society knowingly chooses to say “no” to the rapid and widespread deployment of AI. It has observed that optimized efficiency is not necessarily synonymous with well-being. Citizens express a growing aspiration to regain influence over decisions that affect them. They recognize AI’s inability to honor the complexity of human situations. In short, critical decisions are “guaranteed AI-free” (medicine, education, justice, etc.).
“Democrat.IA”: when democracy reinvents AI – and vice versa
In this scenario, AI spreads quickly and everywhere, but not in just any way: under effective citizen control. Citizens invest directly in the governance of AI to retain control over critical decisions. The aim is to organize the democratic governance of AI by humans, through a reinvented form of participatory democracy strengthened by technology.
The first pillar is participation. It begins with the reappropriation of data, by developing public or cooperative infrastructures for information processing to limit dependence on private giants. It continues with a shared definition of objectives: AI forces difficult choices about the purposes algorithms should pursue. The premise of this scenario is that such choices must not be treated as technical adjustments delegated to experts. They are governance orientations that must be debated and decided by citizens. Citizens thus become regular actors in public debate, not only through voting or polling but also through citizens’ assemblies. They are no longer passive beneficiaries of optimized decisions, but active participants in defining the common good. To this end, participatory democracy relies on a suite of AI tools that multiply the effectiveness of consultations (recruitment of participants, asynchronous debates, synthesis of contributions, etc.). It becomes possible to involve thousands of citizens on each issue of public interest (health, education, justice, climate, etc.) and on defining the desirable balance between automation and human decision making. The governance of public AI also serves as a model for decisions by private actors.
The second pillar is democratic oversight of AI. Its main mechanism is algorithm auditing. Just as auditing corporate accounts is essential to the proper functioning of financial markets, auditing algorithms creates the conditions for a credible algorithmic social contract. This new industry equips itself with a professional council and codes of ethics.
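As an illustration of what one concrete audit check can look like, the sketch below applies the well-known “four-fifths rule” for disparate impact to a toy decision log. The function names and the log are invented for this example; the book does not prescribe any specific audit technique.

```python
# Illustrative audit check (invented example): the "four-fifths rule" for
# disparate impact, comparing an algorithm's positive-decision rates
# across demographic groups.

def selection_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(decisions):
    """Lowest group selection rate divided by the highest.
    A value below 0.8 is a conventional red flag for auditors."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Toy decision log: group A approved 8 times out of 10, group B 4 out of 10.
log = ([("A", True)] * 8 + [("A", False)] * 2
       + [("B", True)] * 4 + [("B", False)] * 6)

assert disparate_impact_ratio(log) < 0.8   # this system would be flagged
```

Checks of this kind only need access to a system's decisions, not to its internals, which is one reason auditing is often proposed as the main oversight mechanism even for opaque models.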
Bringing this scenario to life faces considerable challenges: citizen engagement is not guaranteed; the administrations and agencies concerned will have to accept relinquishing part of their power to citizens’ assemblies; and elected officials will have to learn to work with participatory democracy. Finally, “Democrat.IA” requires rapid citizen education and a profound overhaul of educational pathways and elite selection.
The authors advocate this final scenario. In their view, it is the only one that offers a model of society open to progress, that democratically manages the contradictions generated by change, and that mobilizes the energies of the entire population in the service of a shared project. “Democrat.IA” entails a dual transformation – technological and democratic. The society that succeeds in achieving this dual transformation will gain the capacity not to choose between more or less AI, but to become a full agent of its own destiny.
Conclusion
Artificial intelligence (AI) is a technology that carries both serious threats and immense opportunities. Because it unsettles us, AI forces us to confront questions we have long tended to sidestep. It requires us to prioritize criteria and face dilemmas. It compels us to decide how we want decisions to be made. One temptation is to reject the machine altogether; the other is to rely on it blindly. The latter is the path promoted by algorithm designers who claim to spare us difficult choices, and it sets us on the road to servitude. The question is not whether AI will make decisions (it already does), but according to which rules and in the service of which purposes. We must learn how to frame it and govern it.
- Framing it: in some cases, where objectives are perfectly defined and data are abundant, we will be able to delegate clearly specified decisions to AI. But for the majority of decisions, we will instead treat it as a “colleague” occupying a well-defined role, within a logic of co-decision-making.
- Governing it: our society is marked by a divide between the grassroots and the top, between those who bear the consequences of decisions and those who make them. This divide endangers democracy. AI risks widening it, even though it can also be part of the solution. Its deployment offers a unique opportunity to involve a broad segment of the population in defining its purposes and in the concrete steering of algorithms.
By delegating to AI what it can do better than we can, we can free up time and energy for what only we can do: deliberate, interpret, imagine, and arbitrate between conflicting values. The path proposed by the authors implies an evolution of institutions – but above all, choosing before others choose for us. AI will not wait for us. The question is no longer whether machines can decide better than we can; it is whether we will have the wisdom to organize ourselves so that they do so in our interest and in our service.
Comments by Thomas Merlet from the Foncsi team
The book by Éric Hazan and Olivier Sibony does not belong to the world of high-risk industries with which Foncsi is familiar. So why choose it?
First, as the authors themselves point out, this is not a book about AI, but about decision making.
Second, one of the book’s major strengths lies in its clarity and pedagogical approach. The authors start with the history of algorithms to arrive at a clear observation: where data are sufficient and objectives are clear, machines now make better decisions than humans.
The book is particularly useful for managers in high-risk industries, because it:
- sheds light on the areas in which AI can improve decision making performance;
- warns against blind delegation to automated systems;
- highlights the very real risk of loss of accountability or of an invisible shift of power toward algorithm designers;
- calls for the definition of rules for governance, accountability, and control;
- encourages reflection on the role of human judgment.
One of the book’s major contributions is its political analysis of automated decision making.
The authors raise essential questions: Who decides? According to which rules? With what democratic oversight? This reflection is fundamental, because automation is never politically neutral. It reflects choices about society, values, and priorities.
Finally, the breakdown into four illustrative scenarios (“Technocrat.IA,” “Technophob.IA,” “Nostalg.IA,” and “Democrat.IA”) can be transposed to the scale of an organization or a high-risk company. These representations can lead us to ask productive questions:
- Does the “Technocrat.IA” vision, synonymous with cost reduction, risk generating new systemic vulnerabilities?
- Does the “Technophob.IA” vision, synonymous with stagnation, risk causing us to miss out on decisive risk control tools?
- Does the “Nostalg.IA” vision, synonymous with a return to the human, risk leaving us overwhelmed by new competitors?
- Does the “Democrat.IA” vision, synonymous with extensive consultation, risk leading to decision making chaos? And, in any case, is it realistic?
Two examples illustrate the limits of the “Democrat.IA” vision:
- The ability to audit algorithms, as mentioned by the authors, is highly problematic when it comes to large neural networks (which raises major issues for the certification and control of AI systems in critical contexts).
- Current large language models (LLMs) have persuasive capabilities that exceed those of humans, which does not seem conducive to facilitating democratic debate.
The book by Éric Hazan and Olivier Sibony deserves credit for providing a solid conceptual framework for structuring strategic reflection, even though the approach is more forward looking than prescriptive.