There’s a 1979 IBM training memo that has become a permanent resident of the “AI Ethics” hall of fame. You’ve likely seen it in your LinkedIn feed or a tech-critique meme. It’s a grainy, monochrome slide that states:
“A computer can never be held accountable, therefore a computer must never make a management decision.”
In 1979, this statement felt like an impenetrable fortress of logic. Accountability, after all, is a human trait. It is a social contract tied to responsibility, consequences, and moral judgment. Machines could calculate and process data, but the burden of decision-making—and the weight of its consequences—belonged firmly to human beings.
Fast forward to 2026. Artificial Intelligence now writes reports, analyses financial markets, diagnoses diseases, drafts legal briefs, and recommends strategic decisions. The power of Generative AI has grown so dramatically that the old memo circulates not as a historical artefact, but as a warning.
Yet every time this image appears in my feed, I find myself pausing. Not because the statement is wrong, but because it assumes something that may not be entirely true.
It assumes that humans are actually accountable.
The Many Meanings of Accountability
Before we ask whether machines can ever be accountable, we must first understand what we mean by the word itself. In everyday conversation, accountability sounds simple, but in reality, it contains several distinct layers.
1. Answerability vs. Consequences
Accountability requires explanation. Someone must be able to ask: Why did you make this decision?
But explanation alone is not enough. True accountability also involves consequences. If a poor explanation leads to no meaningful repercussions, accountability becomes little more than a carefully worded press release.
2. Retrospective vs. Prospective Accountability
Most of the time, we look backward. After a failure, we search for the person responsible.
But genuine accountability works in the opposite direction. It is prospective. It exists before the decision is made because people know their actions will later be scrutinised.
The awareness of future judgment shapes present behaviour.
3. Internal vs. External Accountability
External accountability is supervision—rules, auditors, regulations, compliance frameworks.
Internal accountability is something deeper. It is the moral compass that guides behaviour even when nobody is watching.
A person who behaves ethically only when the auditor is in the room is not truly accountable. They are simply being monitored.
4. Individual vs. Systemic Accountability
Organisations often prefer to identify a single individual responsible for a failure.
But in complex systems—banks, governments, large corporations—mistakes are rarely the product of one person alone. They emerge from structures, incentives, and collective decisions.
And when everyone is responsible, we frequently discover that no one actually is.
The Myth of the Fully Accountable Human
The 1979 memo assumes that humans naturally embody accountability. Yet modern history suggests that this ideal often exists more in theory than in practice.
Consider the 2008 financial crisis. Millions of families lost homes and livelihoods as financial systems collapsed under the weight of risky derivatives and predatory lending practices.
Despite the enormous scale of the crisis, very few senior executives faced legal consequences. Institutions paid large fines—often funded indirectly by shareholders and taxpayers—while many executives walked away with generous severance packages.
In the United Kingdom, the Post Office Horizon scandal offers another sobering example. For years, the faulty Horizon accounting software, supplied by Fujitsu, falsely indicated financial discrepancies in local post offices. More than 700 sub-postmasters were accused of fraud and theft.
Many were prosecuted. Some went bankrupt. Others suffered immense psychological trauma.
Only decades later did investigations reveal that the problem lay in the software system itself. For many victims, the acknowledgement of wrongdoing arrived far too late.
The digital world offers its own version of diluted accountability. Social media platforms struggle to control waves of abuse directed at athletes, public figures, and ordinary individuals alike. After high-profile sporting events, thousands of abusive posts may appear within hours.
Yet the gap between anonymity, platform policies, and legal enforcement often ensures that consequences remain limited.
The uncomfortable truth is that humans are remarkably adept at avoiding accountability. We have egos, self-preservation instincts, and “selective” memories. We hide behind “institutional knowledge” and “collective responsibility” to ensure that when the hammer falls, it hits the floor, not us.
Could AI Actually Improve Accountability?
If we set aside sentiment and examine the structure of decision-making, Artificial Intelligence does offer certain advantages over humans.
1. The Infinite Audit Trail
Humans forget. They misremember meetings, reinterpret events, or claim they never received the email.
AI systems, by contrast, operate through data logs and traceable processes. Decisions can be analysed after the fact, reconstructed from records, and evaluated systematically.
In theory, this makes AI decisions more auditable than many human ones.
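In concrete terms, an audit trail of this kind is nothing exotic: an append-only record of every decision, its inputs, and the model that produced it. The following is a minimal illustrative sketch, not any real vendor's API; every class, field, and model name here is invented for the example.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionRecord:
    """One immutable entry in a hypothetical AI decision audit trail."""
    model_version: str   # which model made the decision
    inputs: dict         # the data it saw
    output: str          # what it decided
    rationale: str       # the stated reason, for later scrutiny
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

class AuditTrail:
    """Append-only log: records can be added and queried, never edited."""
    def __init__(self) -> None:
        self._records: list[DecisionRecord] = []

    def log(self, record: DecisionRecord) -> None:
        self._records.append(record)

    def reconstruct(self, model_version: str) -> list[dict]:
        """Return every decision a given model version made, oldest first."""
        return [asdict(r) for r in self._records
                if r.model_version == model_version]

# Illustrative use: log one decision, then reconstruct it for review.
trail = AuditTrail()
trail.log(DecisionRecord("credit-model-v2", {"income": 52000},
                         "approve", "income above threshold"))
history = trail.reconstruct("credit-model-v2")
```

Contrast this with a human committee: there is no `reconstruct` call for a hallway conversation or a misremembered meeting.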
2. Consistency Over Mood
Research has repeatedly shown that human judgment is affected by surprisingly trivial factors. Studies of judicial decisions, for instance, have suggested that rulings can vary with fatigue, stress, or even whether the judge has recently taken a lunch break.
AI does not become tired, irritable, or impatient. Its outputs are governed by models and data rather than emotional fluctuations.
3. No Ego to Protect
Humans sometimes distort information to protect careers or reputations. An AI system has no pride, no ambition, and no personal narrative to defend.
It does not lie to avoid embarrassment or shift blame to a subordinate.
4. Explicit Criteria
Human intuition often relies on implicit assumptions. AI systems, on the other hand, require explicit criteria. Their decision rules can be examined, debated, and revised.
In principle, this transparency could allow society to challenge and refine the values embedded within algorithms.
The Alignment Problem
Of course, these advantages do not resolve the most difficult question: whose values should AI follow?
An AI designed as an independent ethical agent might pursue objectives in ways that conflict with human priorities. Popular culture has long explored this fear in science fiction, where machines interpret instructions with unsettling literalness.
Alternatively, an AI treated purely as a tool simply amplifies the intentions—good or bad—of its operators.
This dilemma is often described as the alignment problem. Engineers attempt to design guardrails that guide AI systems toward socially acceptable behaviour. But these guardrails themselves reflect decisions made by small groups of designers, policymakers, and corporate leaders.
The risk is not that machines will suddenly become tyrants.
The risk is that human biases and priorities will quietly become embedded in code, shaping decisions at a massive scale.
A New Principle for the AI Era
The 1979 IBM memo was right about one thing: Computers don’t suffer. You can’t put an algorithm in prison, and it doesn’t feel the sting of social shame. Because there are no consequences for the machine, the machine cannot be the final seat of accountability.
However, the human “management decision” is often a black box of bias and self-interest.
At the same time, modern institutions demonstrate that human decision-making is not always the pinnacle of accountability either. Human systems can obscure responsibility through hierarchy, bureaucracy, and ambiguity.
Perhaps the new rule should be this:
An AI system is more auditable than a human, but to ensure accountability, every AI decision must have a “Named Human Liaison”: a specific person who shares in the decision’s outcomes, can demonstrate that they understood the AI’s reasoning, and held genuine power to override it.
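To make the rule concrete, here is a hypothetical sketch of what enforcing it might look like in code. All names and fields are invented for illustration; the point is only that a decision should not become final without a named, empowered human attached to it.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    decision: str    # what the AI proposes
    reasoning: str   # the explanation it offers

@dataclass
class LiaisonSignOff:
    liaison_name: str           # a named person, not a team or a role
    understood_reasoning: bool  # attests they followed the AI's logic
    had_override_power: bool    # could genuinely have reversed the decision

def finalise(rec: AIRecommendation, signoff: LiaisonSignOff) -> str:
    """A decision is final only when an accountable human stands behind it."""
    if not (signoff.understood_reasoning and signoff.had_override_power):
        raise ValueError(
            f"{signoff.liaison_name} cannot be accountable for a decision "
            "they did not understand or could not override."
        )
    return f"{rec.decision} (accountable: {signoff.liaison_name})"
```

A sign-off from someone who lacked the power to override, or who never understood the reasoning, is rejected outright, because a rubber stamp is exactly the diluted accountability this essay describes.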
We don’t need to choose between the flawed intuition of a human and the “cold” logic of a machine. We need a system where the AI provides the transparency, analysis, and consistency, while humans provide ethical judgment and accountability.
When the Lights Come On
Accountability ultimately reveals itself not in policy documents or technology architectures, but in moments of crisis.
When a system fails, when harm occurs, when difficult questions are asked—someone must stand forward and say: This decision happened under my watch.
The future will undoubtedly involve more intelligent machines, deeper automation, and increasingly sophisticated decision systems.
But the essence of accountability will remain human.
AI may analyse the world with extraordinary precision. It may detect patterns that escape human intuition. It may even guide decisions with remarkable efficiency.
Yet when the lights come on, society will still look for a human being willing to stand beside the decision—and accept its consequences.
Perhaps the real lesson of that old IBM memo is not that machines must never make decisions. It is that humans must never stop owning them.
