AI Ethics · March 23, 2026 · 4 min read

Every Safety Law Was Written in Blood. AI Governance Won't Be Different.

Angelo Pallanca
Digital Transformation & AI Governance

There is a phrase engineers repeat like a prayer in every safety-critical industry:

"Regulations are written in blood."

It means exactly what it sounds like. The rules that protect you today exist because someone died yesterday. Not because a committee was visionary. Not because a company chose ethics over profit. Because the body count got high enough that ignoring the problem became politically impossible.

The Pattern

In July 1976, a chemical reactor at the ICMESA plant near Seveso, Italy, released a cloud of dioxin over the surrounding countryside. Children developed chloracne. Livestock died. The cleanup took years.

The European Community responded with the Seveso Directive: a framework for preventing industrial accidents that is still in force today, updated three times.

In March 1979, the Three Mile Island reactor in Pennsylvania partially melted down. No one died directly, but the political fallout rewrote the rules of nuclear energy regulation in the United States and effectively killed the American nuclear industry for three decades.

In October 2018 and March 2019, two Boeing 737 MAX aircraft fell out of the sky, killing 346 people. Investigators found that Boeing had designed an automated flight-control system called MCAS, then downplayed it to regulators and hid it from pilots.

The FAA, captured by decades of industry self-certification, had delegated safety oversight to the very company it was supposed to regulate.

The script is always the same: industry innovates, regulation lags, someone pays the price, then the rules get rewritten.


AI Is Following the Script

We are in the second act of this story. The part where everyone knows the risks but no one has enough political will to act.

The numbers are not subtle.

AI-related incidents rose roughly 50% year over year between 2022 and 2024, and by October 2025 the count had already surpassed the full-year 2024 total.

A Waymo autonomous vehicle struck a child near an elementary school in Santa Monica in January 2026. A teenager's parents sued OpenAI after ChatGPT allegedly encouraged their son to take his own life. The Warsaw Stock Exchange had to halt all trading for an hour after automated systems triggered a flash crash.

These are not edge cases. These are the early chapters.

Meanwhile, the regulatory apparatus meant to prevent this is running on delay. The European Commission missed its own February 2 deadline to provide guidance on Article 6 of the EU AI Act, the section that defines what counts as high-risk AI.

The two standardization bodies tasked with writing technical standards missed their fall 2025 deadline. They now aim for late 2026.

The result: companies are legally required to comply with rules that have not yet been written.


The Governance Gap Is the Real Risk

Here is what makes this dangerous. Not as a policy debate, but as a business reality.

73% of enterprises have deployed AI in production. Only 7% have real-time governance monitoring what those systems actually do.

That is a 66-point gap between deployment and oversight.

38% of companies that deployed AI without governance frameworks report regretting that decision. Not in a survey about ethics. In post-incident reviews, after something went wrong.

The Boeing parallel is instructive. Boeing did not lack engineering talent. It lacked a culture where safety concerns could override commercial pressure. The FAA did not lack authority. It had outsourced that authority to the entity it was supposed to oversee.

When the MCAS system malfunctioned, there was no independent layer to catch it.

Most enterprises deploying AI today are in the same position. The AI system is live. The business case is approved. The governance layer is a PDF somewhere on SharePoint.


What This Means for Your Business

If you are waiting for regulation to tell you what governance looks like, you are waiting for the blood those rules will be written in.

The EU AI Act will eventually produce its guidelines. National authorities will eventually appoint their enforcers. But "eventually" is not a compliance strategy.

The companies that will navigate the next five years without a catastrophic AI incident are not the ones with the best models. They are the ones that built governance into the architecture before it was required by law.

Seveso gave us the Seveso Directive. Three Mile Island gave us modern nuclear regulation. Boeing MAX gave us a congressional overhaul of FAA certification.

AI governance will get its defining incident. The only question is whether your organization will be the cautionary tale, or the one that was already prepared.

Want to discuss this further?

Book a discovery call