I watched it happen in three different organizations within a year. Each one had done the hard work—hired the right talent, built capable systems, deployed models that worked. Then came the moment when someone in the boardroom asked a question that sounded simple: “Who approved this?”
The room went quiet. Not because they didn’t have an answer. They had too many answers, all contradicting each other.
In one case, the answer was the VP of Analytics. In another, Product. In a third, it was “well, the data science team built it, and operations deployed it, so…” That trailing off matters. It’s the sound of governance architecture breaking under the weight of something genuinely new.
This isn’t a technology problem. It’s an organizational design problem. And most enterprises haven’t yet realized that their governance structures—the things they’ve spent decades perfecting for traditional software, for regulated processes, for operational risk—are fundamentally misaligned with how AI actually works in practice.
Where Traditional Structures Break
Traditional governance works because it assumes clear ownership boundaries. The application owner is responsible. The security team reviews. Compliance signs off. Audit checks the box. There’s a chain of custody.
AI breaks this. Not because AI is magic, but because it distributes responsibility across domains that don’t typically talk to each other. A model’s behavior depends on data quality (traditionally data engineering’s problem), training methodology (data science), business logic interpretation (product), deployment infrastructure (operations), and ongoing performance monitoring (analytics or sometimes risk management).
When something goes wrong—a model drifts, produces unfair outcomes, or makes a costly decision—the question “who owns this?” becomes genuinely difficult to answer. Was it the data scientists who built it? The engineering team that deployed it? The business stakeholders who set the success metrics? The person who defined what “fair” means?
I’ve heard CFOs push back on AI initiatives not because they doubt the technology works, but because they can’t see the accountability chain. That’s not caution. That’s competence. They’re asking the right question, just about the wrong structure.
What Governance Actually Needs to Answer
Real governance has to answer three things:
First: Who can approve the deployment decision? Not who built it—who actually says “yes, this goes to production.” The temptation is to assign this to the most senior technical person in the room. That’s backward. This decision is business risk, not technical risk. A model can be technically sound and still a bad business decision. The person who signs off on deployment needs to understand what it does, what can go wrong, and what the costs are. They need to be positioned in the organization so that those costs fall on them if they materialize.
Second: Who monitors for the specific failures that matter? Traditional systems have ops teams watching for downtime. But AI systems that are running perfectly fine technically can still be producing systematically biased outputs, or slowly drifting away from the decision quality they had on day one. You need someone looking for those failures. The person has to understand what they’re looking for. And they have to have the authority to pull the cord if they find it.
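To make that concrete, here’s a minimal sketch of what such a monitor might check, assuming the production system logs model scores (as probabilities), the decisions made on them, and a group attribute. The metrics—a population stability index for drift, an approval-rate gap across groups—and the thresholds are illustrative assumptions, not a standard any particular platform enforces.

```python
# A minimal sketch, not a reference implementation. Assumes the system logs
# model scores in [0, 1], binary decisions, and a group label for each case.
import numpy as np

def population_stability_index(reference, recent, bins=10):
    """Rough drift signal: compare the recent score distribution to a
    reference window. Values above ~0.25 are a common rule-of-thumb trigger."""
    edges = np.linspace(0.0, 1.0, bins + 1)
    ref = np.histogram(reference, edges)[0] / len(reference)
    rec = np.histogram(recent, edges)[0] / len(recent)
    ref, rec = np.clip(ref, 1e-6, None), np.clip(rec, 1e-6, None)
    return float(np.sum((rec - ref) * np.log(rec / ref)))

def approval_rate_gap(decisions, groups):
    """Largest difference in approval rate between any two groups."""
    rates = [decisions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))

# Hypothetical thresholds the accountable owner would sign off on.
PSI_LIMIT, GAP_LIMIT = 0.25, 0.10

def needs_escalation(ref_scores, recent_scores, decisions, groups):
    return (population_stability_index(ref_scores, recent_scores) > PSI_LIMIT
            or approval_rate_gap(decisions, groups) > GAP_LIMIT)
```

The specific metrics matter less than the organizational fact behind them: someone owns the thresholds, reads the output, and has the authority to stop the system when it trips.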
Third: Who decides what the model is actually supposed to optimize for? This is the one that trips people up. Data scientists are trained to optimize for mathematical objectives—accuracy, AUC, F1 scores. But a business decision that’s technically accurate can still be wrong. A lending model might predict default probability accurately but produce disparate impact. A hiring model might predict job tenure accurately but systematically screen out qualified candidates from certain groups. The technical metrics don’t capture what actually matters.
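A toy illustration of that gap, with hypothetical numbers: the same set of predictions scored once on a technical metric (accuracy) and once on a business one (a disparate impact ratio). The 0.8 cutoff below is the common four-fifths rule of thumb; what actually applies to your model is a question for counsel and the business owner, not the data science team.

```python
# Toy example with assumed names. y_true and y_pred are 0/1 arrays,
# groups is an array of group labels; 0.8 is the four-fifths rule of thumb.
import numpy as np

def accuracy(y_true, y_pred):
    return float((y_true == y_pred).mean())

def disparate_impact_ratio(y_pred, groups, protected, reference):
    """Approval rate of the protected group divided by the reference group's."""
    return float(y_pred[groups == protected].mean() / y_pred[groups == reference].mean())

# acc = accuracy(y_true, y_pred)                          # e.g. 0.93 -- looks great
# di  = disparate_impact_ratio(y_pred, groups, "B", "A")  # e.g. 0.61 -- fails the 0.8 rule
```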
Getting this wrong creates a specific kind of organizational failure: governance theater. You’ll put in a review process. You’ll create a checklist. You’ll convene AI governance committees. And then you’ll still have decision-making happening in the gaps—skipped reviews, reinterpreted policies, informal approvals. This happens not because people are cutting corners but because the formal structure doesn’t actually address the real decision point.
The Three Gaps Most Organizations Face
In the organizations I’ve watched go through this, three patterns appear consistently.
The first is accountability without authority. Risk and compliance teams are asked to govern AI but don’t have the budget authority, the technical expertise, or the position in the decision flow to actually prevent bad deployments. They review after decisions are made. They’re auditors, not governors.
The second is speed pressure meeting governance design. You’ll see this as “we need to govern responsibly, but we can’t slow down.” So you design governance that theoretically makes sense but requires five review gates, each with different stakeholders, none of whom are empowered to make the final call. Then the organization learns to route around it. You end up with faster, less governed decision-making, not slower, more governed decision-making.
The third is confusing process with governance. You’ll see organizations build elaborate approval workflows for AI—more elaborate than they have for traditional software—thinking that more process equals better governance. It doesn’t. Better governance is clearer accountability, better information to the person making the decision, and real authority to say no.
What Actually Works
The organizations that get this right share something: they’ve explicitly designed governance around how decisions actually get made, not around what they wish would happen.
They assign clear approval authority—often to a business owner, not a technical person—with explicit responsibility for the downstream impact if the decision goes wrong. They build monitoring into the system itself, not as an afterthought, with someone empowered to act on what’s learned. They translate business requirements into the actual constraints that matter for model behavior, and make those explicit at training time, not as a risk to be managed after deployment.
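One way to read “explicit at training time” is sketched below: a logistic regression whose loss adds a penalty on the gap in predicted approval rates between two groups. The penalty weight is exactly the kind of knob the business owner, not the data scientist, should set. Everything here—feature names, groups, the weight itself—is an assumption for illustration, not a production recipe.

```python
# Minimal sketch of a constraint expressed in the training objective itself:
# standard log-loss plus a penalty on the squared gap in mean predicted
# approval rates between the protected group and everyone else.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_with_parity_penalty(X, y, group, lam=5.0, lr=0.1, steps=2000):
    """group: boolean array marking the protected group; lam trades accuracy for parity."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        p = sigmoid(X @ w)
        grad_bce = X.T @ (p - y) / n                       # standard log-loss gradient
        gap = p[group].mean() - p[~group].mean()           # difference in mean predicted rates
        s = p * (1 - p)                                    # sigmoid derivative
        grad_gap = (X[group].T @ s[group]) / group.sum() \
                 - (X[~group].T @ s[~group]) / (~group).sum()
        w -= lr * (grad_bce + lam * 2 * gap * grad_gap)    # gradient of lam * gap**2
    return w
```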
None of this requires new technology. It requires thinking through organizational design with the same rigor you’d apply to any other high-risk process.
The moment your AI strategy became a governance problem wasn’t when you deployed your first model. It was when you assumed your existing governance structure would work for something fundamentally distributed across your organization. Most enterprises haven’t yet recognized this moment. When you see the room go quiet at the question “who approved this?”—that’s when you know you’re there.
The good news: this is solvable. It’s just not a technical problem.