Myth: Regulators are coming after your AI with specific technical requirements.
Reality: They’re coming after you for governance failures that AI just happens to amplify.
This distinction explains why so many enterprises are spending heavily on the wrong things.
Over the past year, I’ve talked to regulatory and compliance leaders at a dozen organizations. The conversation is remarkably consistent. They’re building AI governance programs that look sophisticated: model risk frameworks, fairness audits, documentation requirements. But when they actually talk to regulators, whether in an exam, an inquiry, or a pre-filing consultation, the questions are almost never about the AI itself.
The regulators ask:
- How do you know what your model is doing?
- Who decided this was acceptable to deploy?
- What happens when something goes wrong?
- How would you know it’s degraded if you’re not looking?
- Did you consider this particular risk?
- Can you actually explain the decision it made in that case?
These aren’t technical AI questions. They’re governance questions. They’re accountability questions. They sound like questions a board would ask about any high-risk business process. Because that’s what regulators think AI is: a high-risk business process, not a technical marvel.
Most organizations have gotten this backward. They’re building technical AI governance (fairness metrics, model documentation, validation frameworks) as though the regulatory risk were technical. It isn’t. The regulatory risk is organizational. Did you think about it? Did you decide it was acceptable? Can you prove it?
What Regulators Actually Care About
Start with what they don’t care about (or at least, not nearly as much as you think).
They don’t care whether you use logistic regression or a transformer model. They don’t care about your accuracy metrics. They don’t care whether you’ve achieved parity in your fairness score. They don’t even care, strictly speaking, whether your model is biased; they care whether you knew it might be biased and whether you decided that was acceptable.
This is crucial. Regulatory failure isn’t “we had a biased model.” It’s “we had a biased model and either we didn’t know it could happen, or we knew it could happen and deployed it anyway without disclosing it or managing it.”
Regulators work from an assumption: you have obligations to your customers, shareholders, or society. You make a decision to deploy AI. If that decision creates a risk of unfair treatment, of customer harm, of financial loss, of regulatory violation, you’re accountable for that decision. Not for the model being perfect, but for the decision being made with appropriate information and oversight.
This is why compliance teams often feel like AI governance is a moving target. They’re being asked to regulate decision-making, not technology. And you can’t write a regulation that says “make good decisions about AI” the way you can write a regulation that says “maintain a capital ratio of X.” So regulators fall back on the question: how do you know you’re making good decisions? Document your process. Show your thinking. Prove you considered the risks.
The compliance framework regulators actually want to see is the framework you’d use for any major business decision that carries significant risk.
Where Enterprises Overinvest (The Wrong Things)
I’ve watched organizations spend significant resources on governance activities that, if challenged by a regulator, would do almost nothing to defend their decision-making.
Building fairness metrics without any process for deciding whether the level of fairness is acceptable. You measure bias, but who decided “this level of disparity is okay, and here’s why”? Nobody. You just built a dashboard.
Creating model documentation templates that describe what the model does, but not why the business decided it was worth doing. A regulator looks at your documentation and asks “okay, but who approved deploying this knowing it could make this particular decision incorrectly 3% of the time?” You don’t have an answer because you never framed it as a decision.
Implementing validation frameworks that test whether the model works, but not whether it’s solving the right problem. You can prove the model is accurate. You can’t prove the business thought through the consequences.
Putting fairness audits in place after deployment and calling it governance. Audits are useful, but audits are reactive. They tell you what went wrong. They don’t tell regulators you thought it through before you took the risk.
These activities aren’t wrong. But they’re supporting infrastructure, not the core governance question. The core governance question is: did you make this decision deliberately, with appropriate oversight, knowing the risks, and documenting your reasoning?
What Regulators Actually Expect to See
When a regulator examines an AI program, the framework they’re using is decades old. It’s the same framework they use for credit decisions, lending practices, capital allocation. Does the organization have:
- Clear governance structure. Is there someone responsible for AI decisions? Not someone overseeing audits—someone who actually approved the deployment and is accountable for it. They’ll trace reporting relationships. They’ll ask how the decision got made.
- Risk assessment before deployment. Did you think about what could go wrong? Not “was the model tested” (that’s validation). But “did you consider whether this model could produce unfair outcomes, or incorrect decisions, or customer harm, and did you decide the benefit was worth the risk?” They’ll look for documentation of this thinking.
- Clear criteria for continuing operation. You’ve deployed the system. How do you know if it’s still acceptable? What thresholds, metrics, or observations would cause you to pull it? They’re asking: did you think about failure scenarios before you had to deal with them? Or are you just running it until someone complains?
- Evidence of oversight and control. Is someone actually monitoring this, or did you set up an automated system and forget about it? They’re looking for evidence that a human being with responsibility is actually paying attention.
- Ability to explain decisions. Pick a customer outcome. Tell us why that happened. Can you trace the decision through your process? Can you explain the model’s logic? Can you show that this outcome is consistent with how you use the model overall? This matters even if (especially if) the model is a black box.
This is not a technical framework. This is an organizational framework. It applies to loan decisions, hiring decisions, insurance underwriting, investment decisions, and now AI decisions. The fact that a machine is involved in the decision doesn’t change the fundamental regulatory expectation: you thought about it, you decided it was acceptable, you documented that thinking, and you’re overseeing it.
The Specific Compliance Gaps
Here’s where most organizations are vulnerable:
- Decision documentation. You can explain what the model does. You probably can’t explain why someone in your organization decided it was acceptable to deploy a model that makes this decision this way. Write that down. For every major AI system, there should be a document that says: “We considered deploying this model. It will make decisions about [X]. We know it could [create these risks]. We decided to deploy it because [business rationale]. We are accepting [specific consequences]. We will monitor for [these indicators]. If we see [these outcomes], we will [take these actions].” Most organizations don’t have this. Most regulators would expect it. A sketch of what such a record might look like, in structured form, follows this list.
- Ownership and delegation. The same person accountable for the decision should be accountable for ongoing oversight. If you delegate monitoring to a team, you’ve still got the accountability at the top. But that line has to be clear and documented. Regulators will ask “who is accountable for this system?” If the answer is a data science team, you’ve revealed that you don’t have governance. The team is technically supporting a system, but nobody with P&L accountability is responsible for it.
- Consideration of alternatives and harm. You deployed this AI system. Did you consider not deploying it? Did you consider less risky alternatives? This is especially critical if there’s an alternative (a simpler model, a human review, a rules-based system) that would have lower regulatory risk. You don’t have to choose the lowest-risk option, but you have to show you knew about it and weighed it.
- Threshold and escalation. When do you pull this system? Not theoretically—actually, what’s the threshold? “If accuracy drops below X” or “if we detect [pattern]” or “if we receive more than Y complaints about this.” Regulators know you won’t have a perfect answer, but they want to know you thought about what failure looks like rather than just discovering it after customers are harmed.
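To make the first and last of these gaps concrete, here is a minimal sketch of a deployment decision record expressed as a data structure. This is illustrative only, not a regulatory template: the Python representation, the field names, and every example value are assumptions on my part, and the same content could just as easily live in a memo or a form. What matters is that the scope, the rationale, the accepted risks, the monitored indicators with their thresholds and escalation actions, and a named accountable owner all exist before deployment.

```python
# Illustrative sketch of a deployment decision record. All field names and
# example values are hypothetical; the structure simply mirrors the record
# described above.
from dataclasses import dataclass
from datetime import date


@dataclass
class MonitoredIndicator:
    name: str       # e.g. "approval-rate disparity between segments"
    threshold: str  # the specific observation that triggers action
    action: str     # what happens when the threshold is crossed
    reviewer: str   # the named person who looks at it, and how often


@dataclass
class DeploymentDecisionRecord:
    system: str
    decision_scope: str                   # what the model decides, and about whom
    accountable_owner: str                # a named individual, not a team
    business_rationale: str               # why deploying was judged worth the risk
    known_risks: list[str]                # what could go wrong, considered up front
    accepted_consequences: list[str]      # what the organization is knowingly accepting
    alternatives_considered: list[str]    # simpler model, human review, rules, not deploying
    indicators: list[MonitoredIndicator]  # thresholds and escalation, defined before launch
    approved_on: date


# Hypothetical example, purely for illustration:
record = DeploymentDecisionRecord(
    system="credit-line-adjustment-model",
    decision_scope="Recommends credit line changes for existing retail customers",
    accountable_owner="Head of Retail Credit",
    business_rationale="Faster line reviews and reduced manual workload",
    known_risks=["disparate impact across customer segments", "errors on thin-file customers"],
    accepted_consequences=["a small share of adjustments will be wrong and reversed on appeal"],
    alternatives_considered=["existing rules-based review", "human-only review", "not deploying"],
    indicators=[
        MonitoredIndicator(
            name="approval-rate disparity between segments",
            threshold="disparity exceeds the level accepted at approval for two consecutive months",
            action="suspend automated adjustments and escalate to the accountable owner",
            reviewer="model risk analyst, monthly",
        ),
    ],
    approved_on=date(2025, 1, 15),
)
```

Kept in a repository or a GRC tool, a record like this is exactly the artifact a regulator is asking for when they ask who decided this was acceptable and why.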
What This Means for Compliance Teams
If you’re building an AI governance program, prioritize the governance questions first. Not because fairness metrics or model documentation don’t matter, but because they’re supporting infrastructure. The core governance question is: did the organization make this decision deliberately?
That means:
- Work with the decision-maker (whoever that is in your organization) to document the reasoning behind each major AI deployment before it happens, not after. What are we trying to accomplish? What’s the risk? Who decided this was worth it? This is accountability documentation, not technical documentation.
- Make sure oversight is actually assigned to someone with real authority. Not a committee reviewing outcomes, but someone who can actually make changes. Regulators will ask about the escalation path. If everything escalates to a committee that meets quarterly, you’ve just shown that you don’t have real oversight.
- Define what failure looks like. For each major system, identify: what indicators would suggest this system is no longer operating acceptably? Not “what data would we want to look at,” but “what specific observations would cause us to take it offline or reconfigure it?” And assign someone to actually look at those indicators regularly. A sketch of what that regular check might look like follows this list.
- Be transparent with regulators about what you don’t know and how you’re managing it. “We’ve deployed an AI system and we’re monitoring X, Y, and Z” is strong. “We’ve deployed an AI system and we’re monitoring what we can” is weak. If you say “we don’t fully understand how this system behaves in edge cases,” but you’ve mitigated it with human review or conservative thresholds, that’s defensible. If you’ve deployed it and you don’t understand what it does and nobody’s overseeing it, that’s not defensible.
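The “define what failure looks like” point lends itself to the same treatment. This is again a hedged sketch, not a prescribed implementation: the indicator names, thresholds, and numbers are invented, and the escalation here is a printed message standing in for whatever your real escalation path is. The point it illustrates is that the breach conditions and the resulting actions are written down before deployment, and someone is assigned to run the check on a schedule.

```python
# Illustrative only: a periodic check of failure indicators defined before
# deployment, run by (or on behalf of) the named reviewer. Metric names,
# thresholds, and values are hypothetical.
from dataclasses import dataclass


@dataclass
class IndicatorReading:
    name: str
    value: float
    threshold: float
    breached_when_above: bool  # direction matters: complaints breach when they rise, accuracy when it falls

    def breached(self) -> bool:
        if self.breached_when_above:
            return self.value > self.threshold
        return self.value < self.threshold


def review_indicators(readings: list[IndicatorReading]) -> list[str]:
    """Return the pre-agreed actions triggered in this review cycle."""
    actions = []
    for r in readings:
        if r.breached():
            # The response is decided in advance, not improvised after the breach.
            actions.append(
                f"{r.name} breached ({r.value} vs threshold {r.threshold}): "
                "suspend automated decisions and escalate to the accountable owner"
            )
    return actions


# Hypothetical weekly review with invented numbers:
this_week = [
    IndicatorReading("validation accuracy", value=0.91, threshold=0.93, breached_when_above=False),
    IndicatorReading("complaints about automated decisions", value=4, threshold=10, breached_when_above=True),
]
for action in review_indicators(this_week):
    print(action)
```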
The Regulatory Trend
Regulatory frameworks for AI are still emerging, and that is exactly why this matters now. When frameworks crystallize, organizations that have been building decision accountability from the start will have a much easier transition than organizations that have been building technical governance and hoping it satisfies the regulatory need.
The organizations building fairness audits and model documentation aren’t wrong. But they’re building infrastructure for a governance system they haven’t actually built yet. The core system of clear decision authority, deliberate deployment decisions, documented reasoning, and real oversight has to come first.
When a regulator asks about your AI program, they’re going to ask about governance first, and technology second. If you can’t explain who decided this was acceptable and why, all your fairness metrics and validation frameworks are supporting a house with no foundation.
That’s the compliance gap nobody’s talking about. You’re preparing for technical audits. Regulators are looking for organizational governance. These are different things.