AI in Internal Audit: What We Still Prefer Not to Admit
Truth-Tolerance: The Critical Layer AI Still Misses in Audit
A brief warning before we start: this is not a short article. It’s not long for the sake of being long, but it takes more than two paragraphs to say something meaningful about the future of internal audit. If you’ve chosen to stay, you probably belong to the small group of readers who don’t fear depth. The ones who appreciate arguments, not slogans. This is written for you.
And let me say this upfront because it tends to get misunderstood: I’m not anti-technology. I’m not the Luddite who washes his suit in the stream, banging it against rocks the morning after a day-long conference. I use technology. I teach it. I understand its value. But embracing technology is not the same as surrendering judgment, and that distinction is becoming increasingly important.
This morning, an article titled “How AI Is Shaping the Future of Internal Audit” appeared in my feed. Usually, I would scroll past it—my feed has been a rotating theatre of AI optimism and AI anxiety for months now—but this time I paused. And I paused because only weeks earlier, the profession had been discussing a very different story: the widely reported case in October involving Deloitte Australia, where AI contributed to a flawed government report and the firm returned part of its fee. I won’t re-explain the case; anyone who cares can look it up. The point is simply that an AI-assisted process failed under real-world conditions, and the profession felt the shock.
With that still in the air, I opened the internal audit article with a mix of curiosity and caution. It offered the usual promises—speed, accuracy, efficiency—and drew an interesting contrast: external audit is more eager to embrace AI, internal audit more hesitant. And to be fair, the article is honest about the structural reasons. Internal audit teams are smaller, more embedded, more constrained by stakeholders and security demands. External auditors operate with more independence and more pressure to demonstrate efficiency. None of this is wrong. But it remains incomplete.
Because internal audit’s hesitation is not simply structural. It is behavioural. Internal auditors do not challenge a client. They challenge their own system. Their work lands inside the hierarchy, not outside it. They operate where truth collides with tolerance, and where risks are inseparable from power, relationships and timing. In that environment, a machine that accelerates visibility is not always an asset. Sometimes it is a liability.
This is where the profession needs a new model—something sharper than the binary of “AI good” or “AI risky.” So allow me to propose what I call the Three-Layer AI Validation Framework for Internal Audit. Most conversations fixate on whether AI is technically accurate. That is the first layer, and it is the least interesting. The second layer is process validation: how the tool is embedded into review, oversight, and traceability. Important, but not decisive. The critical layer is the third: behavioural validation. Does the output respect the organisation’s truth-tolerance? Does it account for political impact? Does it understand narrative timing, or at least allow a human to assess it? AI can process data. It cannot judge whether the truth it surfaces is deliverable. Without this layer, deploying AI is not modern. It is negligent.
There is a deeper analogy here, and although it risks sounding abstract, it explains the resistance better than most operational charts. Jeremy Bentham’s Panopticon—the idea of a perfect, unblinking inspector—did not create compliance through force. It created compliance because people never knew when they were being observed. AI introduces a similar dynamic into organisations. It is tireless, dispassionate and incapable of discretion. It does not forget. It does not soften its gaze. It does not sense when a finding lands too sharply. Internal audit, by contrast, has always operated with judicious discretion: knowing when to look away, when to delay a truth by a week, when to choose a format that a strained system can absorb. AI does not understand mercy. That alone explains more hesitation than any budget constraint or process flow.
What is interesting—and rarely said aloud—is that the real opportunity in all this is not about replacing anything. It is about elevation. AI can and should take over the Audit of Numbers: the mechanical scanning, matching, verifying. But that only heightens the importance of the Audit of Narratives: understanding the stories, incentives, silences and emotional economies behind the numbers. If we get this right, the profession does not shrink. It expands. The auditor of tomorrow is not a technician. The auditor of tomorrow is a philosopher of organisational truth.
And we will need that philosopher, because organisations are already discovering that AI makes uncomfortable data unavoidable. Internal audit must become the function that interprets the human meaning of that data, not just its accuracy. Someone must tell the organisation not only what happened, but why it happened, and why the system tolerated it for as long as it did.
This is why the call-to-action cannot be polite. If you deploy AI without a behavioural oversight layer, you are not advanced. You are negligent. If audit committees continue to ask only how AI improves efficiency, they are asking the wrong question. The right one is: how will this tool affect the psychological safety and credibility of the system? And if AI vendors continue to sell insights without judgment, they are selling half a product. Start building tools that allow auditors to toggle between raw findings and politically sensitive framing. That is the future of responsible AI in governance.
I will end with this. We can make AI faster, smarter, and more integrated. But the true evolution will come only when we integrate the behavioural layer—when we understand that data lives inside systems shaped by incentives, silence and human limits. AI may accelerate the work, but only humans can interpret the truth that work reveals. The future of internal audit will not belong to those who adopt AI the fastest, but to those who understand the emotional, political and narrative consequences of using it.
If you’re still reading, you’re exactly the reader I wrote this for.