Executive Summary
AI can generate endless content, but it cannot verify itself or be accountable for what it produces. This article argues that the biggest near-term risk to organizations is not AI itself but AI theater: the performance of AI expertise without the rigor, verification, or outcomes that real transformation requires. A 2025 federal court ruling that excluded an expert’s testimony over fabricated, AI-generated citations shows how quickly unverified AI erodes credibility, wastes resources, and compounds stakeholder distrust. For leaders, the question is no longer whether AI can “make” things, but whether your organization has the judgment, verification discipline, and governance to direct it responsibly and withstand scrutiny.
Key Takeaways for Leaders
Treat AI as a force multiplier for direction, not a replacement for it. Use AI to expand the option space, but keep humans accountable for intent, tradeoffs, and the final call.
Demand problem framing before production. If your teams cannot clearly name the tension, the outcome, and the stakeholder risk, AI will simply accelerate noise with confidence.
Build constraints as a leadership discipline. Brand rules, ethical boundaries, data guardrails, and feedback loops are not implementation details. They are the operating system that prevents AI-generated output from becoming reputational and strategic drift.
What began as a court order has become a lesson every leader should read carefully: AI doesn’t fail quietly anymore. It fails publicly, and it takes trust with it.
In today’s market, anyone can claim AI expertise. There is no licensing body. No shared standard for validation. No universally accepted way to separate genuine capability from confident performance. Vocabulary has become currency. Prompts have become proof. And for leaders under pressure to “do something with AI,” confidence often substitutes for competence.
The result is what we’re now seeing, across sectors, in real time: AI theater – the performance of AI expertise without the rigor, verification, or outcomes required for real transformation.
- Good-looking proposals.
- Sophisticated language.
- A sense of inevitability.
And behind it, too often, there is very little actual expertise, leading to failed AI initiatives and wasted resources.
The Illusion of Expertise Is Now a Documented Failure Mode
At Saxum, we’ve been watching the rise of performative AI accelerate across organizations.
Visionary leaders understand the need to integrate modern technology, but they also understand that progress built on performance is fragile.
The difference between transformation and theater is now showing up in places that do not tolerate ambiguity: the courts.
In January 2025, a federal judge in Minnesota excluded the testimony of a Stanford AI expert after discovering that his sworn declaration contained fabricated academic citations generated by GPT-4o. The citations were not real. They were not verified. They were included in a legal filing submitted under penalty of perjury.
The court was unequivocal: even if the error was unintentional, submitting AI-generated false sources “shatters [the expert’s] credibility.”
The opinion went further, naming the deeper issue leaders should not ignore. AI itself was not the problem. The problem was the abdication of independent judgment: the quiet assumption that AI output could stand in for expertise, verification, and responsibility. The judge warned that when professionals rely on AI without rigorous validation, they degrade professional quality, waste institutional resources, and erode trust in systems that depend on accuracy.
We’ve seen over-promising and poor communication, but this wasn’t that. It was a credibility collapse in a high-stakes, public arena.
Why AI Theater Spreads So Easily
Performative AI thrives because it looks like leadership.
- It speaks the language of inevitability.
- It promises acceleration.
- It offers reassurance that complexity can be bypassed.
But underneath, it exploits three conditions leaders are operating within right now:
1. Scarcity of real expertise
Few people have studied the consequences of false certainty as thoroughly as Hart Brown, Saxum’s President of AI & Transformation.
Over the past two decades, his work has spanned national security, crisis response, enterprise risk, and AI-driven transformation across government, insurance, healthcare, financial services, and global enterprises. He has advised heads of state, boards, and C-suites, built AI-powered risk and governance systems, and authored statewide AI strategies focused on economic development, policy, and accountability.
On the state of real expertise in the United States, Hart puts it plainly:
“In the U.S., the number of people who can actually provide a real level of AI expertise is relatively small, less than 1%.”
That scarcity creates space for performance. When leaders don’t know how to evaluate AI expertise, confidence becomes persuasive.
2. Pressure to act
Boards ask questions. Regulators ask questions. Employees ask questions. Doing nothing feels irresponsible. In that environment, a polished, but unsupported, AI narrative can feel safer than admitting uncertainty.
3. The collapse of verification norms
AI outputs feel authoritative. They sound right. They cite sources. Demos look impressive. Deliverables are flying in. Without disciplined verification, it becomes easy, even for credentialed professionals, to mistake fast fluency for truth.
This erosion of verification did not begin with AI. It mirrors a broader cultural shift shaped by anonymous accounts, unverifiable social media narratives, and viral content optimized for attention rather than accuracy. In that environment, exaggeration spreads faster than evidence, and confidence is rewarded long before credibility is tested.
AI amplifies this dynamic. It produces content that looks finished, sounds informed, and travels quickly, often without clear provenance or accountability. When organizations adopt AI inside that same cultural current, theater scales faster than truth. And when these narratives finally draw close attention, they fall apart, taking organizational trust with them.
AI theater fills the gap between what leaders are expected to do and what the market is actually capable of delivering.
The Risk Isn’t Your Budget; It’s the Compounded Erosion of Confidence
Organizations burned by unverified AI deployments have lost more than budgets.
They have lost confidence.
- Confidence from boards that approved the investment.
- Confidence from employees asked to adopt tools that never delivered.
- Confidence from regulators and partners watching closely.
- Confidence from the public when missteps surface externally (GovTech, 2025).
In many cases, organizations try again. And then again. Each iteration leaves them further behind and leaves stakeholders more skeptical, less willing to trust transformation efforts of any kind.
This is how AI becomes noise instead of leverage.
And this is why leaders must now treat AI not as a technology decision, but as a credibility decision.
The Leadership Shift: From Performance to Proof
The next phase of AI adoption will not reward the loudest voices in the arena.
It will reward the most disciplined ones.
Leaders who navigate this moment well are making a subtle but critical shift:
They are no longer asking, “Who sounds like they understand AI?”
They are asking, “Who can withstand scrutiny?”
Hart Brown offers a practical filter that cuts through the charade:
- What is their actual background in the space, and what have they delivered?
- Do they have published work or peer recognition, or are they recycling others’ ideas?
- When they speak, do they explain real concepts or invent language to obscure gaps?
- Are they grounded in data and evidence, or building untested visions?
- Do they rely on fear and sensationalism, or do they discuss opportunities, costs, challenges, and limitations honestly?
These questions matter because AI transformation is not about being first.
It is about being right.
What Real AI Transformation Requires
Avoiding AI theater does not mean avoiding AI.
It means building transformation in a way that cannot be faked.
At Saxum, safeguarding against AI theater starts with rejecting shortcuts. Real transformation is not sparked by tools, prompts, or demonstrations. It is built through a system that forces clarity, demands evidence, and holds up under scrutiny. This is how we ensure AI serves institutions, not spectacle.
- Clarity: Before any technology is introduced, we pressure-test the problem itself. What decision, risk, or outcome actually matters? What assumptions are being made? What must be true for AI to create value rather than noise?
- Vision: We define what AI is allowed to change, what it must protect, and where it should never be applied. This prevents the common failure mode where technology outruns purpose.
- Momentum: AI theater thrives in pilots and prototypes that never mature. We design for adoption from the beginning. That means embedding AI into systems, workflows, and accountability structures that already matter to the organization.
- Influence: Unchecked AI initiatives fail quietly until they fail publicly. At Saxum, influence is built deliberately. Stakeholders are brought into a shared understanding of what AI is doing, why it is being used, and how risk is being managed.
- Adapt: AI environments change quickly. Regulations shift. Expectations harden. We design transformation with the assumption that today’s answer will be questioned tomorrow. Adaptation is built into governance, measurement, and decision-making so organizations can evolve without breaking trust.
This is not symbolic AI adoption. It is transformation built to hold up under scrutiny.
Why This Is a Partner Decision, Not a Vendor Decision
Vendors deliver outputs.
Transformation partners carry outcomes.
A true transformation partner shares risk, challenges assumptions, and insists on verification, even when it slows things down. They are willing to say “not yet” or “not this way.” They treat AI as infrastructure, not spectacle. They don’t push demos. They build systems.
This distinction matters because AI transformation exposes leaders personally. Reputational risk is not abstract. As the federal court made clear, credibility can collapse in a single filing when rigor is missing (Provinzino, 2025).
In that environment, the right partner becomes an embedded advantage.
No fear tactics.
No inflated claims.
No invented language to disguise uncertainty.
Just clear thinking, disciplined execution, and accountability.
The Real Outcome Leaders Should Seek
The outcome leaders should seek is not novelty.
It is confidence.
Confidence that:
- AI decisions are grounded in truth, not theatrics.
- Partners can explain what they are doing and why.
- Systems will hold up under scrutiny from boards, courts, regulators, and other stakeholders.
- Transformation strengthens trust among stakeholders rather than eroding it.
In a moment where courts are now openly warning professionals to verify AI-generated work, leaders cannot afford ambiguity.
A Final Question for Leaders
Before your next AI initiative, ask one question, out loud:
“If this work were scrutinized tomorrow, would we stand behind it?”
- If the answer depends on confidence, language, or assumptions, pause.
- If the answer rests on clarity, evidence, and accountable partnership, you’re on the right path.
That is the difference between AI theater and real transformation.
Provinzino, L. M. (2025). Order granting in part and denying in part plaintiffs’ motion to exclude expert testimony and denying defendant’s motion for leave to file an amended expert declaration (Kohls v. Ellison, Case No. 24-cv-3754). United States District Court, District of Minnesota. https://fingfx.thomsonreuters.com/gfx/legaldocs/lgpdjbnkjpo/Kohls%20v.%20Ellison%20-%20Provinzino%20order.pdf
GovTech. (2025). Judge blasts Stanford AI expert’s credibility over fake, AI-created sources. GovTech Insider. https://insider.govtech.com/california/news/judge-blasts-stanford-ai-experts-credibility-over-fake-ai-created-sources