AI gets things wrong with confidence. So does your exec team.

There’s a fascinating similarity between a generative AI and an executive committee.

Both can produce answers that are smooth, structured, convincing. Both inspire trust through sheer confidence. And both can be spectacularly wrong—without anyone in the room raising a hand.

This isn’t a provocation. It’s an observation I’ve made for years—first as a CTO/GM making major technology calls, then as a mentalist on stage.

The mechanisms are the same. And that’s exactly why this is dangerous.

On stage, I can create a sense of obviousness. In a meeting, AI can create the same feeling: “It’s clear, therefore it’s true.” And when that clarity isn’t challenged, confidence turns into autopilot.

The four illusions AI and your brain share

1 - The authority illusion

A generative AI delivers outputs with confidence—even when they’re approximate or wrong. Not because it’s “lying,” but because its job is to generate something plausible, not to experience doubt.

And our brains routinely confuse confidence with competence. A certain tone feels more credible, regardless of truth.

A mentalist knows this well: confidence manufactures credibility.

In an exec team, it’s the same. The person who speaks with the most certainty steers the decision. Not necessarily the person who’s right. The one who voices doubt can lose influence—even when that doubt is the most valuable signal in the room.

2 - The coherence illusion

A well-structured answer feels correct. It’s a powerful shortcut: if it’s clear, it must be true.

AI excels at producing perfectly organized prose—intro, arguments, conclusion—while the content can be wrong end to end. The form is flawless. The foundation can be fragile.

In meetings, same dynamic. A smooth presentation, clean slides, a linear storyline: people lower their guard. We confuse polish with proof. We mistake narrative quality for hypothesis quality.

I’ve watched multi-million-dollar investments get approved on immaculate decks—without anyone really stress-testing the assumptions. Not out of negligence. Out of comfort: when it’s well written, doubt feels less necessary.

3 - The alignment illusion

AI is optimized to be helpful and agreeable. It naturally tends to produce an answer that fits your context and tone. It rarely challenges you by default.

That’s a feature… until an organization confuses “satisfying” with “robust.”

In exec teams, confirmation bias runs both ways: the leader looks for data that supports the vision, and the team produces analysis that matches the leader’s implicit expectations. Everyone agrees. No one verifies what should have been contradicted.

The danger isn’t error. The danger is comfortable unanimity.

4 - The control illusion

You think you’re driving the tool. But the way AI phrases an answer shapes your thinking: it frames the options, sets the vocabulary, defines what feels possible.

It doesn’t “decide.” It structures the mental space in which you will decide.

In meetings, the first frame—set by a leader, a consultant, a one-pager—does exactly the same thing. It doesn’t choose for you. But it draws the map. Then everyone debates… inside the map. And nobody notices.

THE MENTALIST’S EYE

When a biased human uses a tool optimized for plausibility to validate a decision that was already tilted, you don’t get augmented intelligence. You get augmented certainty. My job is to reintroduce useful doubt—before it becomes autopilot.

The real issue isn’t technological

Most AI talks focus on models, prompts, productivity, technical governance. Useful topics. But the most important issue sits elsewhere.

The real risk of AI in business isn’t that it gets things wrong.
It’s that it gets things wrong in the same ways we do. Layer its failure modes on top of our own biases and you amplify errors instead of correcting them.

AI doesn’t “believe” anything. It generates a plausible answer. The danger begins when we confuse plausibility with truth.

When a biased human runs a plausibility engine over a decision that was already framed, the result isn’t augmented intelligence. It’s augmented certainty.

This is exactly what mentalism makes visible. On stage, I show—in real time—how the audience’s mind works like an AI: fast, fluent, convincing… and sometimes completely wrong.

The moment a room realizes it just got trapped by its own cognitive fluency is the moment everything shifts. Suddenly we’re not discussing a concept. We’re discussing a lived experience.

The real problem isn’t error. It’s unchallenged confidence.

D.I.R.E. — Four reflexes to keep your hands on the wheel

I built a simple protocol from years in leadership and years on stage:

D — Decide what stays human.
Some decisions are non-delegable. Define them before you “plug in” AI, not after.

I — Interrogate every output.
Test it against reality. Hunt for the counterexample. Ask the AI to defend the opposite position, then critique its own answer.

R — Reject the prose.
Demand evidence, not eloquence. “It’s well written” has never been a truth test.

E — Establish guardrails.
Clear rules, explicit limits, governance that doesn’t depend on individual goodwill.

The 2026 question

In 2026, the question isn’t “Should we adopt AI?” Your teams already use it. Every day. Often without you knowing. The real question is: Who decides when AI is confidently wrong?

If no one in your organization can answer that, AI isn’t assisting you. It’s steering you.

And the mentalist in me can confirm this: when something steers you without you noticing, it’s no longer assistance. It’s influence.

YOUR LEVER

In 2026, AI shouldn’t be a blindfold—it should be a mirror.

The real difference isn’t adopting the tool. It’s a leader’s ability to stay in control of the decision lever—even in the face of algorithmic confidence.

If you want your teams to stop being steered by their own certainty, that’s exactly what I work on in my keynotes on AI and human decision-making.

References

For those who want to dig deeper, here are the scientific studies and reviews behind this article.

Huang, L., Yu, W., Ma, W., Zhong, W., Feng, Z., Wang, H., … & Liu, T. (2023). A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. arXiv:2311.05232. [Also published in ACM Transactions on Information Systems, 43(2), 2024.] [The go-to survey on LLM hallucinations: why AI generates content that sounds right but is factually wrong, and how to spot it.]

Sahoo, P., Meharia, P., Ghosh, A., Saha, S., Jain, V., & Chadha, A. (2024). Unveiling Hallucination in Text, Image, Video, and Audio Foundation Models: A Comprehensive Review. Findings of EMNLP 2024. [A cross-modal review of hallucinations: beyond text, AI models also "hallucinate" in image, video, and audio.]

Parasuraman, R., & Riley, V. (1997). Humans and Automation: Use, Misuse, Disuse, Abuse. Human Factors, 39(2), 230–253. [The seminal paper on human-machine dynamics: we over-rely on, underuse, and misuse automation depending on our trust levels and cognitive load.]

Skitka, L. J., Mosier, K. L., & Burdick, M. (1999). Does Automation Bias Decision-Making? International Journal of Human-Computer Studies, 51(5), 991–1006. [Evidence that automated decision aids distort our judgment: we follow their recommendations even when contradictory cues are right in front of us.]

Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., & Krishna, R. (2023). Explanations Can Reduce Overreliance on AI Systems During Decision-Making. Proc. ACM Hum.-Comput. Interact., 7(CSCW1), Article 129. [Proof that AI-generated explanations can curb overreliance — but only if they lower the cognitive effort needed to verify the AI's output.]

Henderson, E. L., Simons, D. J., & Barr, D. J. (2021). The Trajectory of Truth: A Longitudinal Study of the Illusory Truth Effect. Journal of Cognition, 4(1), 29. [A longitudinal study showing that mere repetition makes information feel more credible — a critical mechanism when dealing with AI's repetitive outputs.]

Pearson, J., Dror, I., Jayes, E., et al. (2026). Examining Human Reliance on Artificial Intelligence in Decision Making. Scientific Reports, 16, 5345. [Positive attitudes toward AI increase our dependence on its answers, even when they are wrong — measured through a real-vs-synthetic face discrimination task.]
