AI Leadership
Mar 2, 2026
No One in the C-Suite Wants to Admit They Don't Understand AI
A qualitative study of 35 senior leaders reveals that the biggest obstacle to AI adoption isn't technology — it's the fear of admitting what you don't know.
A qualitative study by The Positive Group interviewed 35 senior leaders from global organizations across financial services, consulting, aviation, consumer goods, and life sciences. What they found isn't a technology problem. Executives get stuck with AI because they don't know how to use it — and they don't want anyone to notice.
The Fear No One Discusses
The board asks "what are we doing with AI?" The CEO wants to give a solid answer. The CTO prepares a presentation on current pilots. Somewhere in that chain, someone approves something they don't fully understand.
A CTO at a financial services firm described it this way: "With shareholders, everything collapses into a single 'AI' label. It doesn't matter what it is or how it's used. The question is 'what are you doing with AI?' And when you try to go deeper, they lose interest."
At the executive committee level, the conversation becomes "if it has AI, we need to hurry up," with little discussion about how early the technology is or what risks it poses to the brand, customers, or intellectual property.
Frontline employees, on the other hand, turned out to be the most receptive. With them, there were clear conversations about what AI can and can't do. The higher up the hierarchy, the more pressure to project mastery and the less room to ask questions.
Executive FOMO
A senior executive at a global retail company put it in these terms: "The biggest challenge isn't resistance. It's leadership FOMO. Everyone wants an AI update before understanding the problem they're trying to solve."
The urgency doesn't come from a diagnosis — it comes from the fear of falling behind. In practice, that translates into approved budgets for pilots no one knows how to scale, activity metrics instead of impact metrics, and tools piling up without a strategy connecting them.
A senior vice president at a financial firm explained it this way: "Many AI applications won't have an immediate, clean return. A tool that helps draft clearer emails or reduces reconciliation time can save 10 or 30 minutes a day. That doesn't eliminate a position, but it compounds. And when leadership obsesses over short-term returns, people stop proposing ideas."
The pressure to demonstrate fast progress ends up slowing down real progress.
It's Not the Job — It's the Image
For an executive with a 20-year career, AI isn't an immediate job threat. No one is going to replace the CFO with a chatbot. But it does put executives in an uncomfortable position when they don't command the tool that everyone says will change the business.
The study found that anxiety was strongest among senior professionals whose authority was built on specialized knowledge. The typical reaction, according to a global consulting executive: "That's my job. You can automate that part, but not this one."
For decades, knowing more than everyone else was the currency of the C-suite. AI changes that. A junior analyst with a good prompt can produce in 20 minutes what used to take two weeks of senior work. The executive's job isn't at risk, but the foundation on which they built their credibility is being questioned. And when professional identity feels threatened, the reaction tends to be defensive, not curious.
What Companies That Actually Advance Are Doing
The study identified three patterns in organizations where adoption works. All three are cultural.
The first is normalizing "I don't know." A senior vice president of technology in the insurance sector described how the conversation shifted in his organization: "I took a 100-plus page board document and showed how I used ChatGPT to summarize it. Not because it was perfect, but to demonstrate that you don't need to be technical to get value. It's about asking the right questions. People stopped asking 'is this allowed?' and started asking 'could this help me with the decisions I'm making?'"
When a leader shows they're learning, it gives the rest of the organization permission to do the same.
The second is giving tools to non-technical teams. A technology leader in financial services described the approach: "We focused on functions that normally lag in AI adoption. Instead of assuming deep technical expertise was needed, we gave teams like HR, legal, and finance tools they could experiment with directly. The people closest to the work are the ones who best see where AI can help."
If adoption depends on the technology team being an intermediary, it will be slow. If teams can experiment on their own, they learn faster and generate use cases that the technical people wouldn't have seen.
The third is accepting that things will go wrong. A chief innovation officer at a global law firm put it this way: "We had to move fast, and from the start we were clear it wasn't going to be perfect. We said 'we're going to launch it, we're going to iterate, we're going to take feedback.' It's an 80-20 approach, not 100 percent. That made me nervous, because people in our world don't tolerate that. But at the speed we operate, if we wait to have everything figured out, we're not going anywhere."
In regulated industries, the alternative to tolerating imperfection is eternal pilots that generate activity without learning.
What the Data Says
A 2026 survey of global AI and data leaders found that 93 percent identified human factors as the main barrier to adoption. Not technology, not budget, not regulation.
The group with the most influence over AI adoption outcomes is the same group least willing to admit they need to learn. The organizations that manage to scale won't be the ones with the best technology — they'll be the ones with leaders comfortable saying "I don't know" in front of their team.