Why AI Adoption Stalls Even When the Tools Work
New research with 1,200 employees finds the biggest barrier to AI adoption isn't capability or training—it's six forms of psychological debt that compound the longer leaders ignore them.

The CEO has signed the enterprise license. Training is rolling out. Dashboards say usage is climbing. And yet the productivity story the board was promised is not landing. New research published this month by Guy Champniss—a visiting professor at IE Business School in Madrid and a founder and director of Meltwater Consulting—suggests the issue may not be the tool, the rollout, or the training. It may be a category of cost most adoption strategies do not measure.
Champniss surveyed more than 1,200 full-time employees across 10 sectors in the U.S. and UK. He measured how often people use AI, how complex the tasks are, and how often they avoid the tool on tasks where they know it would help. He then measured what he calls psychological debt—the accumulated motivational cost of using AI inside a workflow that was not designed with the human side in mind. The results are uncomfortable for any leader treating AI rollout as a logistics problem.
Employees who rarely used AI scored 60 on a 0–100 psychological debt scale; employees who used it multiple times a day scored 36. Employees using AI only for simple drafting scored 46; those using it for complex and strategic work scored 35. Career stage tracks the same way: people up to five years into their careers scored 54, while those with more than 20 years of experience scored 40. The pattern points in a single direction: psychological debt acts as a barrier. It drives avoidance and shallow use. When leaders systematically lower the debt, employees naturally use the tools for deeper, more strategic work.
Six forms of debt, all motivational
Champniss decomposes the load into six categories. Bundling them under “resistance” loses the diagnostic resolution that makes them addressable; each one points to a different design lever:
- Cognitive debt: The slow erosion of decision-making and pattern-recognition skills as work gets offloaded to a model. (Fix: Redesign the workflow)
- Autonomy debt: The sense that the AI, not the employee, is now choosing how the work happens. (Fix: Clarify who decides)
- Competency debt: The dent in self-perceived skill that comes from watching a model finish in seconds what used to take a day. (Fix: Adjust training framing)
- Relatedness debt: The loss of the social contact that work used to require, because a model never argues, tires, or pushes back. (Fix: Rethink team architecture)
- Credibility debt: The worry that being seen using AI lowers your standing, even among colleagues who are doing the same. (Fix: Elevate cultural visibility)
- Identity debt: The deepest of the six—the sense that doing the work this way is no longer doing the job at all, particularly in roles whose distinctiveness is the work itself. (Fix: Redefine the role)
What the leading adopters are doing differently
Champniss documents a handful of companies that have built design responses to specific debts.
J.P. Morgan positions its AI tools as insight providers rather than decision-makers. The model produces material the employee then has to argue with, fit into a hypothesis, and defend. The friction is the point: it keeps the higher-order cognitive work in human hands.
ING’s AI Principles in Practice program requires product teams to document how human judgment is preserved before any model ships. The bank also publishes plain-language “nutrition labels” describing where each model’s data comes from, what its limits are, and where it is known to fail. Employees are not handed a verdict about whether the tool helps; they are given the inputs to decide.
Microsoft’s Copilot Champs program runs as a peer-to-peer community rather than a top-down rollout. Employees explore where the tool fits their specific work in conversation with trusted peers. The structure protects against competency debt by helping people keep sight of their own distinct skills rather than feel evaluated against the tool.
To counter relatedness debt, P&G convenes cross-functional teams to review innovation outputs from AI chatbots together. When teams interpret those outputs collectively, performance improves and normally siloed roles become more collaborative.
Klarna’s Kiki assistant, introduced in June 2023 and used by 90 percent of employees within a year (answering more than 250,000 questions), was treated as a cultural artifact rather than a productivity tool. The CEO’s framing, “we push everyone to test, test, test, and explore,” made visible AI use the norm rather than a behavior that invites suspicion. That collapses credibility debt by removing the social cost.
Philips, working with clinicians who had pushed back hard on AI in care settings, redesigned the framing rather than the tool. AI was positioned as identity-affirming: it sharpens diagnostic precision, removes logistical bottlenecks, and surfaces expertise in multidisciplinary review. The clinician role becomes more visible, not less.
The pattern across all six is that the work redesign is the intervention. The tool is constant.
What this changes for leaders
The temptation, reading any list like this, is to treat it as a checklist. The data suggests that would miss the point. Psychological debt does not respond to a single program; it responds to the architecture of how the work is asked of people. When companies push AI hard while leaving roles, review structures, and team norms unchanged, employees fill the gap with their own interpretation—and the path of least resistance is to assume the AI is the replacement.
The framing decision at the very start of an AI rollout (tool or colleague) shapes which of these debts compounds first. The labeling choice examined in “Putting AI on the Org Chart” is itself an architectural one: it sets expectations about accountability, oversight, and what the human role becomes. Champniss’s findings extend that argument across motivation, competence, and identity. The framing is the workflow.
The leaders making this work are not running better training programs. They are doing the harder thing: redesigning the work so that the human role inside an AI-supported process is still recognizably a job worth doing. The most important AI capability is judgment—and judgment is what the six forms of psychological debt all corrode when left unaddressed. As a practical first step, leaders should take just one common workflow and audit it against these six debts before rolling out the next tool.
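One way to make that audit concrete is a simple scoring rubric. The sketch below is a hypothetical illustration, not Champniss’s instrument: the six debt names come from the research, but the 0–10 ratings, the aggregation into a 0–100 composite, the flag threshold, and the function name `audit_workflow` are all assumptions made for the example.

```python
# Hypothetical sketch: a minimal rubric for auditing one workflow against
# the six psychological debts. The six categories come from Champniss's
# research; the ratings, weighting, and threshold are assumptions.

DEBTS = [
    "cognitive",    # erosion of decision-making and pattern recognition
    "autonomy",     # who decides how the work happens
    "competency",   # dent in self-perceived skill
    "relatedness",  # loss of the social contact work used to require
    "credibility",  # perceived standing cost of visible AI use
    "identity",     # whether the work still feels like the job
]

def audit_workflow(ratings):
    """Aggregate per-debt team ratings (0-10) into a 0-100 composite.

    Higher means more accumulated debt; any category rated 7 or above
    is flagged as the place to start redesigning the work.
    """
    missing = [d for d in DEBTS if d not in ratings]
    if missing:
        raise ValueError(f"missing ratings for: {missing}")
    composite = round(sum(ratings[d] for d in DEBTS) / len(DEBTS) * 10)
    flagged = [d for d in DEBTS if ratings[d] >= 7]
    return {"composite": composite, "flagged": flagged}

# Example: a drafting workflow where autonomy and identity run hot.
print(audit_workflow({
    "cognitive": 4, "autonomy": 8, "competency": 5,
    "relatedness": 3, "credibility": 2, "identity": 7,
}))
# -> {'composite': 48, 'flagged': ['autonomy', 'identity']}
```

Run on one workflow at a time, the output gives a team a shared starting point: a rough composite and the specific debts to redesign against first.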
. . .
C. Everett Koop, the former U.S. surgeon general, said that drugs do not work in patients who do not take them. The same logic applies here. The AI advantage your competitors cannot copy is not the model selection or the prompt library. It is the part of the rollout that takes the human cost seriously enough to design around it.