There is a comforting fiction in enterprise AI adoption: that the problem is skills. If we just train people, they’ll use the tools. If we just give them access, adoption will follow. So organizations run AI literacy programs, check the “upskilled” box, and wait for transformation to arrive. It doesn’t. Because training creates awareness, not behavior change. And behavior change requires something most organizations aren’t willing to do: redesign how work actually gets done.

This is the Culture layer — talent, change readiness, and organizational honesty. It’s the layer where the gap between what organizations say and what they do becomes most visible.

The training-to-nowhere pipeline

The pattern is depressingly consistent: employees attend a two-hour workshop on prompt engineering, go back to their desks, and do their jobs exactly as they did before. The training happened; the work never changed.

A Big Four consulting firm gave all consultants access to an internal AI tool for research synthesis. After six months, 80% of queries were basic factual lookups: the equivalent of using a Ferrari to drive to the mailbox. The tool could synthesize 50-page reports, but nobody was trained on those capabilities, because the rollout focused on “how to log in” rather than “how this changes what your Tuesday looks like.”

Then there’s readiness theater. Leadership asks “are we ready for AI?” and gets optimistic answers, because nobody wants to slow down the CEO’s vision. McKinsey’s research found that while 88% of organizations report using AI, only 39% can point to measurable EBIT impact. That 49-point gap represents organizations that told leadership “we’re doing AI” but couldn’t prove it was working, and nobody asked hard enough to find out.

Redesign the job, not just the toolkit

Organizations that actually transform don’t just give people AI tools — they redefine what good performance looks like. The performance review, the workflow, the daily standup, the definition of “done” — all of it shifts.

Klarna eliminated reliance on Salesforce and Workday, replacing significant portions with AI. The key: they simultaneously restructured teams, eliminated middle-management layers, and redefined success metrics. Customer service agents weren’t just given an AI assistant — their role was redefined from “resolve this ticket” to “handle the 15% of cases AI can’t.”

GitLab’s radically transparent culture means AI readiness gaps get surfaced immediately. When their AI code review tool showed lower accuracy on proprietary Ruby code, the team publicly documented the limitation, proposed a timeline to fix it, and set explicit criteria for when it would be ready. No political cover-up, no inflated metrics. Leaders were rewarded for identifying gaps, not punished for admitting them.

The honest questions

Has your organization redesigned any job role or workflow around AI — not just added AI to the existing one? Can you describe an AI initiative that didn’t work, and what was learned? What percentage of employees with AI tool access actually use the tools weekly? And most diagnostically: is “we’re not ready for this” a safe sentence to say in a meeting? If it isn’t, you don’t have a training problem. You have a culture problem.