
AI change management is a lie. Build for humans instead.

Published March 23, 2026

This is part of our AI Implementation Training series.

I have a simple test for whether an AI implementation was done well. Does the team need a change management programme to use it?

If yes, the implementation was done poorly. Full stop.

AI change management is a concept that shouldn’t need to exist. It exists because consultancies and implementation partners build things without understanding how real people do real work, then sell you a second engagement to convince those people to use the thing that wasn’t built for them.

That’s a racket. I’m going to explain why.

Why change management is a band-aid

Change management is a legitimate discipline. It has its place. When a company restructures, merges, or shifts its business model, you need to bring people along. That’s real.

But AI tools? AI systems that are supposed to make someone’s job easier? If you need a formal programme to get people to use something that’s supposed to help them, something went wrong upstream.

Think about the last time you adopted a new tool willingly. Maybe you started using a better notes app, or a calendar tool, or a new messaging platform. Did you need a change management programme? Did someone have to schedule six weeks of coaching to get you to try it?

No. You tried it. It was better than what you had. You kept using it. That’s how adoption works when the tool is built right.

AI change management has become an industry of its own. There are consultancies that specialise in it. Frameworks with acronyms. Certification programmes. All built on the premise that humans are naturally resistant to AI and need to be managed through the transition.

I think that premise is wrong. Humans aren’t resistant to AI. They’re resistant to bad tools that make their jobs harder.

The design problem hiding behind “resistance”

Every time I’ve dug into a so-called resistance problem, I’ve found the same thing. The AI tool doesn’t fit the workflow.

A law firm built an AI document review system. Adoption was low. Management said “the lawyers are resistant to technology.” I looked at the system. To use it, a lawyer had to export documents from the case management system, upload them to a separate platform, run the AI, download the results, then re-import them. Five extra steps. Of course adoption was low. That’s not resistance. That’s rational behaviour.

A recruitment agency bought an AI screening tool. Recruiters weren’t using it. “They’re set in their ways.” Actually, the tool only worked with structured data and the recruiters’ CVs were all in different formats. The tool needed 20 minutes of manual formatting before it could do anything useful. Again, not resistance. Basic cost-benefit analysis.

A consulting firm deployed an AI knowledge base. Low usage. “Culture problem.” The AI kept surfacing irrelevant results because it was trained on outdated internal documents that nobody had maintained. People tried it, got bad results, and stopped. That’s not a change management problem. That’s a quality problem.

In every case, the company was about to spend money on change management. In every case, what they actually needed was better design.

What good AI adoption looks like

Good adoption doesn’t feel like adoption. It feels like relief.

The property management company I worked with didn’t need any convincing. We built a system that reads incoming tenant emails, classifies them by type and urgency, checks lease terms, and drafts responses. The property manager opens their inbox and there’s a draft sitting there, ready to review and send. Their email response time dropped from hours to minutes.

Nobody had to “manage” that change. The manager tried it once, said “oh, this is good,” and never went back. That’s the standard.
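To make the shape of that system concrete, here is a minimal sketch of the triage step. Everything in it is illustrative: a real build would use an LLM or trained classifier rather than keyword rules, and would check actual lease terms. The point it demonstrates is the design principle, not the classifier: the output is a reply draft that lands in the manager's existing inbox, not in a separate tool.

```python
from dataclasses import dataclass

# Illustrative keyword rules only -- a production system would classify
# with an LLM or trained model, not hand-written word lists.
CATEGORIES = {
    "maintenance": ["leak", "broken", "repair", "heating"],
    "payment": ["rent", "invoice", "payment", "arrears"],
    "lease": ["renewal", "notice", "terminate", "lease"],
}
URGENT_WORDS = ["urgent", "emergency", "flooding", "no heat"]

@dataclass
class Triage:
    category: str
    urgent: bool
    draft: str

def triage_email(subject: str, body: str) -> Triage:
    """Classify a tenant email by type and urgency, then draft a reply."""
    text = f"{subject} {body}".lower()
    category = next(
        (cat for cat, words in CATEGORIES.items()
         if any(w in text for w in words)),
        "general",
    )
    urgent = any(w in text for w in URGENT_WORDS)
    # The draft is written as a reply in the manager's own inbox --
    # review and send, no new tool to open. That is the design point.
    draft = (
        "Thanks for getting in touch. We've logged this as a "
        f"{'priority ' if urgent else ''}{category} request "
        "and will follow up shortly."
    )
    return Triage(category, urgent, draft)
```

The interesting part is the last step: `draft` is placed where the manager already looks, so "adoption" is just reading email.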

A coaching business with 200+ students was drowning in repeat questions. We built a knowledge assistant trained on the coach’s entire content library. Students ask questions and get instant, accurate answers. The coach’s inbox cleared out. Support tickets dropped by 60%.

No change management workshops. No adoption champions. No resistance to overcome. People used it because it solved a real problem they actually had.

If this sounds like your business, let's talk about building it.

Designing for humans, not managing them

The difference is in the approach. Most AI implementations follow this sequence:

  1. Decide to implement AI
  2. Choose or build a tool
  3. Deploy it
  4. Discover people aren’t using it
  5. Start change management

Here’s the sequence that works:

  1. Study how people actually work
  2. Find the pain points worth solving
  3. Design a solution around existing behaviour
  4. Build it into existing tools
  5. Show people the new thing (15 minutes)

Step five is the only one that resembles “training,” and it’s a brief walkthrough, not a programme. I covered this in more detail in the adoption failure piece, but the principle is the same. Start with the human. Build for the human. The adoption takes care of itself.

The “meets you where you are” principle

Here’s a rule I follow for every build. The AI must meet the user exactly where they are. Same tool. Same workflow. Same interface. No new logins. No new tabs. No new habits required.

If the user lives in Gmail, the AI is in Gmail. If they work in Slack, it’s a Slack bot. If they run their life from a spreadsheet, the AI runs inside that spreadsheet.

The moment you ask someone to go somewhere new to get the benefit, you’ve introduced friction. Friction kills adoption. Every single time.

This is also why off-the-shelf AI tools often fail in enterprise settings. They require people to change how they work to match the tool. That’s backwards. The tool should match how people work.

What to do if you’re stuck in the change management loop

If you’re currently in this situation, here’s how to get out.

First, stop the change management programme. Seriously. It’s not working because it can’t work. You’re trying to convince people to use something that doesn’t fit their day.

Second, go sit with the people who aren’t using the tool. Not their managers. Them. Ask them why. Listen without defending. They’ll tell you exactly what’s wrong. The answers are almost always one of three things: too many steps, bad output quality, or it doesn’t solve a real problem. I’ve outlined all three in the adoption piece.

Third, rebuild. Not from scratch necessarily, but rebuild the integration. Take the AI capability and wire it into the actual workflow. Remove the friction. Put the output where people already look. Make it invisible.

Fourth, put it in front of the same people who rejected it before. Tell them you rebuilt it. Ask them to try it once. If you did the design work right, you’ll see the difference in a week.

The consultancy incentive problem

I should be honest about why AI change management persists as a concept. There’s a business model behind it.

An implementation partner builds your system. Adoption is low. They sell you a change management engagement. That’s a second revenue stream from the same failed project. There’s no incentive to get the design right the first time when the failure creates a follow-on sale.

I’m not saying all implementation partners do this consciously. But the incentive structure is there. And it explains why “you need change management” has become the default response to low adoption instead of “we need to rebuild this.”

At Easton, our approach is different. We don’t separate design from adoption. The adoption plan is the design. If we do our job right, there’s nothing to manage.

The only change management that matters

There is one kind of change management I believe in. It’s what happens after the system works.

Once people are using the AI system and getting real value, you need to evolve it. Listen to feedback. Watch how people use it versus how you designed it. Find the gaps. Iterate. That’s ongoing improvement, and it matters.

But that’s product management, not change management. It’s “how do we make this better,” not “how do we get people to use it.”

If you need to convince people, you built it wrong. Build it right instead.

Frequently asked questions

Why does “AI change management” exist as a concept?

AI change management exists because consultancies and implementation partners build AI systems without understanding how real people do real work, then try to convince those people to use the thing that wasn’t built for them.

How can you tell if an AI implementation was done well?

If your team needs a formal change management programme to get people to use an AI tool, the implementation was done poorly. AI tools that are genuinely helpful and intuitive should be adopted naturally, without extensive training or change management.

What are common reasons behind “resistance” to AI tools?

In practice, so-called “resistance” to AI tools comes down to the tools not fitting the existing workflow or producing low-quality results. People behave rationally: they won’t use tools that create more work for them or deliver poor output.


Stop training. Start building.

We design AI systems your team actually uses. Training is built in, not bolted on.
