Resistance Isn't the Problem. Misreading It Is


Change Management, Culture, and AI Adoption in Higher Education

Post 3 of 6 in the series: Leading Through the AI Shift: A Higher Education Leadership Series


Every institution I've seen try to move quickly on AI adoption has encountered the same thing: people who push back. People who drag their feet. People who go quiet in meetings where AI initiatives are announced and then return to doing things the way they've always done them.

And almost every institution I've seen misnames what's happening. They call it resistance. They frame it as a culture problem. They talk about overcoming it.

That framing is not just unhelpful — it actively makes the problem worse. Because most of what gets labeled as resistance in higher education AI adoption is not resistance at all. It's something more legitimate, more specific, and far more addressable — if leaders are willing to look at it honestly.

This is the third post in a series on AI and workforce transformation in higher education. The EDUCAUSE Summit that anchors this series names "change management and culture" as a distinct theme, and it deserves to be one: this is where more AI initiatives fail than anywhere else. Not in the technology. Not in the strategy. In the human infrastructure.


What Resistance Actually Looks Like

Let's disaggregate the thing we call resistance, because it's not a single phenomenon.

Some of what looks like resistance is actually justified skepticism. Faculty and staff in higher education have watched many technology initiatives arrive with promises and depart with mixed results. They've implemented systems that made their work harder while the dashboard showed productivity gains. They've attended mandatory training for tools that were abandoned two years later. When they greet a new AI initiative with caution, they're not being irrational. They're being empirical.

Some of what looks like resistance is actually anxiety about professional identity. For staff whose expertise has been built over years in a specific domain — advising, financial management, research administration, communications — AI raises a question that isn't really about the tool. It's about whether their expertise still matters. That's not a technology question. It's an existential one, and it deserves a human answer, not a change management slide deck.

Some of what looks like resistance is actually ethical concern that hasn't found a legitimate channel. Staff who worry about AI bias in student-facing decisions, about data privacy, about the accuracy of AI-generated content bearing their institution's name — these are real concerns. When they're not invited into the conversation explicitly, they tend to express themselves as general reluctance rather than as the specific, valid objections they actually are.

And yes, some of what looks like resistance is cultural inertia — the ordinary human preference for known routines over unfamiliar ones. That form of resistance does exist, and it requires genuine change management attention. But treating all of the above as if it's just inertia is a mistake leaders can't afford to make.

A 2025 study published in Frontiers in Education found that faculty resistance to AI in higher education often presents as passive non-use rather than active opposition: people engaging minimally with tools, avoiding their more powerful features, or quietly not using them at all. Leaders looking for visible pushback will miss most of the actual resistance patterns.


The Senior Leadership Signal Problem

There's a dynamic in higher education AI adoption that's easy to overlook from the top of an organization. Senior leaders who are genuinely enthusiastic about AI — and many are, for legitimate reasons — can inadvertently create a signal problem down the chain.

When a provost or VP openly champions AI adoption and describes it as a strategic priority, the message that lands several layers below is often not "here's an exciting opportunity" but "this is happening regardless of what you think, so you'd better get on board." That message produces compliance behavior, not genuine adoption. People learn to say the right things in the right meetings while continuing to work in ways that feel safe and familiar.

The institutions navigating this best are the ones where senior leaders are enthusiastic and explicitly make space for honest concern. Where a cabinet member can say in a town hall, "I'm excited about what AI can do for this institution, and I also know this is genuinely uncertain territory, and I want to hear what you're worried about" — and mean it. Where the feedback channels that exist actually lead somewhere.

The Frontiers in Education research on responsible strategic AI leadership in higher education is clear on this: faculty and staff participation in AI decision-making is not just a goodwill gesture. Participatory workshops, co-designed pilot programs, and interdisciplinary advisory structures produce better AI outcomes — because the people closest to the work know things that senior leadership doesn't, and excluding that knowledge from the process produces predictable failures.


Redesigning Organizational Models Is Harder Than It Sounds

One of the things the EDUCAUSE Summit theme identifies is the need to "redesign organizational models" as part of AI adoption. The phrase tends to provoke one of two reactions: excitement among leaders who see an opportunity for structural change, or anxiety among staff who hear it as a veiled reference to restructuring.

Both reactions are understandable. And both point to a real challenge: meaningful AI adoption often does require changing how work is organized, not just adding AI tools to existing workflows. But organizational redesign that happens to people rather than with people almost always fails — or produces exactly the resentment and disengagement that makes AI adoption harder over time.

There's a pattern I've seen work: institutions start by identifying specific workflows (not job families, not roles, but specific tasks and processes) that are good candidates for AI augmentation. They pilot those changes with small, willing teams. They measure what actually changes, including staff experience, not just efficiency metrics. They publish what they learn, honestly, including what didn't work. And they let that evidence base drive broader adoption, rather than driving adoption by mandate and hoping the evidence follows.

This approach is slower than some leaders would like. But it produces genuine cultural change rather than surface compliance — and it produces staff who are active participants in AI adoption rather than subjects of it.


What an AI-Ready Culture Actually Looks Like

Culture is the hardest thing to change and the most consequential thing to get right. So it's worth being specific about what an AI-ready institutional culture actually looks like in practice.

It looks like psychological safety around experimentation. Staff who feel they can try something with an AI tool, have it fail, and report honestly about what happened — without that failure being used against them — are the staff who build genuine capability over time. Institutions where the only acceptable AI story is a success story will end up with a lot of polished success stories and very little actual learning.

It looks like a healthy relationship with uncertainty. AI tools are genuinely uncertain in ways that differ from other professional tools. They produce outputs that look authoritative but may be wrong. They perform differently across contexts. Their capabilities change as the tools update. A culture that's comfortable saying "we don't know yet" is better equipped for this than one that requires confident projections.

It looks like rewarding adaptation, not just performance. In a period of rapid technological change, the capacity to change — to learn, to adjust, to try new approaches — is itself a form of performance. Institutions whose evaluation and reward systems recognize adaptation, not just output, will build more adaptive workforces.

And it looks like honest communication about hard things. Times Higher Education has noted that universities risk irrelevance by failing to engage fully and honestly with AI — and the same observation applies internally. Institutions that communicate with their staff about AI in vague, aspirational terms, without honesty about what's uncertain, what's at stake, and what the real tradeoffs are, are building cultures on an unstable foundation.


"Overcoming Resistance": What That Actually Requires

If I had to distill the change management insight from the research and from watching institutions navigate this, it would be this: the goal is not to overcome resistance. The goal is to understand it specifically enough that you can actually respond to it.

Resistance from skepticism requires evidence and track records, not enthusiasm.

Resistance from professional identity anxiety requires genuine conversation about the future of roles, not reassurance that everything will be fine.

Resistance from ethical concern requires legitimate channels for those concerns to influence decisions, not town halls where questions get polite answers but nothing changes.

Resistance from inertia requires supportive structure, peer modeling, and reduced switching costs, not more mandates.

The leaders who are getting this right are the ones who slow down enough to distinguish between these things — who resist the temptation to treat cultural change as a communications problem that can be solved with the right messaging.

The change is real, and the leadership it requires is correspondingly demanding. That's the honest version of this conversation, and higher education deserves it.

The next post in this series examines the evidence base for AI impact in higher education — what we actually know, from real case studies, about productivity gains, student outcomes, and what's worked. The evidence is more nuanced than either the evangelists or the skeptics tend to acknowledge.


This series draws on the themes from the EDUCAUSE Summit on AI and Workforce Transformation in Higher Education.