The Leader's Role in an AI-Transformed Institution

Direction, Trust, and What Institutional Readiness Really Requires
Post 6 of 6 in the series: Leading Through the AI Shift: A Higher Education Leadership Series
We've spent five posts in this series working through the workforce, team, cultural, evidence, and strategic dimensions of AI adoption in higher education. This final post is the most personal — and in some ways, the hardest to write.
Because everything we've discussed depends, in the end, on a specific quality of leadership. Not just strategic competence or technological fluency. Something harder to name and harder to develop: the capacity to hold direction, build trust, and create the conditions for an institution to do something genuinely difficult, in the face of real uncertainty, with people who have real stakes in the outcome.
The EDUCAUSE Summit frames its sixth and final theme as "leadership for institutional readiness" — and it's placed last deliberately. Because institutional readiness is not a precondition for leadership. It's an outcome of it.
What Leaders Are Actually Being Asked to Do
Let me start by being direct about the weight of this moment.
Higher education leaders navigating AI adoption right now are being asked to make consequential decisions about technology they don't fully understand, for an institution facing financial, demographic, and competitive pressures that didn't exist a decade ago, in an environment where the pace of change outstrips any institution's comfortable pace of response, and with a workforce that is simultaneously the subject of the transformation and the means by which it has to happen.
That's a genuinely hard situation. And the response to genuinely hard situations in leadership tends to go in one of two directions: leaders either contract into the appearance of certainty — projecting confidence they don't feel, moving fast to look decisive, suppressing complexity because complexity is uncomfortable — or they expand into the honesty of the situation, which is harder but ultimately far more effective.
The EDUCAUSE 2026 research on AI's impact on work in higher education found that insufficient trust is among the most significant barriers to AI adoption — trust between staff and leadership about the purpose of AI initiatives, trust between institutions and their communities about data use and decision-making, and trust between departments about whose AI strategy is authoritative. Trust is not a soft variable. It's a structural prerequisite for the kind of institutional coordination that AI adoption actually requires.
And trust is built, primarily, by how leaders behave in conditions of uncertainty. Not by what they say about their values. By what they do when the situation is genuinely ambiguous and the easy answer would be to project false confidence.
Setting Direction Without Pretending to Have All the Answers
Direction-setting in an AI-transformation context is different from strategic planning in more stable periods. The tools, capabilities, costs, and ethical questions around AI are changing fast enough that any direction set with false specificity will need to be revised — often publicly, in ways that can look like inconsistency or failure.
The institutions navigating this most effectively tend to set direction at the level of principles and priorities rather than at the level of specific tools and timelines. "We will invest in AI applications that demonstrably improve student success outcomes and that are governed by rigorous equity and privacy standards" is a direction that survives tool evolution and capability shifts. "We will deploy X platform by Q3 and achieve Y percent efficiency gains" is a direction that becomes a liability when the platform underperforms or the timeline slips.
This doesn't mean vague ambition. It means clarity about values and priorities, combined with intellectual honesty about uncertainty on specifics. It's actually harder to communicate than specific commitments — it requires more trust between leaders and their communities, because it asks people to follow a direction without the comfort of detailed certainty.
Elsevier's framework for developing strategic AI leadership in higher education identifies a core competency for AI leaders that they call "navigating uncertainty" — the capacity to make decisions and set direction with incomplete information, while building in mechanisms to learn and adjust. That's not a natural disposition for most institutional leaders, who typically advance by demonstrating mastery and certainty. Developing it requires deliberate effort.
Trust Is Structural, Not Just Interpersonal
When we talk about building trust in AI adoption, we tend to think of it interpersonally — the trust that staff have in their leaders, or that students have in their institutions. And that dimension matters enormously. But trust in the context of AI is also structural.
Structural trust means that the processes and governance systems around AI are designed to be trustworthy, not just the people running them. It means that faculty and staff can see how AI-assisted decisions are made and can challenge them when they're wrong. It means that data use in AI applications is governed by clear, publicly known principles rather than by whatever the vendor contract happened to specify. It means that AI governance bodies include voices from across the institution — not just technology leaders, but academic affairs, student affairs, HR, legal, and frontline staff.
The EDUCAUSE Review's analysis on ethics and AI in higher education makes the point clearly: institutions that position ethics as a constraint on AI adoption will find themselves in a constant tension between innovation and governance. Institutions that position ethics as a design principle — embedded in how AI is selected, implemented, evaluated, and governed — build systems that are trustworthy by construction.
That distinction matters for leadership because the former tends to produce adversarial dynamics between the people championing AI adoption and the people raising ethical concerns. The latter tends to produce collaborative dynamics in which diverse perspectives improve AI outcomes rather than slow adoption.
The Trust Gap That's Widening Right Now
Let me name something that I think is happening in many institutions and isn't being addressed directly enough.
EdTech Journal's research on institutional trust in higher education found that rapid AI investment is, in some institutions, outpacing coherent governance — and that this is eroding student and faculty trust in ways that will be costly to rebuild. EDUCAUSE's own data shows that fewer than half of institutions have formal AI governance frameworks in place, even as 80% are actively experimenting with AI.
This gap — between the speed of AI adoption and the development of governance structures that make adoption trustworthy — is a leadership problem. Not an IT problem, not a policy problem, not a communications problem. A leadership problem.
The leaders who are closing that gap are the ones who treat governance not as bureaucratic overhead on AI adoption but as a prerequisite for the kind of adoption that actually sustains. They're the ones who push back on timelines that outrun governance readiness. They're the ones who say, in cabinet meetings and board presentations, "We're moving this fast, and here's what we might be missing."
That kind of leadership is not comfortable. It requires willingness to slow things down when the pressure is to go faster, to raise concerns when the enthusiasm is high, and to protect institutional integrity at the cost of short-term momentum. It's also, I'd argue, the only kind of AI leadership that produces institutions you'd actually want to be a student or staff member at, five years from now.
Creating the Conditions for Others to Lead
One thing I've come to believe strongly about institutional AI leadership is that the most important thing senior leaders do is not make decisions — it's create conditions. The conditions under which middle managers can make better decisions about AI in their departments. The conditions under which faculty can experiment honestly and report what they've learned. The conditions under which staff can raise concerns without fear and contribute ideas without having to fight their way through hierarchy.
This is harder than it sounds in the actual texture of how institutions work. It means protecting the time and resources for departments to do AI adoption well, not just fast. It means building feedback channels that actually surface dissent to the level where it can influence decisions. It means modeling intellectual humility publicly — saying in open forums that you're still learning, that you've changed your view on something, that a concern you heard changed how you're thinking about a decision.
The Frontiers in Education research on responsible strategic leadership in AI adoption is emphatic about this: AI initiatives cannot succeed without strong leadership at the senior level, but that leadership expresses itself primarily through creating enabling conditions, not through top-down mandate. The institutions where AI adoption has gone best are ones where staff at every level feel some ownership over how AI is implemented in their domain.
The Longer View
I want to close this series with something that doesn't get said enough in the AI-in-higher-education conversation.
This is not primarily a technology story. It's a story about whether higher education — as a sector, as individual institutions, as collections of people who've dedicated careers to this work — can navigate a period of genuine discontinuity with its values intact and its people well-served.
The technology is real. The stakes are real. The pressure on institutions is real. But so are the people who make these institutions function: the advisors who know their students by name, the researchers who've spent decades building domain expertise, the administrative staff who hold institutional memory in ways no system can replicate, the leaders who stayed in higher education despite offers that would have paid them more elsewhere because they believe in what education does for people.
An AI transformation that treats those people as variables to be optimized will produce institutions that are technically capable and institutionally hollow. One that treats them as the point — as the reason the transformation matters — has a chance to produce something genuinely better.
Leading toward that outcome is the work. It's demanding, it's uncertain, and it's consequential. But it's exactly the kind of leadership that higher education has always asked of its best people.
This series has examined six dimensions of that work. The EDUCAUSE Summit on AI and Workforce Transformation is one of the places where leaders across the sector are gathering to think through these questions together — because none of us has all the answers, and the best thinking tends to happen in community.
This six-part series has been informed by the themes from the EDUCAUSE Summit on AI and Workforce Transformation in Higher Education. We are grateful to EDUCAUSE for the framework that anchored this conversation.
If you've found this series useful and want to continue the conversation — about AI strategy, workforce readiness, or any of the specific challenges your institution is navigating — edtechniti.com is where you can reach us.
