From Pilot to Platform: Why Scaling AI Is Nothing Like Piloting It

Integration, Scaling, and Sustainability in Higher Education AI Adoption

Post 5 of 6 in the series: Leading Through the AI Shift: A Higher Education Leadership Series


Every campus I've talked to in the past two years has at least one AI pilot they can point to and feel good about. A predictive analytics system that's improving advising outcomes. A chatbot handling first-tier student queries. A department that's using generative AI to accelerate grant writing or communications. A data team that's surfaced insights faster than ever before.

And then there's the larger, harder question that most of those institutions are now sitting with: How do we turn this into something coherent?

Because what tends to exist on most campuses right now is not a strategy — it's a collection of experiments. Some of them are genuinely promising. Some are fading out as the novelty wears off and their early adopters move on to other priorities. Some are duplicating each other across departments without anyone noticing. And a few are producing results that nobody's measured, because the evaluation infrastructure was never built.

This is post five in a series on AI and workforce transformation in higher education. The EDUCAUSE Summit that frames this series identifies "integration, scaling, and sustainability" as a distinct theme — and rightly so. Because the gap between a good pilot and a campus-wide capability is not a linear extension. It's a different problem entirely.


The Pilot-to-Scale Problem

Let me describe what typically goes wrong when institutions try to scale AI from department experiments to institutional strategy.

The first failure mode is what I'd call the successful pilot trap. A team runs a well-designed pilot, gets promising results, and the institution decides to scale it broadly. But the conditions that made the pilot work — a motivated team, an engaged leader who championed it, a specific workflow context where the tool fit naturally — don't automatically scale. What scales instead is the tool, without the conditions. Results deteriorate. People lose confidence in the initiative. The next attempt at scaling something gets more resistance because "we tried that before."

The second failure mode is fragmentation persistence. The pilot happened inside a silo, and scaling it campus-wide means crossing department boundaries, data governance jurisdictions, and budget authority lines that nobody mapped when the pilot was designed. The tool that worked for enrollment management doesn't integrate with the student information system. The AI vendor whose tool works for institutional research has a different data architecture than the system academic affairs is using. Scaling becomes a systems integration project that nobody budgeted for.

The third failure mode is governance vacuum. AI proliferates across the campus without anyone having clear authority over standards, vendor selection, data use agreements, or acceptable use boundaries. When something goes wrong — and eventually something does — nobody knows whose responsibility it is.

WCET's 2025 survey on AI in higher education found that while the number of institutions with AI policies has grown significantly (from 23% in 2024 to 39% in 2025), most are still running AI primarily at the department or initiative level rather than as a coherent institutional strategy. EDUCAUSE data suggests up to 57% of institutions consider AI a strategic priority — but declaring a strategic priority and having a coherent implementation strategy are different things.


What "Cohesive Campus-Wide Adoption" Actually Requires

Moving from experimental to institutional requires building several things that pilots don't need.

The first is a governance architecture. This is not one committee with AI in the title. It's a set of clear accountabilities: who has authority to approve new AI tools for institutional use, what data governance standards apply to AI applications, how privacy and security requirements are assessed, what happens when an AI system produces outcomes that are discriminatory or inaccurate, and how those decisions connect to existing institutional governance bodies. EdTech Magazine's 2026 overview of AI governance in higher education makes this point concretely: AI governance that touches only IT or only academic affairs will miss the cross-functional complexity that makes AI governance genuinely difficult.
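
One way to make those accountabilities concrete is to write the decision rights down as reviewable data rather than leaving them implicit. A minimal sketch, with hypothetical decision types and body names (nothing here is a recommended structure):

```python
# Illustrative sketch only: the decision types and bodies named here are
# hypothetical placeholders, not a recommended governance structure.
AI_DECISION_RIGHTS = {
    "approve_new_ai_tool": "ai_governance_council",
    "set_data_use_standards": "data_governance_office",
    "assess_privacy_and_security": "privacy_and_security_review",
    "investigate_harmful_or_inaccurate_outcomes": "ai_incident_review_group",
}

def who_decides(decision: str) -> str:
    """Return the single accountable body for a given decision type."""
    if decision not in AI_DECISION_RIGHTS:
        # A missing entry is itself a governance gap worth surfacing.
        raise KeyError(f"no accountable body defined for {decision!r}")
    return AI_DECISION_RIGHTS[decision]

print(who_decides("approve_new_ai_tool"))  # ai_governance_council
```

The value is not the code; it is that every decision type has exactly one named owner, and a missing entry becomes visible before a crisis rather than discovered during one.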

The second is a data infrastructure that AI can actually use. Most higher education institutions have data environments that were built for reporting, not for machine learning or real-time AI applications. Siloed systems, inconsistent data definitions across departments, legacy architectures that don't expose data in the formats AI tools need — these are not just technical problems. They're strategic constraints that determine what's actually possible at scale.
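
To make "inconsistent data definitions" tangible, here is a minimal sketch (the field name and canonical vocabulary are hypothetical) of the kind of contract check an institution might run before an AI tool consumes departmental extracts:

```python
# Minimal sketch: the field name and canonical vocabulary are hypothetical.
CANONICAL_ENROLLMENT_STATUS = {"full-time", "part-time", "withdrawn", "leave"}

def validate_extract(rows: list[dict], source: str) -> list[str]:
    """Flag values that violate the shared definition of enrollment_status."""
    problems = []
    for i, row in enumerate(rows):
        value = str(row.get("enrollment_status", "")).strip().lower()
        if value not in CANONICAL_ENROLLMENT_STATUS:
            problems.append(f"{source} row {i}: unrecognized status {value!r}")
    return problems

# Two offices encode the same student differently:
advising = [{"enrollment_status": "Full-Time"}]
registrar = [{"enrollment_status": "FT"}]
print(validate_extract(advising, "advising"))    # []
print(validate_extract(registrar, "registrar"))  # ["registrar row 0: ..."]
```

Neither extract is wrong on its own; the problem only appears when a shared definition is enforced at the boundary, which is exactly the work that scaling requires and pilots skip.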

The third is an evaluation infrastructure that produces ongoing evidence rather than one-time pilot results. Institutions that sustain AI adoption have built regular review cycles into how they use these tools: what's working, for whom, at what cost, with what unintended consequences. This is different from the ad-hoc evaluation that characterizes most pilots. It's a built-in organizational practice.
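
What a built-in review cycle records can be quite simple. A minimal sketch, assuming a per-tool review against a pre-deployment baseline (the field names and example values are illustrative, not a prescribed rubric):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIReviewCycle:
    """One recurring review of a deployed AI tool; fields are illustrative."""
    tool: str
    review_date: date
    metric: str            # e.g. "advising appointments kept (%)"
    baseline: float        # measured before deployment
    current: float         # same metric, this review cycle
    cost_per_term: float
    unintended_effects: list[str] = field(default_factory=list)

    def delta(self) -> float:
        """Change against the pre-deployment baseline, not against anecdote."""
        return self.current - self.baseline

cycle = AIReviewCycle(
    tool="advising-chatbot", review_date=date(2026, 3, 1),
    metric="appointments kept (%)", baseline=62.0, current=67.5,
    cost_per_term=18_000.0,
    unintended_effects=["fewer walk-in questions reach human advisors"],
)
print(cycle.delta())  # 5.5
```

The comparison runs against a measured baseline on a recurring date, which is precisely what ad-hoc pilot evaluation tends to lack.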

And the fourth — and this is the one most often underestimated — is workforce infrastructure. Sustaining AI adoption at scale requires ongoing investment in the human capabilities needed to use, supervise, evaluate, and iterate on AI systems. BCG's 2026 analysis on AI and higher education frames this as a once-in-a-generation opportunity, but also notes that the institutions positioned to capture it are those that have invested in the human and governance infrastructure, not just the tools.


The Environmental Question Nobody's Budgeting For

There's a sustainability dimension to AI in higher education that deserves more leadership attention than it's currently getting, and it's not about organizational sustainability. It's about environmental sustainability.

Large-scale AI deployment is computationally intensive, and computationally intensive means energy intensive. The training and inference operations behind the AI tools your institution is now using — or planning to use at scale — have a real carbon footprint. For institutions with climate commitments, this is not a hypothetical concern. It's a governance question: how does AI procurement align with your institution's sustainability commitments?
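
A back-of-envelope sketch shows why this is estimable enough to belong in procurement conversations. Every number below is an assumption chosen to illustrate the shape of the calculation; actual per-query energy and grid carbon intensity vary widely by model, vendor, and region:

```python
# All inputs are illustrative assumptions, not measurements.
queries_per_day = 20_000       # campus-wide chatbot, advising, search
wh_per_query = 0.5             # assumed inference energy per query (Wh)
days_per_year = 365
grid_kg_co2_per_kwh = 0.4      # assumed regional grid carbon intensity

annual_kwh = queries_per_day * wh_per_query * days_per_year / 1_000
annual_tonnes_co2 = annual_kwh * grid_kg_co2_per_kwh / 1_000

print(f"{annual_kwh:,.0f} kWh/year")          # 3,650 kWh/year
print(f"{annual_tonnes_co2:.2f} tCO2e/year")  # 1.46 tCO2e/year
```

The absolute number matters less than the fact that it can be computed at all, which means it can be requested in vendor assessments and counted against existing climate commitments.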

EAB's analysis on AI's environmental impact in higher education notes that some leading institutions are already factoring AI energy consumption into their sustainability accounting. Harvard's Kempner AI Cluster operates on carbon-free energy. MIT's Lincoln Laboratory Supercomputing Center leads projects focused on computational efficiency. These are not peripheral concerns — they're part of responsible AI strategy.

Most institutions are not yet incorporating AI environmental impact into their vendor assessment or strategic planning processes. This is a gap that will likely need to close, particularly for institutions that have made public climate commitments and are now significantly expanding their AI footprint.


Evaluating Sustainability Over Time

Let me say something that's counterintuitive given how much pressure there is to demonstrate AI results quickly: sustainable AI adoption is usually slower than headline-driven AI adoption.

The institutions that deploy AI tools rapidly, across broad use cases, with minimal governance infrastructure, and then report impressive pilot results are the same ones I see struggling with adoption fatigue 18 months later. The tools are underused. The governance problems have materialized. The staff who were skeptical from the beginning are experiencing a kind of quiet vindication that makes the next initiative harder to launch.

The institutions building sustainable AI capabilities tend to be more deliberate and less flashy. They pick specific, high-value use cases and build them out completely before moving to the next. They invest disproportionately in governance and evaluation infrastructure that will serve them across multiple AI applications. They communicate honestly with their communities about what's being tried, what's working, and what's uncertain. They build AI capability into their people, not just into their systems.

This is the institutional version of the adage about going slow to go fast. And it requires exactly the kind of patience and discipline that institutional pressure — from boards, from peer comparison, from enrollment and financial stress — tends to work against.

Times Higher Education's guidance on scaling AI adoption in higher education makes a point I've found to be true from experience: the institutions that scale AI successfully treat it as an institutional transformation project that happens to involve technology, not a technology project that happens to affect the institution. That reframing changes what gets prioritized, who's in the room for decisions, and how success gets measured.


A Practical Framework for Moving Forward

For leaders who are looking at a collection of department-level AI experiments and wondering how to make sense of them, here are a few questions worth working through systematically:

What AI initiatives are actually running at your institution right now, including the ones you didn't sanction? A complete inventory — honest, non-judgmental — is the starting point for everything else; one minimal way to structure that inventory is sketched after these questions.

Of those, which ones have produced evidence of impact that meets a reasonable evidentiary standard? Not anecdote, not vendor claims, but measured outcomes compared to a baseline.

What governance gaps would become serious problems if any of these initiatives scaled to full institutional deployment? Identify those now, before the crisis.

What's the data infrastructure investment required to make your highest-priority AI use cases actually work at scale? Budget that investment explicitly rather than expecting it to be absorbed.

And who in your institution has the cross-functional mandate to coordinate AI governance? If the answer is nobody, that's the most urgent thing to fix.
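
To make the inventory question workable, here is a minimal sketch of what one row of such an inventory might record. The fields and example entries are hypothetical; the useful property is that evidence, governance gaps, and ownership are captured per initiative rather than assumed:

```python
from dataclasses import dataclass, field

@dataclass
class AIInitiative:
    """One row in an institutional AI inventory; fields are illustrative."""
    name: str
    unit: str                     # department or office running it
    sanctioned: bool              # was it formally approved?
    owner: str                    # accountable person, not a committee
    evidence: str                 # "none", "anecdote", "measured vs baseline"
    governance_gaps: list[str] = field(default_factory=list)

inventory = [
    AIInitiative("advising chatbot", "student affairs", sanctioned=True,
                 owner="dean of students", evidence="measured vs baseline"),
    AIInitiative("grant-draft assistant", "research office", sanctioned=False,
                 owner="unknown", evidence="anecdote",
                 governance_gaps=["no data use agreement"]),
]

# Surface the unsanctioned or unevidenced initiatives first.
flagged = [i.name for i in inventory
           if not i.sanctioned or i.evidence != "measured vs baseline"]
print(flagged)  # ['grant-draft assistant']
```

Sorting by flags like these is what turns a sprawl of experiments into a prioritized governance agenda.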

The final post in this series examines what all of this requires from leaders — not just structurally, but personally. Because leading through this moment requires something more than good strategy and good governance. It requires a particular kind of presence that I want to try to name directly.


This series is informed by the themes of the EDUCAUSE Summit on AI and Workforce Transformation.