Your Team Already Includes AI. Does Anyone Know How to Lead It?


Building AI-Augmented Teams in Higher Education

Post 2 of 6 in the series: Leading Through the AI Shift: A Higher Education Leadership Series


There's a management challenge quietly spreading through higher education departments right now, and it rarely gets named correctly. We call it "AI adoption" or "technology integration." But here's what it actually is: staff members are working alongside AI tools — some sanctioned, some not — without clear frameworks for how those tools fit into the team, who's accountable for their outputs, or how to develop the skills needed to use them well.

This is not an IT problem. It's a management problem. And the leaders who recognize that distinction early are the ones who will build teams that actually get better with AI, rather than teams where AI becomes another source of confusion, inequity, and workaround behavior.

This post is the second in a series examining the workforce themes at the center of EDUCAUSE's summit on AI and higher education. Where the first post dealt with the broader question of what's happening to the workforce, this one gets more specific: what does it actually look like to build and lead an AI-augmented team?


The "Shadow AI" Problem Nobody Talks About

Let's start with an uncomfortable truth. In most institutions, AI augmentation is already happening — it's just happening informally, inconsistently, and largely invisibly to leadership.

Your financial analyst is using an AI tool to accelerate data modeling. Your communications coordinator is drafting first cuts with a generative AI assistant. Your academic advisors are experimenting with AI to help manage their caseloads. None of this is necessarily bad. Some of it is genuinely impressive. But when it happens without institutional structure, you end up with wide disparities in capability across your team, uneven quality standards, accountability gaps when AI-assisted work goes wrong, and staff who can't distinguish between using AI as a collaborator and outsourcing their judgment to it.

The EDUCAUSE 2026 Top 10 report identifies technology literacy for the future workforce as one of the defining challenges facing higher education — and notably, it's not primarily about whether staff can use the tools. It's about whether they understand the tools well enough to supervise, evaluate, and take responsibility for their outputs.

That's a meaningful difference, and it's one most professional development frameworks haven't caught up to yet.


What "AI Literacy" Actually Means for Higher Ed Staff

The term "AI literacy" has become so broadly used that it risks meaning nothing. So let me try to be more precise about what it actually needs to mean for a higher education workforce.

At a minimum, a staff member working in an AI-augmented role needs to understand how large language models work at a conceptual level — not the engineering, but the implications. That AI tools generate probabilistic outputs, not factual certainties. That they reflect patterns in training data, which means they inherit biases. That they can be confidently wrong in ways that don't look like being wrong. That their outputs require human judgment to validate, not just human eyes to skim.
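
It helps to make "probabilistic, not factual" concrete. Here is a minimal sketch in Python; the prompt and the probability values are invented for illustration, but the mechanism is faithful to how generation works: the model picks likely next tokens, and likelihood is not truth.

```python
import random

# Toy next-token distribution for the prompt "The university was founded in".
# Every value here is invented for illustration; a real model scores tens of
# thousands of candidate tokens, but the mechanism is the same.
next_token_probs = {
    "1891": 0.46,
    "1893": 0.31,
    "1901": 0.18,
    "yesterday": 0.05,
}

def sample_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights, k=1)[0]

# Run it several times: the "answer" varies between runs, and nothing in
# this mechanism checks whether any of these dates is actually true.
for _ in range(5):
    print(sample_token(next_token_probs))
```

A fluent, confident answer is a high-probability answer, not a verified one. That is the single concept that makes review norms feel necessary rather than bureaucratic.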

Beyond that baseline, there's a dimension of AI literacy that's specific to each professional domain. An advisor using AI-assisted early alert systems needs to understand how those systems weight risk signals, and what they systematically miss. An HR director using AI for candidate screening needs to understand the documented patterns of bias in those tools. A researcher using AI for literature synthesis needs to understand hallucination rates and citation verification requirements.
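
To see what "weighting risk signals" means in practice, consider a deliberately oversimplified sketch. The signal names and weights below are hypothetical, not taken from any real early-alert product, but the failure mode they illustrate is general.

```python
# Hypothetical early-alert risk score. Signal names and weights are
# invented for illustration; no real product is being described here.
WEIGHTS = {
    "missed_assignments": 0.5,
    "lms_inactivity_days": 0.3,
    "midterm_grade_drop": 0.4,
}

def risk_score(signals: dict[str, float]) -> float:
    """Weighted sum of whatever signals the student happens to emit."""
    # Anything not tracked silently defaults to zero: the model
    # cannot flag what it never measures.
    return sum(w * signals.get(name, 0.0) for name, w in WEIGHTS.items())

# A student whose trouble shows up in the tracked signals gets flagged:
print(risk_score({"missed_assignments": 4, "midterm_grade_drop": 1.5}))  # 2.6

# A student in acute financial or personal distress who is still
# submitting work on time emits none of these signals:
print(risk_score({}))  # 0.0, "low risk" as far as the model can see
```

An advisor who understands that last line reads a low score as "no tracked signals," not "no risk." That's domain-specific AI literacy in one sentence.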

The National Academies have framed this as the core challenge of AI-era reskilling: we're not just teaching people new tools, we're teaching them a new relationship with information and output — one where human oversight is not a formality but a genuine cognitive responsibility.


Skills, Yes. But Also Structures.

The conversation about reskilling tends to focus heavily on individual skills — courses, certificates, training modules. That's not wrong, but it's insufficient on its own.

You can send every member of your team through an AI literacy program and still have a badly structured AI-augmented team, because the structural questions are distinct from the skill questions.

Who on your team is responsible for reviewing AI-assisted outputs before they leave the department? How does that review work — is it a spot-check, a full audit, something in between? When an AI tool makes an error that reaches a student or a faculty member, what's the accountability chain? Who's authorized to make decisions about which AI tools are sanctioned for which kinds of work? How does your team handle situations where AI-generated recommendations contradict human professional judgment?

These are workflow and governance questions, not skill questions. And the institutions building the strongest AI-augmented teams are working on both dimensions simultaneously — investing in skill development while also redesigning how work flows through the team.
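
One way to keep these questions from staying rhetorical is to write the answers down as an explicit, inspectable artifact. The sketch below is hypothetical; the categories, review levels, and owners are placeholders that every team would fill in differently. What matters is that the accountability chain exists somewhere other than in people's heads.

```python
# A hypothetical review policy, expressed as data so it can be discussed,
# versioned, and changed deliberately. All names here are placeholders.
REVIEW_POLICY = {
    "internal_draft":        {"review": "spot_check",  "owner": "author"},
    "student_communication": {"review": "full_review", "owner": "team_lead"},
    "policy_recommendation": {"review": "full_review", "owner": "director"},
}

def route_for_review(output_type: str) -> dict:
    """Anything not explicitly covered defaults to the strictest path."""
    return REVIEW_POLICY.get(
        output_type, {"review": "full_review", "owner": "director"}
    )

print(route_for_review("student_communication"))
print(route_for_review("grant_budget_summary"))  # uncovered: strictest path
```

The design choice worth stealing isn't the code; it's the default. Work that nobody has explicitly categorized gets the most scrutiny, not the least.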

The World Economic Forum's research on reskilling makes clear that effective upskilling programs share a common feature: they're tied to real work contexts, not abstract competencies. The most effective approach isn't a training catalog — it's structured practice in actual job functions, with feedback loops that connect learning to performance.


Leading a Team You Don't Fully Understand

Here's the leadership dimension that doesn't get enough attention: many managers of AI-augmented teams are themselves still learning what these tools can and can't do. That creates a genuinely new managerial challenge: how do you supervise work you don't fully understand?

This is not unique to AI. Department heads have always been responsible for work that requires expertise they don't personally hold. But AI augmentation is moving fast enough, and the range of tools is broad enough, that the gap between what staff can do with AI and what their managers understand about AI can open up very quickly.

The best mitigation I've seen is radical transparency about that gap. Leaders who acknowledge that their teams may know more about specific AI capabilities than they do — and who create structured mechanisms for that knowledge to flow upward — build more adaptive teams than leaders who maintain the pretense of comprehensive oversight.

That might look like regular "what's working, what's concerning" conversations specifically about AI tool use. It might look like designating an informal AI champion within the team who serves as a resource for both staff and leadership. It might look like building AI tool effectiveness into performance conversations — not as a compliance check, but as a genuine learning dialogue.


The Equity Dimension Nobody Wants to Acknowledge

Any honest discussion of AI-augmented teams in higher education has to include the equity dimension — and it tends to be the dimension leaders are most reluctant to surface directly.

AI augmentation does not distribute itself evenly across a workforce. Staff with higher prior technical confidence will adopt AI tools faster and more deeply than those with lower confidence. Staff who are earlier in their careers, who have more time and flexibility to experiment, will often develop AI capabilities faster than senior staff managing heavier workloads. Staff whose primary language aligns more closely with AI training data may find these tools more useful and reliable than staff whose professional communication patterns or subject-matter expertise are less represented.

This means that an AI-augmented team, without active design to the contrary, will tend to amplify existing disparities. The staff members who were already most productive will likely become dramatically more productive. Those who were already marginalized or under-resourced will likely fall further behind.

Leaders who are serious about human-centered AI adoption need to account for this explicitly. Not with platitudes about inclusion, but with specific commitments: equitable access to tools, differentiated support for onboarding, protected time for all staff (not just the most confident ones) to develop AI capabilities, and active attention to whether AI-driven productivity gains are flowing equitably across the team.

This is also, frankly, a talent retention question. Staff who feel that AI is being used to extract more work from them without investment in their growth will leave. Staff who feel that their institution is genuinely investing in their professional development in an AI-augmented environment will stay — and will bring others.


Practical Moves for Leaders Right Now

The distance between where most higher education teams are and where they need to be on AI augmentation is significant. But it's navigable. A few places to start:

First, get honest about where AI use is already happening on your team. You probably know less than you think. A simple, non-judgmental survey or conversation — "what tools are you using, what are you using them for, what's working?" — will surface practices you didn't know existed, and you can't set norms for practices you can't see.

Second, establish a team norm around AI output review rather than AI output trust. The culture you want is one where AI is a useful first draft, not a final authority — on analysis, on communications, on recommendations. That norm has to be explicitly set; it won't emerge on its own.

Third, make AI skill development a team activity, not just an individual responsibility. Regular shared learning — where team members show each other what they've discovered, what's failed, what's promising — builds collective capability faster and more equitably than any training catalog.

And fourth, bring the structural questions into the open. The accountability, governance, and quality review questions don't resolve themselves. They require deliberate leadership attention, and they're worth the time.

The next post in this series examines the change management and cultural dimension of AI adoption — specifically, why resistance to AI in higher education is so often misdiagnosed, and what leaders can do differently.


This series is informed by the themes from the EDUCAUSE Summit on AI and Workforce Transformation.

If your institution is thinking through AI team strategy and workforce readiness, edtechniti.com works specifically on AI in higher education.