These are not implementation steps. They are the leadership decisions that, once made, let implementation proceed coherently. Most institutions skip these and start implementing — which is why most implementations stall.
1. What is our institution for, in the AI era?
Most university mission statements were written before generative AI existed. They need re-examination.
If most students already use AI in coursework but few feel prepared to use it professionally, the gap your graduates face is not knowledge — it is fluency in working alongside AI. If research can be partially automated, what is the human contribution?
You do not need perfect answers. You need to have asked the question — at the leadership level, in writing — before any agent is deployed on your campus. Otherwise the agents define the answer for you, by accident.
Action: Schedule one leadership session this semester whose only agenda is "what is our institution for, in the AI era?" No vendors, no technical agenda. Document what you decide.
2. Which problems are actually worth solving with agents?
AI agents work best on problems with three properties: the work is repetitive, the success criteria are clear, and the cost of an error is low.
Where agents are working today:
- Student support (registration, deadlines, financial aid questions)
- Course advising
- Administrative processing (invoices, expenses, scheduling)
- First-line tutoring support
- Application screening against transparent criteria
Where agents are not ready yet:
- High-stakes individual decisions (admissions, discipline)
- Research integrity judgments
- Nuanced cultural or ethical context
- Strategic external communications
The mistake is using agents on problems with high stakes and unclear criteria. The opportunity is using them on problems with high volume and clear criteria.
Action: List the ten most repetitive operational tasks consuming staff time. Cross out anything involving high-stakes individual judgment. The remainder is your agent shortlist.
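As an illustration only, that triage can be written down as a simple filter. The sketch below assumes you record, for each task, a rough weekly volume, whether its success criteria are clear, whether errors are cheap to catch and fix, and whether it involves high-stakes individual judgment; every name and threshold here is hypothetical, not a prescribed tool.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    weekly_volume: int            # roughly how often staff handle this task per week
    clear_success_criteria: bool  # can you state what "done correctly" means?
    low_error_cost: bool          # is a mistake cheap to catch and fix?
    high_stakes_judgment: bool    # admissions, discipline, integrity rulings, etc.

def agent_shortlist(tasks: list[Task]) -> list[Task]:
    """Keep high-volume tasks with clear criteria and cheap errors; drop high-stakes judgment."""
    return [
        t for t in tasks
        if not t.high_stakes_judgment
        and t.clear_success_criteria
        and t.low_error_cost
        and t.weekly_volume >= 20  # arbitrary threshold; tune to your own volumes
    ]

tasks = [
    Task("Answer financial aid deadline queries", 300, True, True, False),
    Task("Screen scholarship applications", 50, False, False, True),
    Task("Process expense claims", 120, True, True, False),
]
for t in agent_shortlist(tasks):
    print(t.name)
```

Anything removed by the high-stakes filter stays with humans, regardless of how high its volume is.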
3. Who owns AI strategy at your institution?
This is the question most likely to be answered badly, because it is usually answered by default rather than by decision.
Common failure patterns:
- IT owns it (treats it as a tech problem)
- Academic affairs owns it (treats it as a teaching problem)
- A committee owns it (no one owns it)
- Nobody owns it (most common)
AI cuts across teaching, research, operations, governance, and brand. It needs an owner at leadership level whose role explicitly includes integrating across these domains — usually the Provost, a Deputy VC, or a new Chief AI Officer.
The owner does not need to be technical. They need authority and time. "Time" is what most institutions get wrong — assigning AI strategy to a senior person already at 110% capacity is the same as not assigning it.
Action: Name the person. Put it in their objectives. Give them authority to convene other senior leaders.
4. What is your data position?
AI agents are only as good as the data they can access. Two questions:
What data do you have? Most universities underestimate this. Student records, course materials, research outputs, alumni data, operational records — most of it fragmented across systems that do not talk to each other.
What data are you willing to expose? Sending student data to a US-hosted AI service has implications under your privacy commitments and your relationship with students. There are real options — local hosting, hybrid architectures, sovereign infrastructure — but they require decisions, not defaults.
Action: Commission a basic data inventory. One page per major system: what data lives there, who owns it, who has access. The first version will be incomplete. That is fine.
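If it helps to standardise those pages, one inventory record could look like the minimal sketch below; the fields and example values are assumptions to adapt, not a required schema.

```python
from dataclasses import dataclass

@dataclass
class SystemInventoryPage:
    """One page per major system: what data lives there, who owns it, who has access."""
    system: str
    data_held: list[str]
    data_owner: str               # an accountable role, not just a department name
    who_has_access: list[str]
    hosting: str                  # e.g. "on-premises", "regional cloud", "US-hosted SaaS"
    contains_student_data: bool
    notes: str = ""               # gaps are fine; the first version will be incomplete

inventory = [
    SystemInventoryPage(
        system="Student records",
        data_held=["enrolments", "grades", "financial aid status"],
        data_owner="Registrar",
        who_has_access=["Registrar's office", "faculty administrative staff"],
        hosting="on-premises",
        contains_student_data=True,
    ),
]
print(len(inventory), "systems documented")
```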
5. How will you handle academic integrity?
Your faculty are already dealing with AI in coursework, whether your policy acknowledges it or not. Institutions that handle this well do three things:
- They have an explicit policy, even if the policy is "course-by-course at faculty discretion." Ambiguity is worse than any clear position.
- They focus on assessment redesign rather than AI detection. AI detectors do not work reliably. Assessments requiring process artifacts, oral defence, or in-class synthesis are detection-resistant by design.
- They treat AI fluency as a graduation outcome. The question becomes not "did the student use AI?" but "can the student use AI well, and explain when they did?"
Institutions that handle this badly run an AI-detector arms race they will lose, while their graduates enter a workforce where AI fluency is assumed.
Action: Within the next academic year, every department should have a written position. They do not all need the same position. They all need a position.
6. What does your faculty actually need?
Faculty resistance to AI is usually framed as "fear" or "Luddism." This framing is wrong and condescending. Most faculty resistance is rational concern about workload, autonomy, and academic standards.
What faculty need from leadership:
- Time and training, not announcements. Telling faculty "we are an AI-enabled institution" without funding paid time to learn the tools is empty.
- Clarity on autonomy. Will faculty choose how AI is used in their courses, or will it be mandated? Both are valid. Pretending the choice does not need to be made is not.
- Protection from quality erosion. Concerns that AI will be used to justify class size increases, casualisation, or curriculum dilution are not paranoid. Address them directly.
Action: Run a faculty consultation specifically on AI. A dedicated session, senior leadership listening more than talking. The information you gather is more valuable than any vendor proposal.
7. What is your two-year horizon?
Most AI conversations at universities are stuck at the 30-day level: which tool to permit this semester, which policy to update this month. Necessary work, but not strategy.
A two-year horizon forces different questions:
- What will our institution be known for in two years?
- Which capabilities will be table-stakes by then versus differentiating?
- What investments need to start now that take 18-24 months to land?
- What partnerships should we be exploring before peers do?
Action: Write a two-page sketch — not a strategy, just a description — of what your institution looks like in two years. Where AI lives, what it does, what stays human. Revise in three months.