Rethinking AI Leadership in Academic Libraries
Academic libraries are no strangers to disruption. We’ve been navigating technological change, constrained budgets, and shifting institutional priorities for decades. But the rise of artificial intelligence—particularly generative AI—calls for a different kind of leadership: one that is innovative, inclusive, and scalable.
To lead this transition thoughtfully, I developed the CALM framework—Communication, Adaptability, Learning, and Management—to support academic library leaders navigating change. CALM offers a way to stay centered in our values while responding with clarity to institutional complexity and technological acceleration.
It also calls for care. In Race After Technology, Ruha Benjamin challenges us to examine the social dimensions of innovation and warns of the risks in automating inequality. She invites us to move beyond convenience and ask: What kind of world are we building with our tools? That question should guide every library’s approach to AI.
A recent article in Harvard Business Review profiling Blue Cross Blue Shield of Michigan (BCBSM) offers compelling insight into how large, regulated institutions can lead AI transformation. Their strategies resonate strongly with CALM leadership and with the values at the core of academic librarianship.
Make the Case to Senior Leadership, Over and Over Again
AI isn’t just a technology conversation—it’s a strategic one. Library leaders must regularly brief provosts, presidents, and trustees not just on what AI is, but on how it aligns with institutional goals like student success, inclusive teaching, and research excellence. This is Communication in action—clear, ongoing, and connected to mission.
BCBSM’s leadership invested in educating their board on AI risks and rewards. In the academic setting, that same proactive engagement builds trust and keeps libraries at the forefront of institutional planning.
Build Cross-Functional, Diverse Implementation Teams
True Adaptability means listening across the institution. BCBSM’s success stemmed from cross-functional, weekly meetings with compliance, IT, legal, and analytics. Libraries can do the same: include faculty, students, accessibility advocates, and IT staff in every step of AI integration.
This kind of collaboration ensures more ethical, thoughtful innovation—and reflects the inclusive leadership that scholars like Ruha Benjamin urge us to prioritize.
Prioritize Secure and Equitable Access
Security and access go hand-in-hand. At BCBSM, they developed custom tools to ensure only the right people could access AI models, with guardrails and audit logs in place.
For libraries, protecting user data, maintaining accessibility, and ensuring equitable access to AI tools—regardless of discipline, status, or tech-savviness—are foundational. This is ethical Management: managing risk, privacy, and institutional reputation while keeping users centered.
Invest in Scalable Architecture, Not Just Tools
Scalability requires Learning—about your infrastructure, your data, and your institutional capacity. BCBSM moved away from legacy systems and built flexible, modular data platforms to support AI growth.
Libraries should explore systems integration, open metadata practices, and cross-platform compatibility. The tools we choose today should work with the workflows and users of tomorrow.
Train and Retrain Your Workforce
Generative AI changes quickly. So must we. BCBSM made staff education a core component of their AI strategy, emphasizing that ethical AI use is everyone’s responsibility.
Library leaders can embed regular learning into operations—from workshops on prompt writing and bias mitigation to reflection sessions on AI ethics and community impact. Continuous Learning isn't optional—it's leadership.
Monitor for Bias and Document Decisions
Ruha Benjamin reminds us that inequality can be embedded in systems and masked as innovation. That insight must guide our AI practices. BCBSM tracks bias, audits models, and documents decisions to ensure fairness and compliance.
Libraries must do the same. This is where Learning meets Management: documenting processes, auditing outputs, and treating every AI deployment as an opportunity to improve equity rather than unintentionally reinforce bias.
Create Safe Spaces for Innovation
Innovation thrives with psychological safety. BCBSM partnered with a tech subsidiary to experiment with AI before scaling. Libraries can start with sandbox environments, pilot programs, or student-centered experimentation zones.
This is the CALM way—Adaptability through structure, creativity within boundaries. Safe experimentation leads to confident, ethical implementation.
What This Means for Academic Libraries
Libraries don’t need to mirror healthcare firms—but we do need to act with similar clarity and care. The path to responsible, scalable AI is not paved by early adoption alone. It requires inclusive governance, transparent communication, a commitment to security, and an unwavering focus on human impact.
BCBSM didn’t become a tech leader by chasing trends. They did it by reimagining infrastructure, embracing distributed leadership, and staying accountable to the people they serve. Academic libraries can do the same.
By applying the CALM framework and heeding Ruha Benjamin’s call for justice-centered technology, we can integrate AI in ways that are not only innovative but also equitable, responsible, and sustainable. This is what scalable, inclusive leadership looks like: not reaction, but reflection. Not speed, but solidarity. Not efficiency alone, but empathy.
Let’s lead with care—and lead with CALM.
I’d love to hear your experiences.
Ready to join the conversation on how to disrupt toxic dynamics and build more inclusive, transformative spaces? Sign up for the Inclusive Knowledge Solutions newsletter to stay updated on resources, events, and insights to help you lead the way in creating change.