Issue #6, April 6, 2026

PromptResponse #6 - Weekly Insights for AI in Higher Education and the Humanities

Latest News

Adapting Education for an AI Future

Mark Daley argues that higher education needs genuine structural reform to integrate AI effectively across disciplines. He underscores the critical role of collaboration between academic institutions and industry in preparing students for an evolving job market.

University of Kent Makes Bold AI Play as UK Institutions Race to Adopt Educational Technology

The University of Kent has become one of the first major UK universities to provide ChatGPT Edu access to all staff and students, signaling a growing willingness among British institutions to embrace OpenAI's education-focused platform. For university leaders watching this space, Kent's move raises strategic questions about competitive positioning and institutional readiness as AI tools become increasingly expected across campus.

The Erosion of Learning: AI's Hidden Risk

As AI technologies become more integrated into higher education, university leaders must look beyond academic integrity to the potential erosion of the learning process itself. Understanding how AI affects educational outcomes is essential to fostering effective and meaningful student engagement.

CU Community Pushes Back Against Top-Down AI Adoption

Hundreds of University of Colorado faculty, staff and students are resisting the university's planned rollout of a university-controlled OpenAI system, marking one of the most visible instances of institutional resistance to centrally mandated AI tools in higher education. The pushback underscores the growing tension between administrative efficiency goals and faculty concerns over academic freedom, data privacy and the pace of technological change on campus.

WSU Study Gives ChatGPT D Grade for Research Accuracy

A Washington State University study that tested ChatGPT on hypotheses drawn from scientific papers found significant accuracy and consistency failures, raising fresh questions about AI's reliability for academic research. University leaders may want to review their AI policies as faculty and students increasingly turn to generative tools for scholarly work.

CU Boulder Faculty Question OpenAI Partnership Terms

University of Colorado Boulder faculty are raising questions about the institution's enterprise agreement with OpenAI, specifically regarding contract transparency and academic integrity safeguards. As higher education increasingly adopts generative AI tools, administrators must balance innovation with clear policies that address faculty concerns about pedagogy and the authenticity of student work.

Faculty Resistance to AI Partnerships Poses New Governance Challenge

The California State University system's $17 million ChatGPT rollout has become a flashpoint for faculty opposition, signaling that institutional AI adoption now faces the same stakeholder resistance historically seen with commercial textbook and learning management system deals. University leaders treating these partnerships as purely technical decisions may find themselves navigating significant faculty governance battles that could delay or derail planned implementations.

Upstate NY College Mandates AI Literacy for All Freshmen

A growing number of university leaders are watching institutions like SUNY ESF, which now requires all freshmen to complete an AI literacy course focused on ethical reasoning rather than technical skills alone. The approach—asking students what AI should do rather than merely what it can do—reflects a broader shift in higher education toward preparing graduates for the societal complexities of emerging technology.

AI's Impact on Learning Experiences

As artificial intelligence technologies become increasingly integrated into higher education, the priority for university leaders should shift from concerns about cheating to safeguarding the integrity of learning experiences. This calls for a reevaluation of pedagogical strategies to ensure that genuine educational engagement is not compromised.

AI Integration in Higher Education

Case Western Reserve University exemplifies how AI is being woven into the fabric of higher education, offering insights that could inform broader institutional strategies. As universities navigate these developments, leadership must consider both the innovative potential and ethical implications of AI applications in academic settings.

Admin Signals

What Faculty Need Most From AI Leaders Isn't Training, but Trust

After decades of watching universities navigate technological disruption, I've learned this truth: the institutions that succeed with AI aren't the ones with the biggest budgets or the most sophisticated tools. They're the ones that put their faculty's anxiety at the center of the strategy. Faculty aren't resisting AI because they're technophobes. They're worried about their relevance, their students' futures, and whether they'll have a voice in decisions that shape their classrooms. Smart leaders recognize that this anxiety is legitimate and address it directly, not with mandatory workshops, but with genuine conversation.

The most effective AI pivots I've observed start with listening sessions: not town halls where administrators present and depart, but small-group conversations where faculty can voice concerns without judgment. University of Arizona's approach comes to mind: they trained faculty facilitators to lead these discussions across departments, creating space for honest dialogue about what AI means for pedagogy, assessment, and academic integrity. The key insight wasn't what they learned about AI; it was what they learned about their faculty's hopes and fears. That understanding became the foundation for every subsequent decision.

Practical support matters, but it must be layered and voluntary. One-size-fits-all training programs consistently underperform because they ignore the reality that a tenured professor in the humanities has different needs than a tenure-track computer scientist. The institutions making progress offer multiple pathways: peer mentoring networks where early adopters help colleagues, stipends for faculty who develop AI-integrated curriculum, and clear policies that give instructors autonomy to set their own boundaries. When Georgetown University launched their AI faculty fellowship program, they explicitly told participants they'd have creative control over how they integrated AI into their courses, and that autonomy transformed engagement.

Here's what veteran administrators know and what the data confirms: faculty who feel trusted and included become your strongest AI advocates. Those who feel imposed upon become your biggest obstacles, not because they oppose innovation, but because they feel voiceless in their own institutions. The AI pivot isn't really a technology project. It's a change management challenge that happens to involve technology. Lead with respect, involve faculty in governance decisions, and remember that the goal isn't AI adoption, but empowering your faculty to use AI in service of their students.

AI in the Classroom

Teaching Research Skills When Students Can Generate Citations in Seconds

The first time a student shows you a perfectly formatted bibliography for a paper they wrote in 45 minutes, don't panic, but do pause. What we're witnessing isn't the death of research instruction; it's a correction. For years, many of our students treated citations as a box-checking exercise: grab something from a database, format it in MLA or APA, and move on. AI tools have simply exposed how little that approach had to do with actual research thinking.

Here's what still matters, perhaps more than ever: the question. Teaching students to craft researchable questions, narrow enough to answer, broad enough to matter, has become our most valuable work. An AI can generate a bibliography on "climate change policy," but it cannot define why a student cares about climate change policy in their particular community, for their particular major, with their particular career in mind. When you build assignments around student-generated questions, you're asking for something AI cannot produce: genuine intellectual investment.

The second shift is equally important. Instead of treating citation generation as a skill to test, make the evaluation about what happens before and after the citation. Ask students to annotate their sources: Why did they choose this one over the ten others they found? What did they have to discard, and why? How does this source complicate or confirm their argument? These are the moves that separate researchers from content consumers, and no chatbot can do the choosing for them.

Finally, be honest with your students about what you're teaching. Tell them directly: "I'm not grading your ability to generate a Works Cited page. Your phones can do that. I'm grading whether you know why a source belongs in a paper, whether you can evaluate its credibility, and whether you can build an argument that uses evidence rather than just displays it." When students understand the real assignment, most of them want to meet that standard. The technology changes, but the intellectual work at the center of good research hasn't, and that's the piece only you can teach.

Incubator Playbook

Your Newsletter Is Client Infrastructure

The most powerful business development tool most professionals ignore is sitting right in their inboxes. I'm talking about the newsletter: not the sporadic announcement blast, but the systematic, curated content pipeline that positions you as the indispensable expert your ideal clients didn't know they needed.

Here's what the humanities-trained mind understands intuitively but often undervalues: curation is intellectual labor. When you sift through the noise of your industry and deliver a thoughtful synthesis (three links, one insight, one provocation), you're doing what academics do best. You're organizing knowledge for others. The difference now is you're doing it for clients, not tenure committees. A well-crafted weekly digest becomes a relationship-building instrument that works while you sleep, reaching prospects who've already decided you understand their world before you ever speak.

The pipeline magic happens through consistency and specificity. Target an audience narrow enough that your curation feels like it was written for them. It was. A historian-turned-consultant doesn't send a general business newsletter; she sends one tracking how institutions navigate legacy and change. A literature PhD building a brand strategy practice curates around narrative and meaning-making in commercial contexts. The specificity creates trust. When they finally need your services, you're the obvious choice they already know.

Start small: build up to fifty subscribers who fit your ideal client profile. Interview three of them about their challenges. Build your first eight issues around what you learn. The infrastructure pays dividends not in weeks, but in the compound interest of being top-of-mind when transformation becomes inevitable for someone you've already been serving quietly, weekly, through the inbox they open every Monday morning.

Prompting 101

The Simple Rule That Tells You When to Show AI Examples, and When to Just Ask for What You Want

Here's something that trips up a lot of people getting started with AI prompting: whether to give the AI examples of what you want, or just describe it in plain language. The good news is there's a straightforward principle that covers most situations, and once you understand it, you'll make better prompts almost immediately.

When you're asking the AI to do something it already knows how to do, like writing a summary, translating a sentence, or answering a factual question, you usually don't need to provide examples. This is called "zero-shot" prompting, and it works because the AI has already learned these patterns from its training. Just tell it clearly what you want: "Write a professional email declining this meeting invitation" or "Explain photosynthesis to a fifth grader." The more specific you are about your goal and any constraints, the better it performs.

But when you're asking the AI to follow an unusual format, adopt a specific style it might not guess, or handle a task with particular nuances, that's when you want to throw in a few examples. Typically, two to five work well. More than five is usually wasted energy. This is "few-shot" prompting. For instance, if you want it to extract information from customer reviews in a specific table format, show it what that format looks like (there's a sketch of exactly this below). If you need it to respond to inquiries in your company's particular voice, give it a sample exchange. The examples act as a template the AI can follow.

The key insight: use zero-shot when the task is standard and well-defined; use few-shot when the task requires a specific structure or style the AI couldn't otherwise guess. One practical tip as you practice: start with zero-shot. If the output isn't quite right, then add examples to steer it. This approach saves you time and helps you learn what actually moves the needle on quality. You'll develop an intuition for this quickly, and soon enough, you'll be prompting with confidence.
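To make the few-shot idea concrete, here is a minimal sketch of the review-extraction prompt mentioned above. The reviews, field names, and format are invented for illustration; substitute whatever structure your task actually needs:

    Extract the product feature and the sentiment from each customer review.

    Review: "The battery barely lasts a day."
    Feature: battery | Sentiment: negative

    Review: "Setup took two minutes and the quick-start guide was clear."
    Feature: setup | Sentiment: positive

    Review: "Shipping was slow and the box arrived dented."
    Feature:

The model completes the final entry by following the pattern the first two examples establish. Notice that the examples carry most of the instruction: the field names, the separator, and the one-line format are all shown rather than described, which is exactly the situation where few-shot beats zero-shot.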