AI Goes to College

Craig Van Slyke

Generative artificial intelligence (GAI) has taken higher education by storm. Higher ed professionals need ways to understand and keep up with developments in GAI. AI Goes to College helps higher ed professionals learn about the latest developments in GAI, how these might affect higher ed, and what they can do in response. Each episode offers insights about how to leverage GAI, and about the promise and perils of recent advances. The hosts, Dr. Craig Van Slyke and Dr. Robert E. Crossler, are experts in the adoption and use of GAI and in understanding its impacts on various fields, including higher ed.

  1. 3D AGO

    We're On Our Own: Academic Integrity through AI Resilience

    Craig and Rob kick off this episode with a deep dive into Claude's Constitution — the 84-page document Anthropic released to explain how Claude is governed. The document lays out a four-part hierarchy of priorities: be broadly safe, be broadly ethical, follow Anthropic's guidelines, and be genuinely helpful — in that order. Craig walks through the key language, and both hosts zero in on the uncomfortable questions it raises. Who gets to define "broadly ethical"? Whose values count? Craig points out that collectivist and individualist cultures would answer those questions very differently, and Rob raises the example of how privacy has historically carried different social weight in China versus the United States. They give Anthropic credit for the transparency. Rob notes that he has no idea what governs ChatGPT by comparison, and Craig argues the openness could become a real differentiator for universities evaluating which AI tools to bring in-house. But the Constitution also includes some curious language — the phrase "during the current phase of development" gives Anthropic significant room to evolve these guardrails over time, and a section on emotional support states that Claude should "show that it cares," which both hosts flag as a strikingly anthropomorphic choice of words. Craig shares a fun aside: he used Claude Code to build a clone of the classic Colossal Cave Adventure game — reframed around understanding large language models — using just a few sentences as a prompt. The game was up and running in about an hour. That kind of capability would have been unthinkable a couple of years ago, and it underscores why the Constitution's language about the "current phase" matters so much. The big takeaway from the Constitution discussion lands hard: higher ed is on its own when it comes to academic integrity. Anthropic — arguably the most transparent of the major AI companies — has no interest in blocking students from misusing its tools. 
Rob mentions a new product called Einstein that will watch your Canvas videos, write your discussion posts, reply to classmates, and complete your assignments. All you have to do is hand over your login credentials. That sets up the episode's second major topic: AI resilience. Rob explains the concept as designing learning outcomes that hold up regardless of what AI can do. If a major portion of a student's grade depends on writing an essay that AI could produce in seconds, that assignment has very little resilience. The shift Rob advocates moves evaluation toward process — asking students for the prompts they used, reflections on how they refined their approach, and demonstrations that they understand what was produced. He shares the example of a colleague whose programming class now requires students to record videos explaining their code rather than just submitting it. Craig raises the scaling problem. He regularly teaches 90 to 100 undergraduates. Rob suggests that AI itself can help with formative feedback on scaffolding assignments, freeing faculty to focus their grading energy on fewer, higher-stakes assessments. Craig uses an analogy from music: scaffolding assignments are like playing scales — you do them to build toward performance, and they don't need to carry grade weight. Both hosts agree this represents a move away from the grade economy, where students rationally minimize effort because every small assignment is a transaction. Craig pushes the conversation further by proposing live client projects — or AI-simulated client projects — as a way to create the messiness and ambiguity that real work demands. Rob's initial reaction is skepticism (live client projects are logistically brutal), but he warms to the idea of using AI to simulate clients with realistic fuzziness and scope creep. The broader point: AI could be the lever higher ed needs to fix problems that have been accumulating for decades. The episode wraps with an update on NotebookLM. 
    Craig walks through the recent changes — more user control over reports, slide decks, flashcards, quizzes, and other outputs in the Studio panel. You can now specify the structure and focus of custom reports rather than relying solely on canned formats. Slide decks can be exported (though editing remains clunky since each slide is essentially an image). Craig's recommendation: if you have a Google account and you work with knowledge in any form, you should be using NotebookLM. Rob notes that Microsoft Copilot has added a similar notebook feature worth exploring, and they float the idea of a future head-to-head comparison episode.

    Links referenced in this episode: NotebookLM, Anthropic, Claude, Google, Canvas, Einstein

    Mentioned in this episode: AI Goes to College Newsletter

    48 min
  2. FEB 16

    Students Are Confused About AI and It's Our Fault (with Dr. Bette Ludwig)

    Dr. Bette Ludwig spent 20 years in higher ed working directly with students before leaving to build something different — a Substack (AI Can Do That), a consulting practice, and most recently, the Socratic AI Method, an AI literacy program that teaches students how to think critically alongside AI while keeping their own voice intact. That last part is the hard part. Craig opens with the question that drives the whole episode: Socratic dialogue requires you to already know enough to ask good questions. So what happens when a student doesn’t know enough to push back on what AI is telling them? Bette’s answer is both practical and unsettling — younger students literally don’t know what they don’t know, and that gap is where the real danger lives. The conversation moves into dependency territory when Craig shares a moment from his own morning: Claude froze while he was editing a manuscript, and he felt a flash of genuine panic. Two seconds later, he remembered he could just… write. But he names the uncomfortable truth — his students won’t have that fallback. Bette compares it to the panic we feel when the wifi drops, which is both funny and a little alarming when you sit with it. From there, the three dig into the policy mess — teachers across the hall from each other running opposite AI rules, students confused about what’s allowed, and educational systems moving at what Bette calls “a glacial pace” while the technology sprints ahead. Craig shares his own college’s approach: you have to have a policy, it has to be clear, but how restrictive or permissive it is remains your call. The non-negotiable? You can’t leave students in the dark. The episode’s most surprising thread might be Bette’s observations about how students actually use AI. It’s not just homework. They’re using it for companionship, personal problems, cooking questions, building apps — ways that don’t even register as “AI use” to most faculty. 
    Her closing point lands hard: students have never used technology the way adults assume they should, and they're going to do the same thing with AI.

    Key Takeaways
    1. The Socratic method has an AI prerequisite problem. You need existing knowledge to know what questions to ask, which means younger students are especially vulnerable to accepting AI output uncritically. Bette and Craig agree that junior/senior year of high school is roughly where the cognitive capacity for meaningful pushback begins.
    2. AI dependency is already happening to experienced users. Craig describes a two-second panic when Claude froze mid-edit. He recovered by remembering he could just write the way he always has. His concern: students who grew up with AI won't have that muscle memory to fall back on.
    3. The "helpful by default" design is a subtle problem. Craig raises the point that AI systems are programmed to be agreeable, which means they can lock students into a single mode of thinking without anyone noticing. The hallucinations get all the attention, but the quiet steering might be worse.
    4. Policy chaos is the norm, not the exception. Teachers in the same hallway can have opposite AI rules. Bette recommends clarity above all: whatever your policy is, make it explicit. In K–12, she argues for uniform policies. In higher ed, where faculty governance complicates things, Craig's approach works — require a policy, let faculty own the specifics.
    5. Grace matters more than enforcement right now. Both Craig and Bette push back on the "AI cop" mentality. Students sometimes cross lines they didn't know existed, just like past generations plagiarized without understanding citation rules. Teaching moments beat punitive responses, especially when the rules themselves are still being written.
    6. Students use AI in ways faculty don't expect. Companionship, personal problems, everyday questions, building apps. Bette's observation: students are as likely to use AI for roommate conflicts as for essay writing. Faculty who don't use AI themselves can't begin to understand these patterns.
    7. Education isn't moving fast enough. New York got an AI bachelor's program launched in fall 2025, which Bette calls "Mach speed for higher ed." Most institutions are still in the resistance-or-denial phase. The shared worry: AI across the curriculum could become another empty checkbox, like ethics across the curriculum before it.

    Links
    Dr. Ludwig's website: https://www.betteludwig.com/
    AI Can Do That Substack: https://betteconnects.substack.com/
    AI Goes to College: https://www.aigoestocollege.com/
    Craig's AI Goes to College Substack: https://aigoestocollege.substack.com/

    Mentioned in this episode: AI Goes to College Newsletter

    40 min
  3. FEB 2

    Human-AI Collaboration: Outsourcing vs Offloading and the Rise of Co-Produced Cognition

    Recording from the Deep Freeze: Craig broadcasts from snow-covered north Louisiana (running on generator and Starlink!), where AI helped him MacGyver a propane tank solution involving ratchet straps, a plastic bucket, and a shop light. Welcome to the wild world of practical AI applications.

    Featured Topics

    Oboe.com: The Future of Self-Directed Learning?
    Craig and Rob explore Oboe (oboe.com), a free AI-powered platform that creates customized courses on virtually any topic in minutes. Craig demonstrates by building a course on AI agents, and Rob becomes his first student. The hosts discuss:
    - How the platform auto-generates quizzes with reasonable multiple-choice options and helpful feedback
    - The potential to revolutionize textbook accessibility with low-cost or no-cost alternatives
    - Using Oboe to supplement existing textbooks (like adding blockchain content to their own textbook)
    - The limitations: shallow sourcing and the need for instructor vetting
    - Credit to the AI and I podcast from Every.to (makers of Lex.page) for the discovery

    Security First: The Moltbot Warning
    Not all that glitters is AI gold. Rob raises important concerns about new tools like Moltbot that can automate processes but may introduce security vulnerabilities. Key takeaway: Educators must apply the same critical thinking they expect from students when evaluating new AI tools for classroom use.

    Craig's Three-Stage Hierarchy: A Framework for Human-AI Interaction
    The centerpiece discussion introduces Craig's developmental model for understanding how we work with AI:
    - Cognitive Outsourcing - AI does the task for you (the "easy" but often problematic approach)
    - Cognitive Offloading - AI handles specific components while you maintain control
    - Co-Produced Cognition - True collaborative thinking that produces outcomes neither human nor AI could achieve alone
    Craig shares his experience co-writing with Claude, comparing it to the collaborative process of updating their textbook with co-author Franz. The magic: AI enables 24/7 expert-level collaboration that would be impossible with humans alone.

    The Big Idea: This hierarchy should guide our teaching. Rather than telling students to "think critically" (a vague catchall), educators should actively move students from outsourcing toward co-produced cognition, where AI's power truly unlocks.

    Geeking Out on Affordances
    Craig unpacks how AI is fundamentally "a bundle of affordances" - potential uses that only matter when actualized. Using the metaphor of a rock (hammer? erosion control? weapon? stepladder?), he explains:
    - The same AI tool can be used to cheat on an assignment or to write a meaningless email nobody will read
    - What matters isn't just what AI can do, but which affordances we choose to actualize
    - Understanding affordances helps us guide students toward productive uses
    Rob adds that affordances can be actualized poorly (like dropping a rock on your toe), emphasizing the need for purposeful, intentional use.

    The Balanced Path Forward
    The hosts reject both AI extremism and AI evangelism, calling for nuanced, intentional engagement. Whether it's Oboe.com or ChatGPT, tools can be used for good or ill - context and purpose matter. The Challenge: You can't understand AI's affordances without using it. Even if your conclusion is not to use AI in your classroom, that decision should come from informed experimentation, not avoidance.

    Key Quotes
    "What we need to do as educators is we need to push students from that outsourcing to the offloading to the co-produced cognition. I see that as our main job with generative AI." - Craig
    "The whole idea of think critically I think is a catch all phrase that we use very often that's very hard to quantify... I do really like that example of pushing students towards that co-produced cognition." - Rob
    "If you don't use them, you're not going to know what they're capable of, either harm or benefit. So it's really, I think, anybody in higher ed, it's your responsibility to start using these tools." - Craig

    Episode Resources
    - Oboe.com - Free AI course creation platform (for now)
    - AI Agent Oboe.com course
    - AI and I Podcast from Every.to
    - Watch out for: Moltbot security concerns

    Bottom Line
    Don't be blindly pro-AI or anti-AI. Be intentionally informed. Understanding the affordances of AI tools - and helping students actualize them purposefully - may be one of higher education's most important responsibilities in 2025.

    AI Goes to College is your guide to navigating generative AI in higher education. Hosted by Dr. Craig Van Slyke (Louisiana Tech University) and Dr. Rob Crossler (Washington State University).

    Takeaways:
    - In the podcast, we discussed the emergence of Oboe.com, an innovative platform that facilitates self-directed learning through AI.
    - We emphasized the importance of critically evaluating new AI tools before implementing them in educational settings.
    - Our conversation highlighted the significance of distinguishing between cognitive outsourcing and cognitive offloading in the context of AI use.
    - The hosts expressed their belief that AI can democratize learning, but it must be used responsibly and with proper oversight from educators.
    - We reflected on the collaborative potential of AI, stressing that true innovation arises from synergistic human-AI interactions.
    - The episode concluded with a call to action for educators to engage with AI tools to better understand their affordances and implications.

    Companies mentioned in this episode: Google, Oboe, Every.to, Lex Page, Moltbot, Claude

    Mentioned in this episode: AI Goes to College Newsletter

    31 min
  4. JAN 14

    Confronting Higher Ed's Grade Economy: A Call to Action on AI

    Welcome to another episode of AI Goes to College, the podcast where Craig and Rob break down what’s really happening with Generative AI in higher education. In this episode, Rob shares a professional update and the hosts dive straight into a candid conversation about the urgent need for action when it comes to embracing and experimenting with AI in the classroom. Forget waiting for the “perfect plan.” Craig and Rob encourage faculty and academic leaders to start doing, iterating, and learning as the technology—and the educational landscape—continues to evolve. They tackle the risks and realities educators face, from teaching evaluations to institutional inertia, and explore the challenges of moving beyond a “grade economy” where effort is traded for grades. The conversation gets real about shifting mindsets, focusing on genuine demonstrations of learning, and the importance of collective action in higher ed to adapt to the AI transformation. Plus, get a practical tip to supercharge your workflow: how to use Chrome Split View (and Edge’s version) to work side-by-side with AI tools and documents. If you’re looking for honest discussion, actionable advice, and a bit of humor about the trials and opportunities of AI in academia, this episode is for you. Don’t miss out! 
    Takeaways:
    - The concept of the grade economy has led to a transactional view of education, where students equate effort directly with grades, rather than focusing on genuine learning.
    - It is essential for educators to embrace iterative approaches in their classrooms, similar to how college football playoffs evolved, rather than waiting for the perfect solution.
    - The rapid evolution of AI tools necessitates that educators continuously adapt their teaching methods to remain relevant and effective in fostering student learning.
    - We must challenge the conventional grading system that incentivizes minimal effort by students, and instead focus on developing intrinsic motivation to learn.
    - Transparency in teaching strategies and the incorporation of AI should be communicated to students to foster a collaborative learning environment.
    - Educational institutions must engage in systemic change to address the flaws in the current grading system, moving away from a production line mentality towards genuine assessments of learning.

    38 min
  5. 12/22/2025

    Building Resilience in the AI Era: What Faculty Need to Know (Live from ICISER)

    Episode Description
    Join Craig and Rob for the very first live stream of AI Goes to College, recorded at the International Conference on Information Systems Education Research Workshop. In this special episode, we explore how generative AI is fundamentally changing knowledge work, starting with our own field of Information Systems as the "canary in the coal mine." Craig shares his surprising experience with vibe coding—creating deployable web applications and productivity tools in hours rather than days—and explains why this signals a massive shift coming for all knowledge workers. We also tackle the troubling trend of students using AI to avoid productive learning friction, discuss the dark side of AI monetization and data privacy, and wrestle with difficult questions about AI companionship in an increasingly lonely society. This conversation moves beyond the hype to examine both the genuine opportunities and serious concerns that educators and technologists need to grapple with as AI becomes embedded in every aspect of work and learning.

    Key Topics & Timestamps
    - Welcome and introduction to the live format
    - Rob's surprising AI use case: Students creating machine-voiced presentations to avoid public speaking
    - Craig introduces vibe coding and creating deployable apps in minutes
    - Information Systems as the "canary in the coal mine" for knowledge work disruption
    - When vibe coding works (and when it doesn't): Simple vs. enterprise applications
    - The 50% principle: "50% is greater than 100%"
    - How AI changes systems analysis and prototyping
    - The job market reality: Entry-level positions disappearing
    - What should we be teaching students now?
    - Privacy concerns and institutional AI tools
    - The monetization problem: When AI platforms need to make money
    - AI companionship and mental health concerns
    - Using AI for 24/7 policy questions and course support
    - Should we accept AI as a solution to technology-created loneliness?

    Key Insights
    The 50% Principle: Stop trying to get AI to do 100% of a task. Instead, focus on tools that save you half the effort—that's where the real value lies.
    Vibe Coding Reality: It's not for enterprise-scale applications, but it's revolutionary for rapid prototyping and creating simple, personal productivity tools without needing current coding skills.
    Productive Friction: Students are increasingly using AI to avoid uncomfortable but necessary learning experiences, like public speaking, removing the "friction points" that actually drive growth.
    The IS Canary: Information Systems professionals are experiencing AI disruption first, but similar transformations are coming for accounting, finance, law, and virtually all knowledge work.
    Privacy Warning: As AI companies struggle to monetize, expect increased data harvesting and advertising. Consider running local models for sensitive work.

    Resources Mentioned
    - AI Goes to College website: aigoestocollege.com
    - LM Studio: Tool for running large language models locally
    - Claude Code, Codex, Antigravity: Professional coding environments mentioned
    - Meta's Llama: Open-source AI model (though future releases uncertain)

    Credits
    Hosts: Craig Van Slyke and Rob Crossler
    Audio: Hazel Crossler
    Sponsored by: Association for Information Systems Special Interest Group on Education (SIG ED) https://ais-siged.org/
    Event: International Conference on Information Systems Education Research Workshop
    Special thanks to: Conference organizers Tanya McGill and Rosetta Romano

    Companies mentioned in this episode: Washington State University, AIS SIG ED, OpenAI, Google, Claude Code, Gemini, Copilot, Facebook, Prospect Press of Vermont

    Mentioned in this episode: AI Goes to College Newsletter
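    The "run local models for sensitive work" suggestion can be sketched concretely. LM Studio serves models through an OpenAI-compatible HTTP API on localhost (port 1234 by default); this is a minimal sketch under that assumption, and the function names, model name, and prompt below are illustrative, not from the episode.

```python
# Sketch: querying a model served locally (e.g., by LM Studio), which exposes
# an OpenAI-compatible HTTP API at http://localhost:1234/v1 by default.
# The model name and prompt are placeholders, not from the episode.
import json
import urllib.request


def build_request(prompt: str,
                  model: str = "local-model",
                  base_url: str = "http://localhost:1234/v1") -> urllib.request.Request:
    """Build an OpenAI-style chat-completion request for a local server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }
    return urllib.request.Request(
        f"{base_url}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )


def ask_local_model(prompt: str) -> str:
    """Send the prompt to the local server; the text never leaves the machine."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

    Because the server runs on your own machine, prompts and documents stay local, which is the point of the hosts' privacy warning about monetized cloud platforms.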

    47 min
  6. AI, Friction, and the Future of Teaching and Learning: Lessons from Gemini 3

    11/24/2025

    AI, Friction, and the Future of Teaching and Learning: Lessons from Gemini 3

    Are you ready to rethink how AI is shaping higher education? Join Craig and Rob in episode 27 of AIGTC as they dive into the recent agentic shift in AI models like Google’s Gemini 3—and what this means for students, faculty, and the future of learning. In this thought-provoking conversation, Craig shares his unsettling experience with Gemini 3’s “agentic” behavior, where AI takes the reins with minimal user input—even when that’s not what the user asked for. The hosts examine how this frictionless, super-helpful technology might make academic shortcuts easier than ever, removing the crucial learning struggles that foster true understanding. Are we on the verge of a “skill inversion,” where students need expertise just to avoid cheating themselves out of learning? But it’s not all doom and gloom: Rob and Craig explore actionable solutions for instructors, focusing on process-oriented teaching, project-based learning, and authentic reflection assignments that resist easy automation. They challenge educators to try just one process-focused change in their next class and offer themselves as resources for peer collaboration and feedback.

    Throughout the episode, you’ll discover:
    - Why “agentic” AI could undermine deep learning—and what to do about it
    - The hidden dangers of frictionless assignment completion for student growth
    - Practical strategies to make student assessment more process-driven and meaningful
    - How reflection and professionalism can help students stand out in the AI era
    - The importance of radical thinking and institutional adaptation for the future of higher ed

    Plus, stay tuned for details on AIGTC’s first-ever live stream at the International Conference on Information Systems, and learn how you can join the hosts in Nashville or virtually for engaging Q&A and networking. The live stream will be held from 17:00 - 18:00 US Central time (UTC-6). Link forthcoming.
If you’re an educator, administrator, or student eager for honest insights and expert advice on navigating AI in academia, this episode is essential listening. Discover how you can control what you can—and embrace the challenge of teaching and learning in an AI-driven world. Mentioned in this episode: AI Goes to College Newsletter

    43 min
  7. 10/28/2025

    Creating the Classroom of Tomorrow: Stephen Fitzpatrick Discusses Generative AI

    Are you an educator navigating the new world of generative AI, or a college faculty member wondering how your incoming students are being shaped by technology? Join hosts Craig and Rob on this episode of AI Goes to College as they sit down with Stephen Fitzpatrick—a veteran secondary school history teacher, debate coach, and leading voice on AI in education—to explore how artificial intelligence is fundamentally transforming the classroom experience, from high school to higher ed. Stephen shares his journey from classroom innovator to Substack thought-leader, detailing his hands-on experimentation with emerging AI tools like ChatGPT, Claude, NotebookLM, and specialized debate platforms. You'll hear candid stories about students' rapid normalization of AI use, from research projects and note-taking to more controversial applications like essay writing—and the ethical dilemmas teachers face in response. This episode reveals why banning AI in schools is a losing game, and why the true challenge is fostering critical thinking, curiosity, and responsible use among students. Discover how high school educators are wrestling with the balance between preserving “AI-free” learning spaces and adapting assignments for an AI-empowered world. 
    Stephen provides actionable insights on:
    - The rise of AI-powered research and note-taking among high schoolers—what college faculty need to know
    - The importance of clear, consistent policies on AI use across classes and institutions
    - How educators' comfort level with technology directly impacts their ability to guide students
    - Practical solutions for cultivating AI fluency and resilience in both teachers and students
    - Why peer-to-peer training and real-world use cases trump generic professional development

    Whether you're a teacher, administrator, or college professor, this conversation will equip you to meet students where they are in the age of AI—challenging old paradigms and preparing them for a future where intelligent technology is their constant companion. Stephen’s nuanced perspective, grounded in frontline experience and continuous experimentation, will inspire you to rethink resistance and embrace adaptation. Ready to hear the real story behind “AI Goes to College”? Tune in to learn how you can empower students to use AI as a tool for deeper thinking and lifelong learning—and don’t forget to check out Stephen’s Substack, Teaching in the Age of AI, linked in the show notes for even more practical wisdom.

    Links:
    Teaching in the Age of AI: https://fitzyhistory.substack.com

    Mentioned in this episode: AI Goes to College Newsletter

    49 min
  8. 09/22/2025

    Context rot, AI over-hype and an intriguing, hilarious video

    Welcome to another episode of "AI Goes to College," where hosts Dr. Rob Crossler and Craig dive into the ever-evolving landscape of artificial intelligence in higher education. In this episode, they kick things off with a lighthearted look at the viral, AI-generated parody "Redneck Star Trek," using it as a springboard to discuss the rapidly advancing capabilities of AI video creation and what this means for both educators and students. Rob and Craig explore the implications of AI tools making creative content more accessible, shaking up traditional teaching methods, and opening new doors for engagement. They unpack the excitement—and potential pitfalls—around trend topics like vibe coding, the “agentic layer,” governance concerns, and the phenomenon of “context rot” in AI conversations. Along the way, the hosts share personal experiences with different AI platforms, discuss challenges in scaling AI within institutional systems, and highlight the need for critical thinking and strong oversight as universities start to embed AI more deeply into daily operations. Whether you’re a faculty member, student, or just AI-curious, this episode offers practical insights, food for thought, and a dose of humor for anyone navigating the intersection of technology and higher ed. So grab your coffee (or sweet tea) and join Rob and Craig as they “go where no podcasters have gone before” in the world of collegiate AI!

    Takeaways:
    - The emergence of AI technologies presents unprecedented opportunities for students to create engaging content, thereby transforming traditional classroom dynamics.
    - With AI-generated content, educators can offer varied media presentations, catering to diverse learning styles and enhancing student engagement.
    - The recent advancements in AI tools have made it feasible for novices to produce high-quality content, which was previously the domain of experts alone.
    - Concerns regarding AI-generated outputs necessitate critical evaluation to avoid potential misinformation and ensure educational integrity.

    Links:
    Redneck Star Trek - Beam Me Up, Bubba: https://youtu.be/1eqYswiW4eo?si=XvwPdWGTbiSvx6FL
    The vibe coding hangover is upon us (Fast Company): https://www.fastcompany.com/91398622/the-vibe-coding-hangover-is-upon-us
    The Agentic Layer: Why the Middle of the Cake Matters in AI-Driven Delivery: https://cd.foundation/blog/2025/09/05/agentic-layer-ai/
    Context rot - What it is and how to avoid it: https://aigoestocollege.substack.com/p/context-rot-the-hidden-challenge

    Companies mentioned in this episode: Neural Derp, Grok, Google Vids, Wondery, Claude, ChatGPT, Fast Company, Microsoft Copilot, Suno AI

    Mentioned in this episode: AI Goes to College Newsletter

    42 min
4.8 out of 5 (16 Ratings)
