AI Goes to College

Craig Van Slyke

Generative artificial intelligence (GAI) has taken higher education by storm, and higher ed professionals need ways to understand and keep up with its developments. AI Goes to College helps higher ed professionals learn about the latest developments in GAI, how these might affect higher ed, and what they can do in response. Each episode offers insights into how to leverage GAI and into the promise and perils of recent advances. The hosts, Dr. Craig Van Slyke and Dr. Robert E. Crossler, are experts in the adoption and use of GAI and in understanding its impacts on various fields, including higher ed.

  1. 5D AGO

    Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI

    AI Goes to College, Episode 33: Accessibility Hacks, 81,000 Interviews, and the Choppy Waters of Academic AI

    Higher education is drowning in accessibility deadlines, grappling with what 81,000 AI interviews reveal about how people actually use these tools, and watching the academic publishing system creak under new pressures. In this episode, Craig and Rob dig into all three, with practical advice, a few uncomfortable truths, and their usual mix of optimism and healthy skepticism.

    The Accessibility Crunch Is Here (and AI Can Help)

    The episode opens with a problem that's top of mind for faculty everywhere: the April 24 federal deadline requiring public-facing digital content to meet WCAG accessibility guidelines. Universities have been scrambling, and many of the contracted tools designed to help have been, as Craig diplomatically puts it, hit and miss. Craig shares a concrete example from his own workflow. He took three image-heavy slide decks from his Principles of Information Systems course and handed them to Claude Cowork with a simple instruction: add alt text for all the images. Within about 30 minutes, the job was done. The accuracy? Roughly 75 to 80 percent. A handful of images needed corrections, but instead of writing alt text for 40 or 50 images from scratch, he only had to fix six or eight. Rob tried something similar with Microsoft Copilot on a keynote presentation he gave at the SAIS conference in Asheville: two images, 30 seconds, done.

    Rob makes the important point that accessibility isn't just a PowerPoint problem. It extends to whiteboard files, videos, and essentially everything faculty communicate digitally. The burden is real, and it lands on faculty who are already overwhelmed by the changes AI is bringing to their professional lives. Craig adds a note of personal sensitivity here; his wife has a profound hearing disability, which makes these issues more than abstract compliance for him. The larger takeaway? When you hit one of these friction points in your work, try AI. It won't always solve the problem, but it often will, and the time savings can be substantial.

    What 81,000 Interviews Tell Us About How People Actually Use AI

    Link: https://www.anthropic.com/features/81k-interviews
    Craig's article: https://open.substack.com/pub/aigoestocollege/p/what-81000-people-told-anthropic

    The conversation shifts to Anthropic's large-scale qualitative study, in which Claude was used to conduct and analyze 81,000 interviews about how people use AI tools. Rob, who has spent considerable time doing qualitative research the traditional way (36 interview transcripts with families, a labor-intensive process), finds the scale almost hard to believe. Craig wrote a separate article about this study for the AI Goes to College newsletter. The phrase that catches both hosts' attention is one from the report: "the light and the shade are tangled together." It captures the tension between excitement about AI's possibilities and anxiety about what those possibilities mean for how people work, learn, and think. Craig connects this to a concept from technology studies: this is not technological determinism. The outcomes aren't dictated by the tools themselves. They emerge from the sociotechnical space where human choices and technological capabilities intersect. Rob observes that most current AI use cases still amount to doing what we've always done, just faster. The real transformation will come when people start imagining entirely new approaches (he draws an analogy to cloud computing, which started as a backup solution and eventually reshaped how people interact with technology in ways nobody initially anticipated). One quote from the Anthropic study lands hard. A freelance software engineer in Pakistan says: "I want to learn skills, but learning deeply is of no use. Ultimately I can just use AI."
    Craig points out that if a working professional thinks this way, the implications for students who may not yet appreciate the long-term value of deep learning are sobering. Rob agrees but pushes back slightly: people who lean too far into this mindset will eventually hit a wall where they lack the critical thinking skills to know when or why AI has gotten something wrong. The hosts converge on what's becoming a running theme for the podcast: higher education's central task is helping students understand the long-term value of cognitive engagement, because without that understanding, the default will always be to let AI handle it.

    Academics Need to Wake Up: 10 Theses on a Shifting Landscape

    Link: https://substack.com/home/post/p-189705626

    The second major discussion centers on Alexander Kustoff's Substack article, "Academics Need to Wake Up on AI: 10 Theses for Folks Who Haven't Noticed the Ground Shifting Under Their Feet." Rob sees it as a useful prompt for conversations the research community needs to have. Craig appreciates the ambition but pushes back on some of the claims. Take thesis number one: AI can already do social science research better than most professors. Craig's reaction is nuanced. The claim is probably technically true if "most" is read literally, since many professors don't publish much (Rob notes the median number of publications for business school professors may be as low as one). But the implication that AI can replace skilled researchers? Not yet. Craig estimates that a knowledgeable researcher can use AI to cut research production time by about three-quarters, but that knowledge is the key ingredient; without research skill, you'll just produce publishable garbage faster.

    Rob raises something interesting: colleagues who are brilliant thinkers but never thrived in research because they didn't enjoy writing may now have a path to contribute. AI could genuinely democratize parts of the research process.
    Craig extends this point to data analysis; tools like Cowork can run Python and R analyses without expensive specialized software, which matters enormously for under-resourced institutions and researchers in developing countries.

    The conversation turns to the strain AI is putting on the peer review system. More submissions (many of them better written thanks to AI) are flooding journals, but finding reviewers was already difficult. Craig, speaking from his role as a journal editor, argues that well-trained AI could do a better job reviewing than roughly half of current human reviewers. Rob agrees but emphasizes that journal leaders need to come together and define norms for what's acceptable. Right now, the rules are either nonexistent or unrealistically restrictive ("just don't use AI for anything"), which creates the same kind of confusion faculty have imposed on students with inconsistent classroom policies.

    One of the most provocative moments comes when Craig reads a quote from the Kustoff article: "I don't envision a research assistant role in my workflow anymore. What I want from collaborators is original thinking, domain expertise, and intellectual challenge. This is a genuine loss for the traditional apprenticeship model, and I don't have a clean answer for how to replace it." Both hosts take this seriously. Craig argues that senior scholars will need to accept some suboptimal results in the short term to continue mentoring the next generation. Rob suggests the apprenticeship model isn't dying; it's transforming. The mentorship shifts from teaching students how to do tasks to teaching them how to direct AI tools and critically evaluate what those tools produce.
    Craig closes with a characteristically honest observation: senior scholars get stuck in their ways of thinking, and one of the real values of working with early-career doctoral students is the occasional moment when their unformed, messy thinking reveals a perspective that nobody in the room had considered. That's worth protecting.

    AI-Generated Lesson Plans and the Bloom's Taxonomy Problem

    Link: https://citejournal.org/volume-25/issue-3-25/social-studies/civic-education-in-the-age-of-ai-should-we-trust-ai-generated-lesson-plans/

    The final segment covers a paper by four researchers from UMass Amherst, "Civic Education in the Age of AI: Should We Trust AI-Generated Lesson Plans?" The study found that roughly 90 percent of AI-generated lesson plans hit only the lower levels of Bloom's taxonomy (remembering, understanding) rather than higher-order thinking skills like analyzing, evaluating, and creating. Craig's first reaction was that the prompts used in the study were terrible. But he acknowledges the researchers had a reason: they were mimicking how most teachers would actually prompt. And that's the real finding. The problem isn't that AI can't produce sophisticated lesson plans; the problem is that untrained users produce unsophisticated prompts, and the output reflects the input. Rob agrees and broadens the point: if even a fraction of teachers are prompting this way, that's affecting a lot of students.

    Craig shares a personal anecdote from his one year as a high school teacher. He diligently wrote lesson plans; a veteran teacher (whom he describes as one of the best he'd ever seen) simply copied his plans to satisfy an administrative checkbox. The experienced teacher didn't need detailed plans because she could read the room and adapt in real time. Some lesson planning, Craig suggests, falls...

    47 min
  2. MAR 3

    We're On Our Own: Academic Integrity through AI Resilience

    Craig and Rob kick off this episode with a deep dive into Claude's Constitution — the 84-page document Anthropic released to explain how Claude is governed. The document lays out a four-part hierarchy of priorities: be broadly safe, be broadly ethical, follow Anthropic's guidelines, and be genuinely helpful — in that order. Craig walks through the key language, and both hosts zero in on the uncomfortable questions it raises. Who gets to define "broadly ethical"? Whose values count? Craig points out that collectivist and individualist cultures would answer those questions very differently, and Rob raises the example of how privacy has historically carried different social weight in China versus the United States. They give Anthropic credit for the transparency. Rob notes that he has no idea what governs ChatGPT by comparison, and Craig argues the openness could become a real differentiator for universities evaluating which AI tools to bring in-house. But the Constitution also includes some curious language — the phrase "during the current phase of development" gives Anthropic significant room to evolve these guardrails over time, and a section on emotional support states that Claude should "show that it cares," which both hosts flag as a strikingly anthropomorphic choice of words. Craig shares a fun aside: he used Claude Code to build a clone of the classic Colossal Cave Adventure game — reframed around understanding large language models — using just a few sentences as a prompt. The game was up and running in about an hour. That kind of capability would have been unthinkable a couple of years ago, and it underscores why the Constitution's language about the "current phase" matters so much. The big takeaway from the Constitution discussion lands hard: higher ed is on its own when it comes to academic integrity. Anthropic — arguably the most transparent of the major AI companies — has no interest in blocking students from misusing its tools. 
Rob mentions a new product called Einstein that will watch your Canvas videos, write your discussion posts, reply to classmates, and complete your assignments. All you have to do is hand over your login credentials. That sets up the episode's second major topic: AI resilience. Rob explains the concept as designing learning outcomes that hold up regardless of what AI can do. If a major portion of a student's grade depends on writing an essay that AI could produce in seconds, that assignment has very little resilience. The shift Rob advocates moves evaluation toward process — asking students for the prompts they used, reflections on how they refined their approach, and demonstrations that they understand what was produced. He shares the example of a colleague whose programming class now requires students to record videos explaining their code rather than just submitting it. Craig raises the scaling problem. He regularly teaches 90 to 100 undergraduates. Rob suggests that AI itself can help with formative feedback on scaffolding assignments, freeing faculty to focus their grading energy on fewer, higher-stakes assessments. Craig uses an analogy from music: scaffolding assignments are like playing scales — you do them to build toward performance, and they don't need to carry grade weight. Both hosts agree this represents a move away from the grade economy, where students rationally minimize effort because every small assignment is a transaction. Craig pushes the conversation further by proposing live client projects — or AI-simulated client projects — as a way to create the messiness and ambiguity that real work demands. Rob's initial reaction is skepticism (live client projects are logistically brutal), but he warms to the idea of using AI to simulate clients with realistic fuzziness and scope creep. The broader point: AI could be the lever higher ed needs to fix problems that have been accumulating for decades. The episode wraps with an update on NotebookLM. 
    Craig walks through the recent changes — more user control over reports, slide decks, flashcards, quizzes, and other outputs in the Studio panel. You can now specify the structure and focus of custom reports rather than relying solely on canned formats. Slide decks can be exported (though editing remains clunky since each slide is essentially an image). Craig's recommendation: if you have a Google account and you work with knowledge in any form, you should be using NotebookLM. Rob notes that Microsoft Copilot has added a similar notebook feature worth exploring, and they float the idea of a future head-to-head comparison episode.

    Links referenced in this episode: NotebookLM, Anthropic, Claude, Google, Canvas, Einstein

    Mentioned in this episode: AI Goes to College Newsletter

    48 min
  3. FEB 16

    Students Are Confused About AI and It's Our Fault (with Dr. Bette Ludwig)

    Dr. Bette Ludwig spent 20 years in higher ed working directly with students before leaving to build something different — a Substack (AI Can Do That), a consulting practice, and most recently, the Socratic AI Method, an AI literacy program that teaches students how to think critically alongside AI while keeping their own voice intact. That last part is the hard part. Craig opens with the question that drives the whole episode: Socratic dialogue requires you to already know enough to ask good questions. So what happens when a student doesn’t know enough to push back on what AI is telling them? Bette’s answer is both practical and unsettling — younger students literally don’t know what they don’t know, and that gap is where the real danger lives. The conversation moves into dependency territory when Craig shares a moment from his own morning: Claude froze while he was editing a manuscript, and he felt a flash of genuine panic. Two seconds later, he remembered he could just… write. But he names the uncomfortable truth — his students won’t have that fallback. Bette compares it to the panic we feel when the wifi drops, which is both funny and a little alarming when you sit with it. From there, the three dig into the policy mess — teachers across the hall from each other running opposite AI rules, students confused about what’s allowed, and educational systems moving at what Bette calls “a glacial pace” while the technology sprints ahead. Craig shares his own college’s approach: you have to have a policy, it has to be clear, but how restrictive or permissive it is remains your call. The non-negotiable? You can’t leave students in the dark. The episode’s most surprising thread might be Bette’s observations about how students actually use AI. It’s not just homework. They’re using it for companionship, personal problems, cooking questions, building apps — ways that don’t even register as “AI use” to most faculty. 
    Her closing point lands hard: students have never used technology the way adults assume they should, and they're going to do the same thing with AI.

    Key Takeaways

    1. The Socratic method has an AI prerequisite problem. You need existing knowledge to know what questions to ask, which means younger students are especially vulnerable to accepting AI output uncritically. Bette and Craig agree that junior/senior year of high school is roughly where the cognitive capacity for meaningful pushback begins.

    2. AI dependency is already happening to experienced users. Craig describes a two-second panic when Claude froze mid-edit. He recovered by remembering he could just write the way he always has. His concern: students who grew up with AI won't have that muscle memory to fall back on.

    3. The "helpful by default" design is a subtle problem. Craig raises the point that AI systems are programmed to be agreeable, which means they can lock students into a single mode of thinking without anyone noticing. The hallucinations get all the attention, but the quiet steering might be worse.

    4. Policy chaos is the norm, not the exception. Teachers in the same hallway can have opposite AI rules. Bette recommends clarity above all: whatever your policy is, make it explicit. In K–12, she argues for uniform policies. In higher ed, where faculty governance complicates things, Craig's approach works — require a policy, let faculty own the specifics.

    5. Grace matters more than enforcement right now. Both Craig and Bette push back on the "AI cop" mentality. Students sometimes cross lines they didn't know existed, just like past generations plagiarized without understanding citation rules. Teaching moments beat punitive responses, especially when the rules themselves are still being written.

    6. Students use AI in ways faculty don't expect. Companionship, personal problems, everyday questions, building apps. Bette's observation: students are as likely to use AI for roommate conflicts as for essay writing. Faculty who don't use AI themselves can't begin to understand these patterns.

    7. Education isn't moving fast enough. New York got an AI bachelor's program launched in fall 2025, which Bette calls "Mach speed for higher ed." Most institutions are still in the resistance-or-denial phase. The shared worry: AI across the curriculum could become another empty checkbox, like ethics across the curriculum before it.

    Links
    Dr. Ludwig's website: https://www.betteludwig.com/
    AI Can Do That Substack: https://betteconnects.substack.com/
    AI Goes to College: https://www.aigoestocollege.com/
    Craig's AI Goes to College Substack: https://aigoestocollege.substack.com/

    Mentioned in this episode: AI Goes to College Newsletter

    40 min
  4. FEB 2

    Human-AI Collaboration: Outsourcing vs Offloading and the Rise of Co-Produced Cognition

    Recording from the Deep Freeze: Craig broadcasts from snow-covered north Louisiana (running on generator and Starlink!), where AI helped him MacGyver a propane tank solution involving ratchet straps, a plastic bucket, and a shop light. Welcome to the wild world of practical AI applications.

    Featured Topics

    Oboe.com: The Future of Self-Directed Learning?

    Craig and Rob explore Oboe (oboe.com), a free AI-powered platform that creates customized courses on virtually any topic in minutes. Craig demonstrates by building a course on AI agents, and Rob becomes his first student. The hosts discuss:
    - How the platform auto-generates quizzes with reasonable multiple-choice options and helpful feedback
    - The potential to revolutionize textbook accessibility with low-cost or no-cost alternatives
    - Using Oboe to supplement existing textbooks (like adding blockchain content to their own textbook)
    - The limitations: shallow sourcing and need for instructor vetting
    - Credit to the AI and I podcast from Every.to (makers of Lex.page) for the discovery

    Security First: The Moltbot Warning

    Not all that glitters is AI gold. Rob raises important concerns about new tools like Moltbot that can automate processes but may introduce security vulnerabilities. Key takeaway: Educators must apply the same critical thinking they expect from students when evaluating new AI tools for classroom use.

    Craig's Three-Stage Hierarchy: A Framework for Human-AI Interaction

    The centerpiece discussion introduces Craig's developmental model for understanding how we work with AI:
    - Cognitive Outsourcing - AI does the task for you (the "easy" but often problematic approach)
    - Cognitive Offloading - AI handles specific components while you maintain control
    - Co-Produced Cognition - True collaborative thinking that produces outcomes neither human nor AI could achieve alone

    Craig shares his experience co-writing with Claude, comparing it to the collaborative process of updating their textbook with co-author Franz. The magic: AI enables 24/7 expert-level collaboration that would be impossible with humans alone.

    The Big Idea: This hierarchy should guide our teaching. Rather than telling students to "think critically" (a vague catchall), educators should actively move students from outsourcing toward co-produced cognition, where AI's power truly unlocks.

    Geeking Out on Affordances

    Craig unpacks how AI is fundamentally "a bundle of affordances" - potential uses that only matter when actualized. Using the metaphor of a rock (hammer? erosion control? weapon? stepladder?), he explains:
    - The same AI tool can be used to cheat on an assignment or to write a meaningless email nobody will read
    - What matters isn't just what AI can do, but which affordances we choose to actualize
    - Understanding affordances helps us guide students toward productive uses

    Rob adds that affordances can be actualized poorly (like dropping a rock on your toe), emphasizing the need for purposeful, intentional use.

    The Balanced Path Forward

    The hosts reject both AI extremism and AI evangelism, calling for nuanced, intentional engagement. Whether it's Oboe.com or ChatGPT, tools can be used for good or ill - context and purpose matter. The Challenge: You can't understand AI's affordances without using it. Even if your conclusion is not to use AI in your classroom, that decision should come from informed experimentation, not avoidance.

    Key Quotes

    "What we need to do as educators is we need to push students from that outsourcing to the offloading to the co-produced cognition. I see that as our main job with generative AI." - Craig

    "The whole idea of think critically I think is a catch all phrase that we use very often that's very hard to quantify... I do really like that example of pushing students towards that co-produced cognition." - Rob

    "If you don't use them, you're not going to know what they're capable of either harm or benefit.
    So it's really, I think anybody in higher ed, it's your responsibility to start using these tools." - Craig

    Episode Resources
    - Oboe.com - Free AI course creation platform (for now)
    - AI Agent Oboe.com course
    - AI and I Podcast from Every.to
    - Watch out for: Moltbot security concerns

    Bottom Line

    Don't be blindly pro-AI or anti-AI. Be intentionally informed. Understanding the affordances of AI tools - and helping students actualize them purposefully - may be one of higher education's most important responsibilities in 2025.

    AI Goes to College is your guide to navigating generative AI in higher education. Hosted by Dr. Craig Van Slyke (Louisiana Tech University) and Dr. Rob Crossler (Washington State University).

    Takeaways:
    - In the podcast, we discussed the emergence of Oboe.com, an innovative platform that facilitates self-directed learning through AI.
    - We emphasized the importance of critically evaluating new AI tools before implementing them in educational settings.
    - Our conversation highlighted the significance of distinguishing between cognitive outsourcing and cognitive offloading in the context of AI use.
    - The hosts expressed their belief that AI can democratize learning, but it must be used responsibly and with proper oversight from educators.
    - We reflected on the collaborative potential of AI, stressing that true innovation arises from synergistic human-AI interactions.
    - The episode concluded with a call to action for educators to engage with AI tools to better understand their affordances and implications.

    Companies mentioned in this episode: Google, Oboe, Every.to, Lex.page, Moltbot, Claude

    Mentioned in this episode: AI Goes to College Newsletter

    31 min
  5. JAN 14

    Confronting Higher Ed's Grade Economy: A Call to Action on AI

    Welcome to another episode of AI Goes to College, the podcast where Craig and Rob break down what’s really happening with Generative AI in higher education. In this episode, Rob shares a professional update and the hosts dive straight into a candid conversation about the urgent need for action when it comes to embracing and experimenting with AI in the classroom. Forget waiting for the “perfect plan.” Craig and Rob encourage faculty and academic leaders to start doing, iterating, and learning as the technology—and the educational landscape—continues to evolve. They tackle the risks and realities educators face, from teaching evaluations to institutional inertia, and explore the challenges of moving beyond a “grade economy” where effort is traded for grades. The conversation gets real about shifting mindsets, focusing on genuine demonstrations of learning, and the importance of collective action in higher ed to adapt to the AI transformation. Plus, get a practical tip to supercharge your workflow: how to use Chrome Split View (and Edge’s version) to work side-by-side with AI tools and documents. If you’re looking for honest discussion, actionable advice, and a bit of humor about the trials and opportunities of AI in academia, this episode is for you. Don’t miss out! 
    Takeaways:
    - The concept of the grade economy has led to a transactional view of education, where students equate effort directly with grades, rather than focusing on genuine learning.
    - It is essential for educators to embrace iterative approaches in their classrooms, similar to how college football playoffs evolved, rather than waiting for the perfect solution.
    - The rapid evolution of AI tools necessitates that educators continuously adapt their teaching methods to remain relevant and effective in fostering student learning.
    - We must challenge the conventional grading system that incentivizes minimal effort by students, and instead focus on developing intrinsic motivation to learn.
    - Transparency in teaching strategies and the incorporation of AI should be communicated to students to foster a collaborative learning environment.
    - Educational institutions must engage in systemic change to address the flaws in the current grading system, moving away from a production-line mentality towards genuine assessments of learning.

    38 min
  6. 12/22/2025

    Building Resilience in the AI Era: What Faculty Need to Know (Live from ICISER)

    Episode Description

    Join Craig and Rob for the very first live stream of AI Goes to College, recorded at the International Conference on Information Systems Education Research Workshop. In this special episode, we explore how generative AI is fundamentally changing knowledge work, starting with our own field of Information Systems as the "canary in the coal mine." Craig shares his surprising experience with vibe coding—creating deployable web applications and productivity tools in hours rather than days—and explains why this signals a massive shift coming for all knowledge workers. We also tackle the troubling trend of students using AI to avoid productive learning friction, discuss the dark side of AI monetization and data privacy, and wrestle with difficult questions about AI companionship in an increasingly lonely society. This conversation moves beyond the hype to examine both the genuine opportunities and serious concerns that educators and technologists need to grapple with as AI becomes embedded in every aspect of work and learning.

    Key Topics & Timestamps
    - Welcome and introduction to the live format
    - Rob's surprising AI use case: Students creating machine-voiced presentations to avoid public speaking
    - Craig introduces vibe coding and creating deployable apps in minutes
    - Information Systems as the "canary in the coal mine" for knowledge work disruption
    - When vibe coding works (and when it doesn't): Simple vs. enterprise applications
    - The 50% principle: "50% is greater than 100%"
    - How AI changes systems analysis and prototyping
    - The job market reality: Entry-level positions disappearing
    - What should we be teaching students now?
    - Privacy concerns and institutional AI tools
    - The monetization problem: When AI platforms need to make money
    - AI companionship and mental health concerns
    - Using AI for 24/7 policy questions and course support
    - Should we accept AI as a solution to technology-created loneliness?

    Key Insights

    The 50% Principle: Stop trying to get AI to do 100% of a task.
    Instead, focus on tools that save you half the effort—that's where the real value lies.

    Vibe Coding Reality: It's not for enterprise-scale applications, but it's revolutionary for rapid prototyping and creating simple, personal productivity tools without needing current coding skills.

    Productive Friction: Students are increasingly using AI to avoid uncomfortable but necessary learning experiences, like public speaking, removing the "friction points" that actually drive growth.

    The IS Canary: Information Systems professionals are experiencing AI disruption first, but similar transformations are coming for accounting, finance, law, and virtually all knowledge work.

    Privacy Warning: As AI companies struggle to monetize, expect increased data harvesting and advertising. Consider running local models for sensitive work.

    Resources Mentioned
    - AI Goes to College website: aigoestocollege.com
    - LM Studio: Tool for running large language models locally
    - Claude Code, Codex, Antigravity: Professional coding environments mentioned
    - Meta's Llama: Open-source AI model (though future releases uncertain)

    Credits
    Hosts: Craig Van Slyke and Rob Crossler
    Audio: Hazel Crossler
    Sponsored by: Association for Information Systems Special Interest Group on Education (SIG ED) https://ais-siged.org/
    Event: International Conference on Information Systems Education Research Workshop
    Special thanks to: Conference organizers Tanya McGill and Rosetta Romano

    Companies mentioned in this episode: Washington State University, AIS SIG ED, OpenAI, Google, Claude Code, Gemini, Copilot, Facebook, Prospect Press of Vermont

    Mentioned in this episode: AI Goes to College Newsletter

    47 min
  7. 11/24/2025

    AI, Friction, and the Future of Teaching and Learning: Lessons from Gemini 3

    Are you ready to rethink how AI is shaping higher education? Join Craig and Rob in episode 27 of AIGTC as they dive into the recent agentic shift in AI models like Google’s Gemini 3—and what this means for students, faculty, and the future of learning. In this thought-provoking conversation, Craig shares his unsettling experience with Gemini 3’s “agentic” behavior, where AI takes the reins with minimal user input—even when that’s not what the user asked for. The hosts examine how this frictionless, super-helpful technology might make academic shortcuts easier than ever, removing the crucial learning struggles that foster true understanding. Are we on the verge of a “skill inversion,” where students need expertise just to avoid cheating themselves out of learning? But it’s not all doom and gloom: Rob and Craig explore actionable solutions for instructors, focusing on process-oriented teaching, project-based learning, and authentic reflection assignments that resist easy automation. They challenge educators to try just one process-focused change in their next class and offer themselves as resources for peer collaboration and feedback.

    Throughout the episode, you’ll discover:
    - Why “agentic” AI could undermine deep learning—and what to do about it
    - The hidden dangers of frictionless assignment completion for student growth
    - Practical strategies to make student assessment more process-driven and meaningful
    - How reflection and professionalism can help students stand out in the AI era
    - The importance of radical thinking and institutional adaptation for the future of higher ed

    Plus, stay tuned for details on AIGTC’s first-ever live stream at the International Conference on Information Systems, and learn how you can join the hosts in Nashville or virtually for engaging Q&A and networking. The live stream will be held from 17:00 - 18:00 US Central time (UTC-6). Link forthcoming.
    If you’re an educator, administrator, or student eager for honest insights and expert advice on navigating AI in academia, this episode is essential listening. Discover how you can control what you can—and embrace the challenge of teaching and learning in an AI-driven world.

    Mentioned in this episode: AI Goes to College Newsletter

    43 min
  8. Creating the Classroom of Tomorrow: Stephen Fitzpatrick Discusses Generative AI

    10/28/2025

    Creating the Classroom of Tomorrow: Stephen Fitzpatrick Discusses Generative AI

    Are you an educator navigating the new world of generative AI, or a college faculty member wondering how your incoming students are being shaped by technology? Join hosts Craig and Rob on this episode of AI Goes to College as they sit down with Stephen Fitzpatrick—a veteran secondary school history teacher, debate coach, and leading voice on AI in education—to explore how artificial intelligence is fundamentally transforming the classroom experience, from high school to higher ed. Stephen shares his journey from classroom innovator to Substack thought-leader, detailing his hands-on experimentation with emerging AI tools like ChatGPT, Claude, NotebookLM, and specialized debate platforms. You'll hear candid stories about students' rapid normalization of AI use, from research projects and note-taking to more controversial applications like essay writing—and the ethical dilemmas teachers face in response. This episode reveals why banning AI in schools is a losing game, and why the true challenge is fostering critical thinking, curiosity, and responsible use among students. Discover how high school educators are wrestling with the balance between preserving “AI-free” learning spaces and adapting assignments for an AI-empowered world. 
    Stephen provides actionable insights on:
    - The rise of AI-powered research and note-taking among high schoolers—what college faculty need to know
    - The importance of clear, consistent policies on AI use across classes and institutions
    - How educators' comfort level with technology directly impacts their ability to guide students
    - Practical solutions for cultivating AI fluency and resilience in both teachers and students
    - Why peer-to-peer training and real-world use cases trump generic professional development

    Whether you're a teacher, administrator, or college professor, this conversation will equip you to meet students where they are in the age of AI—challenging old paradigms and preparing them for a future where intelligent technology is their constant companion. Stephen’s nuanced perspective, grounded in frontline experience and continuous experimentation, will inspire you to rethink resistance and embrace adaptation. Ready to hear the real story behind “AI Goes to College”? Tune in to learn how you can empower students to use AI as a tool for deeper thinking and lifelong learning—and don’t forget to check out Stephen’s Substack, Teaching in the Age of AI, linked in the show notes for even more practical wisdom.

    Links: Teaching in the Age of AI: https://fitzyhistory.substack.com

    Mentioned in this episode: AI Goes to College Newsletter

    49 min
4.8 out of 5 (16 Ratings)

About

Generative artificial intelligence (GAI) has taken higher education by storm. Higher ed professionals need ways to understand and keep up with developments in GAI. AI Goes to College helps higher ed professionals learn about the latest developments in GAI, how these might affect higher ed, and what they can do in response. Each episode offers insights on how to leverage GAI, and on the promise and perils of recent advances. The hosts, Dr. Craig Van Slyke and Dr. Robert E. Crossler, are experts in the adoption and use of GAI and in understanding its impacts on various fields, including higher ed.