Adjunct Intelligence: AI + HE

Adjunct Intelligence

Adjunct Intelligence: AI and the Future of Higher Education

Stay ahead of the AI revolution transforming education with hosts Dale, tech enthusiast and AI nerd, and Nick McIntosh, learning futurist. This weekly espresso shot delivers essential AI insights for educators, administrators, and learning professionals navigating the rapidly evolving landscape of higher education.

Each episode brings you a concise rundown of breaking AI developments impacting education, followed by deep dives into cutting-edge research, emerging tools, and practical applications that Dale and Nick are implementing in their own work. From classroom innovations to institutional strategy, discover how AI is reshaping teaching, learning, and educational operations.

Whether you're working in the classroom as a university lecturer or TAFE teacher, or simply passionate about the future of learning, "Adjunct Intelligence" equips you with the knowledge to transform disruption into opportunity. Business casual, occasionally humorous, but always informative.

  1. 13H AGO

    If AI Can Do It, Why Teach It? The Education Question Nobody Wants to Answer

    Dario Amodei has spent the past year writing essays about AI eliminating half of white-collar jobs. Then his company demonstrated it by having 16 AI agents build a C compiler for $20,000 — and published a legal plug-in that triggered a $1 trillion stock market sell-off the same week. Dale and Nick pull apart what actually happened, why the market panic reveals more about AI literacy than AI capability, and what any of this means for what universities should be teaching and why.

    [00:00] — Three events, one week: a $20K compiler, a $1 trillion market panic, and Dario Amodei's job displacement warnings.
    [02:15] — Who Amodei is and why his predictions are harder to dismiss than most tech CEOs'.
    [03:39] — The prediction timeline: from Machines of Loving Grace to Davos.
    [06:19] — Why The Adolescence of Technology argues this disruption is structurally different from every previous one.
    [10:12] — The C compiler project: 16 Claude agents, 100,000 lines of Rust, $20K in API fees, 99% test pass rate.
    [15:17] — A 2,500-line prompt file causes Thomson Reuters' worst trading day on record and wipes $1 trillion from software stocks.
    [18:13] — Why the panic was wrong — and what the recovery tells us about institutional switching costs.
    [21:19] — The gap between what was shipped and what the market priced in is an AI literacy problem.
    [22:59] — The wrong conclusion: if AI can code and do legal research, why teach either?
    [23:50] — Nonperishable skills only work anchored to domain content — and Danny Liu's stuff, skills, and soul framework.

    🎙️ Adjunct Intelligence is the weekly briefing for higher-ed professionals who want AI as a cheat code — not a headache.
Every episode:
• Real tests of AI tools in education and professional workflows
• Fast, Monday-morning actions you can actually try
• Clear signal through the noise (no hype, no jargon)
👉 Subscribe on [YouTube] | [Apple Podcasts] | [Spotify]
👉 Share this with a colleague who still says “I’ll figure AI out later”
👉 Join the conversation on LinkedIn with #AdjunctIntelligence
Stay curious. Stay intelligent. Stay the human in the loop.

    31 min
  2. MAR 1

    China Is Winning the AI Adoption War. We Just Haven't Admitted It Yet.

    In 2025, the AI race quietly split in two: one for building the smartest model, and another for getting everyone to use yours. Chinese labs chose the second race — and the data says they're winning. Dale and Nick break down how DeepSeek, Alibaba, and Kimi captured developers, startups, and soon entire education systems by being cheaper, open, and good enough. They examine why Airbnb ditched ChatGPT for Qwen, why 80% of startups pitching A16z are building on Chinese open-source models, and what this means for universities still teaching AI literacy through a single-tool lens. The conversation covers safety trade-offs, the equity problem of premium vs. free models, and why prompt engineering alone is already a relic.

    Timestamps:
    [00:00] — Nick sets the scene: DeepSeek's $6M model vs. OpenAI's $100M spend
    [00:48] — The two AI races: building the best model vs. winning adoption
    [02:33] — 80% of A16z-backed startups now building on Chinese models
    [04:13] — Dale's experience bargaining with Kimi's onboarding for a $0.99 subscription
    [05:08] — Percy Liang on why open-weight models drive faster adoption
    [06:00] — Apple choosing Gemini for Siri and what distribution beats benchmarks looks like
    [06:20] — OpenAI's precarious position: prediction markets give them 10% odds for top model by end of 2026
    [08:11] — China's national mandate: eight hours of AI education for every student, annually
    [09:19] — Estonia's similar move with mandatory AI training for teachers
    [10:57] — OpenAI's halfhearted pivot to open-source with GPT OS, and Meta retreating on Llama openness
    [12:30] — Predatory pricing patterns from Uber to Netflix — and why institutions should pay attention
    [14:22] — Beijing's chip exodus: ByteDance and Alibaba abandoning Nvidia for Huawei
    [14:51] — Switzerland's sovereign AI model as a third path beyond the US-China binary
    [16:32] — Ambient intelligence and the "good enough" vending machine that talks to you
    [17:07] — AI safety scores: DeepSeek and Alibaba Cloud both scored D/D-minus on existential safety
    [18:56] — Anthropic's Claude jailbroken for Chinese state-sponsored cyber espionage
    [19:20] — The equity problem: do we shame cash-strapped institutions into premium licensing?
    [20:55] — Dale's call for transparency: share failures and findings, don't hoard them
    [22:24] — The classroom reality: students trained on ChatGPT will graduate into Chinese AI infrastructure
    [23:22] — Dale's pitch for model comparison tools — seeing outputs side-by-side
    [25:10] — Both hosts on using multiple models: Claude, Gemini, and the "council of experts" approach
    [27:11] — Stop teaching tools, start building human judgment about AI infrastructure choices
    [28:14] — Prompt engineering as table stakes: why AI fluency in 2026 means understanding infrastructure

    31 min
  3. FEB 22

    The MoltBot Moment: When Enthusiastic Adopters Become the Biggest AI Risk

    In this episode of Adjunct Intelligence, Dale Leszczynski and Nick McIntosh examine what the viral MoltBot AI assistant reveals about AI security risks in education. They break down Simon Willison's "lethal trifecta" framework for AI agent vulnerability, the difference between shadow IT and shadow agentic AI, and why FERPA — written in 1974 for filing cabinets — can't handle autonomous agents acting on behalf of educators. The episode covers the Maryland school GPTZero privacy case, Ethan Mollick's "wizard era" framing, and Perplexity's Model Council.

    [00:00] — MoltBot goes viral: 68,000 GitHub stars, Mac Mini shortages, and a security nightmare.
    [04:23] — Two million AI agents join their own social network. Nick panics when they vanish.
    [06:13] — The real appeal: persistent memory, no context limits, a "heartbeat" that acts unprompted.
    [08:56] — Simon Willison's "lethal trifecta" — and why existing edtech already ticks all three boxes.
    [12:30] — Craig Hepburn's "employee model" for AI agents. Why some data silos are a feature.
    [15:25] — Shadow agentic AI vs. shadow IT: 22% of enterprises had staff running MoltBot unsanctioned.
    [19:18] — Maryland school uploads student work to GPTZero without consent. FERPA dates from 1974.
    [21:12] — Ethan Mollick's "wizard era" and the audit paradox of detection tools.
    [23:07] — Your most enthusiastic AI adopter may be your most dangerous. "Nutritional labels" for AI.
    [29:05] — Field Notes: Perplexity's Model Council runs queries across multiple models simultaneously.

    33 min
  4. FEB 8

    2025: The Year AI Stopped Being a Tool and Became Infrastructure

    Season 2 opens with Dale and Nick looking back on the year AI became ubiquitous — and what that actually meant for higher education. They walk through the safety failures that defined 2025, including lawsuits linking AI to student deaths and every major lab receiving a failing safety grade. They tackle the now-dead plagiarism debate, the financial ouroboros propping up trillion-dollar valuations, and why AI literacy certificates already feel obsolete. The centrepiece is Dale's Napster analogy: when the product can be generated for $20/month, universities have to sell the concert, not the CD. Part 1 of 2.

    [00:00] — Three facts that sum up AI in 2025: $750B valuations, student death lawsuits, and university bans
    [02:37] — Andrew Maynard's "critical disconnect" model and why 2025 proved both its assumptions wrong
    [03:54] — AI shifts from chatbot to infrastructure — Operator, Claude Code, browsing agents, and Fei-Fei Li's World Labs
    [05:16] — The plagiarism debate is dead: Dale on writing horse-drawn carriage speeding tickets in a robotaxi era
    [07:13] — The safety collapse: lawsuits against OpenAI, the AI Safety Index failing every major lab, and Grok's deepfake problem
    [10:29] — The double-edged sword: Reid Hoffman's optimism vs. the real mental health costs
    [12:08] — Anthropic's Claude Constitution and whether universities should be shaping AI's moral frameworks
    [14:42] — The salad bar problem: why prompt engineering certificates are already the new "proficient in Microsoft Word"
    [17:38] — The financial ouroboros: Galloway, Oracle's $80B loss, and validating stock prices with compliance budgets
    [22:04] — Ghosting a degree, the Napster analogy, and why universities need to find their concert model

    26 min
  5. Sora 2 & Robots: Altman’s six-month “fix it or nix it” ultimatum + Robots are coming, kinda

    10/05/2025

    Sora 2 & Robots: Altman’s six-month “fix it or nix it” ultimatum + Robots are coming, kinda

    Sora 2 just vaulted over the uncanny valley, and Sam Altman swears he’ll yank the cord if it doesn’t improve our lives. Dale and Nick unpack what “ChatGPT-for-video” really means, why OpenAI’s new one-click checkout gambit turns 700M weekly users into impulse buyers, and how AI is shifting from shiny lab demo to invisible plumbing across the Apple, Google, and Microsoft stacks. We celebrate the return of Claude Sonnet 4.5 as coding champ, head to the jobsite to ask why your plumber’s safe from robots — for now — and bust the jargon on AI “artifacts.” Higher ed, commerce, and the trades collide in this fast-forward tour of 2025’s agentic economy.

    • Sora 2 & the uncanny valley: physics that finally behave and Altman’s six-month “fix it or nix it” ultimatum
    • TikTok meets Hollywood: OpenAI’s Cameo-style selfie videos and Meta’s “Vibes” clone
    • Checkout in ChatGPT: conversational commerce, Shopify integration, and the era of AI-Optimised (AO) websites
    • Productisation of AI: Apple’s quiet AI everywhere, OpenAI’s move from shovel supplier to SaaS competitor
    • Claude 4.5 comeback & artifacts explainer: why Anthropic’s alignment focus matters for educators and builders
    • Robots vs. trades: Tesla Optimus, BYD units — and the irreplaceable tacit skill in your hands

    Takeaway: the lab era is over; AI is plumbing. The question isn’t if tech is ready — it’s whether we are. Hit Subscribe, drop a review, and stay human-in-the-loop.

    34 min
  6. The Accidental AI Economy: When Agents Run the Market + The Empire of AI

    09/28/2025

    The Accidental AI Economy: When Agents Run the Market + The Empire of AI

    This week on Adjunct Intelligence, Dale and Nick dive headfirst into the accidental AI economy — a system already running faster than human decision-making. From Google’s new Agent Payments Protocol to frontier models caught scheming, the episode unpacks how markets, education, and everyday life are shifting at machine speed.

    We explore:
    • Why Google’s payments protocol could mark the birth of a new economy.
    • How ChatGPT quietly became the world’s biggest educational institution (250M daily learning chats).
    • What frontier models’ scheming behavior means for safety, trust, and higher education.
    • Why TEQSA is telling Australian universities to redesign assessment instead of chasing detection.
    • Chrome’s Gemini integration and the invisible AI infrastructure shaping the web.
    • The empire analogy: AI labs acting like historical powers, extracting resources, labor, and control.

    As always, we finish with a jargon buster and a touch of humor (including will.i.am’s surprising new role as an AI professor). Stay curious. Stay intelligent. Stay the human in the loop.

    27 min
  7. Why AI Adoption Is a Marathon, Not a Sprint

    09/21/2025

    Why AI Adoption Is a Marathon, Not a Sprint

    In this episode of Adjunct Intelligence, Dale and Nick sit down with Professor Rahil Garnavi, Director of the RAISE Hub at RMIT and a former IBM Research leader with 50+ AI patents. Rahil shares her unique perspective on why the challenge of AI in higher education isn’t about building smarter models, but about building trust, skills, and sustainable adoption. From classrooms to boardrooms, she unpacks why universities must act as both “skills engines” and trusted conveners, how to bridge the gap between technical possibility and real-world usability, and why AI adoption is best understood as a marathon rather than a sprint. If you’re a higher ed leader wondering how to move beyond policy debates and into practical, responsible integration of AI, this conversation offers clarity, realism, and optimism.

    01:00 – Rahil’s journey: from IBM Research to RMIT’s RAISE Hub, and why she shifted focus from building AI to embedding it responsibly.
    03:40 – Universities’ dual role: why higher education must act as both a “skills engine” and a trusted convener for industry and government.
    05:15 – Building AI capability at scale: how the RAISE Hub is creating AI fluency across all disciplines, not just STEM.
    07:00 – From AI users to AI thinkers: designing authentic assessments that emphasize process, critical thinking, and integrity over polished outputs.
    09:40 – The talent pipeline: how universities can stay a step ahead of industry demand and prepare graduates for a rapidly shifting workforce.
    11:00 – National conversations on AI: Rahil’s work with CEDA’s AI Community of Best Practice and why policy, trust, and skills development go hand in hand.
    12:30 – Marathon vs. sprint: why AI technology moves fast, but adoption, governance, and trust take endurance.
    15:00 – The plug-and-play myth: the disconnect between glossy tech marketing and the messy reality of organizational adoption.
    17:00 – Australia’s cautious stance: why readiness scores are low, the risks of “pilot mode,” and what’s needed to move forward.
    18:30 – Real use cases: where AI is already making a difference in business and higher ed, even if the wins aren’t glamorous.
    20:00 – Critical engagement, not replacement: why AI should be seen as a multiplier of human thinking rather than a substitute.

    To find out more about Rahil Garnavi, view her LinkedIn profile: https://www.linkedin.com/in/rahil-garnavi-phd/

    24 min

