AI for Lifelong Learners Podcasts

Tom Parish

Beyond the AI hype and the 'work-faster' mindset, let's consider how AI might affect our enjoyment of life and our pursuit of curiosity. It might be just the tool you need to help you along as a lifelong learner. Between the extremes there is always a middle ground. Seek that and feed the good wolf along the way for the better good. aiforlifelonglearners.substack.com

  1. 11/12/2025

    Feel First: Larry Seyer on Live Performance

    Show Notes: AI for Lifelong Learners - Larry Seyer Interview

    Welcome back, everyone. You’re listening to AI for Lifelong Learners! Today, I’m joined by my friend and mentor, Larry Seyer. Larry is a Grammy-winning engineer, producer, and musician, as well as the creator and host of The Larry Seyer Show. We’re going to talk about where performance meets engineering and audience interaction. We discuss Larry’s creative instincts, his live format, how his home-built tools come together, and what AI actually does in his pipeline. Larry brings a dose of reality to building your own tools around your own creative flow. Larry, welcome to the show!

    What You Will Learn in This Show
    * How to integrate AI tools into live music performance without losing authenticity and spontaneity
    * Practical applications of Claude Code for building custom broadcasting tools
    * Which AI platforms excel at specific creative tasks (coding vs. lyrics vs. current events)
    * How to automate production elements while maintaining focus on musical performance
    * Strategies for building interactive audience experiences using AI-generated content
    * The importance of maintaining “feel” as the primary driver when choosing between human and automated elements

    Who This Episode Helps
    * Solo performers looking to enhance their live shows with automated production elements
    * Musicians and content creators interested in leveraging AI without sacrificing their unique creative voice
    * Live streamers wanting to broadcast simultaneously across multiple platforms
    * Audio engineers and producers curious about integrating AI into their workflow
    * Anyone interested in practical, real-world applications of AI in creative fields

    Key Points and Takeaways

    Live Performance Energy
    * Live audience interaction creates a feedback loop that elevates performance energy beyond what’s possible in studio settings
    * Larry broadcasts simultaneously to YouTube, Facebook, LinkedIn, X (Twitter), and Rumble using Restream
    * Interactive elements like live chat keep performances dynamic and engaging

    Show Switcher Innovation
    * Custom-built module for Bitfocus Companion that automates camera switching during live performances
    * Randomly selects camera angles at varied intervals, allowing the performer to focus on playing (a rough sketch of this idea appears after these notes)
    * Available free on GitHub for other performers to use
    * Solves the challenge of being a solo performer who can’t manually switch cameras while playing

    AI Tool Recommendations by Purpose
    * Claude Code (CLI): Best for programming and custom software development
    * ChatGPT: Excellent for lyrics, text content, and creative writing
    * Grok: Best for current events and less biased news information
    * Gemini: Fast but unreliable in Larry’s experience
    * Zencoder: Good for coding within the CLion IDE but had reliability issues

    Creative AI Integration
    * AI helps generate weekly show themes based on current events or seasons
    * Movie trivia and quotes generated by AI for audience engagement
    * Jukebox Hero feature lets the audience vote on song selections
    * Automated scrolling quotes related to show themes

    OTTO Project (Organic Trigger Timing Orchestrator)
    * AI-assisted drummer software in development
    * Creates dynamic drum patterns that adjust to song sections (verse, chorus, bridge)
    * Incorporates human-like timing variations (slightly ahead on the chorus, behind on the verse)
    * Part of the Grimm for Reaper project

    Maintaining Authenticity
    * “Feel is always number one” - choose whatever serves the emotion of the music
    * Use loopers to maintain spontaneity within structured performances
    * Create flexible song arrangements that allow for extended solos or reordered sections
    * Prepare playlists to avoid dead air while maintaining creative freedom

    Practical Tips for Musicians
    * Start small: Use AI to help refine lyrics without replacing your creative voice
    * Maintain the human element: AI should enhance, not replace, performance energy
    * Test reliability: Some AI tools have bugs that can disrupt a live performance
    * Layer automation gradually: Add one automated element at a time

    Resources
    * Show Switcher: Available free on Larry’s GitHub profile
    * Bitfocus Companion: Control software for video switchers
    * Quantiloop Pro: iPad looper app for flexible song arrangements
    * Restream: Multi-platform streaming service
    * Larry’s Sample Libraries: Free on PianoBook
    * Claude Code: Command-line interface for AI coding assistance
    * ATEM Mini Switchers: Hardware video switchers for production

    Connect
    * Website: LarrySeyer.com
    * Live Show: The Larry Seyer Show - Thursdays at 7:00 PM Central Time
    * Platforms: YouTube, Facebook, LinkedIn, X (Twitter), Rumble
    * GitHub: https://github.com/larryseyer (Larry’s profile for free tools and software)
    * Past Episodes: Available on YouTube for replay

    Larry Seyer brings over 60 years of experience as a Grammy-winning engineer, producer, and musician to his innovative approach to AI-assisted live performance. His Thursday night shows demonstrate how technology can enhance rather than replace the human element in music. See and hear him at http://larryseyer.com

    Get full access to AI for Lifelong Learners at aiforlifelonglearners.substack.com/subscribe
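    The Show Switcher idea is easy to picture in code. The sketch below is only an illustration of the core logic described in the notes (pick a random camera, hold it for a varied interval, repeat); the camera names and timing bounds are invented, and the real module, free on Larry’s GitHub, drives Bitfocus Companion rather than printing to a console.

```python
import random
import time

# Hypothetical camera inputs and timing bounds; the real Show Switcher
# talks to an ATEM switcher through Bitfocus Companion instead of printing.
CAMERAS = ["wide", "keys-close", "face", "hands"]
MIN_HOLD, MAX_HOLD = 4.0, 12.0  # seconds to stay on one shot (assumed)

def run_show_switcher(total_seconds: float = 60.0) -> None:
    """Cut to a randomly chosen camera and hold it for a varied interval."""
    elapsed = 0.0
    last_cam = None
    while elapsed < total_seconds:
        # Never cut to the shot that is already live.
        cam = random.choice([c for c in CAMERAS if c != last_cam])
        hold = random.uniform(MIN_HOLD, MAX_HOLD)
        print(f"Switch to {cam!r}, hold for {hold:.1f}s")
        time.sleep(hold)
        elapsed += hold
        last_cam = cam

if __name__ == "__main__":
    run_show_switcher()
```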

    34 min
  2. 09/02/2025

    What can a software developer who thinks like a teacher show us about AI?

    Episode Overview

    In this insightful episode of AI for Lifelong Learners, host Tom Parish sits down with Preston McCauley, author, educator, and software developer who wrote "Generative AI for Everyone: A Practical Guidebook." Preston brings his unique perspective, gained from teaching, curriculum development, and hands-on AI system building, to share practical insights about mastering artificial intelligence. He reveals how he built an AI system to teach himself AI and discusses his approach to staying ahead of technology trends by "living five years in the future." The conversation covers essential topics for anyone looking to use AI tools effectively, including Preston's CLEAR methodology for prompting, the current state of various AI models such as ChatGPT-5 and Claude, understanding AI agents, and the growing importance of open-source models. Preston demystifies complex AI concepts and provides practical frameworks that help transform AI from a simple chat tool into a true collaborative partner, making this episode essential listening for both beginners and experienced AI users.

    Book Snapshot

    Generative AI for Everyone by Preston McCauley is the ultimate guide to understanding and harnessing the potential of artificial intelligence. Whether you're new to AI or an experienced professional, this book equips you with the tools to revolutionize how you work, learn, and create in an AI-driven world. At the guide's core is prompt engineering, a critical skill for effectively communicating with AI systems like ChatGPT and other advanced models. Learn how to:
    • Craft effective prompts by mastering key elements like context, clarity, constraints, and adaptation.
    • Refine your prompts iteratively to achieve precise, high-quality outputs.
    • Use proven frameworks such as CLEAR AI (focusing on 15 essential elements) and FOCUS AI (creating reusable templates for various applications).
    With practical examples, including a creative take on Goldilocks and the Three Bears, you'll gain hands-on experience in constructing and perfecting prompts.

    What You'll Learn in This Episode

    [00:33] - Preston's "Living Five Years in the Future" Philosophy
    • Preston's work mantra of staying ahead of mainstream technology adoption
    • Building his first AI system specifically designed to teach him about AI
    • The importance of immersing yourself in new technology, like learning a new language
    • How this approach helped him anticipate the evolution of GPT and early AI systems

    [03:10] - Reverse Engineering the Learning Process
    • Creating an AI-powered curriculum and syllabus for learning AI
    • Breaking down complex concepts into teachable structures
    • Learning faster through AI assistance than traditional methods
    • The importance of understanding things in a way you can teach them

    [05:12] - Writing "Generative AI for Everyone"
    • The book's intentionally timeless design and methodology
    • Using AI assistants as a team to edit and review the book
    • Applying the Goldilocks principle ("getting it just right") to AI interactions
    • Creating content that flows conversationally rather than lecturing at readers

    [07:41] - Who Benefits from AI Training
    • Designing content for diverse demographics beyond tech professionals (gardeners, story writers, etc.)
    • Breaking through the "AI is just chat" mindset
    • Moving from using AI as a tool to collaborating with it as a partner
    • The visible "gasp" moment when people grasp AI's collaborative potential

    [10:41] - Common Barriers to AI Adoption
    • Trust and privacy concerns are holding people back
    • Fear of "breaking" something when experimenting with AI
    • The challenge of sycophantic AI responses that always agree
    • Understanding AI inference (like Wheel of Fortune) vs. factual truth

    [14:19] - Comparing Current AI Models
    • Claude 4.1 excels at deep written thought
    • ChatGPT-5 offers a good balance but requires proper prompting structure
    • Perplexity combines search engine functionality with AI
    • Gemini 2.5 for text generation
    • The reality that businesses don't need "supercomputer" level AI for most tasks

    [18:48] - ChatGPT-5 Deep Dive
    • Managing expectations vs. reality for the new model
    • The thinking mode and when to use it vs. faster responses
    • Cost and energy challenges of training advanced models
    • Why technical users may be disappointed while everyday users find it sufficient

    [21:57] - Cross-Platform Prompting Strategy
    • Running the same prompt across multiple LLMs for comparison
    • Understanding that different models provide different perspectives
    • All LLMs use similar training datasets but produce varied outputs
    • The encyclopedia analogy - different sources tell stories differently

    [24:18] - Identifying AI-Generated Content
    • The telltale word "unlock" frequently appears in AI content
    • Common patterns across different AI models
    • Techniques for getting deeper, less generic responses
    • The importance of reflection and going beyond first-level results

    [27:16] - The CLEAR Methodology Framework (a rough prompt-template sketch appears after these notes)
    • Clarity: Setting the AI on the right path (GPS route entry)
    • Limits: Defining boundaries and constraints (routes along the way)
    • Examples: Showing what good looks like, not just telling
    • Adaptation: Adjusting when obstacles arise (like construction on I-635)
    • Reflection: Both AI and human reviewing the output
    • Why this framework is more important than ever with GPT-5's thinking model

    [31:28] - Practical Prompting Tips
    • Adding confidence rankings (1-5 scale) to AI responses
    • Using "Does this make sense?" as a quick reflection check
    • Asking AI for clarification questions before proceeding
    • Building nested inference and establishing references
    • Not going more than 3-5 requests deep to maintain context

    [35:44] - Understanding AI Agents
    • Agents as specialized team members with specific roles (like a marketing team)
    • The difference between task-based agents and truly agentic systems
    • The importance of providing GPS-like structure to prevent wandering
    • Achieving 50% workflow automation with human review
    • CrewAI as an example of truly agentic systems

    [40:36] - Open Source and Local Models
    • LM Studio, Ollama, and GPT4ALL for running models locally
    • Hardware requirements: 13 billion parameters or less for responsive performance on a Mac M1 Pro
    • Privacy benefits of running models locally
    • GPT-OSS 20B is the current best open-source model at this size
    • Memory optimization advances allowing larger models on consumer hardware

    [49:02] - Beyond Fine-Tuning
    • Why fine-tuning isn't always necessary anymore
    • Alternative techniques like advanced RAG (Retrieval-Augmented Generation)
    • Cost-effective approaches to domain-specific knowledge
    • Building medical AI models with minimal training using Unsloth

    [53:07] - Preston's Future Projects
    • New intelligent AI website with six AI personalities
    • MELD framework (Model Engagement Language Directive) - version 1 open-sourced
    • Upcoming book: "Generative AI for Everyone: Images"
    • New frameworks for complex image and brand structures

    Resources
    * Book: "Generative AI for Everyone: A Practical Guidebook" by Preston McCauley
    * Reference: The Goldilocks principle applied to AI interactions
    * Newsletter: The White Box by Ignacio (Nacho)
    * Organizations: OpenAI, Anthropic, Google, Hugging Face
    * Tools Mentioned: ChatGPT-5, Claude 4.1, Perplexity, Grok, Gemini, LM Studio, Ollama, GPT4ALL, CrewAI, Keras, Unsloth, Google Colab

    Connect
    * LinkedIn: Preston McCauley
    * Website: clearsightdesigns.com
    * Book (physical copy): books.by/clearai
    * Amazon: "Generative AI for Everyone: A Practical Guidebook"

    Enjoyed the episode? Share these notes and help more learners discover AI insights!

    Get full access to AI for Lifelong Learners at aiforlifelonglearners.substack.com/subscribe
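    Preston's CLEAR elements are easy to experiment with in plain text. The sketch below is a rough, assumed arrangement of the five elements into a single prompt string; it is not the template from the book, and every label and default wording here is illustrative only.

```python
def build_clear_prompt(task: str,
                       limits: str,
                       example: str,
                       adaptation: str = "Ask me clarifying questions before you start.",
                       reflection: str = "Rate your confidence from 1 to 5 and explain why.") -> str:
    """Assemble a prompt from the five CLEAR elements described in the episode:
    Clarity, Limits, Examples, Adaptation, Reflection. Labels are illustrative."""
    return "\n\n".join([
        f"Clarity: {task}",
        f"Limits: {limits}",
        f"Example of what good looks like: {example}",
        f"Adaptation: {adaptation}",
        f"Reflection: {reflection}",
    ])

if __name__ == "__main__":
    prompt = build_clear_prompt(
        task="Summarize this article for a newsletter audience of lifelong learners.",
        limits="Keep it under 200 words and avoid jargon.",
        example="A short, friendly paragraph followed by three takeaway bullets.",
    )
    # Paste the same assembled prompt into several LLMs to compare their answers,
    # echoing the cross-platform prompting strategy from the episode.
    print(prompt)
```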

    57 min
  3. 08/12/2025

    Fraser Gorrie on momentum, no-code tools, AI development and the two-pass mindset

    Who this episode helps
    * Builders graduating from no-code prototypes to production.
    * Leaders evaluating whether to bet on n8n, Node-RED, or an AI-assisted code stack.
    * Curious newcomers who want a momentum-first approach to learning modern tools.

    Fraser Gorrie — developer, consultant, and systems problem-solver who thrives on constraint-heavy projects and collaborative iteration. Contact: FraserGorrie.com

    Episode theme: Why our builds stall, how to get momentum back, and when to move from vibe-coding prototypes to production-ready software.

    Introduction

    If you’ve ever opened a promising tool and lost your momentum three minutes later (permissions, APIs, mystery checkboxes), this conversation is your map out of the maze. Veteran developer and systems thinker Fraser Gorrie joins host Tom Parish to share hard-earned advice from decades of coding, consulting, and experimenting with no-code, low-code, and today’s “vibe-coding” AI tools. We dig into: keeping momentum when everything changes weekly, choosing tools without getting trapped by them, documenting flows so you don’t drown in your own success, and a counter-intuitive practice that speeds everything up: tinker first, then restart with structure.

    Key points and takeaways

    Learning & momentum
    * Momentum is the hidden cost center. When a tool throws friction (auth, IDs vs files, function naming dilemmas, opaque UI), the loss of momentum can erase prior gains unless you have a recovery plan.
    * Care about the project. Passion provides the energy to push through the inevitable stalls.
    * Use intuition as a signal. If something “doesn’t feel right,” pause and reassess rather than muscling through.

    Choosing and testing tools
    * Run a quick “worth-time” checklist before committing: Does it have all of the UI control you need? Does it have API access, server calls, payments, and so on? Find convincing examples that use the same tech, not just marketing hype.
    * Know your personality limits. Comprehensive types go too wide with alternatives and make slow progress; intuitive types go deep on one option that may be the wrong one. Adjust your approach so you don’t burn cycles installing four payment processors “just to compare.”
    * Pair up and trade up. You become the average of the five tools and LLMs you use most. Periodically rotate your LLM tools, or at least ask for two different LLM solutions, to avoid single-tool bias and to avoid flattering, uncritical, and ultimately time-wasting solutions.

    No-code, low-code, vibe-coding
    * No-code’s promise is speed: provide something visible for client feedback quickly. Its trap is that the last 20% of what you need might be unreachable, especially when you need precision or need to scale.
    * Low-code buys wiggle room with custom scripts inside nodes, but you are still restricted by what the flow of nodes can do. The trees (nodes) can look good, but the forest may not be buildable.
    * Vibe-coding (LLM-driven development) lets you describe the “above-water” intent while the model fills in the iceberg beneath, but prototypes often need a complete restart, with architecture for scale, internationalization, and performance.

    Process over product: the two-pass learning cycle
    * Pass 1 - Tinker on intuition. Get the messy version working; learn the terrain.
    * Pass 2 - Restart with structure. Break the problem into steps, organize your files, name variables, factor modules, and add versioning. Be willing to do Pass 3 or 4 if new design insights keep arriving.

    Documenting and not getting lost
    * Keep a paper journal. Muscle memory helps; jot failures and speed bumps as you hop between and even within tools.
    * Treat flows like code. Use naming conventions, modular “snippets,” comments, and visual labels. If you copy a node, label the copy with why and a date so future-you can safely delete it.
    * Screenshots are cheap insurance. When a dynamic form finally works, capture it.

    Node-RED vs n8n (server-side automation)
    * Node-RED passes a single msg object from node to node. It’s great for industrial/home automation and server workflows; the pattern is consistent and easy to reason about. (A toy sketch of this difference appears after these notes.)
    * n8n lets any node reach back to any earlier node’s data. It is both powerful and popular (including self-hosting to escape per-execution pricing). n8n is also well wired for AI use cases. However, n8n is changing almost weekly, and execution paths (the big picture of what you want to do) can feel opaque without careful discipline.
    * Migration caution: Visual similarity between these two tools does not mean conceptual equivalence; don’t transfer mental models 1:1 between them.

    Performance, scale, and production reality
    * Prototypes ≠ products. A no-code build that’s “good enough” may fail on performance (for users, sub-second page loads matter). Often the only answer is recoding with the right architecture.
    * Security and updates are non-negotiable. Today’s cadence demands staying current, especially with server-facing tools.

    AI prompts and context
    * AI prompts are context-bound. Save good prompts, but expect to regenerate and refactor because conditions differ. LLMs are changing too fast for a single prompt to keep working indefinitely.
    * Avoid over-specifying. Over-constraining can force confident nonsense. Use guardrails sparingly and prefer iterative clarification.
    * Ask for alternatives. Request two approaches to see the option space and to test whether you’ve provided enough context.

    Versioning and visualization
    * Adopt Git/GitHub once a prototype stabilizes. Commit early, commit often, and write helpful commit messages. AI will help with all of that.
    * Let your AI dev environment help. Modern tools can diff changes and even generate sequence/flow diagrams from code so your docs stay in sync. It’s a great feeling to know, in a nutshell, what you just changed after a long AI coding session.

    Human help compounds
    * Find a buddy or mentor. Bring them in after you’ve tinkered. Be sure you know what you want and how difficult it was to get it. You provide context and direction; they provide pattern recognition and early course-corrections. The relationship benefits both sides.

    Interfaces and ergonomics
    * Interfaces will always bug you. Accept some friction. Learn only what you need to ship. Chasing perfect ergonomics can destroy momentum faster than the quirks themselves. Find the good-enough point and get your project out into the world.

    Mindset for the long game
    * Expect to rebuild. With LLMs collapsing build time, rebuilding with better insight is often faster than patching a shaky base.
    * Passion + discipline beats tool-churn: care about the outcome, journal the path, refactor the plan, and keep momentum sacred.

    Get full access to AI for Lifelong Learners at aiforlifelonglearners.substack.com/subscribe
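    To make the Node-RED versus n8n distinction concrete, here is a toy sketch in plain Python (not real Node-RED or n8n code) of the two data-flow models described above; all node names and values are invented.

```python
# Toy illustration (not real Node-RED or n8n code) of the data-flow
# difference discussed above. Node names and payload values are invented.

def fetch(msg):       # pretend "fetch price" node
    msg["payload"] = {"price": 100.0}
    return msg

def discount(msg):    # pretend "apply discount" node
    msg["payload"]["price"] *= 0.9
    return msg

def invoice(msg):     # pretend "build invoice" node
    msg["payload"]["note"] = f"Total: {msg['payload']['price']:.2f}"
    return msg

# Node-RED style: one msg object flows through the chain; each node sees
# only whatever the previous node left inside it.
msg = {}
for node in (fetch, discount, invoice):
    msg = node(msg)
print(msg["payload"]["note"])

# n8n style: every node's output stays addressable by name, so a later
# node can reach back to any earlier node's data, not just its neighbor.
results = {}
results["Fetch"] = {"price": 100.0}
results["Discount"] = {"price": results["Fetch"]["price"] * 0.9}
results["Invoice"] = {
    "original": results["Fetch"]["price"],   # reaching back two nodes
    "total": results["Discount"]["price"],
}
print(results["Invoice"])
```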

    56 min
  4. 08/06/2025

    Leslie Gruis on “The Privacy Pirates” and how to protect your privacy

    Episode Overview

    Former NSA intelligence officer Dr. Leslie Gruis joins host Tom Parish to unpack her book The Privacy Pirates: How Your Privacy Is Being Stolen and What You Can Do About It. Gruis traces the roots of American privacy from Magna Carta ideals to smartphone-era dilemmas, explains why federal protections lag behind Europe’s GDPR, and offers concrete steps listeners can take to guard their data. The conversation also dives into AI’s impact on education, her experiences as a STEM teacher, the hidden risks of school-issued laptops, and why device convenience often masks deeper trade-offs.

    Book Snapshot

    The Privacy Pirates uses the story of 14-year-old Alice to illustrate how companies and governments mine personal data. Gruis blends humor and clear language to explain:
    * The historical link between privacy and American freedom
    * How corporate surveillance eclipses government intrusion
    * Practical tactics for “defeating the Privacy Pirates”

    What You’ll Learn in This Episode
    1. Origins of Privacy: How First and Fourth Amendment principles underpin today’s privacy expectations.
    2. Tech’s Double-Edged Sword: The internet promised a knowledge utopia but evolved into what Gruis calls “a sewer full of inappropriate content.”
    3. AI in Classrooms: School-issued laptops boost administrative efficiency yet narrow student curiosity and throttle deep reading.
    4. GDPR vs. U.S. Patchwork: Europe’s high watermark forces companies to comply globally, while U.S. states fill the federal gap with a mosaic of privacy laws.
    5. Subscription Myth: Paying for a service does not guarantee data protection; the real currency is attention and personal profiling.
    6. Quick-Win Defenses: Always-on VPNs, reputable ad blockers, privacy-focused browsers, and a strict “LinkedIn only” social policy.
    7. Smart Home Red Flags: Internet-connected thermostats, TVs, and doorbells harvest more information than most users realize.
    8. Privacy Literacy for All Ages: From simple phone “diary” analogies for teens to due-diligence tips on kids’ games like Roblox for grandparents.
    9. Historical Milestones: Telegraph wiretaps, the 1934 Communications Act, FOIA (1967), and how each technological leap reshaped “search and seizure.”
    10. 2030 Vision: Success looks like a comprehensive U.S. privacy law, ethical AI audit standards, and public fluency in “good-touch / bad-touch” data practices.

    Resources
    * Book: The Privacy Pirates (Amazon)
    * Reference: Lawrence Lessig’s Code and Other Laws of Cyberspace
    * Organizations: Electronic Frontier Foundation (EFF)
    * Tools Mentioned: VPNs, privacy-centric browsers

    Connect
    * Leslie Gruis: LinkedIn
    * Tom Parish & AI for Lifelong Learners: Substack

    Enjoyed the episode? Share these notes and help more learners outsmart the Privacy Pirates.

    Get full access to AI for Lifelong Learners at aiforlifelonglearners.substack.com/subscribe

    48 min
  5. 05/29/2025

    Dr. Justin Traxler on functional strength training and the right role for AI (for now)

    Welcome to another AI for Lifelong Learners podcast, your companion in exploring the ever-evolving landscape of artificial intelligence and its real-world impact on our continuous journey of growth and understanding of ourselves, life, and others.

    In this featured episode, we step aside from the usual format to bring you an insightful conversation led by yours truly, Tom Parish. My wife, Donna, and I share a candid and relatable journey into the world of fitness and health as older adults. We recount our experiences navigating a sea of online opinions and advice, the challenges of finding what truly works, and our eventual success through personalized human coaching. The episode culminates in an interview with our coach, Dr. Justin Traxler, a physical therapist who specializes in strength training for adults.

    You may be wondering how a deep dive into exercise for seniors and physical therapy connects with "AI for Lifelong Learners." This show is powerfully relevant. Dr. Traxler provides a fascinating, on-the-ground perspective on how AI is (and isn't) currently used in his practice. He discusses the potential of AI for generating workout programs but underscores its significant limitations in accounting for crucial individual factors: age, previous injuries, fear, specific health conditions, and the nuanced goals of a patient. His insights perfectly encapsulate a core theme of this newsletter: discerning where AI offers genuine utility versus where personalized, human expertise and connection remain irreplaceable. Dr. Traxler's professional viewpoint on AI in healthcare and fitness provides a critical lens on the technology's current capabilities and ethical considerations, reminding us that while AI can be a powerful tool for information and efficiency, it's not yet a substitute for tailored, empathetic, and context-aware human guidance in complex, personal domains. Join us for this important discussion that illuminates the practical realities of AI and the enduring value of human-centered learning and support.

    What you will learn in this show:

    * Limited current use of AI in practice (for now): His practice uses AI primarily for administrative tasks like generating social media hashtags and captions, not for patient encounters or creating exercise programs, mainly due to liability, HIPAA (patient information privacy), and the current limitations of AI.
    * AI for personalized workout programs – a cautious view: While acknowledging that people and some trainers use AI (like ChatGPT) to create workout programs, he sees a significant con: AI cannot yet account for the multitude of crucial factors essential for safe and effective programming. These include age, previous injuries, specific goals (e.g., an Olympian's needs vs. a beginner's), fear of movement, and other individual nuances.
    * Human expertise is key for nuance: He believes AI, in its current state, lacks the ability to handle the complexity and personalization required for effective and safe exercise prescription, especially when dealing with individual limitations or medical histories.

    On functional strength training, especially for older adults:

    * Definition of "older adult": Generally considered 65+, but he emphasizes it's more about an individual's activity history; active individuals are "older adults" much later in life.
    * Importance of functional training: This type of training mimics daily life tasks (e.g., lifting a child, carrying groceries, handling luggage) and builds capacity to perform them without injury, as opposed to purely aesthetic training like bodybuilding.
    * "Use it or lose it": As people age, they lose muscle mass faster. Strength training is crucial to combat this, maintain bone density, and limit comorbidities. It's never too late to start.
    * Power is crucial for preventing falls: "Power" (the ability to stimulate the nervous system quickly, e.g., to catch oneself during a stumble) is more critical for older adults than strength alone, and walking doesn't sufficiently build this.
    * Mindset shift & empowerment: A big part of his work is educating older adults, empowering them to understand they can lift things, that it improves bone density and muscle mass, and that it's not "too late."
    * Addressing misconceptions: Many older adults fear getting "bulky" or think it's too late. He clarifies that getting bulky requires intense, specific effort, and the benefits of strength training far outweigh not doing it.
    * Osteoarthritis and movement: "Motion is lotion." Movement and exercise increase blood flow and synovial fluid, often reducing pain associated with osteoarthritis rather than exacerbating it. Lifting helps build muscle around joints to protect them.

    On training principles and coaching:

    * Proper form and warm-ups are crucial: Warm-ups (5–10 minutes) increase body temperature and lubricate joints. Proper form is essential to prevent injury.
    * Start with bodyweight: Master bodyweight movements (e.g., air squats, push-ups, proper hinging) before adding external load to ensure safety and effectiveness.
    * Minimum recommended exercise: The general guideline is 150 minutes of moderate-intensity exercise or 75 minutes of vigorous-intensity exercise per week, though many don't achieve this. He advises starting small (1–2 times a week) and building up.
    * The "marginal decade": Referencing Peter Attia's concept – the last decade of life, where physical and cognitive decline can accelerate. The goal is to "train for your marginal decade" to maintain a high quality of life for as long as possible ("live long, die fast").
    * Remote coaching benefits: Effective for accountability and progressing individuals safely once they have good movement fundamentals. Apps like TrueCoach allow for feedback, video analysis, and program tracking. In-person is ideal initially.
    * Realistic expectations & dialing it in: For older adults, starting strength training will, at a minimum, maintain current muscle mass and bone density, but usually leads to gains. The focus isn't on competitive lifting but on functional improvement.
    * Balancing pushing limits and safety: Start with an "underdose" to build tolerance and confidence. Introduce "deload weeks" (intentionally reducing intensity for a week after 3–6 weeks of harder training) to allow recovery and prevent joint stress. Listen for "dull, deep, achy, throbby" joint pain as a warning.
    * Individualized programming: Training programs must account for individual limitations, medical history (cardiac conditions, osteoporosis, previous surgeries), and goals. Progress is gradual, especially if conditions like osteopenia are present.

    Final key advice:

    * Just start moving: If doing nothing, begin with a simple walking program.
    * Strength training is safe and crucial: It's incredibly important and safe when done with supervision, especially initially.
    * It's never too late: Age should not be a barrier to starting strength training.
    * Strength training isn't about competition (for most): It's about improving functional ability, health, and quality of life, not necessarily lifting maximal weights.

    Thank you for your contribution of time and attention as a reader. I've received many lovely notes and thoughtful insights. A special thank you to those who have so generously contributed financially to AI for Lifelong Learners. What you do makes a difference and keeps me inspired.

    Have a question? Remember to post a Substack note or drop a comment here anonymously. Your thoughts are my inspiration.

    Get full access to AI for Lifelong Learners at aiforlifelonglearners.substack.com/subscribe

    52 min
  6. 05/20/2025

    Where human physiology meets AI

    Welcome to the next AI for Lifelong Learners podcast! Today's episode explores territory rarely covered in tech discussions - the relationship between our stress physiology and our ability to thrive alongside artificial intelligence. My guest Bethlyn Gerard brings decades of experience in systems thinking and team performance, offering a fresh perspective on what it truly means to be human in an age where machines are handling more of our thinking.

    To whet your appetite, here are some quotes from the interview that I think help set the stage and resonated with me:
    * "K plus E equals W, which is, it takes knowledge plus experience to mature into wisdom." Shared by Bethlyn Gerard
    * "We need to hone larger, intuitive, sensing, right-brain skills, because the machine's gonna do the rest of it." Bethlyn Gerard
    * "The level of human physiology dysregulation and the need for skills that take coherent regulation [are in] an inverse relationship." Bethlyn Gerard
    * "There's no excuse for less safe work to save a human job... There's no excuse to compromise the quality of care because we don't want AI to do it better for us." Bethlyn Gerard
    * "One of the principles of AI for lifelong learning is that you need to be what the machines can't be." Podcast Host

    About this episode: In this thought-provoking conversation, Bethlyn shares insights from her recent experience in Vietnam, where she leveraged AI tools to rapidly develop and cross-reference 38 lesson plans in just hours, work that would traditionally have taken days. However, this episode goes far beyond efficiency gains to explore a critical question: as AI increasingly handles analytical and computational tasks, how must humans evolve their uniquely human capabilities?

    Why this matters for Lifelong Learners: This discussion connects directly to the core principle of AI for Lifelong Learning, and you've heard me say this often: "what's on offer now in life is for you to be more human." We're discovering that our lived experiences make us uniquely human—something machines can never replicate. Bethlyn makes a compelling case that as AI handles more routine cognitive tasks, our focus must shift toward strengthening our physiological foundations for expanded intuition, sensing, and right-brain skills. She explains why understanding stress regulation is as fundamental as learning photosynthesis, especially when our dysregulated physiology directly conflicts with the coherent regulation needed for the enhanced critical thinking our AI-assisted world demands. Join me now in this inspiring and enlightening conversation with Bethlyn Gerard.

    About Bethlyn Gerard: Bethlyn Gerard is the Founder and Principal of Generativity Solutions, where she has spent over two decades strengthening organizations across various growth stages, from startups to large health systems. As an innovative strategist and proven leader in healthcare models, Bethlyn specializes in outcomes assessment, protocol design, and turning complex data into high-value insights. Her expertise lies at the intersection of systems thinking, stress physiology, and team wellness, making her uniquely qualified to discuss how our rapidly changing technological landscape impacts human performance. Bethlyn Gerard can be reached at https://www.generativity.solutions or on LinkedIn.

    Get full access to AI for Lifelong Learners at aiforlifelonglearners.substack.com/subscribe

    42 min
  7. 04/15/2025

    The human element: Sam Lipman on what AI can and cannot bring to music

    Welcome to AI for Lifelong Learners. Today we explore how artificial intelligence is transforming our creative and professional lives. I'm your host, Tom. In the studio with me is Sam Lipman, a highly innovative composer and educator at the University of Texas who recently made waves with his concerto for trumpet and orchestra, performed by the Austin Symphony and featuring the brilliant Ephraim Owens and the amazing Gifton Jelen from New York City.

    What caught my attention during the pre-concert lecture was that the questions were not just about Sam's innovative composition; the audience had many questions about the impact of AI, something I'd never witnessed at a classical music event. It speaks volumes about how AI has permeated every creative field, even those steeped in tradition. Sam brings a unique perspective as both a composer and a professor in the Department of Arts and Entertainment Technologies at UT, where he recently pioneered a groundbreaking course on AI in music production. In Austin's forward-thinking culture, known for embracing innovation, Sam is helping shape how the next generation of artists will engage with AI technologies. Today, we'll explore Sam's experiences teaching this experimental course, the ethical questions his students grappled with, and how he envisions AI transforming music creation and production.

    Interview summary
    * Developed for UT Austin's AET department for students not fitting traditional career tracks
    * Rebranded traditional music theory as "AI in music" to appeal to the administration
    * The curriculum covers AI foundations, history, copyright issues, and applications
    * Balances theoretical discussions with hands-on music creation

    Ethics and legal aspects
    * Strong emphasis on copyright issues through mock trials
    * Students argued whether AI companies should use copyrighted training material
    * Highlighted disparity: traditional media pays for music while AI companies use it freely
    * Prediction: AI companies will eventually pay for copyrighted training materials

    AI music tools and applications
    * Students presented various AI music tools to classmates
    * Covered composition tools, production tools (DAWs), and post-production tools
    * Used free tools to avoid additional student costs
    * Revealed limitations in AI music generation (predominantly 4/4 time with predictable progressions)

    AI's creative limitations
    * AI cannot predict music trends or create truly innovative music
    * Inherent "lag" in relevance due to training on older music
    * Cannot replicate the "ridiculous accidents" driving musical innovation
    * Students discovered gaps between creative vision and AI capabilities

    Personal AI usage
    * Primarily used for administrative tasks rather than creative work
    * Helpful for structuring plans, writing professional emails, reviewing contracts
    * Generated background music for a client project using Suno
    * Used for marketing campaigns and promotional materials
    * Compensates for personal skill gaps in administration and planning

    As background: Composer Sam Lipman and trumpeter Ephraim Owens sit down with Dianne Donovan on KMFA to talk about the Austin jazz scene, where classical and jazz blend, and Sam and Ephraim's upcoming performance with the Austin Symphony Orchestra. February 2025.

    Get full access to AI for Lifelong Learners at aiforlifelonglearners.substack.com/subscribe

    39 min
  8. 01/28/2025

    The DeepSeek paradox

    In the audio version of this post I add some commentary.

    Picture this: It's the 1980s, and giants like Digital Equipment Corporation (DEC) and Wang Laboratories rule the computing world with their massive, expensive machines. These companies couldn't imagine a future where small, personal computers would matter. Their machines cost hundreds of thousands of dollars, and surely, these toy-like PCs could never compete.

    They were wrong. Catastrophically wrong.

    One by one, these titans fell. DEC, once a colossus of computing, was sold to Compaq. In 1992, Wang Laboratories went bankrupt. Prime Computer stopped selling new computers that same year. Data General was swallowed by EMC in 1999. Even IBM, which helped create the PC revolution, stumbled badly as it underestimated how quickly things would change.

    Today, in 2025, we're watching this same story unfold in artificial intelligence. The disruption has a name: DeepSeek.

    The DeepSeek earthquake

    Last week, a Chinese company called DeepSeek released something that has Silicon Valley in a panic. They demonstrated an AI model that matches or exceeds the capabilities of OpenAI's latest technology - but at a fraction of the cost. We're not talking about small savings here. DeepSeek claims they built their system for $5.6 million, compared to the half-billion dollars or more that U.S. companies spend on similar systems.

    Just like the PC makers of the 1980s who figured out how to build powerful computers cheaply, DeepSeek reimagined how AI systems could work. They use clever mathematical tricks to do more with less - like using 8-bit numbers instead of 32-bit numbers, which dramatically reduces the computing power needed (see the back-of-the-envelope sketch after these notes).

    The results are stunning. Users around the world are downloading DeepSeek's models and running them on personal computers. Some are processing hundreds of thousands of AI queries for mere pennies. It's like watching the personal computer revolution happen all over again, but at AI speed.

    Market tremors and the NVIDIA question

    When markets opened after DeepSeek's announcement, NVIDIA - the company that makes the expensive chips used for AI - lost nearly $600 billion in market value. Some people ask: "Why does this matter? It's just stock prices going up and down." But this misses the bigger picture.

    The drop reflects a fundamental shift in how we think about AI's future. Just as the PC revolution showed we didn't need million-dollar mainframes to do powerful computing, DeepSeek is showing we might not need massive arrays of expensive chips to do powerful AI. This isn't just about NVIDIA. The entire AI industry has been building on the assumption that more expensive hardware equals better AI. DeepSeek just proved that assumption wrong.

    The pattern of creative destruction

    Here's where we see a deeper pattern emerge. Every major technological revolution follows this path: First, something is expensive and exclusive. Then, someone figures out how to make it cheaper and more accessible. The established players panic, claiming the cheaper version can't possibly be as good. But if it is good enough, it changes everything. We saw it with mainframes giving way to PCs. We saw it with expensive software being replaced by apps. Now we're seeing it with AI.

    But here's the twist that many are missing: When technology gets cheaper, we don't use less of it - we use more. Economists call this Jevons Paradox. When personal computers got cheap, we didn't buy fewer computers - suddenly everyone needed one. When cloud storage got cheap, we didn't store less data - we started storing everything.

    Tomorrow's AI landscape

    This brings us to what happens next. Just as the PC revolution didn't kill computing - it exploded it into something far bigger - DeepSeek's breakthrough won't kill AI. Instead, it will likely transform AI from something that only big tech companies can afford into something that becomes part of everyday life. Microsoft's CEO Satya Nadella captured this perfectly when he tweeted about Jevons Paradox in response to DeepSeek. As AI becomes more efficient and accessible, its use will skyrocket. The pie isn't shrinking - it's growing dramatically.

    This could be an opportunity for the tech giants who can adapt. Amazon's cloud services could offer cheaper AI to millions of customers. Apple's devices could run powerful AI locally. Meta could embed AI throughout its services at a fraction of the current cost. But for those who can't adapt - who cling to the old, expensive way of doing things - well, just ask the executives of DEC how that worked out in the 1990s.

    The lesson is clear: in technology, the future belongs not to those who build the most expensive systems, but to those who figure out how to make powerful technology accessible to everyone. DeepSeek just showed us that future is coming faster than anyone expected. And just like the PC revolution before it, this will create both winners and losers. The winners will be those who embrace the change and figure out how to use cheaper, more accessible AI to solve real problems. The losers will be those who refuse to believe the world is changing until it's too late. History doesn't repeat, but it rhymes. And right now, it's rhyming pretty loudly.

    It's like when a new student joins your class who's really good at something. They might seem like competition at first, but often they end up raising everyone's game and making the whole class better. That's how progress often works - through challenge and response, push and pull, each side making the other stronger. What fascinates me most about this story is how it shows that big breakthroughs often come not from doing things bigger and more expensively, but from finding clever new ways to balance different approaches. It's kind of like finding out you don't need an expensive gym membership to get fit - sometimes a creative approach with simpler tools can work just as well or better.

    The Relational Paradigm: challenges and opportunities

    The DeepSeek breakthrough isn't just a story of disruption—it's a perfect example of how major changes create both challenges AND opportunities. Those who focus solely on the potential downfall of established players like NVIDIA or the threat to American tech dominance are missing half the picture. This is where the relational paradigm comes into play, teaching us that we need to examine both sides and how they interact to truly grasp the situation.

    Think back to high school once again. You had the popular kids and the quiet kids. Neither group was inherently "better"—they balanced each other out, each contributing to the school community in their unique ways. The same principle applies here with American and Chinese AI developments. They're not locked in a zero-sum game, but rather pushing each other to improve in different ways.

    This story is particularly fascinating because it highlights how quick people are to jump to extremes. Some cry, "This changes everything!" while others lament, "This ruins everything!" The reality, as is often the case, lies somewhere in the middle. It involves both positive and negative aspects that need to be understood in relation to each other. Yes, DeepSeek's breakthrough might disrupt the current AI industry giants. But it also opens up new possibilities for AI integration in everyday life, potentially spurring innovation we can't yet imagine. It might challenge American tech dominance, but it could also lead to more robust international collaboration and competition, ultimately benefiting global AI development.

    The key is to resist the urge to see this development in black-and-white terms. Instead, we need to embrace the complexity, understanding that the true impact of DeepSeek's innovation will be a tapestry of interrelated effects, some challenging, some opportune, all part of the ever-evolving landscape of technological progress. As we move forward, the winners in this new AI paradigm won't just be those who can build the cheapest or most powerful systems. They'll be the ones who can navigate this complex relational landscape, understanding how different factors interact and finding opportunities in the balance between disruption and continuity, challenge and opportunity, East and West.

    tp

    Addendum: An understandable perspective on what is DeepSeek and what it means and doesn't mean - well, so far …

    Get full access to AI for Lifelong Learners at aiforlifelonglearners.substack.com/subscribe
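    A back-of-the-envelope illustration of the 8-bit versus 32-bit point above. The 7-billion-parameter model size is an arbitrary example chosen for the arithmetic, not a claim about DeepSeek's models, and real training and inference setups mix precisions in more complicated ways.

```python
# Rough memory arithmetic for storing model weights at different precisions.
# The 7-billion-parameter count is an arbitrary example, not DeepSeek's size.
params = 7_000_000_000

bytes_fp32 = params * 4   # 32-bit floats: 4 bytes per parameter
bytes_int8 = params * 1   # 8-bit numbers: 1 byte per parameter

gib = 1024 ** 3
print(f"32-bit weights: {bytes_fp32 / gib:.1f} GiB")  # about 26 GiB
print(f" 8-bit weights: {bytes_int8 / gib:.1f} GiB")  # about 6.5 GiB
```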

    11 min
