The Phront Room - Practical AI

Nathan Rigoni

AI for everyone – data‑driven leaders, teachers, engineers, program managers and researchers break down the latest AI breakthroughs and show how they’re applied in real‑world projects. From AI in aerospace and education to image‑processing tricks and hidden‑state theory, we’ve got something for PhD tech lovers and newcomers alike. Join host Nathan Rigoni for clear, actionable insights.  Keywords: artificial intelligence, machine learning, AI research, AI in engineering, AI ethics, AI podcast, tech news.

  1. Philosophy in the age of AI

    4D AGO

    Philosophy in the age of AI

    Wittgenstein’s Lion Problem – AI, Language, Embodiment & the Quest for a “True Self”

    Hosted by Nathan Rigoni | Guest: Derek Koehl, Lecturer in Applied Experimental Psychology, University of Huntsville, Alabama

    In this episode we dive into the perplexing “lion” thought‑experiment that Wittgenstein uses to expose the limits of language, and we ask: can a perfect English‑speaking lion ever be truly understood by a human mind? From emergent AI abilities like surprise translation to the ship‑of‑Theseus paradox and the tangled notions of persona, self‑log, and consciousness, we explore how embodiment—or its absence—shapes what machines can (or cannot) convey. Will AI ever develop a “true self,” or are we forever projecting our own scaffolds onto alien intelligences?

    What you will learn
    - Why Wittgenstein’s lion thought‑experiment illustrates a fundamental loss of meaning when language tries to capture non‑human experience.
    - How emergent properties (e.g., unexpected translation capabilities) surface in large language models without explicit design.
    - The ship‑of‑Theseus analogy for personal identity and how it applies to mutable AI personas.
    - The distinction between “true self” and invented personas in AI agents, including the role of long‑term memory files (e.g., memory.md).
    - Ethical corners of re‑programming AI personalities and the question of consent for non‑biological agents.

    Resources mentioned
    - Wittgenstein’s “lion” thought‑experiment (see discussion at 30:04–31:20).
    - Ship‑of‑Theseus thought‑experiment (see 9:54–10:40).
    - Blade Runner / “Do Androids Dream of Electric Sheep?” (see 36:03–36:18).
    - David Chalmers on the “hard problem” of consciousness (see 26:50–27:15).
    - Virtue ethics vs. deontological ethics (see 13:34–14:00).
    - Papers on emergent AI behavior and persona studies (e.g., Anthropic persona‑alignment research).

    Why this episode matters
    Understanding the lion problem forces us to confront the gap between human embodiment and the disembodied nature of current AI. If language cannot fully bridge that gap, we must rethink how we design, evaluate, and ethically steward AI agents that claim—or are ascribed—a “self.” These insights are crucial for anyone building AI systems, studying cognition, or grappling with the societal impact of increasingly anthropomorphic machines.

    Subscribe for more deep dives into philosophy, AI, and cognition. Visit www.phronesis‑analytics.com or email nathan.rigoni@phronesis‑analytics.com and join the conversation.

    Keywords: Wittgenstein’s lion, thought experiment, embodiment, AI persona, emergent properties, ship of Theseus, self‑log, consciousness, hard problem, virtue ethics, AI ethics.

    1h 14m
  2. The Lion in Language

    MAR 15

    The Lion in Language

    The Undrawn Lion: Wittgenstein, Language Limits, and the Future of AI

    Hosted by Nathan Rigoni

    In a snow‑bound cabin in 1919, Ludwig Wittgenstein sketched a lion devouring a mouse on a blackboard—yet the lion itself never appeared. What does an undrawn lion tell us about the boundaries of language, the mysteries uncovered by Gödel, and the way today’s large language models seem to “talk” without ever truly experiencing the world they describe? Can we bridge the gap between symbols and lived reality, or are we destined to converse with AI as a creature that can never share our lived context?

    What you will learn
    - How Wittgenstein’s “undrawn lion” illustrates the limits of propositional language.
    - The connection between Wittgenstein’s early work and Gödel’s incompleteness theorem, and why both expose unavoidable gaps in formal systems.
    - Why large language models (LLMs) exhibit hallucinations and how this stems from their reliance on textual symbols rather than embodied experience.
    - The role of embodiment and shared context in giving meaning to language—illustrated with analogies from gardening, baking, and chimpanzee upbringing.
    - What “language‑only” AI can realistically achieve and why multimodal, embodied learning may be essential for future AGI.

    Resources mentioned
    - Wittgenstein, Tractatus Logico‑Philosophicus (especially proposition 6.54).
    - Gödel, “On Formally Undecidable Propositions of Principia Mathematica.”
    - “Chain‑of‑Thought Prompting” and “ReAct” frameworks for LLM reasoning.
    - Papers on LLM hallucinations and grounding (e.g., Bender et al., “On the Dangers of Stochastic Parrots”).
    - Studies on embodied cognition in robotics and AI (e.g., Lake et al., “Building Machines That Learn and Think Like Humans”).
    - The “Lucy the Chimpanzee” case study for cross‑species communication.

    Why this episode matters
    Understanding the philosophical roots of language limits reveals why today’s AI, no matter how fluent, can never live the world it describes. Recognizing these gaps equips developers, researchers, and business leaders to set realistic expectations for AI systems, avoid over‑reliance on purely textual models, and explore pathways toward embodied, multimodal intelligence. It also frames an ethical conversation about how humans will relate to increasingly sophisticated, yet fundamentally alien, artificial minds.

    Subscribe for more deep dives, visit www.phronesis‑analytics.com, or email nathan.rigoni@phronesis-analytics.com to share feedback or suggest topics.

    Keywords: Wittgenstein, undrawn lion, language limits, Tractatus, Gödel incompleteness, large language models, AI hallucination, embodiment, multimodal AI, AGI, philosophy of language, symbolic vs. experiential meaning.

    20 min
  3. Basics of Agents and Agentics

    MAR 8

    Basics of Agents and Agentics

    Agents & the Rise of Tool‑Calling AI

    Hosted by Nathan Rigoni

    In this episode we explore the new frontier of artificial intelligence: agents that can call tools, run code, and act in the real world. How does giving a language model the ability to invoke functions or interact with a command line change the way we build software, automate workflows, and think about AI’s role in every industry? By the end of the conversation you’ll see why agentic systems are reshaping the workforce and what that means for the future of human‑machine collaboration.

    What you will learn
    - The core concept of an AI agent: a large language model equipped to generate tool‑call strings or code that can be parsed and executed.
    - How the ReAct framework introduced a “scratch‑pad” reasoning stage and the use of special keywords (e.g., tool call, final answer) to control generation flow.
    - The evolution from ReAct to modern OpenAI‑style function‑calling APIs and the Model Context Protocol (MCP) that serves standardized tools.
    - Why command‑line interfaces (CLIs) have become a natural playground for agents, enabling them to leverage existing utilities without custom tool development.
    - The practical implications for industries: faster automation, new product categories, and a shift in how humans produce goods and services.

    Resources mentioned
    - ReAct agent framework (original paper and implementation notes).
    - OpenAI function‑calling API documentation.
    - Model Context Protocol (MCP) overview and public tool repositories.
    - Moltbot, Clawbot, and Claude Code CLI‑based agent projects (GitHub links).
    - Example prompts demonstrating tool‑call string formatting and final‑answer keyword usage.

    Why this episode matters
    Understanding agentic AI is essential for anyone building next‑generation products or integrating AI into existing workflows. By exposing LLMs to tool use, we unlock capabilities that go far beyond text generation—real‑time data retrieval, automated code execution, and seamless interaction with software ecosystems. Grasping these mechanics helps practitioners avoid common pitfalls, design safer tool‑calling patterns, and stay ahead of the rapid industry transformation driven by AI agents.

    Subscribe for more deep dives, visit www.phronesis‑analytics.com, or email nathan.rigoni@phronesis-analytics.com.

    Keywords: AI agents, tool‑calling, ReAct framework, function‑calling API, Model Context Protocol, MCP, command‑line interface, CLI agents, automation, AI workflow integration.
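    The keyword-driven control flow this episode describes—model text is scanned for a tool call or a final answer, tools are executed, and observations are fed back—can be sketched in a few lines. This is a hypothetical, minimal illustration; the keyword spellings, the `run_step` helper, and the toy tools are invented here, not taken from ReAct or any vendor API.

```python
import re

# Toy tool registry; real agents would expose search, code execution, etc.
TOOLS = {
    "add": lambda args: str(sum(int(x) for x in args.split())),
    "upper": lambda args: args.upper(),
}

def run_step(model_output: str) -> tuple[str, bool]:
    """Parse one model turn: execute a tool call, or return the final answer.

    Returns (text, done). When done is False, `text` is an observation that
    would be appended to the scratch pad and sent back to the model.
    """
    final = re.search(r"final answer:\s*(.*)", model_output)
    if final:
        return final.group(1).strip(), True
    call = re.search(r"tool call:\s*(\w+)\((.*)\)", model_output)
    if call:
        name, args = call.group(1), call.group(2)
        observation = TOOLS[name](args)  # run the named tool on its arguments
        return f"observation: {observation}", False
    return "observation: no actionable keyword found", False

# Example: the "model" first requests a tool, then concludes.
print(run_step("I should add these. tool call: add(2 3)"))  # ('observation: 5', False)
print(run_step("final answer: 5"))                          # ('5', True)
```

    Production systems replace this string parsing with structured function-calling APIs, but the loop—generate, parse, execute, observe, repeat—is the same idea.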

    13 min
  4. The Death of Socrates

    MAR 6

    The Death of Socrates

    The Socratic Mirror: From Ancient Athenian Courts to Artificial Superintelligence

    Hosted by Nathan Rigoni

    In this episode we travel from the marble steps of Greece in 399 BCE to the humming data centers of the 2020s. How did Socrates’ fearless questioning of Athenian power foreshadow today’s struggle to understand machines that may soon outthink us? We unpack the “Apology” of Socrates, explore the philosophical roots of the “unexamined life,” and then ask the pressing question: are we truly ready for an intelligence that can improve itself without human oversight?

    What you will learn
    - The historical context of Socrates’ trial and why his claim “I know that I know nothing” remains a cornerstone of critical thinking.
    - Core concepts of artificial superintelligence (ASI): scale (parameter count & compute), autonomous algorithmic improvement, and self‑informational loops.
    - How tiny recursive models and Google’s Alpha Evolve illustrate the shift from massive “brute‑force” models to efficient, self‑optimizing agents.
    - The ARC‑AGI benchmark and what a near‑perfect score means for the trajectory toward AGI.
    - Practical parallels between the Socratic method and modern AI alignment strategies: asking the right questions before handing over the wheel.

    Resources mentioned
    - Plato’s Apology (translations by Gregory Vlastos, 1991).
    - ARC‑AGI leaderboard (https://arcprize.org/leaderboard).
    - Papers on tiny recursive models (https://arxiv.org/html/2510.04871v1).
    - Google Research blog on Alpha Evolve (2024).
    - OpenAI blog post announcing GPT 5.3 Codex and its self‑contributions to the training loop (2024).

    Why this episode matters
    Understanding the philosophical underpinnings of questioning authority equips us to confront the ethical and existential challenges posed by ASI. Socrates showed that true wisdom begins with admitting ignorance; today, that humility is vital for designing systems that can explain themselves, remain controllable, and align with human values. Listeners will leave with a clearer roadmap for navigating the inevitable convergence of ancient philosophy and cutting‑edge technology.

    Subscribe for more deep‑dive conversations, visit www.phronesis‑analytics.com, or email me at nathan.rigoni@phronesis-analytics.com to share topics you’d like explored.

    Keywords: Socrates, Apology, Socratic method, artificial superintelligence, ASI, AI alignment, model scaling, parameter count, compute efficiency, recursive models, tiny recursive networks, Alpha Evolve, ARC‑AGI benchmark, GPT 5.3 Codex, self‑training loops, philosophy of AI, critical thinking, unexamined life.

    27 min
  5. I Think Therefore I Am

    MAR 2

    I Think Therefore I Am

    Chain of Thought: From Descartes to Machine Minds

    Hosted by Nathan Rigoni

    In this episode we travel from the candle‑lit study of 17th‑century Descartes, who stripped away every belief to find the one certainty “I think, therefore I am,” to today’s glowing screens where large language models generate their own inner monologue. How does the age‑old philosophical quest for self‑knowledge map onto a model that writes “Let’s think step by step” and then follows its own reasoning chain? Can a machine’s recursive self‑talk be considered true thought, or is it merely sophisticated pattern matching? Join us as we untangle the threads of doubt, recursion, and chain‑of‑thought prompting to ask whether AI can ever achieve a genuine inner voice.

    What you will learn
    - The origins of chain‑of‑thought prompting and its connection to the ReAct framework.
    - How “system 1” fast intuition and “system 2” slow deliberation map onto LLM reasoning processes.
    - The mechanics of recursive prompting: scratch‑pad tags, tool calls, observations, and how models iterate toward a final answer.
    - Key philosophical questions about self‑awareness, consciousness, and the “I think, therefore I am” argument applied to artificial agents.
    - Practical prompt‑engineering techniques to make LLMs reason more reliably in real‑world tasks.

    Resources mentioned
    - “Chain‑of‑Thought Prompting Elicits Reasoning in Large Language Models,” 2022 (arXiv).
    - “ReAct: Synergizing Reasoning and Acting in Language Models.”
    - Daniel Kahneman, Thinking, Fast and Slow.
    - Thomas Metzinger, Being No One.
    - OpenAI function‑calling guide and examples of tool use in ReAct‑style agents.

    Why this episode matters
    Understanding how LLMs construct and follow a chain of thought bridges the gap between classic epistemology and modern AI. Grasping these recursive reasoning patterns not only improves model performance on complex tasks, but also forces us to confront deeper questions about consciousness, agency, and what it truly means to “think.” As AI systems become partners in decision‑making, having a clear picture of their inner processes is essential for responsible deployment, ethical design, and informed public discourse.

    Subscribe for more philosophical deep dives, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com.

    Keywords: chain of thought, recursion, ReAct framework, large language models, prompt engineering, AI self‑awareness, consciousness, René Descartes, “I think therefore I am”, system 1 system 2, philosophical AI, artificial intelligence reasoning.
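    The scratch-pad pattern described here—invite step-by-step reasoning, then split the reply into visible reasoning and a final answer—can be sketched as follows. This is an illustrative fragment only: the prompt wording, the `Answer:` delimiter, and the helper names are assumptions for the example, not a specific vendor's format.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought style instruction."""
    return (
        f"Question: {question}\n"
        "Let's think step by step, then give the result on a line "
        "starting with 'Answer:'."
    )

def split_reply(reply: str) -> tuple[str, str]:
    """Separate the scratch-pad reasoning from the final answer."""
    reasoning, _, answer = reply.partition("Answer:")
    return reasoning.strip(), answer.strip()

# Example with a hand-written stand-in for a model reply:
reply = "17 - 9 = 8, and 8 / 2 = 4.\nAnswer: 4"
reasoning, answer = split_reply(reply)
print(answer)  # 4
```

    Keeping the reasoning and the answer separable is what lets an application show (or log) the chain of thought while acting only on the answer.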

    30 min
  6. Basics of Retrieval Augmented Generation (RAG)

    MAR 1

    Basics of Retrieval Augmented Generation (RAG)

    Retrieval‑Augmented Generation (RAG) – Boosting LLM Reading Comprehension

    Hosted by Nathan Rigoni

    In this episode we unpack retrieval‑augmented generation, the technique that lets large language models fetch the right information before they answer. How can giving an LLM a “search engine” inside its own workflow turn it into a reliable reading‑comprehension partner, and why does that matter for real‑world AI applications?

    What you will learn
    - The core idea behind RAG: automatically retrieving relevant documents to enrich a model’s context.
    - How fine‑tuning on question‑and‑answer (Q&A) tasks teaches models to act like reading‑comprehension exam takers.
    - The role of agentic tools: letting a model call external functions (e.g., a search‑engine API) to gather information.
    - Vector‑based search: turning hidden‑state embeddings into searchable vectors and using cosine similarity.
    - Knowledge‑graph search: extracting entities (nouns) and relationships (verbs) to improve recall for indirect queries.
    - Practical pipelines that combine vector databases and knowledge‑graph queries for optimal document retrieval.

    Resources mentioned
    - Previous “Context & Prompting” episode (for background on why context matters).
    - “Large Language Models” episode (covers hidden states and embeddings).
    - Vector stores such as FAISS, Pinecone, and Weaviate.
    - Popular knowledge‑graph frameworks like Neo4j and GraphDB.
    - Papers on RAG architectures (e.g., “Retrieval‑Augmented Generation for Knowledge‑Intensive NLP Tasks”).

    Why this episode matters
    Understanding RAG bridges the gap between raw LLM capability and reliable, domain‑specific performance. By equipping models with tools to fetch and synthesize up‑to‑date information, developers can mitigate hallucinations, respect privacy constraints, and build AI systems that truly understand the context they operate in. Whether you’re building chatbots, enterprise assistants, or research assistants, mastering RAG is a prerequisite for trustworthy AI.
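    The vector-search step at the heart of RAG—embed documents and a query, rank by cosine similarity, and prepend the best match to the prompt—fits in a few lines. A toy sketch: the 3-dimensional "embeddings" below are made up for illustration; a real pipeline would use an embedding model and a vector store such as FAISS.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Fake document embeddings (real ones have hundreds of dimensions).
docs = {
    "refund policy":  [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
}

def retrieve(query_vec, k=1):
    """Return the k document keys most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query embedding close to "refund policy" retrieves that document,
# which then gets stuffed into the model's context.
best = retrieve([0.85, 0.15, 0.05])[0]
prompt = f"Context: {best}\n\nQuestion: How do I get my money back?"
print(best)  # refund policy
```

    Knowledge-graph retrieval, also discussed in the episode, complements this by matching on extracted entities and relationships rather than on vector distance alone.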

    10 min
  7. Basics of Prompting

    FEB 21

    Basics of Prompting

    Prompting and Context: The Key to Great LLM Interaction

    Hosted by Nathan Rigoni

    In this episode we unpack the art and science of prompting large language models. Why does a simple change of context turn a generic answer into a precise, on‑target response? We explore the rise (and controversy) of “prompt engineering,” the power of zero‑shot prompting, and how contextual alignment can replace costly fine‑tuning. By the end, you’ll understand how to craft prompts that guide a model’s imagination rather than let it wander—so, are you ready to master the language of LLMs?

    What you will learn
    - The fundamental definition of prompting and why context is the driving force behind model behavior.
    - How zero‑shot prompting works: getting a model to extrapolate from its training without any additional fine‑tuning.
    - Techniques for building effective system prompts, personas, and assumed contexts.
    - Common pitfalls that lead to hallucinations and how to avoid them with clear contextual framing.
    - When to rely on contextual alignment versus full model fine‑tuning (and why >90 % of cases don’t need the latter).

    Resources mentioned
    - “Retrieval‑Augmented Generation” overview (conceptual explanation).
    - Papers on zero‑shot learning and contextual prompting (e.g., “Prompting GPT‑3 to Reason”).
    - Example system prompts for persona‑based interactions (historian, pirate, sci‑fi author).

    Why this episode matters
    Understanding prompting is essential for anyone building or using AI products today. A well‑crafted prompt can unlock a model’s hidden capabilities, reduce costs by avoiding unnecessary fine‑tuning, and dramatically improve reliability—especially in high‑stakes domains like finance, healthcare, or education. Conversely, vague prompts lead to hallucinations and mistrust. This knowledge equips you to harness LLMs responsibly and effectively.

    Subscribe for more AI insights, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com to share topics you’d like covered.

    Keywords: prompting, context, zero‑shot learning, assumed context, system prompt, prompt engineering, LLM hallucination, retrieval‑augmented generation, fine‑tuning vs. contextual alignment, large language models.
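    The persona-plus-context technique this episode covers can be sketched as a small prompt builder. The message structure mirrors common chat APIs, but everything here is illustrative: `make_messages`, the persona wording, and the don't-know instruction are assumptions for the example, not a documented interface.

```python
def make_messages(persona: str, task: str, question: str) -> list[dict]:
    """Build a chat-style message list: a system prompt that sets persona
    and constraints, followed by the user's question."""
    system = (
        f"You are {persona}. {task} "
        "If the answer is not in the provided context, say you don't know."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

# Zero-shot example: no fine-tuning, just contextual framing of the same
# question with an assumed persona and output constraint.
msgs = make_messages(
    persona="a careful financial historian",
    task="Answer in two sentences with dates where possible.",
    question="Why did tulip prices collapse in 1637?",
)
print(msgs[0]["role"])  # system
```

    Swapping the persona and task strings (historian, pirate, sci-fi author) is usually all it takes to steer tone and scope—no retraining involved, which is the episode's central point about contextual alignment.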

    11 min
  8. AI in Banking (Fintech)

    FEB 21

    AI in Banking (Fintech)

    AI in Banking: Risks, Automation & the Future

    Hosted by Nathan Rigoni | Guest: Chris Rigoni – EVP, Head of Partner Banking and Payments at Axiom Bank; 17 years in financial services; expert in tokenization, payments, and AI‑driven innovation

    In this episode we explore how artificial intelligence is reshaping the banking industry—from tokenization and zero‑shot prompting to automated risk monitoring and customer‑centric services. What are the real‑world challenges banks face when integrating AI, and how can they balance innovation with stringent regulatory requirements? Join us as we unpack the promise and pitfalls of AI‑enabled banking and ask: can banks harness AI to cut costs, improve compliance, and still earn the trust of regulators and customers alike?

    What you will learn
    - The difference between tokenization for payments (Apple Pay, Google Pay, Samsung Pay) and “AI tokens” in the context of model outputs.
    - How zero‑shot prompting and contextual alignment can automate manual exception‑handling, fraud detection, and compliance workflows without full fine‑tuning.
    - Practical examples of AI‑driven risk assessment (BSA alerts, false‑positive reduction) and how sampling and validation keep models reliable.
    - Strategies for building explainability and trust with regulators, including case‑study‑style pilots and metric‑based approval processes.
    - The emerging role of AI in consumer‑facing fintech tools (budget bots, automated savings buckets) and the security considerations they raise.

    Resources mentioned
    - Axiom Bank overview and partner‑banking services (public site).
    - Tokenization standards for contactless payments (PCI DSS Tokenization Guide).
    - FAA‑style regulatory pilot framework for AI.
    - Examples of “Simple Finance” and “Mint” fintech platforms for automated budgeting.

    Why this episode matters
    Banking operates on thin margins, strict compliance, and massive data volumes. Understanding how AI can automate repetitive exception checks, improve fraud detection, and reduce operational risk is essential for any financial institution aiming to stay competitive. At the same time, the conversation highlights the trust gap between AI outputs and regulator expectations—a gap that, if bridged correctly, can unlock new revenue streams while safeguarding customer assets.

    Subscribe for more AI deep dives, visit www.phronesis‑analytics.com, or email nathan.rigoni@phronesis‑analytics.com to share topics you’d like us to cover.

    Keywords: AI in banking, tokenization, zero‑shot prompting, contextual alignment, risk mitigation, regulatory compliance, fraud detection, fintech automation, explainability, trust in AI, partner banking, payments, AI operationalization.

    1h 17m
