The Phront Room - Practical AI

Nathan Rigoni

AI for everyone – data‑driven leaders, teachers, engineers, program managers and researchers break down the latest AI breakthroughs and show how they’re applied in real‑world projects. From AI in aerospace and education to image‑processing tricks and hidden‑state theory, we’ve got something for PhD tech lovers and newcomers alike. Join host Nathan Rigoni for clear, actionable insights.  Keywords: artificial intelligence, machine learning, AI research, AI in engineering, AI ethics, AI podcast, tech news.

  1. I Think Therefore I Am

    1D AGO

    I Think Therefore I Am

    Chain of Thought: From Descartes to Machine Minds

    Hosted by Nathan Rigoni

    In this episode we travel from the candle‑lit study of 17th‑century Descartes, who stripped away every belief to find the one certainty, “I think, therefore I am,” to today’s glowing screens where large language models generate their own inner monologue. How does the age‑old philosophical quest for self‑knowledge map onto a model that writes “Let’s think step‑by‑step” and then follows its own reasoning chain? Can a machine’s recursive self‑talk be considered true thought, or is it merely sophisticated pattern matching? Join us as we untangle the threads of doubt, recursion, and chain‑of‑thought prompting to ask whether AI can ever achieve a genuine inner voice.

    What you will learn

    - The origins of chain‑of‑thought prompting and its connection to the ReAct framework.
    - How “system 1” fast intuition and “system 2” slow deliberation map onto LLM reasoning processes.
    - The mechanics of recursive prompting: scratch‑pad tags, tool calls, observations, and how models iterate toward a final answer.
    - Key philosophical questions about self‑awareness, consciousness, and the “I think, therefore I am” argument applied to artificial agents.
    - Practical prompt‑engineering techniques to make LLMs reason more reliably in real‑world tasks.

    Resources mentioned

    - “Chain‑of‑Thought Prompting Elicits Reasoning in Large Language Models,” 2022 (arXiv).
    - “ReAct: Synergizing Reasoning and Acting in Language Models.”
    - Daniel Kahneman, Thinking, Fast and Slow.
    - Thomas Metzinger, Being No One.
    - OpenAI function‑calling guide and examples of tool use in ReAct‑style agents.

    Why this episode matters

    Understanding how LLMs construct and follow a chain of thought bridges the gap between classic epistemology and modern AI. Grasping these recursive reasoning patterns not only improves model performance on complex tasks but also forces us to confront deeper questions about consciousness, agency, and what it truly means to “think.” As AI systems become partners in decision‑making, a clear picture of their inner processes is essential for responsible deployment, ethical design, and informed public discourse.

    Subscribe for more philosophical deep dives, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com.

    Keywords: chain of thought, recursion, ReAct framework, large language models, prompt engineering, AI self‑awareness, consciousness, René Descartes, “I think therefore I am,” system 1 vs. system 2, philosophical AI, artificial intelligence reasoning.
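    The recursive scratch‑pad loop described in this episode (thought, tool call, observation, repeated until a final answer) can be sketched in a few lines of Python. This is a toy illustration, not the episode's code: the "model" is a hard‑coded stub and `lookup` is a made‑up one‑entry knowledge base.

```python
# Minimal ReAct-style loop with a stubbed model (no real LLM or API).
# The Thought / Action / Observation scratch-pad format and the stub
# replies are illustrative assumptions, not any vendor's interface.

def stub_llm(prompt: str) -> str:
    """Pretend model: first requests a tool, then answers."""
    if "Observation:" not in prompt:
        return "Thought: I need the population.\nAction: lookup[France]"
    return "Thought: I have what I need.\nFinal Answer: ~68 million"

def lookup(entity: str) -> str:
    # Toy knowledge base standing in for a real search tool.
    return {"France": "population 68 million"}.get(entity, "unknown")

def react_loop(question: str, max_steps: int = 3) -> str:
    prompt = f"Question: {question}\nLet's think step by step.\n"
    for _ in range(max_steps):
        reply = stub_llm(prompt)
        prompt += reply + "\n"                      # model's own words re-enter its context
        if "Final Answer:" in reply:
            return reply.split("Final Answer:")[1].strip()
        if "Action: lookup[" in reply:              # execute the requested tool call
            entity = reply.split("lookup[")[1].split("]")[0]
            prompt += f"Observation: {lookup(entity)}\n"
    return "no answer"

print(react_loop("What is the population of France?"))
```

    In a real agent the stub would be an LLM call and `lookup` a search or database tool, but the control loop, with the model's own output fed back as context, stays the same.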

    30 min
  2. Basics of Retrieval Augmented Generation (RAG)

    1D AGO

    Basics of Retrieval Augmented Generation (RAG)

    Retrieval‑Augmented Generation (RAG) – Boosting LLM Reading Comprehension

    Hosted by Nathan Rigoni

    In this episode we unpack retrieval‑augmented generation, the technique that lets large language models fetch the right information before they answer. How can giving an LLM a “search engine” inside its own workflow turn it into a reliable reading‑comprehension partner, and why does that matter for real‑world AI applications?

    What you will learn

    - The core idea behind RAG: automatically retrieving relevant documents to enrich a model’s context.
    - How fine‑tuning on question‑and‑answer (Q&A) tasks teaches models to act like reading‑comprehension exam takers.
    - The role of agentic tools: letting a model call external functions (e.g., a search‑engine API) to gather information.
    - Vector‑based search: turning hidden‑state embeddings into searchable vectors and ranking them by cosine similarity.
    - Knowledge‑graph search: extracting entities (nouns) and relationships (verbs) to improve recall for indirect queries.
    - Practical pipelines that combine vector databases and knowledge‑graph queries for optimal document retrieval.

    Resources mentioned

    - The previous “Context & Prompting” episode (for background on why context matters).
    - The “Large Language Models” episode (covers hidden states and embeddings).
    - Open‑source vector stores such as FAISS, Pinecone, and Weaviate.
    - Knowledge‑graph frameworks such as Neo4j and GraphDB.
    - Papers on RAG architectures (e.g., “Retrieval‑Augmented Generation for Knowledge‑Intensive NLP Tasks”).

    Why this episode matters

    Understanding RAG bridges the gap between raw LLM capability and reliable, domain‑specific performance. By equipping models with tools to fetch and synthesize up‑to‑date information, developers can mitigate hallucinations, respect privacy constraints, and build AI systems that truly understand the context they operate in. Whether you’re building chatbots, enterprise assistants, or research tools, mastering RAG is a prerequisite for trustworthy AI.

    Subscribe for more concise AI deep dives, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com for questions or collaboration opportunities.

    Keywords: retrieval‑augmented generation, RAG, large language models, reading comprehension, agentic AI, vector search, cosine similarity, knowledge graph, Q&A fine‑tuning, document retrieval, AI hallucination mitigation, tool‑using LLMs.
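    The vector‑search step this episode describes, ranking documents by cosine similarity between embeddings, can be illustrated with hand‑made vectors. The 4‑dimensional embeddings and document names below are invented for the sketch; a real pipeline would produce them with a trained encoder and store them in a vector database such as FAISS, Pinecone, or Weaviate.

```python
# Toy vector retrieval: rank documents by cosine similarity between
# a query embedding and document embeddings. All numbers are made up.
import math

def cosine(a, b):
    """Cosine similarity: dot product over the product of vector norms."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical document embeddings (in practice: encoder hidden states).
docs = {
    "refund policy":  [0.9, 0.1, 0.0, 0.2],
    "shipping times": [0.1, 0.8, 0.3, 0.0],
    "warranty terms": [0.7, 0.2, 0.1, 0.4],
}

def retrieve(query_vec, k=2):
    """Return the k document names most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

# A query embedding that happens to sit close to "refund policy":
print(retrieve([0.85, 0.15, 0.05, 0.25]))
```

    The retrieved documents would then be pasted into the model's context before it answers, which is the "retrieval‑augmented" part of RAG.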

    10 min
  3. Basics of Prompting

    FEB 21

    Basics of Prompting

    Prompting and Context: The Key to Great LLM Interaction

    Hosted by Nathan Rigoni

    In this episode we unpack the art and science of prompting large language models. Why does a simple change of context turn a generic answer into a precise, on‑target response? We explore the rise (and controversy) of “prompt engineering,” the power of zero‑shot prompting, and how contextual alignment can replace costly fine‑tuning. By the end, you’ll understand how to craft prompts that guide a model’s imagination rather than let it wander. So, are you ready to master the language of LLMs?

    What you will learn

    - The fundamental definition of prompting and why context is the driving force behind model behavior.
    - How zero‑shot prompting works: getting a model to extrapolate from its training without any additional fine‑tuning.
    - Techniques for building effective system prompts, personas, and assumed contexts.
    - Common pitfalls that lead to hallucinations and how to avoid them with clear contextual framing.
    - When to rely on contextual alignment versus full model fine‑tuning (and why more than 90% of cases don’t need the latter).

    Resources mentioned

    - “Retrieval‑Augmented Generation” overview (conceptual explanation).
    - Papers on zero‑shot learning and contextual prompting (e.g., “Prompting GPT‑3 to Reason”).
    - Example system prompts for persona‑based interactions (historian, pirate, sci‑fi author).

    Why this episode matters

    Understanding prompting is essential for anyone building or using AI products today. A well‑crafted prompt can unlock a model’s hidden capabilities, reduce costs by avoiding unnecessary fine‑tuning, and dramatically improve reliability, especially in high‑stakes domains like finance, healthcare, or education. Conversely, vague prompts lead to hallucinations and mistrust. This knowledge equips you to harness LLMs responsibly and effectively.

    Subscribe for more AI insights, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com to share topics you’d like covered.

    Keywords: prompting, context, zero‑shot learning, assumed context, system prompt, prompt engineering, LLM hallucination, retrieval‑augmented generation, fine‑tuning vs. contextual alignment, large language models.
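    The persona and assumed‑context techniques discussed here amount to assembling a system prompt before the user's question ever reaches the model. A minimal sketch using the common system/user chat‑message layout; the persona, context string, and question are invented examples:

```python
# Sketch: persona-grounded prompting instead of fine-tuning.
# The system message sets who the model is and what it may rely on;
# the user message carries only the actual question.

def build_messages(persona: str, context: str, question: str):
    system = (
        f"You are {persona}. "
        "Answer only from the context below; say 'I don't know' otherwise.\n"
        f"Context: {context}"
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": question},
    ]

msgs = build_messages(
    persona="a careful maritime historian",
    context="The Mary Celeste was found adrift in 1872.",
    question="When was the Mary Celeste found adrift?",
)
print(msgs[0]["role"], "->", msgs[1]["content"])
```

    Constraining the model to the supplied context ("answer only from the context below") is the contextual‑framing trick the episode recommends for cutting down hallucinations without touching the model's weights.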

    11 min
  4. AI in Banking (Fintech)

    FEB 21

    AI in Banking (Fintech)

    AI in Banking: Risks, Automation & the Future

    Hosted by Nathan Rigoni | Guest: Chris Rigoni – EVP, Head of Partner Banking and Payments at Axiom Bank, with 17 years in financial services and expertise in tokenization, payments, and AI‑driven innovation

    In this episode we explore how artificial intelligence is reshaping the banking industry, from tokenization and zero‑shot prompting to automated risk monitoring and customer‑centric services. What are the real‑world challenges banks face when integrating AI, and how can they balance innovation with stringent regulatory requirements? Join us as we unpack the promise and pitfalls of AI‑enabled banking and ask: can banks harness AI to cut costs, improve compliance, and still earn the trust of regulators and customers alike?

    What you will learn

    - The difference between tokenization for payments (Apple Pay, Google Pay, Samsung Pay) and “AI tokens” in the context of model outputs.
    - How zero‑shot prompting and contextual alignment can automate manual exception handling, fraud detection, and compliance workflows without full fine‑tuning.
    - Practical examples of AI‑driven risk assessment (BSA alerts, false‑positive reduction) and how sampling and validation keep models reliable.
    - Strategies for building explainability and trust with regulators, including case‑study‑style pilots and metric‑based approval processes.
    - The emerging role of AI in consumer‑facing fintech tools (budget bots, automated savings buckets) and the security considerations they raise.

    Resources mentioned

    - Axiom Bank overview and partner‑banking services (public site).
    - Tokenization standards for contactless payments (PCI DSS Tokenization Guide).
    - FAA‑style regulatory pilot framework for AI.
    - “Simple Finance” and “Mint” as examples of fintech platforms for automated budgeting.

    Why this episode matters

    Banking operates on thin margins, strict compliance, and massive data volumes. Understanding how AI can automate repetitive exception checks, improve fraud detection, and reduce operational risk is essential for any financial institution aiming to stay competitive. At the same time, the conversation highlights the trust gap between AI outputs and regulator expectations, a gap that, if bridged correctly, can unlock new revenue streams while safeguarding customer assets.

    Subscribe for more AI deep dives, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com to share topics you’d like us to cover.

    Keywords: AI in banking, tokenization, zero‑shot prompting, contextual alignment, risk mitigation, regulatory compliance, fraud detection, fintech automation, explainability, trust in AI, partner banking, payments, AI operationalization.

    1h 17m
  5. Basics of Pretraining and Finetuning

    FEB 14

    Basics of Pretraining and Finetuning

    Pre‑training vs Fine‑tuning: How AI Learns Its Basics

    Hosted by Nathan Rigoni

    In this episode we break down the two foundational stages that turn raw data into useful AI systems: pre‑training and fine‑tuning. What does a model actually learn when it reads billions of words or scans millions of images, and how do we reshape that knowledge into behaviors like answering questions, writing code, or describing pictures? By the end you’ll see why the split between “learning the mechanics” and “learning the behavior” is a game‑changer for building adaptable, efficient models. And you’ll be left wondering: could the next wave of AI rely more on clever fine‑tuning than on ever‑larger pre‑training datasets?

    What you will learn

    - The goal of pre‑training: teaching a model the fundamental mechanics of language or vision through massive supervised (next‑token) learning.
    - How fine‑tuning shifts the focus to behavior, using methods such as RLHF, instruction tuning, code tuning, and agentic fine‑tuning.
    - A concrete case study: ServiceNow’s Apriel model, which starts from the multimodal Pixtral backbone and is fine‑tuned for conversational VLM capabilities.
    - Trade‑offs in data volume, compute cost, and model size when choosing between larger pre‑trained models and smaller models enhanced by aggressive fine‑tuning.
    - Key terminology you’ll hear repeatedly: supervised pre‑training, reinforcement learning from human feedback (RLHF), instruction tuning, RLVR, multimodal fine‑tuning.

    Resources mentioned

    - OpenAI, “GPT‑4 Technical Report” (details on pre‑training and RLHF).
    - ServiceNow blog post on Apriel and the underlying Pixtral model.
    - Christiano et al., “Deep Reinforcement Learning from Human Preferences” (2017).
    - Hugging Face’s guide to instruction fine‑tuning with Transformers.
    - Papers on multimodal vision‑language models such as CLIP and Flamingo.

    Why this episode matters

    Grasping the distinction between pre‑training and fine‑tuning lets engineers, product leaders, and AI enthusiasts make smarter choices about model selection, cost management, and deployment strategy. Whether you’re building a chatbot, an image captioner, or an autonomous agent, knowing when to invest in massive pre‑training versus targeted fine‑tuning can dramatically impact performance, latency, and scalability. This insight also highlights a growing trend: leveraging modest multimodal backbones and unlocking them with specialized fine‑tuning to create edge‑friendly AI solutions.

    Subscribe for more bite‑sized AI deep dives, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com. Your feedback shapes future episodes, so keep the conversation going!

    Keywords: pre‑training, fine‑tuning, RLHF, instruction tuning, multimodal models, vision‑language models, Pixtral, Apriel, reinforcement learning from human feedback, AI curriculum learning, model behavior, edge AI.
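    The split between the two stages is easiest to see in the shape of the training data itself. A toy sketch with made‑up examples: pre‑training pairs every prefix with its literal next token, while instruction tuning pairs a prompt with a desired behavior.

```python
# The same idea, two training regimes. Pre-training learns "what comes
# next"; fine-tuning learns "what to do". Corpus and targets are invented.

text = ["the", "cat", "sat", "on", "the", "mat"]

# Pre-training examples: (prefix, next token) for every position.
pretrain = [(text[:i], text[i]) for i in range(1, len(text))]

# Instruction-tuning example: (instruction, target behavior).
finetune = [("Summarize: the cat sat on the mat", "A cat sat on a mat.")]

print(pretrain[0])    # first prefix/next-token pair
print(len(pretrain))  # one example per predictable position
```

    The asymmetry explains the cost gap the episode highlights: pre‑training data is effectively free (any text yields prefix/next‑token pairs), while fine‑tuning data encodes judgments about desired behavior and usually has to be curated or human‑labeled.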

    11 min
  6. Basics Of Multimodal Models

    FEB 7

    Basics Of Multimodal Models

    Multimodal Models: Vision, Language, and Beyond

    Hosted by Nathan Rigoni

    In this episode we untangle the world of multimodal models: systems that learn from images, text, audio, and sometimes even more exotic data types. How does a model fuse a picture of a cat with the word “feline” and the sound of a meow into a single understanding? We explore the building blocks, from early CLIP embeddings to the latest vision‑language giants, and show why these hybrid models are reshaping AI’s ability to perceive and describe the world. Can a single hidden state truly capture the richness of multiple senses, and what does that mean for the future of AI applications?

    What you will learn

    - The core idea behind multimodal models: merging separate data modalities into a shared hidden representation.
    - How dual‑input architectures and cross‑modal translation (e.g., text‑to‑image, image‑to‑text) work in practice.
    - Key milestones such as CLIP, FLIP, and modern vision‑language models like Gemini and Pixtral.
    - Real‑world use cases: image generation from prompts, captioning, audio‑guided language tasks, and multimodal classification.
    - The challenges of scaling multimodal models, including data diversity, hidden‑state alignment, and computational cost.

    Resources mentioned

    - CLIP (Contrastive Language‑Image Pre‑training) paper and its open‑source implementation.
    - Recent vision‑language model releases: Gemini, Pixtral, and other multimodal LLMs.
    - Suggested background listening: the “Basics of Large Language Models” and “Basics of Vision Learning” episodes of The Phront Room.
    - Further reading on multimodal embeddings and cross‑modal retrieval.

    Why this episode matters

    Understanding multimodal models is essential for anyone who wants AI that can see, hear, and talk, bridging the gap between isolated language or vision systems and truly integrated perception. As these models become the backbone of next‑generation applications, from creative image synthesis to audio‑driven assistants, grasping their inner workings helps developers build more robust, interpretable, and innovative solutions while navigating the added complexity and resource demands they bring.

    Subscribe for more AI deep dives, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com.

    Keywords: multimodal models, vision‑language models, CLIP, FLIP, cross‑modal translation, hidden state, image generation, captioning, audio‑text integration, multimodal embeddings, AI perception, Gemini, Pixtral.
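    The shared‑hidden‑representation idea behind CLIP‑style models can be shown with toy numbers: an image and several candidate captions are embedded into one space, and the best caption is simply the nearest vector. The 3‑dimensional embeddings below are invented for the sketch; real models learn them contrastively from millions of image–text pairs.

```python
# Toy CLIP-style matching: images and captions live in one shared
# embedding space, so cross-modal comparison is just vector similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

# Hypothetical embedding of a cat photo (from an image encoder).
image_embedding = [0.9, 0.1, 0.1]

# Hypothetical caption embeddings (from a text encoder).
captions = {
    "a cat on a sofa":    [0.8, 0.2, 0.1],
    "a plane taking off": [0.1, 0.9, 0.2],
    "a bowl of soup":     [0.1, 0.2, 0.9],
}

# Zero-shot classification: pick the caption nearest to the image.
best = max(captions, key=lambda c: cosine(image_embedding, captions[c]))
print(best)
```

    Running the same comparison in the other direction (one caption against many image embeddings) gives cross‑modal retrieval, which is why a single shared space supports both captioning and image search.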

    9 min
  7. Basics of Large Language Models

    JAN 27

    Basics of Large Language Models

    Large Language Models: Building Blocks & Challenges

    Hosted by Nathan Rigoni

    In this episode we dive into the heart of today’s AI: large language models (LLMs). What makes these gigantic text predictors tick, and why do they sometimes hallucinate or exhibit bias? We’ll explore how LLMs are trained, what “next‑token prediction” really means, and the tricks (chain‑of‑thought prompting, reinforcement learning) that turn a raw predictor into a problem‑solving assistant. Can a model that has never seen a question truly reason to an answer, or is it just clever memorization?

    What you will learn

    - The core components of an LLM: tokenizer, encoder, transformer blocks, and the softmax decoder.
    - Why training at terabyte‑scale data volumes and quintillion‑level token iterations is required for emergent abilities.
    - How chain‑of‑thought prompting and the ReAct framework give models a “scratch pad” for better reasoning.
    - The role of fine‑tuning and reinforcement learning from human feedback in shaping model behavior.
    - Key pitfalls: lack of byte‑level tokenization, spatial reasoning limits, Western‑biased training data, and context‑window constraints (from ~128k tokens to ~2M tokens).

    Resources mentioned

    - Tokenization basics (see the dedicated “NLP – Tokenization” episode).
    - Auto‑encoder fundamentals (see the “NLP – Auto Encoders” episode).
    - Papers on chain‑of‑thought prompting and ReAct agents (discussed in the episode).
    - Information on context‑window sizes and scaling trends (e.g., 128k → 2M tokens).

    Why this episode matters

    Understanding LLM architecture demystifies why these models can generate coherent prose, write code, or answer complex queries, and also why they can hallucinate, misinterpret spatial concepts, or inherit cultural bias. Grasping these strengths and limits is essential for anyone building AI products, evaluating model outputs, or simply wanting to use LLMs responsibly.

    Subscribe for more AI deep dives, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com.

    Keywords: large language models, next‑token prediction, tokenizer, transformer, chain of thought, ReAct framework, reinforcement learning, context window, AI hallucination, model bias.
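    Next‑token prediction itself needs nothing exotic to demonstrate. In this sketch a bigram count model with a softmax over counts stands in for the transformer‑plus‑softmax‑decoder stack the episode describes; the corpus is a made‑up toy, and a real LLM replaces the counting with billions of learned parameters.

```python
# Minimal next-token predictor: count which token follows which, then
# turn the counts into a probability distribution with a softmax.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count successor tokens for every token (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_token_probs(token):
    """Softmax over successor counts: the decoder's final step."""
    counts = follows[token]
    z = sum(math.exp(c) for c in counts.values())
    return {t: math.exp(c) / z for t, c in counts.items()}

probs = next_token_probs("the")
print(max(probs, key=probs.get))  # most likely continuation of "the"
```

    Generation is just this step in a loop: sample a token from the distribution, append it to the context, and predict again, which is why everything an LLM does ultimately reduces to next‑token prediction.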

    15 min
  8. AI in Rocket Science and Space

    JAN 24

    AI in Rocket Science and Space

    AI in Space: How AI Is Transforming NASA’s Engineering

    Hosted by Nathan Rigoni | Guest: Thomas Brooks – Aerospace Engineer, NASA Marshall Space Flight Center Advanced Concepts Office

    In this episode of The Phront Room we dive into the ways artificial intelligence is reshaping space exploration, from automating routine meetings to enabling autonomous robots that can explore distant worlds without human latency. Thomas Brooks shares real‑world examples from his work on spacecraft conceptual design, thermal systems, and the emerging “agentic coding” tools that are already helping engineers write code, generate designs, and even build entire websites. What if a rover could decide its own path on Mars while you sip coffee on Earth?

    What you will learn

    - The core challenges of spacecraft design (size, weight, power) and how AI‑driven “feasibility engineering” can speed up early‑stage trade studies.
    - How AI‑powered tools such as Cline and anti‑gravity enable natural‑language command‑line automation and rapid website generation for engineering documentation.
    - The concept of “edge AI” on rovers and drones that can make real‑time decisions without waiting for ground control.
    - How large language models can act as searchable knowledge bases for massive technical documents (RAG – retrieval‑augmented generation).
    - The social and ethical implications of AI‑augmented space programs, from data‑center heat management to the future of engineering work identity.

    Resources mentioned

    - Cline (local LLM assistant) – https://cline.bot
    - anti‑gravity (cloud‑linked LLM) – https://anti-gravity.ai
    - Microsoft 365 Copilot – https://www.microsoft.com/microsoft-365/copilot
    - GSFC chat‑RAG tool – internal NASA implementation (referenced in discussion)
    - NASA’s Marshall Space Flight Center Advanced Concepts Office (overview) – https://www.nasa.gov/centers/marshall/advanced-concepts

    Why this episode matters

    Space missions are limited by mass, power budgets, and communication latency. AI can compress design cycles, automate drudge work, and give robots the autonomy to operate safely in hazardous environments, potentially increasing mission success rates while reducing cost and risk. At the same time, Thomas raises critical questions about control, transparency, and the future role of human engineers, making this conversation essential for anyone interested in the next frontier of aerospace AI.

    Subscribe for more insights, visit www.phronesis-analytics.com, or reach out at nathan.rigoni@phronesis-analytics.com.

    Keywords: AI in space, NASA engineering, autonomous rovers, edge AI, LLM‑assisted design, RAG, agentic coding, spacecraft conceptual design, thermal engineering, AI ethics, future of work.

    1h 17m
