The Ruby AI Podcast

Valentino Stoll, Joe Leo

The Ruby AI Podcast explores the intersection of Ruby programming and artificial intelligence, featuring expert discussions, innovative projects, and practical insights. Join us as we interview industry leaders and developers to uncover how Ruby is shaping the future of AI.

  1. New Year, New Ruby: Agents, Wishes, and a Calm Ruby 4

    27 JAN

    Ruby turns 30, Ruby 4 quietly ships, and the AI tooling arms race shows signs of maturity. Valentino and Joe unpack what stability really means for a language in its third decade, debate agent-driven development, AI “slop,” binary distribution, and whether open source incentives are breaking down—or simply evolving.

    Mentioned In The Show
    A grab-bag of tools, projects, and references Valentino & Joe brought up.

    Ruby & Core Ecosystem
    Ruby Gets A Fresh Look — Official Ruby programming language site (news, downloads, docs) now with a great new look.
    Ruby Kaigi — Ruby’s flagship conference (talks, schedules, archives).
    Bundler — Ruby dependency manager used across the ecosystem.

    AI Coding Tools
    Claude Code — Anthropic’s CLI coding assistant workflow discussed heavily in the episode.
    OpenAI Codex — OpenAI’s coding agent/tooling referenced as an alternative workflow.

    Ruby Web Frameworks & Architecture
    Rails Framework — Ruby on Rails, referenced as the default baseline for many apps.
    Jumpstart Rails — Rails starter kits/templates mentioned as a “pick a Rails” approach.
    Roda Framework — Jeremy Evans’ web toolkit (lighter than Rails, bigger than Sinatra).
    dry-rb Suite — Ruby gems for functional-ish architecture and explicit business logic.
    Trailblazer — High-level architecture for operations, workflows, and domain logic.

    Quality, Testing, and Practice
    Better Specs — Community-curated RSpec guidelines mentioned as a spec style target.
    Datadog — Error monitoring referenced in the “well-defined bug + stack trace” workflow.

    Open Source Sustainability
    GitHub Sponsors — Sponsorship mechanism discussed as one (partial) monetization path.

    People Mentioned
    Sandi Metz — Referenced as the “code whisperer” ideal for idiomatic Ruby guidance.

    51 min
  2. Running Self-Hosted Models with Ruby and Chris Hasinski

    02/12/2025

    In this episode of the Ruby AI Podcast, hosts Valentino Stoll and Joe Leo welcome AI and Ruby expert Chris Hasinski. They delve into the benefits and challenges of self-hosting AI models, including control over model updates, cost considerations, and the ability to fine-tune models. Chris shares his journey from machine learning at UC Davis to his extensive work in AI and Ruby, touching on his contributions to open source projects and the Ruby AI community. The discussion also covers the limitations of current LLMs (large language models) in generating Ruby code, the importance of high-quality data for effective AI, and the potential for Ruby to become a strong contender in AI development. Whether you're a Ruby enthusiast or simply interested in the intersection of AI and software development, this episode offers valuable insights and practical advice. (A brief sketch of querying a locally hosted model from Ruby follows this entry.)

    00:00 Introduction and Guest Welcome
    00:31 Why Self-Host Models?
    01:28 Challenges and Benefits of Self-Hosting
    03:14 Chris's Background in Machine Learning
    04:13 Applications Beyond Text
    06:39 Fine-Tuning Models
    12:27 Ruby in Machine Learning
    16:06 Distributed Training and Model Porting
    18:22 Choosing and Deploying Models
    25:19 Testing and Data Engineering in Ruby
    27:56 Database Naming Conventions in Different Languages
    28:19 Importance of Data Quality for AI
    18:03 Monitoring Locally Hosted AI Models
    29:37 Challenges with LLMs and Performance Tracking
    31:09 Improving Developer Experience in Ruby
    31:45 Ruby's Ecosystem for Machine Learning
    32:43 The Need for Investment in Ruby's AI Tools
    38:25 Challenges with AI Code Generation in Ruby
    43:35 Future Prospects for Ruby in AI
    51:26 Conclusion and Final Thoughts

    54 min
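    A minimal sketch of the kind of setup discussed above, querying a locally hosted model from plain Ruby. It assumes an Ollama server on localhost:11434 with the llama3 model pulled; the episode does not prescribe this exact stack.

    # Call a locally hosted model through Ollama's HTTP generate endpoint.
    # Assumes `ollama serve` is running and `ollama pull llama3` has been done.
    require "net/http"
    require "json"

    uri = URI("http://localhost:11434/api/generate")
    payload = {
      model: "llama3",
      prompt: "Summarize the trade-offs of self-hosting LLMs.",
      stream: false
    }

    response = Net::HTTP.post(uri, payload.to_json, "Content-Type" => "application/json")
    puts JSON.parse(response.body)["response"]

    Swapping the endpoint and payload is usually all it takes to point the same code at another self-hosted runtime.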
  3. The Latent Spark: Carmine Paolino on Ruby’s AI Reboot

    18/11/2025

    In this episode of the Ruby AI Podcast, hosts Joe Leo and Valentino Stoll interview Carmine Paolino, the developer behind RubyLLM. The discussion covers the significant strides and rapid adoption of RubyLLM since its release, rooted in Paolino's philosophy of building simple, effective, and adaptable tools. The podcast delves into the nuances of upgrading RubyLLM, its ever-expanding functionality, and the core principles driving its design. Paolino reflects on the personal motivations and community-driven contributions that have propelled the project to over 3.6 million downloads. Key topics include the philosophy of progressive disclosure, the challenges of multi-agent systems in AI, and innovative ways to manage contexts in LLMs. The episode also touches on improving Ruby’s concurrency handling using Async and Ractors, the future of AI app development in Ruby, and practical advice for developers leveraging AI in their applications. (A minimal RubyLLM sketch follows this entry.)

    00:00 Introduction and Guest Welcome
    00:39 Dependabot Upgrade Concerns
    01:22 RubyLLM's Success and Philosophy
    05:03 Progressive Disclosure and Model Registry
    08:32 Challenges with Provider Mechanisms
    16:55 Multi-Agent AI-Assisted Development
    27:09 Understanding Context Limitations in LLMs
    28:20 Exploring Context Engineering in RubyLLM
    29:27 Benchmarking and Evaluation in RubyLLM
    30:34 The Role of Agents in RubyLLM
    39:09 The Future of AI Apps with Ruby
    39:58 Async and Ruby: Enhancing Performance
    45:12 Practical Applications and Challenges
    49:01 Conclusion and Final Thoughts

    52 min
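    For orientation, a minimal sketch of the RubyLLM usage pattern discussed above. Method names follow the gem's README at the time of writing, and the provider key is an assumption, so treat this as a hedged example rather than a canonical reference.

    # Basic RubyLLM usage (gem: ruby_llm); check the gem's docs for the current API.
    require "ruby_llm"

    RubyLLM.configure do |config|
      config.openai_api_key = ENV["OPENAI_API_KEY"] # assumes an OpenAI-backed setup
    end

    chat = RubyLLM.chat
    response = chat.ask("What makes Ruby a good fit for building AI apps?")
    puts response.content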
  4. The TLDR of AI Dev: Real Workflows with Justin Searls

    21/10/2025

    In this episode of the Ruby AI Podcast, co-hosts Valentino Stoll and Joe Leo engage in a lively discussion with guest Justin Searls. They explore the evolving landscape of software development with agentic AI tools, comparing traditional agile methodologies with emerging AI-driven practices. Justin shares his experiences with refactoring and the challenges of integrating AI tools into development workflows. The conversation touches on the suitability of AI in coding, philosophical perspectives on reinforcing proper software practices, and the future potential of these technologies. Justin also provides valuable insights on configuring AI tools for better productivity and discusses his personal coping strategies for the frustrations of modern AI capabilities.

    00:00 Introduction and Hosts' Banter
    00:30 Guest Introduction: Justin Searls
    03:13 Justin's Career and Conference Talks
    07:52 The Evolution of Agile and Development Practices
    16:07 Challenges with AI and Iterative Development
    27:47 Recalibrating Development Processes
    28:00 Adoption of Pivotal Labs' Methods
    28:28 Continuous Integration and Testing
    29:21 AI in Development: Current State and Challenges
    30:16 The Role of AI Agents in Development
    32:17 Frustrations with AI Tools
    35:03 Philosophical Reflections on AI in Development
    36:16 Generative vs. Subtractive AI
    37:06 The Future of AI in Software Development
    39:27 Balancing Coding Enjoyment and Productivity
    44:02 Capability vs. Suitability in AI Tools
    46:35 Prompt Engineering Tips and Tricks
    52:39 Closing Thoughts and Plugs

    55 min
  5. Real-World Ruby AI: Practical Systems That Work

    07/10/2025

    In this episode of the Ruby AI Podcast, co-hosts Joe Leo and Valentino Stoll, alongside guest Amanda Bizzinotto from Ombu Labs, delve into the ongoing controversy within the Ruby community involving Ruby Central, Shopify, and Bundler/RubyGems. While both Valentino and Amanda share their perspectives on the situation, the conversation swiftly transitions into Amanda's journey and current work in AI and machine learning at Ombu Labs. The episode highlights various AI initiatives, including the creation of an AI bot to streamline internal processes, automated Rails upgrade roadmaps, and multi-agent architectures aimed at enhancing efficiency in Rails projects. Amanda also discusses the challenges of integrating AI in consultancy services and shares some insights on the tools and strategies used at Ombu Labs. The podcast concludes with exciting updates about Amanda's recent work, Joe's announcements on upcoming projects including Phoenix's public release, and Valentino's discovery of a new user interface for Claude Swarm. (A small pgvector/Neighbor search sketch follows this entry.)

    00:00 Introduction and Welcome
    00:26 Ruby Community Controversy
    04:37 Amanda's AI Journey
    08:45 AI in Business and Consultancy
    16:24 AI-Powered Tools and Applications
    23:09 Managing Knowledge Base Updates
    24:42 Prompting Strategies and Agentic Workflows
    26:02 Understanding Workflows vs. Agents
    28:37 Observability in AI Systems
    29:06 Advanced Prompting Techniques
    31:08 Multi-Agent Architectures
    34:32 Ruby AI Gems and Libraries
    37:09 Exciting Announcements and Future Plans
    41:44 Conclusion and Final Thoughts

    Mentioned In The Show:
    AI for Rails upgrades: FastRuby automated roadmap
    PGVector and Neighbor gem
    Guardrails.ai for hallucination control (https://www.guardrailsai.com)
    Microsoft Presidio for PII stripping
    Observability with LangFuse (https://www.langfuse.com)
    Prompt engineering techniques: Chain-of-Thought, ReAct pattern article
    ActiveAgent
    LangChain.rb
    DSPy.rb
    Phoenix AI upgrade assistant public beta Oct 15 event
    Ombu Labs roadmap tool live now
    Swarm UI for Claude Swarm by Parruda
    Ombu Labs – https://ombulabs.com
    Artificial Ruby NYC meetup – https://artificialruby.ai
    Shopify Claude Swarm project – https://github.com/shopify/claude-swarm

    42 min
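    As a rough illustration of the pgvector and Neighbor pairing mentioned above, here is a hedged sketch of nearest-neighbor search in a Rails model. The Document model, the embedding column, and the embed_query helper are assumptions for the example, not details from the episode.

    # Semantic search with pgvector via the Neighbor gem.
    # Assumes a Rails app with the pgvector extension enabled and a
    # documents.embedding vector column.
    class Document < ApplicationRecord
      has_neighbors :embedding
    end

    query_vector = embed_query("How do I plan a Rails upgrade?") # hypothetical embedding helper
    Document.nearest_neighbors(:embedding, query_vector, distance: "cosine").first(5)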
  6. Contracts and Code: The Realities of AI Development

    23/09/2025

    In this episode, Valentino Stoll and Joe Leo unpack the widening gap between headline-grabbing AI salaries and the day-to-day realities of building sustainable AI products. From sports-style contracts stuffed with equity to the true cost of running large models, they explore why incremental gains often matter more than hype. The conversation dives into the messy art of benchmarking LLMs, the fresh evaluation tools emerging in the Ruby ecosystem, and new OpenAI features that change how prompts, tools, and reasoning tokens are handled. Along the way, they weigh the business math of switching models, debate standardization versus playful experimentation in Ruby, and highlight frameworks like RubyLLM, Phoenix, and Leva that are reshaping how developers ship AI features. (A short sketch of reusing reasoning tokens across calls follows this entry.)

    Takeaways
    The importance of marketing oneself in the tech industry.
    Disparity in AI salaries reflects market demand and hype.
    AI contracts often include equity, complicating true value assessment.
    The AI race lacks clear winners, with incremental improvements across models.
    User experience often outweighs model efficacy in AI products.
    Prompt engineering is crucial for optimizing model performance.
    Benchmarking AI models is complex and requires tailored evaluation sets.
    Existing tools for AI evaluation are often insufficient for specific needs.
    Cost analysis is critical when choosing AI models for business.
    Incremental improvements in AI models may not meet user expectations.
    You can constrain tool outputs to specific grammars for flexibility.
    Asking models to think out loud can enhance tool calls.
    Reasoning tokens can be reused in subsequent AI calls.
    Evaluating AI frameworks is crucial for business decisions.
    Ruby's integration in AI is becoming more prominent.
    The AI landscape is rapidly evolving, requiring adaptability.
    Hype cycles can mislead developers about tool longevity.
    Ruby offers a unique user experience for developers.
    Tinkering with code fosters creativity and innovation.
    The playful nature of Ruby can lead to unexpected insights.

    48 min
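    To make the reasoning-token takeaway concrete, a hedged sketch of chaining two calls through OpenAI's Responses API so the follow-up can build on the previous turn. The model name and prompts are placeholders; verify the request shape against OpenAI's current documentation.

    # Chain two calls so the second reuses the first turn's context
    # (including reasoning items) via previous_response_id.
    require "net/http"
    require "json"

    def openai_response(body)
      uri = URI("https://api.openai.com/v1/responses")
      headers = { "Authorization" => "Bearer #{ENV['OPENAI_API_KEY']}",
                  "Content-Type" => "application/json" }
      JSON.parse(Net::HTTP.post(uri, body.to_json, headers).body)
    end

    first = openai_response({ model: "o4-mini", input: "Outline a plan to add evals to a Rails app." })

    followup = openai_response({ model: "o4-mini",
                                 input: "Estimate the effort for a two-person team.",
                                 previous_response_id: first["id"] })
    pp followup["output"] # array of output items containing the assistant's reply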

