Inspire AI: Transforming RVA Through Technology and Automation

AI Ready RVA

Our mission is to cultivate AI literacy in the Greater Richmond Region through awareness, community engagement, education, and advocacy. In this podcast, we spotlight companies and individuals in the region who are pioneering the development and use of AI. 

  1. 2D AGO

    Ep 77 - The Ralph Loop: How Iteration Turns AI Into A Reliable Work System

    Most teams are still using AI like a vending machine: type a prompt, hope for the right answer, then waste time nudging it closer. We take a different route and unpack the Ralph Loop, a deceptively simple pattern that turns AI from a one-shot helper into a process that improves through iteration. We explain where the idea comes from, why the name matters, and what “intelligence lives in the loop” really means.

    Then we ground it with two practical stories. First, an engineering team migrates hundreds of tests by letting AI convert files, running the test suite after each pass, and feeding failures back into the next attempt. Next, a product leader stops endlessly editing AI drafts and instead runs a loop: draft, critique, revise, repeat. Same model, dramatically better outcomes, because refinement is built into the workflow.

    We also get honest about what can go wrong: infinite loops when “done” is vague, garbage amplification when the task is unclear, cost blowups when retries are unbounded, and silent drift when there are no checkpoints.

    The big leadership takeaway is the shift in responsibility. AI does not remove judgment, it demands more of it, because the real skill becomes designing the system that gets the work done over time. If you want a mental model you can use next week, we share the Ralph lens: is the task iterative, can you define success clearly, and can you let it run without you for a while?

    If that clicks, subscribe, share this with a builder or leader on your team, and leave a review with the loop you want to try first. Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
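    The pattern the episode describes can be sketched in a few lines: generate, verify, feed failures back, repeat — with an explicit "done" test and a bounded retry budget so none of the failure modes above (infinite loops, unbounded cost) can bite. A minimal sketch; the `generate` and `evaluate` callables are hypothetical stand-ins for an AI call and a verification step such as a test suite.

```python
# Illustrative sketch of the "Ralph Loop": iterate generate -> evaluate,
# feeding each round's failures back into the next attempt. The callables
# are assumptions for the sketch, not a published API.

def ralph_loop(generate, evaluate, max_iterations=5):
    """generate(feedback) -> candidate; evaluate(candidate) -> (done, feedback)."""
    feedback = None
    for attempt in range(1, max_iterations + 1):
        candidate = generate(feedback)        # draft (or redraft with critique)
        done, feedback = evaluate(candidate)  # verify: run tests, apply checklist
        if done:                              # explicit, unambiguous "done"
            return candidate, attempt
    # Bounded retries: fail loudly instead of looping (and billing) forever.
    raise RuntimeError(f"no passing result after {max_iterations} iterations")
```

    In the test-migration story above, `generate` would be the AI converting a file and `evaluate` would be the test suite run that reports failures back.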

    10 min
  2. APR 27

    Ep 76 - The BMAD Method For Building Reliable Agentic Systems

    AI can write code on demand now, but that doesn’t mean we’re building better software. When we treat AI like a chat window with a long memory, projects drift: requirements change midstream, agents hallucinate assumptions, and systems that felt “fast” become fragile. I walk through the hidden cost of vibe coding and why discipline matters more than ever in an age where intelligence is cheap.

    We break down a framework serious AI builders are converging on: the BMAD method (Breakthrough Method for Agile AI Driven Development). The heart of BMAD is simple but powerful: treat AI like a team of specialized agents with clear roles, then give that team shared artifacts that act as the source of truth. PRDs, ADRs, story files, and project context become durable, reviewable memory, so you move from conversation-driven development to system-driven development. The result is contract-based intelligence, where agents execute what’s written instead of guessing what you meant.

    From there, we get practical about reliability and security for agentic systems. We map the core loop of goals, planning, execution, and verification, and explain why verification gates, adversarial reviews, and tests are not “nice to have” if you want production-grade outcomes. We also cover real threats like prompt injection and tool hijacking, plus defenses like context minimization, least privilege, action isolation, and audit trails. If you only take one step today, add a readiness gate that forces clarity before you build.

    If you found this useful, subscribe to Inspire AI, share the episode with a builder on your team, and leave a review so more leaders can find it. What’s the one place your AI workflow needs more structure right now?
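    The episode's one recommended step, a readiness gate, can be sketched as a check that the shared artifacts exist before any agent work starts. The artifact names below are illustrative assumptions, not part of any official BMAD tooling.

```python
from pathlib import Path

# Minimal sketch of a BMAD-style readiness gate: refuse to start agent-driven
# work until the shared source-of-truth artifacts actually exist on disk.
# File and directory names are hypothetical examples.

REQUIRED_ARTIFACTS = ["PRD.md", "ADR.md", "stories/", "project-context.md"]

def readiness_gate(project_root):
    """Return the list of missing artifacts; an empty list means ready to build."""
    root = Path(project_root)
    missing = []
    for name in REQUIRED_ARTIFACTS:
        path = root / name
        if name.endswith("/"):
            if not path.is_dir():
                missing.append(name)
        elif not path.is_file():
            missing.append(name)
    return missing
```

    Wiring this in front of an agent pipeline turns "did we write the contract?" into a hard stop rather than a hope.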

    9 min
  3. APR 20

    Ep 75 - Where Human Judgment Belongs Throughout A Multi-Agent Workflow

    Multi-agent AI feels like a breakthrough right up until you realize the real problem isn’t intelligence anymore, it’s coordination. When planning agents, retrieval agents, tool-using agents, and verification agents all make decisions, a simple “final answer review” can miss the most dangerous failures: bad handoffs, invisible drift, and silent coordination breakdowns where every step looks fine but the system still misses the goal. We dig into why Human in the Loop has to evolve from a last-minute checkpoint into a true control layer for AI systems that act.

    We walk through a practical, high-leverage framework for human oversight in multi-agent systems: pre-execution oversight (approve plans, set constraints, define boundaries), process intervention (monitor decisions mid-flight, catch loops, block unexpected tool use), and post-execution evaluation (audit trajectories, feed corrections back into the system). The big takeaway is simple: oversight only matters when it can still change the outcome, so we place human judgment at points of irreversibility and high uncertainty.

    Then we get concrete about AI governance and AI safety: common multi-agent failure modes like agent misalignment, cascading errors, tool misuse at scale, and silent coordination failure. We also cover evaluation metrics that actually reflect system behavior, such as trajectory correctness, handoff integrity, intervention rate, recovery success rate, and true system-level task success. If you’re building an agent factory across learning, workflow, and production agents, this is the playbook for scaling autonomy without scaling risk.

    Subscribe, share this with your team, and leave a review telling us: where should human judgment live in your AI stack?
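    The placement rule in the episode — put human judgment at points of irreversibility and high uncertainty — can be sketched as a single routing check run before (or during) each agent step. The action names and confidence threshold are assumptions for the sketch, not any framework's real API.

```python
# Sketch of the oversight control layer: route a step to a human whenever it
# is irreversible, unexpected, or low-confidence, instead of reviewing only
# the final answer. All values here are illustrative assumptions.

IRREVERSIBLE_ACTIONS = {"send_email", "delete_record", "update_policy"}

def needs_human_approval(action, confidence, approved_tools):
    """Pre-execution / in-flight gate for a single agent step."""
    if action in IRREVERSIBLE_ACTIONS:   # irreversibility gate
        return True
    if action not in approved_tools:     # unexpected tool use, caught mid-flight
        return True
    if confidence < 0.7:                 # high-uncertainty gate
        return True
    return False
```

    Post-execution evaluation then audits the trajectory of every step that passed this gate, closing the loop the episode describes.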

    10 min
  4. APR 13

    Ep 74 - Agentic Workflows: Did The Job Actually Get Done?

    An AI agent that confidently says “done” can still be the most expensive kind of wrong. We start with a simple test of reality: when an agent updates a policy document, who was notified, what changed, what got logged, and what state did it actually leave behind? That gap between a polished response and a verified result is where agent hype turns into operational risk.

    We walk through task-based evaluation, the practical way to measure agentic workflows that act through tools and trigger real system changes. The key framework is defining every task with a goal state (what must be true at the end) and a constraint set (what must never happen on the way). From there, we build a metrics stack that goes beyond “did it sound helpful” into what engineering teams can defend: task success rate, P95 completion time, tool-use correctness, step-level accuracy, partial progress, and especially catastrophic failure rate. If 10% of runs cause irreversible damage, the system isn’t “90% successful,” it’s not deployable.

    Evaluation also can’t be a one-time checkpoint. We map a full lifecycle from offline testing to simulation and staging, then canary releases, and finally production monitoring with continuous evaluation. Along the way we call out the hidden killer: collateral damage, when the agent completes the main task but breaks something adjacent. We close by zooming out to AI governance and leadership decisions, including autonomy tiers and the principle that autonomy must be earned through evidence, not assumed through capability.

    Subscribe to Inspire AI, share this with a builder who ships agents, and leave a review with the metric you think most teams ignore. What’s your non-negotiable constraint for autonomous systems?
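    The metrics stack above can be computed over a batch of recorded agent runs. A minimal sketch: each run is a dict with hypothetical fields for goal state, constraint violations, irreversible damage, and wall-clock time; a run counts as a success only if the goal state holds and no constraint broke.

```python
# Sketch of task-based evaluation over a batch of agent runs. The run fields
# (goal_state_met, constraints_violated, irreversible_damage, seconds) are
# assumed for illustration; the P95 computation is a simple index estimate.

def evaluate_runs(runs):
    n = len(runs)
    successes = sum(1 for r in runs
                    if r["goal_state_met"] and not r["constraints_violated"])
    catastrophic = sum(1 for r in runs if r["irreversible_damage"])
    times = sorted(r["seconds"] for r in runs)
    p95 = times[min(n - 1, int(0.95 * n))]   # rough P95 completion time
    return {
        "task_success_rate": successes / n,
        "catastrophic_failure_rate": catastrophic / n,
        "p95_completion_time": p95,
    }
```

    Note how the episode's point falls out of the numbers: a batch with a 10% catastrophic failure rate still reports it separately — a 0.9 success rate does not make the system deployable.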

    11 min
  5. APR 6

    Ep 73 - The AI Race: Winner Takes All

    The AI race is quietly changing shape, and if you’re still tracking it like a scoreboard of model releases, you’re going to miss the real winners. We step back from the noise and make the case that the decisive battleground is physical: electricity, chips, land, permits, cooling, grid connections, and the ability to run AI reliably at scale. The question shifts from “Can we build it?” to “Can we power it, place it, and operate it everywhere people need it?”

    We share the core framework we use to evaluate AI strategy in the real world: AI advantage equals energy times compute times chips times capital times distribution. We unpack why energy becomes the new bottleneck as data centers surge in electricity demand, why compute is constrained by infrastructure timelines, why chips remain a concentrated source of leverage, and why capital can’t outrun the physics of buildouts. Then we dig into the most underrated factor: distribution, where the race turns from innovation to integration inside workflows, factories, hospitals, logistics, and classrooms.

    We also map the global landscape with clearer lenses: US strength in frontier power, China’s accelerating edge in industrial diffusion, and Europe’s slower but powerful influence through regulation, compliance, and trust frameworks that shape what gets deployed and where. As open models rise and costs fall, we argue the advantage of having the “best model” shrinks while the advantage of deploying faster and operating cheaper grows.

    If you’re leading AI adoption, investing, or setting strategy, listen for the questions that matter: where will your AI run, what infrastructure dependencies are you accepting, and are you optimizing for capability or usability? Subscribe for more practical frameworks, share this with a teammate, and leave a review with the biggest bottleneck you’re facing right now.
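    The episode's framework is multiplicative, and taking that literally is the whole point: advantage is the product of the five factors, not the sum, so a near-zero score on any one factor (say, distribution) collapses the position no matter how strong the others are. A sketch with illustrative 0-1 ratings:

```python
from math import prod

# The episode's strategy formula taken literally:
#   AI advantage = energy * compute * chips * capital * distribution
# Scores are illustrative 0-1 ratings, not real measurements.

FACTORS = ("energy", "compute", "chips", "capital", "distribution")

def ai_advantage(scores):
    """scores: dict mapping each factor name to a 0-1 rating."""
    return prod(scores[f] for f in FACTORS)
```

    A player rating 0.9 everywhere scores about 0.59; drop just distribution to 0.1 and the product falls to about 0.07 — the "best model" cannot buy back a missing factor.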

    9 min
  6. MAR 30

    Ep 72 - The Future of Work: Designing AI-Augmented Roles w/ Vivek Gupta

    AI is moving so fast that “keeping up” can start to feel like a losing game, and that speed is exactly what’s reshaping the future of work. We sit down with Vivek, an IT services leader building an enterprise AI platform focused on digital foundation and organization-wide AI transformation, to talk about what’s changing underneath our job roles right now. The big idea is simple but urgent: AI isn’t just another productivity tool. It’s a system-level shift that’s pushing every department to redesign how work gets done.

    We dig into why AI adoption often starts the messy way: one team at a time. Sales picks a vendor, support picks another, finance picks a third. Vivek explains why that piecemeal approach creates AI silos and multiplies the hard parts like data exposure, guardrails, compliance controls, and maintenance. Over time, you lose consistency, visibility, and the ability to manage a single AI strategy across the business. If you care about governance, cost control, and brand consistency, this part will hit home.

    Then we get concrete about what an AI-augmented worker looks like in practice. Think agentic AI systems aligned to familiar job families like sales, customer support, finance, and HR, with role-based access so people see only the agents they need. We also talk about the skills that survive the next wave of change, why adaptability and creativity matter more than ever, and how to build habits that keep you learning without burning out.

    If this helped you think differently about enterprise AI, the future of work, and AI strategy, subscribe to Inspire AI, share the episode with a colleague, and leave a quick review. What part of your job do you think AI will change first?
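    The role-based access idea above — agents aligned to job families, with each person seeing only the agents their role allows — reduces to a small lookup. The role-to-agent mapping below is invented purely for illustration.

```python
# Sketch of role-based agent access: users see only the agents mapped to
# their job families. The mapping is a hypothetical example, not a real
# platform configuration.

AGENTS_BY_ROLE = {
    "sales": ["lead-researcher", "proposal-drafter"],
    "support": ["ticket-triager", "kb-answerer"],
    "finance": ["invoice-matcher", "forecast-helper"],
}

def visible_agents(user_roles):
    """Union of agents across a user's roles; unknown roles grant nothing."""
    agents = set()
    for role in user_roles:
        agents.update(AGENTS_BY_ROLE.get(role, []))
    return sorted(agents)
```

    Centralizing this mapping is the opposite of the one-team-at-a-time adoption pattern the episode warns about: one place to audit who can reach which agent.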

    45 min
  7. MAR 23

    Ep 71 - Furiously Curious: How To Stay Relevant As AI Speeds Up Work w/ Caleb Snow

    The career fear around AI is real, but Caleb Snow flips the frame: the bigger threat is standing still while everything else accelerates. Caleb has spent decades moving through waves of technology, from early systems work to Fortune 500 environments to vendor architecture, sales leadership, and AI startups. That range gives him a clear view of what’s changing in the future of work and what still matters when the tools keep shifting under your feet.

    We dig into the mindset he calls “furiously curious” and why it’s no longer optional. We talk about the new expectation of speed, the tension between rapid building and enterprise realities like security and QA, and how sales, marketing, and operators can use AI to research faster, communicate better, and stay organized. Caleb shares concrete examples, including using an AI-written LinkedIn message to open a relationship with a top executive and building daily workflows that surface the tasks you promised to do.

    Then we go deeper on a topic most AI conversations skip: trust. As models improve and also sometimes overpromise, AI governance, model validation, and accountability become as important as innovation. Caleb explains why wrapping tools around the customer and measuring performance across models can reduce risk and improve decisions. We also talk about AI Ready RVA, community cohorts, and the growing need to help people adapt as roles shift.

    If this conversation helps you think differently, subscribe to Inspire AI, share the episode with a friend navigating career change, and leave a review. What part of your work are you most ready to reinvent next?

    1h 5m
  8. MAR 16

    Ep 70 - The Rise Of AI Runtimes: Google's Agent Development Kit

    AI is starting to feel less like a feature you bolt onto a product and more like a system you have to run. That shift is easy to miss until you try to build something real: a workflow that calls APIs, keeps context across sessions, coordinates tasks, pauses for human approval, and resumes later without breaking. Suddenly prompts are not the hard part. Architecture is.

    I walk through what Google’s Agent Development Kit (ADK) reveals about the future of AI agents and agentic workflows. The core idea is event-driven execution: a runner orchestrates the system while an agent emits events like “use this tool,” “update state,” “store an artifact,” or “request confirmation.” It’s a clean mental model for building an AI runtime with resumable execution, observable state, and tool integration that can actually survive production.

    We also get practical about agent design. Not every agent should be an LLM freestyling its way through a task. I break down LLM agents for reasoning, workflow agents for deterministic reliability, and custom agents for complex orchestration, then connect that to the deeper takeaway: the model is the decision engine, but tools are the capability. Rich tool ecosystems and clear interfaces will matter more than chasing ever larger parameter counts.

    Finally, we talk governance and safety. Tool confirmation and human-in-the-loop controls are not optional if agents can send emails, change data, or trigger real-world actions. If you’re a leader, builder, or architect trying to scale enterprise AI responsibly, this is the mindset shift to make now.

    Subscribe, share this with a teammate, and leave a review with the guardrail you think every AI agent should have.
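    The event-driven mental model above can be shown in a toy form — this is not real ADK code, just the shape of it: the agent is a generator that emits events, and the runner owns execution, dispatching tool calls, recording state, and pausing for human confirmation before risky actions.

```python
# Toy version of the event-driven runtime model -- NOT Google ADK's actual
# API. The agent emits events; the runner dispatches them, keeps observable
# state and a trace, and halts at a human-in-the-loop confirmation.

def example_agent():
    yield {"type": "update_state", "key": "step", "value": "started"}
    yield {"type": "use_tool", "tool": "fetch_report"}
    yield {"type": "request_confirmation", "action": "send_email"}

def run(agent, tools, confirm):
    state, trace = {}, []
    for event in agent:
        trace.append(event["type"])            # observable execution trace
        if event["type"] == "update_state":
            state[event["key"]] = event["value"]
        elif event["type"] == "use_tool":
            tools[event["tool"]]()             # tool integration point
        elif event["type"] == "request_confirmation":
            if not confirm(event["action"]):   # human in the loop
                break                          # halt before the real-world action
    return state, trace
```

    Because the runner, not the agent, owns the loop, pausing for approval and resuming later are runtime features rather than prompt tricks — which is the architectural shift the episode is pointing at.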

    11 min

About

Our mission is to cultivate AI literacy in the Greater Richmond Region through awareness, community engagement, education, and advocacy. In this podcast, we spotlight companies and individuals in the region who are pioneering the development and use of AI.