Inspire AI: Transforming RVA Through Technology and Automation

AI Ready RVA

Our mission is to cultivate AI literacy in the Greater Richmond Region through awareness, community engagement, education, and advocacy. In this podcast, we spotlight companies and individuals in the region who are pioneering the development and use of AI. 

  1. 5D AGO

    Ep 64 - Intelligence, Accountability, And You: From AI Slop to Sound Judgement

    The pace of AI can feel exhilarating until a polished report collapses under scrutiny and your team spends hours repairing “work slop.” We’re seeing a quiet shift across organizations: as intelligence becomes ambient, leadership’s edge moves from gathering information to evaluating it. That shift changes how we make calls, how we manage risk, and how we design trust into everyday workflows.

    We unpack practical decision hygiene that keeps speed from steamrolling substance. Treat AI outputs as drafts, not verdicts; verify facts, pressure-test conclusions, and define what “done” really means so polish doesn’t masquerade as insight. We share question prompts that expose missing data and faulty assumptions, and we draw clear lines between decision support and decision replacement, because confidence is not correctness and accountability cannot be delegated to an algorithm.

    We then move into risk management, where leaders operate as the safety net between model outputs and real-world consequences. From finance to healthcare to marketing, we outline why high-stakes decisions demand a human in the loop and how to establish reviews, stress tests, and override paths without smothering speed. You don’t need to build models to lead well; you need to know where they break, how bias creeps in, and which failure modes matter for money, health, fairness, and reputation.

    Finally, we design for trust. Adoption accelerates when people know where AI is used, who stays accountable, and how decisions align with values. We explore transparency, explainability, and psychological safety so teams feel augmented rather than quietly judged or replaced. The throughline is simple: AI can generate options, but it can’t weigh meaning or carry consequence. That’s your job. If you’re ready to turn ambient intelligence into durable advantage, join us and upgrade your role to evaluator in chief.

    Enjoy the conversation? Follow the show, share with a colleague, and leave a quick review, then tell us the one change you’ll make to improve AI evaluation on your team. Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.

    14 min
  2. JAN 26

    Ep 63 - Human In The Loop: Designing The Boundary Between Machines And Humans

    The moment an AI agent can issue refunds or change accounts, the conversation shifts from capability to responsibility. We dig into how to design trust between people and machines by choosing the right oversight model for the job: human in the loop for high-stakes decisions, and human on the loop for fast, high-volume work. Along the way, we unpack concrete playbooks for customer service leaders and operators who need speed without sacrificing judgment.

    We start by drawing a clear line between decision-time approval and supervisory control, then show how confidence-based escalation creates dynamic autonomy. Instead of all-or-nothing automation, we use signals like model confidence, customer sentiment, value at risk, and ambiguity to route actions for auto-resolution or human review. We also break down synchronous versus asynchronous oversight, and why advanced teams separate planning (human approved) from execution (AI driven) to combine safety with scale.

    The examples ground the theory: a retailer that automated 40 percent of inquiries while escalating emotionally charged cases, an airline that trained its system through human corrections before handing off routine tickets, and insurers that pay clean claims instantly while auditing edge cases. You’ll hear a pragmatic checklist for safe scaling: map risk before tasks, set thresholds, give reviewers explanations, log everything, prevent automation bias, and train people to be AI supervisors. The goal isn’t to remove humans; it’s to elevate them, letting AI handle speed and repetition while humans guard empathy, accountability, and trust.

    Ready to build AI that knows when to ask for help? Subscribe, share this episode with a teammate, and leave a review with your top escalation trigger; we’ll feature the best ideas in a future show. Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
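    The confidence-based escalation described above can be sketched as a small routing function. The signal names, thresholds, and `Inquiry` shape are illustrative assumptions for this episode's concept, not any vendor's API.

```python
from dataclasses import dataclass

@dataclass
class Inquiry:
    model_confidence: float   # 0.0-1.0, reported by the model (assumed signal)
    sentiment: float          # -1.0 (angry) to 1.0 (happy) (assumed signal)
    value_at_risk: float      # dollars affected by the action (assumed signal)
    ambiguous: bool           # request matched multiple intents

def route(inq: Inquiry) -> str:
    """Return 'auto' to let the agent act, or 'human' to escalate for review."""
    if inq.ambiguous or inq.model_confidence < 0.85:
        return "human"        # low confidence: decision-time approval
    if inq.sentiment < -0.3:
        return "human"        # emotionally charged cases go to people
    if inq.value_at_risk > 500:
        return "human"        # high stakes: human in the loop
    return "auto"             # fast, low-risk work: human on the loop
```

    A routine, calm, low-value inquiry (`route(Inquiry(0.95, 0.2, 40.0, False))`) auto-resolves, while dropping any one signal below its illustrative threshold escalates it, which is the "dynamic autonomy" idea in miniature.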

    11 min
  3. JAN 19

    Ep 62 - Reconfiguring Work: A Playbook For Agentic AI Adoption

    When AI stops acting like a tool and starts acting like a teammate, the rules of work change. We explore what agentic AI really means for teams, decisions, and culture, and why the biggest blockers aren’t algorithms but fear, fatigue, and unclear purpose. Instead of chasing pilots that never scale, we walk through a practical, people-first playbook anchored in outcomes, trust, and daily usefulness.

    We break down battle-tested frameworks leaders are using right now: McKinsey’s North Star and reconfigured work model, BCG’s five must-haves for AI upskilling, and Mercer’s human-plus-agent operating system. Along the way, we dive into candid case studies: how McKinsey’s “Have you asked Lily?” norm turned AI into habit, and how Bank of America’s “make work easier” principle drove adoption above 90% while strengthening governance. You’ll hear why distributed leadership and peer champions matter more than mandates, how to close the enthusiasm gap with honest communication, and how to design rollouts that reduce friction instead of adding change fatigue.

    If you’re leading transformation, you’ll leave with a Monday-morning checklist: define outcomes, build trust with transparent governance, co-create with employees, overinvest in role-based upskilling, model usage from the top, design for daily usefulness, and keep wins visible to sustain momentum. The edge isn’t competing with AI; it’s orchestrating it to amplify human judgment and deliver measurable value.

    Subscribe, share with a colleague, and tell us: what’s your North Star for agentic AI where you work? Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.

    16 min
  4. JAN 12

    Ep 61 - From Automation To Augmentation: How Humans And AI Become Better Teammates

    Fear says AI will replace us; experience shows the real upside arrives when we work with it. We explore how augmentation, keeping humans in the loop, turns AI from a black box into a trusted teammate that speeds analysis, amplifies creativity, and preserves the meaning in our jobs.

    We start by reframing the automation-versus-augmentation debate, breaking down what humans and machines each do best. Then we map the collaboration spectrum (advisor, assistant, co-creator, executor) and explain how to pick the right level of autonomy based on risk and context. Along the way, we share design principles for trustworthy systems: human decision authority in high-stakes areas, complementary roles, intuitive interfaces, and embedded governance so transparency and override controls are never an afterthought.

    From there we get practical. You’ll hear a clear learning path for professionals that avoids buzzwords and focuses on outcomes: anchor on your own workflows, build AI literacy instead of tool worship, treat prompting as a thinking skill, and practice human-in-the-loop habits like “AI drafts, you edit” and “AI analyzes, you interpret.” We dig into calibrated trust, how to avoid both skepticism and blind reliance, and the cultural shifts leaders need to drive, from early employee involvement to clear communication about the why behind AI.

    Real-world stories bring it to life, from service teams using real-time coaching and summarization, to clinicians with diagnostic support, to advisors and creatives accelerating insight and ideation without losing judgment. If you’re ready to design work where AI handles scale and speed while people carry context, ethics, and responsibility, this conversation will help you move from fear to forward motion.

    Subscribe, share with a colleague, and leave a review to tell us where you want augmentation to make your work more meaningful. Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.

    15 min
  5. JAN 5

    Ep 60 - Reinforcement Learning: Reward The Process And The Future Changes

    New year energy is loud; smarter growth is quiet. We’re kicking off 2026 by trading fragile resolutions for a durable learning loop inspired by reinforcement learning. Instead of chasing perfect plans, we break down how real change happens: practice that compounds, rewards that align with values, feedback that arrives fast, and reflection that turns data into decisions.

    We unpack the core ideas behind learning by doing and translate them into tools you can use right away. You’ll hear why reward design directs both AI systems and human lives, and how misaligned incentives can push you toward perfectionism while starving curiosity. We dig into the explore-versus-exploit dilemma: when to try new approaches, when to double down on what works, and how to schedule experimentation so you don’t stagnate. Along the way, we borrow a page from machines and build safe simulations for ourselves: visualizing, rehearsing, drafting, and running tiny tests where failure is just feedback.

    This conversation also makes a case for self-play and community. The strongest systems improve by competing and cooperating with worthy opponents, and so do we. Choose peers who challenge your assumptions, join rooms that raise your baseline, and design environments that make growth unavoidable. By the end, you’ll have a simple, repeatable loop (practice, feedback, reflection, adjustment) plus clear leading indicators to track. You are not behind or fixed; you’re an evolving intelligence capable of adaptation and curiosity.

    Subscribe, share with a friend who’s designing their own learning loop, and leave a review with one experiment you’ll run this week. Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
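    The explore-versus-exploit dilemma the episode draws on has a classic textbook form, the epsilon-greedy bandit loop: mostly repeat the best-known option, occasionally sample a new one, and update estimates from noisy feedback. A minimal sketch, where the arm payoffs and noise level are invented for illustration:

```python
import random

def epsilon_greedy(arm_means, epsilon=0.1, steps=1000, seed=0):
    """Run an epsilon-greedy bandit loop; return the learned value estimates."""
    rng = random.Random(seed)
    estimates = [0.0] * len(arm_means)   # running value estimate per arm
    counts = [0] * len(arm_means)
    for _ in range(steps):
        if rng.random() < epsilon:
            arm = rng.randrange(len(arm_means))    # explore: try something new
        else:
            arm = estimates.index(max(estimates))  # exploit: repeat what works
        reward = arm_means[arm] + rng.gauss(0, 0.1)  # noisy feedback signal
        counts[arm] += 1
        # reflection step: incremental average pulls the estimate toward reality
        estimates[arm] += (reward - estimates[arm]) / counts[arm]
    return estimates

est = epsilon_greedy([0.2, 0.8, 0.5])   # three "habits" with different payoffs
best = est.index(max(est))              # the loop settles on the best-paying habit
```

    Raising `epsilon` is the human analogue of scheduling more experimentation; setting it to zero is how both agents and people stagnate on the first thing that ever worked.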

    8 min
  6. 12/29/2025

    Ep 59 - Year End Reflections With AI: And It Has Notes

    What if reflection wasn’t a year-end memory dump but a working system that sharpens judgment? We sat down to examine how AI quietly changed the way we think, plan, and lead, shifting focus from task speed to decision quality, from outcomes to assumptions, and from rigid plans to resilient learning loops. Instead of asking what happened, we asked how we grew, where we created real leverage, and how our narrative as leaders evolved.

    We unpack the prompts that force clarity without comfort: where did our judgment create outsized impact, which decisions aged well given the information we had, and how well did our calendars match our stated priorities. Along the way, we show how AI reconstructs decisions at the moment they were made, turning private reasoning into an artifact we can analyze without ego. That distance unlocks clean counterfactuals (what alternatives were viable, which assumptions mattered most, and where risk was mispriced) so we stop relitigating ourselves and start improving the system.

    From there, we build a true decision-quality loop: track choices, inputs, confidence, and results to expose patterns in judgment. Strengths become repeatable, biases become addressable, and learning accelerates. The payoff isn’t just productivity; it’s resilience. AI lowers the friction around thinking, helps separate signal from noise, and makes it easier to update beliefs quickly.

    If next year looked exactly like this one, would that excite you or concern you? Press play to grab the questions, run your own review, and set a sharper direction. If this resonated, subscribe, share with a friend who leads, and leave a review with the one question you’re taking into your year-end reflection. Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.

    9 min
  7. 12/22/2025

    Ep 58 - Claude & MCP: And The Rise Of Enterprise Agents

    Midnight outages that never become crises. Forms that fill themselves. Support queues that sort and draft responses before a human even looks. We explore how agentic AI moves from talk to action by pairing Claude with the Model Context Protocol (MCP), so models can safely reach into the tools your teams use every day and execute real work with guardrails.

    We start by framing the leap: a chatbot is great at conversation; an agent is great at outcomes. That difference hinges on capabilities. MCP acts like a universal adapter that exposes what tools can do (create a ticket, query a database, send an email, trigger a workflow) so an AI can discover and call actions, not just fetch data. With skills packaged as safe connectors, Claude runs a plan–act–reflect loop to complete tasks end to end: summarize tickets, prioritize, draft a report, and send it to Slack, all with permissions, scope, and logging baked in.

    From there, we go deep on practical wins. In IT help desks and ops, agentic patterns enable self-healing behavior: diagnosing likely causes, restarting services within strict bounds, and posting clear incident timelines that improve recovery and documentation. In enterprise workflows, the agent becomes an administrative accelerator that pre-fills onboarding steps, creates standard accounts, and routes for approval so humans make the calls that matter. For customer support, triage gets smarter and faster, pulling order history, detecting urgency and sentiment, and handing complex cases to people with richer context so they start at step five, not step one.

    We also tackle the big technical question: isn’t GraphQL enough? GraphQL shines at structured, deterministic data retrieval. MCP is different because the client is an agent that needs to discover capabilities and chain actions across open-ended tasks. Used together, GraphQL provides curated data access while MCP exposes that access as a safe tool, giving you deterministic guardrails with flexible orchestration.

    To get started, we share a focused pilot playbook: pick a bounded use case, leverage existing connectors, design guardrails first, decide autonomy levels, and measure resolution time, backlog reduction, hours saved, and satisfaction. Ready to move from AI that can talk to AI that can do? Subscribe for more deep dives, share this with a teammate who owns ops or workflows, and leave a review to tell us where you want agents to help next. Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
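    The "universal adapter" idea can be sketched in miniature: tools advertise what they can do, and the agent discovers capabilities before calling them, with a permission check and an action log in the path. Everything here (the registry shape, tool names, and handlers) is an illustrative assumption; the real Model Context Protocol defines its own JSON-RPC message schema and connector model.

```python
# Hypothetical tool registry standing in for MCP-exposed connectors.
TOOLS = {
    "create_ticket": {"description": "Open a ticket in the tracker",
                      "handler": lambda arg: f"TICKET-1: {arg}"},
    "send_message":  {"description": "Post a message to a channel",
                      "handler": lambda arg: f"posted: {arg}"},
}

def list_capabilities():
    """Discovery step: the agent asks what the connected tools can do."""
    return {name: spec["description"] for name, spec in TOOLS.items()}

def call_tool(name, arg):
    """Invocation step, with a simple scope guardrail baked in."""
    if name not in TOOLS:
        raise PermissionError(f"tool {name!r} is not exposed to this agent")
    return TOOLS[name]["handler"](arg)

# A toy plan-then-act pass: execute a plan step by step, logging every
# action so human oversight has something to audit afterwards.
plan = [("create_ticket", "Summarize overnight alerts"),
        ("send_message", "Report drafted and filed")]
action_log = [call_tool(step, arg) for step, arg in plan]
```

    The key contrast with a pure data API is visible even at this scale: the agent is choosing and chaining *actions* it discovered at runtime, not executing a query someone wrote in advance.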

    12 min
  8. 12/15/2025

    Ep 57 - When Machines Decide: How Agentic AI Orchestration Delivers Real Resolution

    What if your support chat didn’t just explain the fix but executed it end to end in under a minute? We explore the move from conversational help to true autonomy, where multiple specialized agents collaborate under an orchestrator to verify identity, update records, resolve conflicts, and confirm outcomes without human handoffs. It’s a practical, real-time shift that turns AI into a digital workforce built to deliver resolution, not just responses.

    We break down the core building blocks: language models to understand intent, specialized agents to retrieve data and act, an orchestrator to manage sequence and context, and tight integrations into CRM, billing, inventory, and HR systems. Then we get honest about risk. Autonomy amplifies small mistakes into big failures, so we emphasize governance, auditability, and human oversight, especially for edge cases and emotionally sensitive moments where empathy matters more than speed. You’ll hear how legacy systems can bottleneck progress and what it takes to modernize safely with idempotent operations, rate-aware designs, and policy guardrails.

    From customer service and returns to retail pricing, IT diagnostics, HR workflows, and content operations, we share concrete use cases along with a cautionary tale of runaway automation. The takeaway is clear: success with agentic AI isn’t magic; it’s thoughtful design that aligns actions with human values and business outcomes. If you’re leading teams through AI adoption, expect a people-first change-management challenge: building trust, training for oversight, and deciding where human judgment remains non-negotiable.

    Ready to map your first autonomous workflow? Follow the show, share this episode with a colleague, and tell us which task you’d hand to an agent next. Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
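    The orchestrator pattern described above (specialized agents, shared context, a managed sequence, and a halt-and-escalate path when a check fails) can be sketched as a tiny pipeline. The agent names and the one-field identity check are assumptions for demonstration, not a real system.

```python
def verify_identity(ctx):
    # Hypothetical check; a real agent would call an identity service.
    ctx["verified"] = ctx.get("customer_id") == "C-42"
    return ctx

def update_record(ctx):
    if not ctx["verified"]:
        # Guardrail: an unverified request halts the run for human review.
        raise RuntimeError("escalate to human: identity not verified")
    ctx["record_updated"] = True
    return ctx

def confirm_outcome(ctx):
    ctx["status"] = "resolved" if ctx.get("record_updated") else "pending"
    return ctx

def orchestrate(ctx, pipeline):
    """Run agents in order, threading one shared context dict through them.

    Keeping all state in a single context object is what makes each step
    auditable: you can log the dict before and after every agent.
    """
    for agent in pipeline:
        ctx = agent(ctx)
    return ctx

result = orchestrate({"customer_id": "C-42"},
                     [verify_identity, update_record, confirm_outcome])
```

    Swapping in a wrong `customer_id` makes the second agent raise instead of acting, which is the point: autonomy is bounded by explicit checks, and failures surface to people rather than cascading silently.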

    13 min

About

Our mission is to cultivate AI literacy in the Greater Richmond Region through awareness, community engagement, education, and advocacy. In this podcast, we spotlight companies and individuals in the region who are pioneering the development and use of AI.