Inspire AI: Transforming RVA Through Technology and Automation

AI Ready RVA

Our mission is to cultivate AI literacy in the Greater Richmond Region through awareness, community engagement, education, and advocacy. In this podcast, we spotlight companies and individuals in the region who are pioneering the development and use of AI. 

  1. 2D AGO

    Ep 73 - The AI Race: Winner Takes All

    The AI race is quietly changing shape, and if you’re still tracking it like a scoreboard of model releases, you’re going to miss the real winners. We step back from the noise and make the case that the decisive battleground is physical: electricity, chips, land, permits, cooling, grid connections, and the ability to run AI reliably at scale. The question shifts from “Can we build it?” to “Can we power it, place it, and operate it everywhere people need it?”

    We share the core framework we use to evaluate AI strategy in the real world: AI advantage equals energy times compute times chips times capital times distribution. We unpack why energy becomes the new bottleneck as data centers surge in electricity demand, why compute is constrained by infrastructure timelines, why chips remain a concentrated source of leverage, and why capital can’t outrun the physics of buildouts. Then we dig into the most underrated factor: distribution, where the race turns from innovation to integration inside workflows, factories, hospitals, logistics, and classrooms.

    We also map the global landscape with clearer lenses: US strength in frontier power, China’s accelerating edge in industrial diffusion, and Europe’s slower but powerful influence through regulation, compliance, and trust frameworks that shape what gets deployed and where. As open models rise and costs fall, we argue the advantage of having the “best model” shrinks while the advantage of deploying faster and operating cheaper grows.

    If you’re leading AI adoption, investing, or setting strategy, listen for the questions that matter: where will your AI run, what infrastructure dependencies are you accepting, and are you optimizing for capability or usability? Subscribe for more practical frameworks, share this with a teammate, and leave a review with the biggest bottleneck you’re facing right now.

    Want to join a community of AI learners and enthusiasts? AI Ready RVA is leading the conversation and is rapidly rising as a hub for AI in the Richmond Region. Become a member and support our AI literacy initiatives.
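    The multiplicative framing above can be sketched in a few lines of code. To be clear, the factor scores below are invented for illustration, not data from the episode; the point is the shape of the math: a near-zero factor drags the whole product down, no matter how strong the others are.

```python
# Hypothetical illustration of the episode's framework:
# AI advantage = energy x compute x chips x capital x distribution.
# All scores (0 to 1) are made up for demonstration purposes.

def ai_advantage(scores: dict[str, float]) -> float:
    """Multiply factor scores; any factor near zero bottlenecks the product."""
    factors = ["energy", "compute", "chips", "capital", "distribution"]
    result = 1.0
    for f in factors:
        result *= scores[f]
    return result

# Frontier models but a weak grid: energy is the bottleneck.
strong_model_weak_grid = ai_advantage(
    {"energy": 0.2, "compute": 0.9, "chips": 0.9, "capital": 0.9, "distribution": 0.9}
)
# Merely solid across the board, with no glaring weakness.
balanced = ai_advantage(
    {"energy": 0.7, "compute": 0.7, "chips": 0.7, "capital": 0.7, "distribution": 0.7}
)
print(strong_model_weak_grid < balanced)  # True: one bottleneck beats down the product
```

    In a multiplicative model, unlike an additive one, you cannot compensate for a missing input by maxing out another, which is the episode's argument in miniature.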

    9 min
  2. MAR 30

    Ep 72 - The Future of Work: Designing AI-Augmented Roles w/ Vivek Gupta

    AI is moving so fast that “keeping up” can start to feel like a losing game and that speed is exactly what’s reshaping the future of work. We sit down with Vivek, an IT services leader building an enterprise AI platform focused on digital foundation and organization-wide AI transformation, to talk about what’s changing underneath our job roles right now.

    The big idea is simple but urgent: AI isn’t just another productivity tool. It’s a system-level shift that’s pushing every department to redesign how work gets done.

    We dig into why AI adoption often starts the messy way: one team at a time. Sales picks a vendor, support picks another, finance picks a third. Vivek explains why that piecemeal approach creates AI silos and multiplies the hard parts like data exposure, guardrails, compliance controls, and maintenance. Over time, you lose consistency, visibility, and the ability to manage a single AI strategy across the business. If you care about governance, cost control, and brand consistency, this part will hit home.

    Then we get concrete about what an AI-augmented worker looks like in practice. Think agentic AI systems aligned to familiar job families like sales, customer support, finance, and HR, with role-based access so people see only the agents they need. We also talk about the skills that survive the next wave of change, why adaptability and creativity matter more than ever, and how to build habits that keep you learning without burning out.

    If this helped you think differently about enterprise AI, the future of work, and AI strategy, subscribe to Inspire AI, share the episode with a colleague, and leave a quick review. What part of your job do you think AI will change first?

    45 min
  3. MAR 23

    Ep 71 - Furiously Curious: How To Stay Relevant As AI Speeds Up Work w/ Caleb Snow

    The career fear around AI is real, but Caleb Snow flips the frame: the bigger threat is standing still while everything else accelerates. Caleb has spent decades moving through waves of technology from early systems work to Fortune 500 environments to vendor architecture, sales leadership, and AI startups. That range gives him a clear view of what’s changing in the future of work and what still matters when the tools keep shifting under your feet.

    We dig into the mindset he calls “furiously curious” and why it’s no longer optional. We talk about the new expectation of speed, the tension between rapid building and enterprise realities like security and QA, and how sales, marketing, and operators can use AI to research faster, communicate better, and stay organized. Caleb shares concrete examples, including using an AI-written LinkedIn message to open a relationship with a top executive and building daily workflows that surface the tasks you promised to do.

    Then we go deeper on a topic most AI conversations skip: trust. As models improve and also sometimes overpromise, AI governance, model validation, and accountability become as important as innovation. Caleb explains why wrapping tools around the customer and measuring performance across models can reduce risk and improve decisions. We also talk about AI Ready RVA, community cohorts, and the growing need to help people adapt as roles shift.

    If this conversation helps you think differently, subscribe to Inspire AI, share the episode with a friend navigating career change, and leave a review. What part of your work are you most ready to reinvent next?

    1h 5m
  4. MAR 16

    Ep 70 - The Rise Of AI Runtimes: Google's Agent Development Kit

    AI is starting to feel less like a feature you bolt onto a product and more like a system you have to run. That shift is easy to miss until you try to build something real: a workflow that calls APIs, keeps context across sessions, coordinates tasks, pauses for human approval, and resumes later without breaking. Suddenly prompts are not the hard part. Architecture is.

    I walk through what Google’s Agent Development Kit (ADK) reveals about the future of AI agents and agentic workflows. The core idea is event-driven execution: a runner orchestrates the system while an agent emits events like “use this tool,” “update state,” “store an artifact,” or “request confirmation.” It’s a clean mental model for building an AI runtime with resumable execution, observable state, and tool integration that can actually survive production.

    We also get practical about agent design. Not every agent should be an LLM freestyling its way through a task. I break down LLM agents for reasoning, workflow agents for deterministic reliability, and custom agents for complex orchestration, then connect that to the deeper takeaway: the model is the decision engine, but tools are the capability. Rich tool ecosystems and clear interfaces will matter more than chasing ever larger parameter counts.

    Finally, we talk governance and safety. Tool confirmation and human-in-the-loop controls are not optional if agents can send emails, change data, or trigger real-world actions. If you’re a leader, builder, or architect trying to scale enterprise AI responsibly, this is the mindset shift to make now. Subscribe, share this with a teammate, and leave a review with the guardrail you think every AI agent should have.
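    The runner-and-events mental model described above can be sketched as a toy loop. To be clear, this is not Google ADK’s actual API; every class, event name, and function here is invented purely to illustrate the pattern of a runner orchestrating an agent that emits events, including a confirmation gate before a risky action.

```python
# Toy sketch of an event-driven agent runtime (NOT the real ADK API):
# the agent yields events; the runner executes tools, updates state,
# and pauses for approval before side effects.

from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str              # "use_tool", "update_state", "request_confirmation", "done"
    payload: dict = field(default_factory=dict)

def echo_tool(text: str) -> str:
    """A stand-in tool; a real one might call an API."""
    return f"tool saw: {text}"

def toy_agent(state: dict):
    """Generator-based agent: yields events, receives tool results back."""
    result = yield Event("use_tool", {"tool": "echo", "args": {"text": "hello"}})
    yield Event("update_state", {"last_tool_result": result})
    yield Event("request_confirmation", {"action": "send_email"})
    yield Event("done")

def run(agent_factory, tools, approve=lambda action: True):
    """Runner: drives the agent, dispatches events, gates risky actions."""
    state: dict = {}
    agent = agent_factory(state)
    to_send = None
    try:
        while True:
            event = agent.send(to_send) if to_send is not None else next(agent)
            to_send = None
            if event.kind == "use_tool":
                to_send = tools[event.payload["tool"]](**event.payload["args"])
            elif event.kind == "update_state":
                state.update(event.payload)
            elif event.kind == "request_confirmation":
                if not approve(event.payload["action"]):
                    break  # human said no: stop before the side effect
            elif event.kind == "done":
                break
    except StopIteration:
        pass
    return state

final_state = run(toy_agent, {"echo": echo_tool})
print(final_state["last_tool_result"])  # tool saw: hello
```

    Because all progress flows through explicit events, the runner can log every step, persist state between sessions, and refuse actions a human has not approved, which is the production-survival property the episode highlights.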

    11 min
  5. MAR 9

    Ep 69 - Future of Work: When Intelligence Lives In Your Tools

    What if the real shift in the future of work isn’t learning to code, but learning to supervise? We dig into a new operating model where product and engineering leaders step into the execution loop by directing AI coding agents that read repos, edit files, run tests, and open pull requests—while engineers safeguard architecture and correctness. The payoff is leverage: clear intent, tighter feedback loops, and artifacts that move from concept to code without the slow drag of endless handoffs.

    We break down the workflows that change first. Technical discovery goes from week‑long spelunking to safe, read‑only scans that map modules, APIs, logs, and risks. Strategy stops living in slides as agents draft API contracts, edge cases, rollout plans, observability requirements, and acceptance tests tailored to your repo conventions. Prototyping accelerates with feature‑flagged walking skeletons that ship telemetry and a passing test, so feasibility debates turn into concrete PR reviews. Communication gets sharper as release notes and risk flags are generated from diffs, not guesswork. Verification becomes culture when prompts encode “done” as tests pass with outputs shown, and CI automations become structured, maintainable flows rather than fragile hacks. Even roadmap hygiene matures as agents link traceability, standardize acceptance criteria, and rewrite unclear tasks.

    Speed without rigor is a trap, so we name the metrics that actually show progress: cycle time, change failure rate, experiment throughput, avoided defects, and review latency. We also surface the new risk surface—hallucinations and silent failures, security and supply chain exposure, data retention and IP policy mismatch, skill and ownership drift—and share pragmatic governance: permission scopes, sandboxing, allow‑listed integrations, audit logs, and mandatory human PR review.

    Tools like Claude Code, Codex, Cursor, and Windsurf are signals of a broader pattern: intelligence becoming ambient inside production systems. The winners won’t be the teams that chase the latest tool; they’ll be the ones who redesign workflows thoughtfully, measurably, and ethically. Join us as we turn leadership judgment into the core advantage: delegating to agents, specifying constraints and verification, and building execution loops that turn clarity into shipping code.

    If this resonates, follow the show, share it with a teammate who owns delivery, and leave a quick review telling us which workflow you want us to demo next.

    10 min
  6. MAR 2

    Ep 68 - Intent Over Keystrokes: The New Mind of the Modern Builder w/ Godwin Josh

    What happens when coding stops being about keystrokes and starts being about intent? We sit down with Godwin Josh—mentor, builder, and author of The New Mind—to unpack how agentic AI is transforming the path from student to work-ready engineer. Instead of celebrating speed for its own sake, we look at why tools like Windsurf, Claude Code, and Copilot accelerate learning, make patterns visible, and free developers to focus on judgment, problem framing, and real outcomes.

    We trace Godwin’s journey from early DOS animations and hardware products to AI-first teams, then dive into a practical stack for modern builders: Linux for environments, Python for ecosystem depth, and an agentic layer that includes skills, agents.md for self-describing projects, and soul.md for consistent behavior around testing, security, and clarity. With MCP acting like a universal “USB port,” models can discover and use tools reliably, turning agents into capable collaborators rather than autocomplete toys.

    The shift is profound: a developer becomes a director—defining goals, curating capabilities, and validating results—while agents handle scaffolding, refactors, and repetitive glue work.

    Mentorship emerges as the quiet engine behind impact. Raw intelligence doesn’t guarantee results; exposure to constraints, wise counsel, and clear goals does. We talk about building cross-disciplinary teams with universities, where physics meets data science and bio meets compute, and how AI compresses the learning curve so students can build real systems before graduation. We also confront the anxiety many veterans feel: when the “how” is automated, your edge becomes asking sharper questions, making faster decisions, and communicating with courage. Math and language prove durable; specific tools churn.

    If you’re a student, lead, or educator navigating agentic AI, you’ll leave with a playbook: codify standards in skills, describe projects with agents.md, shape agent behavior with soul.md, validate across multiple models, and measure progress by shipped value. Subscribe, share this with someone who’s rethinking their workflow, and leave a review telling us: what skill matters most when AI writes the boilerplate?
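    As a purely illustrative sketch of the self-describing-project idea discussed above, an agents.md might read like the fragment below. The sections, file paths, and commands are this example’s assumptions, not a fixed standard or anything from the episode:

```markdown
# agents.md — how AI agents should work in this repo (illustrative sketch)

## Project
A Python data pipeline; the entry point is `pipeline/main.py`.

## Conventions
- Run the test suite with `pytest` before proposing any change.
- Never edit files under `migrations/` directly.
- Behavioral norms (tone, testing discipline, security posture) live in `soul.md`.

## Capabilities (skills)
- `lint`: run `ruff check .` and summarize the findings.
- `docs`: keep `README.md` in sync with public function signatures.
```

    The value of such a file is that any agent (or new teammate) can discover the project’s rules without a human re-explaining them each session.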

    48 min
  7. FEB 23

    Ep 67 - RAG Done Right: Measure The Evidence Or Drift Into Error

    What happens when a brilliant-sounding AI gives the wrong answer with total confidence? We dig into the quiet culprit behind so many “LLM failures”: retrieval. Rather than judging how smart a model sounds, we walk through how to judge whether it looked at the right evidence, why that matters in high-stakes domains like finance, healthcare, HR, and government, and how leaders can stop organizational drift driven by outdated or partial sources.

    We break down four pillars every RAG team should track: retrieval precision and recall to balance noise versus coverage; context relevance and coverage to ensure the retrieved passages actually answer the question; groundedness and fluency so every claim traces back to evidence; and accuracy and completeness to catch stale or missing knowledge. Along the way, we share real-world patterns—chatbots citing old HR policies, assistants using superseded regulations, and tools surfacing obsolete medical guidance—and show how these errors spread when confidence outruns curation.

    Then we get practical. We outline precision@K and recall@K, golden question sets tied to authoritative documents, LLM-based judging for relevance and groundedness, and continuous regression testing as knowledge bases evolve. More importantly, we frame the cultural shift: assign ownership for knowledge freshness, make sources visible next to answers, and normalize verification at every level. Treat AI answers as drafts, retrieval as evidence, and evaluation as the safeguard.

    If you’re running or planning a RAG system, start by asking to see retrieved sources, build a small high-stakes golden set, and set a cadence for archiving and updates. If this conversation helped sharpen your approach to reliable AI, subscribe, share with a teammate who manages content or compliance, and leave a quick review with one insight you’re taking back to your team.
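    The retrieval metrics mentioned above take only a few lines to compute once you have a golden set mapping each question to its relevant document IDs. The doc IDs and golden set below are made up for illustration:

```python
# Minimal sketch of precision@K and recall@K for RAG retrieval evaluation.
# Doc IDs and the golden (relevant) set are invented example data.

def precision_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-K retrieved docs that are relevant (noise check)."""
    top_k = retrieved[:k]
    if not top_k:
        return 0.0
    return sum(1 for d in top_k if d in relevant) / len(top_k)

def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of all relevant docs that appear in the top-K (coverage check)."""
    if not relevant:
        return 0.0
    return sum(1 for d in retrieved[:k] if d in relevant) / len(relevant)

# Golden question example: the 2024 HR policy is authoritative; 2019 is stale.
retrieved = ["hr-policy-2019", "hr-policy-2024", "travel-faq", "benefits-2024"]
relevant = {"hr-policy-2024", "benefits-2024"}

print(precision_at_k(retrieved, relevant, k=4))  # 0.5: half the context is noise
print(recall_at_k(retrieved, relevant, k=4))     # 1.0: nothing relevant was missed
```

    Note the tension the episode describes: raising K tends to improve recall (coverage) while lowering precision (more noise in the context window), which is why both are tracked together over a golden question set.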

    10 min
  8. FEB 16

    Ep 66 - From Prompts To Process: Building Trustworthy AI Workflows w/ Tianzhen Lin

    When intelligence is everywhere but correctness is scarce, how do we lead without cutting corners? We sit down with Tianzhen (Tangent) Lin—veteran engineer and systems thinker—to unpack a practical, durable approach to building AI‑assisted products that hold up under pressure. No hype, no shortcuts: just the patterns that make teams faster and safer at the same time.

    We start by reframing large language models as “eager interns”: fast, helpful, and prone to saying yes. That mental model shifts responsibility back where it belongs—on leaders who must design workflows that surface assumptions, constrain degrees of freedom, and verify outcomes. Tangent explains why context remains a finite resource even with giant windows and how the “lost in the middle” effect undermines long prompts. The fix isn’t more chat; it’s better scaffolding. Specs, plans, and documentation become the backbone for repeatable success because they compress what matters and travel across sessions and teammates.

    From there, we dig into decomposition as a risk strategy. Breaking work into small, testable steps gives you early checkpoints to catch hallucinated requirements, unsafe libraries, or performance traps—like UI freezes from naive million‑row operations. Tangent shares a late‑night pivot where a strong, technology‑agnostic spec let the team re‑architect in hours, not days, turning a potential rewrite into a near‑seamless transition. We dive into verification as a non‑negotiable, the value of documentation as compressed context, and how institutional knowledge prevents the “sandcastle effect” when requirements shift or the tide comes in.

    The result is a playbook for leaders navigating an AI‑accelerated world: treat context like budget, invest in durable artifacts, decompose to control risk, and verify relentlessly. Do that well and AI stops being a confident amateur and starts acting like a reliable teammate.

    If you’re serious about trust, safety, and scalable speed, this conversation will sharpen your judgment and strengthen your systems. Subscribe, share with a teammate who ships software, and leave a review with the one workflow change you’ll make this week.

    33 min
