An Hour of Innovation with Vit Lyoshin

Vit Lyoshin

An Hour of Innovation is hosted by Vit Lyoshin, a technology professional with a background in product development and program leadership. Each episode explores the art and science of innovation through conversations with product leaders, scientists, and innovators. We dive into groundbreaking ideas, uncovering the why and how behind them. The goal is to amplify the voices of those driving change, offering insights, inspiration, and practical takeaways to spark listeners’ creativity and passion for progress. Welcome, and enjoy!

  1. Own Your AI Agent: Security, OpenClaw, Data Ownership, and the Future of Work | Toufi Saliba

    5D AGO

    Own Your AI Agent: Security, OpenClaw, Data Ownership, and the Future of Work | Toufi Saliba

    What if the AI agent working for you today could quietly become a risk, or your greatest long-term advantage, depending on how well you secure and own it? In this episode of the An Hour of Innovation podcast, Vit Lyoshin sits down with Toufi Saliba to unpack one of the most urgent and misunderstood shifts in modern technology: AI agents with real autonomy, real agency, and real consequences for humans. Toufi Saliba is a seasoned AI and infrastructure leader, founder and CEO of Hypercycle, and a long-time voice in AI security, governance, and decentralized systems. Toufi explains what AI agents are and how they work, why tools like OpenClaw reveal serious security risks, and how giving AI full system access can expose users to data loss, manipulation, and loss of control. He breaks down the importance of AI governance, containerized AI environments, and why human agency must remain at the center as autonomous systems become more powerful. The discussion also reframes the future of work with AI agents, arguing that AI doesn’t eliminate human work but multiplies it for those who take ownership early.

    Toufi Saliba is the CEO of Hypercycle and a vocal advocate for human agency in an AI-driven world. He has spent years working on infrastructure that allows AI agents to communicate securely without relying on centralized third parties. His perspective matters because he frames AI not as something to fear, but as something humans must actively own, secure, and govern before that choice disappears.

    Takeaways
    * AI agents are not just tools; they have agency, meaning they can make decisions and act autonomously on a user’s behalf.
    * Giving an AI agent full system access turns it into a powerful assistant and a potential security liability.
    * A single vulnerability in an autonomous AI agent can expose emails, files, and credentials, and even allow malware to be installed.
    * Most current AI security solutions reduce risk by limiting capability, but that tradeoff may undermine AI’s real value.
    * Containerized and sandboxed AI environments are a practical way to preserve AI power while reducing attack surfaces (a minimal sketch follows these notes).
    * If you don’t actively capture and secure your data, platforms and governments will do it for you by default.
    * AI governance is not about stopping AI; it’s about defining who owns, controls, and benefits from AI-generated intelligence.
    * The future of work isn’t humans vs. AI; it’s humans managing fleets of AI agents working 24/7 on their behalf.
    * The Internet of AI will create massive new wealth, but only those who own their agents will participate in it.
    * Saving more personal data isn’t the problem; saving it without security, encryption, and control is the real risk.

    Timestamps
    00:00 Introduction to OpenClaw and AI Agents
    10:33 Global Brain, Data Ownership, and Human Agency
    17:14 Mosaic Spot: AI Security for Everyone
    18:44 AI Agent Security Risks and Protection
    21:11 Human-AI Collaboration and AI Governance
    29:41 AI Wealth Creation and Ownership
    32:29 Mosaic Spot: Secure AI Interaction Layer
    35:15 Future of Work with AI Agents
    37:02 One Rule for Securing Your AI
    41:41 Innovation Q&A

    Connect with Toufi
    * Website: https://www.hypercycle.ai/
    * LinkedIn: https://www.linkedin.com/in/toufisaliba/
    * X: https://x.com/toouufii

    This Episode Is Supported By
    * Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH
    * Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe
    * Monkey Digital: Unbeatable SEO. Outrank your competitors - https://www.monkeydigital.org?ref=110260

    For inquiries about sponsoring An Hour of Innovation, email iris@anhourofinnovation.com

    Connect with Vit
    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
    * Substack: https://anhourofinnovation.substack.com/
    * X: https://x.com/vitlyoshin
    * Podcast: https://www.anhourofinnovation.com/
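    To make the sandboxing takeaway concrete, here is a minimal, hypothetical sketch, not from the episode and not Hypercycle's implementation, of one way an agent's shell commands could run inside a locked-down Docker container instead of directly on the host. The alpine image, memory cap, and PID limit are illustrative assumptions; the Docker flags themselves (--rm, --network none, --read-only, --cap-drop) are standard.

        import subprocess

        def run_agent_command_sandboxed(command: str, timeout: int = 30) -> str:
            """Run an agent-proposed shell command in a throwaway, locked-down container.

            Assumes Docker is installed and the 'alpine' image is available locally.
            """
            docker_cmd = [
                "docker", "run",
                "--rm",                # delete the container when the command exits
                "--network", "none",   # no network access: nothing can be exfiltrated
                "--read-only",         # root filesystem is read-only
                "--cap-drop", "ALL",   # drop all Linux capabilities
                "--memory", "256m",    # cap memory so a runaway process can't starve the host
                "--pids-limit", "64",  # cap process count (blocks fork bombs)
                "alpine", "sh", "-c", command,
            ]
            result = subprocess.run(docker_cmd, capture_output=True, text=True, timeout=timeout)
            return result.stdout if result.returncode == 0 else "error: " + result.stderr

        # The agent keeps its full computing power, but the host's files,
        # credentials, and network stay out of reach.
        print(run_agent_command_sandboxed("echo hello from the sandbox"))

    This is the tradeoff the episode describes: rather than limiting what the agent may attempt, the container limits what a mistake or exploit can touch.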

    48 min
  2. Why Smart Engineering Teams Fail: Alignment, Ownership, and Real Delivery | Prashanth Tondapu

    FEB 3

    Why Smart Engineering Teams Fail: Alignment, Ownership, and Real Delivery | Prashanth Tondapu

    Why do smart engineering teams miss deadlines, struggle with alignment, and fail at real software delivery, even when everyone is talented and working hard? In this episode of the An Hour of Innovation podcast, Vit Lyoshin sits down with Prashanth Tondapu, the CEO of InnoStax, to unpack why intelligence alone doesn’t guarantee outcomes and how alignment, ownership, and engineering leadership are the real drivers of execution. They explore why Agile teams often fall into the trap of local optimization, where individuals optimize tasks but projects still fail at the system level. Prashanth explains how the tech lead role, clear ownership, and visible progress transform project management and software delivery. The episode dives into practical lessons on engineering leadership, team accountability, and why outcome ownership matters more than raw talent. You’ll also hear real examples of how startups can scale development teams without micromanaging while improving ROI.

    Prashanth Tondapu is the CEO of InnoStax, a software consulting company that works with startups and scale-ups across the US and Europe, helping engineering teams move from slow delivery to measurable results. He brings over 15 years of experience leading and observing hundreds of development teams across different industries. He is known for helping smart engineering teams fix execution gaps by focusing on alignment, clarity, and leadership instead of process-heavy rituals.

    Takeaways
    * Smart engineers often slow projects down by optimizing individual tasks instead of the whole system.
    * Alignment and clear ownership matter more than raw talent for consistent software delivery.
    * When everyone “owns” the outcome, accountability disappears, and execution suffers.
    * A dedicated tech lead acts as a system-level thinker, not just the best coder on the team.
    * Teams move faster when progress is demonstrable, not just explained in status updates.
    * Daily visible progress exposes blockers early and prevents engineers from rabbit-holing.
    * Agile rituals can hide delivery problems when they prioritize narrative over proof.
    * Developers are more likely to ask for help when transparency is built into the workflow.
    * Tech leads should reduce their own coding over time as the team becomes more effective.
    * Startup founders must delegate with checkpoints or risk becoming the execution bottleneck.

    Timestamps
    00:00 Introduction
    02:10 Why Team Alignment Matters More Than Talent
    04:14 Why Smart Engineering Teams Struggle to Deliver
    05:27 Owning Outcomes vs Task-Based Work
    06:56 The Tech Lead Role Explained
    11:23 Early Warning Signs of Failing Teams
    12:40 Daily Visible Progress for Faster Delivery
    16:52 How Daily Updates Expose Hidden Issues
    18:57 Building a Culture of Openness and Trust
    22:55 Why Teams Need a Single Tech Lead
    25:58 Avoiding Tech Lead Burnout and Micromanagement
    29:15 Startup Scaling Advice for Founders
    31:59 Ideal Team Structure for Software Delivery
    33:44 The One Thing That Guarantees Outcomes
    34:34 Innovation Q&A

    Connect with Prashanth
    * Website: https://innostax.com/
    * LinkedIn: https://www.linkedin.com/in/prashanth-tondapu/

    This Episode Is Supported By
    * Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH
    * Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe
    * MeetGeek: Record, transcribe, summarize, and share insights from every meeting - https://get.meetgeek.ai/yjteozr4m6ln

    For inquiries about sponsoring An Hour of Innovation, email iris@anhourofinnovation.com

    Connect with Vit
    * Substack: https://anhourofinnovation.substack.com/
    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
    * X: https://x.com/vitlyoshin
    * Website: https://vitlyoshin.com/contact/
    * Podcast: https://www.anhourofinnovation.com/

    38 min
  3. AI Video Analysis: How AI Is Changing Mental Health Care Between Doctor Visits | Loren Larsen

    JAN 27

    AI Video Analysis: How AI Is Changing Mental Health Care Between Doctor Visits | Loren Larsen

    Patients often hide how they’re really doing, but when AI listens between visits, the truth finally comes out, reshaping mental health care with empathy and precision. In this episode of the An Hour of Innovation podcast, host Vit Lyoshin sits down with Loren Larsen, founder and CEO of Videra Health, to explore how AI in healthcare is transforming behavioral health by capturing what patients actually say and feel outside the clinic, using human-in-the-loop AI to support better care decisions. They discuss why the most dangerous moments in mental health care often happen between doctor visits, how AI-based check-ins can surface real patient narratives, and why ethical, well-tested AI matters more than ever. The conversation breaks down the limits of score-based assessments, the risks of poorly built AI, and how technology can extend, not replace, clinical judgment. It’s a practical look at mental health technology that’s already being used in real clinical settings.

    Loren Larsen is a longtime builder at the intersection of AI, video, and human decision-making. Before founding Videra Health, he served as CTO of HireVue, deploying video AI at massive scale. His experience matters because he has navigated bias, ethics, and real-world deployment, offering a grounded perspective on what responsible healthcare AI should look like today.

    Takeaways
    * The most dangerous moment in a mental health patient’s life is right after leaving inpatient care.
    * AI check-ins between visits restore visibility into patient wellbeing when clinicians cannot scale human outreach.
    * Patients often share more honestly with AI than with therapists because they feel less judged and less pressure to perform.
    * Mental health scores without narrative (like the PHQ-9) miss the “why” behind patient distress.
    * AI should augment clinical judgment, not replace therapists, especially during high-risk treatment moments.
    * Generative AI is not ready to safely conduct therapy, particularly in crises.
    * Model drift can occur from unexpected factors, such as medications or cosmetic procedures, not just bad data.
    * Poorly built healthcare AI can look legitimate, making it hard for buyers to distinguish safe tools from risky ones.
    * Ethical healthcare AI requires clear consent, transparency, and human oversight, not just technical accuracy.
    * The biggest challenge in AI healthcare adoption is balancing speed, safety, and trust in a fast-moving market.

    Timestamps
    00:00 Introduction
    01:35 Videra Health Origin Story
    03:02 AI Patient Check-Ins Between Doctor Visits
    05:33 Why Human Judgment Still Matters in AI Care
    08:49 Gaps in Mental Health Patient Care
    12:07 AI vs Human Care in Mental Health
    13:23 Testing & Validating Healthcare AI Systems
    17:16 Edge Cases, Bias, and AI Model Failure
    19:29 Ethical AI in Healthcare
    23:33 Why Healthcare AI Adoption Is Hard
    25:43 Common Myths About AI in Healthcare
    30:02 Lessons from Building Video AI at Scale
    34:54 Early Warning Signs in AI Systems
    38:31 Advice for First-Time Video AI Builders
    42:05 Innovation Q&A

    Connect with Loren
    * Website: https://www.viderahealth.com/
    * LinkedIn: https://www.linkedin.com/in/loren-larsen/

    This Episode Is Supported By
    * Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH
    * Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe
    * Monkey Digital: Unbeatable SEO. Outrank your competitors - https://www.monkeydigital.org?ref=110260

    For inquiries about sponsoring An Hour of Innovation, email iris@anhourofinnovation.com

    Connect with Vit
    * Substack: https://substack.com/@vitlyoshin
    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
    * X: https://x.com/vitlyoshin
    * Website: https://vitlyoshin.com/contact/
    * Podcast: https://www.anhourofinnovation.com/

    45 min
  4. AI Isn’t the Problem! Why AI Adoption Fails at Work (95% Get Zero ROI) | Jay Kiew

    JAN 17

    AI Isn’t the Problem! Why AI Adoption Fails at Work (95% Get Zero ROI) | Jay Kiew

    Most teams adopt AI expecting a breakthrough, but end up frustrated, disappointed, and wondering what went wrong when productivity doesn’t improve. In this episode of the An Hour of Innovation podcast, Vit Lyoshin sits down with Jay Kiew, a globally recognized expert in organizational change and transformation, to unpack why so many AI initiatives fail to deliver value, even when the technology itself is powerful and widely available. They explore why AI alone does not create productivity or innovation, and why research shows that nearly 95% of companies see little to no ROI from their AI initiatives. Jay explains how broken processes, weak critical thinking, and low change readiness quietly sabotage even the best AI tools. Instead of chasing the next technology, this episode reframes AI adoption as a human and organizational challenge, one that requires mindset shifts before tools can deliver results.

    Jay Kiew is a change strategist and transformation leader who works with organizations navigating complex change at scale. He is known for helping leaders move beyond tool-driven thinking toward building adaptive, change-ready cultures. His perspective matters because it challenges the assumption that AI failures are technical problems and shows why leadership, process discipline, and learning capability are the real differentiators.

    Takeaways
    * AI does not create productivity by itself; it only amplifies the quality of existing processes and decision-making.
    * Most AI initiatives fail not because of weak models, but because teams cannot clearly explain how their work actually gets done.
    * Research showing that 95% of companies see no AI ROI reflects organizational readiness gaps, not a lack of AI capability.
    * Poorly defined workflows become painfully visible the moment AI is introduced into a team.
    * Leaders often deploy AI as a solution before agreeing on what problem they are trying to solve.
    * Organizations that struggle with change management tend to struggle the most with AI adoption.
    * AI agents fail when humans cannot articulate rules, context, and success criteria for the work.
    * Critical thinking is becoming more valuable than technical AI skills as automation increases.
    * Change fluency, the ability to adapt continuously, is emerging as a core career skill for the next decade.
    * Teams that succeed with AI focus less on tools and more on learning, feedback loops, and behavior change.

    Timestamps
    00:00 Introduction
    01:48 Why Leaders Misunderstand AI
    03:22 How AI Reveals Organizational Dysfunction
    05:58 SOPs and Critical Thinking for AI Success
    08:41 AI Adoption and ROI Reality
    13:19 Learning and Integration Matter More Than Tools
    16:11 What AI Agents Really Are
    18:03 How AI Agents Change Roles
    22:42 Training Teams for AI Adoption
    23:59 Why Teaching AI Tools Is Hard
    25:49 Learning on the Job with AI
    28:01 Essential Skills for the AI Era
    29:03 Design Thinking and Influence
    32:16 Why Human Perception Matters
    33:17 Change Fluency as a Future Skill
    34:13 AI’s Real Impact on Productivity
    36:19 Asking Better Questions with AI
    37:55 Practical AI Use at Work
    39:38 Innovation Q&A

    Connect with Jay
    * Website: https://www.changefluency.com/
    * LinkedIn: https://www.linkedin.com/in/jaykiew-change-fluency/
    * Instagram: https://www.instagram.com/changefluency
    * Book: https://www.amazon.com/Change-Fluency-Principles-Uncertainty-Innovation/dp/1774586991

    Sponsors
    * Google Workspace: Collaborative way of working in the cloud, from anywhere, on any device - https://referworkspace.app.goo.gl/A7wH
    * Webflow: Create custom, responsive websites without coding - https://try.webflow.com/0lse98neclhe
    * MeetGeek: Record, transcribe, summarize, and share insights from every meeting - https://get.meetgeek.ai/yjteozr4m6ln

    Connect with Vit
    * Substack: https://substack.com/@vitlyoshin
    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
    * X: https://x.com/vitlyoshin
    * Podcast: https://www.anhourofinnovation.com/

    45 min
  5. Can AI Steal Your Book? The Alarming Plagiarism Problem! | US Publishing Expert

    JAN 10

    Can AI Steal Your Book? The Alarming Plagiarism Problem! | US Publishing Expert

    What if your book could be copied, republished, and sold under someone else’s name, and you’d barely know it happened? In this episode of the An Hour of Innovation podcast, host Vit Lyoshin speaks with Julie Trelstad, a longtime publishing leader and one of the most thoughtful voices on copyright, metadata, and digital trust. Julie brings a rare insider’s view into how books are discovered, distributed, and increasingly misused in an AI-driven world. They explore a growing fear among writers, creators, and publishers: how AI is quietly reshaping plagiarism, authorship, and trust in the publishing ecosystem. They examine how AI-generated content is blurring the line between original work and imitation, why traditional copyright protections struggle in a machine-readable world, and how fake or derivative books can appear online within days. The episode breaks down the real risks authors face today, not hypothetical futures, and what structural changes may be required to protect creative work. It’s a practical, sober look at AI plagiarism.

    Julie Trelstad is a publishing executive and strategist known for her work at the intersection of technology and intellectual property. She has spent decades helping publishers, authors, and platforms navigate the identification, protection, and trust of content at scale. Her perspective matters because she explains not just that AI plagiarism is happening, but why the system makes it so hard to detect and stop, and what could actually help.

    Takeaways
    * AI can clone and resell a book in days, and most platforms struggle to reliably prove that the theft occurred.
    * AI-generated plagiarism often looks legitimate enough to fool retailers, reviewers, and buyers.
    * Authors lose sales and reputation when fake AI versions of their books appear at lower prices.
    * Traditional copyright law exists, but it was never designed for machine-scale copying and AI training.
    * There has been no machine-readable way for AI systems to recognize who owns content, until now.
    * Content fingerprinting can detect similarity across languages and paraphrased AI rewrites.
    * Time-stamped content registries can establish legal proof of who published first (a toy sketch of the registry idea follows these notes).
    * Most books already inside AI models were scraped without the author’s consent or compensation.
    * AI lawsuits focus less on training itself and more on the use of pirated content.
    * Authors could earn micro-payments when AI systems use specific paragraphs or ideas from their work.

    Timestamps
    00:00 Introduction
    01:37 Why AI Plagiarism Is So Hard to Detect
    03:25 Amlet.ai and the Fight for Content Ownership
    05:32 How Copyright Worked Before Generative AI
    08:09 The Origin Story Behind Amlet.ai
    12:22 Building Machine-Readable Infrastructure for Copyright
    14:24 How Publishing Is Changing in the AI Era
    17:34 How Authors Can Protect Their Work with Amlet.ai
    20:38 Tools Publishers Use to Detect and Enforce Rights
    21:38 How Authors Can Monetize Content Through AI
    24:27 The Reality of AI Scraping and Plagiarism Today
    27:00 Publisher Rights, Digital Security, and Enforcement
    29:08 Evolving the Business Model for AI Licensing
    35:34 The Future of Digital Ownership and AI Rights
    38:37 Innovation Q&A

    Support This Podcast
    * To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

    Connect with Julie
    * Website: https://paperbacksandpixels.com/
    * LinkedIn: https://www.linkedin.com/in/julietrelstad/
    * Amlet AI: https://amlet.ai/

    Connect with Vit
    * Substack: https://substack.com/@vitlyoshin
    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
    * X: https://x.com/vitlyoshin
    * Website: https://vitlyoshin.com/contact/
    * Podcast: https://www.anhourofinnovation.com/
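    To ground the registry takeaway above, here is a deliberately toy, hypothetical sketch of a time-stamped content registry: hash a normalized text and record who claimed that hash first. These notes do not describe Amlet.ai at this level of detail, and production fingerprinting is semantic (robust to paraphrase and translation) rather than an exact hash, so every name below is illustrative only.

        import hashlib
        import time

        def fingerprint(text: str) -> str:
            """Toy fingerprint: SHA-256 of whitespace- and case-normalized text.

            Real systems use semantic fingerprints that survive paraphrasing and
            translation; an exact hash only catches verbatim copies.
            """
            normalized = " ".join(text.lower().split())
            return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

        def register(registry: dict, author: str, text: str) -> dict:
            """Record the first author to claim a fingerprint, with a timestamp."""
            claim = {"author": author, "registered_at": time.time()}
            # setdefault keeps the EARLIER entry if the fingerprint already exists,
            # so the first registrant's claim is what the registry returns.
            return registry.setdefault(fingerprint(text), claim)

        registry = {}
        register(registry, "Original Author", "Chapter one of my novel ...")
        entry = register(registry, "Copycat", "Chapter one of my novel ...")
        print(entry["author"])  # "Original Author": the earlier claim wins

    A public, append-only version of such a record is what would let an author prove "I published first" in a dispute.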

    41 min
  6. Functional Precision Medicine: How Cancer Drugs Are Tested Before Treatment | Jim Foote

    12/20/2025

    Functional Precision Medicine: How Cancer Drugs Are Tested Before Treatment | Jim Foote

    Cancer care still forces patients and doctors to guess! Learn how functional precision medicine is replacing that uncertainty by testing cancer drugs before treatment even begins. In this episode of the An Hour of Innovation podcast, host Vit Lyoshin speaks with Jim Foote, co-founder and CEO of First Ascent Biomedical, an innovator who is challenging one of the most uncomfortable truths in modern medicine: many cancer treatments are chosen without knowing if they will actually work. First Ascent Biomedical is a company focused on transforming personalized cancer treatment through functional precision medicine and data-driven decision support. In this conversation, they explore how functional precision medicine differs from traditional precision medicine and why testing drugs on patients’ live tumor cells changes everything. Jim explains how AI, robotics, and large-scale drug testing help doctors move from trial-and-error to a true test-and-treat approach. The discussion also covers the risks of ineffective or harmful treatments, the economic cost of cancer care, and what must change for this model to become part of standard oncology practice.

    Jim Foote is a former technology executive turned healthcare innovator whose work is deeply shaped by personal loss and firsthand experience with cancer care. He is best known for advancing functional precision medicine by combining genomics, live-cell drug testing, and AI-driven analysis to guide treatment decisions. His perspective matters because it connects real clinical outcomes with the technology needed to give doctors and patients clearer, faster, and more humane options.

    Takeaways
    * Cancer treatment still relies heavily on trial-and-error, even with modern medical technology.
    * Two biologically different patients often receive the same cancer treatment based on population averages.
    * Precision medicine based on DNA and RNA sequencing still cannot confirm if a drug will work before it’s given.
    * Functional precision medicine tests drugs directly on a patient’s live tumor cells before treatment begins.
    * Some FDA-approved cancer drugs can be completely ineffective or even make a patient’s cancer worse.
    * Testing drugs outside the body can prevent patients from being exposed to harmful or useless treatments.
    * AI and robotics enable hundreds of drug tests to be completed in days instead of weeks or months.
    * In a published study, 83% of refractory cancer patients did better when treatment was guided by this approach.
    * Knowing which drugs won’t work is just as important as knowing which ones will.
    * Personalized, test-and-treat cancer care has the potential to improve outcomes while reducing overall healthcare costs.

    Timestamps
    00:00 Introduction
    02:46 The Core Problem in Modern Cancer Care
    04:16 Functional Precision Medicine Explained
    06:42 How AI, Robotics, and Data Are Changing Cancer Treatment
    10:01 How Cancer Drugs Are Tested Before Treatment
    13:20 Personalized, Patient-Centric Cancer Care
    18:22 Cost, Access, and the Economics of Cancer Treatment
    22:19 The Future of Cancer Care and Patient Empowerment
    25:21 Real Patient Outcomes and Success Stories
    26:50 Why Functional Precision Medicine Is the Future
    31:18 Predicting, Detecting, and Preventing Cancer Earlier
    34:27 Where to Learn More About Functional Precision Medicine
    36:12 Transforming Healthcare Beyond Trial-and-Error
    37:27 Regulations, FDA Pathways, and Scaling Innovation
    40:09 Why Cancer Is Affecting Younger Patients
    41:17 Innovation Q&A

    Support This Podcast
    * To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

    Connect with Jim
    * Website: https://firstascentbiomedical.com/
    * LinkedIn: https://www.linkedin.com/in/jim-foote/
    * TEDx Talk: https://www.youtube.com/watch?v=CqLCgNxUhVc

    Connect with Vit
    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
    * X: https://x.com/vitlyoshin
    * Website: https://vitlyoshin.com
    * Podcast: https://www.anhourofinnovation.com/

    46 min
  7. The Future of Music Education: AI Tutors, Human Mentors, and Creativity

    12/13/2025

    The Future of Music Education: AI Tutors, Human Mentors, and Creativity

    Music education is quietly undergoing a massive shift, and most people haven’t noticed yet. AI tutors are no longer just tools; they’re starting to shape how musicians learn, practice, and improve. But here’s the real question: where do human creativity and mentorship still matter in an AI-driven world? In this episode of the An Hour of Innovation podcast, host Vit Lyoshin sits down with John von Seggern, a longtime musician, educator, and founder of Futureproof Music School, to unpack what’s actually changing, and what isn’t, in the future of music education. John has spent over a decade designing online music education programs and now works at the intersection of AI, creativity, and human mentorship. In this conversation, they explore how AI is personalizing music education in ways traditional schools struggle to scale. John explains how AI tutors can analyze music, guide students through complex production workflows, and surface the one or two things that matter most at each stage of learning. They also dig into why AI still falls short in mastery, taste, and creative judgment, and why human mentors remain essential. They discuss the hybrid model of AI tutors and human teachers, the future of music production learning, and what this shift means for creators trying to stay relevant in a fast-changing industry.

    John von Seggern is a musician, producer, educator, and music technologist who has worked with film composers and contributed sound design to Pixar’s WALL·E. He previously helped lead and design one of the world’s most respected electronic music programs before founding Futureproof Music School, where he’s building AI-powered, personalized music education systems. His work matters because it goes beyond hype, offering a practical, grounded view of how AI can support creativity without replacing the human elements that make music meaningful.

    Takeaways
    * AI tutors are most effective when they surface only one or two actionable fixes, not long reports that overwhelm learners.
    * Music education improves dramatically when AI can analyze your actual work (like mixes), not just answer theoretical questions.
    * The biggest limitation of AI in music is that elite, professional knowledge is often undocumented, so models can’t learn it.
    * Human mentors remain essential at advanced levels because taste, judgment, and creative intuition can’t be automated.
    * Personalized learning paths outperform one-size-fits-all programs, especially in creative and technical fields like music production.
    * Generative AI tools are fun, but most professionals prefer AI that assists the process, not tools that generate finished music.
    * AI acts best as an intelligence amplifier, helping creators move faster rather than replacing their role.
    * The future of music education isn’t AI-only, but a hybrid model where AI accelerates learning and humans guide mastery.

    Timestamps
    00:00 Introduction
    03:02 How AI Is Transforming Music Education
    07:50 Why AI + Human Mentorship Works Better Than Music Schools
    11:43 Why Music Education Curricula Must Evolve Faster
    15:04 How AI Personalizes Music Learning for Every Student
    19:38 Building an AI-Powered Education Business
    24:22 What Students Really Say About AI Music Education
    26:20 Electronic Music vs Learning Traditional Instruments
    27:58 The Future of AI in Music and Creative Industries
    30:28 Why Artists Still Matter in AI-Generated Art
    32:21 Who Owns Music Created With AI?
    36:50 How Creators Can Survive and Thrive Using AI
    42:24 Innovation Q&A

    Support This Podcast
    * To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

    Connect with John
    * Website: https://futureproofmusicschool.com/
    * LinkedIn: https://www.linkedin.com/in/johnvon/

    Connect with Vit
    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
    * X: https://x.com/vitlyoshin
    * Website: https://vitlyoshin.com/contact/
    * Podcast: https://www.anhourofinnovation.com/

    46 min
  8. RAG, LLMs & the Hidden Costs of AI: What Companies Must Fix Before It’s Too Late

    12/06/2025

    RAG, LLMs & the Hidden Costs of AI: What Companies Must Fix Before It’s Too Late

    Most companies have no idea how risky and expensive their AI systems truly are until a single mistake turns into millions in unexpected costs. In this episode of the An Hour of Innovation podcast, host Vit Lyoshin explores the truth about AI safety, enterprise-scale LLMs, and the unseen risks that organizations must fix before it’s too late. Vit is joined by Dorian Selz, co-founder and CEO of Squirro, an enterprise AI company trusted by global banks, central banks, and highly regulated industries. His experience gives him a rare inside look at the operational, financial, and security challenges that most companies overlook. They dive into the hidden costs of AI, why RAG has become essential for accuracy and cost-efficiency, and how a single architectural mistake can lead to a $4 million monthly LLM bill. They discuss why enterprises underestimate AI risk, how guardrails and observability protect data, and why regulated environments demand extreme trust and auditability. Dorian explains the gap between perceived and actual AI safety, how insurance companies will shape future AI governance, and why vibe coding creates dangerous long-term technical debt. Whether you’re deploying AI in an enterprise or building products on top of LLMs, this conversation is for you.

    Dorian Selz is a veteran entrepreneur known for building secure, compliant, and enterprise-grade AI systems used in finance, healthcare, and other regulated sectors. He specializes in AI safety, RAG architecture, knowledge retrieval, and auditability at scale, capabilities that are increasingly critical as AI enters mission-critical operations. His work sits at the intersection of innovation and regulation, making him one of the most important voices in enterprise AI today.

    Takeaways
    * Most enterprises dramatically overestimate their AI security readiness.
    * A single architectural mistake with LLMs can create a $4M-per-month operational cost.
    * RAG is essential because enterprises only need to expose relevant snippets, not entire documents, to an LLM (a minimal sketch follows these notes).
    * Trust in regulated industries takes years to build and can be lost instantly.
    * Real AI safety requires end-to-end observability, not just disclaimers or “verify before use” warnings.
    * Insurance companies will soon force AI safety by refusing coverage without documented guardrails.
    * AI liability remains unresolved: should the model provider, the user, or the enterprise be responsible?
    * Vibe coding creates massive future technical debt because AI-generated code is often unreadable or unmaintainable.

    Timestamps
    00:00 Introduction to Enterprise AI Risks
    02:23 Why AI Needs Guardrails for Safety
    05:26 AI Challenges in Regulated Industries
    11:57 AI Safety: Perception vs. Real Security
    15:29 Risk Management & Insurance in AI
    21:35 AI Liability: Who’s Actually Responsible?
    25:08 Should AI Have Its Own Regulatory Agency?
    32:44 How RAG (Retrieval-Augmented Generation) Works
    40:02 Future Security Threats in AI Systems
    42:32 The Hidden Dangers of Vibe Coding
    48:34 Startup Strategy for Regulated AI Markets
    50:38 Innovation Q&A

    Support This Podcast
    * To support our work, please check out our sponsors and get discounts: https://www.anhourofinnovation.com/sponsors/

    Connect with Dorian
    * Website: https://squirro.com/
    * LinkedIn: https://www.linkedin.com/in/dselz/
    * X: https://x.com/dselz

    Connect with Vit
    * Substack: https://substack.com/@vitlyoshin
    * LinkedIn: https://www.linkedin.com/in/vit-lyoshin/
    * X: https://x.com/vitlyoshin
    * Website: https://vitlyoshin.com/contact/
    * Podcast: https://www.anhourofinnovation.com/
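    For readers who want the RAG takeaway in concrete terms, here is a minimal, hypothetical sketch of retrieval-augmented generation: rank stored snippets by embedding similarity to the question, then send only the top few to the model. This is a generic illustration, not Squirro's implementation; embed and ask_llm are placeholder callables standing in for whatever embedding model and LLM endpoint an enterprise actually uses.

        import math
        from typing import Callable, List

        def cosine(a: List[float], b: List[float]) -> float:
            """Cosine similarity between two embedding vectors."""
            dot = sum(x * y for x, y in zip(a, b))
            norm_a = math.sqrt(sum(x * x for x in a))
            norm_b = math.sqrt(sum(x * x for x in b))
            return dot / (norm_a * norm_b)

        def answer_with_rag(question: str,
                            snippets: List[str],
                            embed: Callable[[str], List[float]],  # placeholder embedding model
                            ask_llm: Callable[[str], str],        # placeholder LLM endpoint
                            top_k: int = 3) -> str:
            """Send only the top_k most relevant snippets to the LLM, never whole documents."""
            q_vec = embed(question)
            # Rank snippets by similarity to the question and keep the best few.
            ranked = sorted(snippets, key=lambda s: cosine(embed(s), q_vec), reverse=True)
            context = "\n---\n".join(ranked[:top_k])
            prompt = ("Answer using ONLY the context below.\n"
                      f"Context:\n{context}\n\nQuestion: {question}")
            return ask_llm(prompt)

    Because only top_k short snippets ever reach the model, token usage stays bounded per query, which is where the episode's cost-efficiency argument comes from: pushing entire document stores through an LLM is how monthly bills balloon.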

    57 min
5 out of 5, 5 Ratings

About

An Hour of Innovation is hosted by Vit Lyoshin, a technology professional with a background in product development and program leadership. Each episode explores the art and science of innovation through conversations with product leaders, scientists, and innovators. We dive into groundbreaking ideas, uncovering the why and how behind them. The goal is to amplify the voices of those driving change, offering insights, inspiration, and practical takeaways to spark listeners’ creativity and passion for progress. Welcome, and enjoy!