LuminaTalks

LuminaTalks – Where Data Meets Intuition.

Hosted by Kevin De Pauw, LuminaTalks dives deep into the evolving world of AI, data strategy, and digital innovation. From AI agents and cybersecurity to data platforms and the ethics of emerging tech — we explore the power moves and pitfalls shaping the future of intelligent systems.

Whether you're a tech leader, founder, or just curious about the real-world impact of AI, LuminaTalks delivers clear insights, bold opinions, and practical takeaways to help you stay ahead.

🔍 Tune in for smart conversations, sharp perspectives, and stories from the frontlines of innovation.
👉 Follow us and join the dialogue.

Episodes

  1. Building Trustworthy AI Systems for Enterprises | Petar Tsankov

    In this episode, I sit down with Petar Tsankov, AI safety researcher, engineer, and one of the leading voices on enterprise AI governance and compliance. We unpack why traditional, checkbox-driven AI compliance is breaking down, and why governance can no longer live in policies, audits, or PDFs. As AI systems scale faster than regulation, enterprises are being forced to rethink compliance as an engineering problem, not a legal one. This episode goes deep into how AI governance must shift from documentation to execution.

    We explore:
    ✔️ Why annual AI audits fail the moment models change
    ✔️ The biggest blind spot in enterprise AI governance and why most teams don’t see it
    ✔️ Why compliance must become continuous, automated, and embedded in pipelines
    ✔️ How abstract principles like fairness, transparency, and security break down at the technical level
    ✔️ Why AI bias is unavoidable and why measuring it early matters more than eliminating it
    ✔️ How regulation can enable innovation, not slow it down, when applied to high-risk use cases
    ✔️ What “from checkbox to code” actually means for AI teams in production

    🧠 15 Key Takeaways from this episode:
    → AI governance began in academic research but has become a strategic business necessity, especially for large enterprises.
    → AI governance works only when it’s embedded into business strategy, not treated as a separate function.
    → AI forces compliance to shift from one-time checks to continuous, real-time evaluation.
    → Trust in AI depends on transparency. People need to understand how models are trained and how decisions are made.
    → The hardest part of AI governance is turning abstract ideas like fairness and security into measurable technical metrics.
    → Bias in AI is inevitable, but catching and measuring it early is what makes systems fairer.
    → AI governance must live inside development pipelines, not be added after systems are built.
    → The future of AI governance is code-based, with ethical principles built directly into systems, not paper audits.
    → Smart regulation can enable AI innovation if it focuses on risk without slowing scale.
    → The biggest AI governance blind spot is the gap between high-level principles and real technical implementation.
    → AI systems need continuous, automated validation because manual compliance can’t keep up with change.
    → Automation should make AI governance easier, reducing manual work without slowing innovation.
    → AI governance fails without strong data governance. If you can’t track your models and data, you can’t stay compliant.
    → High-risk AI systems demand stricter oversight and risk-based governance frameworks.
    → AI will reshape work by automating routine tasks and amplifying human intelligence, not replacing it.

    🧩 Episode chapters:
    0:00 Intro & Petar’s introduction
    1:49 Petar in AI governance and risk
    12:13 Checkbox to code
    28:38 Hot Take Timer
    33:32 Need help building in platforms, data or AI?
    34:56 How to use continuous validation?
    47:21 Rapid Fire
    49:28 Governance at Scale
    1:04:18 Making AI trustworthy and strategic
    1:08:06 Wrap up

    A minimal, illustrative sketch of what this “checkbox to code” shift could look like in practice follows the episode list below.

    🔔 Subscribe for more conversations on building AI that scales safely, responsibly, and in production.

    1h 10m
  2. Only 0.3% of Founders Will Reach €10M in Revenue | Jürgen Ingels Explains Why

    Most founders believe startups fail because they run out of money. Experienced investors know that’s rarely the real reason.

    In this episode of LuminaTalks, I sat down with Jürgen Ingels, investor, entrepreneur, and founder of Supernova, one of Europe’s leading tech events. We discuss how companies actually scale, why focus matters more than funding, and how technology is quietly reshaping the way investors make decisions.

    Modern startups are being built with smaller teams, higher profitability, and radically different operating models. Yet Europe is falling behind in tech adoption, and automation and AI are already influencing investment decisions long before a founder walks into a room. Why?

    This conversation goes into patterns, systems, and the uncomfortable truths founders need to understand if they want to build companies that last.

    In this episode, we talk about:
    ✔️ Why investors never fully trust your business plan
    ✔️ How experienced investors mentally rewrite startup projections
    ✔️ The rise of small, highly profitable teams and the “mushroom model”
    ✔️ Why hiring too early can slow your company down
    ✔️ How automation is changing back offices, finance, and operations
    ✔️ The growing role of AI in filtering and evaluating startups
    ✔️ Why Europe is structurally behind in tech — and what founders can learn from Asia
    ✔️ The difference between building products and building platforms
    ✔️ Why second- and third-generation founders matter for ecosystems

    🧠 Key Takeaways from this episode:
    → Use a Clear Strategic Focus to Decide What You Ignore
    → Build Investor Relationships Long Before You Need Money
    → Pitch the Plan Investors Will Recalculate Anyway
    → Ask Investors to Remove Operational Load
    → Protect Founder Focus as a Core Responsibility
    → Learn Faster by Borrowing Experience
    → Use Events to Create Outcomes, Not Exposure
    → Build Platforms over Products
    → Observe How Your Team Disagrees Under Pressure
    → Deliberately Stress Test Founder Dynamics Early
    → Use Remote Work Strategically, Not Ideologically
    → Stay Lean Until Growth Forces Structure
    → Prioritize Focus Over Speed
    → Tell a Founder Story, Not a Feature List
    → Formalize Founder Agreements Before Success

    Whether you’re a founder, operator, or investor, this episode will challenge how you think about growth, scale, and decision-making in today’s tech landscape.

    🔔 Subscribe to LuminaTalks for more conversations on technology, business, and the systems shaping the future.

    1h 15m
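To make the “checkbox to code” idea from the Petar Tsankov episode slightly more concrete, here is a minimal, illustrative Python sketch, not the approach discussed in the episode. It shows one way an abstract principle (fairness) could become a measurable, automated gate in a deployment pipeline. The metric choice (demographic parity), the threshold, and all names (MAX_PARITY_GAP, demographic_parity_gap, run_fairness_gate) are hypothetical assumptions for illustration only.

# Illustrative sketch only: a tiny "governance as code" style check.
from collections import defaultdict
import sys

# Hypothetical limit on the gap in positive-prediction rates between groups.
MAX_PARITY_GAP = 0.10

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rate between any two groups."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

def run_fairness_gate(predictions, groups):
    """Exit non-zero (failing a CI job) when the fairness gap is too large."""
    gap = demographic_parity_gap(predictions, groups)
    print(f"demographic parity gap: {gap:.3f} (limit {MAX_PARITY_GAP})")
    if gap > MAX_PARITY_GAP:
        print("FAIL: fairness gate exceeded, blocking this pipeline step")
        sys.exit(1)
    print("PASS: fairness gate within limit")

if __name__ == "__main__":
    # Toy data standing in for a real evaluation set.
    preds = [1, 0, 1, 1, 0, 1, 0, 0]
    grps = ["A", "A", "A", "A", "B", "B", "B", "B"]
    run_fairness_gate(preds, grps)

In a real pipeline, a check of this kind would run automatically on every model or data change, so that compliance is evaluated continuously rather than once a year in an audit.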
