The in-between tech and trust podcast

Eva Simone Lihotzky

The in-between tech and trust podcast explores how we build, break, and rebuild trust in a world shaped by accelerating technology. Hosted by Eva Simone Lihotzky, the podcast holds space for in-depth conversations at the intersection of AI, business, ethics, and human connection. Through interdisciplinary voices - from business and politics to neuroscience, tech, and systems thinking in organizations - it invites conversations that reflect ambiguity and emotion, with deep dives into some of the complex topics we need to solve as a society and beyond.

  1. Why security intelligence fails before the attack - Assaf Kipnis

    MAR 19

    Why security intelligence fails before the attack - Assaf Kipnis

    Most security failures are organisational. This episode is about the gap between the threat intelligence that exists and the human systems that never act on it, and what that costs the organisations that keep losing to attacks they already understood. Assaf Kipnis has spent over a decade inside the threat intelligence and trust and safety functions of some of the world's largest platforms. In this conversation, he maps a structural failure that runs across the industry: the team that identifies threats and the team that deploys detection operate in parallel, with no reliable mechanism to connect them. Intelligence gets produced, reports get written, and the knowledge sits unused while the same attacks return. Assaf describes what it actually took to stop a sophisticated actor group ahead of the 2020 US elections - a rare case where structure and resources aligned - and explains why that outcome is the exception rather than the rule. He also walks through the design decisions behind Catalyst Labs, the company he is now building to close the gap, and why he made provenance non-negotiable even at the cost of speed.

    🎙 Key themes discussed

    Why security teams are structurally rewarded for fighting fires rather than preventing them
    The organisational gap between threat intelligence and detection - and why it persists even in well-resourced teams
    What data provenance means in practice, and why it matters more than speed when using AI in security
    How attackers learn your defences faster than you can adapt - and what the military analogy reveals
    Why trust online currently feels, in Assaf's words, like a pipe dream

    👤 About the guest

    Assaf Kipnis is the founder of Catalyst Labs, with over 12 years working across threat intelligence, information security, and trust and safety at LinkedIn, Google, Meta, and ElevenLabs.
    He brings the perspective of someone who has spent his career making threats legible to organisations - and watching those organisations lack the structure to act on what they could now see.

    🕐 Chapter markers

    [00:18] Why the industry keeps fighting the same fires
    [08:04] What it actually took to stop an actor group - the 2020 elections case
    [12:36] How AI is widening an asymmetry that already existed
    [15:31] Catalyst Labs: the provenance problem and why speed comes second
    [20:35] What to build first if you're starting a threat intelligence team

    🔗 Links

    Assaf Kipnis: https://www.linkedin.com/in/assafkipnis/
    KTLYST Labs: https://www.ktlystlabs.com
    Background information on MGM / FBI reports: https://www.reuters.com/technology/cybersecurity/fbi-struggled-disrupt-dangerous-casino-hacking-gang-cyber-responders-say-2023-11-14/
    Related episode: organisational trust and AI implementation with Simon Berkler: https://open.spotify.com/episode/6y8PMaVUnZVAR1hOAR15DN
    Related episode: accountability and invisible infrastructure with Sergiu Petean: https://open.spotify.com/episode/4KcsZBDgFzkSuwQVihjNR5

    31 min
  2. AI as a Mirror: How Your Organization's Trust Culture Impacts AI Implementation (EP23)

    MAR 14

    AI as a Mirror: How Your Organization's Trust Culture Impacts AI Implementation (EP23)

    🎙️ Simon Berkler, Co-Founder of The Dive

    🎧 About this episode

    This episode of The In-Between Tech & Trust Podcast asks a question every leader is quietly facing: what does AI actually do to the trust inside your organization - and what does your trust culture do to AI? Simon Berkler, organizational development expert and co-founder of The Dive, argues that technology doesn't change organizations. It reveals them. The conversation is for leaders, HR professionals, and anyone navigating organizational transformation in the age of AI.

    🧭 Episode overview

    Eva Simone Lihotzky speaks with Simon Berkler about why trust is not a soft skill but the structural condition that makes organizations work - and why that matters more than ever in the context of AI adoption. Drawing on systems theory, regenerative organizational design, and 20+ years of hands-on OD practice, Simon reframes the tech-and-trust debate: the question is not which AI tools to adopt, but what kind of organization you already are. Because AI, he argues, will act as a mirror - amplifying what's already alive, for better or worse. They explore how to lead through in-between moments when old logic is crumbling and new logic hasn't formed yet, why collective intuition may be the most underused organizational resource, and what it would mean to design governance structures built for uncertainty rather than against it.
    🧩 Key themes discussed

    Why trust reduces social complexity - and what that means practically for organizational transformation
    AI as a mirror of organizational culture: how existing trust levels determine whether AI becomes augmentation or surveillance
    The difference between trust and probability - and why AI runs on the latter, not the former
    Leading through in-between spaces: how to change the rules while still playing the game
    Collective intuition as a strategic resource for navigating complexity, drawing on the work of organizational psychologist Peter Kruse
    The Stellar Approach: a regenerative OD framework for moving organizations from conventional to net-positive ways of working
    Why rhythm is the most overlooked asset in transformation
    Shifting organizational governance from optimizing for certainty to optimizing for uncertainty
    What "safe enough to try" looks like as a leadership stance in AI adoption

    📥 References & further reading

    The Dive - Simon's organizational development consultancy: thedive.com
    Simon Berkler's personal site & writing: simon-berkler.de
    The Stellar Approach by Simon Berkler & Ella Lagé (2024): Amazon
    Niklas Luhmann, Trust and Power - systems theory foundation for the episode's framing of trust: Amazon
    Nora Bateson & the concept of Warm Data - the distinction between warm and cold data Simon references: warmdata.life
    Peter Kruse on collective intuition and complexity - the four ways of dealing with complexity Simon draws on: artsnext.ch summary

    36 min
  3. Navigating the AI Slop: How Editorial Judgment Is Changing (EP 20)

    MAR 14

    Navigating the AI Slop: How Editorial Judgment Is Changing (EP 20)

    🎙️ Dr. Paul Elvers, Head of AI at Funke Mediengruppe

    💬 Summary

    This week's episode of the in-between tech & trust podcast examines how AI is being used inside one of Germany's largest media organizations, with a focus on trust, transparency, and day-to-day editorial practice - a conversation between Dr. Paul Elvers, Head of AI at Funke Mediengruppe, and podcast host Eva Simone Lihotzky. The conversation is for media specialists, editors, product leaders, and anyone working close to news production and consumption. The episode dives deep into the choices that directly affect credibility, audience trust, and the role journalism plays in a democratic society.

    🎧 Episode overview

    In a detailed discussion, Dr. Paul Elvers walks through how AI actually shows up in newsroom workflows, separating real operational value from common misconceptions. Rather than debating whether AI should exist in journalism, the episode stays grounded in how it is governed, where human responsibility remains essential, and why naïve adoption is a bigger risk than cautious experimentation. The conversation also explores how audiences judge credibility in an environment flooded with synthetic content, and what media organizations can realistically do to maintain trust while adapting to new tools and distribution pressures.

    🔍 Key themes discussed

    Why trust in AI comes from understanding systems and accountability, not blind confidence
    The difference between deliberate AI integration and careless, volume-driven adoption
    How “AI slop” reflects a growing difficulty in judging what is trustworthy, not just content quality
    Using AI to automate necessary but unpopular newsroom tasks while keeping humans at the start and end
    The role of recognizable brands and journalists in sustaining audience trust
    What transparency about AI use looks like in real editorial workflows
    Why AI governance in media is iterative, shared, and never fully settled

    40 min
  4. Trust, Creativity, and What We Risk When Adopting AI (EP 22)

    MAR 5

    Trust, Creativity, and What We Risk When Adopting AI (EP 22)

    🎙️ Iwona Fluda, expert for creativity & ethics

    🧭 Opening

    This week's episode of the in-between tech & trust podcast examines how AI is reshaping creativity, trust, and responsibility in everyday work. If you work in creative fields, technology, or organizational leadership and are dealing with AI as a practical reality rather than an abstract future, this episode is for you.

    🗣️ Episode overview

    Eva Simone Lihotzky is joined by creativity and ethics expert Iwona Fluda, founder of the Ministry for Creativity, Head of AI and Content Growth at Deamleaps, and ambassador for the Royal Society for Arts, Manufactures and Commerce. Together, they unpack why trust in technology is eroding, how AI tools affect human thinking when cognition is outsourced, and why creativity cannot be reduced to speed or output. The discussion moves between individual responsibility, organizational shortcuts, and the ethical gaps that appear when inclusivity and long-term design are treated as secondary concerns.

    🧩 Key themes discussed

    Cognitive engagement and AI: how relying on AI without active thinking weakens human cognition, drawing on research associated with the MIT Media Lab.
    Creativity under pressure: creativity as a historically essential survival skill, and why it remains structurally undervalued despite being central to innovation.
    AI as tool and disruptor: the dual role of AI as a powerful collaborator for some and a driver of job loss for others, especially in creative and marketing work.
    Trust in technology and platforms: why skepticism, not trust, defines today’s relationship with technology and institutions, including content ecosystems like LinkedIn.
    Radical inclusivity by design: the limits of add-on ethics programs and the need to build inclusivity into systems from the very beginning.
    Efficiency versus responsibility: organizational choices that favor short-term gains over long-term impact, even when frameworks like the EU AI Act already exist.
    Societal and existential risk: concerns about large-scale job displacement and long-term societal disruption, including references to thinkers such as Roman Yampolskiy.

    29 min
  5. Foresight, Tech and Trust: How to Plan Ahead When The Future Stops Behaving Linearly (EP 19)

    FEB 12

    Foresight, Tech and Trust: How to Plan Ahead When The Future Stops Behaving Linearly (EP 19)

    🎙️ Prof. Dr. Heiko von der Gracht, Professor at the University of Krems

    Opening

    Episode 19 of the in-between tech & trust podcast explores how organizations can make better decisions under uncertainty through foresight and scenario planning. In conversation with Heiko von der Gracht, professor at the University for Continuing Education Krems and long-standing practitioner of foresight, the discussion looks at how trust, technology, and perception shape what leaders think is possible. It is especially relevant for people working in strategy, innovation, or long-term planning in fast-moving environments.

    🧭 Episode overview

    The conversation examines foresight not as prediction, but as a practical discipline for stress-testing assumptions and improving choices when the future is unclear. Drawing on decades of research and applied work, Heiko reflects on why uncertainty feels overwhelming today, how media and digital systems influence our perception of risk, and why traditional planning often breaks down under rapid change. The episode also looks at how trust is being reshaped by scalable, anonymous technologies, and what this means for organizations trying to act responsibly and coherently over time.

    🔍 Key themes discussed

    Why foresight is about decision quality, not forecasting outcomes
    The difference between actual uncertainty and how uncertain the world feels
    How complexity and speed interact to undermine linear planning
    Trust in digital environments shaped by anonymity, scale, and weak accountability
    Knowledge overload, misinformation, and the loss of shared reality
    Scenario planning as a strategic conversation rather than an analytical exercise
    Empirical evidence that sustained foresight investment improves performance

    The discussion also draws on Heiko’s involvement in global foresight and governance contexts, including work connected to the World Economic Forum and UNESCO, grounding the conversation in both research and lived practice.

    36 min
  6. Parenting, Tech & Transformation in Times of Synthesized Knowledge (EP 18)

    FEB 5

    Parenting, Tech & Transformation in Times of Synthesized Knowledge (EP 18)

    🎙️ Grisha Pavlotsky, Chief Transformation Officer at Miro

    Opening paragraph

    This episode shares a conversation between Grisha Pavlotsky, Chief Transformation Officer at Miro, and Eva Simone Lihotzky. It examines trust as a practical design problem in teams, AI systems, and everyday decision-making. The conversation is for leaders, builders, and parents trying to make sense of how judgment, accountability, and authority shift when AI becomes part of how work and learning happen. It focuses on what needs to be made explicit - intent, guardrails, and decision logic - rather than assumed.

    Episode overview

    Grisha draws on his work leading transformation at Miro and his experience raising four children to explore how trust holds - or breaks - when information is abundant and increasingly synthesized. The discussion moves between organizations and families, treating them as parallel systems facing the same challenge: people are no longer short on answers, but on the ability to judge, contextualize, and disagree productively. Along the way, the episode questions current education models, critiques optional AI adoption, and argues that trust depends less on confidence and more on transparency about how decisions are made and who remains accountable.
    Key themes discussed

    Trust as alignment on intent plus visibility into decision frameworks, not just emotional safety
    How AI amplifies confidence without guaranteeing expertise, complicating collaboration
    Why probabilistic systems require clear guardrails, not vague goals
    The shift from producing synthesis to judging and challenging synthesized viewpoints
    Education moving from teaching facts to navigating competing narratives
    Identity and ego as the real blockers in large-scale transformation
    Leadership responsibility in making AI adoption mandatory rather than optional
    Parenting and organizational leadership as the same sense-making problem at different scales

    A recurring reference is the idea - attributed to Satya Nadella - that trust is built through consistency over time, and what that consistency demands in an AI-mediated world.

    36 min
  7. WEF26: The Politics of Tech, AI Agent Systems & Models, Adoption Challenges and Tech Sovereignty (EP 17)

    JAN 29

    WEF26: The Politics of Tech, AI Agent Systems & Models, Adoption Challenges and Tech Sovereignty (EP 17)

    Opening

    This solo episode of The In-Between Tech & Trust Podcast reflects on conversations from Davos and what they reveal about where tech, politics, and trust are heading into 2026. It’s for leaders, operators, and policy-adjacent roles who are trying to make sense of AI adoption beyond tooling. The focus is on what actually changes inside organizations, institutions, and collaborations when AI becomes infrastructure.

    🎧 Episode overview

    Eva Simone Lihotzky unpacks four threads that kept resurfacing across discussions with tech, political, and business leaders: agentic AI systems, the politics of technology, sovereignty, and the future of collaboration and trust. Rather than reporting speeches, the episode explores the tensions beneath the surface - why organizations feel urgency but struggle to act, how AI exposes institutional weaknesses instead of fixing them, and why governance, infrastructure, and responsibility are now inseparable. The episode moves between business realities and geopolitical dynamics, asking what it really means to design AI-driven organizations, who shapes the rules when tech and politics are interwoven, and how dependence on a small set of platforms reshapes power, accountability, and autonomy.

    🔍 Key themes discussed

    Agentic AI systems and why they force a rethink of organizational design
    AI adoption as a platform shift, not a tool rollout
    The gap between AI urgency and practical implementation inside companies
    World models vs. specialized models and why both matter
    Interoperability as an unsolved infrastructure problem
    Tech as both upstream and downstream of politics
    Sovereignty across compute, infrastructure, data, operations, and talent
    Europe’s position in an AI-driven power landscape
    Why collaboration now depends on explicit commitments, not assumptions
    How trust becomes harder - and more necessary - as systems scale

    33 min
  8. The Widening AI Value Gap: What Scaling AI Really Demands (EP 16)

    JAN 15

    The Widening AI Value Gap: What Scaling AI Really Demands (EP 16)

    🎙️ with Dr. Marc Roman Franke, Partner & Associate Director, AI and Digital Transformation at BCG

    💬 Opening

    Eva Simone Lihotzky speaks with Marc Roman Franke, Partner & Associate Director for AI and digital transformation at BCG, about how trust is built - or lost - during AI transformation inside large organizations. The conversation is for leaders, product owners, and transformation teams trying to move beyond pilots and into real operating change. It focuses on why execution, governance, and organizational choices determine whether AI creates value or stalls.

    🎤 Episode overview

    Drawing on large-scale research and implementation experience, the episode examines why only a small share of companies see meaningful returns from AI. Franke argues that the main constraints are not models or tools, but leadership alignment, operating models, and how trust is earned through delivery. The discussion moves from the limits of “AI-ready” programs to what it means to become “AI-first,” including the rise of agentic AI, unmanaged security risks, and why postponing Responsible AI eventually blocks scale.

    🎯 Key themes discussed

    Trust as a practical outcome of reliable execution and visible value, not long-term promises
    Why most AI value depends on people, organization, and leadership rather than algorithms
    What separates the small minority of companies that capture real AI value from the rest
    The difference between experimenting with AI and redesigning the business around it
    How agentic AI changes accountability, decision rights, and human–AI collaboration
    Governance as an enabler of adoption and safety, not a compliance afterthought
    Security and third-party risks that grow as AI scales
    When Responsible AI can be delayed - and why it becomes a blocker later

    🤝🏻 Referenced during the conversation: BCG, MIT, SAP S/4HANA, GDPR, and Steve Jobs.

    39 min
