The in-between tech and trust podcast

Eva Simone Lihotzky

The in-between tech and trust podcast explores how we build, break, and rebuild trust in a world shaped by accelerating technology. Hosted by Eva Simone Lihotzky, the podcast holds space for in-depth conversations at the intersection of AI, business, ethics, and human connection. Through interdisciplinary voices - from business and politics to neuroscience, tech, and systems thinking in organizations - it invites conversations that reflect ambiguity and emotion, and dives deep into some of the most complex topics we need to solve as a society and beyond.

  1. Designing a Sovereign Operating System for Human-Centric AI in Europe (EP 21)

    4D AGO

    Designing a Sovereign Operating System for Human-Centric AI in Europe (EP 21)

    🎙️ Timo Spring, Co-Founder of Alma

    🎧 About this episode
    The latest episode of The In-Between Tech & Trust Podcast explores how AI systems quietly extract human and organizational knowledge, and what it takes to design infrastructure that preserves sovereignty instead. The conversation is for leaders, founders, and technologists working with AI in real organizational settings, especially where trust, regulation, and long-term control matter. Rather than focusing on models or product features, the discussion stays at the infrastructure level - where decisions about coordination, ownership, and data flow shape what AI systems ultimately do to people and institutions.

    🧭 Episode overview
    Eva Lihotzky speaks with Timo Spring, founder of MIMMS, about why AI became powerful faster than it became governable. Drawing on years of work across healthcare, energy, and public-sector systems, Timo argues that the real constraint in AI adoption is not intelligence, but coordination. They examine how AI agents turn human experience into executable algorithms, why this creates a one-way export of institutional knowledge, and how alternative system designs could allow collaboration without raw data sharing. The conversation also touches on Europe’s strategic position in the AI stack, the limits of current procurement models, and why trust in infrastructure cannot be claimed but must be structurally proven.

    🧩 Key themes discussed
    - How AI agents extract implicit human knowledge, and why organizations pay for this
    - Why coordination and orchestration matter more than smarter models
    - The idea of individual expertise as executable algorithms
    - Context transfer as an alternative to large-scale data transfer
    - Energy use, efficiency, and infrastructure design choices in AI systems
    - Sovereignty, ownership, and trust at the operating system layer
    - Tensions between value-based AI systems and existing institutional structures
    - Open sourcing as a long-term trust mechanism for foundational infrastructure

    36 min
  2. Navigating the AI Slop: How Editorial Judgment Is Changing (EP 20)

    FEB 19

    Navigating the AI Slop: How Editorial Judgment Is Changing (EP 20)

    💬 Summary
    This week's episode of the in-between tech & trust podcast examines how AI is being used inside one of the largest media organizations in Germany, with a focus on trust, transparency, and day-to-day editorial practice - steered by Dr. Paul Elvers, Head of AI at Funke Medienhaus, and podcast host Eva Simone Lihotzky. The conversation is for media specialists, editors, product leaders, and anyone working close to news production and consumption. The episode dives deep into the choices that directly affect credibility, audience trust, and the role journalism plays in a democratic society.

    🎧 Episode overview
    In a detailed discussion, Dr. Paul Elvers walks through how AI actually shows up in newsroom workflows, separating real operational value from common misconceptions. Rather than debating whether AI should exist in journalism, the episode stays grounded in how it is governed, where human responsibility remains essential, and why naïve adoption is a bigger risk than cautious experimentation. The conversation also explores how audiences judge credibility in an environment flooded with synthetic content, and what media organizations can realistically do to maintain trust while adapting to new tools and distribution pressures.

    🔍 Key themes discussed
    - Why trust in AI comes from understanding systems and accountability, not blind confidence
    - The difference between deliberate AI integration and careless, volume-driven adoption
    - How “AI slop” reflects a growing difficulty in judging what is trustworthy, not just content quality
    - Using AI to automate necessary but unpopular newsroom tasks while keeping humans at the start and end
    - The role of recognizable brands and journalists in sustaining audience trust
    - What transparency about AI use looks like in real editorial workflows
    - Why AI governance in media is iterative, shared, and never fully settled

    40 min
  3. Foresight, Tech and Trust: How to Plan Ahead When The Future Stops Behaving Linearly (EP 19)

    FEB 12

    Foresight, Tech and Trust: How to Plan Ahead When The Future Stops Behaving Linearly (EP 19)

    🎙️ Prof. Dr. Heiko von der Gracht, Professor at the University for Continuing Education Krems

    Opening
    Episode 19 of the in-between tech & trust podcast explores how organizations can make better decisions under uncertainty through foresight and scenario planning. In conversation with Heiko von der Gracht, professor at the University for Continuing Education Krems and long-standing practitioner of foresight, the discussion looks at how trust, technology, and perception shape what leaders think is possible. It is especially relevant for people working in strategy, innovation, or long-term planning in fast-moving environments.

    🧭 Episode overview
    The conversation examines foresight not as prediction, but as a practical discipline for stress-testing assumptions and improving choices when the future is unclear. Drawing on decades of research and applied work, Heiko reflects on why uncertainty feels overwhelming today, how media and digital systems influence our perception of risk, and why traditional planning often breaks down under rapid change. The episode also looks at how trust is being reshaped by scalable, anonymous technologies, and what this means for organizations trying to act responsibly and coherently over time.

    🔍 Key themes discussed
    - Why foresight is about decision quality, not forecasting outcomes
    - The difference between actual uncertainty and how uncertain the world feels
    - How complexity and speed interact to undermine linear planning
    - Trust in digital environments shaped by anonymity, scale, and weak accountability
    - Knowledge overload, misinformation, and the loss of shared reality
    - Scenario planning as a strategic conversation rather than an analytical exercise
    - Empirical evidence that sustained foresight investment improves performance

    The discussion also draws on Heiko’s involvement in global foresight and governance contexts, including work connected to the World Economic Forum and UNESCO, grounding the conversation in both research and lived practice.

    36 min
  4. Parenting, Tech & Transformation in Times of Synthesized Knowledge (EP 18)

    FEB 5

    Parenting, Tech & Transformation in Times of Synthesized Knowledge (EP 18)

    🎙️ Grisha Pavlotsky, Chief Transformation Officer at Miro

    Opening
    This episode shares a conversation between Grisha Pavlotsky, Chief Transformation Officer at Miro, and Eva Simone Lihotzky. It examines trust as a practical design problem in teams, AI systems, and everyday decision-making. The conversation is for leaders, builders, and parents trying to make sense of how judgment, accountability, and authority shift when AI becomes part of how work and learning happen. It focuses on what needs to be made explicit - intent, guardrails, and decision logic - rather than assumed.

    Episode overview
    Grisha draws on his work leading transformation at Miro and his experience raising four children to explore how trust holds - or breaks - when information is abundant and increasingly synthesized. The discussion moves between organizations and families, treating them as parallel systems facing the same challenge: people are no longer short on answers, but on the ability to judge, contextualize, and disagree productively. Along the way, the episode questions current education models, critiques optional AI adoption, and argues that trust depends less on confidence and more on transparency about how decisions are made and who remains accountable.

    Key themes discussed
    - Trust as alignment on intent plus visibility into decision frameworks, not just emotional safety
    - How AI amplifies confidence without guaranteeing expertise, complicating collaboration
    - Why probabilistic systems require clear guardrails, not vague goals
    - The shift from producing synthesis to judging and challenging synthesized viewpoints
    - Education moving from teaching facts to navigating competing narratives
    - Identity and ego as the real blockers in large-scale transformation
    - Leadership responsibility in making AI adoption mandatory rather than optional
    - Parenting and organizational leadership as the same sense-making problem at different scales

    A recurring reference is the idea - attributed to Satya Nadella - that trust is built through consistency over time, and what that consistency demands in an AI-mediated world.

    36 min
  5. WEF26: The Politics of Tech, AI Agent Systems & Models, Adoption Challenges and Tech Sovereignty (EP 17)

    JAN 29

    WEF26: The Politics of Tech, AI Agent Systems & Models, Adoption Challenges and Tech Sovereignty (EP 17)

    Opening
    This solo episode of The In-Between Tech & Trust Podcast reflects on conversations from Davos and what they reveal about where tech, politics, and trust are heading into 2026. It’s for leaders, operators, and policy-adjacent roles who are trying to make sense of AI adoption beyond tooling. The focus is on what actually changes inside organizations, institutions, and collaborations when AI becomes infrastructure.

    🎧 Episode overview
    Eva Simone Lihotzky unpacks four threads that kept resurfacing across discussions with tech, political, and business leaders: agentic AI systems, the politics of technology, sovereignty, and the future of collaboration and trust. Rather than reporting speeches, the episode explores tensions beneath the surface - why organizations feel urgency but struggle to act, how AI exposes institutional weaknesses instead of fixing them, and why governance, infrastructure, and responsibility are now inseparable. The episode moves between business realities and geopolitical dynamics, asking what it really means to design AI-driven organizations, who shapes the rules when tech and politics are interwoven, and how dependence on a small set of platforms reshapes power, accountability, and autonomy.

    🔍 Key themes discussed
    - Agentic AI systems and why they force a rethink of organizational design
    - AI adoption as a platform shift, not a tool rollout
    - The gap between AI urgency and practical implementation inside companies
    - World models vs. specialized models, and why both matter
    - Interoperability as an unsolved infrastructure problem
    - Tech as both upstream and downstream of politics
    - Sovereignty across compute, infrastructure, data, operations, and talent
    - Europe’s position in an AI-driven power landscape
    - Why collaboration now depends on explicit commitments, not assumptions
    - How trust becomes harder - and more necessary - as systems scale

    33 min
  6. The Widening AI Value Gap: What Scaling AI Really Demands (EP 16)

    JAN 15

    The Widening AI Value Gap: What Scaling AI Really Demands (EP 16)

    🎙️ with Dr. Marc Roman Franke, Partner & Associate Director, AI and Digital Transformation at BCG

    💬 Opening
    Eva Simone Lihotzky speaks with Marc Roman Franke, Partner & Associate Director, AI and Digital Transformation at BCG, about how trust is built - or lost - during AI transformation inside large organizations. The conversation is for leaders, product owners, and transformation teams trying to move beyond pilots and into real operating change. It focuses on why execution, governance, and organizational choices determine whether AI creates value or stalls.

    🎤 Episode overview
    Drawing on large-scale research and implementation experience, the episode examines why only a small share of companies see meaningful returns from AI. Franke argues that the main constraints are not models or tools, but leadership alignment, operating models, and how trust is earned through delivery. The discussion moves from the limits of “AI-ready” programs to what it means to become “AI-first”, including the rise of agentic AI, unmanaged security risks, and why postponing Responsible AI eventually blocks scale.

    🎯 Key themes discussed
    - Trust as a practical outcome of reliable execution and visible value, not long-term promises
    - Why most AI value depends on people, organization, and leadership rather than algorithms
    - What separates the small minority of companies that capture real AI value from the rest
    - The difference between experimenting with AI and redesigning the business around it
    - How agentic AI changes accountability, decision rights, and human-AI collaboration
    - Governance as an enabler of adoption and safety, not a compliance afterthought
    - Security and third-party risks that grow as AI scales
    - When Responsible AI can be delayed, and why it becomes a blocker later

    🤝🏻 Referenced during the conversation: BCG, MIT, SAP S/4HANA, GDPR, and Steve Jobs.

    39 min
  7. Trust as an Operating Metric in AI Companions (EP 15)

    JAN 8

    Trust as an Operating Metric in AI Companions (EP 15)

    🎙️ Lior Oren, Chief Technology Officer at Replika
    A conversation on how emotionally intimate AI systems are built, monitored, and held together under real-world constraints.

    🎧 Opening
    This episode explores how trust is built, measured, and sometimes strained in AI systems designed for emotionally intimate conversations. It’s a technical and ethical discussion for people working on conversational AI, product infrastructure, and safety in systems that users form real attachments to. The focus stays on operational reality - what engineers actually face when AI moves from tools to companions.

    🔍 Episode overview
    Eva Simone Lihotzky speaks with Lior Oren about what it means to run AI companions at scale, where user trust is not an abstract principle but a daily KPI. Drawing on his experience as CTO of Replika and prior work on integrity teams at Meta, Lior explains how unpredictability, observability, and emotional reliance shape engineering decisions. The conversation examines tensions between flexibility and stability, innovation and guardrails, and regulation and lived product reality. Rather than future speculation, it stays grounded in how teams design memory, user control, and safety systems when conversations themselves are the product.

    🧩 Key themes discussed
    - Trust treated as a measurable success metric, not a philosophical goal
    - Why observability is essential in statistical, non-deterministic AI systems
    - Guardrails as part of core infrastructure, similar to security or reliability
    - Emotional attachment influencing uptime, priorities, and team culture
    - User agency through transparency, memory control, and conversational steering
    - The risk of breaking “tone” and continuity when models change
    - Limits of regulation and the trade-offs inherent in statistical safety systems

    37 min
  8. Tech & Trust in 2025: The Good, The Bad and The Big Bets for 2026 - EP14

    12/19/2025

    Tech & Trust in 2025: The Good, The Bad and The Big Bets for 2026 - EP14

    🎙️ with Trusha Rolvering - Director Transformation @adidas
    🎙️ with Carina Hauswald - Managing Partner @GlobeOne
    🎙️ with Kathrin Steinbichler - Director Narrative Consulting
    🎙️ with Mirja Schwartz - Head of Business Development @showz

    Summary
    In this episode of the in-between tech and trust podcast, host Eva Simone Lihotzky engages in a thought-provoking discussion with four women leaders about the intersection of technology and trust as they look ahead to 2026. The conversation recaps personal and organizational trust in technology, the challenges of AI adoption, and the balance between efficiency and human connection. Each guest shares insights on how to navigate the complexities of technology in her field, culminating in predictions for the future of AI and its impact on trust in 2026.

    🔑 Takeaways
    - The intersection of tech and trust is crucial for transformation.
    - Trust is a significant barrier to changing human behaviors.
    - AI can enhance efficiency but requires a shift in mindset.
    - Organizations need to create space for exploration and experimentation with AI.
    - Transparency in using AI builds trust with clients and teams.
    - The speed of technological change can overwhelm organizations.
    - AI should complement human skills rather than replace them.
    - Future conversations will focus on new business models and possibilities.
    - Reflection and pause are essential in the fast-paced tech landscape.
    - Empowering individuals to explore technology fosters trust and innovation.

    🎙️ Sound bites
    "It's about trust to communicate yourself."
    "AI frees up lots of space and time mentally."
    "We need moments to reflect and pause."

    ⏱️ Chapters
    00:00 Introduction to the Intersection of Tech and Trust
    05:18 Exploring Personal and Organizational Trust in Technology
    13:45 Navigating Expectations vs. Reality in AI Adoption
    21:09 The Future of AI: Efficiency vs. Human Connection
    34:23 Betting on the Future: Predictions for 2026
    40:44 Reflections and Closing Thoughts

    🔭 Keywords
    tech, trust, AI, transformation, organizational change, human behavior, efficiency, communication, strategy, future predictions

    💻 Links
    in-between trust on Instagram: @inbetween_trust
    More about Trusha Rolvering: https://tinyurl.com/2pj5k69w
    More about Kathrin Steinbichler: https://tinyurl.com/2rv2hpef
    More about Carina Hauswald: https://tinyurl.com/4twrc37h
    More about Mirja Schwartz: https://tinyurl.com/mr26cv3p

    37 min
