Humans WithAI™

David Brown

Hosted by David Brown, the show features honest conversations about the human impact of AI. What started as Creatives WithAI in 2023 has evolved into a broader look at how AI is reshaping careers, industries, identity and opportunity far beyond the creative world. From founders and freelancers to educators, leaders and workers navigating change, Humans WithAI explores what happens when technology stops being abstract and starts affecting real life.

  1. 136. Marco Ramilli: Understanding AI: The Importance of Detecting Fake Content

    04/07/2025

    Marco Ramilli joins us to discuss the urgent need for technology that can identify whether images and videos have been generated by artificial intelligence. He shares that the idea for his software arose from a viral image of the Pope in a designer jacket, which sparked widespread debate and confusion over its authenticity. As the digital landscape becomes increasingly cluttered with manipulated content, Marco emphasises the critical importance of distinguishing reality from fabrication. He explains how his software leverages advanced AI models to analyse visual content and determine its origins. This conversation sheds light on the broader implications of AI-generated media and the challenges we face in maintaining trust in what we see online.

    Takeaways:

    - Marco Ramilli discusses the importance of distinguishing real images from AI-generated content, especially in today's digital world.
    - He shares how the viral fake image of the Pope in a puffer jacket inspired him to develop software for identifying AI-generated media.
    - The technology developed by Marco can analyse photos, videos, and sounds to determine their authenticity, which is crucial for preventing misinformation.
    - Marco emphasises that the responsibility lies with technology developers to incorporate safeguards against misuse of AI-generated content.
    - He notes that the rise of fake content can dilute public trust and complicate issues surrounding information verification in society.
    - Marco believes that collaboration among companies is essential to address the challenges posed by the proliferation of AI-generated media.

    33 min
  2. 135. Tim Carter & Simon Mirren: AI’s Missing Soul: Who’s Really Telling the Story?

    20/06/2025

    In this episode of Creatives WithAI, Lena Robinson and David Brown are joined by Tim Carter (CEO) and Simon Mirren (Creative Officer) of Karmanline, a newly launched company focused on integrating AI into the content production industry. Together, they dive into the provocations and potential of AI in storytelling and content creation, from the philosophical to the practical. Simon brings his decades of experience as a showrunner and storyteller (Criminal Minds, Versailles etc.) to interrogate whether machines can ever grasp the soul of a narrative. Tim, with a background in IP, tech, and ethics, unpacks how generative tools can (and should) be leveraged across the production pipeline without sidelining the deep craft and collaboration that makes filmmaking human. From fake AI startups and the dangers of anthropomorphising machines, to the creative chaos worth protecting in an increasingly optimised world, this episode is a must-listen for anyone working in, adjacent to, or even worried about AI's influence on the future of media.

    Takeaways:

    - AI Creativity: A machine might generate content, but it can't understand tension, soul, or satire. That still belongs to humans, at least for now.
    - Middle Ground Disruption: AI is widening the talent pool, but in doing so, it's making life harder for average-skilled professionals.
    - Human-Centric Storytelling Matters: Technology can support storytelling, but it shouldn't overwrite the stories of marginalised voices.
    - Collective Craft is Sacred: Every role on a film set, from grips to carpenters, holds meaning. Disregarding that in pursuit of "efficiency" is both arrogant and shortsighted.
    - Let's Talk Back: The episode challenges us to stay involved, speak up, and resist the sanitisation of creativity through algorithmic convenience.

    (PS: We want to hear from you! Got a question for Lena and Dave to tackle in a future episode? Drop it in the comments on our socials and we might feature it on the show.)
    Find Tim and Simon Online:

    - Tim Carter (CEO, Karmanline) – LinkedIn
    - Simon Mirren (Creative Officer, Karmanline; showrunner of Criminal Minds, Versailles etc.) – LinkedIn
    - Karmanline – News article

    Links referenced in this episode:

    - 1st news article Tim mentioned: I tested Google's VEO 3 Myself: Here's what they don't show you in the keynote
    - 2nd news article Tim mentioned: Video of Emily M Bender & Sébastien Bubeck at The Computer History Museum
    - Article Lena mentioned: 'Nobody wants a robot to read them a story!' The creatives and academics rejecting AI - at work and at home.
    - Mentioned by Lena: Simon's LinkedIn post re "EI and IQ intelligence tests"

    People/companies worth a look and mentioned in this episode:

    - Justine Bateman (actress, director, writer, outspoken on AI in Hollywood) – IMDb
    - Emily M. Bender (linguist, AI critic) – Website
    - Dave Chappelle (comedian / example of creative nuance) – Website
    - Donald Glover (actor, writer, creator of Atlanta) – IMDb
    - Google Veo 3 (recent software release mentioned by Tim) – Website
    - Google Assistant (software mentioned by Dave) – Website
    - Criminal Minds (TV show Simon worked on) – IMDb
    - Versailles (TV show Simon worked on) – IMDb
    - CSI (Crime Scene Investigation) (TV show Simon worked on) – IMDb
    - SpinVox (historical AI-voice startup referenced by Tim) – Wikipedia
    - Builder.AI (AI startup outed for using human labour behind the scenes) – Website
    - OpenAI (developers of ChatGPT) – Website
    - Anthropic (AI research company mentioned in context of safety/AI blackmail) – Website
    - Computer History Museum (location of Emily Bender's public AI debate) – Website
    - Kodak (film and digital photography company) – Website
    - Red Digital Cinema (RED cameras used in Simon's productions) – Website

    1 hr 7 min
  3. 134. Dr. Sonia Tiwari: Why AI Characters Need Empathy and Boundaries

    10/06/2025

    Dr. Sonia Tiwari joins Iyabo Oba on Relationships WithAI to explore how her work in design, education, and character creation intersects with AI, particularly in emotionally safe and ethical ways. Sonia shares how AI characters can foster learning, how her personal journey shaped her approach, and why foundational skills matter in AI collaboration. The conversation delves into topics like dual empathy, the dangers of parasocial AI relationships, and the mental health chatbot she created, Limona. Sonia calls for thoughtful design, cultural awareness, and clear guardrails to ensure AI supports rather than harms, especially in children's lives.

    Top Three Takeaways:

    - Design and Empathy Matter in AI - AI characters that feel relatable and emotionally safe can support learning and mental health, but their design must include ethical safeguards and clear limits.
    - Foundational Skills Are Crucial - AI tools amplify existing expertise; they don't replace it. Educators and designers with real-world experience use AI more responsibly and creatively.
    - Guardrails Must Be Built In - Effective AI literacy and child safety require action on three levels: law, design, and culture. Without all three, AI can become emotionally manipulative or unsafe.

    Links and References:

    - Limona chatbot – Sonia's CBT-based AI support tool
    - Daniel Tiger's Neighborhood and Mr. Rogers' Neighborhood – character-led emotional learning
    - Buddy.ai – AI tutor for kids
    - Everyone AI – nonprofit working on AI and child safety
    - CBT overview – understanding cognitive behavioural therapy
    - Red teaming – stress-testing AI for safety flaws

    1 hr
  4. 133. Rola Aina: Why Emotionally Intelligent Leaders Will Win With AI

    27/05/2025

    In this episode of Relationships WithAI, Iyabo Oba sits down with tech transformation consultant and TurnTroop founder Rola Aina for a wide-ranging conversation on leadership, purpose, and building with AI. Rola shares how her faith and upbringing shape her mission to make AI adoption both ethical and inclusive. She explains how TurnTroop is using African talent to help businesses implement AI responsibly, creating social impact while solving real enterprise problems. They discuss why generosity is not a soft skill but a strategic one, and how emotionally intelligent leadership can slow things down to build faster, fairer systems. Rola also reflects on her use of AI tools like ChatGPT and Claude, why nuance and judgment still belong to humans, and how real connection and kindness must remain at the heart of how we build and lead in an AI-driven world.

    Takeaways:

    - AI as a Tool for Equity and Empowerment: Rola sees AI as a powerful tool to level the global playing field, particularly through her startup TurnTroop, which helps businesses adopt AI responsibly while building talent pipelines in Africa. She believes Africa doesn't need saviours or more charities; it needs CEOs and commerce rooted in dignity and purpose.
    - Leadership Grounded in Purpose, Generosity, and Emotional Intelligence: Rola champions emotionally intelligent leadership, rejecting the "move fast and break things" culture. She promotes slowing down to reflect, empowering teams, and building systems that include everyone. Her values of generosity and purpose shape how she leads, builds tools, and envisions ethical AI.
    - Human Connection Must Remain Central in an AI-Driven World: While Rola utilises AI tools like ChatGPT and Claude as "chiefs of staff," she emphasises their limitations, particularly in terms of nuance, judgment, and emotional presence. She urges founders and leaders to stay human, stay kind, and stay emotionally connected, especially in distributed teams.
    Links:

    - https://www.turntroop.ai/
    - https://www.linkedin.com/in/rola-aina/

    58 min
  5. 132. Lyudmila Lugovskaya: The Hidden Costs of Creative Automation

    15/05/2025

    In this episode of Women WithAI, Dr. Lyudmila Lugovskaya brings her extensive experience in AI and data science to our conversation. We discuss how she helps companies navigate the complex landscape of generative AI to achieve real business outcomes. Lyudmila highlights the common misconceptions surrounding AI, particularly the overestimation of its capabilities and the importance of having clean, organised data. We explore the evolving job market as AI becomes more integrated into various industries, and the necessity for humans to adapt and maintain essential skills. Our discussion emphasises the potential of AI as a tool for innovation while also recognising the challenges and responsibilities that come with its use.

    Takeaways:

    - Dr. Lugovskaya emphasises the importance of having clean and organised data for successful AI implementations.
    - Companies often have unrealistic expectations about AI, believing it can instantly solve their business problems.
    - As generative AI rapidly evolves, continuous learning and adaptation are essential for professionals in the field.
    - The conversation highlights that while AI can automate tasks, it also requires human oversight and input for effective results.
    - Lyudmila suggests starting to use AI tools with detailed prompts to achieve better outcomes and efficiency.
    - The emergence of AI is likely to change job roles, creating new opportunities while rendering some tasks obsolete.

    Links referenced in this episode:

    - linkedin.com
    - arxiv.org
    - huggingface.com
    - hackernews.com

    36 min
