Incongruent

The Incongruables

Podcast edited by Stephen King, award-winning academic, researcher and communications professional. Chester | Dubai | The World. Correspondence email: steve@kk-stud.io.

  1. S5E18 Books, Bots, And The Battle For Credit: Julie Trelstad

    4 DAYS AGO

    Spill the tea - we want to hear from you! A quiet revolution is underway in publishing, and it’s not just about formats. We dive into how AI is reshaping discovery, licensing, and authorship with Julie Trelstad of Amlet, a rights registry built to make books machine-recognisable—so creators can be identified, consent can be captured, and royalties can actually flow. From the surge of AI-generated lookalikes to landmark lawsuits over scraped libraries, we trace how the industry arrived at this moment and what a fair, practical path forward looks like. Julie breaks down the crucial difference between input and output licensing, why ISBNs and legacy metadata fail modern systems, and how the ISCC standard enables a robust digital fingerprint for each work—down to paragraphs and images. We talk candidly about fear in the creative community, the flood of bland AI prose, and the very human qualities that machines tend to erase: voice, quirk, and risk. Along the way, we share hands-on advice for authors using AI: practise discernment, slow the process, maintain a single brief, and edit with intent. Treat the model like an eager intern—helpful, fast, but never the author. Looking ahead, we imagine AI-native discovery where books, audiobooks, and summaries live inside conversational interfaces—and attribution becomes a visible badge of trust. With transparent licensing and machine-readable rights, micro-royalties for model usage become possible, piracy loses its edge, and independent creators gain leverage. If ebooks taught us to distribute better, AI is our chance to account better and to value the people behind the pages. Enjoyed the conversation? Follow and subscribe, share with a friend who writes or reads obsessively, and leave a quick review to help others find the show. Your support helps us bring more smart, human-centred conversations about AI and creativity to your feed. Support the show

    27 min
  2. S5E17 AI Meets Dermatology: Jonathan Benassaya

    26 JAN

Spill the tea - we want to hear from you! A near miss changed everything. When a dermatologist almost sent Jonathan Benassaya home during COVID without checking under his mask, a hidden melanoma forced a reckoning with how we detect skin cancer: slow access, manual exams, and invasive treatments that arrive too late. That moment now fuels SkinBit, a clinical‑grade, full‑body scanner built to triage quickly, track change over time, and help dermatologists focus on the patients who need them most. We sit down with Jonathan to unpack the big idea: use dermatoscopic‑resolution imaging to create a digital twin of your skin, then apply AI to score suspicious lesions and prioritise care. Instead of waiting months for rushed visual checks, people could be scanned at clinics or even pharmacies, with flagged results routed to specialists. It is a practical, scalable way to expand dermatology capacity without replacing clinicians. Jonathan also shares what comes next: millimetre‑wave imaging to look beneath the surface, a human‑in‑the‑loop workflow for safety, and a data strategy that follows regulatory guidance by training on consistently acquired images and biopsy‑confirmed outcomes. Beyond the tech, we dig into trust. Jonathan outlines a plain‑spoken covenant: transparent consent, meaningful opt‑out, and a firm promise not to sell data. When models improve, retrospective reviews can benefit the same patients who contributed, turning participation into shared progress. He also reflects on leadership in a risky space—hardware, health, regulation—and why a mission that saves lives powers teams through the hard parts. Expect candid insights on aligning patients, clinicians, and payers, managing milestones, and keeping ethics at the core. Make a small move that matters: book a skin check and talk to your loved ones about doing the same. If you want updates on when you can try the system, join the waitlist at https://www.skinbit.co/.
If this conversation resonates, subscribe, share it with a friend, and leave a review to help more people find it. Support the show

    31 min
  3. S5E16 Banking The New Majority: Tamara Laine

    12 JAN

    Spill the tea - we want to hear from you! Credit scores were built for a world of steady paycheques and long mortgages—so what happens when half the workforce earns through gigs, multiple clients, and flexible hours? We sit down with Tamara Laine, an Emmy-winning journalist turned fintech founder, to explore how AI and verified alternative data can open fair credit to the people traditional underwriting overlooks: drivers, carers, creators, renters, and newly arrived citizens who keep the economy moving. Tamara walks us through MPWR’s approach to building lender-ready profiles from real life signals—rent payments, utilities, bank inflows, and multi-source income—sourced directly from institutions rather than hype-heavy tech. We unpack why human-in-the-loop systems matter for trust, how feedback loops keep products grounded in user reality, and why diverse teams aren’t a “nice to have” but essential to preventing bias at scale. Along the way, we challenge outdated assumptions about the gig economy and map a path where financial inclusion is not charity, but overdue modernisation. You’ll hear a clear case for storytelling and community as the engines of adoption, practical ways to evaluate AI tools for privacy and safety, and a forward look at the next decade of work where soft skills and emotional intelligence rise in value. If you’ve ever paid your rent on time yet struggled to access credit, or if you build products and want to keep people at the centre, this conversation offers both strategy and hope. If this resonates, follow and share the show with someone who needs a fairer shot at finance. Subscribe, leave a quick review, and tell us: what everyday data should count toward credit that doesn’t today? Show Link: https://mpwr.money/about/ Support the show

    23 min
  4. S5E15 Trust, Code, And The Future Of Truth: Billy Luedtke

    15/12/2025

Spill the tea - we want to hear from you! What happens when a handful of companies can quietly steer what we see, buy, and believe? We sit down with Billy Luedtke, founder of Intuition and a veteran of EY and ConsenSys, to map a path where trust is built on cryptographic proof, portable reputation, and your own data — not a platform’s black box. Billy argues that while crypto started by decentralising money, the bigger prize is decentralising information itself. If discovery flows through opaque feeds and proprietary AIs, power concentrates. The antidote is simple in concept and ambitious in practice: verifiable attestations about people, agents, and platforms that travel with you anywhere. We dig into how cryptographic attribution shows who said what, while reputation adds the nuance that pure math cannot. One trusted voice beats ten thousand gamified reviews, so Intuition focuses on a neutral substrate for signed claims and lets multiple scoring models compete on top. That choice avoids a central arbiter of truth and keeps bias in check. From there we explore the rise of agent swarms — many specialised agents coordinating like a brain — and why open, portable reputations will decide how requests are routed and which tools act on your behalf. Billy also shares how this vision lands on device with Samsung’s Gaia phones: a second brain for your preferences and trusted sources that you control, usable across any model without lock‑in. We talk healthcare records, bank reputations, and why your ChatGPT context should be yours to carry. The through‑line is clear: don’t let anyone control the truth. Treat it as a prism of perspectives anchored by verifiable facts and accountable actors. If that future excites you — or challenges your assumptions — tune in, share with a friend, and leave a review so more curious listeners can find us. Support the show

    31 min
  5. S5E14 Algorithms, Beauty, And The Artist: Gretchen Andrew

    08/12/2025

    Spill the tea - we want to hear from you! What if the internet that trained today’s AI also rewired our sense of beauty, originality, and self? We sit down with artist and former Googler Gretchen Andrew to explore how algorithms shape culture—from who gets seen on social platforms to why so many rooms, faces, and feeds now look the same. Gretchen’s path from information systems to the Whitney offers a rare inside-out view: she uses AI not to generate images, but to expose how machine-enforced standards flatten difference and reward sameness. Gretchen breaks down the feedback loop that began a decade ago when adtech and SEO drove the kind of content the web produced. Those archives became the fuel for generative models, and now those models steer taste back into the feed. We talk practical signals for spotting AI images, the difference between building your own dataset versus prompting in a black box, and why the best AI artists still make work that is unmistakably theirs. Her Facetune Portraits turn invisible edits into physical marks, revealing the embedded judgements inside “beautifying” tools and how they travel from screens to surgeons’ offices. The conversation gets personal and urgent. Filters can destabilise your self-image even when you know how they work. Plastic surgery trends among men and women rise as we optimise our 2D selves for Zoom and Instagram. For artists worried about replacement, Gretchen offers a path forward: study art history to know what’s actually new, build a practice that explains why the tool matters, and lean into the messy human qualities machines can’t convincingly fake. If you care about AI, culture, and creative integrity, this one will challenge how you see your feed—and your face. Enjoy Gretchen's work: https://www.gretchenandrew.com/facetune-portraits/facetune-portraits-gretchen-1 Enjoyed the conversation? Follow the show, share this episode with a friend, and leave a review to help more curious listeners find us. 
Support the show

    35 min
  6. S5E13 AI, Advertising, And The Future Of Trust: Nadeem Ibrahim

    01/12/2025

Spill the tea - we want to hear from you! What if AI didn’t arrive overnight but grew up with us—and only now stepped into the spotlight? We’re joined by Nadeem Ibrahim to map the real story: from Spectrum tapes and pagers to telco‑powered data flows and the rise of large language models that collapse research, recommendation, and purchase into a single conversation. We get honest about the agency dilemma: when a strategist uses AI to draft research or polish slides, is that cheating or smart leverage? Nadeem argues for a different metric—returning 30% of brainspace to strategy, creative judgement, and client problem‑solving. The machine assembles; the human interrogates, edits, and decides. That’s how brands earn distinctiveness in a sea of sameness. Along the way, we unpack how user‑generated public data fuels models, why conversational commerce is a direct threat to traditional search, and how marketers can meet customers at the moment of intent inside AI interfaces. The stakes rise with ownership and trust. Deepfakes, synthetic voices, and viral misinformation make provenance non‑negotiable. We dive into talent IDs and authenticity metadata designed to protect artists, route royalties, and help platforms block abuse. Today it safeguards public figures; tomorrow it should extend to everyone, because a face and a voice are not public domain. We also explore fintech’s convergence with e‑commerce and AI, envisioning systems that can nudge healthier spending while delivering uncanny personalisation—if guardrails like consent and auditability are built in. If you’re leading a team, start small but deliberate: automate low‑leverage tasks, codify human review, treat prompts and logs as sensitive data, and pilot conversational commerce with clear measurement. If you’re a creator, add authenticity signals and demand the same from partners.
Subscribe for more sharp, human‑centred takes on AI, advertising, and digital culture—and tell us: where should AI never cross the line? Support the show

    36 min
  7. S5E12 When Emotions Become Data, Leaders Finally Hear: James Warren

    24/11/2025

    Spill the tea - we want to hear from you! Start with a bold idea: stories are data. Not the soft kind you file under “nice to have,” but structured, analysable insight that explains why people choose, stay, leave or change. We sit down with James Warren, founder of Share More Stories, to unpack how narrative plus AI can outclass dashboards built on clicks and demographics, and why the shift begins with designing for trust. James walks us through SEEQ, a platform that invites customers and employees to share personal experiences in writing or audio, primed by sensory prompts that activate memory and emotion. Instead of over-engineered chatbots, a simple, humane flow draws out richer accounts and reduces performance pressure. Those stories are then segmented and analysed across 55 emotional signals, creating an emotional map leaders can act on. The payoff is clarity: not just what happened, but how it felt and what is likely to happen next. We dive into privacy and ethics—anonymised identities, clear boundaries between “personal” and “private,” and reporting that avoids exposing individuals. Then we get practical about leadership. When trust is shaky, go first and model vulnerability; when trust exists, step back and listen before you speak. A recent energy-sector case shows how new hire and hiring leader stories revealed overlooked friction and emotional strain, guiding more humane fixes than another round of survey tweaks. As AI-generated content floods every channel, authentic human voice becomes a competitive advantage. Real-time querying and emotion-aware analysis can scale insight, but meaning still starts with people willing to share. If you’re in HR, marketing, or internal comms, and you’re tired of numbers that don’t move behaviour, this conversation offers a cleaner path: fewer assumptions, deeper listening, and decisions anchored in emotion-driven truth. 
If it resonates, follow the show, share it with a colleague who cares about culture, and leave a review with one story you think leaders need to hear. ~~~ Share More Stories: https://sharemorestories.com/ Support the show

    38 min
  8. S5E11 AI And The Future Of Music Education: John von Seggern

    17/11/2025

    Spill the tea - we want to hear from you! A world-touring jazz bassist turned educator and AI builder joins us to explore what happens when smart tools meet hard-won craft. We dig into how Futureproof Music School blends a curriculum-aware chatbot with real mentors so producers learn faster, pay less, and still develop their own voice rather than a template sound. From Wembley Arena stories to DAW specifics, John breaks down what large models already understand, where proprietary production knowledge still wins, and why structure matters more than infinite answers. We take you inside Kadence, a memory-based AI co-pilot that analyses mixes, compares references, and serves targeted, actionable feedback instead of overwhelming students with lists. Think fewer rabbit holes, more progress: clear mix notes, arrangement guidance, and strategic nudges that build week over week. We also get honest about what students actually want from AI right now—help with marketing, release planning, and social consistency—so the music doesn’t drown under admin. The throughline is creativity as curation: your taste decides what ships, even if AI offers a hundred options. We tackle the big questions too. Can detectors keep up as artefacts vanish? What counts as responsible use when training data is opaque? John argues for consent and compensation, drawing a sharp line between experimentation and high-stakes commercial work while legal frameworks mature. Looking forward, we preview screen-aware and voice-first coaching that can see your DAW and fix problems in context, compressing learning curves without flattening style. If you care about music production, AI ethics, and building a career that sounds like you, this conversation brings clarity, nuance, and practical steps. Enjoyed the episode? Follow and share with a friend, and leave a quick review to help more producers find the show. Support the show

    36 min

Ratings & Reviews

5 out of 5 (2 Ratings)
