Computer Says Maybe

Alix Dunn

Technology is changing fast. And it's changing our world even faster. Host Alix Dunn interviews visionaries, researchers, and technologists working in the public interest to help you keep up. Step outside the hype and explore the possibilities, problems, and politics of technology. We publish weekly.

  1. Who Knows? Independent Researchers in a Platform Era w/ Brandi Geurkink

    1 DAY AGO

    Imagine doing tech research… but from outside the tech industry? What an idea…

    More like this: Nodestar: Turning Networks into Knowledge w/ Andrew Trask

    So much of tech research happens within the tech industry itself, because it requires data access, funding, and compute. But what the tech industry has in resources, it lacks in independence, scruples, and a public interest imperative. Alix is joined by Brandi Geurkink from the Coalition for Independent Technology Research to discuss her work at a time when platforms have never been so opaque and funding has never been so sparse.

    Further reading & resources:
    - More about Brandi and the Coalition
    - Understanding Engagement with U.S. (Mis)Information News Sources on Facebook by Laura Edelson & Dan McCoy
    - More on Laura Edelson
    - More on Dan McCoy
    - Jim Jordan bringing in Nigel Farage from the UK to legitimise his attacks on EU tech regulations — Politico
    - Ted Cruz on preventing jawboning & government censorship of social media — Bloomberg
    - Judge dismisses ‘vapid’ Elon Musk lawsuit against group that cataloged racist content on X — The Guardian
    - See the CCDH’s blog post on getting the case thrown out
    - Platforms are blocking independent researchers from investigating deepfakes by Ariella Steinhorn

    Disclosure: This guest is a PR client of our consultancy team. As always, the conversation reflects our genuine interest in their work and ideas.

    **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

    48min
  2. Très Publique: Algorithms in the French Welfare State w/ Soizic Pénicaud

    NOV 21

    Governments around the world are using predictive systems to manage engagement with even the most vulnerable. Results are mixed.

    More like this: Algorithmically Cutting Benefits w/ Kevin De Liban

    Luckily, people like Soizic Pénicaud are working to prevent the modern welfare state from becoming a web of punishment for the most marginalised. Soizic has worked on algorithmic transparency both inside and outside government, and this week she shares her journey: from incrementally improving these systems from within (boring, ineffective, hard) to escaping the slow pace of government and looking at the bigger picture of algorithmic governance and how it can build better public benefit in France (fun, transformative, and a good challenge). Soizic is working to shift political debates about opaque decision-making algorithms to focus on what they’re really about: the marginalised communities whose lives are most affected by these systems.

    Further reading & resources:
    - The Observatory of Public Algorithms and their Inventory
    - The ongoing court case against the French welfare agency's risk-scoring algorithm
    - More about Soizic
    - More on the Transparency of Public Algorithms roadmap from Etalab — the task force Soizic was part of
    - La Quadrature du Net
    - France’s Digital Inquisition — co-authored by Soizic in collaboration with Lighthouse Reports, 2023
    - AI prototypes for UK welfare system dropped as officials lament ‘false starts’ — The Guardian, Jan 2025
    - Learning from Cancelled Systems by Data Justice Lab
    - The Fall of an Algorithm: Characterizing the Dynamics Toward Abandonment — by Nari Johnson et al., featured at FAccT 2024

    **Subscribe to our newsletter to get more stuff than just a podcast — we host live shows and do other work that you will definitely be interested in!**

    52min
  3. The Toxic Relationship Between AI & Journalism w/ Nic Dawes

    NOV 7

    What happens when AI models try to fill the gaping hole in the media landscape where journalists should be?

    More like this: Reanimating Apartheid w/ Nic Dawes

    This week Alix is joined by Nic Dawes, who until very recently ran the non-profit newsroom The City. In this conversation we explore journalism’s newfound toxic relationship with AI and big tech: can journalists meaningfully use AI in their work? If a model summarises a few documents, does that add a new layer of efficiency, or inadvertently oversimplify? And what can we learn from big tech positioning itself as a helpful friend to journalism during the Search era? Beyond just accurately relaying facts, journalistic organisations also represent an entire backlog of valuable training data for AI companies. If you don’t have the same resources as the NYT, suing for copyright infringement isn’t an option — so what then? Nic says we have to break out of the false binary of ‘if you can’t beat them, join them!’

    Further reading & resources:
    - Judge allows ‘New York Times’ copyright case against OpenAI to go forward — NPR
    - Generative AI and news report 2025: How people think about AI’s role in journalism and society — Reuters Institute
    - An example of The City’s investigative reporting: private equity firms buying up property in the Bronx — 2022
    - The Intimacy Dividend — Shuwei Fang
    - Sam Altman on Twitter announcing that they’ve improved ChatGPT to be mindful of mental health effects — “We realize this made it less useful/enjoyable to many users who had no mental health problems, but…”

    **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

    42min
  4. Unlearning in the AI Era w/ Nabiha Syed at Mozilla Foundation

    OCT 31

    Mozilla Foundation wants to chart a new path in the AI era. But what is its role now, and how can it help reshape the impacts and opportunities of technology for… everyone?

    More like this: Defying Datafication w/ Abeba Birhane

    Alix sat down with Nabiha Syed to chat through her first year as the new leader of Mozilla Foundation. How does she think about strategy in this moment? What role does she want the foundation to play? And crucially, how is she stewarding a community of human-centered technology builders in a time of hyper-scale and unchecked speculation? As Nabiha says, “restraint is a design principle too”.

    Plug: We’ll be at MozFest this year broadcasting live and connecting with all kinds of folks. If you’re feeling the FOMO, be on the lookout for episodes we produce about our time there.

    Further reading & resources:
    - Watch this episode on YouTube
    - Imaginative Intelligences — a programme of artist assemblies run by Mozilla Foundation
    - Nothing Personal — a new counterculture editorial platform from the Mozilla Foundation
    - More about MozFest
    - Nabiha on the Computer Says Maybe live show at the 2025 AI Action Summit
    - Nabiha Syed remakes Mozilla Foundation in the era of Trump and AI — The Register
    - Nabiha on why she joined MF as executive director — MF Blog

    **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

    46min
  5. You Seem Lonely. Have a Robot w/ Stevie Chancellor

    OCT 24

    Loneliness and mental illness are rising in the US while access to care dwindles — so a lot of people are turning to chatbots. Do chatbots work for therapy?

    More like this: The Collective Intelligence Project w/ Divya Siddarth and Zarinah Agnew

    Why are individuals confiding in chatbots over qualified human therapists? Stevie Chancellor explains why an LLM can’t replace a therapeutic relationship — but often there’s just no other choice. It turns out the chatbots designed specifically for therapy are even worse than general models like ChatGPT; Stevie shares her ideas on how LLMs could potentially be used — safely — for therapeutic support. This is a really helpful primer on how to evaluate chatbots for specific, human-replacing tasks.

    Further reading & resources:
    - Stevie’s paper on whether replacing therapists with LLMs is even possible (it’s not)
    - See the research on GitHub
    - People are Losing Their Loved Ones to AI-Fuelled Spiritual Fantasies — Rolling Stone (May 2025)
    - Silicon Valley VC Geoff Lewis becomes convinced that ChatGPT is telling him government secrets from the future
    - Loneliness considered a public health epidemic according to the APA
    - FTC orders online therapy company BetterHelp to pay damages of $7.8m
    - Delta plans to use AI in ticket pricing draws fire from US lawmakers — Reuters, July 2025

    **Subscribe to our newsletter to get more stuff than just a podcast — we run events and do other work that you will definitely be interested in!**

    53min

Ratings & Reviews

5
out of 5
10 ratings
