imperfect offerings podcast

Helen Beetham

Thoughts on technology, education and related issues, now direct to your ears. Not always fully formed but never auto-completed. helenbeetham.substack.com

Episodes

  1. 07/15/2025

    Doug Belshaw on digital and AI literacies

    In this episode, I talk to Doug Belshaw, co-founder of the We Are Open cooperative, an old friend and sometime sparring partner – I’m sure he won’t mind me saying that. And if you’re here for the AI, we dig into large language models, the need for playful as well as critical spaces, and current AI literacy projects, all in the second half. In the first half we revisit some older ground, discussing the various digital literacy frameworks we’ve been involved with and whether they are still relevant. By way of Walter Ong on literacy and William Empson on ambiguity, we come around to whether AI literacy is a useful term. No, we don’t have any simple solutions, and yes, we do disagree about several things, hopefully in an interesting way. As always, there are lots of references in the show notes below. Please do like, comment, subscribe, and consider becoming a paid subscriber so I can keep working with my good friend and audio wizard Ans to improve the sound of the pod. Their contact details are also in the show notes, along with Bryan’s, who drew the ‘contexts’ image and others you’ll find on the blog.
Digital literacy stuff we mention:
* Doug’s ‘open thinkering’ blog: https://dougbelshaw.com/blog/
* Doug’s book about his digital literacy framework: https://dougbelshaw.com/essential-elements-book.pdf
* Jisc Digital Capabilities framework: https://digitalcapability.jisc.ac.uk/what-is-digital-capability/individual-digital-capabilities/our-digital-capabilities-framework/
* Knobel and Lankshear on the ‘new’ literacies: https://newlearningonline.com/literacies/chapter-2/knobel-and-lankshear-on-the-new-literacies
* Kahn and Kellner, ‘Reconstructing Techno-literacies’: https://pages.gseis.ucla.edu/faculty/kellner/essays/technoliteracy.pdf
* DigCompEdu (Digital Competence Framework for Educators): https://joint-research-centre.ec.europa.eu/digcompedu_en
* PISA 2029 Media and AI Literacy assessment: https://www.oecd.org/en/about/projects/pisa-2029-media-and-artificial-intelligence-literacy.html
* Angela Gunder and team, AI literacies: https://aiopeneducation.pubpub.org/pub/fmktz5d3/release/4
Other stuff we mention:
* danah boyd on context collapse: https://www.zephoria.org/thoughts/archives/2013/12/08/coining-context-collapse.html
* William Empson, Seven Types of Ambiguity: https://en.wikipedia.org/wiki/Seven_Types_of_Ambiguity
* Richard Rorty on dead metaphors: https://www.lrb.co.uk/the-paper/v08/n07/richard-rorty/the-contingency-of-language
* Walter Ong, Orality and Literacy: https://monoskop.org/images/d/db/Ong_Walter_J_Orality_and_Literacy_2nd_ed.pdf
* Walter Ong on secondary orality: https://en.wikipedia.org/wiki/Secondary_orality
* Richard Seymour, The Twittering Machine: https://www.versobooks.com/en-gb/products/2505-the-twittering-machine
The people who help me could help you too:
* Ans for audio: https://anshassel.net/
* Bryan at Visual Thinkery for visual thinkery: visualthinkery.com
Get full access to imperfect offerings at helenbeetham.substack.com/subscribe

    1h 3m
  2. 04/14/2025

    Talking with Katie Conrad about AI and human rights

    Some time ago I invited Katie (Kathryn) Conrad onto a discussion panel in the Generative Dialogues series, and while this covered some essential ground in AI pedagogies, I felt there was a lot more of her work to dig into. So I was delighted she agreed to come onto Imperfect Offerings for a deeper dive, particularly into her Blueprint for an AI Bill of Rights in Education. Katie is a Professor of English at the University of Kansas and co-director of the AI & Digital Literacy project, in partnership with the National Humanities Center, as well as Associate Editor of the journal Critical AI. Her Blueprint was one of the first things to come out of the AI tailspin that really spoke to me, and I increasingly think a human rights based approach is critical - not only in the negative sense that the inequities of AI threaten so many legally enshrined rights (equality, non-discrimination, freedom of thought and expression, for example) but also in a positive sense. We need something in which to ground our ideas about humanity, in opposition to definitions of ‘the human’ emerging from the AI industry as a kind of host species for ‘intelligence’, and then as an inverse or supplement or deficiency in relation to whatever ‘artificial’ intelligence is supposed to be capable of. Human rights is a way of thinking about what being human means that - for all its imperfections - starts from our common vulnerability and dependence on each other, and therefore our equality. Human rights have a long history of collective thought and action, and shared institutions that seemed robust until quite recently. So it was the connection between education and rights, and Katie’s intentions in developing the Blueprint, that I really wanted to talk about. As ever, we ranged well beyond our original brief. Here are links to some of the resources we touched on.
Links
* The Blueprint: https://criticalai.org/2023/07/17/a-blueprint-for-an-ai-bill-of-rights-for-education-kathryn-conrad/
* NORRAG Policy Insights: AI and Digital Inequities (including a chapter by Katie and Lauren Goodlad): https://resources.norrag.org/storage/documents/NllPZ3GRhnWCbiMFcG0tUv5qxOt4snLAVxpOwgsN.pdf
* Katie’s Critical Digital Literacy resources: https://docs.google.com/document/d/1TAXqYGid8sQz8v1ngTLD1qZBx2rNKHeKn9mcfWbFzRQ/
* Katie’s blog: https://kconrad.substack.com/
* Artificial Intelligence in Education: a critical view through the lens of human rights, democracy and the rule of law: https://rm.coe.int/artificial-intelligence-and-education-a-critical-view-through-the-lens/1680a886bd
* Pedagogies of Generative AI: podcast with Helen and Mark Carrigan from May last year, with Katie and others
* Critical AI journal: https://www.dukeupress.edu/critical-ai
* Marc Watkins’ Rhetorica: marcwatkins.substack.com
* Thresholds in Education: https://academyforeducationalstudies.org/journals/thresholds/
* Maha Bali’s blog on critical AI literacies: https://blog.mahabali.me/educational-technology-2/what-i-mean-when-i-say-critical-ai-literacy/
* Report to the UN General Assembly on AI in Education: https://www.ohchr.org/en/documents/thematic-reports/a79520-artificial-intelligence-education-report-special-rapporteur-right
* Harvard’s AI Pedagogy Project: creative and critical engagement with AI in education: https://aipedagogy.org/
* The Data Sovereignty CARE principles: https://www.gida-global.org/care
* Melanie Dusseau’s ‘Burn It Down’ piece for Inside Higher Ed: https://www.insidehighered.com/opinion/views/2024/11/12/burn-it-down-license-ai-resistance-opinion
* Roderic N. Crooks, ‘Access is capture’ - on how edtech reproduces racial inequality: https://www.ucpress.edu/books/access-is-capture/paper

    59 min
  3. 03/20/2025

    Remi Kalir on student writing

    Remi Kalir is Associate Director of Faculty Development and Applied Research at Duke University and Associate Director of the University’s CARADITE Centre, where many wise ideas about student writing and reading are developed. Since ChatGPT first emerged, he has been working alongside students to understand the role generative AI has, and could have, in their practice. He is also the author of two books on annotation as a way of linking student reading and writing, and empowering students in relation to academic texts. He finds annotation to be a ‘participatory act [that] marks public memory, struggles for justice, and social change’. Remi and I discuss the need for ‘brave spaces’ where the purposes of education and writing can be talked about. In Remi’s words, trusting young people to work with us means being open about our own states of ‘not knowing’, before we can find collective ways ahead.
Links
* Remi’s blog, Remi(x)Learning: https://remikalir.com/
* Centre for Applied Research and Design in Transformative Education (CARADITE): https://lile.duke.edu/caradite/
* CARADITE centre’s resources for students on learning with AI: https://ai.duke.edu/ai-resources/learn-with-ai/
* ReMarks on Power (2025) from MIT Press by Remi Kalir: https://mitpress.mit.edu/9780262551038/remarks-on-power/
* Blog connected to the book: https://www.readingremarks.com/
* Annotation (2021) from MIT Press by Remi Kalir and Antero Garcia: https://mitpress.mit.edu/9780262539920/annotation/
* The hypothes.is project and software: https://web.hypothes.is/

    55 min
  4. 03/04/2025

    Alistair Alexander on unsustainable AI

    In the first of several interviews about the impact of generative AI on planetary resources, I talk to Alistair Alexander, an academic and climate activist based in Berlin. He has all the facts about the power and water costs of the data centres being rolled out ‘for AI’. But he also asks us to think more widely about the costs of computation, and of embedding the logics of scale into every aspect of economic and social life. I found this a fascinating conversation that should make every organisation with an IT budget ask itself some hard questions. Like: is the use of generative models compatible with commitments on sustainability and climate justice? And: whoever asked for this anyway?
Links:
* Alistair’s newsletter/blog, Reclaimed Systems: https://reclaimed.systems
* Alistair’s recent piece in the Berliner Gazette, ‘After Progress’: https://berlinergazette.de/generative-ai-is-degenerating-human-ecologies-of-knowledge/
* The course Alistair mentioned: https://www.schoolofma.org/programs/p/early2025-ecologies-of-technology
* The Glass Room website: https://theglassroom.org/
* ‘Materialising the virtual’, art project mentioned by Helen: https://we-make-money-not-art.com/how-artists-and-designers-are-materialising-the-internet/
* Some more recent creative works designed to ‘make visible’ the invisible labour of AI: https://berlinergazette.de/projects/silent-works/
* A recent post by Edward Ongweso Jr detailing the capital investments being made by the ‘big four’ in building out data centres (notice that the boss of Nvidia sees inference, not training, as the major driver of demand)
* You might also like my recent post about the UK Government’s plans to turn the UK into a data park
* And this earlier post about the climate costs of AI: Saving the Planet, one cute animal video at a time
* Finally, you might want to listen (again?) to Dan McQuillan on the podcast, talking among other things about the need to ‘decompute’.

    58 min
  5. 02/08/2025

    Audrey Watters on AI state capture

    As Enterprise AI goes full state capture and Elon Musk’s freshman engineers get their hands on all the data of the US federal government, Helen and Audrey team up again to ask: was this always going to be the end game? We look at AI’s 75-year-old relationship with white nationalism, eugenics and military violence, and we ask whether AI as a ‘general’ technology could ever escape these associations. Audrey anticipates a new era of edtech investment that will drive venture capital and data architectures even deeper into public education. Meanwhile Helen muses on the AI Action Plan of the UK government, which - despite its very different vibe - is putting UK data and public services into the hands of many of the same US corporations that are bringing us Project 2025. It seems the tech news has become the news, and whatever madness that brings into the world in the coming days and weeks, you’ll want to get your sanity check here.
Limited show notes this week, but you might like to check out:
* Some recent commentary on the Elon Musk moment (sure to be out of date by now) from the UK Guardian: https://www.theguardian.com/us-news/2025/feb/08/elon-musk-doge-team-staff
* And from the Washington Post: https://www.washingtonpost.com/business/2025/02/05/elon-musk-federal-technology-takeover/
* Up-to-date takes on tech history-in-the-making are often posted here: https://futurism.com/
* Daniel Greene’s book, mentioned by Audrey: The Promise of Access: Technology, Inequality, and the Political Economy of Hope (MIT Press): https://mitpress.mit.edu/9780262542333/the-promise-of-access/
Feminist critiques of AI from the 1980s and 1990s, mentioned by Helen (most of these require a log-in):
* Alison Adam: https://journals.sagepub.com/doi/10.1177/135050689500200305
* Lynette Hunter: https://www.jstor.org/stable/10.1525/rh.1991.9.4.317
* Donna Haraway: https://www.jstor.org/stable/3178066
* Lucy Suchman (still writing brilliantly on this topic today): https://journals.sagepub.com/doi/full/10.1177/20539517231206794

    57 min
  6. 02/03/2025

    Audrey Watters on eugenics and robber barons

    As we sink further into the pit that is the Musk/Trump presidency, who better to survey the hellscape on the way down than Audrey Watters, ed tech’s sharpest and toughest commentator? If you don’t know Audrey’s work, you really should. You’ll find her Second Breakfast newsletter in the show notes, along with a link to her book, Teaching Machines, and plenty more that came up in our discussion. It’s the first imperfect x breakfast cross-over on the pod, and I hope it won’t be the last.
* Audrey’s newsletter, Second Breakfast: https://2ndbreakfast.audreywatters.com/
* Audrey’s book, Teaching Machines: https://mitpress.mit.edu/9780262546065/teaching-machines/
* Simone Browne on the origins of surveillance in the management of plantation labour: https://journals.kent.ac.uk/index.php/klr/article/view/1100
* Emily Bender, Timnit Gebru et al.’s famous paper ‘On the Dangers of Stochastic Parrots’: https://dl.acm.org/doi/10.1145/3442188.3445922
* A recent critique of this paper from a posthumanist perspective, referenced by Helen: https://posthumanism.co.uk/jp/article/view/3287
* Meredith Whittaker on Babbage, computers and plantation labour: https://logicmag.io/supa-dupa-skies/origin-stories-plantations-computers-and-industrial-control/
* Reid Hoffman, ‘AI will empower humanity’ in the NYT, referenced by Audrey: https://www.nytimes.com/2025/01/25/opinion/ai-chatgpt-empower-bot.html (the article is paywalled, but there is an interview with similar takes here: https://techcrunch.com/2025/01/26/why-reid-hoffman-feels-optimistic-about-our-ai-future/)
* A recent Guardian UK article on the ‘PayPal Mafia’: https://www.theguardian.com/technology/2025/jan/26/elon-musk-peter-thiel-apartheid-south-africa
* Peter Thiel’s argument that freedom and democracy are incompatible, referenced by Audrey: https://www.cato-unbound.org/2009/04/13/peter-thiel/education-libertarian/
* This is also referenced by Curtis Yarvin and Nick Land in support of their Dark Enlightenment neo-reactionary movement: https://en.wikipedia.org/wiki/Dark_Enlightenment
* Links between Palantir (Peter Thiel’s company) and the US military: https://www.palantir.com/offerings/defense/air-space/
* Helen’s original Substack post on Faculty AI (a new one follows shortly): https://helenbeetham.substack.com/i/139080460/safer-ai-round-two
* AI Snake Oil, the blog of the book by Arvind Narayanan and Sayash Kapoor, as discussed by Audrey and Helen: https://www.aisnakeoil.com/

    1h 4m
  7. 01/22/2025

    Eamon Costello on human wisdom

    In this episode I talk with Eamon Costello, writer and thinker and educator, editor of a special issue on Metaphors of AI, and host of an upcoming symposium, Education After the Algorithm - where he and I will be speaking in the real. Here we dive into topics from metaphor to metadata, from the AI business case to its increasingly compulsive interfaces, and from the wisdom of story to why AI tries so hard to be sexy. I hope you enjoy the twists and turns as much as I did. Please like, subscribe and share.
Links
* Eamon’s LinkedIn profile: https://www.linkedin.com/in/eamoncostello/
* Call for Papers, JIME special issue, Metaphors of AI: https://account.jime.open.ac.uk/index.php/up-j-jime/libraryFiles/downloadPublic/8
* Education After the Algorithm: symposium at Dublin City University: https://www.hackthiscourse.com/symposium/
* Eamon’s co-authored article ‘Speculative Practicescapes of Learning Design and Dreaming’ (2024): https://link.springer.com/article/10.1007/s42438-024-00465-5
* Eamon’s co-authored article ‘Information and Media Literacy in an Age of AI’ (2023): https://www.mdpi.com/2227-7102/13/9/906
* Eamon’s ‘AI is destroying education’ post (back in 2023): https://www.linkedin.com/pulse/ai-destroying-education-we-have-one-chance-stop-eamon-costello-hl7xe/
* Emily Bender’s talk ‘Synthetic text extruding machines’ (2024), referenced by Eamon: https://linguistics.ucla.edu/event/colloquium-talk-emily-bender/
* Helen’s ‘provocation: the unconscious is structured like a language’, referenced by Eamon
* Liz Jackson’s article ‘The Manliness of Artificial Intelligence’ (2024), referenced by Eamon: https://www.tandfonline.com/doi/full/10.1080/00131857.2024.2409739
* Althusser’s concept of ‘interpellation’, referenced by Helen: https://en.wikipedia.org/wiki/Interpellation_(philosophy)
* Dirty Work (2023) by Eyal Press, book referenced by Eamon: https://www.bloomsbury.com/uk/dirty-work-9781801107235/
* Ghost Work (2019) by Mary L. Gray and Siddharth Suri, referenced by Helen: https://ghostwork.info/
* Ruha Benjamin’s article on eugenics, referenced by Eamon: https://lareviewofbooks.org/article/the-new-artificial-intelligentsia/

    53 min
  8. 01/15/2025

    Catherine Cronin and Laura Czerniewicz on HE for good

    In this episode I meet the editors of Higher Education for Good to discuss how generative AI may be changing the prospects for an equitable and socially just system of higher education. I ask how they were able to bring an ethic of care to the work of editing, in an academic publishing landscape that values speed, scale and reach. We look at the particular challenges of producing an audio version of the book, and explore the politics of the different options, from AI voices to personal voice-prints to reading aloud. This turns out to be a powerful lens for viewing the prospects of alternative technological futures. While Cath and Laura don’t shy away from the inequities of big tech infrastructures, I always come away from a conversation with them feeling hopeful and enriched.
Links
* Catherine’s home page: https://catherinecronin.net/
* Laura’s blog: https://czernie.weebly.com/
* Higher Education for Good: Teaching and Learning Futures. The book is freely and openly available to read or download from the publisher, OpenBook: https://www.openbookpublishers.com/books/10.11647/obp.0363
* Feminist special issue of Learning, Media and Technology journal (2022), the open version from FemEdTech: https://femedtech.net/special-issue-of-learning-media-technology-feminist-perspectives-on-learning-media-and-educational-technology/
* Helen’s review of HE4G in Postdigital Science and Education (2024), ‘Where is the University worth fighting for?’: https://link.springer.com/article/10.1007/s42438-024-00511-2
* Mark Fisher’s Capitalist Realism (referenced by Helen): https://en.wikipedia.org/wiki/Capitalist_Realism
* Walter Ong’s Orality and Literacy: the Technologizing of the Word (referenced by Catherine and Helen): https://monoskop.org/images/d/db/Ong_Walter_J_Orality_and_Literacy_2nd_ed.pdf
* Eddie Glaude, Begin Again (referenced by Catherine): https://en.wikipedia.org/wiki/Begin_Again_(book)

    57 min
  9. 01/09/2025

    Dan McQuillan on decomputing

    In this episode I talk to Dan McQuillan, Lecturer in Creative Computing at Goldsmiths, and author of Resisting AI: An Anti-Fascist Approach to Artificial Intelligence. I read this in 2022, as soon as it was published, and it remains for me one of the most vivid, provocative and relevant critiques of ‘artificial intelligence’ as a project. Here, Dan speaks about the continuities between today’s machine learning models and earlier projects of categorising and disciplining people. We discuss how education is implicated in these architectures and how educators might resist. Dan has been a star of podcasts with tens of thousands of listeners, so I am deeply grateful that he made time to talk to me on this first episode of Imperfect Offerings in sound.
Links
* Dan’s home page: https://www.gold.ac.uk/computing/people/d-mcquillan/
* Resisting AI: An Anti-Fascist Approach to Artificial Intelligence, from Bristol University Press: https://bristoluniversitypress.co.uk/resisting-ai
* Dan’s ‘other’ podcasts on Resisting AI: https://www.transformingsociety.co.uk/2023/07/17/the-extensive-and-unconventional-reach-of-dan-mcquillans-resisting-ai/
* On Arendt’s diagnosis of ‘thoughtlessness’ as a feature and an enabler of fascism: https://danmcquillan.org/arendtandalgorithms.html
* On AI colonialism and the likely impacts on the Global South: https://foreignpolicy.com/2024/12/17/ai-global-south-inequality/ or https://www.technologyreview.com/supertopic/ai-colonialism-supertopic/
* On algorithmic states of exception: https://research.gold.ac.uk/id/eprint/11079/
* Wikipedia article on the Situationists: https://en.wikipedia.org/wiki/Situationist_International
* And on Guy Debord’s Society of the Spectacle: https://en.wikipedia.org/wiki/The_Society_of_the_Spectacle - “All that was once directly lived has become mere representation”

    1h 19m

About

Thoughts on technology, education and related issues, now direct to your ears. Not always fully formed but never auto-completed. helenbeetham.substack.com