Agents Of Tech

WebsEdge

*Where big questions meet bold ideas* Agents of Tech is a video podcast exploring the biggest questions of our time—featuring bold thinkers and transformative ideas driving change. Perfect for the curious, the thoughtful, and anyone invested in what’s next for our planet. Hosted by Stephen Horn, a former BBC producer turned entrepreneur and CEO; Autria Godfrey, an Emmy Award-winning journalist; and Laila Rizvi, a neuroscience and tech researcher, the show features conversations with trailblazers reshaping the scientific frontier.

  1. 4H AGO

    AI Hype vs. Reality with Prof. Emily Bender, Author of “The AI Con”

    ChatGPT has 800 million users. OpenAI is valued at $500 billion. But our guest today says the whole thing is a scam. Professor Emily Bender, author of “The AI Con” and Director of the Computational Linguistics Laboratory at the University of Washington, argues that “Artificial Intelligence” is just a broad marketing term, a label applied to unrelated technologies that creates a false sense of an inevitable, God-like entity. Is she a prophet... or is she just wrong? We put our questions to Professor Bender in the episode, but if you’ve got questions for us, throw them into the comments below!

    Hosts Autria Godfrey and Laila Rizvi start by asking Emily whether AI is intelligent enough to replace humans. Emily says studies indicating that AI models are cheating, blackmailing, and playing dumb when they know they’re being tested don’t stand up. She calls them elaborate interactive fiction, and notes that Anthropic’s “research” isn’t peer reviewed: basically, no more than blog posts. Because LLM training data includes language that looks like introspection, these systems can output such language even though they have no capacity to actually engage in introspection. Emily suggests that replacing interns and entry-level workers with AI short-circuits the process of training future leaders. She describes how AI systems exploit workers in the Global South, with difficult psychological conditions and compensation so low it creates, as Autria suggests, the next generation of sweatshops. When it comes to AI 2027 and whether AI poses an existential threat, Emily says it’s just a case of “Big Tech Fan Fiction” from the same shared world as the thinking of Nick Bostrom and the Effective Altruist movement. What about claims by Anthropic that Claude Code wrote the code for Claude Cowork? Emily doubts those claims, explaining that those systems have no agency and require input to do anything. And although Emily doesn’t buy into claims of near-term existential risk, she argues AI is creating labor and environmental harm at local levels if not global ones, often with a lack of transparency. What about arguments like those of Nobel Prize winner Geoffrey Hinton that LLMs understand meaning and can mirror how humans operate? Emily says that given his background and specific knowledge of how these systems are built, he “really ought to know better.” She explains that unless we have access to the training data actually used on these systems, we can’t know whether they are genuinely understanding concepts or were simply trained on them explicitly. After Professor Bender leaves, Autria and Laila discuss whether her dismissal of some of the data Laila presented is justified.

    CHAPTERS:
    00:00 - Is AI Hype a Scam?
    01:33 - AI: Existential Risk or Theater?
    02:02 - Dario Amodei and Demis Hassabis at Davos: 1-2 Years Until AI Is a Risk
    02:50 - Revolution or Con?
    03:07 - How Intelligent Is AI, Really? We Ask Emily Bender
    03:30 - Is AI Intelligent Enough to Replace Humans? Emily Bender Says No!
    04:24 - “Cheating” Models and False Agency
    06:32 - Will AI Take Our Jobs or Just Make Them Crappier?
    06:43 - AI and the Career Ladder Problem
    07:54 - Are AI Systems Exploiting Data Workers in the Global South?
    08:18 - The Hidden Human Labor of AI
    10:47 - AI 2027 and Big Tech Fan Fiction?
    12:29 - Are LLMs like Claude Really Writing Their Own Code?
    13:45 - Does AI Code Itself?
    14:41 - Does AI Need to Be All-Powerful to Pose an Existential Risk?
    15:44 - Environmental and Labor Harms
    16:35 - Is AI Power and Water Consumption as Bad as Some People Claim?
    17:41 - If AI’s Importance to Humanity Is Overhyped, Why Do So Many Believe It?
    17:52 - Why the Hype Worked
    18:48 - Can Neural Networks Mirror Human Neurology?
    21:02 - Geoffrey Hinton and “Understanding”
    22:07 - What Is AI Actually Good For?
    23:23 - Questions for Professor Bender
    23:36 - Is AGI Inevitable?
    24:08 - Where Do Humans Draw the Line?
    25:28 - After the Interview: Who’s Right?
    27:34 - What Do You Think: Doomsday or Hype?

    28 min
  2. JAN 15

    AI, Big Tech & Global Power: Oxford University Dr. Jennifer Cassidy on Diplomacy

    Diplomacy used to be about treaties and territory – now it seems it's more about data, algorithms, and the companies that control them. At Donald Trump’s inauguration, Silicon Valley’s most powerful figures stood steps away, a sign that Big Tech now sits at the centre of global power. Tech companies pervade everyday life and wield power once reserved for nation states. Are the people in charge of global power those elected to office, or those appointed to positions within those companies? To explore how AI is reshaping diplomacy, from negotiation and representation to influence operations and disinformation, hosts Autria Godfrey, Stephen Horn, and Laila Rizvi interview Dr. Jennifer Cassidy (AI & Diplomacy, University of Oxford) about:

    - How AI is transforming diplomacy’s core functions
    - Why Big Tech now rivals governments in geopolitical influence
    - The rise of “digital sovereigns” and private power
    - When former political leaders move into tech, where accountability goes
    - Democratic versus authoritarian uses of AI
    - Why global AI governance is still largely non-binding

    For Dr. Cassidy, diplomacy rests on three timeless pillars: communication, representation, and negotiation. AI “is not demolishing these pillars, but quietly rewiring the architecture that holds them together… Predictive analysis now allows ministries to read the global mood” almost in real time. The United Nations and the World Bank use AI models that monitor food prices, rainfall patterns, and social media data to anticipate instability “up to 6 weeks before that instability might actually break out.” NATO employs machine learning to map Russian disinformation. “What we’re seeing here is the move from reactive diplomacy… to anticipatory diplomacy.” One of the most pressing questions is whose AI is being used to create “sovereign diplomatic AI systems.” France and the EU train their AI on Mistral, a French company; the US relies on models from OpenAI and Anthropic; and Microsoft’s Azure Cloud hosts data for NATO and national governments. These companies have become “digital sovereigns” – private actors who control the three levers of power once defined by the state: information, infrastructure, and interpretation. Former politicians like Nick Clegg (Meta) and Rishi Sunak (Microsoft) represent a “circuit of influence” where “experience, access, and authority are just flowing continuously between capitals and campuses in Silicon Valley.” While “democracies do need experienced voices helping to steer the tech transition,” we must ensure that “when the expertise moves, accountability moves with it.” What about bad actors using AI? Jennifer says we’ve seen this in elections in the US and around the world. In China, “predictive policing algorithms are tracking not just where crime might occur, but who might commit it… Authoritarian regimes are combining facial recognition, travel data, and digital behaviour into vast surveillance scores.” It is “digital authoritarianism in its most refined form… controlled by prediction, rather than force.” Dr. Cassidy concludes, “We have a very, very, very long way to go regarding the governance and structure of, and frameworks for AI… a difficult task… that has to be done.”

    What’s your take? Share your thoughts in the comments and subscribe for more on AI, geopolitics and global power.

    CHAPTERS
    00:00 Tech, Trump and the New Global Power Game
    01:26 Do Tech Giants Now Run Foreign Policy?
    04:00 How AI Is Reshaping Diplomacy
    06:37 Why Nations Are Building Their Own AI Models
    09:18 Have Big Tech Companies Become Sovereigns?
    12:33 From Prime Minister to Big Tech: The Revolving Door
    16:46 AI Power Politics Beyond the West
    19:43 AI for Good or Digital Authoritarianism?
    22:09 Who Sets the Rules for AI?
    24:48 Closing Thoughts with Dr. Jennifer Cassidy
    25:05 Debrief: Authoritarian Drift and Regulation Fights
    27:13 AI Ministers, Echo Chambers and What Comes Next

    29 min
  3. 11/26/2025

    Ellison’s $2.5bn Bet - Can Santa Ono turn Oxford into Europe’s Silicon Valley?

    Larry Ellison built Oracle into a cornerstone of the modern tech economy. Now he is making a $2.5 billion bet on Oxford, backing the Ellison Institute of Technology at Oxford to fuse AI, medicine and sustainability in one global hub. In this episode of Agents of Tech, Autria Godfrey, Stephen Horn and Laila Rizvi sit down in Oxford with Professor Santa Ono, Global President of the Ellison Institute of Technology (EIT), to ask a simple question: Can Oxford really become Europe’s Silicon Valley?

    We explore:
    - Why Ellison chose Oxford and the UK over Chicago or California
    - How EIT plans to recruit 7,000 world-class scientists and double Oxford’s research base
    - The model of science-led capitalism and why commercialization is central to Ellison’s vision
    - The UK’s unique advantage in health data and biobanks (NHS data, UK Biobank, Protein Data Bank)
    - How AI, machine learning and robotics will change drug discovery, pandemics and healthcare
    - The relationship between EIT and Oracle, and how independent the institute really is
    - Parallels and contrasts with the Bill & Melinda Gates Foundation model of philanthropy
    - What this means for the UK’s role between the US and China in the global innovation race

    Professor Ono explains why he believes the UK is now one of the best places in the world to build AI-driven science: from single-payer health data to a fast-growing ecosystem of serial entrepreneurs. He also addresses questions about data privacy, ethics, bioterrorism risks and public concerns about American tech money in historic British institutions.

    If you care about:
    - How AI and health data will reshape medicine
    - Whether Oxford and Cambridge can anchor Europe’s answer to Silicon Valley
    - What it really takes to build a global science and technology campus at scale
    …this conversation is for you.

    Tell us in the comments: Do you think Ellison’s Oxford gamble is a bold new model for global science, or another moonshot that will be hard to scale?

    CHAPTERS
    00:00 Larry Ellison’s $2.5B bet on Oxford
    00:35 Agents of Tech intro
    01:22 Why Oxford?
    02:45 Interview begins: Santa Ono
    03:01 Ellison’s vision for EIT
    05:11 Scaling talent and entrepreneurship
    05:53 Science capitalism vs traditional philanthropy
    07:52 Why base EIT in the UK
    10:38 NHS data, privacy and AI concerns
    12:55 AI’s impact on jobs and drug discovery
    15:12 Commercialisation and scientific breakthroughs
    17:38 Building a new global research hub
    20:26 AI geopolitics and the UK’s role
    21:03 EIT as a global model
    22:50 Interview ends
    23:01 Post-interview reflections
    24:41 Closing and invitation to Larry Ellison

    26 min
  4. 10/17/2025

    Will the U.S. LOSE the AI Race to China? – Helen Toner, ex OpenAI Board Member

    Is the U.S. LOSING the AI race to China? China and the U.S. are neck and neck in the AI race for global dominance. Former OpenAI board member Helen Toner (now at Georgetown’s CSET) joins us in Washington, D.C. to break down China vs U.S. strategies—open-source diffusion vs Big Tech global dominance—and what “winning” actually means. Helen has recently spent time in China and works at the center of U.S. AI policy, offering a rare inside view of both ecosystems and who’s truly ahead.

    Helen explains:
    - Who’s ahead right now and how to measure it (frontier AI vs adoption/diffusion)
    - Open-source vs closed: DeepSeek, Qwen, Kimi, Gemma, Llama vs OpenAI, Anthropic, Google
    - Compute & chips: NVIDIA dependence, export controls, and why compute concentration matters
    - AGI timelines: whether “AI 2027” holds up and why short timelines cooled after GPT-5
    - “AI+” strategy: applying AI to manufacturing, healthcare, and finance vs pure frontier bragging rights
    - What governments should do now: transparency, auditing, AI literacy, and measurement science

    Who do you think is winning and WHY – China or the U.S.? Drop one evidence-backed reason (links welcome). We’ll pin the best reply. Don’t forget to like and subscribe for more unfiltered conversations on AI, tech, and society.

    Chapters
    00:00 – Two strategies, one AI race
    01:00 – Open-source China vs Big-Tech USA
    03:37 – Not one race: choose your finish line
    04:04 – Who’s actually open? DeepSeek, Qwen/Kimi, Llama, Gemma, GPT-OSS
    06:26 – Frontier bragging rights vs real-world adoption
    07:46 – China’s “AI Plus” play (AI + industry)
    10:06 – Is the US still ahead at the frontier?
    12:04 – GPT-5 reality check & AGI timelines
    20:58 – Compute decides: chips, export controls, auto-ML engineers
    23:04 – What we need now: transparency, audits, AI literacy
    28:02 – Standards in practice: de facto beats de jure
    30:56 – Next 5 years: closed peaks, open bow wave
    37:55 – Final take: which path wins?

    #OpenAI #HelenToner #ai #GPT5 #OpenSource #podcast #China #DeepSeek

    38 min
  5. 09/24/2025

    Is AI a Bubble? Gary Marcus on GPT-5 Hype and the Future of AI

    Has OpenAI made the WRONG bet? Gary Marcus argues that OpenAI is taking the wrong approach - and the AI bubble is real. We walk through why GPT-5 underwhelmed, where the scaling paradigm breaks, and what might really get us to AGI.

    Gary explains:
    – Why the economics don’t add up for current AI
    – Why GPT-5 isn’t as good as expected
    – The core LLM limitations and why the scaling paradigm fails
    – Why AI won’t take your job in the near term
    – A practical path to AGI (hybrid / neuro-symbolic, world models)

    We also debate whether investors are over- or under-valuing AI, what productivity gains are real, and how long it will take before AI truly replaces jobs.

    References discussed in this episode:
    – Gary Marcus, The Algebraic Mind: Integrating Connectionism and Cognitive Science (2001)
    – Gary Marcus, “Deep Learning Is Hitting a Wall” (Nautilus, 2022)
    – Gary Marcus, The Next Decade in AI (arXiv, 2020)
    – Mike Dash, Tulipomania

    Do you think Gary is right—or are we just getting started? Drop one strong piece of evidence either way (links welcome). We’ll pin the best reply. Don’t forget to like and subscribe for more unfiltered conversations on AI, tech, and society.

    Chapters
    00:00 – Is AI in a bubble?
    00:30 – AI hype vs. reality
    01:10 – GPT-5 launch: disappointment or progress?
    03:50 – Can AI be both a revolution and a bubble?
    05:40 – Productivity gains and investment hype
    08:00 – Gary Marcus joins the conversation
    09:30 – Why Gary calls himself a skeptic
    11:15 – GPT-5 and the limits of scaling
    14:00 – Financial reality of large language models
    17:20 – “Deep Learning Is Hitting a Wall”
    19:00 – Why hallucinations won’t go away
    21:00 – Neuro-symbolic AI explained
    24:00 – Building world models for AI
    27:00 – Are AI valuations sustainable?
    29:30 – Lessons from Tulipomania
    31:30 – Will AI take all our jobs?
    36:00 – What comes next for AI research
    38:30 – Final thoughts

    #OpenAI #GaryMarcus #ai #GPT5 #scaling #podcast

    32 min

