Agents Of Tech

WebsEdge

*Where big questions meet bold ideas.* Agents of Tech is a video podcast exploring the biggest questions of our time—featuring bold thinkers and transformative ideas driving change. Perfect for the curious, the thoughtful, and anyone invested in what’s next for our planet. Hosted by Stephen Horn, a former BBC producer turned entrepreneur and CEO; Autria Godfrey, an Emmy Award-winning journalist; and Laila Rizvi, a neuroscience and tech researcher, the show features conversations with trailblazers reshaping the scientific frontier.

  1. 16 APR

    Wikipedia, Media Bias and AI with Jimmy Wales

    As AI gets more capable, will it make public information more trustworthy, or less? Does news media have to be biased to be financially successful? Is AI a threat to Wikipedia, or will we always rely on the human component when seeking trustworthy information? These are timely questions about AI, information, technology and trust that affect us all – which is why Stephen Horn, Autria Godfrey and Laila Rizvi are interviewing the founder of Wikipedia, Jimmy Wales.

    We start with a discussion of where we get our information and how to build trust amid the changing economics of news media and AI. With Wikipedia celebrating its 25th anniversary, Autria asks Jimmy how it overcame the public’s initial distrust and what he thinks about the current cynicism towards AI. He admits that “There is, you know, a cycle that happens…when the quality is low and something's very new, then people obviously are skeptical and quite reasonably so.” Laila asks if we’re close to AI superintelligence, and Jimmy explains that he’s a tech geek but not an expert in AI. The people he listens to, his friends Gary Marcus and Demis Hassabis, think we need some fundamental breakthroughs first. Of course, he says, they may be wrong, and things are moving pretty quickly. “It’s a classic sort of thing in tech, it’s an old saying: People tend to overestimate the short run and underestimate the long run.”

    The conversation turns to the value of neutrality and unbiased information. Laila suggests that people are happy with the ease of the answers they get from AI or social media and don’t have the luxury of researching every issue. Jimmy offers an “imperfect” analogy to junk food: “Junk food’s easy. Tastes really good right now… So I don't buy [crisps]. I don't like to have them around because… I actually do have a higher order sort of brain.” Stephen points out that the media world seems to be moving beyond providing multiple perspectives on an issue, and that there is no business model for neutrality. Jimmy disagrees, citing Wikipedia’s popularity – greater than that of the top 10 newspapers combined – and suggests that, when it comes to neutrality and fighting bias, “We have to fight for it.”

    In our rapid-fire segment, Autria asks where people will finally draw the line when it comes to AI. Jimmy cites OpenClaw and his feeling that people will draw the line between using AI to get things done and the improper use of personal information by that AI. Laila asks Jimmy to name something universally accepted in his field that he disagrees with. His answer: “That news media has to be biased to be financially successful,” although he admits, “I'm a minority viewpoint there.” Finally, Stephen asks what Jimmy sees in the future that we’re not talking about today. Jimmy says we’re focused a lot on AI and LLMs, but there are other things going on – advances in biology, drug discovery, driverless cars and other positive, transformative developments – that deserve more attention. “I think there's a lot more that's going to come that's going to be really pretty amazing.”

    CHAPTERS:
    00:00 - Introduction
    01:00 - Is Trust in AI, Tech and Media in Short Supply?
    04:10 - Early Skepticism about Wikipedia and AI
    05:34 - When and Where To Use LLMs and AI
    06:40 - Jimmy Wales on AI: Pretty Terrible at Facts but Kind of Creative
    07:17 - Can AI Work With the Right Framework?
    10:04 - Will AI Replace Wikipedia?
    13:22 - The Seven Rules of Trust - Neutrality and Bias
    15:18 - People Tend to Trust Individuals Over Abstract Entities
    16:22 - Echo Chambers, Convenience and Trust
    20:43 - Media Literacy and the Economics of Trust
    22:23 - Is There a Media Business Model for Neutrality?
    24:19 - Drawing the Line Between Personal Info and Getting Things Done
    25:14 - News Media Doesn’t Have to Be Biased to Be Financially Successful
    25:38 - Bright Future for AI in Biology, Drug Discovery, Driverless Cars, More
    27:11 - Can AI and Wikipedia Coexist?

    30 min
  2. 19 MAR

    Is Sovereign AI Possible? We Ask Ryan Wain of the Tony Blair Institute

    NOTE: This episode was recorded before the recent conflict involving Iran began.

    The US and China control over 90 percent of the world's AI computing power. In practice, that means most countries rely on American or Chinese firms, chips, and rules to access the most advanced systems ever built. Some call it partnership. Others call it dependency. Our guest today, Ryan Wain, Senior Director at the Tony Blair Institute for Global Change, advises governments on how to navigate this. His answer? Stop trying to compete. In fact, he calls self-sufficiency a "vanity project." But here's the question: if you're not one of the two countries holding the keys, what leverage do you actually have? And if you are America or China, should you share this power at all?

    Hosts Autria Godfrey and Laila Rizvi start off with the TBI report, which argues that AI self-sovereignty is unrealistic for most countries. Autria asks if AI power is already so entrenched that we’ll just see a widening divide between the haves and the have-nots. Ryan says that the US and China have spent so much money building frontier models that building their own frontier AI is now an unrealistic strategy for other countries. Instead, they need to figure out how to take part in the AI revolution by leveraging their strengths and opportunities, like Kazakhstan’s plan to train a million people to become AI engineers, or Kenya, which has used its geothermal energy as leverage to build partnerships with tech companies that bring AI to the country. Ryan says, "Control what you can, steer where you have leverage, and then depend on those partners for the rest."

    Could geopolitical tensions bleed over into AI access, so that even allies like the UK end up locked out of US-based AI? Ryan argues that long before this happens, countries need to avoid getting locked into one model. He points out that “Sovereignty is a choice and we have levers that we can pull” and that the UK and Europe are looking at multiple models, including open source ones. What about concerns that AI can be used to create more authoritarian states, as we’re seeing in China and the US? Political leadership needs to understand the importance of harnessing technology and make the case that it can provide greater privacy protection, safety from crime, and even security during wartime. He points out that Estonia has digital ID and yet ranks as having the second-freest online environment, after Iceland.

    Should the US be letting China get its chips? Is AI more like the development of 5G or more like the nuclear arms race? Neither, says Ryan. Sovereign frontier models don’t guarantee national prosperity or security. Advantage comes from a robust and diverse set of tech companies like America has. The path involves proper industrial strategy, communicating with the public, addressing energy needs and data centers, training, and supporting founders and leaders to build next-gen AI companies that transform everything from healthcare and public services to national security.

    CHAPTERS:
    00:00 - Power, Partnership and Dependency
    01:22 - Is this a Catch 22?
    02:30 - What Does AI Sovereignty Really Mean?
    03:07 - Is It Better To Build Your Own Frontier Model?
    04:02 - What If the US Pulls the Plug?
    04:58 - Frontier AI Models Are Impossible for Many Countries
    05:34 - What Are The 3 Dimensions of AI Sovereignty?
    07:14 - You Can’t Be Dependent on One AI Model
    07:59 - Sovereignty Is a Choice and We Have Levers that We Can Pull
    08:12 - Europe Embraces Open Source More Readily than US or China
    08:56 - Smaller Nations Should Leverage Their Strengths with AI
    10:23 - Digital ID, Facial Recognition, Surveillance
    15:36 - Should US Give AI Chips to China?
    17:36 - Europe Needs More Global Tech Startups
    18:40 - The World Is Interconnected
    19:54 - Is AI Sovereignty a Fantasy?
    20:39 - What Advantages Do Countries Other Than China and the US Have?
    22:25 - Energy Costs, Talent and Industrial Strategy
    26:30 - Is True AI Sovereignty Even Possible?

    30 min
  3. 5 MAR

    AI Superintelligence: Are We Racing Toward Extinction?

    Will AI destroy humanity? Most people think that's science fiction. The people actually building it aren't so sure. Geoffrey Hinton – the Godfather of AI – says there's a 10 to 20 percent chance AI wipes us out. OpenAI’s Sam Altman told Congress his own technology could “cause significant harm to the world.” Our guest, Malo Bourgon, CEO of the Machine Intelligence Research Institute (MIRI), has been warning about this for two decades. He says a machine doesn't need to be sentient to become a global risk. And today's safety measures? Nowhere near enough. Can we build an off switch for a machine that's smarter than us?

    Malo tells hosts Autria Godfrey and Laila Rizvi that he thinks we’re on the path to building systems that are radically smarter than us, racing ever faster to build systems we don’t understand and don’t know how to control, which “could end up with none of us around to see the future we could have built instead.” While he’s not worried that current AI models are existentially dangerous, Malo agrees that some emergent behaviors, like the deception we’re seeing in smarter general systems, suggest that if we continue to scale towards superintelligence, they’ll only have a bigger impact.

    The trio discuss whether it would even be possible to build in a safeguard, like an off switch. Malo considers this a losing battle if we wait too long; even then, it would be better to build a broader system that keeps us from getting into that position in the first place. Some harms and disruptions from AI are already happening. Malo suggests that we need to find some way to have coordinated action globally, even among adversaries. When it comes to existential risk, he draws a comparison to the nuclear arms race and the Cold War, and how, thanks to treaties and agreements and some luck along the way, we’re still here.

    Autria asks about specific ways AI could lead to a catastrophic, apocalyptic ending, including disruptions to the food chain, the creation of bioweapons, mass unemployment and faltering economies. Malo says even if we solve those, there remains the core danger, which comes from a misalignment between the values and goals of the superintelligence we build and our own. It doesn’t have to be evil; it just has to “care about weird, other things that aren’t the things we care about and it wants to pursue those things” with an indifference to our existence.

    So what would convince Malo that AI is safe? Changing how we create AI systems, he says, from growing them “in a very brute force way” the way we do now to crafting them with a more principled sense of what we’re trying to do to make them safe. Finally, Malo suggests that the claim that “the people running these companies are, you know, bad, immoral people” isn’t quite right. He says that most of “these people aren't actually villains, they're normal people who are kind of trying to do the right thing but they're in a really bad situation. But I also think they're also doing a bunch of bad stuff.”

    What about you? We want to know what you think. How concerned should we be about a doomsday scenario? How concerned are you? Tell us in the comments below.

    CHAPTERS:
    00:00 - Profits, Power, and AI Risk
    02:36 - Are We Racing Toward Extinction?
    03:33 - Are Big Tech’s Predictions of AGI Accurate?
    04:50 - Are AI Models Exhibiting Signs of Understanding?
    05:01 - Deception in Today’s Models
    07:51 - Can’t We Just Build an AI Off-Switch?
    10:10 - What Can We Actually Do?
    11:06 - Harms Are Already Here – More Over The Horizon
    12:00 - Global Race to AI Superintelligence
    13:00 - Parallels to Nuclear Arms Race
    15:14 - Pathways to Catastrophic Risk
    16:14 - The Boss Fight – Misalignment As The Core Danger
    18:51 - What Would “Safe” Look Like?
    19:40 - Rapid Fire Questions for Malo Bourgon
    19:44 - What Does Your Field Get Wrong?
    20:53 - Where Do People Draw The Line On AI In Their Lives?
    22:24 - Post-Interview Reflections
    24:44 - What Do You Think: Doomsday Scenario or Not?

    25 min
  4. 19 FEB

    AI Hype vs. Reality with Prof. Emily Bender, Author of “The AI Con”

    ChatGPT has 800 million users. OpenAI is valued at $500 billion. But our guest today says the whole thing is a scam. Professor Emily Bender, author of “The AI Con” and Director of the Computational Linguistics Laboratory at the University of Washington, argues that Artificial Intelligence is just a broad marketing term and “AI” is just a label for unrelated tech – creating a false sense of an inevitable, God-like entity. Is she a prophet... or is she just wrong? We’ll ask our questions for Professor Bender in the episode, but if you’ve got questions for us, throw them into the comments below!

    Hosts Autria Godfrey and Laila Rizvi start by asking Emily whether AI is intelligent enough to replace humans. Emily says studies indicating that AI models are cheating, blackmailing, and playing dumb when they know they’re being tested don’t stand up. She says it’s elaborate interactive fiction, and that Anthropic’s “research” isn’t peer reviewed – basically, no more than blog posts. LLM training data includes language that looks like introspection, so systems can output such language even though they have no capacity to actually engage in introspection.

    Emily suggests that replacing interns and entry-level workers with AI short-circuits the process of training future leaders. She describes how AI systems exploit the Global South, with difficult psychological conditions and compensation so low it creates, as Autria suggests, the next generation of sweatshops. When it comes to AI 2027 and whether AI poses an existential threat, Emily says it’s just a case of “Big Tech Fan Fiction” from the same shared world as the thinking of Nick Bostrom and the Effective Altruist movement.

    What about claims by Anthropic that Claude Code wrote the code for Claude Cowork? Emily doubts those claims, explaining that those systems have no agency and require input to do something. And although Emily doesn’t buy into claims of near-term existential risk, AI is creating labor and environmental harm on local levels if not global ones, often with a lack of transparency.

    What about arguments, like those of Nobel Prize winner Geoffrey Hinton, that LLMs understand meaning and can mirror how humans operate? Emily says that given his background and specific knowledge of how these systems are built, he “really ought to know better.” She explains that unless we have access to the training data actually used on these systems, we can’t know that they are actually understanding concepts without explicit training. After Professor Bender leaves, Autria and Laila discuss whether her dismissal of some of the data Laila presented is warranted.

    CHAPTERS:
    00:00 - Is AI Hype a Scam?
    01:33 - AI: Existential Risk or Theater?
    02:02 - Dario Amodei and Demis Hassabis At Davos: 1-2 Years Until AI Is a Risk
    02:50 - Revolution or Con?
    03:07 - How Intelligent Is AI, Really? We Ask Emily Bender
    03:30 - Is AI Intelligent Enough to Replace Humans? Emily Bender Says No!
    04:24 - “Cheating” Models and False Agency
    06:32 - Will AI Take Our Jobs or Just Make Them Crappier?
    06:43 - AI and the Career Ladder Problem
    07:54 - Are AI Systems Exploiting Data Workers in the Global South?
    08:18 - The Hidden Human Labor of AI
    10:47 - AI 2027 and Big Tech Fan Fiction?
    12:29 - Are LLMs like Claude Really Writing Their Own Code?
    13:45 - Does AI Code Itself?
    14:41 - Does AI Need to Be All-Powerful to Pose an Existential Risk?
    15:44 - Environmental and Labor Harms
    16:35 - Is AI Power and Water Consumption As Bad As Some People Claim?
    17:41 - If AI’s Importance to Humanity Is Overhyped, Why Do So Many Believe It?
    17:52 - Why the Hype Worked
    18:48 - Can Neural Networks Mirror Human Neurology?
    21:02 - Geoffrey Hinton and “Understanding”
    22:07 - What Is AI Actually Good For?
    23:23 - Questions for Professor Bender
    23:36 - Is AGI Inevitable?
    24:08 - Where Do Humans Draw the Line?
    25:28 - After the Interview: Who’s Right?
    27:34 - What Do You Think: Doomsday or Hype?

    28 min
  5. 15 JAN

    AI, Big Tech & Global Power: Oxford University’s Dr. Jennifer Cassidy on Diplomacy

    Diplomacy used to be about treaties and territory – now it seems it's more about data, algorithms, and the companies that control them. At Donald Trump’s inauguration, Silicon Valley’s most powerful figures stood steps away, a sign that Big Tech now sits at the centre of global power. Tech companies pervade everyday life and wield power once reserved for nation states. Are the people in charge of global power those elected to office, or those appointed to positions within those companies?

    To explore how AI is reshaping diplomacy, from negotiation and representation to influence operations and disinformation, hosts Autria Godfrey, Stephen Horn, and Laila Rizvi interview Dr. Jennifer Cassidy (AI & Diplomacy, University of Oxford) about:

    - How AI is transforming diplomacy’s core functions
    - Why Big Tech now rivals governments in geopolitical influence
    - The rise of “digital sovereigns” and private power
    - When former political leaders move into tech, where accountability goes
    - Democratic versus authoritarian uses of AI
    - Why global AI governance is still largely non-binding

    For Dr. Cassidy, diplomacy rests on three timeless pillars: communication, representation, and negotiation. AI “is not demolishing these pillars, but quietly rewiring the architecture that holds them together… Predictive analysis now allows ministries to read the global mood” almost in real time. The United Nations and the World Bank use AI models that monitor food prices, rainfall patterns, and social media data to anticipate instability “up to 6 weeks before that instability might actually break out.” NATO employs machine learning to map Russian disinformation. “What we’re seeing here is the move from reactive diplomacy… to anticipatory diplomacy.”

    One of the most pressing questions is whose AI is being used to create “sovereign diplomatic AI systems.” France and the EU build their systems on models from Mistral, a French company. The US relies on OpenAI’s and Anthropic’s models. Microsoft's Azure cloud hosts data for NATO and national governments. These companies have become “digital sovereigns” – private actors who control the three levers of power once defined by the state: information, infrastructure and interpretation. Former politicians like Nick Clegg (Meta) and Rishi Sunak (Microsoft) represent a “circuit of influence” where “experience, access, and authority are just flowing continuously between capitals and campuses in Silicon Valley.” While “democracies do need experienced voices helping to steer the tech transition,” we must ensure that “when the expertise moves, accountability moves with it.”

    What about bad actors using AI? Jennifer says we’ve seen this in elections in the US and around the world. In China, “predictive policing algorithms are tracking not just where crime might occur, but who might commit it… Authoritarian regimes are combining facial recognition, travel data, and digital behaviour into vast surveillance scores.” It is “digital authoritarianism in its most refined form… controlled by prediction, rather than force.” Dr. Cassidy concludes, “We have a very, very, very long way to go regarding the governance and structure of, and frameworks for AI… a difficult task… that has to be done.”

    What’s your take? Share your thoughts in the comments and subscribe for more on AI, geopolitics and global power.

    CHAPTERS
    00:00 Tech, Trump and the New Global Power Game
    01:26 Do Tech Giants Now Run Foreign Policy?
    04:00 How AI Is Reshaping Diplomacy
    06:37 Why Nations Are Building Their Own AI Models
    09:18 Have Big Tech Companies Become Sovereigns?
    12:33 From Prime Minister to Big Tech: The Revolving Door
    16:46 AI Power Politics Beyond the West
    19:43 AI for Good or Digital Authoritarianism?
    22:09 Who Sets the Rules for AI?
    24:48 Closing Thoughts with Dr. Jennifer Cassidy
    25:05 Debrief: Authoritarian Drift and Regulation Fights
    27:13 AI Ministers, Echo Chambers and What Comes Next

    29 min
  6. 26 NOV 2025

    Ellison’s $2.5bn Bet – Can Santa Ono Turn Oxford into Europe’s Silicon Valley?

    Larry Ellison built Oracle into a cornerstone of the modern tech economy. Now he is making a $2.5 billion bet on Oxford, backing the Ellison Institute of Technology at Oxford to fuse AI, medicine and sustainability in one global hub. In this episode of Agents of Tech, Autria Godfrey, Stephen Horn and Laila Rizvi sit down in Oxford with Professor Santa Ono, Global President of the Ellison Institute of Technology (EIT), to ask a simple question: Can Oxford really become Europe’s Silicon Valley?

    We explore:
    - Why Ellison chose Oxford and the UK over Chicago or California
    - How EIT plans to recruit 7,000 world-class scientists and double Oxford’s research base
    - The model of science-led capitalism and why commercialization is central to Ellison’s vision
    - The UK’s unique advantage in health data and biobanks (NHS data, UK Biobank, Protein Data Bank)
    - How AI, machine learning and robotics will change drug discovery, pandemics and healthcare
    - The relationship between EIT and Oracle, and how independent the institute really is
    - Parallels and contrasts with the Bill & Melinda Gates Foundation model of philanthropy
    - What this means for the UK’s role between the US and China in the global innovation race

    Professor Ono explains why he believes the UK is now one of the best places in the world to build AI-driven science: from single-payer health data to a fast-growing ecosystem of serial entrepreneurs. He also addresses questions about data privacy, ethics, bioterrorism risks and public concerns about American tech money in historic British institutions.

    If you care about:
    - How AI and health data will reshape medicine
    - Whether Oxford and Cambridge can anchor Europe’s answer to Silicon Valley
    - What it really takes to build a global science and technology campus at scale
    …this conversation is for you.

    Tell us in the comments: Do you think Ellison’s Oxford gamble is a bold new model for global science, or another moonshot that will be hard to scale?

    CHAPTERS
    00:00 Larry Ellison’s $2.5B bet on Oxford
    00:35 Agents of Tech intro
    01:22 Why Oxford?
    02:45 Interview begins: Santa Ono
    03:01 Ellison’s vision for EIT
    05:11 Scaling talent and entrepreneurship
    05:53 Science capitalism vs traditional philanthropy
    07:52 Why base EIT in the UK
    10:38 NHS data, privacy and AI concerns
    12:55 AI’s impact on jobs and drug discovery
    15:12 Commercialisation and scientific breakthroughs
    17:38 Building a new global research hub
    20:26 AI geopolitics and the UK’s role
    21:03 EIT as a global model
    22:50 Interview ends
    23:01 Post-interview reflections
    24:41 Closing and invitation to Larry Ellison

    26 min
  7. 17 OCT 2025

    Will the U.S. LOSE the AI Race to China? – Helen Toner, ex-OpenAI Board Member

    Is the U.S. LOSING the AI race to China? China and the U.S. are neck and neck in the AI race for global dominance. Former OpenAI board member Helen Toner (now at Georgetown’s CSET) joins us in Washington, D.C. to break down China vs U.S. strategies – open-source diffusion vs Big Tech global dominance – and what “winning” actually means. Helen has recently spent time in China and works at the center of U.S. AI policy, offering a rare inside view of both ecosystems and who’s truly ahead.

    Helen explains:
    - Who’s ahead right now and how to measure it (frontier AI vs adoption/diffusion)
    - Open-source vs closed: DeepSeek, Qwen, Kimi, Gemma, Llama vs OpenAI, Anthropic, Google
    - Compute & chips: NVIDIA dependence, export controls, and why compute concentration matters
    - AGI timelines: whether “AI 2027” holds up and why short timelines cooled after GPT-5
    - “AI+” strategy: applying AI to manufacturing, healthcare, and finance vs pure frontier bragging rights
    - What governments should do now: transparency, auditing, AI literacy, and measurement science

    Who do you think is winning and WHY – China or the U.S.? Drop one evidence-backed reason (links welcome). We’ll pin the best reply. Don’t forget to like and subscribe for more unfiltered conversations on AI, tech, and society.

    Chapters
    00:00 – Two strategies, one AI race
    01:00 – Open-source China vs Big-Tech USA
    03:37 – Not one race: choose your finish line
    04:04 – Who’s actually open? DeepSeek, Qwen/Kimi, Llama, Gemma, GPT-OSS
    06:26 – Frontier bragging rights vs real-world adoption
    07:46 – China’s “AI Plus” play (AI + industry)
    10:06 – Is the US still ahead at the frontier?
    12:04 – GPT-5 reality check & AGI timelines
    20:58 – Compute decides: chips, export controls, auto-ML engineers
    23:04 – What we need now: transparency, audits, AI literacy
    28:02 – Standards in practice: de-facto beats de-jure
    30:56 – Next 5 years: closed peaks, open bow wave
    37:55 – Final take: which path wins?

    #OpenAI #HelenToner #ai #GPT5 #OpenSource #podcast #China #DeepSeek

    38 min
