Law://WhatsNext

Tom Rice and Alex Herrity

How are leading practitioners leveraging emerging technologies and ways of working to pursue their passions and objectives, and, as a by-product, what are the implications for the future of legal practice? Let’s explore this together.

What to expect:
- Focused conversations with leading practitioners, technologists, and educators
- Deep dives into the intersection of law, technology, and organisational behaviour
- Practical analysis and visualisation of how AI is augmenting our potential
- Insights from adjacent industries that might inform our own

  1. The Quantum Paradox: Rebecca Keating and Laura Wright on the Race to Get Encryption-Ready

    4D AGO

    Google says it will be able to break RSA encryption by 2029. Third-party actors are already collecting encrypted data on the assumption they'll be able to read it later. The UK has just committed £2 billion to a quantum strategy.

    🎙️ This week we sit down with Rebecca Keating and Laura Wright — barristers at 4 Pump Court and co-authors of A Practical Guide to Quantum Computing and the Law. Both are that rare breed of barrister with technical credentials to complement deep legal expertise. Rebecca worked in-house at Dropbox before being called to the Bar in 2017, sits on the ICO's Technology Advisory Panel, and has acted in one of the only quantum-related cases to pass through the UK courts. Laura took an MSc in Computing Science at Imperial mid-career — her final project was a new coding language for legal contracts — and she now writes and speaks regularly on smart contracts, AI liability, and quantum risk.

    ---

    What You'll Learn

    What is Quantum Computing — Alex surprises us all with his own definition and a sneak preview of how he likes to prepare for our podcast conversations! 👀

    The Quantum Paradox — Rebecca's framing for the central tension of the technology: quantum computers can upend the security systems that are the basis upon which we keep information safe, yet they also offer the capability to build systems more secure than any we have ever had.

    Harvest Now, Decrypt Later — This is not a future threat. Third-party actors are already collecting RSA-encrypted data they can't read today on the assumption they'll be able to decrypt it within a few years. NIST's quantum-readiness window of 2030–2035 is, in Rebecca's view, too late to start the conversation — particularly for anyone holding sensitive medical, political, or nationally significant data.

    Contracting for Quantum Computing as a Service — Customers won't own quantum computers; they'll access them remotely on a pay-as-you-go basis. Laura walks through the features likely to make "QCaaS" contracts genuinely different from SaaS.

    ---

    Connect with Rebecca Keating — Barrister at 4 Pump Court | Member, ICO Technology Advisory Panel

    Connect with Laura Wright — Barrister at 4 Pump Court | Co-host, 4 Pump Court podcast

    Their book — A Practical Guide to Quantum Computing and the Law (Law Brief Publishing, December 2024).

    The Law of AI (2nd edition, Sweet & Maxwell) — Rebecca and Laura author the chapter on AI and Professional Liability.

    Society for Computers and Law (SCL) — Rebecca and Laura's recent SCL webinar on quantum legal issues was the catalyst for this episode. Both (plus Tom) are members of the SCL, a leading educational charity for the tech law community in the UK.

    ---

    If you enjoyed this conversation, please share it with someone or a community you think would find it valuable. And if you have a moment, rate the show and tell us what landed — it helps us reach more people and keep getting brilliant guests like Rebecca and Laura.

    ---

    For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.

    40 min
  2. Inside the Machine: Bilva Chandra on Trust, Truth, and the Future of Knowledge in an AI World

    APR 13

    🎤 This week we sit down with Bilva Chandra — who has spent the bulk of her career working on AI safety, ethics, and governance at the places where it matters most. Her CV reads like a guided tour of the AI governance landscape: she's worked at OpenAI, RAND, the US AI Safety Institute (now CAISI), and most recently Google DeepMind. She's been inside the frontier labs building the technology and inside the institutions trying to govern or influence its development — often thinking about the same problems from both sides of the table.

    ---

    One of the catalysts for our conversation was her recent contribution to a Google DeepMind paper — Architecting Trust in Artificial Epistemic Agents — exploring what happens when AI systems become active participants in how knowledge is created and shared. But it's Bilva's broader career at the intersection of AI and society that makes this conversation so compelling: she's someone who genuinely cares about getting this right, and isn't afraid to say when she's worried.

    We cover a lot of ground — from the practical challenge of making AI systems reliable enough for enterprise adoption, to the deeper worry about what happens to human judgment when cognitive work is increasingly offloaded to machines. What emerges is a picture of someone both genuinely optimistic about what AI can unlock and deeply clear-eyed about the societal fault lines it's accelerating. Bilva doesn't treat AI risk as a theoretical exercise. She frames it as a human problem — one tangled up with polarisation, declining trust in institutions, and an information environment that was already broken before the first LLM shipped.

    ---

    Connect with Bilva Chandra — on LinkedIn | Or by subscribing to Role Model, her new newsletter on AI and society.

    ---

    If you enjoyed this conversation, please share it with someone you think would find it valuable — especially anyone grappling with how to govern AI responsibly in their organisation. And if you have a moment, rate the show and tell us what landed — it helps us reach more people and keep getting brilliant guests like Bilva.

    ---

    For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.

    43 min
  3. The AI Dividend: David Bushby on Corporate Teams Going It Alone

    APR 8

    🎤 This week we sit down with David Bushby — Head of Legal Operations at Canva and the voice behind In Counsel Weekly — for a conversation that's part provocation, part practical playbook, and part three mates who love this stuff catching up over a good topic. David follows Sam Lewis as the second member of the brilliant Canva legal team to join us for a chat, and from our vantage point it's easy to discern the talent, curiosity and sense of fun coursing through their team.

    ---

    David has spent the last couple of years at the forefront of AI adoption in legal — not as a commentator, but as a practitioner. Custom GPTs, Claude skills in Cowork, vibe-coded legal research tools, Chrome extensions that make SaaS platforms do things they were never designed to do. He's materially contributing to the building of an AI-native legal function in real time.

    ---

    "I just started to feel, by the end of last year — we just really need to go it alone. We can control what tools we use... maybe the AI dividend is just up to us on the in-house side."

    Here's a big question David has been grappling with (and we get into it straight away during our catch-up): are law firms passing on their AI productivity savings to clients? The invoice data says no. The rate increases — 12–16% in the US and UK, with individual partner hikes of 25–35% — say definitely not. And at the line-item level? Nothing.

    So what does a smart in-house legal ops team do? They begin to contemplate going it alone. David walks through Canva's owner's-mindset approach to outsourcing, the vibe-coded tools his team is building in Claude Code, and why the AI dividend might just be something in-house teams have to take for themselves.

    ---

    Connect with David Bushby — by subscribing to In Counsel Weekly, David's popular bite-sized weekly newsletter for in-house counsel | Or find him on LinkedIn

    ---

    If you enjoyed our conversation, please share it with someone you think would be interested in listening (or who is equally passionate about, or enraged by, law firm productivity). And if you've got a spare minute, please rate the show and tell us what landed 🙏 It helps us grow our audience and continue to attract great guests like David.

    ---

    For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.

    45 min
  4. Legal Tech Trends with Peter Duffy (Q1 2026)

    MAR 27

    🎙️ Peter Duffy is back for our quarterly deep dive into the biggest stories from his ever-popular Legal Tech Trends newsletter (celebrating its recent 50th edition 🎉). This time around, the conversation is dominated by one name: Anthropic. Between a legal plugin that spooked public markets, a viral tweet showcasing a "Claude-native" law firm, and a principled stand-off with the US Defense Department that sent millions of users switching sides — it's been quite the quarter.

    What else we dive into:

    Vibe coding hits legal — From weekend hackathons to working prototypes in 30 minutes. Peter explains why it's transforming ideation and prototyping, but flags the considerable leap from "amazing demo" to "enterprise-ready". Plus, Alex reveals his salmon regulation app "Branchly" is storming the charts over at vibecode.law.

    The privilege and compliance watch-outs — An SRA investigation into a solicitor uploading client docs to ChatGPT, a US ruling that use of consumer Claude waived attorney-client privilege, and judges struggling with where "AI" begins and ends. Shadow IT is alive and well.

    The LLM numbers blind spot — Peter's public service announcement: LLMs are not designed for numerical calculation, and it's one of the easiest ways to trigger hallucinations.

    The McKinsey security incident — A security researcher accessed 45 million+ internal chatbot messages. Not an AI-specific problem per se, but a timely reminder that vibe-coded tools and internal chatbots need proper security scrutiny — especially when you have client data and a reputation on the line.

    Harvey, Legora, and the question you shouldn't be asking — "Which one should I buy?" Maybe start with your problems, not the product. Talk to your users, define your requirements, understand the commercial value — then go to market with a structured evaluation.

    ---

    Listen if: You want a grounded, hype-free take on the quarter that put legal AI firmly in the mainstream spotlight.

    ---

    Rate, subscribe, comment, and share if you enjoyed this chat with Peter!

    ---

    For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.

    34 min
  5. The Outsider Inside: Nick West on Rewiring the Law Firm

    MAR 19

    🎤 This week we sit down (for our first in-person episode) with Nick West — Partner and Chief Strategy Officer at Mishcon de Reya — who has spent two decades working at the intersection of law, technology and business model innovation.

    Nick’s path is one of the more unusual and instructive in the industry: competition lawyer at Linklaters, strategy consultant at McKinsey, product leader at LexisNexis, Managing Director of Axiom UK, and now the person responsible for technological transformation and R&D at Mishcon. He founded MDR Lab (one of the first legal tech startup incubators) and the MDR Group (a collection of specialist consultancy businesses that sit alongside, but separate from, the core Mishcon legal practice), built one of the industry’s first in-house data science teams, and has overseen the firm’s AI adoption journey from early experimentation through to commercial platform deployment. There are few people in the legal industry who’ve thought as deeply — or as practically — about how law firms actually work and how they might need to change.

    The conversation is wide-ranging — we cover the full arc of Nick’s career, the evolution of innovation culture inside a law firm, how Mishcon adopted AI (and what they got wrong along the way), the productivity question everyone’s asking, what happens when clients start sending genuinely good AI-drafted documents, and the early “signals” for where the business model of law might be heading.

    ---

    Connect with Nick West — Partner and Chief Strategy Officer at Mishcon de Reya

    ---

    If you enjoyed this conversation, please share it with someone or a community you feel would benefit from listening. If you have a moment, tell us what resonated (and what didn't) and rate the show — it helps us grow the audience and get great guests like Nick!

    ---

    For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.

    1h 28m
  6. Who Pays for the Truth? The UK's Copyright Battle with Big Tech with Matt Rogerson

    MAR 13

    🎙️ This week Tom sits down with Matt Rogerson — Global Policy Director at the Financial Times and one of the more prominent and forceful voices in the UK press and publishing industry on the question of AI companies using copyrighted content without permission or payment.

    The timing could hardly be more significant. We recorded this conversation on the day the House of Lords Communications and Digital Committee published what may prove to be the most consequential UK report on AI and creative industries to date: AI, Copyright and the Creative Industries — an 85-page report drawing on testimony from Google, Meta, Microsoft, OpenAI and dozens of creative industry bodies, whose conclusions could not be clearer: the UK's copyright framework is not outdated, the problems stem from widespread unlicensed use, and the government should rule out a commercial text and data mining exception entirely. And just one week earlier, the FT helped launch SPUR — the Standards for Publisher Usage Rights coalition — alongside the BBC, The Guardian, Sky News and The Telegraph: a coalition not just defending the status quo, but getting on the front foot to build shared technical standards and licensing frameworks so AI developers can access quality journalism through rights-cleared channels.

    What provoked this conversation was a pamphlet published by Public First, a UK policy consultancy, titled "Text & Data Mining and its value to the UK economy", which called for a broad commercial exception to UK copyright law, extending the argument to cover AI inference as well as training. Matt's reaction on LinkedIn was characteristically direct, and it got us talking.

    ---

    During our conversation, Matt dismantles several of the core narratives being advanced by AI lobbyists — the anthropomorphisation of models to normalise unlicensed use; the claim that licensing infrastructure is too hard to build; and the idea that the UK must weaken copyright to remain competitive. He makes a compelling case that the real opportunity lies not in capitulating to US hyperscalers, but in building sovereign AI models with transparent training data and proper licensing — pointing to the Allen Institute, a US model co-funded by the government and Nvidia, as proof that this is already happening.

    Matt also highlights the infrastructure already being built to support fair licensing: Microsoft's Publisher Content Marketplace, the FT's existing commercial API access, and emerging thinking from writers like Florent Daudens on what a post-browser, agentic news economy could look like. The claim that it's "too hard" for AI companies to pay for content is not just wrong — it's being actively disproved by the market.

    And we close on what may be the most consequential long-term argument of all: the slop spiral. If there is no economic incentive to produce high-quality journalism — because AI companies can take it for free — the supply of reliable information degrades. AI models trained on and retrieving from an increasingly polluted information environment produce worse outputs. Trust erodes. And we drift into a world where the information we consume is wholly dependent on the alignment of a particular model and the commercial interests of those administering it. Matt makes the case that secure news and information supply chains could become a national security issue if this dynamic starts to accelerate.

    ---

    If you enjoyed this conversation, please share it with someone or a community you feel would benefit from listening. If you have a moment, tell us what resonated (and what didn't) and rate the show — it helps us grow the audience and get great guests like Matt!

    ---

    For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.

    15 min
  7. AI Security, Agentic Risk & What lawyers need to understand with Rok Popov Ledinski

    MAR 6

    We sit down with Rok Popov Ledinski — an independent legal AI and data consultant whose background spans high-security enterprise engineering through to advising law firms on their AI and security strategy. Our initial interest in Rok's work was sparked by his YouTube channel, where he's been producing sharp, accessible breakdowns of the real risks underpinning today's AI tools.

    Within minutes, we're into a forensic dissection of Anthropic's Claude Cowork — the agentic tool pitched at non-developers that launched earlier this year. Rok walks us through the contradictions in Anthropic's own technical documentation: a tool demonstrated by its creators as a way to organise your desktop, while the same support pages advise against granting it access to sensitive local files. A tool marketed for running tasks autonomously in the background — while its activity isn't captured by audit logs. A tool whose safety guidance asks users to watch for "suspicious actions that may indicate prompt injections" — aimed at an audience that, as Rok points out, has largely never heard of prompt injections.

    Rok explains, in terms accessible to non-technical listeners, how hidden instructions embedded in an innocuous document can hijack an AI agent into exfiltrating sensitive client data. His hypothetical attack vector for law firms is disarmingly simple: find lawyers on LinkedIn who are openly using Cowork, send a document to their publicly available email address containing concealed instructions, and let the agent do the rest.

    But this isn't an anti-AI conversation. Rok is emphatic that these tools should be used — just not naively. Drawing on enterprise security frameworks from companies like Cisco, he advocates for a practical middle ground: map what your AI has access to, create sanitised copies of sensitive folders, scope permissions tightly, vet your MCP servers and plugins, and understand (physically, not just contractually) how data flows through your systems.

    ---

    Key Takeaways

    The Cowork Paradox — Anthropic's own documentation reveals a tension between how Cowork is marketed (autonomous, background task execution) and how it should be used (limited permissions, no sensitive files, manual monitoring for prompt injections).

    Security attacks are now a "when", not an "if" — Unlike traditional cybersecurity breaches, prompt injection attacks exploit a fundamental limitation of large language models: they can't distinguish instructions from data. Research shows success rates as high as 90% for some proprietary LLMs. Claude is among the more resistant, but not immune.

    Practical security for legal teams — Rok's actionable advice for in-house teams and law firms includes: creating clean data environments separate from originals; using self-hostable workflow tools like n8n; scoping AI permissions to the minimum necessary; and conducting genuine due diligence on every plugin and MCP server before connecting it to your systems.

    ---

    Key References

    Rok's YouTube channel — where our interest in Rok's work began, and a recommended follow for anyone wanting to stay across the security dimensions of legal AI adoption.

    Rok's LinkedIn — he hosts weekly live sessions every Saturday with a security expert specialising in air-gapped, offline AI deployments in regulated industries.

    The Art of Modern Legal Warfare — Rok co-authors, with former guest and friend of the show Anna Guo and Sakshi Udeshi, a series on vulnerability types specific to legal AI use cases.

    ---

    If you enjoyed this conversation, please share it with someone or a community you feel would benefit from listening. If you have a moment, tell us what resonated (and what didn't) and rate the show — it helps us grow the audience and get great guests like Rok!

    39 min
  8. AI Governance: Ethics, Agents & the Human Question with Catie Sheret, Oliver Patel & Peter Lee

    FEB 25

    🎙️ Alex and Tom step aside for this one — handing the mic to their friend Catie Sheret (General Counsel at Cambridge University Press & Assessment), who hosts a rich three-way conversation with Oliver Patel (Head of Enterprise AI Governance at AstraZeneca) and Peter Lee (Partner at Simmons & Simmons). Three very different vantage points, converging on the same question: how do you actually make AI governance work in practice?

    What begins with a definitional exercise (what is AI governance, anyway?) quickly evolves. Oliver draws a sharp line between AI ethics, responsible AI, AI governance and AI safety as related but distinct disciplines — and makes a passionate case that governance is fundamentally change management, not compliance theatre. Peter describes the "golden thread" he sees in the best organisations: corporate philosophy flowing from the boardroom right down into the tools people use every day. Catie grounds everything in context — arguing that your principles only stick when they're anchored to what your organisation actually does: content IP at Cambridge, medical ethics at AstraZeneca, and so on.

    The conversation builds through the practical mechanics — use case assessment, vendor oversight, committee structures, crisis preparation — before arriving at the question everyone's wrestling with: agentic AI. Peter frames it as a mindset shift from "can we trust the output?" to "what actions can this system initiate?" Oliver goes further: the fundamental logic of agentic AI, he argues, is to take the human out of the loop — and organisations need to confront that honestly rather than pretending otherwise.

    There's a wonderful thread on human flourishing running throughout — Peter's insistence that philosophers have never been more important, Oliver's pride in AstraZeneca's "Thriving in the Age of AI" literacy programme, and a closing round of book recommendations that ranges from Richard Susskind's How to Think About AI to Jenny Odell's How to Do Nothing (Oliver's brilliantly contrarian pick about the importance of stepping away from screens entirely) to Governing the Machine by Ray Eitel-Porter, Paul Dongha and Miriam Vogel. It's a masterclass in how to think about governance as something that enables rather than constrains — hosted with warmth and real expertise by Catie.

    ---

    If you enjoyed this episode, please share it with a friend, team or community who might also enjoy it! Let us know what resonated (by comment) and rate the show (if you haven't already) — we appreciate your time, attention and support!

    ---

    For more conversations at the intersection of law and technology, head to https://lawwhatsnext.substack.com/.

    46 min

