Thinking On Paper

Mark Fielding and Jeremy Gilbertson

Technology is woven into your work, relationships and culture. AI, quantum and robotics are changing the world faster than the rules that run it. Thinking On Paper is an independent podcast that helps you make sense of all the moving pieces. Mark and Jeremy are your guides. As Gen-X parents they worry about the change coming. As writers they've already experienced it. Now they ask the questions that survive the hype cycle, avoid the billionaire worship and put you at the centre of the story. New episodes are published every week. And there's a technology book club every month.

  1. AI Will Make You A Pampered Aristocrat: Anders Sandberg on AGI, Longevity And Brain Emulation

    4 DAYS AGO

    AI Will Make You A Pampered Aristocrat: Anders Sandberg on AGI, Longevity And Brain Emulation

    Imagine a world where there are multiple versions of you. A world where you can upload your consciousness into a humanoid or a virtual world... A world where you never die. Take it further. What if there were hundreds of you, all living out separate dreams and realities... and then merging back, splicing your memories together into the one you?

    In this episode, we Think On Paper with Anders Sandberg about where AGI, brain emulation and human augmentation could take humanity, and the obstacles and challenges that await us when we get there.

    We start with a simple idea: you’re already augmented. Your phone, your notes, your habits, they’re all extensions of your mind. But as we push further, into digital minds and brain uploads, the questions get harder. If you copy your consciousness, are you still you? And what happens when those copies start living different lives?

    From there, we zoom out: AGI, global coordination, and the possibility that smarter systems might run the world better than we can… while quietly reducing our role in it.

    And finally, space. Civilization moves off planet. Can we avoid taking our political division and inequality with us? Who owns the Moon? And what happens when asteroid economies replace ours?

    We had too much fun doing this interview, and we hope you enjoy it as much as we did. If you do, please subscribe and share with your boss, sister, wife or best friend.

    Cheers, Mark & Jeremy.

    🎧 Listen to every podcast⁠ 📺 Follow us on ⁠Instagram⁠ 🏠 Follow us on ⁠X⁠ 🏠 Follow Jeremy on ⁠LinkedIn⁠

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    --

    Chapters

    (00:00) Augmentation and Human Potential
    (08:09) The Impact of Mobile Technology on Humanity
    (11:51) Accountability in AI Agents
    (18:25) The Role of Empathy in Human-AI Interaction
    (25:35) AGI vs. Alien Life: A Comparative Analysis
    (27:36) Consciousness and Brain Emulation
    (35:52) The Future of Uploaded Minds
    (40:33) Exploring Parallel Realities and Memory Merging
    (45:16) The Future of Human Collaboration and Organizations
    (46:24) AI's Role in Managing Global Systems
    (51:23) The Dual Economy: Human vs AI Management
    (57:43) The Complexities of Space Ownership and Governance
    (01:05:18) The Future of Space Exploration and Human Expansion
    (01:17:49) The Impact of Space Race on Human Progress
    (01:21:43) The Role of Nations and Corporations in Space Exploration
    (01:24:22) Experimenting with New Forms of Governance
    (01:26:18) NASA's Future in the Age of Innovation
    (01:28:41) The Potential for Breakaway Movements in Space
    (01:30:16) Trust and Coordination in Space Governance
    (01:34:18) The Future of Fusion Energy
    (01:42:15) The Value of Time and Life Extension
    (01:48:06) Reinventing Identity in Extended Lifespans
    (01:52:03) The Future of Humanity and Technology

    1 h 54 min
  2. Who Made Sam Altman The New Oracle Of Delphi? - Carissa Véliz on Prophecy, AI And Living The Analogue Life

    24/04

    Who Made Sam Altman The New Oracle Of Delphi? - Carissa Véliz on Prophecy, AI And Living The Analogue Life

    When Sam Altman or Elon Musk tells you AGI will take your job and 99% of the workforce will be living on universal basic income, do you believe them?

    Carissa Véliz, Oxford philosopher and author of Prophecy, Thinks On Paper about why predictions are never facts, why tech CEOs predict a future they want you to buy, and how to take back ownership of your own life in a world that increasingly wants to steal your attention... not to mention your data.

    They're called self-fulfilling prophecies. The idea is simple. If you say 'AI will take all the jobs and your HR department will be run by agentic agents, for agentic agents' often enough, people start to believe it. The predictions sneak and burrow their way into the collective consciousness, and before long your HR department is cutting jobs and spending its budget on OpenAI subscriptions.

    But there is a way to protect yourself from the modern-day prophets. From the Oracle of Delphi via Rasputin, the story is as old as human manipulation. From Seinfeld and Epicurus to Polymarket and books.

    Welcome to Thinking On Paper.

    Carissa Véliz is an Associate Professor at the Institute for Ethics in AI at the University of Oxford. Her new book Prophecy is out now.

    --

    📺 Watch On YouTube: 🎧 Listen to every podcast⁠ 📺 Follow us on ⁠Instagram⁠ 🏠 Follow us on ⁠X⁠ 🏠 Follow Jeremy on ⁠LinkedIn⁠

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    --

    CHAPTERS

    (00:00) Intro
    (01:00) What is the good life?
    (02:00) Why knowing yourself matters more than strategy
    (04:44) The analog world vs the digital world
    (06:45) How prophecies exploit our need for security
    (08:47) Why ancient Rome banned predicting the emperor's death
    (10:11) The illusion of safety that AI sells us
    (12:27) When predictions work, and when they don't
    (15:00) Altman, Amodei, Huang: predictions or sales pitches?
    (28:29) How to resist prophecies as a busy person
    (29:53) Prediction markets, Polymarket, and democracy
    (31:49) TikTok, algorithms, and the Molly Russell case
    (36:08) "Engagement algorithms are cocaine in food"
    (40:54) Self-fulfilling prophecies as the perfect crime
    (43:44) Why comedy is the enemy of prophecy
    (46:59) What Seinfeld teaches us about predictive algorithms
    (52:16) Karikó and the Nobel Prize we almost missed
    (53:40) Increase your serendipity
    (56:13) Why Epicurus beats the Stoics

    1 h
  3. China Beats The USA To #1 In Top 10 Space Investment Of 2026

    9/04

    China Beats The USA To #1 In Top 10 Space Investment Of 2026

    They're Chinese and crashing rockets. That's about all we know about the space company with the biggest financial backing of 2026.

    There are plenty of US companies, so don't worry about the space race. Stoke Space, Sierra Space and Cesium Astro have scored hundreds of millions on the back of government security contracts. In fact, the ten largest funding rounds total over $3.7 billion. And it's only April.

    As well as defence, the list features encrypted GPS alternatives and space-based weather platforms used by Formula One to tell drivers whether the track is wet. So do satellite communications and reusable rockets.

    But the biggest surprise sits at the top of the list: Beijing-based iSpace China claimed the single largest raise at $729 million.

    The Top 10 In Full

    iSpace China — $729M
    Sierra Space — $550M (Series C)
    Vast Space — $500M (Series A)
    Cesium Astro — $470M
    Axiom Space — $350M
    Stoke Space — $350M
    PLD Space — €210M (Series C)
    Tomorrow.io — $175M
    Xona Space — $170M (Series C)
    StarCloud — $170M

    --

    🎧 Listen to every podcast⁠ 📺 Follow us on ⁠Instagram⁠ 🏠 Follow us on ⁠X⁠ 🏠 Follow Jeremy on ⁠LinkedIn⁠

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    Chapters

    (00:00) Starcloud
    (00:52) Xona Space
    (03:27) Tomorrow IO
    (06:01) PLD Space
    (08:00) Stoke Space
    (10:18) Axiom Space
    (12:29) Cesium Astro
    (14:50) VAST Space
    (19:02) Sierra Space
    (21:47) iSpace (Beijing Interstellar Glory Space Technology Ltd.)

    29 min
  4. How To Stop AGI Being An Asshole

    7/04

    How To Stop AGI Being An Asshole

    For 100,000 years, peace didn’t exist. And soon we’ll have AGI and cities on the Moon and expect everyone to get along.

    The first conference on not killing each other wasn’t until 1899. What the hell were we doing before that? Why did it take humanity so long to sit down and talk about peace? That’s a question for another podcast. For this one, let’s rewind the clock and tell a story.

    The Martens Clause was a legal principle drafted by the Imperial Russian diplomat Fyodor Martens during the first Hague Peace Conference of 1899. It established that even in the absence of specific written law, nations and individuals remain bound by "the laws of humanity and the requirements of public conscience."

    In short: don’t be an asshole.

    Originally conceived as a compromise to prevent the collapse of early international humanitarian law negotiations - when smaller nations like Belgium objected to being smaller nations - the clause became a foundational backstop in international law. It was subsequently invoked in some of the most consequential legal proceedings of the twentieth century, including the Nuremberg Trials of 1945-46, the 1949 Corfu Channel case and the 1986 ICJ ruling against the United States for mining Nicaraguan harbors and supporting the Contra insurgency.

    Now we want to know whether this 127-year-old clause could serve as what Jeremy calls a "minimum viable architecture" for governing emerging technologies.

    Please enjoy the show. And keep the peace.

    --

    🎧 Listen to every podcast⁠ 📺 Follow us on ⁠Instagram⁠ 🏠 Follow us on ⁠X⁠ 🏠 Follow Jeremy on ⁠LinkedIn⁠

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    --

    Chapters

    (00:00) The First Peace Conference: A Historical Perspective
    (07:37) The Martens Clause: Implications for Modern Governance
    (10:05) Space Tech and the Outer Space Treaty
    (13:58) AI and the Need for Ethical Frameworks
    (17:21) Accountability in Technology Deployment
    (22:56) The Future of Humanity: Collaboration vs. Competition

    28 min
  5. Donald Trump Directs The Department of War To Accelerate America's Military AI Dominance

    3/04

    Donald Trump Directs The Department of War To Accelerate America's Military AI Dominance

    Imagine if the future of the world rested on the shoulders of Donald Trump, Pete Hegseth and their pet AI war strategy. Yep! We're about to find out.

    On January 9th 2026, the US Secretary of Defense signed a memorandum called Artificial Intelligence Strategy for the Department of War. Six weeks later, the US was at war with Iran and AI was identifying targets.

    Mark and Jeremy read the memo line by line. What they found: a strategy built on speed over safety, experimentation over caution, and the explicit statement that "the risks of not moving fast enough outweigh the risks of imperfect alignment."

    The memo outlines swarm warfare, AI-generated military intelligence, 30-day deadlines for federating classified data across all departments, and a talent war with Silicon Valley. Anthropic, the company that asked for safeguards against mass surveillance and full automation of the kill chain, was classified as a supply chain risk.

    This episode asks one question: does AI make war more likely or less likely?

    --

    🎧 Listen to every podcast⁠ 📺 Follow us on ⁠Instagram⁠ 🏠 Follow us on ⁠X⁠ 🏠 Follow Jeremy on ⁠LinkedIn⁠

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    --

    Chapters

    (00:00) Artificial Intelligence Strategy for the Department of War
    (00:58) Executive Order 14179: America's AI Military Dominance
    (01:59) China And AI Arms Race
    (04:36) Anthropic & Eliminating Bureaucratic Barriers
    (07:20) The 7 Pace Setting Projects (PSPs) In The Memo
    (08:28) 100% LLM Kill Chain Capability
    (10:22) Palmer Luckey
    (11:53) Intelligence & The AI Open Arsenal
    (13:57) The War Time Approach To Blockers
    (16:46) AI Talent Acquisition At The DOW
    (18:54) We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment

    22 min
