Thinking On Paper

Mark Fielding and Jeremy Gilbertson

We are a technological species. Thinking On Paper is an independent podcast that helps you keep track of all the moving pieces. Conversations about the human impact of artificial intelligence, quantum computers, NASA, asteroid mining, coordination, trust, books, robotics, space technology, web3, physics, chemistry, sustainability, music, art, science, neuroscience, work, rest and play. New episodes every Thursday. Tech book club every month.

  1. 1H AGO

    Quantum-Centric Supercomputing Explained: IBM's Scott Crowder

    Hello, quantum supercomputing enthusiasts, IBM-intrigued disruptors and curious minds. Today we’re Thinking On Paper with Scott Crowder, VP of IBM Quantum Adoption. On the agenda? Yes, quantum-centric supercomputing. This isn’t a quantum computer replacing a classical computer. It's both, working together, dancing around the qubits and solving the materials science, chemistry and biology challenges an advanced civilization like ours needs to master. Quantum handles the subroutines it does best. Classical handles everything else. Scott is here to explain IBM’s new reference architecture, why it matters, and what already runs on it today. You’ll learn why the "quantum vs. classical" framing fails, how Cleveland Clinic simulated a 303-atom protein that no classical machine can handle, why IBM picked superconducting qubits over trapped ions, how a state-of-the-art quantum computer draws less power than a single rack of AI GPUs, and wonder just what Richard Feynman would make of quantum computing today. Please enjoy the show. And if you do, share it with one person you think would enjoy it as much as you do. Then subscribe.

    🏠 HQ: www.thinkingonpaper.xyz
    📺 INSTAGRAM: https://www.instagram.com/thinkingonpaperpodcast/
    🎧 Spotify: https://open.spotify.com/show/00volKqMsQntToeho35W47
    🎧 APPLE: https://podcasts.apple.com/us/podcast/thinking-on-paper-technology-moves-fast-think-slower/id1713227258

    --

    Mark: https://x.com/markfielding99
    Jeremy: https://www.linkedin.com/in/jeremygilbertson/

    --

    Chapters
    (00:00) Trailer
    (01:20) Quantum computing: real, hyped, or both
    (02:40) Why reference architectures decide which technologies win
    (05:05) Superconducting vs. trapped ion vs. spin qubits
    (06:47) Why accessibility and algorithmic discovery are the real bottlenecks
    (12:34) Cleveland Clinic's 303-atom protein simulation
    (13:44) IBM's quantum-centric supercomputing architecture
    (16:07) What already runs on quantum computers today
    (17:58) The roadmap: how quantum and classical converge
    (22:28) What Richard Feynman would make of the field today
    (25:25) What quantum computing means for the future of data centers
    (32:01) Quantum computers in space, and why Crowder rejects Elon's pitch
    (34:10) What computing is actually for
    (42:19) Why Qiskit, NVIDIA, and open source matter for adoption

    44 min
  2. The Long Future: Anders Sandberg on Brain Emulation, AI Safety, and Living Forever

    APR 28

    The man who wrote the original blueprint for mind uploading on what comes after Homo sapiens. Anders Sandberg, futurist, transhumanist, former Senior Research Fellow at Oxford's Future of Humanity Institute, and author of the forthcoming Law, Liberty and Leviathan: Human Autonomy in the Age of Artificial Intelligence, joins us for one of the widest-ranging conversations the show has ever recorded. This is a tour through the next thousand years. Anders pulls in memory palaces and atomic clocks, fruit-fly connectomes and Kuiper Belt city-states, drone warfare and Dracula's boredom, AI agents as "fallen angels" of your conscience, and what happens to marriage when both spouses can copy themselves.

    🎧 Listen to every podcast
    📺 Follow us on Instagram
    🏠 Follow us on X
    🏠 Follow Jeremy on LinkedIn

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    --

    Chapters
    (00:00) Augmentation and Human Potential
    (08:09) The Impact of Mobile Technology on Humanity
    (11:51) Accountability in AI Agents
    (18:25) The Role of Empathy in Human-AI Interaction
    (25:35) AGI vs. Alien Life: A Comparative Analysis
    (27:36) Consciousness and Brain Emulation
    (35:52) The Future of Uploaded Minds
    (40:33) Exploring Parallel Realities and Memory Merging
    (45:16) The Future of Human Collaboration and Organizations
    (46:24) AI's Role in Managing Global Systems
    (51:23) The Dual Economy: Human vs AI Management
    (57:43) The Complexities of Space Ownership and Governance
    (01:05:18) The Future of Space Exploration and Human Expansion
    (01:17:49) The Impact of Space Race on Human Progress
    (01:21:43) The Role of Nations and Corporations in Space Exploration
    (01:24:22) Experimenting with New Forms of Governance
    (01:26:18) NASA's Future in the Age of Innovation
    (01:28:41) The Potential for Breakaway Movements in Space
    (01:30:16) Trust and Coordination in Space Governance
    (01:34:18) The Future of Fusion Energy
    (01:42:15) The Value of Time and Life Extension
    (01:48:06) Reinventing Identity in Extended Lifespans
    (01:52:03) The Future of Humanity and Technology

    1h 54m
  3. What Seinfeld Knows That Sam Altman Doesn't: Carissa Véliz on AI, Prophecy, and Truth

    APR 24

    Predictions are not facts. Yet we're betting our jobs, our democracies, and our children's attention spans on them. Oxford philosopher Carissa Véliz, author of Prophecy and Privacy Is Power, joins Mark Fielding and Jeremy Gilbertson on Thinking On Paper to dismantle the most lucrative con of the AI era: the self-fulfilling prophecy. When Sam Altman tells you that anyone in 2035 will command "the intellectual capacity equivalent to everyone in 2025"… when Dario Amodei warns AI will wipe out 50% of entry-level white-collar jobs… when Jensen Huang announces the IT department is now an HR department for AI agents… you are not hearing forecasts. You are hearing sales pitches dressed as inevitability. Repeat them often enough, and HR really does start firing humans and buying OpenAI subscriptions. Klarna fired 700 people on the AI hype. A year later, they were hiring 700 people back. This is the oldest trick in the book, stretching from the Oracle of Delphi to Rasputin to Polymarket and Kalshi, and Carissa shows you how to see through it.

    --

    📺 Watch On YouTube:
    🎧 Listen to every podcast
    📺 Follow us on Instagram
    🏠 Follow us on X
    🏠 Follow Jeremy on LinkedIn

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    --

    Chapters
    (00:00) Intro
    (01:00) What is the good life?
    (02:00) Why knowing yourself matters more than strategy
    (04:44) The analog world vs the digital world
    (06:45) How prophecies exploit our need for security
    (08:47) Why ancient Rome banned predicting the emperor's death
    (10:11) The illusion of safety that AI sells us
    (12:27) When predictions work, and when they don't
    (15:00) Altman, Amodei, Huang: predictions or sales pitches?
    (28:29) How to resist prophecies as a busy person
    (29:53) Prediction markets, Polymarket, and democracy
    (31:49) TikTok, algorithms, and the Molly Russell case
    (36:08) "Engagement algorithms are cocaine in food"
    (40:54) Self-fulfilling prophecies as the perfect crime
    (43:44) Why comedy is the enemy of prophecy
    (46:59) What Seinfeld teaches us about predictive algorithms
    (52:16) Karikó and the Nobel Prize we almost missed
    (53:40) Increase your serendipity
    (56:13) Why Epicurus beats the Stoics

    1 hr
  4. Asteroid Mining, Property Rights, and Who Owns The Moon: Space to Grow

    APR 15

    Who owns the Moon? China? The USA? Nobody, everybody? We're about to find out. It's the last part of Space to Grow by Matthew Weinzierl and Brendan Rousseau, and today we learn about asteroid mining, the trillion-dollar promise of Psyche-16 and the property-rights questions raised by the 1967 Outer Space Treaty. We detour to the Kuiper Belt via John Locke, Kant, Hume, and Rousseau to ask who actually owns space. Along the way: Peter Diamandis and Planetary Resources, the role of DARPA and national security in funding the space industry, the "military celestial complex," and what happens when the global south is locked out of the rules being written above their heads. If SpaceX builds at the south pole of the Moon and China plants a flag in the Sea of Storms, what then?

    --

    Chapters
    (00:00) Global Conflict and Space Resources
    (02:04) Human Nature and Space Exploration
    (03:28) The Economics of Asteroid Mining
    (05:53) Legal Frameworks for Space Mining
    (11:05) The Space Resource Exploration Act
    (13:01) International Reactions to Space Mining Legislation
    (17:19) Philosophical Perspectives on Space Ownership
    (20:14) The Role of National Security in Space
    (20:40) The Role of Government in Space Innovation
    (21:34) National Security and the Space Industry
    (23:10) Weaponization of Space: A New Era
    (24:47) The Prisoner's Dilemma in Space Cooperation
    (26:40) Humanity's Moral Compass in Space Exploration
    (27:03) The Future of Humanity in Space

    28 min
  5. $3 Billion in Space Tech: The Top 10 Raises of 2026 So Far

    APR 9

    They're Chinese and crashing rockets. That's about all we know about the space company with the biggest financial backing of 2026. There are plenty of US companies, so don't worry about the space race. Stoke Space, Sierra Space and Cesium Astro have scored hundreds of millions on the back of government security contracts. In fact, the ten largest funding rounds total over $3.7 billion. And it's only April. Alongside defence, the list features encrypted GPS alternatives, space-based weather platforms used by Formula One to tell drivers whether the track is wet, satellite communications and reusable rockets. But the biggest surprise sits at the top of the list: Beijing-based iSpace China claimed the single largest raise at $729 million.

    The Top 10 In Full
    iSpace China — $729M
    Sierra Space — $550M (Series C)
    Vast Space — $500M (Series A)
    Cesium Astro — $470M
    Axiom Space — $350M
    Stoke Space — $350M
    PLD Space — €210M (Series C)
    Tomorrow.io — $175M
    Xona Space — $170M (Series C)
    StarCloud — $170M

    --

    🎧 Listen to every podcast
    📺 Follow us on Instagram
    🏠 Follow us on X
    🏠 Follow Jeremy on LinkedIn

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    Chapters
    (00:00) StarCloud
    (00:52) Xona Space
    (03:27) Tomorrow.io
    (06:01) PLD Space
    (08:00) Stoke Space
    (10:18) Axiom Space
    (12:29) Cesium Astro
    (14:50) Vast Space
    (19:02) Sierra Space
    (21:47) iSpace (Beijing Interstellar Glory Space Technology Ltd.)
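    For anyone who wants to check the headline figure, here is a back-of-the-envelope sum of the ten rounds. The EUR→USD rate used to convert PLD Space's €210M is an assumed illustrative rate, not a figure from the episode.

```python
# Back-of-the-envelope total for the ten largest space-tech raises of 2026.
# Figures are in millions of USD, taken from the list above.
rounds_musd = {
    "iSpace China": 729,
    "Sierra Space": 550,
    "Vast Space": 500,
    "Cesium Astro": 470,
    "Axiom Space": 350,
    "Stoke Space": 350,
    "Tomorrow.io": 175,
    "Xona Space": 170,
    "StarCloud": 170,
}

EUR_USD = 1.15  # assumed exchange rate (not from the source)
pld_musd = 210 * EUR_USD  # PLD Space raised EUR 210M

total_musd = sum(rounds_musd.values()) + pld_musd
print(f"Total: ${total_musd / 1000:.1f}B")  # prints "Total: $3.7B"
```

    With any plausible exchange rate, the total lands around the $3.7 billion the episode quotes.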

    29 min
  6. An 1899 Law for AI and Space: The Martens Clause

    APR 7

    For 100,000 years, peace didn’t exist. And soon we’ll have AGI and cities on the Moon and expect everyone to get along. The first conference on not killing each other wasn’t until 1899. What the hell were we doing before that? Why did it take humanity so long to sit down and speak about peace? That’s a question for another podcast. For this one, let’s rewind the clock and have a story.

    The Martens Clause was a legal principle drafted by the Imperial Russian diplomat Fyodor Martens during the first Hague Peace Conference of 1899. It established that even in the absence of specific written law, nations and individuals remain bound by "the laws of humanity and the requirements of public conscience." In short, don’t be an a*****e.

    Originally conceived as a compromise to prevent the collapse of early international humanitarian law negotiations - when smaller nations like Belgium objected to being smaller nations - the clause became a foundational backstop in international law. It was subsequently invoked in some of the most consequential legal proceedings of the twentieth century, including the Nuremberg Trials of 1945-46, the 1949 Corfu Channel dispute and the 1986 ICJ ruling against the United States for mining Nicaraguan harbors and supporting the Contra insurgency.

    Now we want to know whether this 127-year-old clause could serve as what Jeremy calls a "minimum viable architecture" for governing emerging technologies. Please enjoy the show. And keep the peace.

    --

    🎧 Listen to every podcast
    📺 Follow us on Instagram
    🏠 Follow us on X
    🏠 Follow Jeremy on LinkedIn

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    --

    Chapters
    (00:00) The First Peace Conference: A Historical Perspective
    (07:37) The Martens Clause: Implications for Modern Governance
    (10:05) Space Tech and the Outer Space Treaty
    (13:58) AI and the Need for Ethical Frameworks
    (17:21) Accountability in Technology Deployment
    (22:56) The Future of Humanity: Collaboration vs. Competition

    28 min
  7. AI, the Kill Chain, and the Race Against China: The Pentagon's AI Memo

    APR 3

    Imagine if the future of the world rested on the shoulders of Donald Trump, Pete Hegseth and their pet AI war strategy. Yep! We're about to find out. On January 9th, 2026, the US Secretary of Defense signed a memorandum called Artificial Intelligence Strategy for the Department of War. Six weeks later, the US was at war with Iran and AI was identifying targets. Mark and Jeremy read the memo line by line. What they found: a strategy built on speed over safety, experimentation over caution, and the explicit statement that "the risks of not moving fast enough outweigh the risks of imperfect alignment." The memo outlines swarm warfare, AI-generated military intelligence, 30-day deadlines for federating classified data across all departments, and a talent war with Silicon Valley. Anthropic, the company that asked for safeguards against mass surveillance and full automation of the kill chain, was classified as a supply chain risk. This episode asks one question: does AI make war more likely or less likely?

    --

    🎧 Listen to every podcast
    📺 Follow us on Instagram
    🏠 Follow us on X
    🏠 Follow Jeremy on LinkedIn

    To suggest guests or sponsor the show, please email: hello@thinkingonpaper.xyz

    --

    Chapters
    (00:00) Artificial Intelligence Strategy for the Department of War
    (00:58) Executive Order 14179: America's AI Military Dominance
    (01:59) China and the AI Arms Race
    (04:36) Anthropic & Eliminating Bureaucratic Barriers
    (07:20) The 7 Pace-Setting Projects (PSPs) in the Memo
    (08:28) 100% LLM Kill Chain Capability
    (10:22) Palmer Luckey
    (11:53) Intelligence & the AI Open Arsenal
    (13:57) The Wartime Approach to Blockers
    (16:46) AI Talent Acquisition at the DoW
    (18:54) "We must accept that the risks of not moving fast enough outweigh the risks of imperfect alignment"

    22 min
