Crazy Wisdom

In his series "Crazy Wisdom," Stewart Alsop explores cutting-edge topics, particularly in the realm of technology, such as Urbit and artificial intelligence. Alsop embarks on a quest for meaning, engaging with others to expand his own understanding of reality and that of his audience. The topics covered in "Crazy Wisdom" are diverse, ranging from emerging technologies to spirituality, philosophy, and general life experiences. Alsop's unique approach aims to make connections between seemingly unrelated subjects, tying together ideas in unconventional ways.

  1. Episode #523: Space Computer: When Your Trusted Execution Environment Needs a Rocket

    2D AGO


    In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Daniel Bar, co-founder of Space Computer, a satellite-based secure compute protocol that creates a "root of trust in space" using tamper-resistant hardware for cryptographic applications. The conversation explores the fascinating intersection of space technology, blockchain infrastructure, and trusted execution environments (TEEs), touching on everything from cosmic radiation-powered random number generators to the future of space-based data centers and Daniel's journey from quantum computing research to building what they envision as the next evolution beyond Ethereum's "world computer" concept. For more information about Space Computer, visit spacecomputer.io, and check out their new podcast "Frontier Pod" on the Space Computer YouTube channel.

    Timestamps
    00:00 Introduction to Space Computer
    02:45 Understanding Layer 1 and Layer 2 in Space Computing
    06:04 Trusted Execution Environments in Space
    08:45 The Evolution of Trusted Execution Environments
    11:59 The Role of Blockchain in Space Computing
    14:54 Incentivizing Satellite Deployment
    17:48 The Future of Space Computing and Its Applications
    20:58 Radiation Hardening and Space Environment Challenges
    23:45 Kardashev Civilizations and the Future of Energy
    26:34 Quantum Computing and Its Implications
    29:49 The Intersection of Quantum and Crypto
    32:26 The Future of Space Computer and Its Vision

    Key Insights
    1. Space-based data centers solve the physical security problem for Trusted Execution Environments (TEEs). While TEEs provide secure compute through physical isolation, they remain vulnerable to attacks requiring physical access, such as electron microscope forensics to extract secrets from chips. By placing TEEs in space, these attack vectors become practically impossible, creating the highest possible security guarantees for cryptographic applications.
    2. The Space Computer architecture uses a hybrid layer approach with space-based settlement and earth-based compute. The layer 1 blockchain operates in space as a settlement layer and smart contract platform, while layer 2 solutions on earth provide high-performance compute. This design leverages space's security advantages while compensating for the bandwidth and compute constraints of orbital infrastructure through terrestrial augmentation.
    3. True randomness generation becomes possible through cosmic radiation harvesting. Unlike the pseudo-random number generators used in most blockchain applications today, space-based systems can harvest cosmic radiation as a genuinely stochastic process. This provides pure randomness critical for cryptographic applications like block producer selection, eliminating the predictability issues that compromise security in earth-based random number generation. (A minimal sketch of this idea appears after these insights.)
    4. Space compute migration is inevitable as humanity advances toward a Kardashev Type 1 civilization. The progression toward planetary-scale energy control requires space-based infrastructure including solar collection, orbital cities, and distributed compute networks. This technological evolution makes space-based data centers not just viable but necessary for supporting the scale of computation required for advanced civilization development.
    5. The optimal use case for space compute is high-security applications rather than general data processing. While space-based data centers face significant constraints, including 40 kg of peripheral infrastructure per kg of compute, the impossibility of maintenance, and 5-year operational lifespans, these limitations become acceptable when the application requires maximum security guarantees that only space-based isolation can provide.
    6. Space Computer will evolve from centralized early-stage operation to a decentralized satellite constellation. Similar to early Ethereum's foundation-operated nodes, Space Computer currently runs trusted operations but aims to enable public participation through satellite ownership stakes. Future participants could fractionally own satellites providing secure compute services, creating economic incentives similar to Bitcoin mining pools or Ethereum staking.
    7. Blockchain represents a unique compute platform that meshes hardware, software, and free-market activity. Unlike traditional computers with discrete inputs and outputs, blockchain creates an organism where market participants provide inputs through trading, lending, and other economic activities, while the distributed network processes and returns value through the same market mechanisms, creating a cyborg-like integration of technology and economics.
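
    The randomness insight above maps onto a well-known hardware-RNG recipe: timestamp stochastic detector events, turn inter-arrival intervals into raw bits, debias them, and hash the result into a seed. The sketch below is a minimal Python illustration under those assumptions; it simulates Poisson arrivals in place of a real radiation detector and is not Space Computer's actual pipeline.

        import hashlib
        import random  # stands in for a real radiation detector below

        def intervals_to_bits(timestamps):
            # Decay/cosmic-ray arrivals are Poisson-distributed, so comparing
            # consecutive inter-arrival intervals yields one (possibly biased)
            # bit per pair of intervals; ties are discarded.
            gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
            bits = []
            for i in range(0, len(gaps) - 1, 2):
                if gaps[i] != gaps[i + 1]:
                    bits.append(1 if gaps[i] < gaps[i + 1] else 0)
            return bits

        def von_neumann_debias(bits):
            # Classic extractor: 01 -> 0, 10 -> 1, 00/11 discarded.
            return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

        def seed_from_events(timestamps):
            # Condition the debiased bits into a uniform 256-bit seed.
            bits = von_neumann_debias(intervals_to_bits(timestamps))
            return hashlib.sha256(bytes(bits)).hexdigest()

        # Simulated Poisson arrivals; real hardware would supply detector timestamps.
        t, events = 0.0, []
        for _ in range(20000):
            t += random.expovariate(1.0)
            events.append(t)
        print(seed_from_events(events))

    The debias-then-hash step is what turns a physically random but biased source into bits usable for applications like block producer selection.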

    1h 4m
  2. Episode #522: The Hardware Heretic: Why Everything You Think About FPGAs Is Backwards

    6D AGO


    In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Peter Schmidt Nielsen, who is building FPGA-accelerated servers at Saturn Data. The conversation explores why servers need FPGAs, how these field-programmable gate arrays work as "IO expanders" for massive memory bandwidth, and why they're particularly well-suited for vector database and search applications. Peter breaks down the technical realities of FPGAs - including why they "really suck" in many ways compared to GPUs and CPUs - while explaining how his company is leveraging them to provide terabyte-per-second bandwidth to 1.3 petabytes of flash storage. The discussion ranges from distributed systems challenges and the CAP theorem to the hardware-software relationship in modern computing, offering insights into both the philosophical aspects of search technology and the nuts-and-bolts engineering of memory controllers and routing fabrics. For more information about Peter's work, you can reach him on Twitter at @PTRSCHMDTNLSN or find his website at saturndata.com.

    Timestamps
    00:00 Introduction to FPGAs and Their Role in Servers
    02:47 Understanding FPGA Limitations and Use Cases
    05:55 Exploring Different Types of Servers
    08:47 The Importance of Memory and Bandwidth
    11:52 Philosophical Insights on Search and Access Patterns
    14:50 The Relationship Between Hardware and Search Queries
    17:45 Challenges of Distributed Systems
    20:47 The CAP Theorem and Its Implications
    23:52 The Evolution of Technology and Knowledge Management
    26:59 FPGAs as IO Expanders
    29:35 The Trade-offs of FPGAs vs. ASICs and GPUs
    32:55 The Future of AI Applications with FPGAs
    35:51 Exciting Developments in Hardware and Business

    Key Insights
    1. FPGAs are fundamentally "crappy ASICs" with serious limitations - Despite being programmable hardware, FPGAs perform far worse than general-purpose alternatives in most cases. A $100,000 high-end FPGA might only match the memory bandwidth of a $600 gaming GPU. They're only valuable for specific niches like ultra-low latency applications or scenarios requiring massive parallel I/O operations, making them unsuitable for most computational workloads where CPUs and GPUs excel.
    2. The real value of FPGAs lies in I/O expansion, not computation - Rather than using FPGAs for their processing power, Saturn Data leverages them primarily as cost-effective ways to access massive amounts of DRAM controllers and NVMe interfaces. Their server design puts 200 FPGAs in a 2U enclosure with 1.3 petabytes of flash storage and terabyte-per-second read bandwidth, essentially using FPGAs as sophisticated I/O expanders. (See the back-of-envelope sketch after these insights.)
    3. Access patterns determine hardware performance more than raw specs - The way applications access data fundamentally determines whether specialized hardware will provide benefits. Applications that do sparse reads across massive datasets (like vector databases) benefit from Saturn Data's architecture, while those requiring dense computation or frequent inter-node communication are better served by traditional hardware. Understanding these patterns is crucial for matching workloads to appropriate hardware.
    4. Distributed systems complexity stems from failure tolerance requirements - The difficulty of distributed systems isn't inherent but depends on what failures you need to tolerate. Simple approaches that restart on any failure are easy but unreliable, while Byzantine fault tolerance (like Bitcoin) is extremely complex. Most practical systems, including banks, find middle ground by accepting occasional unavailability rather than trying to achieve perfect consistency, availability, and partition tolerance simultaneously.
    5. Hardware specialization follows predictable cycles of generalization and re-specialization - Computing hardware consistently follows "Makimoto's Wave": specialized hardware becomes more general over time, then gets leapfrogged by new specialized solutions. CPUs became general-purpose, GPUs evolved from fixed graphics pipelines to programmable compute, and now companies like Etched are creating transformer-specific ASICs. This cycle repeats as each generation adds programmability until someone strips it away for performance gains.
    6. Memory bottlenecks are reshaping the hardware landscape - The AI boom has created severe memory shortages, doubling costs for DRAM components overnight. This affects not just GPU availability but creates opportunities for alternative architectures. When everyone faces higher memory costs, the relative premium for specialized solutions like FPGA-based systems becomes more attractive, potentially shifting the competitive landscape for memory-intensive applications.
    7. Search applications represent ideal FPGA use cases due to their sparse access patterns - Vector databases and search workloads are particularly well-suited to FPGA acceleration because they involve searching through massive datasets with sparse access patterns rather than dense computation. These applications can effectively utilize the high bandwidth to flash storage and parallel I/O capabilities that FPGAs provide, making them natural early adopters for this type of specialized hardware architecture.
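
    To make the I/O-expander framing concrete, here is a back-of-envelope split of the enclosure figures quoted above. The per-FPGA numbers are illustrative arithmetic, not Saturn Data's published specifications.

        # Figures quoted in the episode; the per-FPGA split below is an
        # illustrative estimate, not a spec sheet.
        fpgas = 200                 # FPGAs per 2U enclosure
        flash_pb = 1.3              # petabytes of flash storage
        read_bw_tb_s = 1.0          # aggregate read bandwidth, TB/s

        flash_per_fpga_tb = flash_pb * 1000 / fpgas        # ~6.5 TB each
        bw_per_fpga_gb_s = read_bw_tb_s * 1000 / fpgas     # ~5 GB/s each

        print(f"flash per FPGA:     {flash_per_fpga_tb:.1f} TB")
        print(f"bandwidth per FPGA: {bw_per_fpga_gb_s:.1f} GB/s")

    Roughly 5 GB/s per FPGA is commodity-NVMe territory; the value comes from aggregating hundreds of such channels behind one fabric, which matches the I/O-expansion argument rather than a compute argument.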

    53 min
  3. Episode #521: From Borges to Threadrippers: How Argentina's Emotional Culture Shapes the AI Future

    JAN 9


    In this episode of the Crazy Wisdom Podcast, host Stewart Alsop interviews Aurelio Gialluca, an economist and full stack data professional who works across finance, retail, and AI as both a data engineer and machine learning developer, while also exploring human consciousness and psychology. Their wide-ranging conversation covers the intersection of science and psychology, the unique cultural characteristics that make Argentina a haven for eccentrics (drawing parallels to the United States), and how Argentine culture has produced globally influential figures from Borges to Maradona to Che Guevara. They explore the current AI landscape as a "centralizing force" creating cultural homogenization (particularly evident in LinkedIn's cookie-cutter content), discuss the potential futures of AI development from dystopian surveillance states to anarchic chaos, and examine how Argentina's emotionally mature, non-linear communication style might offer insights for navigating technological change. The conversation concludes with Gialluca describing his ambitious project to build a custom water-cooled workstation with industrial-grade processors for his quantitative hedge fund, highlighting the practical challenges of heat management and the recent tripling of RAM prices due to market consolidation.

    Timestamps
    00:00 Exploring the Intersection of Psychology and Science
    02:55 Cultural Eccentricity: Argentina vs. the United States
    05:36 The Influence of Religion on National Identity
    08:50 The Unique Argentine Cultural Landscape
    11:49 Soft Power and Cultural Influence
    14:48 Political Figures and Their Cultural Impact
    17:50 The Role of Sports in Shaping National Identity
    20:49 The Evolution of Argentine Music and Subcultures
    23:41 AI and the Future of Cultural Dynamics
    26:47 Navigating the Chaos of AI in Culture
    33:50 Equilibrating Society for a Sustainable Future
    35:10 The Patchwork Age: Decentralization and Society
    35:56 The Impact of AI on Human Connection
    38:06 Individualism vs. Collective Rules in Society
    39:26 The Future of AI and Global Regulations
    40:16 Biotechnology: The Next Frontier
    42:19 Building a Personal AI Lab
    45:51 Tiers of AI Labs: From Personal to Industrial
    48:35 Mathematics and AI: The Foundation of Innovation
    52:12 Stochastic Models and Predictive Analytics
    55:47 Building a Supercomputer: Hardware Insights

    Key Insights
    1. Argentina's Cultural Exceptionalism and Emotional Maturity: Argentina stands out globally for allowing eccentrics to flourish and having a non-linear communication style that Gialluca describes as "non-monotonous systems." Argentines can joke profoundly and be eccentric while simultaneously being completely organized and straightforward, demonstrating high emotional intelligence and maturity that comes from their unique cultural blend of European romanticism and Latino lightheartedness.
    2. Argentina as an Underrecognized Cultural Superpower: Despite being introverted about their achievements, Argentina produces an enormous amount of global culture through music, literature, and iconic figures like Borges, Maradona, Messi, and Che Guevara. These cultural exports have shaped entire generations worldwide, with Argentina "stealing the thunder" from other nations and creating lasting soft power influence that people don't fully recognize as Argentine.
    3. AI's Cultural Impact Follows Oscillating Patterns: Culture operates as a dynamic system that oscillates between centralization and decentralization like a sine wave. AI currently represents a massive centralizing force, as seen in LinkedIn's homogenized content, but this will inevitably trigger a decentralization phase. The speed of this cultural transformation has accelerated dramatically, with changes that once took generations now happening in years.
    4. The Coming Bifurcation of AI Futures: Gialluca identifies two extreme possible endpoints for AI development: complete centralized control (the "Mordor" scenario with total surveillance) or complete chaos where everyone has access to dangerous capabilities like creating weapons or viruses. Finding a middle path between these extremes is essential for society's survival, requiring careful equilibrium between accessibility and safety.
    5. Individual AI Labs Are Becoming Democratically Accessible: Gialluca outlines a tier system for AI capabilities, where individuals can now build "tier one" labs capable of fine-tuning models and processing massive datasets for tens of thousands of dollars. This democratization means that capabilities once requiring teams of PhD scientists can now be achieved by dedicated individuals, fundamentally changing the landscape of AI development and access.
    6. Hardware Constraints Are the New Limiting Factor: While AI capabilities are rapidly advancing, practical implementation is increasingly constrained by hardware availability and cost. RAM prices have tripled in recent months, and the challenge of managing enormous heat output from powerful processors requires sophisticated cooling systems. These physical limitations are becoming the primary bottleneck for individual AI development.
    7. Data Quality Over Quantity Is the Critical Challenge: The main bottleneck for AI advancement is no longer energy or GPUs, but high-quality data for training. Early data labeling efforts produced poor results because labelers lacked domain expertise. The future lies in reinforcement learning (RL) environments where AI systems can generate their own high-quality training data, representing a fundamental shift in how AI systems learn and develop.

    1h 8m
  4. Episode #520: Training Super Intelligence One Simulated Workflow at a Time

    JAN 5


    In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Josh Halliday, who works on training super intelligence with frontier data at Turing. The conversation explores the fascinating world of reinforcement learning (RL) environments, synthetic data generation, and the crucial role of high-quality human expertise in AI training. Josh shares insights from his years working at Unity Technologies building simulated environments for everything from oil and gas safety scenarios to space debris detection, and discusses how the field has evolved from quantity-focused data collection to specialized, expert-verified training data that's becoming the key bottleneck in AI development. They also touch on the philosophical implications of our increasing dependence on AI technology and the emerging job market around AI training and data acquisition.

    Timestamps
    00:00 Introduction to AI and Reinforcement Learning
    03:12 The Evolution of AI Training Data
    05:59 Gaming Engines and AI Development
    08:51 Virtual Reality and Robotics Training
    11:52 The Future of Robotics and AI Collaboration
    14:55 Building Applications with AI Tools
    17:57 The Philosophical Implications of AI
    20:49 Real-World Workflows and RL Environments
    26:35 The Impact of Technology on Human Cognition
    28:36 Cultural Resistance to AI and Data Collection
    31:12 The Bottleneck of High-Quality Data in AI
    32:57 Philosophical Perspectives on Data
    35:43 The Future of AI Training and Human Collaboration
    39:09 The Role of Subject Matter Experts in Data Quality
    43:20 The Evolution of Work in the Age of AI
    46:48 Convergence of AI and Human Experience

    Key Insights
    1. Reinforcement Learning environments are sophisticated simulations that replicate real-world enterprise workflows and applications. These environments serve as training grounds for AI agents by creating detailed replicas of tools like Salesforce, complete with specific tasks and verification systems. The agent attempts tasks, receives feedback on failures, and iterates until achieving consistent success rates, effectively learning through trial and error in a controlled digital environment. (A toy sketch of this loop follows these insights.)
    2. Gaming engines like Unity have evolved into powerful platforms for generating synthetic training data across diverse industries. From oil and gas companies needing hazardous scenario data to space intelligence firms tracking orbital debris, these real-time 3D engines with advanced physics can create high-fidelity simulations that capture edge cases too dangerous or expensive to collect in reality, bridging the gap where real-world data falls short.
    3. The bottleneck in AI development has fundamentally shifted from data quantity to data quality. The industry has completely reversed course from the previous "scale at all costs" approach to focusing intensively on smaller, higher-quality datasets curated by subject matter experts. This represents a philosophical pivot toward precision over volume in training next-generation AI systems.
    4. Remote teleoperation through VR is creating a new global workforce for robotics training. Workers wearing VR headsets can remotely control humanoid robots across the globe, teaching them tasks through direct demonstration. This creates opportunities for distributed talent while generating the nuanced human behavioral data needed to train autonomous systems.
    5. Human expertise remains irreplaceable in the AI training pipeline despite advancing automation. Subject matter experts provide crucial qualitative insights that go beyond binary evaluations, offering the contextual "why" and "how" that transforms raw data into meaningful training material. The challenge lies in identifying, retaining, and properly incentivizing these specialists as demand intensifies.
    6. First-person perspective data collection represents the frontier of human-like AI training. Companies are now paying people to life-log their daily experiences, capturing petabytes of egocentric data to train models more similarly to how human children learn through constant environmental observation, rather than traditional batch-processing approaches.
    7. The convergence of simulation, robotics, and AI is creating unprecedented philosophical and practical challenges. As synthetic worlds become indistinguishable from reality and AI agents gain autonomy, we're entering a phase where the boundaries between digital and physical, human and artificial intelligence, become increasingly blurred, requiring careful consideration of dependency, agency, and the preservation of human capabilities.
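
    A minimal sketch of the attempt/verify/iterate loop described in insight 1, with a toy "workflow" task and a deterministic verifier. The environment, action space, and task here are hypothetical illustrations, not Turing's actual environments.

        import random

        class SimulatedWorkflowEnv:
            # Toy stand-in for an enterprise-workflow RL environment: the
            # task is to open a record, set its status, and save it. A
            # verifier checks the final state, mirroring the
            # attempt/verify/iterate loop from the episode.
            ACTIONS = ["open_record", "set_status", "save", "noop"]

            def reset(self):
                self.state = {"open": False, "status": "new", "saved": False}
                return dict(self.state)

            def step(self, action):
                if action == "open_record":
                    self.state["open"] = True
                elif action == "set_status" and self.state["open"]:
                    self.state["status"] = "closed"
                elif action == "save" and self.state["open"]:
                    self.state["saved"] = True
                done = self.verify()
                return dict(self.state), (1.0 if done else 0.0), done

            def verify(self):
                # Deterministic verifier: did the agent finish the workflow?
                return self.state["status"] == "closed" and self.state["saved"]

        # Random-policy baseline: iterate until the verifier passes.
        env = SimulatedWorkflowEnv()
        env.reset()
        for step in range(100):
            _, reward, done = env.step(random.choice(SimulatedWorkflowEnv.ACTIONS))
            if done:
                print(f"task verified after {step + 1} steps")
                break

    A real training setup would replace the random policy with a learning agent and use the verifier's signal as the reward, but the loop structure is the same.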

    50 min
  5. Episode #519: Inside the Stack: What Really Makes Robots “Intelligent”

    JAN 2


    In this episode of the Crazy Wisdom podcast, host Stewart Alsop interviews Marcin Dymczyk, CPO and co-founder of SevenSense Robotics, exploring the fascinating world of advanced robotics and AI. Their conversation covers the evolution from traditional "standard" robotics with predetermined pathways to advanced robotics that incorporates perception, reasoning, and adaptability - essentially the AGI of physical robotics. Dymczyk explains how his company builds "the eyes and brains of mobile robots" using camera-based autonomy algorithms, drawing parallels between robot sensing systems and human vision, inner ear balance, and proprioception. The discussion ranges from the technical challenges of sensor fusion and world models to broader topics including robotics regulation across different countries, the role of federalism in innovation, and how recent geopolitical changes are driving localized high-tech development, particularly in defense applications. They also touch on the democratization of robotics for small businesses and the philosophical implications of increasingly sophisticated AI systems operating in physical environments. To learn more about SevenSense, visit www.sevensense.ai. Check out this GPT we trained on the conversation.

    Timestamps
    00:00 Introduction to Robotics and Personal Journey
    05:27 The Evolution of Robotics: From Standard to Advanced
    09:56 The Future of Robotics: AI and Automation
    12:09 The Role of Edge Computing in Robotics
    17:40 FPGA and AI: The Future of Robotics Processing
    21:54 Sensing the World: How Robots Perceive Their Environment
    29:01 Learning from the Physical World: Insights from Robotics
    33:21 The Intersection of Robotics and Manufacturing
    35:01 Journey into Robotics: Education and Passion
    36:41 Practical Robotics Projects for Beginners
    39:06 Understanding Particle Filters in Robotics
    40:37 World Models: The Future of AI and Robotics
    41:51 The Black Box Dilemma in AI and Robotics
    44:27 Safety and Interpretability in Autonomous Systems
    49:16 Regulatory Challenges in Robotics and AI
    51:19 Global Perspectives on Robotics Regulation
    54:43 The Future of Robotics in Emerging Markets
    57:38 The Role of Engineers in Modern Warfare

    Key Insights
    1. Advanced robotics transcends traditional programming through perception and intelligence. Dymczyk distinguishes between standard robotics that follows rigid, predefined pathways and advanced robotics that incorporates perception and reasoning. This evolution enables robots to make autonomous decisions about navigation and task execution, similar to how humans adapt to unexpected situations rather than following predetermined scripts.
    2. Camera-based sensing systems mirror human biological navigation. SevenSense Robotics builds "eyes and brains" for mobile robots using multiple cameras (up to eight), IMUs (accelerometers/gyroscopes), and wheel encoders that parallel human vision, inner ear balance, and proprioception. This redundant sensing approach allows robots to navigate even when one system fails, such as operating in dark environments where visual sensors are compromised. (A minimal particle-filter sketch, one of the estimation techniques discussed in the episode, follows these insights.)
    3. Edge computing dominates industrial robotics due to connectivity and security constraints. Many industrial applications operate in environments with poor connectivity (like underground grocery stores) or require on-premise solutions for confidentiality. This necessitates powerful local processing capabilities rather than cloud-dependent AI, particularly in automotive factories where data security about new models is paramount.
    4. Safety regulations create mandatory "kill switches" that bypass AI decision-making. European and US regulatory bodies require deterministic safety systems that can instantly stop robots regardless of AI reasoning. These systems operate like human reflexes, providing immediate responses to obstacles while the main AI brain handles complex navigation and planning tasks.
    5. Modern robotics development benefits from increasingly affordable optical sensors. The democratization of 3D cameras, laser range finders, and miniature range measurement chips (costing just a few dollars from distributors like DigiKey) enables rapid prototyping and innovation that was previously limited to well-funded research institutions.
    6. Geopolitical shifts are driving localized high-tech development, particularly in defense applications. The changing role of US global leadership and lessons from Ukraine's drone warfare are motivating countries like Poland to develop indigenous robotics capabilities. Small engineering teams can now create battlefield-effective technology using consumer drones equipped with advanced sensors.
    7. The future of robotics lies in natural language programming for non-experts. Dymczyk envisions a transformation where small business owners can instruct robots using conversational language rather than complex programming, similar to how AI coding assistants now enable non-programmers to build applications through natural language prompts.
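
    Particle filters come up in the episode (39:06) as a core localization tool. Below is a minimal 1-D predict/weight/resample cycle, assuming Gaussian motion and sensor noise; it is a textbook sketch, not SevenSense's production code.

        import math
        import random

        def particle_filter_step(particles, control, measurement,
                                 motion_noise=0.1, sensor_noise=0.5):
            # Predict: move every particle by the commanded displacement
            # (e.g., from wheel encoders), plus motion noise.
            moved = [p + control + random.gauss(0, motion_noise) for p in particles]
            # Weight: particles near the measurement (e.g., a camera fix on
            # a known landmark) are more plausible under Gaussian noise.
            weights = [math.exp(-(p - measurement) ** 2 / (2 * sensor_noise ** 2)) + 1e-12
                       for p in moved]
            # Resample: draw a new particle set proportional to the weights.
            return random.choices(moved, weights=weights, k=len(moved))

        # Track a robot moving +1.0 per step along a corridor from position 0.
        particles = [random.uniform(-5.0, 5.0) for _ in range(500)]
        true_pos = 0.0
        for _ in range(20):
            true_pos += 1.0
            z = true_pos + random.gauss(0, 0.5)   # noisy position reading
            particles = particle_filter_step(particles, 1.0, z)
        print(f"estimate {sum(particles) / len(particles):.2f}, truth {true_pos:.2f}")

    The same predict/weight/resample structure generalizes to fusing cameras, IMUs, and wheel encoders in higher-dimensional state spaces.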

    1h 2m
  6. Episode #518: Decentralization Without Romance: Incentives, Mesh Networks, and Practical Crypto

    12/29/2025


    In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Mike Bakon to explore the fascinating intersection of hardware hacking, blockchain technology, and decentralized systems. Their conversation spans from Mike's childhood fascination with taking apart electronics in 1980s Poland to his current work with ESP32 microcontrollers, LoRa mesh networks, and Cardano blockchain development. They discuss the technical differences between UTXO and account-based blockchains, the challenges of true decentralization versus hybrid systems, and how AI tools are changing the development landscape. Mike shares his vision for incentivizing mesh networks through blockchain technology and explains why he believes mass adoption of decentralized systems will come through abstraction rather than technical education. The discussion also touches on the potential for creating new internet infrastructure using ad hoc mesh networks and the importance of maintaining truly decentralized, permissionless systems in an increasingly surveilled world. You can find Mike on Twitter as @anothervariable. Check out this GPT we trained on the conversation.

    Timestamps
    00:00 Introduction to Hardware and Early Experiences
    02:59 The Evolution of AI in Hardware Development
    05:56 Decentralization and Blockchain Technology
    09:02 Understanding UTXO vs Account-Based Blockchains
    11:59 Smart Contracts and Their Functionality
    14:58 The Importance of Decentralization in Blockchain
    17:59 The Process of Data Verification in Blockchain
    20:48 The Future of Blockchain and Its Applications
    34:38 Decentralization and Trustless Systems
    37:42 Mainstream Adoption of Blockchain
    39:58 The Role of Currency in Blockchain
    43:27 Interoperability vs Bridging in Blockchain
    47:27 Exploring Mesh Networks and LoRa Technology
    01:00:25 The Future of AI and Decentralization

    Key Insights
    1. Hardware curiosity drives innovation from childhood - Mike's journey into hardware began as a child in 1980s Poland, where he would disassemble toys like battery-powered cars to understand how they worked. This natural curiosity about taking things apart and understanding their inner workings laid the foundation for his later expertise in microcontrollers like the ESP32 and his deep understanding of both hardware and software integration.
    2. AI as a research companion, not a replacement for coding - Mike uses AI and LLMs primarily as research tools and coding companions rather than letting them write entire applications. He finds them invaluable for getting quick answers to coding problems, analyzing Git repositories, and avoiding the need to search through Stack Overflow, but remains wary when AI writes whole functions, preferring to understand and write his own code.
    3. Blockchain decentralization requires trustless consensus verification - The fundamental difference between blockchain databases and traditional databases lies in the consensus process that data must go through before being recorded. Unlike centralized systems where one entity controls data validation, blockchains require hundreds of nodes to verify each block through trustless consensus mechanisms, ensuring data integrity without relying on any single authority.
    4. UTXO vs account-based blockchains have fundamentally different architectures - Cardano uses an extended UTXO model (like Bitcoin but with smart contracts) where transactions consume existing UTXOs and create new ones, keeping the ledger lean. Ethereum uses account-based ledgers that store persistent state, leading to much larger data requirements over time and making it increasingly difficult for individuals to sync and maintain full nodes independently. (A toy UTXO ledger sketch follows these insights.)
    5. True interoperability differs fundamentally from bridging - Real blockchain interoperability means being able to send assets directly between different blockchains (like sending ADA to a Bitcoin wallet) without intermediaries. This is possible between UTXO-based chains like Cardano and Bitcoin. Bridges, in contrast, require centralized entities to listen for transactions on one chain and trigger corresponding actions on another, introducing centralization risks.
    6. Mesh networks need economic incentives for sustainable infrastructure - While technologies like LoRa and Meshtastic enable impressive decentralized communication networks, the challenge lies in incentivizing people to maintain the hardware infrastructure. Mike sees potential in combining blockchain-based rewards (like earning ADA for running mesh network nodes) with existing decentralized communication protocols to create self-sustaining networks.
    7. Mass adoption comes through abstraction, not education - Rather than trying to educate everyone about blockchain technology, mass adoption will happen when developers can build applications on decentralized infrastructure that users interact with seamlessly, without needing to understand the underlying blockchain mechanics. Users should be able to benefit from decentralization through well-designed interfaces that abstract away the complexity of wallets, addresses, and consensus mechanisms.
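
    A toy version of the UTXO model from insight 4: transactions consume existing outputs whole and create new ones, so the live ledger state is just the unspent set. This is a simplified sketch (no signatures, scripts, or fees), not Cardano's actual ledger rules.

        from dataclasses import dataclass

        @dataclass(frozen=True)
        class UTXO:
            tx_id: str
            index: int
            owner: str
            amount: int

        class Ledger:
            # Toy UTXO ledger: state is only the set of unspent outputs,
            # which is why the model stays lean compared to persistent
            # account state.
            def __init__(self):
                self.utxos = {}

            def add(self, utxo):
                self.utxos[(utxo.tx_id, utxo.index)] = utxo

            def spend(self, inputs, outputs, tx_id):
                ins = [self.utxos[key] for key in inputs]   # KeyError if already spent
                assert sum(u.amount for u in ins) == sum(a for _, a in outputs), \
                    "inputs must exactly fund outputs (fees ignored here)"
                for key in inputs:
                    del self.utxos[key]                     # consume old outputs
                for i, (owner, amount) in enumerate(outputs):
                    self.add(UTXO(tx_id, i, owner, amount)) # create new outputs

        ledger = Ledger()
        ledger.add(UTXO("genesis", 0, "alice", 100))
        # Alice pays Bob 40 and sends 60 back to herself as change.
        ledger.spend([("genesis", 0)], [("bob", 40), ("alice", 60)], "tx1")
        print(ledger.utxos)

    In an account-based model, the same payment would instead mutate persistent balances, which is the architectural difference the insight describes.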

    1h 9m
  7. Episode #517: How Orbital Robotics Turns Space Junk into Infrastructure

    12/26/2025


    In this episode of the Crazy Wisdom Podcast, host Stewart Alsop speaks with Aaron Borger, founder and CEO of Orbital Robotics, about the emerging world of space robotics and satellite capture technology. The conversation covers a fascinating range of topics including Borger's early experience launching AI-controlled robotic arms to space as a student, his work at Blue Origin developing lunar lander software, and how his company is developing robots that can capture other spacecraft for refueling, repair, and debris removal. They discuss the technical challenges of operating in space - from radiation hardening electronics to dealing with tumbling satellites - as well as the broader implications for the space economy, from preventing the Kessler effect to building space-based recycling facilities and mining lunar ice for rocket fuel. You can find more about Aaron Borger’s work at Orbital Robots and follow him on LinkedIn for updates on upcoming missions and demos. Check out this GPT we trained on the conversation.

    Timestamps
    00:00 Introduction to orbital robotics, satellite capture, and why sensing and perception matter in space
    05:00 The Kessler Effect, cascading collisions, and why space debris is an economic problem before it is an existential one
    10:00 From debris removal to orbital recycling and the idea of turning junk into infrastructure
    15:00 Long-term vision of space factories, lunar ice, and refueling satellites to bootstrap a lunar economy
    20:00 Satellite upgrading, servicing live spacecraft, and expanding today’s narrow space economy
    25:00 Costs of collision avoidance, ISS maneuvers, and making debris capture economically viable
    30:00 Early experiments with AI-controlled robotic arms, suborbital launches, and reinforcement learning in microgravity
    35:00 Why deterministic AI and provable safety matter more than LLM hype for spacecraft control
    40:00 Radiation, single event upsets, and designing space-safe AI systems with bounded behavior
    45:00 AI, physics-based world models, and autonomy as the key to scaling space operations
    50:00 Manufacturing constraints, space supply chains, and lessons from rocket engine software
    55:00 The future of space startups, geopolitics, deterrence, and keeping space usable for humanity

    Key Insights
    1. Space Debris Removal as a Growing Economic Opportunity: Aaron Borger explains that orbital debris is becoming a critical problem with approximately 3,000-4,000 defunct satellites among the 15,000 total satellites in orbit. The company is developing robotic arms and AI-controlled spacecraft to capture other satellites for refueling, repair, debris removal, and even space station assembly. The economic case is compelling - it costs about $1 million for the ISS to maneuver around debris, so if their spacecraft can capture and remove multiple pieces of debris for less than that cost per piece, it becomes financially viable while addressing the growing space junk problem.
    2. Revolutionary AI Safety Methods Enable Space Robotics: Traditional NASA engineers have been reluctant to use AI for spacecraft control due to safety concerns, but Orbital Robotics has developed breakthrough methods combining reinforcement learning with traditional control systems that can mathematically prove the AI will behave safely. Their approach uses physics-based world models rather than pure data-driven learning, ensuring deterministic behavior and bounded operations. This represents a significant advancement over previous AI approaches that couldn't guarantee safe operation in the high-stakes environment of space.
    3. Vision for Space-Based Manufacturing and Resource Utilization: The long-term vision extends beyond debris removal to creating orbital recycling facilities that can break down captured satellites and rebuild them into new spacecraft using existing materials in orbit. Additionally, the company plans to harvest propellant from lunar ice, splitting it into hydrogen and oxygen for rocket fuel, which could kickstart a lunar economy by providing economic incentives for moon-based operations while supporting the growing satellite constellation infrastructure.
    4. Unique Space Technology Development Through Student Programs: Borger and his co-founder gained unprecedented experience by launching six AI-controlled robotic arms to space through NASA's student rocket programs while still undergraduates. These missions involved throwing and catching objects in microgravity using deep reinforcement learning trained in simulation and tested on Earth. This hands-on space experience is extremely rare and gave them practical knowledge that informed their current commercial venture.
    5. Hardware Challenges Require Innovative Engineering Solutions: Space presents unique technical challenges including radiation-induced single event upsets that can reset processors for up to 10 seconds, requiring "passive safe" trajectories that won't cause collisions even during system resets. Unlike traditional space companies that spend $100,000 on radiation-hardened processors, Orbital Robotics uses automotive-grade components made radiation-tolerant through smart software and electrical design, enabling cost-effective operations while maintaining safety. (A generic software fault-tolerance sketch follows these insights.)
    6. Space Manufacturing Supply Chain Constraints: The space industry faces significant manufacturing bottlenecks with 24-week lead times for space-grade components and limited suppliers serving multiple companies simultaneously. This creates challenges for scaling production - Orbital Robotics needs to manufacture 30 robotic arms per year within a few years. They've partnered with manufacturers who previously worked on Blue Origin's rocket engines to address these supply chain limitations and achieve the scale necessary for their ambitious deployment timeline.
    7. Emerging Space Economy Beyond Communications: While current commercial space activities focus primarily on communications satellites (with SpaceX Starlink holding 60% market share) and Earth observation, new sectors are emerging including AI data centers in space and orbital manufacturing. The convergence of AI, robotics, and space technology is enabling more sophisticated autonomous operations, from predictive maintenance of rocket engines using sensor data to complex orbital maneuvering and satellite servicing that was previously impossible with traditional control methods.
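
    The episode mentions making automotive-grade parts radiation-tolerant "through smart software and electrical design." One classic software pattern for surviving single-event upsets is triple modular redundancy with majority voting; the sketch below shows the generic idea and is not a claim about Orbital Robotics' specific methods.

        def tmr_read(copies):
            # Majority vote over three redundant copies of a value. A
            # single-event upset can flip bits in one copy; as long as only
            # one copy is corrupted at a time, the vote recovers the original.
            a, b, c = copies
            if a == b or a == c:
                return a
            if b == c:
                return b
            raise RuntimeError("no majority: fall back to safe mode / reset")

        class TMRRegister:
            # Store every value three times; scrub (repair) on each read.
            def __init__(self, value=0):
                self.copies = [value, value, value]

            def write(self, value):
                self.copies = [value, value, value]

            def read(self):
                value = tmr_read(self.copies)
                self.copies = [value, value, value]   # repair the upset copy
                return value

        reg = TMRRegister(42)
        reg.copies[1] ^= 0b100     # simulate a radiation-induced bit flip
        assert reg.read() == 42    # the majority vote masks the upset

    The "passive safe" trajectory idea mentioned above is the dynamics-level analogue: even if the processor resets entirely, the planned motion remains collision-free.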

    59 min
  8. Episode #516: China’s AI Moment, Functional Code, and a Post-Centralized World

    12/22/2025


    In this episode, Stewart Alsop sits down with Joe Wilkinson of Artisan Growth Strategies to talk through how vibe coding is changing who gets to build software, why functional programming and immutability may be better suited for AI-written code, and how tools like LLMs are reshaping learning, work, and curiosity itself. The conversation ranges from Joe’s experience living in China and his perspective on Chinese AI labs like DeepSeek, Kimi, Minimax, and GLM, to mesh networks, Raspberry Pi–powered infrastructure, decentralization, and what sovereignty might mean in a world where intelligence is increasingly distributed. They also explore hallucinations, AlphaGo’s Move 37, and why creative “wrongness” may be essential for real breakthroughs, along with the tension between centralized power and open access to advanced technology. You can find more about Joe’s work at https://artisangrowthstrategies.com and follow him on X at https://x.com/artisangrowth. Check out this GPT we trained on the conversation.

    Timestamps
    00:00 – Vibe coding as a new learning unlock, China experience, information overload, and AI-powered ingestion systems
    05:00 – Learning to code late, Exercism, syntax friction, AI as a real-time coding partner
    10:00 – Functional programming, Elixir, immutability, and why AI struggles with mutable state
    15:00 – Coding metaphors, “spooky action at a distance,” and making software AI-readable
    20:00 – Raspberry Pi, personal servers, mesh networks, and peer-to-peer infrastructure
    25:00 – Curiosity as activation energy, tech literacy gaps, and AI-enabled problem solving
    30:00 – Knowledge work superpowers, decentralization, and small groups reshaping systems
    35:00 – Open source vs open weights, Chinese AI labs, data ingestion, and competitive dynamics
    40:00 – Power, safety, and why broad access to AI beats centralized control
    45:00 – Hallucinations, AlphaGo’s Move 37, creativity, and logical consistency in AI
    50:00 – Provenance, epistemology, ontologies, and risks of closed-loop science
    55:00 – Centralization vs decentralization, sovereign countries, and post-global-order shifts
    01:00:00 – U.S.–China dynamics, war skepticism, pragmatism, and cautious optimism about the future

    Key Insights
    1. Vibe coding fundamentally lowers the barrier to entry for technical creation by shifting the focus from syntax mastery to intent, structure, and iteration. Instead of learning code the traditional way and hitting constant friction, AI lets people learn by doing, correcting mistakes in real time, and gradually building mental models of how systems work, which changes who gets to participate in software creation.
    2. Functional programming and immutability may be better aligned with AI-written code than object-oriented paradigms because they reduce hidden state and unintended side effects. By making data flows explicit and preventing “spooky action at a distance,” immutable systems are easier for both humans and AI to reason about, debug, and extend, especially as code becomes increasingly machine-authored. (See the sketch after this list.)
    3. AI is compressing the entire learning stack, from software to physical reality, enabling people to move fluidly between abstract knowledge and hands-on problem solving. Whether fixing hardware, setting up servers, or understanding networks, the combination of curiosity and AI assistance turns complex systems into navigable terrain rather than expert-only domains.
    4. Decentralized infrastructure like mesh networks and personal servers becomes viable when cognitive overhead drops. What once required extreme dedication or specialist knowledge can now be done by small groups, meaning that relatively few motivated individuals can meaningfully change communication, resilience, and local autonomy without waiting for institutions to act.
    5. Chinese AI labs are likely underestimated because they operate with different constraints, incentives, and cultural inputs. Their openness to alternative training methods, massive data ingestion, and open-weight strategies creates competitive pressure that limits monopolistic control by Western labs and gives users real leverage through choice.
    6. Hallucinations and “mistakes” are not purely failures but potential sources of creative breakthroughs, similar to AlphaGo’s Move 37. If AI systems are overly constrained to consensus truth or authority-approved outputs, they risk losing the capacity for novel insight, suggesting that future progress depends on balancing correctness with exploratory freedom.
    7. The next phase of decentralization may begin with sovereign countries before sovereign individuals, as AI enables smaller nations to reason from first principles in areas like medicine, regulation, and science. Rather than a collapse into chaos, this points toward a more pluralistic world where power, knowledge, and decision-making are distributed across many competing systems instead of centralized authorities.
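
    A small stand-in for the immutability point in the second insight. The episode discusses Elixir; plain Python is used here for illustration. A pure function returns a new value instead of mutating shared state, so nothing changes "at a distance."

        from dataclasses import dataclass, replace

        @dataclass(frozen=True)
        class Order:
            items: tuple
            total: int

        def add_item(order, item, price):
            # Pure function: builds a new Order, never mutates its input.
            return replace(order,
                           items=order.items + (item,),
                           total=order.total + price)

        before = Order(items=("book",), total=20)
        after = add_item(before, "pen", 3)

        assert before.total == 20   # untouched: no hidden state changed
        assert after.total == 23    # all change is explicit in the return value

    Because every change is visible in a return value, both a human reviewer and an AI code generator can reason about the data flow locally, which is the alignment argument the insight makes.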

    1h 5m
4.9 out of 5 (69 Ratings)
