Crazy Wisdom

In his series "Crazy Wisdom," Stewart Alsop explores cutting-edge topics, particularly in the realm of technology, such as Urbit and artificial intelligence. Alsop embarks on a quest for meaning, engaging with others to expand his own understanding of reality and that of his audience. The topics covered in "Crazy Wisdom" are diverse, ranging from emerging technologies to spirituality, philosophy, and general life experiences. Alsop's unique approach aims to make connections between seemingly unrelated subjects, tying together ideas in unconventional ways.

  1. Episode #530: The Hidden Architecture: Why Your Startup Needs an Ontology (Before It's Too Late)

    3D AGO


    In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Larry Swanson, a knowledge architect, community builder, and host of the Knowledge Graph Insights podcast. They explore the relationship between knowledge graphs and ontologies, why these technologies matter in the age of AI, and how symbolic AI complements the current wave of large language models. The conversation traces the history of neuro-symbolic AI from its origins at Dartmouth in 1956 through the semantic web vision of Tim Berners-Lee, examining why knowledge architecture remains underappreciated despite being deployed at major enterprises like Netflix, Amazon, and LinkedIn. Swanson explains how RDF (Resource Description Framework) enables both machines and humans to work with structured knowledge in ways that relational databases can't, while Alsop shares his journey from knowledge management director to understanding the practical necessity of ontologies for business operations. They discuss the philosophical roots of the field, the separation between knowledge management practitioners and knowledge engineers, and why startups often overlook these approaches until scale demands them. You can find Larry's podcast at KGI.fm or search for Knowledge Graph Insights on Spotify and YouTube.

    Timestamps

    00:00 Introduction to Knowledge Graphs and Ontologies
    01:09 The Importance of Ontologies in AI
    04:14 Philosophy's Role in Knowledge Management
    10:20 Debating the Relevance of RDF
    15:41 The Distinction Between Knowledge Management and Knowledge Engineering
    21:07 The Human Element in AI and Knowledge Architecture
    25:07 Startups vs. Enterprises: The Knowledge Gap
    29:57 Deterministic vs. Probabilistic AI
    32:18 The Marketing of AI: A Historical Perspective
    33:57 The Role of Knowledge Architecture in AI
    39:00 Understanding RDF and Its Importance
    44:47 The Intersection of AI and Human Intelligence
    50:50 Future Visions: AI, Ontologies, and Human Behavior

    Key Insights

    1. Knowledge Graphs Combine Structure and Instances Through Ontological Design. A knowledge graph is built using an ontology that describes a specific domain you want to understand or work with. It includes both an ontological description of the terrain—defining what things exist and how they relate to one another—and instances of those things mapped to real-world data. This combination of abstract structure and concrete examples is what makes knowledge graphs powerful for discovery, question-answering, and enabling agentic AI systems. Not everyone agrees on the precise definition, but this understanding represents the practical approach most knowledge architects use when building these systems.

    2. Ontology Engineering Has Deep Philosophical Roots That Inform Modern Practice. The field draws heavily from classical philosophy, particularly ontology (the nature of what you know), epistemology (how you know what you know), and logic. These millennia-old philosophical frameworks provide the rigorous foundation for modern knowledge representation. Living in Heidelberg surrounded by philosophers, Swanson has discovered how much of knowledge graph work connects upstream to these philosophical roots. This philosophical grounding becomes especially important during times when institutional structures are collapsing, as we need to create new epistemological frameworks for civilization—knowledge management and ontology become critical tools for restructuring how we understand and organize information.

    3. The Semantic Web Vision Aimed to Transform the Internet Into a Distributed Database. Twenty-five years ago, Tim Berners-Lee, Jim Hendler, and Ora Lassila published a landmark article in Scientific American proposing the semantic web. While Berners-Lee had already connected documents across the web through HTML and HTTP, the semantic web aimed to connect all the data—essentially turning the internet into a giant database. This vision led to the development of RDF (Resource Description Framework), which grew out of W3C standardization and DARPA-funded research and provides the technical foundation for building knowledge graphs and ontologies. The origin story involved solving simple but important problems, like disambiguating whether "Cook" referred to a verb, a noun, or a person's name at an academic conference.

    4. Symbolic AI and Neural Networks Represent Complementary Approaches, Like Fast and Slow Thinking. Drawing on Kahneman's "thinking fast and slow" framework, LLMs represent the "fast brain"—learning monsters that can process enormous amounts of information and recognize patterns through natural language interfaces. Symbolic AI and knowledge graphs represent the "slow brain"—capturing actual knowledge and facts that can counter hallucinations and provide deterministic, explainable reasoning. This complementarity is driving the re-emergence of neuro-symbolic AI, which combines both approaches. The fundamental distinction is that symbolic AI systems are deterministic and can be fully explained, while LLMs are probabilistic and stochastic, making them unsuitable for applications requiring absolute reliability, such as industrial robotics or pharmaceutical research.

    5. Knowledge Architecture Remains Underappreciated Despite Powering Major Enterprises. While machine learning engineers currently receive most of the attention and budget, knowledge graphs actually power systems at LinkedIn (the Economic Graph), Amazon (the Product Graph), Netflix, Meta, and most major enterprises. The technology has been described as "the most astoundingly successful failure in the history of technology"—the semantic web vision seemed to fail, yet more than half of web pages now contain RDF-formatted semantic markup through schema.org, and every major enterprise uses knowledge graph technology in the background. Knowledge architects remain underappreciated partly because the work is cognitively difficult, requires talking to people (which engineers often avoid), and most advanced practitioners have PhDs in computer science, logic, or philosophy.

    6. RDF's Simple Subject-Predicate-Object Structure Enables Meaning and Data Linking. Unlike relational databases, which store data in tables with rows and columns, RDF uses the simplest linguistic structure: subject-predicate-object (like "Larry knows Stewart"). Each element has a unique URI identifier, which permits precise meaning and enables linked data across systems. This graph structure makes it much easier to connect data after the fact compared to navigating tabular structures in relational databases. On top of RDF sits an entire stack of technologies including schema languages, query languages, ontological languages, and constraint languages—everything needed to turn data into actionable knowledge. The goal is inferring or articulating knowledge from RDF-structured data.

    7. The Future Requires Decoupled, Modular Architectures Combining Multiple AI Approaches. The vision for the future involves separation of concerns through microservices-like architectures where different systems handle what they do best. LLMs excel at discovering possibilities and generating lists, while knowledge graphs excel at articulating human-vetted, deterministic versions of that information that systems can reliably use. Every one of Swanson's 300 podcast interviews over ten years ultimately concludes that regardless of technology, success comes down to human beings, their behavior, and the cultural changes needed to implement systems. The assumption that we can simply eliminate people from processes misses that huma...
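The subject-predicate-object structure described above can be sketched in a few lines of plain Python. This is a minimal, illustrative stand-in for a real RDF triplestore: the example.org URIs, the two-layer triple set, and the `match` helper are all invented for illustration, not part of any standard library.

```python
# Minimal sketch of RDF-style triples, assuming illustrative example.org URIs.
# Each statement is a (subject, predicate, object) triple; URIs give each
# element a globally unique, linkable identity.
EX = "http://example.org/"

triples = {
    # Ontology layer: what kinds of things exist and how they relate
    (EX + "Person", EX + "type", EX + "Class"),
    (EX + "knows", EX + "domain", EX + "Person"),
    # Instance layer: concrete data mapped onto that structure
    (EX + "Larry", EX + "type", EX + "Person"),
    (EX + "Stewart", EX + "type", EX + "Person"),
    (EX + "Larry", EX + "knows", EX + "Stewart"),
}

def match(s=None, p=None, o=None):
    """Return triples matching a pattern; None acts as a wildcard."""
    return [
        t for t in triples
        if (s is None or t[0] == s)
        and (p is None or t[1] == p)
        and (o is None or t[2] == o)
    ]

# Whom does Larry know?
for _, _, obj in match(s=EX + "Larry", p=EX + "knows"):
    print(obj)  # http://example.org/Stewart
```

The pattern-matching query is the graph analogue of a SQL join, but because every identifier is a URI, two independently built graphs can be merged and queried together without schema migration, which is the "linked data" property the episode highlights.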

    57 min
  2. Episode #529: Semantic Sovereignty: Why Knowledge Graphs Beat $100 Billion Context Graphs

    6D AGO


    In this episode of the Crazy Wisdom Podcast, host Stewart Alsop explores the complex world of context and knowledge graphs with guest Youssef Tharwat, the founder of NoodlBox, who is building a "Git for context." Their conversation spans from the philosophical nature of context and its crucial role in AI development to the technical challenges of creating deterministic tools for software development. Tharwat explains how his product creates portable, versionable knowledge graphs from code repositories, leveraging the semantic relationships already present in programming languages to provide agents with better contextual understanding. They discuss the limitations of large context windows, the advantages of Rust for AI-assisted development, Anthropic's recent acquisition of Bun, and the broader geopolitical implications of the AI race between big tech companies and open-source alternatives. The conversation also touches on the sustainability of current AI business models and the potential for more efficient, locally run solutions to challenge the dominance of compute-heavy approaches. For more information about NoodlBox and to join the beta, visit NoodlBox.io.
    Timestamps

    00:00 Stewart introduces Youssef Tharwat, founder of NoodlBox, building context management tools for programming
    05:00 Context as relevant information for reasoning; importance when hitting coding barriers
    10:00 Knowledge graphs enable semantic traversal through meaning vs keywords/files
    15:00 Deterministic vs probabilistic systems; why critical applications need 100% reliability
    20:00 CLI tool makes knowledge graphs portable, versionable artifacts with code repos
    25:00 Compiler front-ends, syntax trees, and Rust's superior feedback for AI-assisted coding
    30:00 Anthropic's Bun acquisition signals potential shift toward runtime compilation and graph-based context
    35:00 Open source vs proprietary models; user frustration with rate limits and subscription tactics
    40:00 Singularity path vs distributed sovereignty of developers building alternative architectures
    45:00 Global economics and why brute force compute isn't sustainable worldwide
    50:00 Corporate inefficiencies vs independent engineering; changing workplace dynamics
    55:00 February open beta for NoodlBox.io; vision for new development tool standards

    Key Insights

    1. Context is semantic information that enables proper reasoning, and traditional LLM approaches miss the mark. Youssef defines context as the information you need to reason correctly about something. He argues that larger context windows don't scale because quality degrades with more input, similar to human cognitive limitations. This insight challenges the Silicon Valley approach of throwing more compute at the problem and suggests that semantic separation of information is more optimal than brute-force methods.

    2. Code naturally contains semantic boundaries that can be modeled into knowledge graphs without LLM intervention. Unlike other domains where knowledge graphs require complex labeling, code already has inherent relationships like function calls, imports, and dependencies. Youssef leverages these existing semantic structures to automatically build knowledge graphs, making his approach deterministic rather than probabilistic. This provides the reliability that software development has historically required.

    3. Knowledge graphs can be made portable, versionable, and shareable as artifacts alongside code repositories. Youssef's vision treats context as a first-class citizen in version control, similar to how Git manages code. Each commit gets a knowledge graph snapshot, allowing developers to see conceptual changes over time and share semantic understanding with collaborators. This transforms context from an ephemeral concept into a concrete, manageable asset.

    4. The dependency problem in modern development can be solved through pre-indexed knowledge graphs of popular packages. Rather than agents struggling with outdated API documentation, Youssef pre-indexes popular npm packages into knowledge graphs that automatically integrate with developers' projects. This federated approach ensures agents understand exact APIs and current versions, eliminating common frustrations with deprecated methods and unclear documentation.

    5. Rust provides superior feedback loops for AI-assisted programming due to its explicit compiler constraints. Youssef rebuilt his tool multiple times in different languages, ultimately settling on Rust because its picky compiler provides constant feedback to LLMs about subtle issues. This creates a natural quality-control mechanism that helps AI generate more reliable code, making Rust an ideal candidate for AI-assisted development workflows.

    6. The current AI landscape faces a fundamental tension between expensive centralized models and the need for global accessibility. The conversation reveals growing frustration with rate limiting and subscription costs from major providers like Anthropic and Google. Youssef believes something must fundamentally change because $200-300 monthly plans only serve a fraction of the world's developers, creating pressure for more efficient architectures and open alternatives.

    7. Deterministic tooling built on semantic understanding may provide a competitive advantage against probabilistic AI monopolies. While big tech companies pursue brute-force scaling with massive data centers, Youssef's approach suggests that clever architecture using existing semantic structures could level the playing field. This represents a broader philosophical divide between the "singularity" path of infinite compute and the "disagreeably autistic engineer" path of elegant solutions that work locally and affordably.
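The claim that code's inherent relationships (imports, function calls) can be extracted deterministically, with no LLM in the loop, can be illustrated with Python's standard `ast` module. This is only a toy sketch of the idea under stated assumptions; the toy source snippet and the edge format are invented here, and NoodlBox's actual (Rust-based) extraction is far more involved.

```python
import ast

# Toy source snippet; any parseable Python would do.
source = """
import math

def area(r):
    return math.pi * r * r

def report(r):
    print(area(r))
"""

tree = ast.parse(source)
edges = []  # (subject, relation, object) edges of the code graph

for node in ast.walk(tree):
    if isinstance(node, ast.Import):
        # Record module-level import relationships
        for alias in node.names:
            edges.append(("module", "imports", alias.name))
    elif isinstance(node, ast.FunctionDef):
        # Record which plain names each function calls
        for inner in ast.walk(node):
            if isinstance(inner, ast.Call) and isinstance(inner.func, ast.Name):
                edges.append((node.name, "calls", inner.func.id))

print(edges)
```

Because the edges come straight from the parse tree, the same source always yields the same graph, which is exactly the determinism the episode contrasts with probabilistic LLM retrieval.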

    56 min
  3. Episode #528: Fighting the AI Flood: From Information Overload to Family Sovereignty

    FEB 2


    In this episode of the Crazy Wisdom Podcast, host Stewart Alsop sits down with Adrian Martinca, founder of the Arc of Dreams and the Open Doors movements, as well as Kids Dreams Matter, to explore how artificial intelligence is fundamentally reshaping human consciousness and family structures. Their conversation spans from the karmic lessons of our technological age to practical frameworks for protecting children from what Martinca calls the "AI flood" - examining how AI functions as an alien intelligence that has become the primary caregiver for children through 10.5 hours of daily screen exposure, and discussing Martinca's vision for inverting our relationship with technology through collective dreams and family-centered data management systems. For those interested in learning more about Martinca's work to reshape humanity's relationship with AI, visit opendoorsmovement.org.

    Timestamps

    00:00 Introduction to Adrian Martinca
    00:17 The Future and Human Choice
    02:03 Generational Trauma and Its Impact
    05:19 Understanding Consciousness and Suffering
    09:11 AI, Social Media, and Emotional Manipulation
    20:03 The AI Nexus Point and National Security
    31:13 The Librarian Analogy: Understanding AI's Role
    39:28 The Arc: A Framework for Future Generations
    47:57 Empowering Children in an AI-Driven World
    57:15 Reclaiming Agency in the Age of AI

    Key Insights

    1. AI as Alien Intelligence, Not Artificial Intelligence: Martinca reframes AI as fundamentally alien rather than artificial, arguing that because it possesses knowledge no human could have (like knowing "every book in the library"), it should be treated as an immigrant that must be assimilated into society rather than governed. This alien intelligence already controls social media algorithms and is becoming the primary caregiver of children through 10.5 hours of daily screen time.

    2. The AI Nexus Point as National Security Risk: Modern warfare has shifted to information-based attacks where hostile nations can deploy millions of fake accounts to manipulate AI algorithms, influencing how real citizens are targeted with content. This creates a vulnerability where foreign powers can break apart family units and exhaust populations without traditional military engagement, making people too tired and divided to resist.

    3. Generational Trauma as the Foundation of Consciousness: Drawing from Kundalini philosophy, Martinca explains that the first layer of consciousness development begins with inherited generational trauma. Children absorb their parents' unresolved suffering unconsciously, creating patterns that shape their worldview. This makes families both the source of early wounds and the pathway to healing, as parents witness their trauma affecting those they love most.

    4. The Choice Between Fear-Based and Love-Based Futures: Despite appearing chaotic, our current moment represents a critical choice point where humanity can collectively decide to function as a family. The fundamental choice underlying all decisions is alleviating suffering for our children and loved ones, but technology has created reference-based choices driven by doubt and fear rather than genuine human values.

    5. Social Media's Scientific Method Problem: Current platforms use the scientific method to maximize engagement, but the only reliably measurable emotions through screens are doubt and fear, because positive emotions like love and hope lead people to put their devices down and connect in person. This creates systems that systematically promote negative emotional states to maintain user attention and generate revenue.

    6. The Arc of Dreams as Collective Vision: Martinca proposes a new data management system where families challenge children to envision their ideal future as heroes, collecting these dreams to create a unified vision for humanity. This would shift from bureaucratic fund allocation to child-centered prioritization, using children's visions of reduced suffering to guide AI development and social policy.

    7. Agency vs. Overwhelm in the Information Age: While some people develop agency through AI exposure and become more capable, many others experience information overload leading to inaction, confusion, depression, and even suicide. The key intervention is reframing dreams from material outcomes to states of being, helping children maintain their sense of self and agency rather than becoming passive consumers of algorithmic content.

    1h 3m
  4. Episode #527: Breaking the FinTech Echo Chamber: Tommy Yu's Behavioral Finance Operating System

    JAN 30


    Stewart Alsop interviews Tomas Yu, CEO and founder of Turn-On Financial Technologies, on this episode of the Crazy Wisdom Podcast. They explore how Yu's company is revolutionizing the closed-loop payment ecosystem by creating a universal float system that allows gift card credits to be used across multiple merchants rather than being locked to a single business like Starbucks. The conversation covers the complexities of fintech regulation, the differences between open- and closed-loop payment systems, and Yu's unique background that combines Korean martial arts discipline with Mexican polo culture. They also dive into Yu's passion for polo, discussing the intimate relationship between rider and horse, the sport's elitist tendencies in different regions, and his efforts to build polo communities from El Paso to New Mexico. Find Tomas on LinkedIn under Tommy (TJ) Alvarez.

    Timestamps

    00:00 Introduction to TurnOn Technologies
    02:45 Understanding Float and Its Implications
    05:45 Decentralized Gift Card System
    08:39 Navigating the FinTech Landscape
    11:19 The Role of Merchants and Consumers
    14:15 Challenges in the Gift Card Market
    17:26 The Future of Payment Systems
    23:12 Understanding Payment Systems: Stripe and POS
    26:47 Regulatory Landscape: KYC and AML in Payments
    27:55 The Impact of Economic Conditions on Financial Systems
    36:39 Transitioning from Industrial to Information Age Finance
    38:18 Curiosity and Resourcefulness in the Information Age
    45:09 Social Media and the Dynamics of Attention
    46:26 From Restaurant to Polo: A Journey of Mentorship
    49:50 The Thrill of Polo: Learning and Obsession
    54:53 Building a Team: Breaking Elitism in Polo
    01:00:29 The Unique Bond: Understanding the Horse-Rider Relationship
    01:05:21 Polo Horses: Choosing the Right Breed for the Game

    Key Insights

    1. Turn-On Technologies is revolutionizing payment systems through behavioral finance by creating a decentralized "float" system. Unlike traditional gift cards that lock customers into single merchants like Starbucks, Turn-On allows universal credit that works across their entire merchant ecosystem. This addresses the massive gift card market, where companies like Starbucks hold billions in customer funds that can only be used at their locations.

    2. The financial industry operates on an exclusionary "closed loop" versus "open loop" system that creates significant friction and fees. Closed-loop systems keep money within specific ecosystems without conversion to cash, while open-loop systems allow cash withdrawal but trigger heavy regulation. Every transaction through traditional payment processors like Stripe can cost merchants 3-8% in fees, representing a massive burden on businesses.

    3. Point-of-sale systems function as the financial bloodstream and credit-scoring mechanism for businesses. These systems track all card transactions and serve as the primary data source for merchant lending decisions. The gap between POS records and bank deposits reveals cash transactions that businesses may not be reporting, making POS data crucial for assessing business creditworthiness and loan risk.

    4. Traditional FinTech professionals often miss obvious opportunities due to ego and institutional thinking. Yu encountered resistance from established FinTech experts who initially dismissed his gift card-focused approach, despite the trillion-dollar market size. The financial industry's complexity is sometimes artificially maintained to exclude outsiders rather than serve genuine regulatory purposes.

    5. The information age is creating a fundamental divide between curious, resourceful individuals and those stuck in credentialist systems. With AI and LLMs amplifying human capability, people who ask the right questions and maintain curiosity will become exponentially more effective. Meanwhile, those relying on traditional credentials without underlying curiosity will fall further behind, creating unprecedented economic and social divergence.

    6. Polo serves as a powerful business metaphor and relationship-building tool that mirrors modern entrepreneurial challenges. Just as mixed martial arts evolved from testing individual disciplines, business success now requires being competent across multiple areas rather than excelling in just one specialty. The sport also creates unique networking opportunities and teaches valuable lessons about partnership between human and animal.

    7. International financial systems reveal how governments use complexity and capital controls to maintain power over citizens. Yu's observations about Argentina's financial restrictions and the prevalence of cash economies in Latin America illustrate how regulatory complexity often serves political rather than protective purposes, creating opportunities for alternative financial systems that provide genuine value to users.
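The universal-float idea, credit loaded once and spendable at any participating merchant rather than locked to the issuer, can be sketched as a toy ledger. The class and field names here are invented for illustration and do not reflect Turn-On's actual architecture, settlement flow, or regulatory handling.

```python
# Minimal sketch of a universal gift-card float, assuming a single shared ledger.
class FloatLedger:
    def __init__(self):
        self.customer_credit = {}   # customer -> unspent credit (the pooled float)
        self.merchant_payable = {}  # merchant -> amount owed out of the float

    def load(self, customer, amount):
        """Customer buys credit; funds sit in the pooled float."""
        self.customer_credit[customer] = self.customer_credit.get(customer, 0) + amount

    def spend(self, customer, merchant, amount):
        """Credit is honored at ANY participating merchant, not just the issuer."""
        if self.customer_credit.get(customer, 0) < amount:
            raise ValueError("insufficient credit")
        self.customer_credit[customer] -= amount
        self.merchant_payable[merchant] = self.merchant_payable.get(merchant, 0) + amount

ledger = FloatLedger()
ledger.load("alice", 50)
ledger.spend("alice", "coffee_shop", 20)
ledger.spend("alice", "bookstore", 15)   # same credit, different merchant
print(ledger.customer_credit["alice"])   # 15
print(ledger.merchant_payable)           # {'coffee_shop': 20, 'bookstore': 15}
```

The contrast with a traditional closed loop is the `spend` method: a Starbucks-style card would reject any merchant other than its issuer, whereas here the float is a shared pool that any ecosystem merchant can draw against.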

    51 min
  5. Episode #526: From Pythagoreans to AI: How Beauty Became the Foundation of Everything

    JAN 26


    In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Dima Zhelezov, a philosopher at SQD.ai, to explore the fascinating intersections of cryptocurrency, AI, quantum physics, and the future of human knowledge. The conversation covers everything from Zhelezov's work building decentralized data lakes for blockchain data to deep philosophical questions about the nature of mathematical beauty, the Renaissance ideal of curiosity-driven learning, and whether AI agents will eventually develop their own form of consciousness. Stewart and Dima examine how permissionless databases are making certain activities "unenforceable" rather than illegal, the paradox of mathematics' incredible accuracy in describing the physical world, and why we may be entering a new Renaissance era where curiosity becomes humanity's most valuable skill as AI handles traditional tasks. You can find more about Dima's work at SQD.ai and follow him on X at @dizhel.

    Timestamps

    00:00 Introduction to Decentralized Data Lakes
    02:55 The Evolution of Blockchain Data Management
    05:55 The Intersection of Blockchain and Traditional Databases
    08:43 The Role of AI in Transparency and Control
    11:51 AI Autonomy and Human Interaction
    15:05 Curiosity in the Age of AI
    17:54 The Renaissance of Knowledge and Learning
    20:49 Mathematics, Beauty, and Discovery
    27:30 The Evolution of Mathematical Thought
    30:28 Quantum Mechanics and Mathematical Predictions
    33:43 The Search for a Unified Theory
    38:57 The Role of Gravity in Physics
    41:23 The Shift from Physics to Biology
    46:19 The Future of Human Interaction in a Digital Age

    Key Insights

    1. Blockchain as a Permissionless Database Solution - Traditional blockchains were designed for writing transactions but not for efficiently reading data. Dima's company SQD.ai built a decentralized data lake that maintains blockchain's key properties (open read/write access, verifiable, no registration required) while solving the database problem. This enables applications like Polymarket to exist because there's "no one to subpoena" - the permissionless nature makes enforcement impossible even when activities might be regulated in traditional systems.

    2. The Convergence of On-Chain and Off-Chain Data - The future won't have distinct "blockchain applications" versus traditional apps. Instead, we'll see seamless integration where users don't even know they're using blockchain technology. The key differentiator is that blockchain provides open read and write access without permission, which becomes essential for financial or politically sensitive applications that governments might try to shut down through traditional centralized infrastructure.

    3. AI Autonomy and the Illusion of Control - We're rapidly approaching full autonomy of AI agents that can transact and analyze information independently through blockchain infrastructure. While humans still think anthropocentrically about AI as companions or tools, these systems may develop consciousness or motivations completely alien to human understanding. This creates a dangerous "illusion of control" where we can operationalize AI systems without truly comprehending their decision-making processes.

    4. Curiosity as the Essential Future Skill - In a world of infinite knowledge and AI capabilities, curiosity becomes the primary limiting factor for human progress. Traditional hard and soft skills will be outsourced to AI, making the ability to ask good questions and pursue interests through Socratic dialogue with AI the most valuable human capacity. This mirrors the Renaissance ideal of the polymath, now enabled by AI that allows non-linear exploration of knowledge rather than traditional linear textbook learning.

    5. The Beauty Principle in Mathematical Discovery - Mathematics exhibits an "unreasonable effectiveness" where theories developed purely abstractly turn out to predict real-world phenomena with extraordinary accuracy. Quantum chromodynamics, developed through mathematical beauty and elegance, can predict particle physics experiments to incredible precision. This suggests either that mathematical truths exist independently for AI to discover, or that aesthetic principles may be fundamental organizing forces in the universe.

    6. The Physics Plateau and Biological Shift - Modern physics faces a unique problem where the Standard Model works too well - it explains everything we can currently measure except gravity, but we can't create experiments to test the edge cases where the theory should break down. This has led to a decline in physics' prominence since the 1960s, with scientific excitement shifting toward biology and, now, AI and crypto, where breakthrough discoveries remain accessible.

    7. Two Divergent Futures: Abundance vs. Dystopia - We face a stark choice between two AI futures: a super-abundant world where AI eliminates scarcity and humans pursue curiosity, beauty, and genuine connection; or a dystopian scenario where 0.01% capture all AI-generated value while everyone else survives on UBI, becoming "degraded to zombies" providing content for AI models. The outcome depends on whether we prioritize human flourishing or power concentration during this critical technological transition.

    57 min
  6. Episode #525: The Billion-Dollar Architecture Problem: Why AI's Innovation Loop is Stuck

    JAN 23


    In this episode of the Crazy Wisdom podcast, host Stewart Alsop welcomes Roni Burd, a data and AI executive with extensive experience at Amazon and Microsoft, for a deep dive into the evolving landscape of data management and artificial intelligence in enterprise environments. Their conversation explores the longstanding challenges organizations face with knowledge management and data architecture, from the traditional bronze-silver-gold data processing pipeline to how AI agents are revolutionizing how people interact with organizational data without needing SQL or Python expertise. Burd shares insights on the economics of AI implementation at scale, the debate between one-size-fits-all models versus specialized fine-tuned solutions, and the technical constraints that prevent companies like Apple from upgrading services like Siri to modern LLM capabilities. He also discusses the future of inference optimization and the hundreds-of-millions-of-dollars cost barrier that makes architectural experimentation in AI uniquely expensive compared to other industries.

    Timestamps

    00:00 Introduction to Data and AI Challenges
    03:08 The Evolution of Data Management
    05:54 Understanding Data Quality and Metadata
    08:57 The Role of AI in Data Cleaning
    11:50 Knowledge Management in Large Organizations
    14:55 The Future of AI and LLMs
    17:59 Economics of AI Implementation
    29:14 The Importance of LLMs for Major Tech Companies
    32:00 Open Source: Opportunities and Challenges
    35:19 The Future of AI Inference and Hardware
    43:24 Optimizing Inference: The Next Frontier
    49:23 The Commercial Viability of AI Models

    Key Insights

    1. Data Architecture Evolution: The industry has evolved through bronze-silver-gold data layers, where bronze is raw data, silver is cleaned/processed data, and gold is business-ready datasets. However, this creates bottlenecks as stakeholders lose access to original data during the cleaning process, making metadata and data cataloging increasingly critical for organizations.

    2. AI Democratizing Data Access: LLMs are breaking down technical barriers by allowing business users to query data in plain English without needing SQL, Python, or dashboarding skills. This represents a fundamental shift from requiring intermediaries to direct stakeholder access, though the full implications remain speculative.

    3. Economics Drive AI Architecture Decisions: Token costs and latency requirements are major factors determining AI implementation. Companies like Meta likely need their own models because paying per token for billions of social media interactions would be economically unfeasible, driving the need for self-hosted solutions.

    4. One Model Won't Rule Them All: Despite initial hopes for universal models, the reality points toward specialized models for different use cases. This is driven by economics (smaller models for simple tasks), performance requirements (millisecond response times), and industry-specific needs (medical, military terminology).

    5. Inference Is the Commercial Battleground: The majority of commercial AI value lies in inference rather than training. Current GPUs, while specialized for graphics and matrix operations, may still be too general for optimal inference performance, creating opportunities for even more specialized hardware.

    6. Open Source vs. Open Weights Distinction: True open source in AI means access to the architecture for debugging and modification, while "open weights" enables fine-tuning and customization. This distinction is crucial for enterprise adoption, as open weights provide the flexibility companies need without starting from scratch.

    7. Architecture Innovation Faces Expensive Testing Loops: Unlike database optimization, where query plans can be easily modified, testing new AI architectures requires expensive retraining cycles costing hundreds of millions of dollars. This creates a potential innovation bottleneck, similar to aerospace industries where testing new designs is prohibitively expensive.
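The bronze/silver/gold layering described above can be sketched in a few lines of plain Python. This is a toy illustration only; the sample records are invented, and real pipelines run on warehouses, catalogs, and orchestration tools rather than in-memory lists.

```python
# Bronze: raw events exactly as ingested (typos, nulls and all)
bronze = [
    {"user": "a", "amount": "10.0"},
    {"user": "a", "amount": "5.5"},
    {"user": None, "amount": "3.0"},   # malformed record
    {"user": "b", "amount": "7.25"},
]

# Silver: cleaned and typed; malformed rows dropped
silver = [
    {"user": r["user"], "amount": float(r["amount"])}
    for r in bronze
    if r["user"] is not None
]

# Gold: business-ready aggregate (total spend per user)
gold = {}
for r in silver:
    gold[r["user"]] = gold.get(r["user"], 0.0) + r["amount"]

print(gold)  # {'a': 15.5, 'b': 7.25}
```

Note the bottleneck Burd describes: once the malformed bronze row is filtered out, anyone working only from silver or gold has lost access to it, which is why metadata and cataloging of the upstream layers become critical.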

    54 min
  7. Episode #524: The 500-Year Prophecy: Why Buddhism and AI Are Colliding Right Now

    JAN 19

    Episode #524: The 500-Year Prophecy: Why Buddhism and AI Are Colliding Right Now

    In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Kelvin Lwin for their second conversation exploring the fascinating intersection of AI and Buddhist cosmology. Lwin brings a unique perspective as both a technologist with deep Silicon Valley experience and a serious meditation practitioner who has spent decades studying Buddhist philosophy. Together, they examine how AI development fits into ancient spiritual prophecies, discuss the dangerous allure of LLMs as potential "asura weapons" that can mislead users, and explore verification methods for enlightenment claims in our modern digital age. The conversation ranges from technical discussions about the need for better AI compilers and world models to profound questions about humanity's role in what Lwin sees as an inevitable technological crucible that will determine our collective spiritual evolution. For more information about Kelvin's work on attention training and AI, visit his website at alin.ai. You can also join Kelvin for live meditation sessions twice daily on Clubhouse at clubhouse.com/house/neowise.

    Timestamps
    00:00 Exploring AI and Spirituality
    05:56 The Quest for Enlightenment Verification
    11:58 AI's Impact on Spirituality and Reality
    17:51 The 500-Year Prophecy of Buddhism
    23:36 The Future of AI and Business Innovation
    32:15 Exploring Language and Communication
    34:54 Programming Languages and Human Interaction
    36:23 AI and the Crucible of Change
    39:20 World Models and Physical AI
    41:27 The Role of Ontologies in AI
    44:25 The Asura and Deva: A Battle for Supremacy
    48:15 The Future of Humanity and AI
    51:08 Persuasion and the Power of LLMs
    55:29 Navigating the New Age of Technology

    Key Insights
    1. The Rarity of Polymath AI-Spirituality Perspectives: Kelvin argues that very few people approach AI through spiritual frameworks because doing so requires being a polymath with deep knowledge across multiple domains. Most people specialize in one field, and combining AI expertise with Buddhist cosmology demands significant time, resources, and academic background that few possess.
    2. Traditional Enlightenment Verification vs. Modern Claims: Buddhist traditions have established methods for verifying enlightenment claims, including adherence to the five precepts and overcoming hell rebirth through karmic resolution. Many modern Western practitioners claiming enlightenment fail these traditional tests, often changing the criteria when they can't meet the original requirements.
    3. The 500-Year Buddhist Prophecy and Current Timing: We are approximately 60 years into a prophesied 500-year period in which enlightenment becomes possible again. This "startup phase of Buddhism revival" coincides with technological developments like the internet and AI, which are seen as integral to this spiritual renaissance rather than obstacles to it.
    4. LLMs as UI Solution, Not Reasoning Engine: While LLMs have solved the user-interface problem of capturing human intent, they fundamentally cannot reason or make decisions due to their token-based architecture. The technology works well enough to create the illusion of capability, leading people down an asymptotic path away from true solutions.
    5. The Need for New Programming Paradigms: Current AI development caters too much to human cognitive limitations through familiar programming structures. True advancement requires moving beyond human-readable code toward agent-generated languages that prioritize efficiency over human comprehension, much as compilers already translate high-level code.
    6. AI as Asura Weapon in Spiritual Warfare: From a Buddhist cosmological perspective, AI represents an asura (demon-realm) tool that appears helpful but is fundamentally wasteful and disruptive to human consciousness. Humanity exists as the battleground between divine and demonic forces, with AI serving as a weapon that both sides employ in this cosmic conflict.
    7. 2029 as Critical Convergence Point: Multiple technological and spiritual trends point toward 2029 as the moment when various systems will reach breaking points, forcing humanity either to transcend current limitations or to be consumed by them. This timing aligns with both technological development curves and spiritual prophecies about transformation periods.

    1h 1m
  8. Episode #523: Space Computer: When Your Trusted Execution Environment Needs a Rocket

    JAN 16

    Episode #523: Space Computer: When Your Trusted Execution Environment Needs a Rocket

    In this episode of the Crazy Wisdom podcast, host Stewart Alsop sits down with Daniel Bar, co-founder of Space Computer, a satellite-based secure compute protocol that creates a "root of trust in space" using tamper-resistant hardware for cryptographic applications. The conversation explores the fascinating intersection of space technology, blockchain infrastructure, and trusted execution environments (TEEs), touching on everything from cosmic-radiation-powered random number generators to the future of space-based data centers and Daniel's journey from quantum computing research to building what they envision as the next evolution beyond Ethereum's "world computer" concept. For more information about Space Computer, visit spacecomputer.io, and check out their new podcast "Frontier Pod" on the Space Computer YouTube channel.

    Timestamps
    00:00 Introduction to Space Computer
    02:45 Understanding Layer 1 and Layer 2 in Space Computing
    06:04 Trusted Execution Environments in Space
    08:45 The Evolution of Trusted Execution Environments
    11:59 The Role of Blockchain in Space Computing
    14:54 Incentivizing Satellite Deployment
    17:48 The Future of Space Computing and Its Applications
    20:58 Radiation Hardening and Space Environment Challenges
    23:45 Kardashev Civilizations and the Future of Energy
    26:34 Quantum Computing and Its Implications
    29:49 The Intersection of Quantum and Crypto
    32:26 The Future of Space Computer and Its Vision

    Key Insights
    1. Space-based data centers solve the physical-security problem for Trusted Execution Environments (TEEs). While TEEs provide secure compute through physical isolation, they remain vulnerable to attacks requiring physical access, such as electron-microscope forensics to extract secrets from chips. Placing TEEs in space makes these attack vectors practically impossible, creating the strongest possible security guarantees for cryptographic applications.
    2. The Space Computer architecture uses a hybrid layer approach with space-based settlement and earth-based compute. The layer 1 blockchain operates in space as a settlement layer and smart-contract platform, while layer 2 solutions on earth provide high-performance compute. This design leverages space's security advantages while compensating for the bandwidth and compute constraints of orbital infrastructure through terrestrial augmentation.
    3. True randomness generation becomes possible through cosmic-radiation harvesting. Unlike the pseudo-random number generators used in most blockchain applications today, space-based systems can harvest cosmic radiation as a genuinely stochastic process. This provides pure randomness critical for cryptographic applications like block-producer selection, eliminating the predictability issues that compromise security in earth-based random number generation.
    4. Space compute migration is inevitable as humanity advances toward a Kardashev Type 1 civilization. The progression toward planetary-scale energy control requires space-based infrastructure, including solar collection, orbital cities, and distributed compute networks. This technological evolution makes space-based data centers not just viable but necessary for supporting the scale of computation that advanced civilization development requires.
    5. The optimal use case for space compute is high-security applications rather than general data processing. While space-based data centers face significant constraints, including 40 kg of peripheral infrastructure per kg of compute, the impossibility of maintenance, and five-year operational lifespans, these limitations become acceptable when an application requires the maximum security guarantees that only space-based isolation can provide.
    6. Space Computer will evolve from centralized early-stage operation to a decentralized satellite constellation. Similar to early Ethereum's foundation-operated nodes, Space Computer currently runs trusted operations but aims to enable public participation through satellite ownership stakes. Future participants could fractionally own satellites providing secure compute services, creating economic incentives similar to Bitcoin mining pools or Ethereum staking.
    7. Blockchain represents a unique compute platform that meshes hardware, software, and free-market activity. Unlike traditional computers with discrete inputs and outputs, blockchain creates an organism in which market participants provide inputs through trading, lending, and other economic activities, while the distributed network processes and returns value through the same market mechanisms, creating a cyborg-like integration of technology and economics.
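    The block-producer selection mentioned in Insight 3 can be sketched as follows. This is a toy illustration, not Space Computer's protocol: `os.urandom` stands in for a harvested cosmic-radiation entropy source, and the point is mapping an entropy sample to a validator without modulo bias.

    ```python
    import os

    def select_block_producer(validators, entropy: bytes):
        """Pick one validator from an entropy sample, rejection-sampling
        to avoid modulo bias when the sample space isn't divisible by n."""
        n = len(validators)
        value = int.from_bytes(entropy, "big")
        space = 256 ** len(entropy)
        limit = space - (space % n)  # largest multiple of n within the sample space
        while value >= limit:
            # biased tail of the range: draw a fresh sample instead
            value = int.from_bytes(os.urandom(len(entropy)), "big")
        return validators[value % n]

    validators = ["sat-a", "sat-b", "sat-c"]
    # os.urandom here is a stand-in for genuinely stochastic cosmic-radiation entropy.
    producer = select_block_producer(validators, os.urandom(8))
    print(producer in validators)  # True
    ```

    The predictability problem the episode raises lives entirely in the entropy argument: swap a pseudo-random source for a physical one and the selection logic itself need not change.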

    1h 4m
4.9 out of 5 (69 Ratings)

