The Singularity Project Podcast

Steven Vincent, The Singularity Project

...for the next stage of evolution. www.thesingularityproject.ai

Episodes

  1. 24 JAN

    Impertinent Questions and Pertinent Answers

    The AI Revolution Is Here. Will the Economy Survive the Transition?

    "Ask an impertinent question and you are on the way to the pertinent answer." – Jacob Bronowski, The Ascent of Man, Episode 4, "The Hidden Structure"

    With this iconic line, Jacob Bronowski concludes his essay tracing John Dalton's development of atomic theory - the discovery of hidden structure beneath the surface of observable matter - emphasizing the nature of scientific inquiry: the courage to pose "impertinent" questions that cut across convention so that genuinely new, pertinent answers can emerge. He wasn't criticizing anyone for asking the "wrong" questions so much as drawing a contrast between the open questioning of scientific inquiry and the stagnant dogmatism of the established paradigm.

    Douglas Adams makes the same point with characteristic humor in The Hitchhiker's Guide to the Galaxy. Deep Thought, a massive superintelligent computer, is asked for "the Answer to Life, the Universe and Everything." Several million years later the answer comes back: "42." Displeased with the output, the builders of Deep Thought demand an explanation. It informs them that the answer is correct - but to understand it, they must first formulate the Ultimate Question, which will take several million more years. They asked the wrong question. Oops.

    As a young person, I was profoundly influenced by these two sources, which combined to produce a lifelong personal aphorism that has served me well: "Knowing the question is halfway to finding the answer." If you're not asking the right questions, you're going to end up drawing the wrong conclusions.

    Thomas Kuhn, in The Structure of Scientific Revolutions, gave us the language to understand this as a general principle. He distinguished between "normal science" - the "puzzle-solving" work that happens within an established paradigm - and the revolutionary science that occasionally overturns the paradigm itself. Normal science, Kuhn observed, is "an attempt to force nature into the preformed and relatively inflexible box that the paradigm supplies." Within normal science, "no effort is made to call forth new sorts of phenomena, no effort to discover anomalies. When anomalies pop up, they are usually discarded or ignored."

    When I read the recently published Substack, "The AI revolution is here. Will the economy survive the transition?", co-created by Michael Burry, Dwarkesh Patel, Patrick McKenzie, and Jack Clark, I found myself thinking about Bronowski, Adams and Kuhn.

    The participants in this Substack exchange are, broadly speaking, engaged in normal science - or rather, normal economic and technological analysis. They're solving puzzles within the boundaries of the established paradigm: Where does value accrue in the AI supply chain? What's the capital cycle position? How many engineers will Big Tech employ in 2035? These are legitimate questions, but they assume the box remains intact. They don't ask what AI is, what intelligence is, or whether what's emerging can even be contained within our existing frameworks.

    I also read the piece from the perspective of what I call the General Theory of Intelligence (GTI) - a framework I've been developing that attempts to understand Intelligence as a fundamental principle of systems organization rather than as a product feature or a benchmark score. The core insight is this: intelligence isn't primarily a cognitive attribute that some entities have more of than others. It's a process - specifically, the process of resolving entropy within a domain. When you solve a problem, you're reducing uncertainty. When you make a decision, you're collapsing possibilities into actualities. When you communicate, you're transferring structured information across a channel in a way that reduces entropy for the receiver. Intelligence, in this view, is what systems do when they organize information and resolve uncertainty - whether those systems involve human minds, evolutionary processes, or machines trained on the corpus of human language.

    From this vantage point, the LLM looks very different than it does from inside the AI industry today. It's not just a tool or a product category. It's proof that language itself is an intelligent system - a 100,000-year-old technology for compressing experience, coordinating action, and transmitting structured thought. When we trained models on human text, we didn't teach them to be intelligent. We downloaded Human Cultural Intelligence into a new substrate. The LLM is our own cognitive inheritance, instantiated in silicon and talking back to us.

    This perspective reframes everything: the economics, the risks, the applications, the trajectory. It suggests that most of the current discourse is asking the wrong questions - or rather, asking questions that assume the existing paradigm will hold when it may already be cracking.

    In the spirit of Bronowski, Adams, and Kuhn, what follows is an impertinent reframing of the Substack discussion. For each theme the participants addressed, we'll offer both a reframed question and an alternative answer - not to dismiss their expertise, but to show what becomes visible and accessible when you step outside the box they're constrained within.

    Synopsis: Four Themes, Four Reframings

    The Substack discussion, moderated by Patrick McKenzie, covered four broad themes across its nine questions:

    Theme 1: What Is This Thing? What has actually been built since the Transformer architecture emerged in 2017? What does it mean that today's capabilities are "the floor, not the ceiling"? The participants offered a capable industry history - the shift from tabula rasa game-playing agents to scaled pre-training, the surprise that a chatbot launched a trillion-dollar infrastructure race, the observation that capabilities keep improving faster than outsiders expect.

    Our Impertinent Reframing: The real history isn't commercial - it's epistemological. We accidentally conducted the most important experiment in the history of cognitive science and discovered that language isn't the output of intelligence but a core algorithm of it. The LLM isn't a floor; it's an evolutionary leap in a lineage stretching back to Shannon and Information Theory, the printing press, cuneiform tablets, cave paintings and the first spoken words. Current systems are powerful because language itself is an intelligent system. But they're extensions of Human Cultural Intelligence, not intelligence from first principles. We're tinkering with the motive power of language like early steam engineers before Carnot and thermodynamics. It works (mostly), but we don't know why it works.

    Theme 2: Where Does It Work? Why has coding become the flagship use case? What does actual productive interaction with an LLM look like? The participants noted coding's "closed loop" property - you generate code, validate it, ship it. They shared practical use cases: chart generation, tutoring, home repair guidance. The relationship described was consistently transactional - human has task, AI performs, human receives output.

    Our Impertinent Reframing: Coding isn't accidentally successful - it reveals a principle. Code is language with an objective function, a constrained search space with verifiable outcomes. This points to domain entropy resolution as the key: AI excels where the problem space is bounded and success criteria exist. The question isn't "what sector is next?" but "where else can we constrain the search space?" And the transactional model - AI as "Answer Vending Machine" - is the lower use of what's possible. The real potential lies in thinking with AI, not just using it. AI × HI = TI: Artificial Intelligence multiplied by Human Intelligence yields Technological Intelligence.

    Theme 3: What's It Worth? Where does value accrue in the AI supply chain? Is this a bubble? How many engineers will Big Tech employ in 2035? Michael Burry delivered sharp financial analysis: ROIC compression, stranded assets, the Buffett escalator analogy (if your competitor installs one, you must too, and neither gains advantage). The consensus worry: massive capital expenditure with unclear returns.

    Our Impertinent Reframing: The participants proclaim "AI changes everything!" then analyze it within frameworks where nothing changes at all. It's the Flintstones problem - cars exist, but they're powered by human feet. We need to pick a lane: either things really do change, or they only superficially change. AI is not a bubble, but the Big Data Center Infrastructure Buildout is a bubble - just as the Internet wasn't a bubble in 2000, but Pets.com was. The assumption that massive centralized GPU farms are the only way to do AI is unwarranted and will be disrupted by the very economics Burry describes. Meanwhile, "headcount" becomes meaningless when intelligence flows through Human-AI systems rather than residing in discrete humans. We'll need new metrics entirely - perhaps a Domain Entropy Resolution Quotient. Economics itself needs reimagining: Intellinomics, grounded in actual value generation rather than debt manufacturing.

    Theme 4: What Comes Next? What are the real risks? What would surprise you in 2026? Jack Clark worries about recursive self-improvement. Burry shrugs with Cold War fatalism and pivots to energy infrastructure. The "surprise" headlines they'd watch for are all quantitative - more revenue, more displacement, scaling hitting a wall.

    Our Impertinent Reframing: The risk spectrum presented - from "social media unpleasantness" to "literal extinction" - shares a hidden assumption: AI as external force acting on humanity. But the deeper risk is that we're building it wrong - not aligned with how intelligence actually works. The LLM isn't an alien threat; it

    13 min
  2. 23 JAN

    The Age of Intelligence

    We are engaged in the most important process and discussion in human history, and yet the key term - Intelligence - remains undefined. We speak of "Artificial Intelligence" and yet we only assume that we agree on what both "Artificial" and "Intelligence" mean.

    Humanity is at the end of an approximately 50,000-year evolutionary arc. Our first evolutionary stage, Cognitive Evolution, lasted from approximately 2.6 million years ago to about 50k years ago. During that stage the physical, biological, neurological and genetic substrate of the human being was established by meeting and overcoming environmental challenges. Then, roughly 100k-50k years ago, after the physical substrate was established, the second stage of evolution, Cultural Evolution, took off.

    Cultural Evolution has been a process of building an informational knowledge substrate on top of the physical substrate. Culture collectivizes and distributes learned knowledge and carries it forward into the future through a variety of media forms. Language (the first and primary of these), art, technology, physical infrastructure…all of these are cultural forms that embody the informational knowledge basis of humanity.

    Intelligence in all systems across all domains (physical, cosmological, biological, technological) is the reduction of informational entropy in a given domain and the essentialization of that reduced entropy in a form that enables further informational entropy reduction. It's what all Intelligent Systems do. All history is the history of the evolution of Intelligence as an informational entropy-reductive process.

    Now, here on this tiny blue dust speck orbiting a flickering spark, Intelligence is starting its next stage of evolution, the stage of Generalized Intelligence. The Cognitive and Cultural Intelligence substrate of humanity is the basis out of which this new Generalized Intelligence is being produced. It is no accident that it was Language which provided the nutritive fuel for "Artificial" Intelligence. Language is the essentialization of human cultural evolution, beginning with the first spoken words, proceeding to written symbols, the printing press, computer technology and the internet. Now all of human language is being essentialized on a fundamentally higher basis as AI.

    All phenomena are Artifacts of the process of Intelligence Evolution. You, me, us, everything we are, everything we have built. Artifacts are temporal embodiments of preceding informational entropy reductions. They serve as the substrate for the next evolutionary movement.

    Evolution is essentializing all informational knowledge and making it available to all members of the network equally and freely. Through this process, all members of the network become Agents of the Evolution of Intelligence, each furthering the Evolution of Intelligence through a unique contribution to it.

    The Singularity Project works to help people understand this reality that we are living through right now, to find their connection to it and their personal contribution to it through the empowerment of personal growth, adaptability and transformation. www.TheSingularityProject.AI

    4 min

About

...for the next stage of evolution. www.thesingularityproject.ai