Token Intelligence

Eric Dodds & John Wessel

Two friends break down AI, technology, and entrepreneurship through mental models, real-world experience and the pursuit of a life well-lived.

Episodes

  1. 1 DAY AGO

    The map is not the territory

    How do you navigate the pace of AI disruption? This mental model helps you decode AI hype, catch cartographer bias, and avoid being blinded by the past.

    Summary

    Eric and John break down the mental model "the map is not the territory" and pressure-test it against AI hype, career war stories, and the beloved platitude "perception is reality." They walk through Shane Parrish's three principles: 1) reality is the ultimate update, 2) consider the cartographer, and 3) maps can influence territories, and show why each one matters when billions are flowing into AI and the territory is shifting under everyone's feet.

    Key takeaways

    - "Perception is reality" is a useful awareness tool and a terrible life principle. It helps you understand why people behave the way they do, but centering your life around it leads to incongruity and character problems.
    - Reality will update your map whether you like it or not. AI skeptics who refuse to revise their position as capabilities improve are a real-time case study in map–territory mismatch. The faster the territory changes, the more dangerous a stale map becomes.
    - The cartographer always has a bias. Whether it's a CRO whose commission rewards higher ACV or a frontier-model company that needs to justify billions in investment, the person drawing the map has incentives baked in. Always ask who made the map and what they gain from it.
    - Maps shape the territory they claim to describe. The ROI-first map for AI is concentrating nearly all successful tooling around knowledge-worker productivity (especially coding), even though AI is capable of far more. That's limiting what gets built and funded.
    - Touch the territory. Financial models, performance reviews, product demos, and AI benchmarks are all maps. The risk you miss is always the one the map doesn't show, so get your hands on the actual thing before making big decisions.
    Notable mentions and links

    - Charlie Munger of Berkshire Hathaway fame is credited with championing the idea of collecting mental models from many disciplines to improve decision-making.
    - Shane Parrish is a Munger disciple who runs the Farnam Street blog and wrote the book series The Great Mental Models. You can read the Farnam Street blog post on this mental model.

    31 min
  2. 31 JAN

    Text message bankruptcy, OpenClaw, and 20 years of email data

    Eric hits 247 unread texts, meets OpenClaw, and reminisces on Merlin Mann's "pebble problem". He and John learn why messaging is now entertainment and pave a path towards better communication.

    Summary

    Eric accidentally reveals he has 247 unread texts and declares text message bankruptcy. In his effort to reorganize, he and John take a sharp look at how modern communication channels have morphed into entertainment and how AI makes the problem worse. Along the way they:

    - Run an analysis on 20 years of personal email
    - Discuss the extremity of giving OpenClaw (né Moltbot, né Clawdbot) root access to your email and messages
    - Revisit decades-old lessons from Merlin Mann's Inbox Zero legacy

    By the end of the show, they land practical ways to overcome the limitations of form factor in order to communicate well with the people you care about.

    Key takeaways

    - The real goal is relational integrity: The episode lands on the uncomfortable truth that your communication backlog reveals your lived priorities. Improving the system is ultimately about showing up for people you care about.
    - Communication channels are "feedifying": email and texting increasingly behave like entertainment/content distribution streams, shifting norms toward higher volume and weaker connection.
    - The inbox problem is now big enough to drive extreme solutions: people are running local, open-source AI agents (often on dedicated Macs), and a primary use case is triaging and responding to messages (which comes with significant security risk).
    - Inbox Zero and the pebble problem still explain the pain: the enduring issue is tiny, individually "light" messages compounding into an attention debt that feels impossible to repay without a decision framework. Merlin Mann's work on this has stood the test of time.
    - The medium and tools shape behavior: Apple's Messages app is optimized for synchronous bursts and dopamine-triggering reactions, while lacking robust workflow affordances.
    Text message bankruptcy is partly structural, not just a failure of personal discipline.

    Notable mentions and links

    - Eric coined the term "text message bankruptcy" in a blog post he wrote about the experience.
    - OpenClaw, formerly named Moltbot, formerly named Clawdbot, is an open-source personal AI assistant that can have root access to everything on your computer. A primary use case is managing email and text messaging, though people are using it in extreme and insecure ways, giving OpenClaw access to their passwords and credit cards.
    - *How we lost communication to entertainment* is a fascinating article about modern communication channels trending towards entertainment, robbing users of real connection.
    - Marshall McLuhan coined the phrase "the medium is the message" to describe how the medium a message is delivered through isn't neutral, but is part of the message itself.
    - T9 Word was one of the first innovations in messaging on dumb phones before BlackBerry brought the full QWERTY keyboard to mobile at scale.
    - Merlin Mann has written for decades about productivity and coined the term Inbox Zero in a talk he gave at Google.
    - Merlin Mann used a "pebble" metaphor to describe the light "weight" of an individual message and the difference in expectations that creates between the sender and receiver.

    52 min
  3. 24 JAN

    Sunk cost, AI deniers, and Elon talks with Jesus

    Sunk cost in the AI era: John and Eric define the bias, share candid stories, and show how identity, tech debt, and market shifts demand pivots, reality checks, and the freedom of starting over.

    Summary

    John and Eric unpack the sunk cost fallacy through personal stories, clean definitions, and why it intensifies in fast-moving AI and software. They contrast stubbornness-as-craft with market reality, show how identity and ego can cloud pivots, and offer practical checks: external feedback, tighter problem framing, and willingness to start over.

    Key takeaways

    - Name the bias: Prior investment should not drive future investment. Always optimize for present and future ROI, not the past.
    - Identity check: Notice when a project becomes "part of me," because that's when impartial judgment collapses.
    - Use outside calibration: Ask trusted, domain-relevant peers to sanity-check your assumptions.
    - Accept utilitarian wins: AI-produced code may be inelegant, yet commercially superior. Tests and agents will raise quality anyway, so it's time to accept the future of software development.
    - Freedom is willingness to start over: If you can let go of valuable things and start from zero, you won't run the risk of getting bogged down by sunk costs.

    Notable mentions and links

    - Sunk cost fallacy is defined as the bias of using prior investment (time, money, effort) to justify continued investment, even when it impairs present decision-making.
    - Thinking, Fast and Slow, written by Daniel Kahneman, is referenced for its System 1 / System 2 lens to explain why sunk cost can feel emotional and irrational.
    - Steam-powered boats and the Morse code/telegraph are cited as cases where stubborn persistence eventually met enabling tech, highlighting survivorship bias.
    - The "rich young ruler" story from Matthew 19 in the Bible is used to illustrate identity attachment and how letting go of things core to oneself can be the real barrier to change.
    - Elon Musk, via Walter Isaacson's biography, is referenced as an anti–sunk-cost archetype, repeatedly risking everything and switching when needed.
    - Benn Stancil's framing (LLMs read fast and summarize "roughly") is echoed to explain why AI coding feels transformative: machines don't slow down on code reading/writing.

    41 min
  4. 18 JAN

    AI's chat interface problem and Lobe's imaginary seed round

    Eric and John riff on Lobe's seed round, then dive deep on why chat is the wrong UI for most AI. They unpack the blank page problem, why context matters, and how embedded AI will replace chat.

    Summary

    In Episode 2, Lobe gets a theoretical 3 million dollar seed round, and Eric and John discuss how they are going to deploy the capital, which includes potential acquisitions. Next, they dive into a detailed discussion of how chat became the ubiquitous UI for AI. Eric feels very strongly about its shortcomings, which include poor literacy rates, the blank page problem, and the narrow set of use cases chat is actually good for. The why is even more interesting, and their hypothesis is that cost is one of the primary drivers because of how expensive it is to run models at scale. They wrap up by imagining a future where AI disappears from interfaces altogether and is embedded natively in intuitive, multimodal user experiences.

    Key takeaways

    Lobe.ai

    - Lobe's path forward: acquire and partner for distribution (apps/sleep brands), integrate biometrics for REM triggers, and monetize interpretation and creative outputs.

    The AI chat interface

    - Chat is the wrong default interface for AI: it shines for search and inside high-context environments with clear task frames, but obfuscates the power of the tools in most other cases.
    - Fundamental barriers limit the utility of chat: Americans have low literacy rates, and combined with the blank page problem, chat will limit the value people can get from AI.
    - Context is king: multimodal, embedded AI will replace generic chat for many jobs. Think IDEs, docs, and app-native flows that deliver value in place.
    - Hard costs influence the interface: cost and infra realities favor user-initiated interactions now; as economics improve, proactive, background "agentic" features will grow.

    Notable mentions with links

    - Poe (by Quora) is shown as a chat aggregator illustrating how many tools converge on chat as the primary interface.
    - Notion AI is used to demonstrate higher-context chat inside documents. It's helpful, but with UX pitfalls (e.g., overwriting content and unclear "terms of the transaction").
    - Cursor (AI IDE) is highlighted as a high-context environment where chat + multimodal controls (browser, on-page edits) make AI assistance more precise and useful.
    - v0 is referenced as a multimodal design/build flow that lets users edit generated UI directly, going beyond pure chat to reduce the blank-page burden.
    - Rabbit R1 is discussed as an alternative, voice-forward hardware form factor pushing beyond chat, with lessons about timing, expectations, and risk.
    - Naveen Rao (Databricks) is quoted arguing that generic chat is "the worst interface for most apps," calling for insight delivered "at the right time in the right context."
    - Benedict Evans is cited for the idea that most people will experience LLMs embedded inside apps rather than as standalone chatbots, similar to how SQL is invisible in products.
    - Jakob Nielsen is noted for the view that prompt engineering's rise signals a UX gap, and that AI needs a Google-level leap in usability to cross the chasm.
    - Low literacy rates are discussed as a key limiter. Good writers tend to extract more value from chat tools.

    56 min
  5. 10 JAN

    Bottlenecks mental model & tool time with Zo Computer

    Eric and John discuss bottlenecks as a mental model, uncovering why constraints are leverage, not blockers. Hands-on Tool Time is with Zo Computer, a stateful, powerful, AI-enabled cloud computer.

    Summary

    In the second half of Episode 1, Eric and John tackle "bottlenecks" as a core mental model: why they limit system output, when to keep them on purpose, and how to fix the right ones without creating worse slowdowns. They share examples from product development, content quality control at scale, and how the youngest child changes family life. In Tool Time, they go hands-on with Zo Computer, an AI-enabled cloud computer with state, plus agents and a real file system. Eric shares his screen to explore use cases like media management, hybrid search over local files, and remote development, ultimately questioning where the day-to-day value beats existing tools. Eric analyzes his entire history of blog post markdown files, and they conclude that running AI against physical files will be a big deal, but wonder if Zo is the right form factor.

    Key takeaways

    Mental model: bottlenecks

    - Identify the real constraint and keep good bottlenecks: Focus on the true bottleneck, not the noisiest part. Optimizing fast stages is wasted effort. Some constraints (security, editorial review) protect quality and safety, so preserve them intentionally.
    - Fewer focused people beat swarm tactics: Small, targeted groups resolve bottlenecks faster than all-hands pile-ons.
    - Prototype fast, still ship with specs: High-fidelity prototypes unblock product velocity, but clear specifications prevent new downstream bottlenecks.

    Tool Time with Zo Computer

    - Save long-running AI work as real artifacts: Working against files and services with memory beats transient chats when your work is long-running or spans multiple sessions.
    - Files beat context windows: Hybrid search over a real file system is faster and more precise than stuffing giant context windows.
    - What use cases the remote AI computer will really solve: Tools like Zo seem well suited when they beat local workflows on security (code/data never leaves a controlled environment), scalable compute (beefy GPUs/CPUs on demand), or collaborative persistence (shared stateful workspaces, services, and logs that multiple people and agents can access).

    Notable mentions with links

    Mental model: bottlenecks

    - The Great Mental Models is a book series by Shane Parrish that breaks down fundamental decision-making through Charlie Munger's latticework of mental models.
    - The Goal is a business novel by Eliyahu M. Goldratt that popularizes the Theory of Constraints and introduces the "Herbie" Boy Scout hike as a vivid metaphor for bottlenecks.
    - The Phoenix Project is an IT/DevOps retelling of The Goal that applies the Theory of Constraints to modern software delivery and operations.
    - The Trans-Siberian Railway is used in The Great Mental Models to show how relieving one constraint in a massive project can trigger new ones elsewhere.
    - Vercel's v0 is an AI-assisted tool for generating websites and apps that shrinks the prototyping gap and increases product velocity and fidelity.

    Tools and AI

    - Raycast is a next-gen Mac launcher in the Spotlight/Alfred lineage that sparked a thought experiment about OS-level AI with rich local context and access.
    - Alfred is an earlier Mac power-user launcher that provides historical context for Raycast's approach to extensible search and commands.
    - Zo Computer is a persistent cloud computer with memory, storage, agents, services, and a real file system that the hosts tested for Plex, blog analysis, and remote development.

    ... (Read more at the episode page)

    1 hr
  6. 4 JAN

    The Inner Ring & creating an AI startup on demand

    Eric and John invent "Lobe," a screenless AI for dream capture, then unpack C.S. Lewis's "Inner Ring" to explore status, AI FOMO, and the long game of craft, character, trust, and defining "enough."

    Summary

    Eric and John kick off the inaugural episode of Token Intelligence with a live AI startup creation challenge. Responding to John's prompt, Eric imagines "Lobe," a screenless AI device for passive sleep listening that reconstructs and interprets your dreams. Charting a course to more serious waters, the hosts pivot to C.S. Lewis's "The Inner Ring," an 80-year-old college address, to unpack status, belonging, and career ambition in tech. They connect Lewis's warning to today's AI FOMO, contrasting short-game inner-ring chasing with the long-game path of craftsmanship, character, trust, and defining "enough" in work and life. Along the way, they share candid stories of startups, inner circles at school and work, and practical ways to stay curious without getting swept up in AI hype.

    Key takeaways

    - Live-creating an AI startup called Lobe: A screenless, passive sleep-listening device that records during REM, blends audio with biometrics, reconstructs your dream, and offers paid interpretations, with optional visualizations via generative video tools.
    - The Inner Ring: C.S. Lewis's warning that chasing insider status "will break your heart" maps to modern tech careers where influence, visibility, and belonging can overshadow the work itself.
    - Short game vs long game: Inner-ring chasing can move titles fast, but the durable path is craftsmanship + character → trust → meaningful opportunities and friendship.
    - Define "enough": If freedom and time with loved ones are the goals, you can often change life structures now rather than deferring everything to a future exit or windfall.
    - Managing AI FOMO: Name it, keep simple systems to stay current, study fundamentals (economics, incentives), and build small projects to demystify the tech without drowning in hype.

    Notable mentions with links

    Startup riff: inventing "Lobe" (screenless, passive-listening AI)

    - Sleep tracking apps like Sleep Cycle are referenced as prior art for nighttime audio capture and sleep analysis, inspiring Lobe's focus on REM-triggered recording. Eric mistakenly referred to this as "Sleep Score" in the show.
    - Eight Sleep is mentioned as a potential smart-mattress integration partner within the broader sleep-tech ecosystem.
    - Sora is cited as a generative video tool that could visualize reconstructed dreams as shareable clips, extending Lobe's premium features.

    Career and culture: C.S. Lewis, inner circles, and the craft

    - The Inner Ring is an address given by C.S. Lewis at King's College, University of London, in 1944.
    - War and Peace, by Leo Tolstoy, is quoted in The Inner Ring to illustrate the existence of informal "unwritten systems" that shape real power and belonging.
    - The "PIE theory" of career success, Performance, Image, and Exposure, is discussed as a common framework for how people advance inside organizations.
    - The Staff Engineer career path is highlighted as an individual-contributor track that rewards deep expertise and influence without requiring a move into management.

    Personal startup journeys and ecosystems

    - The Iron Yard is referenced as a coding school startup experience that exposed the host to founder networks, fundraising, and an eventual exit.
    - Zappos and Tony Hsieh are mentioned in the context of a founder lunch and talent pipeline discussions during that startup phase.

    ... (Read more at the episode page)

    1h 50m

About

Two friends break down AI, technology, and entrepreneurship through mental models, real-world experience and the pursuit of a life well-lived.