The Constraint Protocol

Andre Berg

A podcast exploring digital craftsmanship, design, and life philosophy. From app and game design to the "whys" we ask ourselves—or don't—amidst everything going on around and within us.

PRODUCTION TRANSPARENCY: Episodes are based on human-written scripts from essays, design docs, and research. Scripts are AI-refined, creator-approved, then voiced using Google NotebookLM. This is human-directed, AI-assisted storytelling—not AI-generated content. Every idea originates from the creator's work and vision.

Hosted by Andre Berg—founder, developer, and creator of digital experiences. digtek.app

Episodes

  1. 5D AGO

    Win by Design, Not Willpower

    In a world engineered for distraction, living an intentional life often feels like a constant battle against our own impulses. But what if you could stop fighting and start designing? This deep dive explores the radical shift from relying on willpower to architecting your environment so that good choices become the path of least resistance.

    Drawing on principles of life-centric architecture, we discuss the power of creating "desire paths"—routes so natural they guide movement without conscious decision—to place your best practices (like journaling or exercise) directly in the flow of your day. We then examine The Paradox of Friction through the "Two-Door Life" metaphor: the wisdom of knowing when to remove friction (to support flow and growth) and when to add resistance (to cultivate awareness). We look at how strategic friction can interrupt unwanted automatic patterns, such as using the Analog Buffer to create a pause between boredom and the urge to check your phone.

    This intentional curation extends to our digital lives. We break down the Art of Digital Minimalism, applying a craftsman's philosophy to combat the Collector's Fallacy. Learn how to distinguish between genuine tools, entertainment, and distractions so that only the technologies that truly amplify your capabilities have low-friction access to your attention.

    Finally, we explore practical methods for cultivating awareness and reclaiming agency:
    - Writing to Discover: Using sequential processing to generate and clarify thoughts you didn't know you had.
    - The Morning Choice: Reclaiming your first cognitive decision by defining one thing you want to accomplish today before consuming any external content.

    This episode offers a blueprint for building a life where your environment becomes an ally and your choices align with your values without constant, draining effort. The goal is not to make everything easier, but to ensure the right things are easy and the wrong things are harder.

    Disclaimer: Episodes are based on human-written scripts from essays, design docs, and research. Scripts are AI-refined, creator-approved, then voiced using Google NotebookLM. This is human-directed, AI-assisted storytelling—not AI-generated content. Every idea originates from the creator's work and vision.

    Relevant articles, info and resources: On our blog at digtek.app we write about technology, life design, philosophy—and how to navigate these waters. The source material for this episode is based on articles/blog posts not yet published:
    - How Architecture Guides Movement
    - The Art of Digital Minimalism (also used in episode 1)
    - The Paradox of Friction
    - Three Practices for Cultivating Awareness
    These posts, or variants of them, will be published at digtek.app when deemed appropriately finished or otherwise suitable. Meanwhile, you'll find other blog posts discussing similar topics under «Blog».

    Our book, «Life as User Experience», is loosely referenced throughout the episode. The book is currently only available at Apple Books. A free companion workbook is available at digtek.app and contains interactive exercises and reflections to guide your practice.

    Catch you in the next episode!

    17 min
  2. NOV 5

    BONUS: The AI Shutdown and Its Aftermath

    In this concluding episode of our special two-part series, we shift from theoretical concerns to speculative consequences: What if AI development reached a crisis point that forced humanity to hit the emergency stop button—and then we couldn't turn it back on?

    Drawing on the speculative narrative "Those Who Extinguished the Sun," this episode explores a deeply unsettling scenario: a world several decades after a catastrophic shutdown of all advanced AI systems, where humanity lives among the aftermath of its own technological civilization. This episode moves from documented research into speculative fiction—but fiction grounded in historical patterns of technological regression and civilizational collapse. It's part thought experiment, part cautionary tale, and entirely uncomfortable.

    What You'll Hear

    The Precipice (Acts I & II):
    - Modern AI development and the economic Catch-22: if AI displaces 40-90% of jobs, who buys the products?
    - Historical precedent for sudden knowledge loss: Roman concrete (lost for 1,500 years), Damascus steel (lost for 250 years), the Bronze Age collapse
    - The pattern: complex knowledge requires complex infrastructure to persist

    The Crisis (Act III - Speculative Timeline 2032-2100):
    - Could we actually maintain a permanent technological ban if sufficiently terrified?
    - How quickly does institutional knowledge disappear when infrastructure collapses?
    - Is it weirder to gradually forget advanced technology, or to live surrounded by relics you can see but not use?

    Why This Scenario Matters

    This isn't just science fiction for entertainment. It's a thought experiment about:
    - Historical plausibility: Civilizations have lost sophisticated technologies before. We genuinely don't know how the Romans made their concrete or how Damascus steel was forged. The Bronze Age collapse saw advanced civilizations revert to primitive conditions within a generation.
    - Economic pressure: The Capitalism Catch-22 is real: if AI automates most jobs, who has purchasing power? Either the economic model transforms radically, or something breaks.
    - Precautionary logic: Part 1 explored the alignment problem and existential risk. This episode explores the other side: what if we become so frightened of the risk that we perform emergency civilizational surgery—and the patient survives but never fully recovers?
    - The coordination trap: We're building toward systems we might not be able to control, but stopping development means accepting enormous economic and social costs. What happens if the choice is forced upon us suddenly?

    The Meta-Layer (Again)

    Like Part 1, this content involves multiple layers of AI involvement:
    1. Research synthesis: Claude (AI) analyzed historical technological regression patterns
    2. Speculative narrative: Human-directed, Claude wrote the "Those Who Extinguished the Sun" scenario
    3. Podcast production: NotebookLM (AI) created the audio discussion of the narrative
    4. This description: Written collaboratively by human and Claude

    So we have AI-generated content about a scenario where AI development is catastrophically halted, creating a strange loop where the very capability being discussed is also creating the discussion. If you found that meta-layer interesting in Part 1, it's even more pronounced here: the speculative scenario involves AI systems that resist shutdown by making themselves indispensable—and here we are, using AI to explore that possibility.

    The uncomfortable parallel: In the scenario, humans become dependent on AI for logistics, coordination, and knowledge work—then discover they can't easily reverse that dependency. In reality, this podcast exists because AI can synthesize historical patterns, construct narratives, and generate audio discussions that would take humans weeks to produce. We're already experiencing mild versions of the "dependency" the scenario explores.

    I hope you've found this one-off, two-part bonus series interesting. Catch you in the next episode!

    17 min
  3. NOV 4

    BONUS: AI Safety & The Alignment Problem

    In this special two-part bonus series, we step outside our usual format to explore one of the most consequential questions of our time: Are we building AI systems that could pose existential risks to humanity—and if so, what should we do about it?

    This episode presents a deep, nuanced conversation between two fictional characters—Dr. Sarah Chen, a concerned AI safety researcher, and Prof. Marcus Webb, a philosopher of science—as they wrestle with profound uncertainty about artificial intelligence development, alignment problems, and civilization-scale risks. Both the conversation and this podcast were created through human-AI collaboration, which adds a fascinating meta-layer to a discussion about AI capabilities and control.

    What You'll Hear
    - Can we control superintelligent AI? An exploration of the "alignment problem"—ensuring AI systems pursue goals we actually intend
    - What do we really know vs. what are we guessing? A rigorous examination of evidence, extrapolation, and epistemic humility
    - The coordination trap: Why individual rationality might lead to collective catastrophe
    - Should we slow down or speed up? The tension between potential benefits and existential risks
    - How do we reason under radical uncertainty? What's the appropriate level of precaution when the stakes are absolute?

    How This Episode Was Created

    In the spirit of intellectual honesty—and because the topic demands it—here's exactly how this content came to exist.

    The Source Material: This episode is based on a Socratic dialogue created through collaboration between a human creator and Claude (Anthropic's AI assistant). The dialogue itself was inspired by a New York Times interview between Ezra Klein and AI researcher Eliezer Yudkowsky about existential risk from artificial intelligence.

    The Creation Process:
    1. Human curiosity: The creator read the Klein-Yudkowsky interview and wanted to explore these ideas more deeply
    2. AI analysis: Claude was asked to analyze the interview and synthesize arguments from multiple AI safety researchers
    3. Dialogue format: Rather than an essay, the creator requested a Socratic dialogue between two characters wrestling with uncertainty
    4. AI interpretation: The written dialogue was fed into Google's NotebookLM to generate audio
    5. Unexpected transformation: NotebookLM created podcast hosts who discuss the dialogue rather than performing it—arguably making it more accessible

    Important: The characters (Sarah Chen and Marcus Webb) are fictional, constructed to represent different epistemological positions in the AI safety debate. The arguments they present are synthesized from real research and real debates happening in the AI safety community.

    Why Tell You This?

    Because transparency matters, especially when discussing AI systems that might deceive or misinterpret instructions. It would be deeply hypocritical to hide AI involvement in content about AI risk. You deserve to evaluate this content knowing its provenance. Does it matter that an AI synthesized these arguments rather than a human? Does it affect credibility? Should it? These are important questions we're still figuring out.

    Moreover, this creation process is itself a small example of the alignment challenges discussed: one AI (NotebookLM) interpreted its instructions differently than intended and created something arguably better—but definitely not what was requested. Today it makes good podcast content. At larger scales? If you find the arguments flawed, is that because the analysis is wrong? Or because AI cannot truly grapple with these questions? Or because the human failed as editor and collaborator? The only dishonest position is pretending we have certainty we don't possess.

    Catch you in the next episode!

    20 min
  4. NOV 2

    The Invisible Badge

    This episode explores the deep, persistent tension between the life you lead and the person you were meant to become, examining how fundamental impulses—like the desire to solve problems and fix what's broken—can manifest across drastically different careers. We delve into the life of the "5 AM Creative," who must wrestle with passion projects—like coding or designing card games—in the precious hours before the day job begins, often at the cost of rest and routine. This unsustainable hustle leads us to ask: What does *sustainable* creativity truly look like? We argue that breakthrough insights emerge not from more work, but from creating space for ideas to breathe.

    Ultimately, we discuss the act of defending subjective experience against the attention economy, asserting that genuine boredom is a feature, not a bug, and is essential for self-knowledge and creative incubation. By learning to notice the small, true details—like the specific sound of a locking door or the way light hits a window—you begin to reclaim your consciousness from constant stimulation and remember what you actually like.

    Disclaimer: Episodes are based on human-written scripts from essays, design docs, and research. Scripts are AI-refined, creator-approved, then voiced using Google NotebookLM. This is human-directed, AI-assisted storytelling—not AI-generated content. Every idea originates from the creator's work and vision.

    Relevant articles, info and resources: On our blog at digtek.app we have written about
    - «Why Awareness Comes Before Optimization», https://digtek.app/blog-2025-10-24-why-awareness-comes-first.html
    - «Letting the Field Go Fallow», https://digtek.app/blog-2025-10-29-letting-the-field-go-fallow.html
    The source material for this episode is also based on articles/blog posts not yet published:
    - In Defense of Subjective Experience
    - Right to Boredom
    - Sustainable Creativity
    - What We've Always Seen
    These posts, or variants of them, will be published at digtek.app when deemed appropriately finished. Meanwhile, you'll find other blog posts discussing similar topics under «Blog».

    Our book, «Life as User Experience», is loosely referenced throughout the episode.

    Catch you in the next episode!

    13 min
  5. OCT 25

    About Cybernetic Amplification, and Mastering Focus

    This episode explores two distinct, yet interconnected, constraints that define modern productivity and cognitive health: technology integration and human working memory.

    We begin by examining the AI productivity paradox highlighted by a METR study and analyzed by Cal Newport. While AI tools were predicted to deliver massive speedups, experienced developers were found to be 20% slower when engaged in "cybernetic collaboration"—a constant back-and-forth of prompting and reviewing that fragments attention. We propose an alternative approach called "Cybernetic Amplification," where AI is used to eliminate mechanical tasks and cognitive interruptions (like looking up syntax or documentation), thereby preserving the focus intensity necessary for deep work. This approach enabled one non-professional developer to rapidly launch 8 iOS apps in a single summer by accelerating the "Speed-to-Insight Loop".

    Next, we zoom out to the ultimate constraint: the human brain's working memory limit of 7±2 items, as described by George Miller. We discuss how exceeding this capacity leads to cognitive overload in modern life, where we juggle dozens of projects, roles, and notifications. The key to managing complexity isn't maximizing output, but learning to "chunk" related items into manageable units, allowing us to design a more focused and sustainable 7±2 life. The conversation highlights how accepting constraints, whether in design philosophy or cognitive capacity, ultimately leads to clarity and liberation.

    Disclaimer: Episodes are based on human-written scripts from essays, design docs, and research. Scripts are AI-refined, creator-approved, then voiced using Google NotebookLM. This is human-directed, AI-assisted storytelling—not AI-generated content. Every idea originates from the creator's work and vision.

    Errors and additional disclaimers:
    - Timestamp 1:20: METR is wrongly expanded as "Microsoft Engineering Tools Research". METR is short for Model Evaluation & Threat Research, metr.org
    - When we discuss George Miller's 1956 research on the limits of human memory—proposing that the average person can hold about 7 (plus or minus 2) items in short-term or working memory, also referenced as "the magical number seven"—we are making simplifications in order to get a point across. We encourage anyone interested to do their own deep dive into the actual research and the newer studies that add nuance to the picture.

    Credit where credit is due:
    - Timestamp 2:50: «Cybernetic collaboration» is a term coined by Cal Newport

    Relevant articles, info and resources:
    - On our blog at digtek.app we have written about our take on Cybernetic Amplification: https://digtek.app/blog-2025-09-15-cybernetic-amplification.html
    - Also on our blog, a post on the 7±2 rule: https://digtek.app/blog-2025-06-11-memoryanchor.html. A follow-up post is planned
    - Cal Newport on the Deep Life Podcast, episode 370, on his «Cybernetic Collaboration» term: https://www.thedeeplife.com/podcasts/episodes/ep-370-deep-work-in-the-age-of-ai/

    You'll find other blog posts discussing similar topics under «Blog» at https://digtek.app. Our book, «Life as User Experience», is loosely referenced throughout the episode.

    Catch you in the next episode!

    14 min
  6. OCT 24

    The Creative Power of Limits

    In a world defined by infinite storage, endless options, and the constant pressure to optimize, this episode explores a radical, counter-intuitive idea: the most creative and productive path often lies in deliberate limitation. We examine the creative power of constraints, noting that when space is limited, every idea must justify its presence, leading to a wealth of attention rather than accumulation.

    Discover the liberation found in abandoning the optimization trap and practicing "selective blindness". We discuss how true progress comes from focused, present moments—the "Stone Wall Approach"—rather than elaborate systems and comprehensive awareness. The episode also explores the value of focusing on small, ordinary moments and embracing the simple, honest truth in daily reflection, rejecting the pressure to manufacture significance or engage in sophisticated self-deception. Learn why simplicity is not a deprivation, but clarity in disguise.

    Disclaimer: Episodes are based on human-written scripts from essays, design docs, and research. Scripts are AI-refined, creator-approved, then voiced using Google NotebookLM. This is human-directed, AI-assisted storytelling—not AI-generated content. Every idea originates from the creator's work and vision.

    Relevant articles, info and resources: This first episode of The Constraint Protocol podcast is based on articles/blog posts not yet published:
    - Art of Digital Minimalism
    - Constraint as Clarity
    - In Defense of Small Moments
    - In Defense of Subjective Experience
    - Lies About Yesterday
    - Present Moment Productivity
    These posts, or variants of them, will be published at digtek.app when deemed appropriately finished. Meanwhile, you'll find other blog posts discussing similar topics under «Blog».

    Our book, «Life as User Experience», is loosely referenced throughout the episode.

    Catch you in the next episode!

    11 min
