Coordinated with Fredrik

Fredrik Ahlgren

Coordinated with Fredrik is an ongoing exploration of ideas at the intersection of technology, systems, and human curiosity. Each episode emerges from deep research: a process that blends AI tools like ChatGPT, Gemini, Claude, and Grok with long-form synthesis in NotebookLM. It’s a manual, deliberate workflow, part investigation, part reflection, where I let curiosity lead and see what patterns emerge. This project began as a personal research lab: a way to think in public and coordinate ideas across disciplines. If you find these topics as fascinating as I do, from decentralized systems to the psychology of coordination, you’re welcome to listen in. Enjoy the signal. frahlg.substack.com

  1. Europe’s Grid Plumbing Crisis

    1D AGO

    Europe’s Grid Plumbing Crisis

There’s a strange paradox at the heart of Europe’s energy transition. On paper, 2025 looks like a victory lap. Renewables generated nearly half of the EU’s electricity. Wind and solar alone overtook fossil fuels. A decade ago, that would have sounded like science fiction. We have become incredibly good at harvesting energy from the sky and the wind. And yet, at the same time, we are wasting staggering amounts of it. This is not a generation crisis. It’s a plumbing crisis.

The Ferrari Engine in a Model T Chassis

Here’s the uncomfortable truth: we built a 21st-century renewable generation fleet and plugged it into a 20th-century grid.

Modern renewables are:

* Variable – the sun and wind don’t follow office hours
* Distributed – rooftops, fields, offshore wind farms
* Digital – powered by inverters and electronics

But the grid they connect to was designed for something entirely different:

* Centralized, giant power plants
* One-way power flow
* Passive consumers
* Heavy mechanical inertia

It was built for coal plants pushing electrons in one direction. Now power flows in every direction, from millions of rooftops, batteries, EVs, and wind farms. The infrastructure was never designed for that. And the consequences are becoming impossible to ignore.

Curtailment: Paying for Power We Throw Away

When the grid can’t move electricity from where it’s produced to where it’s needed, operators do something painful. They curtail. That means telling wind farms or solar plants to stop producing electricity—even when the wind is blowing and the sun is shining. And because of contractual agreements, we often pay them anyway.

In 2025:

* Germany, France, and the Netherlands curtailed 3.9 TWh of renewables.
* Great Britain alone wasted 10 TWh—roughly enough to power every home in London for a year.
* Congestion costs across seven key European countries hit €7.2 billion in a single year.

Let that sink in.
We are paying billions to shut off clean power and then paying again to turn on fossil fuel plants elsewhere because the wires can’t carry the energy to where demand is. This isn’t an environmental failure. It’s a coordination failure.

Negative Prices Are Not a Good Thing

You may have seen headlines about negative electricity prices. It sounds like abundance. It sounds like victory. It’s not.

In 2025:

* Germany experienced 539 hours of negative prices.
* The Netherlands saw 584 hours.

Negative prices happen when there’s too much electricity in the wrong place at the wrong time and not enough flexibility to absorb it. The market is essentially screaming: “Someone please use this power.” “Someone please stop producing.” But we don’t have enough storage. We don’t have enough transmission. And we don’t have enough flexible demand. So the price collapses. Negative prices are not a sign of success. They are a stress signal.

The Iberian Blackout: When Physics Fought Back

In April 2025, Spain and Portugal experienced a massive blackout affecting 60 million people. It started with what should have been a manageable grid fault. But that day, the system had very low inertia. Old grids relied on massive spinning turbines in coal and nuclear plants. Those rotating machines act like giant flywheels. They resist sudden changes and stabilize frequency. Solar panels don’t spin. They connect through inverters. They have zero physical inertia. When the disturbance hit, frequency collapsed almost instantly. Protective systems cascaded. Spain and Portugal were electrically islanded to protect the rest of Europe.

This was a wake-up call. A grid dominated by electronics behaves differently than a grid dominated by heavy mechanical systems. If we don’t design for that difference, we pay for it.

How We Got Here: The Postwar Blueprint

To understand the problem, we have to go back to 1949.
European engineers toured the United States under the Marshall Plan and returned with a clear philosophy:

* Big power plants
* Centralized generation
* One-way transmission
* Predict and provide

It worked. It rebuilt Europe. It powered decades of growth. But it baked in a core assumption: energy flows from the center outward. Today, that assumption is broken. And nearly 40% of Europe’s distribution grids are over 40 years old. They were not built for:

* EV charging
* Rooftop solar
* Bidirectional power flow
* Real-time flexibility

We added millions of high-performance renewable assets. We did not upgrade the roads.

The Queue: 1,700 GW Waiting in Line

Across 16 European countries, 1,700 GW of renewable capacity is stuck in grid connection queues. That’s three times what Europe needs to meet its 2030 climate targets. Why? Because:

* Grid studies are slow and bureaucratic
* Transmission build-out takes 8–15 years
* Developers submit speculative “ghost” projects to secure queue spots
* Permitting and public opposition delay everything

Solar farms can be built in 1–3 years. Transmission lines take a decade. We’re trying to run a sprint while tied to a marathon walker.

The Perverse Incentive: CapEx Bias

Here’s the structural problem few people talk about. Grid utilities earn regulated returns on capital expenditure (CapEx). That means:

* Build a €1 million transmission line → earn guaranteed returns for decades
* Install €100,000 worth of congestion-solving software → no profit margin

Under most regulatory systems, utilities are incentivized to build concrete, not code. Even if software solves the problem faster and cheaper. This is not a technical limitation. It’s a regulatory one. If we don’t fix the incentive structure, we will keep choosing the slow, expensive option.

The Software-Defined Grid

The future grid needs four layers:

1. Sensing. You cannot manage what you cannot measure. We need real-time visibility at the edge of the network.
2. Control & Automation. Grid events unfold in milliseconds. Human reaction times are irrelevant. Automation must stabilize frequency and manage flows instantly.
3. Markets. Flexibility must be valued. If your EV or home battery provides balancing services, you should be paid. Price signals need to reach the edge.
4. Physical Build. Yes, we still need more wires, especially high-capacity transmission corridors. But they must be built strategically.

This is not hardware versus software. It’s hardware + software + coordination.

Dynamic Line Rating: Free Capacity We’re Ignoring

Today, most transmission lines operate using static ratings based on worst-case scenarios (e.g., hot, windless summer days). But in reality, lines are often cooler and wind-cooled. Dynamic Line Rating (DLR) uses real-time weather and sensor data to increase line capacity safely. The result? 30–40% more capacity from the same wire. Without building anything new. That’s not incremental. That’s transformative.

Virtual Power Plants: Coordination at the Edge

Instead of building new gas plants to meet peaks, we can aggregate:

* EV batteries
* Home storage
* Smart water heaters
* Flexible industrial loads

Thousands of small devices can act like one large power plant. A virtual power plant (VPP) can:

* Discharge during peak demand
* Absorb surplus during negative price events
* Provide balancing services

At roughly one-tenth the cost of building new physical infrastructure. This is coordination as infrastructure.

The Global Race: China Is Moving Faster

In 2024 alone, China invested $83 billion in its grid. It has built 37 of the world’s 39 ultra-high voltage (UHV) transmission lines. Europe has zero. UHV lines move enormous amounts of power across thousands of kilometers with minimal losses. This is not just about climate. It’s about industrial competitiveness. If cheap renewable energy can’t reach European industry, manufacturing migrates to where it can. The grid is now geopolitics.
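The Dynamic Line Rating idea above reduces to a thermal balance: a wire may carry as much current as its cooling allows. Here is a minimal sketch of that balance. The structure follows the standard steady-state approach (the real method is IEEE Std 738), but every coefficient below is an invented placeholder, not real conductor data:

```python
import math

# Toy steady-state thermal balance for an overhead line:
#   convective cooling + radiative cooling = solar heating + I^2 * R
# All constants are illustrative stand-ins, not a real conductor datasheet.

R_AC = 7.3e-5        # ohm/m, AC resistance at operating temperature (hypothetical)
T_CONDUCTOR = 75.0   # deg C, maximum allowed conductor temperature
Q_RADIATIVE = 10.0   # W/m, radiative cooling (toy constant)
Q_SOLAR = 15.0       # W/m, solar heating (toy constant)

def ampacity(wind_speed_ms, ambient_c):
    """Max current (A) that keeps the conductor at T_CONDUCTOR."""
    # Crude forced-convection term: cooling grows with wind speed and
    # with the conductor-to-ambient temperature difference.
    q_convective = (3.0 + 0.5 * wind_speed_ms) * (T_CONDUCTOR - ambient_c)
    net_cooling = max(q_convective + Q_RADIATIVE - Q_SOLAR, 0.0)
    return math.sqrt(net_cooling / R_AC)

static = ampacity(0.6, 40.0)   # worst-case assumption: hot day, almost still air
dynamic = ampacity(2.0, 25.0)  # measured conditions: mild breeze, cooler air
print(f"static {static:.0f} A, dynamic {dynamic:.0f} A, "
      f"uplift {100 * (dynamic / static - 1):.0f}%")
```

Even this toy model shows the mechanism: replacing worst-case assumptions with measured weather yields roughly a third more current from the identical wire, which is where the 30–40% figure comes from.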
The €584 Billion Question

Europe needs approximately €584 billion in grid investment by 2030. That sounds enormous. But every euro invested in grid infrastructure saves roughly two euros in system costs by:

* Reducing curtailment
* Avoiding fossil backup
* Lowering congestion costs
* Increasing efficiency

The cost of inaction is far higher.

The Real Political Question

We talk about NIMBY—Not In My Backyard. But the coming battle is deeper:

* Not under my street (new cables)
* Not in my view (new pylons)
* Not near my town (new substations)

We have to decide: are we willing to build visible infrastructure to enable an invisible energy revolution? Or will we let clean energy abundance die in a bureaucratic waiting room?

From Abundance to Coordination

We no longer have an energy scarcity problem. We have a coordination problem. The next decade of the energy transition will not be defined by how many solar panels we install. It will be defined by:

* How intelligently we move electricity
* How well we coordinate distributed assets
* How fast we reform regulation
* How effectively we digitize the grid

The future of clean energy is not just generation. It’s plumbing. And the question is simple: will we upgrade it in time?

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    35 min
  2. The Battery Is the Bucket

    1D AGO

    The Battery Is the Bucket

On February 18th, 1745, in Como, Italy, a child was born who would quietly alter the trajectory of civilization. Two centuries later, the unit of electric potential — the volt — would carry his name. But the real story doesn’t begin with a nobleman in northern Italy. It begins with a dead frog.

In the late 1700s, Luigi Galvani observed something uncanny: a dissected frog’s leg twitching violently when touched with two different metals. He believed he had discovered animal electricity — a vital force inside living tissue. Alessandro Volta disagreed. Volta argued the frog wasn’t the source of the electricity. It was merely a conductor — a wet, salty bridge between two metals. To prove it, in 1800 he stacked alternating discs of zinc and copper separated by brine-soaked cardboard. When he connected the top and bottom, current flowed — continuously. Not a spark. Not static. A steady stream. The world’s first battery.

That stack of metal and wet paper — the voltaic pile — was the ancestor of every lithium-ion pack on Earth today. And I believe we are living through a moment just as significant as that day in 1800.

The Constraint That Shaped 300 Years

For three centuries, our civilization has had one fundamental limitation: we could produce energy, but we could not store it. Electricity had to be used the exact second it was generated. The grid had to balance supply and demand every millisecond. Turn on a light, and somewhere a gas turbine spins slightly faster. We were tethered to fuel.

Batteries cut the tether. Energy storage is not a convenience feature. It is a structural transformation. For the first time in history, we can decouple energy production from energy consumption at scale. And the cost curves are collapsing. In 2010, lithium-ion battery packs cost around $1,100 per kilowatt-hour. In early 2026, they sit around $108 per kilowatt-hour. A drop of roughly 90%. If your rent or groceries had dropped 90% in fifteen years, you would live in a different world.
This is not incremental improvement. This is the precondition for energy abundance.

Why Batteries Were Stuck for a Century

From roughly 1860 to 1990, battery energy density barely doubled. Over a century of stagnation. Why? Because batteries don’t follow Moore’s Law. A computer chip moves electrons — nearly massless particles. You can miniaturize pathways for electrons endlessly. A battery moves ions. Lithium ions. Sodium ions. Lead ions. Ions are physical objects with mass and volume. You cannot shrink atoms. If you want more energy, you need more material. You are constrained by thermodynamics — by the chemical bonds themselves.

That’s why gasoline dominated the 20th century. One kilogram of gasoline holds around 12,000 watt-hours of energy. A kilogram of early lead-acid batteries held maybe 30. Physics was ruthless. Until lithium-ion.

The Three-Person Relay That Changed Everything

Lithium-ion wasn’t one breakthrough. It was a relay race across decades.

* M. Stanley Whittingham discovered intercalation — lithium ions sliding between layers of a crystal without destroying it.
* John Goodenough doubled the voltage by introducing lithium cobalt oxide.
* Akira Yoshino removed volatile lithium metal and replaced it with carbon, making the system safe.

And in 1991, Sony commercialized it. Not in a car. In a camcorder. The early lithium-ion battery cost about $7,500 per kilowatt-hour. Astronomical. But consumers weren’t buying energy — they were buying memories. Longer filming time. Portability. Experience. Consumer electronics funded the factories. Your old camcorder paid for the battery in your EV.

The Learning Curve Miracle

The collapse in battery prices is not random. It follows Wright’s Law: every time cumulative production doubles, costs fall by a predictable percentage. For lithium-ion, that learning rate sits roughly between 18% and 28%. Gigafactories didn’t just scale production — they accelerated learning.
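Wright’s Law is easy to sanity-check numerically. A minimal sketch, assuming a 20% learning rate (inside the 18–28% range quoted above) and starting from the ~$7,500/kWh figure of 1991:

```python
def wrights_law(initial_cost, doublings, learning_rate=0.20):
    """Cost after a number of doublings of cumulative production.
    Each doubling cuts cost by the learning rate (assumed 20% here)."""
    return initial_cost * (1 - learning_rate) ** doublings

# From Sony's ~$7,500/kWh camcorder-era cells toward today's ~$108/kWh packs:
for n in (5, 10, 15, 19):
    print(f"{n:2d} doublings -> ${wrights_law(7500, n):,.0f}/kWh")
```

At a 20% learning rate, roughly nineteen doublings of cumulative production carry you from $7,500 to about $108 per kilowatt-hour, which is why three decades of camcorders, laptops, phones, and EVs add up to exactly this kind of collapse.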
Thousands of micro-improvements: thinner foils, faster rollers, optimized coatings, better yields. China’s massive overcapacity — often criticized — has actually intensified price competition and accelerated cost decline. This is how revolutions compound.

The Chemistries of Abundance

Lithium-ion is no longer one chemistry. The future is diversification toward abundance.

LFP — Lithium Iron Phosphate

* No cobalt.
* No nickel.
* Iron and phosphate — cheap, globally abundant.
* Longer lifespan (thousands of cycles).
* Much safer thermal characteristics.

Modern cell-to-pack designs have compensated for lower cell energy density through structural innovation. LFP is no longer the “cheap and weak” option. It’s becoming the workhorse of global electrification.

Sodium-Ion

Sodium is everywhere. It’s in salt. In oceans. In the Earth’s crust. It’s not energy-dense enough for high-performance cars yet, but for:

* City vehicles
* Scooters
* Grid storage
* Cold climates

it’s extraordinary. It can be shipped at zero volts. It performs well in extreme cold. It avoids many supply-chain bottlenecks. This is true materials abundance.

Iron-Air

For grid-scale, long-duration storage, iron-air may be transformative. It literally rusts and unrusts iron to store energy. Heavy. Slow. Perfect for the grid. Target cost: around $20 per kilowatt-hour. At those prices, storage becomes almost infrastructural — like concrete.

The Grid Is Already Changing

This is not theoretical. In California, batteries have become the largest contributor during evening peak demand. They are flattening the infamous duck curve by storing midday solar and discharging after sunset. China installed staggering amounts of storage capacity in 2025. Saudi Arabia and Abu Dhabi are building solar-plus-storage systems designed for 24/7 dispatchability. The old critique — “renewables are intermittent” — weakens when storage becomes cheap.
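The duck-curve flattening described above is, at its core, a scheduling problem: charge through the midday solar glut, discharge across the evening ramp. A minimal greedy sketch with invented numbers (real operators co-optimize against prices, state of charge, and degradation):

```python
def flatten(net_load_mw, power_mw, hours_of_storage, efficiency=0.90):
    """Greedy battery schedule: charge in the lowest-net-load hours,
    discharge in the highest. Returns the adjusted hourly net load."""
    ranked = sorted(range(len(net_load_mw)), key=lambda h: net_load_mw[h])
    charge = set(ranked[:hours_of_storage])      # midday solar glut
    discharge = set(ranked[-hours_of_storage:])  # evening peak
    out = []
    for h, load in enumerate(net_load_mw):
        if h in charge:
            out.append(load + power_mw)               # battery is extra load
        elif h in discharge:
            out.append(load - power_mw * efficiency)  # serve the peak, minus losses
        else:
            out.append(load)
    return out

# A stylized 24-hour "duck" net load (MW): midday dip, steep evening ramp.
duck = [500, 480, 460, 450, 440, 450, 480, 520, 480, 400, 300, 250,
        220, 250, 320, 450, 600, 750, 850, 900, 870, 800, 700, 600]
flat = flatten(duck, power_mw=100, hours_of_storage=4)
print(max(duck), "->", max(flat))  # evening peak drops from 900 to 810 MW
```

Even this naive rule shaves the evening peak and fills the midday valley at once, which is exactly the dual service batteries are now providing on real grids.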
At projected costs of $32–$54 per kilowatt-hour by 2030, building new solar-plus-storage may become cheaper than fueling existing gas plants. That’s not moral persuasion. That’s spreadsheet logic.

Energy Abundance and the Jevons Question

There is a paradox in economics: as efficiency improves, consumption often increases. If energy becomes cheap, we will use more of it. That is not necessarily a problem. Cheap energy enables:

* Desalination at scale.
* Industrial recycling.
* Indoor agriculture.
* AI infrastructure.
* Material synthesis.
* Climate adaptation technologies.

Every time energy became cheaper in history — from wood to coal to oil — human living standards rose. The difference now is that the primary source is stellar. The sun sends 173,000 terawatts to Earth continuously. The constraint was never generation. It was storage.

Circularity and Resilience

A “dead” battery is not waste. It is high-grade ore. Battery recycling can recover over 95% of critical materials. Before recycling, batteries can serve second-life roles in stationary storage. We move from extractive mining toward circular supply chains. Add AI into this ecosystem — optimizing dispatch, predicting degradation, orchestrating millions of distributed assets — and the system becomes self-balancing. An Internet of Energy.

The Bigger Frame

Civilization is a heat engine. For thousands of years, that engine ran by burning things. Now we are transitioning to storing sunlight. The battery is the bucket. And we are learning to make buckets from iron, sodium, carbon — from common materials, not rare ones. When energy becomes abundant:

* Water can become abundant.
* Food can become abundant.
* Computation can become abundant.
* Intelligence can become abundant.

This isn’t utopian thinking. It’s where the cost curves point. The next time you charge your phone, plug in your car, or see a solar panel on a roof — don’t just see a device.
See the early architecture of a civilization untethered from combustion. Volta would be shocked. And we’re just getting started. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    43 min
  3. The Operating System of Execution

    2D AGO

    The Operating System of Execution

Coordinated with Fredrik

There is a fundamental tension at the heart of every ambitious company. On one side: the blue-sky vision. On the other: the gritty, operational reality of making it real. Engineers understand this tension intuitively. Physics doesn’t negotiate. Gravity doesn’t care about your roadmap. Thermodynamics ignores your quarterly goals. If your calculations are off, the bridge collapses. Leadership, however, is fuzzier. There are no immutable equations. No compile errors flashing red before failure. Just drift.

This episode is about how to remove that drift. It’s about execution. And it’s rooted in the philosophy behind Measure What Matters by John Doerr—and the engineering rigor of Andy Grove.

The Activity Trap

Andy Grove, former CEO of Intel, had a deep disdain for what he called the activity trap. The activity trap is motion without progress. Lights on late at night. Slack buzzing. Meetings stacked. Features shipping. But no real value created. Grove’s insight was simple but brutal: knowledge is potential energy; execution is kinetic energy. Most organizations measure effort. Few measure output. That distinction is the entire game.

Operation Crush: Alignment as Survival

In 1980, Intel was threatened by Motorola’s 68000 chip. It was faster. Cleaner. Technically superior. The default engineering response would have been: build a better chip. But chips take years. Grove didn’t change the product. He changed the battlefield. He launched Operation Crush. Objective: crush Motorola. Key Result: win 2,000 design wins in one year. That specificity mattered. Not “increase market share.” Not “improve positioning.” But 2,000. Engineering pivoted to support sales. Marketing pivoted to target executives instead of programmers. The entire organism aligned around one measurable outcome. They hit it. The lesson: a technical deficit can be overcome by organizational alignment. Execution is leverage.
The Four Superpowers of OKRs

At the center of this operating system are OKRs: Objectives and Key Results. They are not to-do lists. They are not performance metrics. They are directional control systems. Let’s break down the four “superpowers.”

1. Focus and Commit

High-performing teams set three to five objectives max. More than that is dilution. Focus is not about remembering what to do. It is about killing what not to do. The discipline is brutal. If your goal is engagement, you may have to say no to features users ask for—if they reduce engagement. Even good ideas must die if they don’t serve the objective. Focus requires commitment. If leadership wavers, the system collapses. If the CEO’s calendar doesn’t reflect the OKRs, neither will the company’s behavior. Signal must match structure.

2. Align and Connect

As companies scale, silos form. Engineering optimizes for elegance. Sales optimizes for revenue. Marketing optimizes for visibility. Without alignment, each team finds a local maximum while the company misses the global one. OKRs solve this through radical transparency. Everyone’s goals are visible. Dependencies become explicit instead of accidental. The model isn’t purely top-down. Healthy systems are roughly:

* 50% top-down direction
* 50% bottom-up initiative

This balance protects innovation while maintaining coherence. It’s how Google built Gmail through 20% time—bottom-up ideas aligned to top-level vision.

3. Track for Accountability

Tracking is not about punishment. It is about feedback loops. A healthy OKR culture treats missed goals as data, not moral failure. Google famously scores OKRs on a 0.0–1.0 scale. And here’s the twist: 0.6–0.7 is the sweet spot. If you’re consistently scoring 1.0, your goals are too safe. A 0.7 means you stretched. A red is not shame. A red is signal. Some organizations institutionalize this through rituals like “selling your reds”—publicly sharing failures to invite support and problem-solving.
That level of psychological safety is not optional. It is infrastructure.

4. Stretch for Amazing

There are two kinds of objectives:

* Committed objectives: operational promises. Must hit 1.0. (Example: payroll accuracy, uptime guarantees.)
* Aspirational objectives: moonshots. You don’t know how to achieve them when you set them.

This was the philosophy behind Google Chrome under Sundar Pichai. Early goals were missed. They doubled down. Then doubled again. Eventually, they won. Stretch goals force non-linear thinking. If the target is incremental, solutions remain incremental. If the target is absurd, the architecture must change. This is how YouTube moved from counting clicks to measuring watch time—and set a billion-hours-per-day target. The number wasn’t the point. The forced re-architecture was.

The Human Operating System: CFRs

OKRs without human infrastructure fail. This is where CFRs come in:

* Conversations
* Feedback
* Recognition

Annual reviews are too slow. Markets move weekly. Continuous check-ins replace yearly judgment. Leadership isn’t just about metrics. It’s about understanding:

* What energizes your people?
* What drains them?
* Where do they want to go?

Performance without humanity becomes brittle. Humanity without performance becomes chaos. The system requires both.

Structure Does Not Kill Soul

Even activists discovered this. Bono once feared that introducing structure into advocacy would dilute passion. Instead, it amplified it. Emotion without targets is noise. Passion with metrics becomes force. Structure is not bureaucracy. It is a vessel.

Engineering the Organization

Here is the synthesis. If you are building complex systems—power grids, distributed energy resources, intelligent controllers—you would never operate without telemetry. You measure load. You measure voltage. You monitor frequency. You debug the system continuously. Your organization deserves the same rigor. OKRs are telemetry for ambition. They translate vision into velocity.
They convert hallucination into measurable progress. And they expose drift before collapse.

The Final Question

Look at your current dashboard. Is everything green? If so, you are probably managing the probable. Greatness lives in the red. The question isn’t whether you’re missing goals. The question is whether your goals are bold enough to miss. Execution is everything. The operating system is optional. The outcomes are not.

—Coordinated with Fredrik

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    29 min
  4. Boiling the Ocean: Why Incremental Thinking Is Now the Most Dangerous Strategy

    FEB 8

    Boiling the Ocean: Why Incremental Thinking Is Now the Most Dangerous Strategy

There is a phrase that has quietly governed modern management culture for decades: “Don’t boil the ocean.” It’s the sentence that appears whenever ambition starts to feel uncomfortable. When scope expands. When a system-level question threatens a quarterly roadmap. It’s framed as wisdom. Prudence. Maturity. But what if that advice—so deeply internalized that we barely question it anymore—has quietly become dangerous?

This episode, and this essay, explore a contrarian but increasingly unavoidable thesis: in an era of collapsing intelligence costs, not boiling the ocean is how you lose. This is not a motivational slogan. It’s an economic and engineering argument. To understand why, we need to rewind nearly 160 years—back to coal mines, steam engines, and a mistake humanity has repeated every time a general-purpose resource becomes radically cheaper.

The Original Mistake: When Efficiency Backfires

In 1865, at the height of the British Industrial Revolution, a young economist named William Stanley Jevons published a book called The Coal Question. At the time, Britain was anxious about energy dominance. Coal powered everything: factories, railways, ships, empire. The assumption among policymakers was simple and intuitive: as engines become more efficient, total coal consumption will fall. After all, James Watt’s steam engine was dramatically better than the old Newcomen design—using roughly one quarter of the fuel for the same mechanical work. Efficiency should lead to conservation. Except it didn’t.

Jevons observed something deeply counterintuitive: despite massive efficiency gains, coal consumption didn’t fall at all. It exploded. UK coal production grew steadily for decades—rising from ~5 million tons in 1750 to over 100 million tons by the 1860s, eventually peaking near 300 million tons in the early 20th century.
Jevons summarized the paradox succinctly: “It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth.”

This is what we now call the Jevons Paradox: when a general-purpose resource becomes more efficient and cheaper, total consumption increases—because new uses become economically viable. Efficiency doesn’t cap demand. It unlocks it.

Latent Demand and the Threshold of Viability

Why does this happen? Because demand for fundamental resources isn’t fixed—it’s latent. When steam power was expensive and inefficient, it was used only for extreme, high-value tasks (like pumping water out of deep coal mines). Once efficiency improved, the activation energy dropped. Suddenly it made sense to:

* Put engines in textile mills
* Power ships and locomotives
* Mechanize entire industries

The question was never “How much steam power do humans need?” The question was “At what price does entirely new behavior emerge?” That same dynamic has repeated itself over and over again.

Light: From Luxury to Pollution

Nothing illustrates this better than the history of light. In the 1300s, producing a fixed amount of illumination—about one million lumen-hours—cost the modern equivalent of £40,000. Light was so expensive that people rationed candles the way we ration fuel during wartime. By 2006, that same amount of light cost £2.90. A 14,000× reduction in real cost. So did we save energy? No. Between 1800 and 2000, per-capita light consumption increased ~6,500×. We didn’t stop at lighting rooms. We lit cities. Highways. Stadiums. Parking lots at 3 a.m. We put light into pockets, shoes, keyboards, architecture. We created an entirely new problem—light pollution—because light became too cheap to care about. LEDs repeated the pattern again:

* Lower wattage per bulb
* Explosive growth in total lighting

Efficiency didn’t restrain usage. It expanded imagination.
Doing More With Less: Buckminster Fuller’s Missing Half

This is where Buckminster Fuller enters the story. Fuller described a long-term technological trajectory he called ephemeralization: doing more and more with less and less—until eventually you can do everything with almost nothing. His favorite example was bridges.

* Roman bridges: massive stone, brute force, pure compression
* Iron bridges: lattice structures, geometry, less material
* Steel suspension bridges: tension, elegance, minimal mass
* Eventually: radio waves, fiber optics—connection without material

The function remains. The atoms disappear. At first glance, Fuller seems to contradict Jevons. But he doesn’t. They describe the same system from different angles:

* Ephemeralization → less material per unit of function
* Jevons Paradox → vastly more total units once the function becomes cheap

We didn’t save copper by inventing fiber optics. We used orders of magnitude more communication.

The Great Bet: Ingenuity vs. Scarcity

This tension came to a head in the 1970s. On one side:

* The Club of Rome
* Paul Ehrlich
* Limits to Growth, The Population Bomb
* A zero-sum worldview: finite resources, inevitable collapse

On the other:

* Julian Simon
* Buckminster Fuller
* The belief that human ingenuity is the ultimate resource

In 1980, Simon challenged Ehrlich to a bet. Ehrlich chose five industrial metals—copper, chromium, nickel, tin, tungsten—and predicted prices would rise over the next decade as population exploded. Instead, prices fell by 57%. Why?

* Substitution (fiber replaces copper)
* Better extraction
* Recycling
* Design efficiency

Ingenuity outran depletion.

Intelligence Enters the Equation

All of this matters because we are now repeating the same mistake—but with something far more powerful than coal or light or copper. We are making intelligence cheap. The cost of AI inference has been collapsing at an unprecedented rate—by a factor of hundreds in just a few years.
What cost ~$20 per million tokens in 2022 costs cents today. This is ephemeralization of cognition. And if Jevons holds—as it always has—then the implication is unavoidable: cheap intelligence will not reduce work. It will explode the scope of what gets built. The fear narrative—“AI will take jobs”—is the same zero-sum thinking that lost the Simon-Ehrlich bet. It assumes:

* A fixed amount of code
* A fixed amount of analysis
* A fixed amount of problem-solving

History says the opposite. When the cost of thinking drops, we attempt problems that were previously unthinkable.

The Real Bottleneck Has Moved

When software was expensive, the bottleneck was execution. Now execution is cheap. The bottleneck is vision. This is why incrementalism is suddenly dangerous. Optimizing for 1.05×—cutting support costs, shaving headcount, marginal automation—is defensive thinking applied to an abundance problem. As Astro Teller famously put it: “It’s often easier to make something 10× better than 10% better.” Why? Because 10% forces you to argue with legacy constraints. 10× forces you to throw them out.

Energy, AI, and the Literal Ocean

In energy, this becomes especially clear. If AI helps unlock controlled fusion—abundant, clean, baseload power—the question isn’t “How much cheaper is my electricity bill?” The question is: what becomes possible when energy is no longer the constraint? Desalination at planetary scale. Carbon capture as infrastructure. Terraforming, not conservation theater. This is Jevons again—at civilizational scale.

The Actual Choice

Buckminster Fuller framed it starkly: utopia or oblivion. Not because technology guarantees utopia—but because fear guarantees stagnation. The tools are arriving whether we are psychologically ready or not.
The only remaining decision is whether leaders choose: * Scarcity thinking and protectionism * Or positive-sum ambition and construction So the real strategic question becomes: Where are you still optimizing for 1.05× when the physics now allow 10×? What ocean are you refusing to boil—not because it’s impossible, but because it used to be? Because the water is ready. The apparatus exists. And timid incrementalism is no longer neutral—it’s a risk. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    34 min
  5. Wire the Planet or Wire the Solar System?

    FEB 7

    Wire the Planet or Wire the Solar System?

    Coordinated with Fredrik — Episode Recap On January 30, 2026, SpaceX filed what looked like the most boring piece of regulatory paperwork imaginable. An FCC application. A string of numbers. The kind of thing you scroll past. Except this one was for permission to launch one million orbiting data centers. And in the preamble, they called it “a first step towards becoming a Kardashev Type II civilization.” That is not normal language for a permit application. That is a declaration of intent for a different species. This episode digs into what that filing actually means, and why it forces anyone working in energy to confront a question that used to be reserved for philosophy seminars: are we wiring the planet, or wiring the solar system? Three visions, one fork in the road The episode walks through three competing models for where energy goes from here. They sound like they belong in different centuries, but all three are showing up on balance sheets right now. The Earthbound Optimizer. This is Professor Mark Jacobson’s model out of Stanford. His thesis is that we can run 100% of civilization on wind, water, and solar. Not just electricity. Everything. Transport, heating, industry, agriculture, the military. All of it, with existing technology. No fusion. No miracle batteries. No carbon capture. The physics behind it is surprisingly straightforward. Combustion is terrible at converting energy into useful work. A gasoline car turns only 17-20% of its fuel into motion. The rest is heat and noise. An electric motor runs at 90-95% efficiency. A heat pump moves three to four units of heat for every one unit of electricity you put in. Jacobson calculates that simply by electrifying everything, we cut global energy demand by 56.4%. The upfront cost is around $61.5 trillion, but annual energy costs drop from $17.8 trillion to $6.6 trillion. That is a six-year payback with an infinite tail of savings. Any board would fund that project in a heartbeat. 
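The payback claim above follows from simple arithmetic. A back-of-envelope sketch using the figures quoted in the recap (this is not Jacobson’s actual model, just a sanity check of the headline numbers):

```python
# Sanity-check of the quoted electrification figures (all in trillions of USD).
upfront_cost = 61.5          # one-time build-out cost
annual_cost_before = 17.8    # business-as-usual annual energy cost
annual_cost_after = 6.6      # annual energy cost after full electrification

annual_savings = annual_cost_before - annual_cost_after   # 11.2 T$/year
payback_years = upfront_cost / annual_savings             # ~5.5 years

print(f"Annual savings: ${annual_savings:.1f}T")
print(f"Simple payback: {payback_years:.1f} years")
```

Rounding up, that is the "six-year payback" in the recap, after which the savings continue indefinitely.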
So why haven’t we done it? Because the model assumes 80% of daily electrical loads can be shifted within an eight-hour window. Charging your car at 2 AM instead of 6 PM? Easy. Asking a steel mill or a data center training an AI model to pause for eight hours? That is where the model meets reality. The Orbital Industrialist. This is the Musk play. He looked at the seven-year wait for a new substation in Virginia, the zoning fights, the interconnection queues, and decided that the grid is a political problem. Rockets are a physics problem. He prefers physics problems. In a sun-synchronous orbit, solar panels get 99% uptime. No clouds, no night, no atmosphere scattering the light. A panel in space generates six to eight times more energy per year than the same panel on Earth. The whole idea is to move the heavy compute, the training runs that take months and consume staggering amounts of power, off the planet entirely. Train the model in orbit where energy is constant and free. Beam the finished weights back to Earth. Learning happens in the sky. Thinking happens on your phone. The catch is cooling. Space is a perfect insulator. There is no air for convection. The only way to dump heat is radiation, and to radiate at the scale of gigawatt data centers you would need radiator panels the size of Gibraltar. Silicon chips melt long before the radiator reaches efficient operating temperature. You might need entirely new semiconductor materials, gallium arsenide or silicon carbide, that can run at 300-400 degrees Celsius. It is not just about launching servers. It might mean reinventing the chip. The whole bet rides on Starship driving launch costs from $2,700 per kilogram down to $200, maybe eventually $10. At $10 per kilo, you can launch heavy, cheap, standard server racks. Mass stops being a constraint. The engineering tradeoffs change completely. The Cosmic Architect. This is the Dyson swarm endgame. 
Not a solid shell around the sun (that is physically impossible), but trillions of individual satellites orbiting in dense formation, each capturing a sliver of sunlight. Musk’s million satellites would capture roughly 0.00000000004% of the sun’s output. A rounding error on a rounding error. But the expansionist logic says once you start, you do not stop. The theoretical blueprint is called the Mercury Loop. You land self-replicating mining robots on Mercury, which is rich in metals and sits right next to the sun. They mine the surface, build thin-foil solar collectors, and use electromagnetic railguns to shoot them into orbit. Those collectors beam energy back down to power more mining. It is an exponential feedback loop. Researchers at Oxford calculated you could dismantle the entire planet in about 31 years. Even at that scale, thermodynamics wins. The Landauer limit means every bit erased generates heat. A Dyson swarm eventually cooks itself if it thinks too hard. The Jevons Paradox sitting in the middle of all this This is the tension that runs through the entire episode and connects directly to how any energy company should think about the next decade. Jacobson argues that efficiency leads to sufficiency. Electrify everything, coordinate the loads, and demand goes down. We can get by with less. The expansionist view says the opposite. William Stanley Jevons noticed in the 19th century that when steam engines got more efficient, coal consumption went up, not down. Cheaper energy means more uses for it. If you unlock cheap orbital compute, demand does not flatten. It explodes into virtual worlds, planetary simulations, uses we cannot even conceive of yet. If Jacobson is right, energy companies are optimization businesses. You squeeze value out of a more or less static system. If Musk is right, you are preparing for a grid that needs to double, then triple, then quadruple. It is not a conservation problem. It is a throughput problem. 
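The "dismantle a planet in 31 years" figure above is less about the robots than about repeated doubling. A toy calculation makes the point (the seed mass here is an invented assumption for illustration, not a number from the Oxford study):

```python
import math

MERCURY_MASS_KG = 3.3e23   # approximate mass of Mercury
seed_kg = 1_000.0          # assumed starting mass of self-replicating machinery

# How many doublings until the machinery has processed the whole planet?
doublings = math.log2(MERCURY_MASS_KG / seed_kg)   # ~68 doublings

# If the whole job takes 31 years, each doubling takes only a few months.
doubling_time_years = 31 / doublings
print(f"{doublings:.0f} doublings, one every {doubling_time_years * 12:.1f} months")
```

That is the character of exponential feedback loops: a modest replication rate, sustained, eats a planet in a human lifetime.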
The one thing all three visions agree on Whether the power comes from a rooftop in Palo Alto, a satellite 500 kilometers up, or a ring of collectors around the sun, the bottleneck is always the same: coordination. Jacobson’s model only works if 80% of load is flexible. That requires massive demand response, virtual power plants, automated dispatch. Space solar needs laser downlinks, ground stations, collision avoidance for a million moving objects, all managed in real time. Even the Dyson swarm needs orchestration at a scale that makes today’s grid look like a toy. The hardware is not the hard part. The connective tissue is. California proved this in 2024. They hit 117% renewable coverage in some intervals. Battery storage grew 2,100% in five years. But they also threw away 3.4 million megawatt hours of clean energy because they could not move it in space or time. Germany spent three billion euros just on redispatch, paying plants to turn down in one place and up in another to manage congestion. The electrons are there. The infrastructure to get them to the right place at the right time is what is lagging. The ownership question nobody wants to talk about Jacobson’s world is distributed. Rooftop solar, community wind, local batteries. Hard to monopolize sunshine when it falls on everyone’s roof. The orbital and Dyson worlds are centralized by nature. You need to be a trillion-dollar entity to launch rockets at scale. You need to own the mass drivers on Mercury. It recreates the dynamics of the oil industry. A few players control supply, everyone else is a customer. We are choosing between energy democracy and energy tycoons. Or some hybrid of the two. So what does this actually mean? We receive 10,000 times more energy from the sun than we currently use. The scarcity is not natural. It is a scarcity of infrastructure and coordination. Musk’s orbital play is, at its core, a hedge against our own dysfunction. 
A bet that we are too slow at building transmission lines, too tangled in zoning fights, too bad at aggregating distributed resources to keep up with what AI demands. So he is routing around the zoning board entirely. Maybe he is right about that. But today, right now, the fight is still on Earth. It is the last 10% problem. It is making the load follow the sun. It is the boring, unglamorous work of connecting millions of devices into something that behaves like a single coordinated system. Whether the future is on Earth or in orbit, the operating system for the energy transition is the same: coordination software, protocols, aggregation. The unsexy layer that makes any of this actually work. We are currently deciding, in boardrooms and regulatory filings, whether to wire the planet or wire the solar system. And every battery you aggregate, every flex load you optimize, is a vote in that election. Keep coordinating. Listen to the full episode on [Coordinated with Fredrik]. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    37 min
  6. The Spiral: Why the Most Important Intellectual War of Our Time Is Between People Who See Walls and People Who See Launchpads

    FEB 6

    The Spiral: Why the Most Important Intellectual War of Our Time Is Between People Who See Walls and People Who See Launchpads

    In October 1990, a Stanford biologist sat at his desk and wrote a check for $576.07. He put it in an envelope. He addressed it to an economist at the University of Maryland. He did not include a note. No congratulations, no concession speech, not even a “good game.” Just the check, sealed and mailed. That silence is deafening when you know the backstory. Because that check was never about the money. It was the settlement of a decade-long wager about the fundamental nature of reality itself. And the argument it represents, between people who look at a finite planet and see walls closing in and people who look at the same planet and see a launchpad, has been raging for over two centuries. Right now, in the age of AI and climate tipping points, it is reaching a fever pitch. This is the story behind our latest episode of Coordinated with Fredrik. We called it “The Spiral” because the clash between these two worldviews is not a pendulum swinging between optimism and pessimism. A pendulum returns to the same points. A spiral goes around, but with each revolution, it moves up an axis. It progresses. Each side is forced to incorporate something the previous round missed, and the stakes get higher with every turn. The bet that started with too many people and ended with no note The man writing the check was Paul Ehrlich, author of The Population Bomb, a book that sold two million copies and predicted hundreds of millions of people would starve to death in the 1970s and 1980s. Ehrlich had been on The Tonight Show with Johnny Carson roughly 20 times. He got a vasectomy to set an example. He told an interviewer that he would take even money England would not exist by the year 2000. He was, in every sense, the public face of environmental doom. On the other end of the envelope was Julian Simon, an economist who had written The Ultimate Resource, arguing that the human mind is the only resource that matters. Where Ehrlich saw mouths to feed, Simon saw minds that create. 
In 1980, Simon issued a public challenge: pick any raw materials, any timeframe longer than one year, and I will bet you the inflation-adjusted price goes down. Ehrlich and two colleagues chose five metals. Chromium, copper, nickel, tin, and tungsten. They placed a $1,000 bet with a payoff date of September 29, 1990. During that decade, the world added more than 800 million people, the largest single-decade increase in human history. Demand exploded. And every single metal fell in price. Tin dropped over 70 percent. Tungsten fell by half. Hence the check. For a lot of people, that was the end of the story. Optimists won, pessimists lost, case closed, let’s drill some oil. But the surface reading is dangerously incomplete. If you run the same bet over different decades, the results flip completely. A study found that Ehrlich would have won 61.2 percent of all possible ten-year intervals between 1910 and 2007. From 2000 to 2010, with the China boom driving metal prices parabolic, Ehrlich would have wiped the floor with Simon. It was not a definitive victory. It was a single data point in a much larger war between two operating systems for viewing the world. The Club of Rome and the model that keeps tracking reality The modern version of the limits worldview began in a villa in Rome in 1968, when an Italian industrialist named Aurelio Peccei gathered scientists and economists because he believed all of humanity’s problems were interconnected. Peccei was not some ivory tower philosopher. He had been tortured by fascists for his role in the anti-fascist resistance during the Second World War. He had seen civilization come apart and get rebuilt. He called the interconnected mess of global problems the “problématique,” and his group became the Club of Rome. Four years later, a team of 17 MIT researchers built a computer model called World3 and ran it on room-sized mainframes. 
They tracked five variables: population, food production, industrial output, pollution, and resource depletion. The key was not just the variables but the feedback loops and delays between them. More factories mean more food, more food means lower mortality, lower mortality means more people, more people means more factories. That is the engine of civilization. But more factories also mean more pollution, which degrades soil, and more resource extraction, which gets progressively harder and more expensive. The system starts to eat itself. The “standard run” scenario, business as usual with no major policy changes, projected overshoot and collapse around the 2040s or 2050s. Not by the year 2000, as critics endlessly claim. If you actually look at the charts from the 1972 book, all the curves keep growing well past 2000. The myth that the Club of Rome predicted the world would end in 2000 is the most persistent straw man in the history of this debate. And the model has tracked reality with unsettling accuracy. In 2008, an Australian physicist named Graham Turner compared 30 years of actual data against the original 1972 curves. The match was terrifyingly close. A 2014 update was bleaker: the data indicated the early stages of collapse could occur within a decade. A 2020 study concluded that without major changes, economic growth will peak and then rapidly decline by around 2040. We are living inside the window of their original prediction right now. The man who saved a billion lives and still said it was temporary While the Club of Rome was modeling collapse, an agronomist from Iowa was busy proving them wrong with his bare hands. Norman Borlaug had gotten to college on a wrestling scholarship. He ended up developing semi-dwarf, high-yield wheat varieties through a technique called shuttle breeding, growing two generations per year by alternating between locations in Mexico. 
The test came in the mid-1960s, when India and Pakistan teetered on the brink of exactly the catastrophe Ehrlich was predicting on television. Borlaug shipped his seeds. They arrived during the Indo-Pakistani War. The results were staggering. Pakistan’s wheat yields nearly doubled within five years. India went from famine threat to grain surplus so fast that local governments had to close schools and use classrooms as temporary granaries because they ran out of storage. The Congressional Gold Medal credits Borlaug with saving over one billion lives. He is the single greatest data point in the techno-optimist argument. But here is the thing that both sides tend to forget: Borlaug himself was not a blind optimist. In his Nobel Prize acceptance speech, he called his Green Revolution “a temporary success” and “a breathing space.” He warned about population growth in nearly every speech he gave for the rest of his career. He knew he had not solved the problem forever. He had bought us a few decades to get our house in order. A quantum physicist, a basement, and a philosophy built on thermodynamics The modern acceleration movement exploded out of a very unlikely origin. In 2022, a French-Canadian quantum physicist named Guillaume Verdon quit his job at Google, moved into his parents’ basement in Quebec, sold his car, bought $100,000 worth of GPUs, and started a movement on Twitter under the pseudonym BasedBeffJezos. The name was a pun on Jeff Bezos. The philosophy was a direct shot at Effective Altruism, the movement associated with AI safety and existential risk. Where EA said slow down, we might destroy ourselves, Verdon’s movement said speed up, or we definitely will. He and his co-founders called it effective accelerationism, or e/acc. 
The intellectual foundation is built on the work of MIT biophysicist Jeremy England, whose theory of dissipative adaptation proposes that under certain conditions, matter spontaneously organizes itself into more complex structures because those structures are better at spreading energy around. A forest dissipates far more solar energy than a desert. Life, in this framing, is a mechanism the universe evolved to increase entropy faster. Intelligence and technology are even better mechanisms. A data center takes organized energy and converts it into waste heat and information. It is, thermodynamically speaking, a machine for accelerating entropy. E/acc takes this and runs with it. They argue that civilization is a higher-order dissipative structure, that resistance to acceleration is metaphysically misguided, and that humanity’s cosmic duty is to climb the Kardashev scale from a Type 0 civilization to one that harnesses the energy of an entire planet, then a star, then a galaxy. Energy consumption is not a vice. It is a moral virtue. This moved from fringe Twitter into the heart of Silicon Valley strategy at startling speed. In October 2023, Marc Andreessen published his Techno-Optimist Manifesto, a 5,200-word essay that used the phrase “we believe” 113 times and called sustainability, the precautionary principle, and trust and safety the enemies of progress. After the 2024 US election, tech figures began explicitly connecting e/acc principles to deregulatory politics. The geographic split is not a coincidence. Limits thinking is a European movement. The word “décroissance” comes from French. Kate Raworth’s Doughnut Economics was adopted as an official planning framework in Amsterdam. The precautionary principle is baked into EU regulation. E/acc is a Silicon Valley movement, full stop. Its founders are tech workers. Its patron saints are venture capitalists. Its cultural habitat is X. 
American frontier mythology, libertarian philosophy, and venture capital’s fundamental business model, exponential growth or death, created the conditions for its emergence. Both sides are right, and both sides are dangerously wrong The hardest part of this story is that neither tribe has the full picture. The Jevons Paradox sits at the center of the conflict like an oracle telling both sides exactly what they want to hear. In 1865, William Stanley Jevons observed that more efficient steam engines led to more coal consumption, not less.

    39 min
  7. The Founder Bottleneck — Surviving the Jump to 10 People

    FEB 2

    The Founder Bottleneck — Surviving the Jump to 10 People

There is a moment in every startup’s life where growth stops feeling like progress. You hired smart people. You raised money. You shipped something that works. And yet—everything feels slower, noisier, more fragile than when you were three people in a room. This episode is a deep dive into the most dangerous phase in a startup’s life: the transition from a scrappy founding team to a 10–15 person company. We unpack: * Why productivity mathematically collapses as teams grow * The psychological traps founders fall into (hero syndrome, identity foreclosure) * Why Slack becomes a liability at scale * What a minimum viable operating system for a 10-person company actually looks like * How founders must shift from doing work to designing systems This is not about motivation. It’s about mechanics. If you feel like you’re constantly firefighting, this episode explains why—and how to stop. The Garage Myth Dies at 10 People Every founder remembers the garage phase. Three or four people. One shared brain. No process, no meetings, no documentation—and somehow everything works. That phase ends brutally around 10 people. Not because anyone is incompetent. But because implicit coordination stops working. There’s a simple formula behind this: N × (N − 1) ÷ 2 That’s the number of communication paths in a team. * 3 people → 3 connections * 5 people → 10 connections * 10 people → 45 connections * 15 people → 105 connections Nothing “feels” different when you hire the 7th or 8th person. But the communication network has already exploded. You’re no longer in a team. You’re running a distributed system—without having designed it as one. Biology and Math Are Both Against You This breakdown isn’t just organizational. It’s biological. Anthropologist Robin Dunbar showed that humans have hard cognitive limits on stable group sizes. Two thresholds matter here: * ~5 people: a support clique (everyone knows everything) * ~15 people: a close group limit The 5–15 range is a no-man’s land. 
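The quadratic blow-up in the communication-path formula above is easy to check:

```python
def communication_paths(n: int) -> int:
    """Number of pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

for team_size in (3, 5, 10, 15):
    print(team_size, "people ->", communication_paths(team_size), "channels")
# Headcount grows 5x (3 -> 15 people); channels grow 35x (3 -> 105).
```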
Founders try to manage a small tribe with garage-era instincts. The result is chaos—and the founder becomes the bottleneck. The Bottleneck Founder Pattern When founders don’t adapt, the same symptoms appear every time: 1. Decision Queues Work stalls while everyone waits for the founder to approve tiny things. The founder becomes a toll booth. 2. Team Passivity High-performers stop thinking. They wait. They become order-takers instead of owners. 3. “Swoop and Poop” Management The founder disappears, then reappears with opinions and changes—without context. Nothing kills morale faster. Crucially: This is not because founders are bad people. It’s because of identity conflict. Identity Foreclosure: Why Letting Go Feels Like Dying Most founders—especially technical ones—built their identity around being the builder. Writing code. Solving hard problems. Getting instant dopamine from things that work. Leadership doesn’t give that feedback. Managing people is: * Delayed gratification * Ambiguous outcomes * Often invisible when done well As Paul Graham describes it: founders are trapped between the maker schedule and the manager schedule—and both suffer. So founders compensate by becoming heroes. They jump in. Fix the bug. Save the day. And accidentally teach the team: “Don’t worry. I’ll always fix it.” That’s not leadership. That’s dependency creation. From Firefighter to Fire Chief The key shift is this: Stop holding the hose. Start building the fire station. A firefighter fights fires. A fire chief ensures: * Training * Equipment * Water pressure * Strategy Touch the hose only when the building is about to collapse. This transition feels like grief. You’re letting go of the identity that made you successful. But without it, the company never scales. Giving Away Your Legos Former Facebook leader Molly Graham has a perfect metaphor: Growing a company is like giving away your Legos. You built the thing. You know every brick. Now someone else will build with your pieces—badly, at first. 
Hovering makes it worse. Her rule: If you’re doing the same job you did six months ago, you’re the bottleneck. Growth requires repeatedly firing yourself. Why Slack Becomes the Enemy Slack feels efficient—until it isn’t. Research shows: * 23 minutes to regain focus after an interruption * Even 5-second interruptions triple error rates At 10 people: * Decisions live in DMs * Context is fragmented * No single source of truth exists Founders become archaeologists, digging through chat logs to understand why something happened. The Minimum Viable Operating System This episode argues for a deliberately minimal stack, not enterprise process. 1. Linear for Execution Linear integrates directly with GitHub. Status updates happen automatically. No nagging. No manual reporting. Work updates itself. 2. Notion for Memory Notion becomes institutional memory. Rule: If it’s discussed, it’s documented. This shifts the company from tribal knowledge to durable knowledge. Meetings That Don’t Suck: L10-Lite Instead of heavy frameworks like EOS, the episode recommends a single weekly leadership meeting: 60–90 minutes. Same agenda. Every week. Agenda: * Wins (psychological momentum) * Scorecard (5–7 key metrics) * Priorities (on/off track) * IDS: Identify, Discuss, Solve Most meetings report status. This one resolves bottlenecks. You leave with decisions, owners, and deadlines. Delegation That Actually Works Delegation is not assigning tasks. It’s assigning outcomes. Instead of: “Change the button color.” Say: “Customers can’t find the buy button. Fix that.” Frameworks discussed: * CEO Bubble: what only the founder should do * Decision Zones: green / yellow / red decisions * MSCL test: mandate, stakes, edge, leverage Most founders stay busy because they’re hiding in low-leverage work. The Real Shift: From Doing to Designing At three people: Your output = your work. At ten people: Your output = the system you designed. This is the hardest lesson. 
Teaching feels slow. Letting go feels dangerous. But founders who make this shift are ~3× more likely to reach a successful exit. Closing Thought If you’re constantly firefighting, the problem isn’t effort. It’s architecture. The fire won’t disappear. But if you don’t build the fire station, you’ll be holding the hose forever. And eventually—you’ll run out of water. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    38 min
  8. The Meter Is the Membrane

    FEB 1

    The Meter Is the Membrane

Most engineering failures don’t come from bad algorithms or insufficient data. They come from something much more basic: We didn’t define the system properly. That’s what this episode of Coordinated with Fredrik is about — system boundaries, thermodynamics, and how we should think about a home once it stops being a passive consumer and starts behaving like an active energy system. Where does a system begin — and where does it end? This sounds almost philosophical, but it’s one of the most practical questions an engineer can ask. Every system needs a boundary. Without one, it’s impossible to reason about control, optimization, or even responsibility. This is true for software systems, mechanical systems, and very much so for energy systems. Ludwig von Bertalanffy, the father of systems theory, once said: “The boundaries of a system are not given in nature but are determined by the observer.” That’s true in many domains — but energy is special. In energy systems, the boundary is not arbitrary. It is physical, legal, and enforced. That boundary is the electricity meter. The meter is not a billing device We tend to think of the electricity meter as something purely administrative — a device that exists to calculate our bill. But that’s a mistake. The meter is the point of common coupling (PCC) between your home and the grid. Everything you consume passes through it. Everything you export passes through it. It is where: * Ownership changes * Responsibility changes * Grid physics ends and home physics begins * Billing, tariffs, export limits, and fuse constraints apply In thermodynamic terms, it is the membrane between two systems. Once you see the meter this way, the right question stops being “What is my inverter doing?” and becomes: What crosses this boundary, when, and under what constraints? That single shift changes everything. Why the old model worked — and why it broke Historically, homes were boring. 
They were passive loads. Power flowed in one direction. Individual behavior didn’t matter much. From the grid’s perspective, you could aggregate thousands of homes and get remarkably accurate forecasts. The system was statistically predictable because nothing interesting happened at the edges. So our tooling reflected that worldview. We read registers. We polled Modbus TCP. We collected telemetry. And for a long time, that was enough. The moment homes stopped being predictable Then we added things. Solar PV at the edges of the grid. Batteries that store energy over time. Electric vehicles with large, deadline-driven loads. Heat pumps with thermal inertia and weather-dependent efficiency. Suddenly: * Power flows both ways * State matters (SOC, temperature, availability) * Timing matters more than magnitude * Homes can go from “doing nothing” to exporting 8 kW in seconds From the grid’s point of view, a home that used to be a smooth, boring signal becomes bursty, stateful, and hard to predict. A house might sit at zero net flow for hours — perfectly balanced by solar and storage — and then abruptly inject a large amount of power when a battery fills up or a cloud passes. The old statistical assumptions no longer hold. A short detour into thermodynamics (the useful parts) Thermodynamics gives us the correct mental model for all of this. Clausius summarized the first and second laws in a single sentence: “The energy of the universe is constant. The entropy of the universe tends to a maximum.” Everything that happens inside a home — or any site — sits inside that frame. The first law: accounting Energy doesn’t disappear. It transforms. For a home: * Energy can be stored chemically (batteries) * Stored thermally (hot water tanks, slabs, buildings) * Converted between electrical and thermal forms * Exported or imported across the meter Power is just energy per unit time. Storage is what happens when generation and consumption don’t align in time. 
In that sense, storage isn’t a device category. It’s a consequence of time mismatch. The second law: usefulness The first law tells us energy is conserved. The second law tells us not all energy is equally useful. Electricity is high-quality energy. Low-temperature heat is low-quality energy. You can easily turn electricity into heat. You can’t easily turn heat back into electricity. This is why heat pumps matter so much: they don’t create heat — they move it, exploiting temperature differences to deliver more heat than the electrical energy they consume. None of this is optional. Software that ignores the second law will always look good in simulations and fail in reality. From signals to systems: Site, Device, DER This is where thermodynamics meets software architecture. Site The site is the system boundary. Everything behind the meter. A site has: * Objectives (cost, comfort, self-consumption, grid services) * Constraints (main fuses, export limits, tariffs) * State that evolves over time Optimization only makes sense at this level. Device A device is something you can communicate with. It has: * Protocols (Modbus, REST, cloud APIs) * Registers * Firmware versions * Vendor quirks and bugs Devices answer the question: What can I technically talk to right now? That’s necessary — but insufficient. DER (Distributed Energy Resource) A DER is a logical abstraction. It represents capability, constraints, and state — independent of protocol. A battery DER might represent: * Total capacity * Current SOC * Charge/discharge limits * Efficiency Whether that battery consists of one module or twenty cells doesn’t matter unless it affects system behavior. DERs answer the real question: What can this resource do for the system? Devices are how you talk. DERs are what you reason about. Why this abstraction matters Once you define: * The boundary (the site) * The resources (DERs) * The constraints Control stops being reactive. 
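The battery DER described above can be sketched as a minimal type. All names and fields here are illustrative, assumed for the example rather than taken from any real energy platform:

```python
from dataclasses import dataclass

@dataclass
class BatteryDER:
    """Logical resource: what the battery can do, independent of protocol."""
    capacity_kwh: float
    soc: float                   # state of charge, 0.0 to 1.0
    max_charge_kw: float
    max_discharge_kw: float
    round_trip_efficiency: float

    def available_energy_kwh(self) -> float:
        # Energy the site-level optimizer can count on extracting,
        # discounted by losses. No Modbus registers in sight.
        return self.capacity_kwh * self.soc * self.round_trip_efficiency

battery = BatteryDER(capacity_kwh=10.0, soc=0.8,
                     max_charge_kw=5.0, max_discharge_kw=5.0,
                     round_trip_efficiency=0.9)
print(battery.available_energy_kwh())  # 7.2 kWh usable, whatever the wiring
```

A device adapter (Modbus, REST, vendor cloud) would populate this state; the optimizer only ever sees the DER.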
The problem becomes: What should the energy flow across the meter look like over time? The grid doesn’t care how your system is wired internally. It cares about magnitude, direction, and timing at the boundary. In that sense, the meter becomes the objective function. Homes are no longer loads A modern home has: * State * Constraints * Objectives * Time-coupled decisions That’s not a load. That’s an agent. We inherited an energy system architecture from a time when homes were boring. They aren’t anymore. That creates real challenges — but also real opportunities. And none of them can be addressed without going back to first principles and defining the system correctly. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    40 min
