Coordinated with Fredrik

Fredrik Ahlgren

Coordinated with Fredrik is an ongoing exploration of ideas at the intersection of technology, systems, and human curiosity. Each episode emerges from deep research: a process that blends AI tools like ChatGPT, Gemini, Claude, and Grok with long-form synthesis in NotebookLM. It’s a manual, deliberate workflow, part investigation and part reflection, where I let curiosity lead and see what patterns emerge. This project began as a personal research lab, a way to think in public and coordinate ideas across disciplines. If you find these topics as fascinating as I do, from decentralized systems to the psychology of coordination, you’re welcome to listen in. Enjoy the signal. frahlg.substack.com

  1. Boiling the Ocean: Why Incremental Thinking Is Now the Most Dangerous Strategy

    4D AGO

    Boiling the Ocean: Why Incremental Thinking Is Now the Most Dangerous Strategy

There is a phrase that has quietly governed modern management culture for decades: “Don’t boil the ocean.” It’s the sentence that appears whenever ambition starts to feel uncomfortable. When scope expands. When a system-level question threatens a quarterly roadmap. It’s framed as wisdom. Prudence. Maturity. But what if that advice—so deeply internalized that we barely question it anymore—has quietly become dangerous?

This episode, and this essay, explores a contrarian but increasingly unavoidable thesis: in an era of collapsing intelligence costs, not boiling the ocean is how you lose. This is not a motivational slogan. It’s an economic and engineering argument. To understand why, we need to rewind nearly 160 years—back to coal mines, steam engines, and a mistake humanity has repeated every time a general-purpose resource becomes radically cheaper.

The Original Mistake: When Efficiency Backfires

In 1865, at the height of the British Industrial Revolution, a young economist named William Stanley Jevons published a book called The Coal Question. At the time, Britain was anxious about energy dominance. Coal powered everything: factories, railways, ships, empire. The assumption among policymakers was simple and intuitive: as engines become more efficient, total coal consumption will fall. After all, James Watt’s steam engine was dramatically better than the old Newcomen design—using roughly one quarter of the fuel for the same mechanical work. Efficiency should lead to conservation.

Except it didn’t. Jevons observed something deeply counterintuitive: despite massive efficiency gains, coal consumption didn’t fall at all. It exploded. UK coal production grew steadily for decades—rising from ~5 million tons in 1750 to over 100 million tons by the 1860s, eventually peaking near 300 million tons in the early 20th century. Jevons summarized the paradox succinctly: “It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth.”

This is what we now call the Jevons Paradox: when a general-purpose resource becomes more efficient and cheaper, total consumption increases—because new uses become economically viable. Efficiency doesn’t cap demand. It unlocks it.

Latent Demand and the Threshold of Viability

Why does this happen? Because demand for fundamental resources isn’t fixed—it’s latent. When steam power was expensive and inefficient, it was used only for extreme, high-value tasks (like pumping water out of deep coal mines). Once efficiency improved, the activation energy dropped. Suddenly it made sense to:

* Put engines in textile mills
* Power ships and locomotives
* Mechanize entire industries

The question was never “How much steam power do humans need?” The question was “At what price does entirely new behavior emerge?” That same dynamic has repeated itself over and over again.

Light: From Luxury to Pollution

Nothing illustrates this better than the history of light. In the 1300s, producing a fixed amount of illumination—about one million lumen-hours—cost the modern equivalent of £40,000. Light was so expensive that people rationed candles the way we ration fuel during wartime. By 2006, that same amount of light cost £2.90. A 14,000× reduction in real cost. So did we save energy? No. Between 1800 and 2000, per-capita light consumption increased ~6,500×. We didn’t stop at lighting rooms. We lit cities. Highways. Stadiums. Parking lots at 3 a.m. We put light into pockets, shoes, keyboards, architecture. We created an entirely new problem—light pollution—because light became too cheap to care about. LEDs repeated the pattern again:

* Lower wattage per bulb
* Explosive growth in total lighting

Efficiency didn’t restrain usage. It expanded imagination.
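The rebound mechanism can be made precise with a toy constant-elasticity model (my framing for illustration, not from the episode): demand for useful work responds to its effective price, and efficiency is exactly what lowers that price.

```python
# Toy rebound model: demand for useful work W at effective price p
# follows W = A * p**(-eps); fuel burned is F = W / efficiency,
# and the effective price of work is p = fuel_price / efficiency.
def fuel_use(efficiency, eps, fuel_price=1.0, A=1.0):
    p = fuel_price / efficiency    # cost per unit of useful work
    work = A * p ** (-eps)         # latent demand responds to that cost
    return work / efficiency       # fuel actually consumed

for eps in (0.5, 1.5):             # inelastic vs. elastic demand
    before = fuel_use(1.0, eps)
    after = fuel_use(4.0, eps)     # Watt: roughly 4x the efficiency
    print(f"elasticity {eps}: fuel use changes by {after / before:.2f}x")
```

With inelastic demand (elasticity below 1), efficiency genuinely conserves fuel. Once demand is elastic, the same 4× efficiency gain doubles total fuel burned. Jevons’s whole argument is the claim that demand for general-purpose resources sits in that second regime.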
Doing More With Less: Buckminster Fuller’s Missing Half

This is where Buckminster Fuller enters the story. Fuller described a long-term technological trajectory he called ephemeralization: doing more and more with less and less—until eventually you can do everything with almost nothing. His favorite example was bridges.

* Roman bridges: massive stone, brute force, pure compression
* Iron bridges: lattice structures, geometry, less material
* Steel suspension bridges: tension, elegance, minimal mass
* Eventually: radio waves, fiber optics—connection without material

The function remains. The atoms disappear. At first glance, Fuller seems to contradict Jevons. But he doesn’t. They describe the same system from different angles:

* Ephemeralization → less material per unit of function
* Jevons Paradox → vastly more total units once the function becomes cheap

We didn’t save copper by inventing fiber optics. We used orders of magnitude more communication.

The Great Bet: Ingenuity vs. Scarcity

This tension came to a head in the 1970s. On one side:

* The Club of Rome
* Paul Ehrlich
* Limits to Growth, The Population Bomb
* A zero-sum worldview: finite resources, inevitable collapse

On the other:

* Julian Simon
* Buckminster Fuller
* The belief that human ingenuity is the ultimate resource

In 1980, Simon challenged Ehrlich to a bet. Ehrlich chose five industrial metals—copper, chromium, nickel, tin, tungsten—and predicted prices would rise over the next decade as population exploded. Instead, prices fell by 57%. Why?

* Substitution (fiber replaces copper)
* Better extraction
* Recycling
* Design efficiency

Ingenuity outran depletion.

Intelligence Enters the Equation

All of this matters because we are now repeating the same mistake—but with something far more powerful than coal or light or copper. We are making intelligence cheap. The cost of AI inference has been collapsing at an unprecedented rate—on the order of hundreds of times per year. What cost ~$20 per million tokens in 2022 costs cents today. This is ephemeralization of cognition. And if Jevons holds—as it always has—then the implication is unavoidable: cheap intelligence will not reduce work. It will explode the scope of what gets built. The fear narrative—“AI will take jobs”—is the same zero-sum thinking that lost the Simon-Ehrlich bet. It assumes:

* A fixed amount of code
* A fixed amount of analysis
* A fixed amount of problem-solving

History says the opposite. When the cost of thinking drops, we attempt problems that were previously unthinkable.

The Real Bottleneck Has Moved

When software was expensive, the bottleneck was execution. Now execution is cheap. The bottleneck is vision. This is why incrementalism is suddenly dangerous. Optimizing for 1.05×—cutting support costs, shaving headcount, marginal automation—is defensive thinking applied to an abundance problem. As Astro Teller famously put it: “It’s often easier to make something 10× better than 10% better.” Why? Because 10% forces you to argue with legacy constraints. 10× forces you to throw them out.
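The scale of that price collapse is easiest to feel as a budget calculation. The $20-per-million-token figure is from the essay above; the “cents today” price is my assumption, pinned at $0.10 for the sketch.

```python
# What a fixed $1,000 inference budget buys as prices collapse.
budget_usd = 1_000
for price_per_million in (20.0, 0.10):   # 2022 price vs. an assumed "cents" price
    tokens = budget_usd / price_per_million * 1_000_000
    print(f"${price_per_million:>5}/M tokens -> {tokens:,.0f} tokens")
```

Fifty million tokens becomes ten billion. That is not a cost saving on an existing workload; it is a different design space.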
Energy, AI, and the Literal Ocean

In energy, this becomes especially clear. If AI helps unlock controlled fusion—abundant, clean, baseload power—the question isn’t “How much cheaper is my electricity bill?” The question is:

* What becomes possible when energy is no longer the constraint?

Desalination at planetary scale. Carbon capture as infrastructure. Terraforming, not conservation theater. This is Jevons again—at civilizational scale.

The Actual Choice

Buckminster Fuller framed it starkly: utopia or oblivion. Not because technology guarantees utopia—but because fear guarantees stagnation. The tools are arriving whether we are psychologically ready or not. The only remaining decision is whether leaders choose:

* Scarcity thinking and protectionism
* Or positive-sum ambition and construction

So the real strategic question becomes: Where are you still optimizing for 1.05× when the physics now allow 10×? What ocean are you refusing to boil—not because it’s impossible, but because it used to be? Because the water is ready. The apparatus exists. And timid incrementalism is no longer neutral—it’s a risk.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    34 min
  2. Wire the Planet or Wire the Solar System?

    4D AGO

    Wire the Planet or Wire the Solar System?

Coordinated with Fredrik — Episode Recap

On January 30, 2026, SpaceX filed what looked like the most boring piece of regulatory paperwork imaginable. An FCC application. A string of numbers. The kind of thing you scroll past. Except this one was for permission to launch one million orbiting data centers. And in the preamble, they called it “a first step towards becoming a Kardashev Type II civilization.” That is not normal language for a permit application. That is a declaration of intent for a different species. This episode digs into what that filing actually means, and why it forces anyone working in energy to confront a question that used to be reserved for philosophy seminars: are we wiring the planet, or wiring the solar system?

Three visions, one fork in the road

The episode walks through three competing models for where energy goes from here. They sound like they belong in different centuries, but all three are showing up on balance sheets right now.

The Earthbound Optimizer. This is Professor Mark Jacobson’s model out of Stanford. His thesis is that we can run 100% of civilization on wind, water, and solar. Not just electricity. Everything. Transport, heating, industry, agriculture, the military. All of it, with existing technology. No fusion. No miracle batteries. No carbon capture. The physics behind it is surprisingly straightforward. Combustion is terrible at converting energy into useful work. A gasoline car turns only 17-20% of its fuel into motion. The rest is heat and noise. An electric motor runs at 90-95% efficiency. A heat pump moves three to four units of heat for every one unit of electricity you put in. Jacobson calculates that simply by electrifying everything, we cut global energy demand by 56.4%. The upfront cost is around $61.5 trillion, but annual energy costs drop from $17.8 trillion to $6.6 trillion. That is a six-year payback with an infinite tail of savings. Any board would fund that project in a heartbeat. So why haven’t we done it? Because the model assumes 80% of daily electrical loads can be shifted within an eight-hour window. Charging your car at 2 AM instead of 6 PM? Easy. Asking a steel mill or a data center training an AI model to pause for eight hours? That is where the model meets reality.

The Orbital Industrialist. This is the Musk play. He looked at the seven-year wait for a new substation in Virginia, the zoning fights, the interconnection queues, and decided that the grid is a political problem. Rockets are a physics problem. He prefers physics problems. In a sun-synchronous orbit, solar panels get 99% uptime. No clouds, no night, no atmosphere scattering the light. A panel in space generates six to eight times more energy per year than the same panel on Earth. The whole idea is to move the heavy compute, the training runs that take months and consume staggering amounts of power, off the planet entirely. Train the model in orbit where energy is constant and free. Beam the finished weights back to Earth. Learning happens in the sky. Thinking happens on your phone. The catch is cooling. Space is a perfect insulator. There is no air for convection. The only way to dump heat is radiation, and to radiate at the scale of gigawatt data centers you would need radiator panels the size of Gibraltar. Silicon chips melt long before the radiator reaches efficient operating temperature. You might need entirely new semiconductor materials, gallium arsenide or silicon carbide, that can run at 300-400 degrees Celsius.
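The cooling numbers follow from the Stefan-Boltzmann law: radiated power scales with the fourth power of absolute temperature, so hotter electronics mean radically smaller radiators. A back-of-envelope sketch, with my simplifications rather than the episode’s (one-sided panels, emissivity 0.9, cold-sky background ignored):

```python
# Radiator area needed to reject power by thermal radiation alone.
# P = emissivity * sigma * A * T^4  ->  A = P / (emissivity * sigma * T^4)
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_km2(power_w, temp_c, emissivity=0.9):
    t_k = temp_c + 273.15
    return power_w / (emissivity * SIGMA * t_k**4) / 1e6

for temp_c in (85, 350):   # typical silicon limit vs. a hot wide-bandgap chip
    print(f"1 GW at {temp_c:3d} degC -> {radiator_area_km2(1e9, temp_c):.2f} km^2")
```

Roughly a ninefold reduction in panel area just from running the electronics hot, which is why the exotic semiconductor materials are not a nice-to-have but the whole ballgame.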
It is not just about launching servers. It might mean reinventing the chip. The whole bet rides on Starship driving launch costs from $2,700 per kilogram down to $200, maybe eventually $10. At $10 per kilo, you can launch heavy, cheap, standard server racks. Mass stops being a constraint. The engineering tradeoffs change completely.

The Cosmic Architect. This is the Dyson swarm endgame. Not a solid shell around the sun (that is physically impossible), but trillions of individual satellites orbiting in dense formation, each capturing a sliver of sunlight. Musk’s million satellites would capture roughly 0.00000000004% of the sun’s output. A rounding error on a rounding error. But the expansionist logic says once you start, you do not stop. The theoretical blueprint is called the Mercury Loop. You land self-replicating mining robots on Mercury, which is rich in metals and sits right next to the sun. They mine the surface, build thin-foil solar collectors, and use electromagnetic railguns to shoot them into orbit. Those collectors beam energy back down to power more mining. It is an exponential feedback loop. Researchers at Oxford calculated you could dismantle the entire planet in about 31 years. Even at that scale, thermodynamics wins. The Landauer limit means every bit erased generates heat. A Dyson swarm eventually cooks itself if it thinks too hard.

The Jevons Paradox sitting in the middle of all this

This is the tension that runs through the entire episode and connects directly to how any energy company should think about the next decade. Jacobson argues that efficiency leads to sufficiency. Electrify everything, coordinate the loads, and demand goes down. We can get by with less. The expansionist view says the opposite. William Stanley Jevons noticed in the 19th century that when steam engines got more efficient, coal consumption went up, not down. Cheaper energy means more uses for it. If you unlock cheap orbital compute, demand does not flatten. It explodes into virtual worlds, planetary simulations, uses we cannot even conceive of yet. If Jacobson is right, energy companies are optimization businesses. You squeeze value out of a more or less static system. If Musk is right, you are preparing for a grid that needs to double, then triple, then quadruple. It is not a conservation problem. It is a throughput problem.

The one thing all three visions agree on

Whether the power comes from a rooftop in Palo Alto, a satellite 500 kilometers up, or a ring of collectors around the sun, the bottleneck is always the same: coordination. Jacobson’s model only works if 80% of load is flexible. That requires massive demand response, virtual power plants, automated dispatch. Space solar needs laser downlinks, ground stations, collision avoidance for a million moving objects, all managed in real time. Even the Dyson swarm needs orchestration at a scale that makes today’s grid look like a toy. The hardware is not the hard part. The connective tissue is. California proved this in 2024. They hit 117% renewable coverage in some intervals. Battery storage grew 2,100% in five years. But they also threw away 3.4 million megawatt hours of clean energy because they could not move it in space or time. Germany spent three billion euros just on redispatch, paying plants to turn down in one place and up in another to manage congestion. The electrons are there. The infrastructure to get them to the right place at the right time is what is lagging.
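To put the California figure in household terms, a rough conversion (the wholesale price and per-home consumption here are my assumptions, not the episode’s):

```python
# Rough scale of California's 2024 curtailment, in money and homes.
curtailed_mwh = 3.4e6              # clean energy thrown away (from the episode)
assumed_price_usd_per_mwh = 40     # assumed average wholesale price
assumed_home_mwh_per_year = 10     # assumed annual consumption of one home

print(f"~${curtailed_mwh * assumed_price_usd_per_mwh / 1e6:.0f}M of energy discarded")
print(f"~{curtailed_mwh / assumed_home_mwh_per_year:,.0f} homes' annual consumption")
```

Order of magnitude only, but it makes the point: the waste is not exotic physics, it is missing coordination.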
The ownership question nobody wants to talk about

Jacobson’s world is distributed. Rooftop solar, community wind, local batteries. Hard to monopolize sunshine when it falls on everyone’s roof. The orbital and Dyson worlds are centralized by nature. You need to be a trillion-dollar entity to launch rockets at scale. You need to own the mass drivers on Mercury. It recreates the dynamics of the oil industry. A few players control supply, everyone else is a customer. We are choosing between energy democracy and energy tycoons. Or some hybrid of the two.

So what does this actually mean?

We receive 10,000 times more energy from the sun than we currently use. The scarcity is not natural. It is a scarcity of infrastructure and coordination. Musk’s orbital play is, at its core, a hedge against our own dysfunction. A bet that we are too slow at building transmission lines, too tangled in zoning fights, too bad at aggregating distributed resources to keep up with what AI demands. So he is routing around the zoning board entirely. Maybe he is right about that. But today, right now, the fight is still on Earth. It is the last 10% problem. It is making the load follow the sun. It is the boring, unglamorous work of connecting millions of devices into something that behaves like a single coordinated system. Whether the future is on Earth or in orbit, the operating system for the energy transition is the same: coordination software, protocols, aggregation. The unsexy layer that makes any of this actually work. We are currently deciding, in boardrooms and regulatory filings, whether to wire the planet or wire the solar system. And every battery you aggregate, every flex load you optimize, is a vote in that election. Keep coordinating.

Listen to the full episode on Coordinated with Fredrik.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    37 min
  3. The Spiral: Why the Most Important Intellectual War of Our Time Is Between People Who See Walls and People Who See Launchpads

    6D AGO

    The Spiral: Why the Most Important Intellectual War of Our Time Is Between People Who See Walls and People Who See Launchpads

In October 1990, a Stanford biologist sat at his desk and wrote a check for $576.07. He put it in an envelope. He addressed it to an economist at the University of Maryland. He did not include a note. No congratulations, no concession speech, not even a “good game.” Just the check, sealed and mailed. That silence is deafening when you know the backstory. Because that check was never about the money. It was the settlement of a decade-long wager about the fundamental nature of reality itself. And the argument it represents, between people who look at a finite planet and see walls closing in and people who look at the same planet and see a launchpad, has been raging for over two centuries. Right now, in the age of AI and climate tipping points, it is reaching a fever pitch.

This is the story behind our latest episode of Coordinated with Fredrik. We called it “The Spiral” because the clash between these two worldviews is not a pendulum swinging between optimism and pessimism. A pendulum returns to the same points. A spiral goes around, but with each revolution, it moves up an axis. It progresses. Each side is forced to incorporate something the previous round missed, and the stakes get higher with every turn.

The bet that started with too many people and ended with no note

The man writing the check was Paul Ehrlich, author of The Population Bomb, a book that sold two million copies and predicted hundreds of millions of people would starve to death in the 1970s and 1980s. Ehrlich had been on The Tonight Show with Johnny Carson roughly 20 times. He got a vasectomy to set an example. He told an interviewer that he would take even money England would not exist by the year 2000. He was, in every sense, the public face of environmental doom. On the other end of the envelope was Julian Simon, an economist who had written The Ultimate Resource, arguing that the human mind is the only resource that matters. Where Ehrlich saw mouths to feed, Simon saw minds that create. In 1980, Simon issued a public challenge: pick any raw materials, any timeframe longer than one year, and I will bet you the inflation-adjusted price goes down. Ehrlich and two colleagues chose five metals. Chromium, copper, nickel, tin, and tungsten. They placed a $1,000 bet with a payoff date of September 29, 1990. During that decade, the world added more than 800 million people, the largest single-decade increase in human history. Demand exploded. And every single metal fell in price. Tin dropped over 70 percent. Tungsten fell by half. Hence the check. For a lot of people, that was the end of the story. Optimists won, pessimists lost, case closed, let’s drill some oil. But the surface reading is dangerously incomplete. If you run the same bet over different decades, the results flip completely. A study found that Ehrlich would have won 61.2 percent of all possible ten-year intervals between 1910 and 2007. From 2000 to 2010, with the China boom driving metal prices parabolic, Ehrlich would have wiped the floor with Simon. It was not a definitive victory. It was a single data point in a much larger war between two operating systems for viewing the world.

The Club of Rome and the model that keeps tracking reality

The modern version of the limits worldview began in a villa in Rome in 1968, when an Italian industrialist named Aurelio Peccei gathered scientists and economists because he believed all of humanity’s problems were interconnected. Peccei was not some ivory tower philosopher. He had been tortured by fascists for his role in the anti-fascist resistance during the Second World War. He had seen civilization come apart and get rebuilt. He called the interconnected mess of global problems the “problématique,” and his group became the Club of Rome. Four years later, a team of 17 MIT researchers built a computer model called World3 and ran it on room-sized mainframes. They tracked five variables: population, food production, industrial output, pollution, and resource depletion. The key was not just the variables but the feedback loops and delays between them. More factories mean more food, more food means lower mortality, lower mortality means more people, more people means more factories. That is the engine of civilization. But more factories also mean more pollution, which degrades soil, and more resource extraction, which gets progressively harder and more expensive. The system starts to eat itself. The “standard run” scenario, business as usual with no major policy changes, projected overshoot and collapse around the 2040s or 2050s. Not by the year 2000, as critics endlessly claim. If you actually look at the charts from the 1972 book, all the curves keep growing well past 2000. The myth that the Club of Rome predicted the world would end in 2000 is the most persistent straw man in the history of this debate. And the model has tracked reality with unsettling accuracy. In 2008, an Australian physicist named Graham Turner compared 30 years of actual data against the original 1972 curves. The match was terrifyingly close. A 2014 update was bleaker: the data indicated the early stages of collapse could occur within a decade. A 2020 study concluded that without major changes, economic growth will peak and then rapidly decline by around 2040. We are living inside the window of their original prediction right now.
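The feedback structure is simple enough to caricature in a few lines. This is emphatically not World3, just a toy, uncalibrated loop showing how growth plus a depleting resource produces overshoot and decline rather than a smooth plateau:

```python
# A toy overshoot loop in the spirit of (but far simpler than) World3.
# All parameters are illustrative, not calibrated to anything.
R, P = 1000.0, 1.0                  # resource stock, population (arbitrary units)
for year in range(201):
    extraction = min(R, 0.1 * P * (R / 1000.0))   # extraction gets harder as R depletes
    R -= extraction
    food_per_capita = extraction / P
    P *= 1.05 if food_per_capita > 0.07 else 0.95  # growth flips to decline
    if year % 25 == 0:
        print(f"year {year:3d}: population {P:8.1f}, resource {R:7.1f}")
```

Growth looks exponential and healthy right up until the resource constraint bites; then the same feedback loops run in reverse. World3’s contribution was doing this with real data, five coupled sectors, and explicit delays.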
The man who saved a billion lives and still said it was temporary

While the Club of Rome was modeling collapse, an agronomist from Iowa was busy proving them wrong with his bare hands. Norman Borlaug had gotten to college on a wrestling scholarship. He ended up developing semi-dwarf, high-yield wheat varieties through a technique called shuttle breeding, growing two generations per year by alternating between locations in Mexico. The test came in the mid-1960s, when India and Pakistan teetered on the brink of exactly the catastrophe Ehrlich was predicting on television. Borlaug shipped his seeds. They arrived during the Indo-Pakistani War. The results were staggering. Pakistan’s wheat yields nearly doubled within five years. India went from famine threat to grain surplus so fast that local governments had to close schools and use classrooms as temporary granaries because they ran out of storage. The Congressional Gold Medal credits Borlaug with saving over one billion lives. He is the single greatest data point in the techno-optimist argument. But here is the thing that both sides tend to forget: Borlaug himself was not a blind optimist. In his Nobel Prize acceptance speech, he called his Green Revolution “a temporary success” and “a breathing space.” He warned about population growth in nearly every speech he gave for the rest of his career. He knew he had not solved the problem forever. He had bought us a few decades to get our house in order.

A quantum physicist, a basement, and a philosophy built on thermodynamics

The modern acceleration movement exploded out of a very unlikely origin.
In 2022, a French-Canadian quantum physicist named Guillaume Verdon quit his job at Google, moved into his parents’ basement in Quebec, sold his car, bought $100,000 worth of GPUs, and started a movement on Twitter under the pseudonym BasedBeffJezos. The name was a pun on Jeff Bezos. The philosophy was a direct shot at Effective Altruism, the movement associated with AI safety and existential risk. Where EA said slow down, we might destroy ourselves, Verdon’s movement said speed up, or we definitely will. He and his co-founders called it effective accelerationism, or e/acc. The intellectual foundation is built on the work of MIT biophysicist Jeremy England, whose theory of dissipative adaptation proposes that under certain conditions, matter spontaneously organizes itself into more complex structures because those structures are better at spreading energy around. A forest dissipates far more solar energy than a desert. Life, in this framing, is a mechanism the universe evolved to increase entropy faster. Intelligence and technology are even better mechanisms. A data center takes organized energy and converts it into waste heat and information. It is, thermodynamically speaking, a machine for accelerating entropy. E/acc takes this and runs with it. They argue that civilization is a higher-order dissipative structure, that resistance to acceleration is metaphysically misguided, and that humanity’s cosmic duty is to climb the Kardashev scale from a Type 0 civilization to one that harnesses the energy of an entire planet, then a star, then a galaxy. Energy consumption is not a vice. It is a moral virtue. This moved from fringe Twitter into the heart of Silicon Valley strategy at startling speed. In October 2023, Marc Andreessen published his Techno-Optimist Manifesto, a 5,200-word essay that used the phrase “we believe” 113 times and called sustainability, the precautionary principle, and trust and safety the enemies of progress. After the 2024 US election, tech figures began explicitly connecting e/acc principles to deregulatory politics.

The geographic split is not a coincidence. Limits thinking is a European movement. The word “décroissance” comes from French. Kate Raworth’s Doughnut Economics was adopted as an official planning framework in Amsterdam. The precautionary principle is baked into EU regulation. E/acc is a Silicon Valley movement, full stop. Its founders are tech workers. Its patron saints are venture capitalists. Its cultural habitat is X. American frontier mythology, libertarian philosophy, and venture capital’s fundamental business model, exponential growth or death, created the conditions for its emergence.

Both sides are right, and both sides are dangerously wrong

The hardest part of this story is that neither tribe has the full picture. The Jevons Paradox sits at the center of the conflict like an oracle telling both sides exactly what they want to hear. In 1865, William Stanley Jevons observed that more efficient steam engines led to more coal being burned, not less.

    39 min
  4. The Founder Bottleneck — Surviving the Jump to 10 People

    FEB 2

    The Founder Bottleneck — Surviving the Jump to 10 People

There is a moment in every startup’s life where growth stops feeling like progress. You hired smart people. You raised money. You shipped something that works. And yet—everything feels slower, noisier, more fragile than when you were three people in a room. This episode is a deep dive into the most dangerous phase in a startup’s life: the transition from a scrappy founding team to a 10–15 person company.

We unpack:

* Why productivity mathematically collapses as teams grow
* The psychological traps founders fall into (hero syndrome, identity foreclosure)
* Why Slack becomes a liability at scale
* What a minimum viable operating system for a 10-person company actually looks like
* How founders must shift from doing work to designing systems

This is not about motivation. It’s about mechanics. If you feel like you’re constantly firefighting, this episode explains why—and how to stop.

The Garage Myth Dies at 10 People

Every founder remembers the garage phase. Three or four people. One shared brain. No process, no meetings, no documentation—and somehow everything works. That phase ends brutally around 10 people. Not because anyone is incompetent. But because implicit coordination stops working. There’s a simple formula behind this:

N × (N − 1) ÷ 2

That’s the number of communication paths in a team.

* 3 people → 3 connections
* 5 people → 10 connections
* 10 people → 45 connections
* 15 people → 105 connections

Nothing “feels” different when you hire the 7th or 8th person. But the communication network has already exploded. You’re no longer in a team. You’re running a distributed system—without having designed it as one.
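The arithmetic is trivial to verify; a two-line sketch (the 30-person row is mine, to show where the curve is heading):

```python
# Pairwise communication channels in a fully connected team: n(n-1)/2.
for n in (3, 5, 10, 15, 30):
    print(f"{n:2d} people -> {n * (n - 1) // 2:3d} channels")
```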
Biology and Math Are Both Against You

This breakdown isn’t just organizational. It’s biological. Anthropologist Robin Dunbar showed that humans have hard cognitive limits on stable group sizes. Two thresholds matter here:

* ~5 people: a support clique (everyone knows everything)
* ~15 people: a close group limit

The 5–15 range is a no-man’s land. Founders try to manage a small tribe with garage-era instincts. The result is chaos—and the founder becomes the bottleneck.

The Bottleneck Founder Pattern

When founders don’t adapt, the same symptoms appear every time:

1. Decision Queues. Work stalls while everyone waits for the founder to approve tiny things. The founder becomes a toll booth.
2. Team Passivity. High-performers stop thinking. They wait. They become order-takers instead of owners.
3. “Swoop and Poop” Management. The founder disappears, then reappears with opinions and changes—without context. Nothing kills morale faster.

Crucially: this is not because founders are bad people. It’s because of identity conflict.

Identity Foreclosure: Why Letting Go Feels Like Dying

Most founders—especially technical ones—built their identity around being the builder. Writing code. Solving hard problems. Getting instant dopamine from things that work. Leadership doesn’t give that feedback. Managing people is:

* Delayed gratification
* Ambiguous outcomes
* Often invisible when done well

As Paul Graham describes it: founders are trapped between the maker schedule and the manager schedule—and both suffer. So founders compensate by becoming heroes. They jump in. Fix the bug. Save the day. And accidentally teach the team: “Don’t worry. I’ll always fix it.” That’s not leadership. That’s dependency creation.

From Firefighter to Fire Chief

The key shift is this: stop holding the hose. Start building the fire station. A firefighter fights fires. A fire chief ensures:

* Training
* Equipment
* Water pressure
* Strategy

Touch the hose only when the building is about to collapse. This transition feels like grief. You’re letting go of the identity that made you successful. But without it, the company never scales.

Giving Away Your Legos

Former Facebook leader Molly Graham has a perfect metaphor: growing a company is like giving away your Legos. You built the thing. You know every brick. Now someone else will build with your pieces—badly, at first. Hovering makes it worse. Her rule: if you’re doing the same job you did six months ago, you’re the bottleneck. Growth requires repeatedly firing yourself.

Why Slack Becomes the Enemy

Slack feels efficient—until it isn’t. Research shows:

* 23 minutes to regain focus after an interruption
* Even 5-second interruptions triple error rates

At 10 people:

* Decisions live in DMs
* Context is fragmented
* No single source of truth exists

Founders become archaeologists, digging through chat logs to understand why something happened.

The Minimum Viable Operating System

This episode argues for a deliberately minimal stack, not enterprise process.

1. Linear for Execution. Linear integrates directly with GitHub. Status updates happen automatically. No nagging. No manual reporting. Work updates itself.
2. Notion for Memory. Notion becomes institutional memory. Rule: if it’s discussed, it’s documented. This shifts the company from tribal knowledge to durable knowledge.

Meetings That Don’t Suck: L10-Lite

Instead of heavy frameworks like EOS, the episode recommends a single weekly leadership meeting: 60–90 minutes. Same agenda. Every week.

Agenda:

* Wins (psychological momentum)
* Scorecard (5–7 key metrics)
* Priorities (on/off track)
* IDS: Identify, Discuss, Solve

Most meetings report status. This one resolves bottlenecks. You leave with decisions, owners, and deadlines.

Delegation That Actually Works

Delegation is not assigning tasks. It’s assigning outcomes. Instead of: “Change the button color.” Say: “Customers can’t find the buy button. Fix that.”

Frameworks discussed:

* CEO Bubble: what only the founder should do
* Decision Zones: green / yellow / red decisions
* MSCL test: mandate, stakes, edge, leverage

Most founders stay busy because they’re hiding in low-leverage work.

The Real Shift: From Doing to Designing

At three people: your output = your work. At ten people: your output = the system you designed. This is the hardest lesson. Teaching feels slow. Letting go feels dangerous. But founders who make this shift are ~3× more likely to reach a successful exit.

Closing Thought

If you’re constantly firefighting, the problem isn’t effort. It’s architecture. The fire won’t disappear. But if you don’t build the fire station, you’ll be holding the hose forever. And eventually—you’ll run out of water.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    38 min
  5. The Meter Is the Membrane

    FEB 1

    The Meter Is the Membrane

Most engineering failures don’t come from bad algorithms or insufficient data. They come from something much more basic: we didn’t define the system properly. That’s what this episode of Coordinated with Fredrik is about — system boundaries, thermodynamics, and how we should think about a home once it stops being a passive consumer and starts behaving like an active energy system.

Where does a system begin — and where does it end?

This sounds almost philosophical, but it’s one of the most practical questions an engineer can ask. Every system needs a boundary. Without one, it’s impossible to reason about control, optimization, or even responsibility. This is true for software systems, mechanical systems, and very much so for energy systems. Ludwig von Bertalanffy, the father of systems theory, once said: “The boundaries of a system are not given in nature but are determined by the observer.” That’s true in many domains — but energy is special. In energy systems, the boundary is not arbitrary. It is physical, legal, and enforced. That boundary is the electricity meter.

The meter is not a billing device

We tend to think of the electricity meter as something purely administrative — a device that exists to calculate our bill. But that’s a mistake. The meter is the point of common coupling (PCC) between your home and the grid. Everything you consume passes through it. Everything you export passes through it. It is where:

* Ownership changes
* Responsibility changes
* Grid physics ends and home physics begins
* Billing, tariffs, export limits, and fuse constraints apply

In thermodynamic terms, it is the membrane between two systems. Once you see the meter this way, the right question stops being “What is my inverter doing?” and becomes: what crosses this boundary, when, and under what constraints? That single shift changes everything.

Why the old model worked — and why it broke

Historically, homes were boring. They were passive loads. Power flowed in one direction. Individual behavior didn’t matter much. From the grid’s perspective, you could aggregate thousands of homes and get remarkably accurate forecasts. The system was statistically predictable because nothing interesting happened at the edges. So our tooling reflected that worldview. We read registers. We polled Modbus TCP. We collected telemetry. And for a long time, that was enough.

The moment homes stopped being predictable

Then we added things. Solar PV at the edges of the grid. Batteries that store energy over time. Electric vehicles with large, deadline-driven loads. Heat pumps with thermal inertia and weather-dependent efficiency. Suddenly:

* Power flows both ways
* State matters (SOC, temperature, availability)
* Timing matters more than magnitude
* Homes can go from “doing nothing” to exporting 8 kW in seconds

From the grid’s point of view, a home that used to be a smooth, boring signal becomes bursty, stateful, and hard to predict. A house might sit at zero net flow for hours — perfectly balanced by solar and storage — and then abruptly inject a large amount of power when a battery fills up or a cloud passes. The old statistical assumptions no longer hold.

A short detour into thermodynamics (the useful parts)

Thermodynamics gives us the correct mental model for all of this. Clausius summarized the first and second laws in a single sentence: “The energy of the universe is constant. The entropy of the universe tends to a maximum.” Everything that happens inside a home — or any site — sits inside that frame.

The first law: accounting

Energy doesn’t disappear. It transforms. For a home:

* Energy can be stored chemically (batteries)
* Stored thermally (hot water tanks, slabs, buildings)
* Converted between electrical and thermal forms
* Exported or imported across the meter

Power is just energy per unit time. Storage is what happens when generation and consumption don’t align in time. In that sense, storage isn’t a device category. It’s a consequence of time mismatch.

The second law: usefulness

The first law tells us energy is conserved. The second law tells us not all energy is equally useful. Electricity is high-quality energy. Low-temperature heat is low-quality energy. You can easily turn electricity into heat. You can’t easily turn heat back into electricity. This is why heat pumps matter so much: they don’t create heat — they move it, exploiting temperature differences to deliver more heat than the electrical energy they consume. None of this is optional. Software that ignores the second law will always look good in simulations and fail in reality.

From signals to systems: Site, Device, DER

This is where thermodynamics meets software architecture.

Site. The site is the system boundary. Everything behind the meter. A site has:

* Objectives (cost, comfort, self-consumption, grid services)
* Constraints (main fuses, export limits, tariffs)
* State that evolves over time

Optimization only makes sense at this level.

Device. A device is something you can communicate with. It has:

* Protocols (Modbus, REST, cloud APIs)
* Registers
* Firmware versions
* Vendor quirks and bugs

Devices answer the question: what can I technically talk to right now? That’s necessary — but insufficient.

DER (Distributed Energy Resource). A DER is a logical abstraction. It represents capability, constraints, and state — independent of protocol. A battery DER might represent:

* Total capacity
* Current SOC
* Charge/discharge limits
* Efficiency

Whether that battery consists of one module or twenty cells doesn’t matter unless it affects system behavior. DERs answer the real question: what can this resource do for the system? Devices are how you talk. DERs are what you reason about.
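A minimal sketch of that separation in code (the names and fields are mine, purely illustrative; sign convention: positive means import across the meter, negative means export):

```python
from dataclasses import dataclass

@dataclass
class BatteryDER:
    """What the battery can do for the system, independent of protocol."""
    capacity_kwh: float
    soc_kwh: float
    max_power_kw: float          # charge/discharge limit

@dataclass
class Site:
    """The system boundary: everything behind the meter."""
    export_limit_kw: float       # constraint enforced at the boundary
    load_kw: float               # household consumption right now
    pv_kw: float                 # solar generation right now

    def meter_flow_kw(self, battery_kw: float) -> float:
        # battery_kw > 0 means charging (extra consumption behind the meter)
        return self.load_kw - self.pv_kw + battery_kw

site = Site(export_limit_kw=5.0, load_kw=0.8, pv_kw=6.2)
print(site.meter_flow_kw(battery_kw=0.0))   # -5.4 kW: violates the export limit
print(site.meter_flow_kw(battery_kw=0.5))   # -4.9 kW: charging keeps it legal
```

Note that the decision to charge the battery half a kilowatt only makes sense at the site level; no single device knows the boundary constraint.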
Why this abstraction matters

Once you define:

* The boundary (the site)
* The resources (DERs)
* The constraints

Control stops being reactive. The problem becomes: what should the energy flow across the meter look like over time? The grid doesn’t care how your system is wired internally. It cares about magnitude, direction, and timing at the boundary. In that sense, the meter becomes the objective function.

Homes are no longer loads

A modern home has:

* State
* Constraints
* Objectives
* Time-coupled decisions

That’s not a load. That’s an agent. We inherited an energy system architecture from a time when homes were boring. They aren’t anymore. That creates real challenges — but also real opportunities. And none of them can be addressed without going back to first principles and defining the system correctly.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    40 min
  6. The Invisible Grid: How Messaging Systems Became the Nervous System of Modern Infrastructure

    JAN 30

    The Invisible Grid: How Messaging Systems Became the Nervous System of Modern Infrastructure

Episode Summary

We tend to think about infrastructure in physical terms: wires, pylons, transformers, steel and copper. But modern systems—especially energy systems—are held together by something less visible and just as critical: the messaging layer. In this episode, we trace the hidden history of how machines learned to talk to machines. From Wall Street trading floors to oil pipelines in the desert, from telecom switches in Sweden to rage-coded weekends in Silicon Valley, this is the story of frustration-driven innovation.

We explore:

* Why synchronous “telephone-style” software broke at scale
* How publish/subscribe became the software equivalent of a system bus
* Why RabbitMQ, Kafka, NATS, and MQTT exist—and what specific pain each one was born to solve
* The architectural tradeoffs between smart brokers and dumb pipes
* Why replayability, liveness, and reliability are fundamentally different goals
* How modern systems increasingly combine all of these tools
* And why the next architectural leap will come from today’s friction points

This episode isn’t about choosing the “best” messaging system. It’s about understanding why each one exists, and what happens when you use the wrong tool for the wrong kind of problem.

Key Concepts

* Messaging as the nervous system of physical infrastructure
* Subject-based addressing and decoupling
* Smart broker vs. dumb broker architectures
* Append-only logs and replayability
* Control planes vs. data planes
* Edge constraints and low-power networks
* Friction as a signal for architectural evolution

Mentioned Systems & Ideas

* TIBCO and the original information bus
* AMQP and the open-standard rebellion
* RabbitMQ and Erlang’s “let it crash” philosophy
* Kafka and the log as the source of truth
* NATS and the “dial tone” model
* MQTT and constraint-driven protocol design
* ZeroMQ, Pulsar, Redis Streams (briefly)

The Invisible Grid: How Messaging Systems Became the Nervous System of Modern Infrastructure

Close your eyes for a moment. (Not if you’re driving—but mentally.) When we talk about infrastructure, we picture the physical grid: copper wires, transformers humming in empty fields, pylons cutting across landscapes. It’s tangible. You can touch it. You can see it rust. You can watch a tree fall on it. If that grid fails, everything stops. But there is another grid—one we almost never visualize. An invisible grid, running inside software. A nervous system made of messages. And just like the physical grid, when this system clogs, desynchronizes, or collapses, the lights go out anyway—no matter how much copper is in the ground. Modern energy systems, financial markets, cloud platforms, and industrial control loops don’t merely use software. They depend on it at the level of physics. Signals must arrive on time. Control decisions must propagate. State must remain coherent across thousands or millions of moving parts. This post is about how we got here. Not as a clean, planned evolution—but as a genealogy of frustration.

The Original Sin: The Telephone Call

Early software systems communicated the same way humans did: by calling each other directly. Application A opens a connection to Application B, waits for it to respond, sends data, and blocks until it hears back. This is synchronous coupling—the software equivalent of a phone call. It works fine for two systems. It collapses at scale. On a trading floor—or an energy grid—one event must fan out to many consumers: risk engines, dashboards, control systems, settlement layers. In the telephone model, the sender must call each one, sequentially. If any receiver is slow or unavailable, everything backs up. Latency accumulates. Failure cascades. In finance, you go bankrupt. In energy, you destabilize the grid.
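The failure arithmetic is easy to sketch (the numbers are hypothetical):

```python
# Synchronous fan-out: the sender calls every consumer in turn and blocks.
latencies_ms = [50, 50, 2000, 50]   # four consumers; one slow risk engine

print(f"total delivery time: {sum(latencies_ms)} ms")
print(f"the slowest consumer alone cost {max(latencies_ms)} ms")
```

One degraded consumer holds every downstream system hostage.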
This brittleness created the first great insight.

The Software Bus: Publish, Don’t Call

In the mid-1980s, an engineer looked at a computer motherboard and asked an uncomfortable question: why is software dumber than hardware? A CPU doesn’t “call” the graphics card. It broadcasts onto a system bus. Whoever is listening picks up the signal. The sender doesn’t care who receives it—or if anyone does at all. That idea became publish/subscribe. Instead of sending data to addresses, you publish it to subjects. Instead of knowing who consumes it, you just agree on what it means. This decoupling was revolutionary. It gave us the first real software nervous system. And it worked—so well that it created the next problem.

When Middleware Ate the Budget

By the early 2000s, large enterprises had dozens of incompatible messaging systems. Each vendor had its own protocol, its own servers, its own licensing model. Banks were spending absurd portions of their IT budgets not on business logic—but on plumbing. The rebellion that followed wasn’t technical at first. It was economic. Why don’t we have a TCP/IP for messaging? That question led to open standards. And open standards led to open source.

RabbitMQ and the Power of “Let It Crash”

RabbitMQ emerged from a near-perfect alignment between problem and tool. The problem: routing messages reliably, flexibly, transactionally. The tool: Erlang—a language built for telecom switches that cannot go down. Erlang’s philosophy is radical: don’t prevent failure—contain it. Instead of one giant program sharing memory (where one bug burns the house down), Erlang runs millions of tiny isolated processes. If one crashes, a supervisor instantly replaces it. Failure becomes routine. Boring. Managed. RabbitMQ embodies this mindset. It is a smart broker: it routes, retries, buffers, tracks acknowledgements, and guarantees delivery. It is a post office. And like all post offices, it has limits.

Kafka and the Log That Changed Everything

When LinkedIn tried to track everything, the post office model broke. Too much sorting. Too much state. Too much overhead. The breakthrough was deceptively simple: stop routing messages. Start recording history. Kafka treats data as an append-only log—an immutable sequence of events. Producers write to the end. Consumers read at their own pace. The broker doesn’t track who’s done what. This aligns perfectly with disk physics. Sequential writes are fast. Replays are free. History becomes an asset. In this model:

* The log is the source of truth
* Databases are just materialized views
* You can replay the past with new intelligence

Kafka isn’t a post office. It’s a newsstand.

NATS and the Dial Tone

Then came cloud platforms, microservices, and another frustration. Messaging systems had become pets—delicate, stateful, needy. But cloud infrastructure demands cattle—replaceable, disposable, boring. NATS was born from that tension. Its original design was ruthless:

* No persistence
* No buffering for slow consumers
* No guarantees beyond “best effort right now”

If you’re too slow, you’re dropped. If no one’s listening, the message vanishes. This sounds dangerous—until you realize what it’s for. Control planes. Heartbeats. Service discovery. Real-time signals where the latest state matters more than history.
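The shape of that model fits in a few lines. This is a toy, in-memory sketch of subject-based publish/subscribe with best-effort delivery, nothing like a production broker:

```python
from collections import defaultdict

subscriptions = defaultdict(list)          # subject -> list of callbacks

def subscribe(subject, callback):
    subscriptions[subject].append(callback)

def publish(subject, message):
    # The sender never knows who is listening. Nobody there? The message vanishes.
    for callback in subscriptions[subject]:
        callback(message)

subscribe("grid.frequency", lambda hz: print("dashboard sees", hz))
subscribe("grid.frequency", lambda hz: print("control loop sees", hz))
publish("grid.frequency", 49.98)           # fans out to both subscribers
publish("grid.voltage", 231.0)             # no subscribers: dropped silently
```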
NATS is not a database. It’s a dial tone.

MQTT: Innovation Under Constraint

The most elegant designs often come from the harshest constraints. MQTT was built for oil pipelines in the desert, running over satellite links so slow and expensive that saving two bytes mattered. The result was a protocol stripped to its bones:

* Tiny headers
* Persistent low-power connections
* Explicit handling of unreliable networks
* A “last will and testament” for dead devices

Years later, the same properties made MQTT perfect for smartphones. From oil rigs to billions of pockets. Today, MQTT is the language of the edge.

Synthesis: No Winners, Only Tradeoffs

There is no perfect messaging system. Each of these tools exists because an engineer hit a wall:

* Too slow
* Too heavy
* Too expensive
* Too fragile

They encode those frustrations into architecture. That’s the real lesson. Modern systems don’t pick one. They compose:

* MQTT at the edge
* Kafka for history and analytics
* NATS for control and coordination
* RabbitMQ for transactional work

Different pipes for different fluids.

The Real Question

Every major evolution in messaging came from irritation. A system that made engineers sigh. A component everyone dreaded touching. A piece of infrastructure that fought back. So here’s the closing thought: where is that friction in your system today? That’s not technical debt. That’s a signal. The next nervous system will be built there.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    37 min
  7. From 95% to 99%: The Stoic Executive’s Guide to Supplement ROI

    JAN 29

    From 95% to 99%: The Stoic Executive’s Guide to Supplement ROI

Most conversations about supplements are useless. They start too early, aim too low, and ask the wrong question. This is not a beginner’s guide. This is not “eat your vegetables” or “drink more water.” This is a discussion for someone who has already done the hard part. You wake up early. You train consistently. Alcohol is gone. Food is clean. Sleep is a priority. In other words: the foundation is already built. So the real question becomes uncomfortable, almost heretical: when you’re already operating at the 95th percentile of discipline, does supplementation actually move the needle — or are you just creating expensive urine? This episode of Coordinated with Fredrik was built to answer that question with one lens only: return on investment. Not vibes. Not biohacker cosplay. ROI.

The Core Thesis: No Magic Pills, but Real Feature Upgrades

After combing through meta-analyses, randomized controlled trials, and clinical protocols published through early 2025, the answer is not a naïve yes. It’s a qualified yes. Supplements do not replace fundamentals. They do not compensate for poor sleep, weak conditioning, or a chaotic life. But even a perfect modern lifestyle leaves gaps — gaps created by:

* Soil depletion
* Indoor work
* Chronic cognitive load
* Latitude and lack of sun
* Stress-induced mineral loss

The opportunity lies in biological arbitrage: small chemical inputs that produce disproportionate output for a high-functioning executive. Out of all the noise, four compounds consistently survive scrutiny. Not exciting. Not exotic. Just effective.

The Core Four

1. Creatine: Decision Fatigue Insurance

Creatine has undergone one of the most dramatic rebrandings in modern science. Once dismissed as “gym bro powder,” it is now increasingly understood as cognitive fuel. The reason is simple: the brain is an energy-hungry organ. When ATP runs low, processing speed, attention, and working memory degrade. Creatine acts as the fastest phosphate recycling system in the human body — a rapid charger for neural energy. Recent meta-analyses show statistically significant improvements in:

* Short- and long-term memory
* Attention span
* Processing speed

The benefits are strongest under metabolic stress: sleep deprivation, high cognitive load, intense work periods. In one striking study, a single high dose of creatine largely preserved cognitive performance during 21 hours of wakefulness. For a CEO, this is not about muscle. It’s about staying sharp when the margin for error is thin.

Protocol:

* Creatine monohydrate only
* 3–5 g daily
* No loading phase
* Consistency over intensity

Kidneys: safe. Hair loss: unsupported by evidence. Marketing variants: ignore them.

2. Magnesium: The Off Switch

If creatine is about output, magnesium is about recovery. Magnesium quietly governs over 300 enzymatic reactions, including energy metabolism, neural signaling, and muscle relaxation. Yet 50–60% of adults in the Western world are insufficient. Why?

* Mineral-depleted soil
* Stress-driven magnesium loss
* Caffeine and training increasing excretion

For high performers, magnesium deficiency is almost structural. Recent imaging studies link adequate magnesium intake to structurally younger brains, fewer white matter lesions, and improved cognitive resilience. One targeted form, magnesium L-threonate, has shown particularly strong effects on brain magnesium levels due to its ability to cross the blood–brain barrier.
Choosing the form:

* L-threonate: cognitive longevity, focus, brain health
* Bisglycinate: sleep quality, relaxation, recovery
* Oxide: avoid (unless you want a laxative)

Take it in the evening. Make it a ritual.

3. Omega-3s: Structural Maintenance of the Brain

Omega-3s are not supplements. They are building materials. EPA and DHA are structural components of neuronal membranes. Their status can be measured directly through the omega-3 index — a four-month rolling average of membrane composition. Targets matter:

* 8%: associated with cardiovascular and cognitive protection
* 10–12%: what aggressive optimizers aim for

Higher omega-3 index levels correlate with:

* Reduced cardiovascular events
* Slower cognitive aging
* Improved mood stability

Plant-based omega-3s (ALA) do not convert efficiently. For DHA, conversion is effectively negligible. If you don’t eat fatty fish, supplementation is non-negotiable. Key insight: do not guess. Test. Adjust dosage based on blood data. Potency vs purity is a real trade-off:

* Fish oil: higher doses, contamination risk
* Algae oil: cleaner, lower doses
* Fish roe: superior bioavailability, lower required intake

There is no ideology here. Only measurement.

4. Vitamin D: System-Wide Regulation

Vitamin D is misclassified. It is not a vitamin in function — it is a hormonal regulator. It directly influences the expression of over 1,000 genes. Above the 37th parallel, winter synthesis is effectively zero. For Northern Europe, deficiency is structural. Beyond bone health, recent data links adequate vitamin D levels to:

* Slower telomere shortening
* Lower dementia incidence
* Improved immune and metabolic regulation

The mistake is dosing without context. High-dose vitamin D without:

* Magnesium (cofactor)
* Vitamin K2 (calcium traffic control)

…creates risk.

Safe stack:

* Vitamin D3: often 5,000 IU+ (test to confirm)
* Vitamin K2 (MK-7): 100–200 mcg
* Adequate magnesium intake

This is an ecosystem, not a pill.

The Stoic Frame: Don’t Major in the Minors

Supplements are the final 5%. They polish the machine — they do not build it. Zone 2 cardio. Sleep. Strength training. Consistency over decades. Miss a day? Nothing breaks. Forget a week? No catastrophe. The stoic advantage is restraint. The goal is not obsessive optimization. The goal is protecting cognitive capital — the asset that compounds everything else you do. The real biohack isn’t copying anyone’s protocol. It’s knowing your own numbers. Measure. Adjust. Repeat. That’s the whole game.

This post is adapted from the podcast episode transcript and reflects the discussion as presented in Coordinated with Fredrik.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    37 min
  8. The Efficiency Trap: Why Using Less Never Works

    JAN 25

    The Efficiency Trap: Why Using Less Never Works

I’ve been thinking about this paradox since my PhD days. Back then I was trying to make ships more fuel efficient, believing it would reduce emissions. Then I discovered William Stanley Jevons and his 1865 book “The Coal Question.” It nearly broke me. Jevons noticed something counterintuitive about James Watt’s steam engine. Watt made steam engines roughly four times more efficient than the old Newcomen engines. Common sense says this should have reduced coal consumption. The opposite happened. Coal use exploded. Why? Because efficient steam power suddenly became cheap enough to use everywhere. Factories, railways, mines. The efficiency didn’t save resources. It unlocked demand that hadn’t existed before. “It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth.” That quote has stuck with me for years.

Now look at AI. In 2024, data centers consumed about 415 terawatt hours globally, roughly 1.5% of all electricity. Projections put that at 945 terawatt hours by 2030. Virginia already sends 26% of its electricity to data centers. Ireland is at 21-22%. DeepSeek showed you can train competitive models far more cheaply. Did that reduce compute demand? No. It opened the door for more companies to train more models. Same pattern as Watt’s engine.

Here’s where I land on this: the paradox isn’t a warning. It’s a description of how economic systems work. Fighting it is pointless. The question is whether we can meet rising demand with clean, abundant energy. Solar is already the cheapest electricity source available. The sun has always been Earth’s power plant. Even the coal Jevons worried about is just ancient stored sunlight. We’re going to use more electricity. A lot more. That’s not a crisis. That’s an opportunity to finally build the energy system we should have had all along. More on this in future episodes.

——

SHOW NOTES

Episode recorded Sunday morning, Kalmar, Sweden

The History

* William Stanley Jevons, Liverpool-born economist, published “The Coal Question” in 1865
* UK consumed 93 million tons of coal annually at the time, nearly all of Britain’s energy supply
* Coal production had grown 3.5% per year for the previous 80 years
* British coal production didn’t peak until 1913, almost 50 years after Jevons wrote his warning

The Famous Quote

“It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth.”

The Steam Engine Story

* Thomas Newcomen developed the atmospheric engine in 1712, less than 1% efficient
* James Watt (born 1736) was given a Newcomen engine to repair in 1763
* Conceived the separate condenser idea in 1765, patented in 1769
* Watt’s engine was roughly 4x more efficient than Newcomen’s
* Result: coal consumption exploded because steam power became economical for everything

The Paradox

More efficiency didn’t reduce coal use. It made steam power cheap enough to deploy everywhere. Textile mills, railways, factories. The efficiency unlocked demand that didn’t exist before.

Modern Examples

* More fuel-efficient cars → people drive more
* LED lights → we install far more lights
* Air conditioning → billions of new users as it became affordable

AI and Data Centers Today

* 2024: Global data centers consumed 415 TWh (1.5% of global electricity)
* 2030 projection: 945 TWh (nearly 3% of global electricity)
* Data center electricity growing 15% per year (4x faster than overall electricity growth)
* US: 4% of total electricity goes to data centers
* Virginia alone: 26% of state electricity to data centers in 2023
* Ireland: 21-22% of national electricity to data centers
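The growth rate and the 2030 projection above are mutually consistent, which is worth a one-line sanity check (plain arithmetic, no outside data):

```python
# Does 415 TWh growing ~15%/year land near 945 TWh by 2030?
base_twh, growth_rate, years = 415, 0.15, 6   # 2024 -> 2030
print(f"{base_twh * (1 + growth_rate) ** years:.0f} TWh")   # ~960, close to the 945 cited
```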
The DeepSeek Parallel

Early ChatGPT models might be the Newcomen engine of AI. Useful for specific tasks but extremely inefficient. DeepSeek and other breakthroughs are making AI cheaper and more efficient. According to Jevons Paradox, this won’t reduce compute demand. It will increase it.

The Optimistic Take

Rising electricity demand isn’t a problem if we meet it with abundant, cheap, clean sources. Solar is now the cheapest electricity source on earth. The sun has always been Earth’s power plant. Even fossil fuels are just stored sunlight.

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    28 min
