Coordinated with Fredrik

Fredrik Ahlgren

Coordinated with Fredrik is an ongoing exploration of ideas at the intersection of technology, systems, and human curiosity. Each episode emerges from deep research, a process that blends AI tools like ChatGPT, Gemini, Claude, and Grok with long-form synthesis in NotebookLM. It’s a manual, deliberate workflow, part investigation, part reflection, where I let curiosity lead and see what patterns emerge. This project began as a personal research lab, a way to think in public and coordinate ideas across disciplines. If you find these topics as fascinating as I do, from decentralized systems to the psychology of coordination, you’re welcome to listen in. Enjoy the signal. frahlg.substack.com

  1. 071 - Measure What Matters

    3 MARCH

    071 - Measure What Matters

    It’s fall 1999. A venture capitalist named John Doerr walks into a cramped office in Mountain View — thirty employees gathered around a ping-pong table that doubles as a boardroom. The company is called Google, and it’s the eighteenth search engine to arrive on the web. Doerr carries an $11.8 million check in one hand and a management system in the other. The check made Google possible. The management system made Google Google. This episode is a deep dive into that system — Objectives and Key Results — and the book that lays it all out: John Doerr’s Measure What Matters. What Are OKRs, Really? The story starts at Intel in the 1970s. Andy Grove, the legendary CEO, was staring at a crisis: the Japanese semiconductor industry was about to eat Intel alive. Grove needed every single person in the company to understand the mission and execute on it — fast. His answer was a deceptively simple framework. An Objective is what you want to achieve. Key Results are how you’ll know you’ve achieved it. That’s it. But the magic isn’t in the framework. It’s in what happens when you make everyone’s goals visible — from the newest intern to the CEO. Grove’s quote haunts the entire book: “There are so many people working so hard... and achieving so little.” The Four Superpowers Doerr identifies four things that OKRs unlock, and we spend most of the episode working through each one with real stories: Focus. Brett Kopf built Remind, an education app, but the company was drowning — trying to do everything at once. OKRs forced them to say no. They picked three objectives and killed everything else. The company survived. Alignment. MyFitnessPal grew to 100 million users with a tiny team. How? Everyone’s OKRs were transparent and cascaded from a single mission. No one wondered what they should be working on. Tracking. Bill Gates brought OKRs to the Gates Foundation to track something most organizations struggle to measure: progress against malaria deaths. You can only improve what you measure. Stretch. YouTube set a goal of one billion hours of watch time per day. They were at 100 million. That’s a 10x target — the kind of number that makes you stare at a whiteboard and question your sanity. They hit it. Beyond Goals: The Death of the Annual Review The second half of the book — and the episode — tackles something less glamorous but arguably more important: Continuous Feedback and Recognition (CFRs). Annual performance reviews are broken. A doctor changes a treatment protocol in January and doesn’t see outcome data until December. A software engineer ships a feature and gets feedback six months later. CFRs replace this with continuous, lightweight conversations. The examples from Zume Pizza and Lumeris (a healthcare company where lives are literally at stake) make the case that faster feedback loops aren’t just nicer — they’re a requirement when the world moves fast. OKRs as Coordination Infrastructure Here’s where I get personal. I run an energy infrastructure company. Every day I think about the same problem Andy Grove faced at Intel: how do you get thousands of distributed actors — solar panels, batteries, EVs, heat pumps — to coordinate toward a shared objective? The energy grid must balance supply and demand every single second. It’s a coordination problem at continental scale. And the answer, I think, starts with something Grove figured out in a semiconductor factory fifty-five years ago. Not more generation. Not bigger wires. 
    Shared objectives with measurable key results, propagated across every node in the network. Measure what matters. Key Takeaways * OKRs are not a goal-setting exercise — they’re a coordination protocol for organizations (and systems) of any size * The four superpowers — Focus, Alignment, Tracking, Stretch — compound on each other * Transparency is the mechanism: when everyone can see everyone else’s goals, alignment happens organically * Stretch goals (10x, not 10%) change how people think about problems, not just how hard they work * Annual reviews are dead — continuous feedback is the faster loop that modern systems require * Coordination infrastructure — whether for companies or energy grids — requires shared, measurable objectives Behind the Scenes: A New Format This episode is an experiment. It’s the first episode of Coordinated produced in what I’m calling the Radiolab format — a multi-voice, multi-track production that feels closer to audio documentary than traditional podcasting. Instead of a solo monologue, this episode features four voice layers: * Daniel — the primary host, driving the narrative with a steady British broadcaster voice * Matilda — the co-host and audience proxy, bringing energy and genuine reactions * Fredrik (narrator) — stepping in for authoritative narration and synthesis * Fredrik (expert) — the same voice but with a “tape” EQ filter, used for quoting Doerr, Grove, and other real people. It creates the feel of archival audio. How It Was Built The entire production pipeline is automated — from book to finished episode. Here’s what happens under the hood: Pass 1 — The Episode Bible. The full text of Measure What Matters (~72,000 words) is split into 21 chapters. Each chapter is fed individually to Claude Opus, which extracts the most compelling stories, quotes, and emotional beats. The result is a ~28,000-word “episode bible” — a structured research document. Pass 2 — Script Generation. The bible is fed back to Claude in five separate passes (one per ACT), each with specific creative direction: the hook, the superpowers (split across two passes), the revolution, and the synthesis. Claude generates a structured JSON script with 458 segments, each tagged with speaker, production cues, and timing. Production Cues. Every segment carries metadata that controls the final mix: - Stereo panning — Daniel sits slightly left, Matilda slightly right, narrator centered - Tail-stepping — some reactions start 150-300ms before the previous line ends, creating that signature Radiolab crosstalk energy - Hard cuts — four moments in the episode where ALL sound drops to pure silence before a key revelation - Music beds — four mood tracks (tension, uplifting, contemplative, energetic) scored under narration sections - EQ presets — expert quotes get a “tape” filter (300-4000Hz bandpass) that makes them sound like archival recordings Audio Generation. All 458 segments are rendered through ElevenLabs’ text-to-speech API, each with per-voice model selection and tuned voice settings. The stems are then assembled on a timeline — not sequentially concatenated — with overlaps, fades, panning, and music beds mixed in. Mastering. The raw mix goes through dynamic range compression and two-pass loudness normalization to -16 LUFS (the podcast standard for Spotify and Apple Podcasts). The result: 75 minutes of produced audio from a single text file input. The entire pipeline — from book to finished MP3 — runs with one command. I’m excited to keep iterating on this format. The bones are there.
Now it’s about tuning the voices, the pacing, and the music to make it feel less like AI and more like radio. Full transcript available below the audio player. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com
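To make the production-cue metadata described above a little more concrete, here is a hypothetical sketch of what one of those script segments might look like. The field names and values are my own assumptions for illustration, not the pipeline's actual JSON schema.

```python
# Hypothetical script segment with production cues (illustrative only, not the real schema).
segment = {
    "id": 217,
    "speaker": "matilda",          # daniel | matilda | narrator | expert
    "text": "Example line of dialogue goes here.",
    "pan": 0.25,                   # -1.0 = full left ... +1.0 = full right
    "overlap_ms": 220,             # tail-step: start before the previous line ends
    "eq_preset": None,             # e.g. "tape" = 300-4000 Hz bandpass for expert quotes
    "music_bed": "tension",        # tension | uplifting | contemplative | energetic
    "hard_cut_before": False,      # drop everything to silence before this line
}
```

A renderer can then place each stem on a timeline using these cues instead of simply concatenating the audio files.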

    1 hr 15 min
  2. 070 - Boil the Ocean

    2 MARCH

    070 - Boil the Ocean

    In February 2026, Y Combinator CEO Garry Tan published a short essay arguing that “don’t boil the ocean” — the most common piece of startup advice — is now obsolete. His reasoning rests on a chain of ideas stretching back 160 years, through a Victorian economist, a radical architect, and two professors betting on the price of tin. I followed that chain. It changed how I think about everything. The Paradox That Triggered a Coal Panic In 1865, a 29-year-old English logician named William Stanley Jevons published The Coal Question. His central observation was so counterintuitive that economists are still debating it 160 years later: “It is wholly a confusion of ideas to suppose that the economical use of fuel is equivalent to a diminished consumption. The very contrary is the truth.” Making fuel more efficient does not reduce consumption. James Watt’s steam engine used 75% less coal than the Newcomen engine for the same work. And yet British coal production grew 3.5% per year for 80 straight years — from 5.2 million tons in 1750 to 292 million tons at peak in 1913. A 56-fold increase. Watt’s engine didn’t conserve coal. It made coal-powered work so cheap that steam engines migrated from mine pumps to cotton mills to railways to steamships. Every new application that became economically viable created demand that overwhelmed the efficiency gains. This is the Jevons Paradox. And it has repeated with eerie precision wherever a fundamental resource has gotten dramatically cheaper. 14,000x Cheaper Light, 6,500x More Consumption Economist William Nordhaus traced the cost of artificial light from the 1300s to the present. One million lumen-hours cost the equivalent of £40,800 in the 1300s. By 2006: £2.90. A 14,000-fold decline. Did humanity respond by consuming the same amount of light? The average UK resident in 2000 consumed 6,500 times more artificial light than in 1800. Computing tells the same story — cost per gigaflop dropped from $18.7 million in 1984 to three cents in 2017. Total compute consumed went up by orders of magnitude. Storage, communication, bandwidth — same pattern, every time. The cheaper the resource, the more we consume. As long as human desire for it is elastic, Jevons holds. Fuller’s Arc: From Stone to Nothing While economists debated the paradox, Buckminster Fuller was watching the same phenomenon through a different lens. He called it “ephemeralization” — doing more and more with less and less until eventually you can do everything with nothing. Stone arches. Iron trusses. Steel cables. Wireless signals. Each generation of bridge required less material. But the number of connections, the volume of communication, the scope of what bridges enabled — that exploded beyond imagination. Fuller and Jevons aren’t contradictions. They’re two sides of the same process. We do more with less of this specific thing (Fuller’s insight), which makes the underlying capability so cheap that we consume vastly more of it (Jevons’ insight). The Ehrlich-Simon Bet In 1980, Stanford’s Paul Ehrlich — who predicted hundreds of millions would starve in the 1970s — bet economist Julian Simon $1,000 that five metals would rise in inflation-adjusted price over ten years. Ehrlich picked chromium, copper, nickel, tin, and tungsten. All five declined. Tin fell 55%. Tungsten dropped over 60%. The $1,000 basket was worth $423.93 by 1990 — despite world population growing by 800 million, the largest single-decade increase in human history. 
    Scarcity incentivized innovation, substitution, and new discoveries. Ehrlich mailed a check for $576.07 and never bet on resource prices again. Intelligence Is Collapsing Faster Than Anything Before It Now there’s a resource collapsing in cost that makes coal, light, and metals look like gentle declines. That resource is intelligence. Epoch AI found that LLM inference prices are falling between 9x and 900x per year, with a median of 50x. Since January 2024, that accelerated to 200x per year. GPT-3.5-level performance dropped 280-fold in cost — from $20 per million tokens to 7 cents — in two years. And here’s the Jevons effect in real time: despite per-token costs falling 280x, total inference spending grew 320% over the same period. Researchers found super-elastic demand — a 1% price decrease drives a 1.42% volume increase. Price times quantity rises even as price falls. Demand for intelligence is, in the paper’s language, “currently un-satiated.” No previous resource — not coal, not electricity, not computation — has become cheap this fast. This Is Personal I spend every day thinking about what happens when energy becomes the bottleneck. When intelligence is cheap but you need electricity to run it. When every home has solar and batteries and heat pumps and EVs, but nobody has figured out how to coordinate them. Germany had 457 hours of negative electricity prices in 2024. Generators paying people to take their power. Jevons would have recognized it instantly — we made electricity generation incredibly efficient, and demand exploded into categories nobody anticipated: data centers, EVs, heat pumps, AI inference clusters drawing megawatts. Tan’s essay distinguishes two responses to this moment. The zero-sum response: use AI to do the same thing cheaper, cut headcount, eke out 5% efficiency gains. The positive-sum response: attempt things that were previously impossible. Applied to the energy grid — the zero-sum response is building more solar panels and bigger batteries. Throwing hardware at the problem. The positive-sum response is asking: what if every home was a power plant? What if every battery was a grid asset? What if every EV was a node in a distributed network that could balance the grid in 200 milliseconds, from the edge, locally? The Ocean Is Already Boiling If Jevons is right — and 160 years of evidence says he is — the demand response to near-free intelligence will be proportionally extraordinary. The organizations and founders who raise their ambitions rather than protect their incumbency will define the next era. The ones who respond with zero-sum fear will find that the ocean is boiling around them whether they like it or not. Fuller, who died in 1983, anticipated this moment: “Ephemeralization trends towards an ultimate doing of everything with nothing at all.” Intelligence approaching zero marginal cost is the logical terminus of his arc from stone to wireless. The question is no longer whether this is happening. It’s whether we’ll use the moment to boil oceans — or drown in committee. Key Takeaways * The Jevons Paradox — making a resource more efficient doesn’t reduce consumption, it unleashes demand. Coal, light, computing, and storage all followed this pattern over 160 years. * AI inference costs are collapsing faster than any previous resource: 280x cheaper in two years, accelerating to 200x per year. Yet total spending on inference grew 320%, showing super-elastic demand.
* Jevons and Fuller describe two sides of the same process: we do more with less material (ephemeralization) while consuming vastly more of the underlying capability (the paradox). * The energy grid is the next Jevons battleground — cheap solar created 457 hours of negative prices in Germany, and AI-driven demand is exploding. The positive-sum response isn’t more hardware, it’s coordination infrastructure at the edge. Full transcript available below the audio player. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com
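As a quick sanity check on the super-elasticity figure quoted above, here is a toy calculation under the simple assumption of a constant 1.42 volume response per 1% price change; it only illustrates why spending can rise while unit prices fall, nothing more.

```python
# Toy check of the super-elastic demand claim: a 1% price cut with an elasticity of 1.42
# means volume grows faster than price falls, so total spending still rises.
elasticity = 1.42
price_cut = 0.01

new_price = 1.0 - price_cut                 # 0.99
new_volume = 1.0 + elasticity * price_cut   # 1.0142
new_spending = new_price * new_volume       # ~1.0041

print(f"spending changes by {100 * (new_spending - 1):+.2f}% for a 1% price cut")
```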

    17 min
  3. 069 - Building the Machine

    2 MARCH

    069 - Building the Machine

    This episode is different. Instead of talking about the grid, energy markets, or coordination protocols, I’m talking about this podcast — how it’s made, why I rebuilt the entire production pipeline from scratch, and what it taught me about the relationship between tools and output. The NotebookLM Era Here’s what used to happen: I’d spend a week reading papers, articles, talking to people. Then I’d dump everything into NotebookLM, hit generate, and get a thirty-minute episode. Every time. Thirty minutes. No control over length, no control over focus, and zero memory of what I’d already covered three episodes ago. It was fine. But it wasn’t mine. It didn’t sound like me. Every episode existed in isolation — no arc, no continuity. Just... content. And if you know me at all, you know that drives me absolutely crazy. Physics Before Code (Yes, Even for Podcasts) We say this at Sourceful constantly: physics before code. The physical reality of the problem has to drive the architecture. Not the other way around. So I asked myself — what’s the physics of a podcast? A podcast is a conversation. A human being talking to other human beings. That’s the constraint. Everything else is infrastructure. Once I framed it that way, the architecture became obvious. Research goes in. A script comes out — not generated audio, a script. Something I can read, edit, argue with. The script is the single source of truth. Audio, transcript, blog post — all derived from that one artifact. One command. Research in, episode out. The Memory Problem Here’s the part that gets me genuinely excited: the system has memory. It knows what we talked about in episode twelve. It knows which topics we’ve beaten to death and which ones we’ve barely touched. When it generates a new script, it has the context of every previous episode. That means I can say: give me a ten-minute episode on grid storage, connect it to what we said about frequency regulation in episode forty-two, and don’t repeat the EV stuff from last week. And it does that. Because it has the memory. There’s an obvious irony here. I spend all day building local-first coordination infrastructure for the energy grid — systems with memory, context, and local intelligence. And then I was going home and using a cloud-only, no-memory, no-control tool to make my podcast. The cobbler’s children have no shoes, right? Not anymore. Fifteen Iterations to Sound Like Myself I should be honest about the process, because it wasn’t “write code, done.” It was deeply iterative. The first version sounded terrible. Flat. Robotic. Like a GPS navigation system reading my thoughts. So I started tweaking. Voice stability settings. Speed. Silence between segments. Which model to use for synthesis. It turns out that AI voice cloning is incredibly sensitive to these parameters. Too much “style” and my voice drifts into accents I’ve never had. Too little speed and I sound sedated. The wrong model and prosody falls apart completely. It took about fifteen iterations to get to what you’re hearing now. It’s still not perfect. But it’s mine. Tools Shape Output There’s a deeper point here that I keep coming back to: the tools we use shape the things we make. If your tool gives you no control, you get generic output. If your tool has memory, your output has continuity. If your tool understands your constraints, your output respects them. We’re not just consuming AI anymore. We’re building with it — giving it context, constraints, memory. 
Making it an extension of how we think, not a replacement for it. Same Principles, Same Architecture The same shift is happening in energy. From consuming to coordinating. Your solar panels don’t just generate power — they coordinate with the grid. Your battery doesn’t just store energy — it provides frequency response. Your EV doesn’t just drive — it balances load in your neighborhood. Every device becomes a participant, not just a consumer. That’s what Coordinated is about. Not just the energy grid. Not just markets and thermodynamics. It’s about the idea that complex systems need coordination. And coordination requires memory, local intelligence, and respect for the physics of the problem. Whether that problem is balancing fifty hertz across a continent... or making a podcast that actually gets better over time. Same principles. Same architecture. Same obsession with doing it right. This episode was produced entirely by the Coordinated pipeline — script generated by Claude Opus, voice rendered through ElevenLabs, blog post auto-generated, all from one command. The cobbler finally made himself some shoes. Key Takeaways * I rebuilt the podcast production pipeline from scratch because NotebookLM gave me no control over length, focus, or continuity — every episode existed in isolation with no memory of previous ones. * The new system treats the script as the single source of truth. Research goes in, a human-editable script comes out, and everything else (audio, transcript, blog post) derives from that one artifact. * Episode memory changes everything — the system knows what 68 previous episodes covered, so it can build on past conversations instead of repeating them. * The tools we use shape what we make. If your tool has no memory, your output has no continuity. The same principle applies to energy: devices need to be participants with local intelligence, not just dumb consumers on a cloud API. Full transcript available below the audio player. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com
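For readers curious what "episode memory" could look like in practice, here is a minimal, hypothetical sketch of a helper that gathers summaries of past episodes mentioning a topic so the script generator can reference rather than repeat them. The file layout, field names, and function are assumptions for illustration, not the pipeline's real implementation.

```python
from pathlib import Path
import json

def build_episode_context(memory_dir: str, topic: str, limit: int = 5) -> str:
    """Collect summaries of past episodes that mention a topic, so the script
    prompt can build on earlier material instead of repeating it.
    (Hypothetical sketch: the JSON layout and fields are assumed.)"""
    matches = []
    for path in sorted(Path(memory_dir).glob("episode_*.json")):
        episode = json.loads(path.read_text())
        if topic.lower() in episode.get("summary", "").lower():
            matches.append(f'Episode {episode.get("number")}: {episode.get("summary")}')
    return "\n".join(matches[-limit:])  # keep only the most recent matches

# e.g. prepend build_episode_context("memory/", "frequency regulation") to the script prompt
```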

    6 min
  4. NATS as the Nervous System of the Grid

    28 FEB.

    NATS as the Nervous System of the Grid

    In this special episode of Coordinated with Fredrik, we went deep — not at the strategy layer, not at the founder layer, but at the socket level. This was a strict engineering teardown of a single question: Can NATS become the autonomic nervous system of Sourceful Energy? What follows is the architectural synthesis. The Real Problem We’re Solving At Sourceful, we are not just operating a backend. We are coordinating: * High-throughput mobile app traffic * A growing mesh of backend microservices * Massive telemetry streams from distributed energy assets * Smart meters * EV chargers * Solar arrays in remote geographies * Wind turbines behind unstable uplinks This is not a traditional cloud-native architecture. It is cloud + edge + unreliable networks + financial correctness. That requires more than horizontal scaling. It requires: * Isolation of failure domains * Backpressure awareness * Autonomous routing * Dynamic topology adaptation * Edge survivability This is where NATS becomes interesting. The 15 MB Binary That Changes the Conversation NATS runs as a single static Go binary of roughly 15 MB. A single node can handle 15–18 million messages per second. That sounds unrealistic until you understand the engineering choices: Concurrency Model * Lightweight goroutines * User-space scheduling * Massive TCP connection density Memory Discipline * Zero-allocation parsing * Pointer passing instead of object churn * Minimal garbage collection pauses Routing Philosophy * Pure in-memory message routing * Disk I/O only when explicitly requested NATS is not a heavyweight enterprise broker. It is a highly optimized, high-throughput routing engine. Selfish Optimization: Protect the System First One of the most controversial ideas in NATS is “selfish optimization.” If a downstream consumer slows down: * NATS does not buffer indefinitely * NATS does not slow producers * NATS drops the connection From a traditional enterprise mindset, that sounds aggressive. But in distributed energy systems, it is correct. If the router collapses: * Telemetry stops * Control signals stop * Billing APIs stop * The entire system fails Protecting the health of the transport layer is non-negotiable. The whole must survive even if individual services fail. Core NATS vs JetStream NATS separates transient routing from durability. Core NATS * In-memory * Fire-and-forget * Ultra-low latency * No persistence If no subscriber exists, the message is dropped. Use this for: * Real-time telemetry * State queries * Fast internal RPC JetStream * Durable streams * Raft-based replication * Replayable consumers * At-least-once / exactly-once semantics Use this for: * Billing events * Immutable records * Financial correctness The key principle: Persistence is opt-in. You only pay for disk I/O when the workload requires it. Raft Without the Bottleneck Most distributed streaming systems rely on a single global consensus group. JetStream does something different: * One meta-consensus group for cluster metadata * Independent Raft groups per stream * Even per consumer If you run 5,000 streams, you run 5,000 independent consensus groups. Why does that not collapse under overhead? Because: * Each Raft group runs as a lightweight goroutine * Heartbeats are batched * Streams are isolated A spike in one stream does not block the others. This is horizontal scalability at the consensus layer. Subject-Based Routing Instead of IP-Based Thinking NATS routes by subject strings, not by IP addresses. 
    Example: telemetry.eu.germany.meter80492 Routing is powered by an optimized radix trie. This means: * No regex matching * No linear scans * Logarithmic routing complexity Subject hierarchies become your semantic network. Developers stop thinking about: * Hostnames * Ports * DNS * Reverse proxies They express interest in data. The infrastructure routes it. Request-Reply Without HTTP NATS supports request-reply patterns without point-to-point connections. Mechanically: * The requester generates a temporary reply subject * Publishes a message including that subject * A service processes and replies to that subject * The first response wins To developers, it feels synchronous. Under the hood, it is fully asynchronous and multiplexed. Queue groups provide built-in distributed load balancing. This eliminates internal service meshes and east-west load balancers for microservice communication. Public ingress still requires API gateways. Internal routing becomes dramatically simpler. Global Scaling with Superclusters Inside a region, NATS uses a full mesh cluster. Across regions, it uses superclusters connected by gateways. Gateways operate in interest-only mode. If Europe is not subscribing to US telemetry: * No bytes cross the Atlantic. The moment interest appears: * Flow begins automatically. This prevents blind data mirroring and reduces egress costs dramatically. Leaf Nodes: Edge Autonomy Leaf nodes are where NATS becomes transformative for energy infrastructure. A leaf node: * Runs locally on edge hardware * Initiates an outbound TLS connection to the core * Requires no inbound firewall rules * Multiplexes all traffic over a single connection If connectivity drops: * Local JetStream buffers telemetry * Local control systems continue functioning * No data is lost When connectivity restores: * The stream synchronizes automatically * Consumers resume from correct offsets This enables: Autonomous edge during disconnection. Seamless federation when connected. For EV chargers, solar arrays, and wind turbines, this is critical. Decentralized Security at Scale Traditional brokers rely on centralized authentication. That becomes a bottleneck at scale. NATS uses: * ED25519 keypairs * JWT-based trust hierarchy * Operator → Account → User model Authentication becomes pure cryptographic verification. No database lookups. No external latency. No central auth bottleneck. Permissions are embedded in JWT claims: * Publish rights * Subscribe rights * Data limits Revocation can be pushed in real time without cluster restarts. For enterprises tied to Okta or LDAP, auth callouts bridge existing identity systems into decentralized JWT issuance. This allows compliance without sacrificing performance. Kafka, RabbitMQ, MQTT — Where NATS Fits Kafka Designed for: * Durable append-only logs * Analytics pipelines * Data lakes Strength: * Historical retention Tradeoff: * Partition-bound scaling * Consumer rebalancing pauses * Operational overhead NATS: * Dynamic routing * Elastic worker scaling * Lower latency for microservices RabbitMQ Designed for: * Complex exchange-based routing Tradeoff: * Higher operational fragility under partitions * Cluster state complexity NATS: * Simpler subject routing * Gossip for cluster sync * Raft-backed durability MQTT Best for: * Constrained IoT devices NATS does not replace MQTT. It embeds an MQTT broker. MQTT topics are mapped directly to NATS subjects internally.
    This creates a unified backbone: * Edge devices speak MQTT * Backend services speak NATS * No external translation layer required The Paradigm Shift For decades, distributed systems have been built around: * IP addresses * DNS names * Blocking HTTP calls * Explicit service discovery NATS introduces a different idea: Express interest in a semantic subject. Let an autonomic system route it dynamically. In a world of: * Real-time AI inference * Autonomous energy assets * Fluid containerized workloads * Distributed edge computing IP-based thinking becomes friction. Subject-based thinking becomes leverage. What This Means for Sourceful Adopting NATS is not swapping a queue. It is: * Flattening internal service meshes * Eliminating east-west load balancers * Moving complexity into autonomic infrastructure * Enabling edge-first resilience * Protecting system health by design * Running a global coordination backbone on a single optimized binary The goal is operational simplicity. Push complexity into the transport layer. Free engineers to focus on energy optimization logic. If we get this right: The infrastructure becomes invisible. And the grid becomes programmable. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com
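To ground the Core-NATS-versus-JetStream split and the subject-based routing described in these notes, here is a minimal sketch using the nats-py client. The URL, subjects, stream name, and payloads are placeholders of my own, and it assumes a local NATS server with JetStream enabled.

```python
# Minimal sketch: subject-based pub/sub (Core NATS) plus opt-in durability (JetStream).
import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    # Core NATS: in-memory, fire-and-forget. If nobody subscribes, the message is dropped.
    async def on_telemetry(msg):
        print(msg.subject, msg.data.decode())

    await nc.subscribe("telemetry.eu.>", cb=on_telemetry)          # interest in a subject tree, not an IP
    await nc.publish("telemetry.eu.germany.meter80492", b"230.1")

    # JetStream: persistence is opt-in, reserved for workloads that need it (e.g. billing).
    js = nc.jetstream()
    await js.add_stream(name="BILLING", subjects=["billing.>"])
    await js.publish("billing.events", b'{"kwh": 7.4, "site": "de-80492"}')

    await nc.flush()
    await nc.drain()

asyncio.run(main())
```

Request-reply works over the same connection (a temporary reply subject is generated under the hood), so no separate HTTP layer is needed for internal RPC.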

    40 min
  5. Slaying the Broken Charger Dragon

    28 FEB.

    Slaying the Broken Charger Dragon

    You know the feeling. You’re low on range. The map says there’s a charger around the corner. You pull up — relief. And then… black screen. Or “out of order.” Or worse: it looks fine, but nothing happens. That moment — that tiny spike of anxiety — is the real enemy of EV adoption. In this episode of Coordinated with Fredrik, we go deep into the engineering side of that problem. Not surface-level EV talk. Not market hype. We unpack the actual backend architecture required to build a charging network that doesn’t break — and more importantly, one that scales. This was a special deep dive made for David and the team at Sourceful Energy. The mission: How do you build a truly robust charging network using OCPP — but do it in a modern, open-source, event-driven way that fits a NATS-based architecture? Why OCPP 2.0.1 Is the Only Serious Choice Today We trace the evolution of OCPP from the early SOAP-based days (heavy XML over fragile 2G connections) to the WebSocket revolution of 1.6 — and then to the modern architecture of 2.0.1. Here’s the punchline: * 1.6 works, but it’s messy and vendor-fragmented. * 2.0.1 is structured, hierarchical, and event-driven. * It has native certificate handling. * It supports granular device modeling. * It enables accurate transaction handling. * It’s built for ISO 15118 and plug-and-charge. * And it cleanly prepares you for OCPP 2.1 and bidirectional charging (V2X). If you’re building from scratch in 2026, there’s no serious argument for staying on legacy 1.6 unless hardware forces your hand. The Architecture Question: Don’t Build a CSMS From Scratch The temptation is obvious: “We’re engineers. Let’s build it ourselves.” That’s a trap. Implementing the full OCPP spec — properly — is a multi-year effort. Edge cases, retries, timeouts, certificate handling, WebSocket state management… it’s a black hole. Instead, we explore the open-source landscape: * The old monolithic Java servers. * Client-side firmware stacks. * And finally: a modern, modular TypeScript-based backend designed for 1.6 and 2.0.1 side by side. The key architectural insight: Use a modular OCPP engine as the edge gateway. Inject a custom NATS adapter. Publish clean, validated events into your own internal event-driven system. Let OCPP parsing and compliance live at the edge. Let your business logic live in your own microservices. That separation is everything. NATS + OCPP 2.0.1 = Clean Topic Hierarchies OCPP 2.0.1’s hierarchical device model maps beautifully to NATS subject structures. Instead of a generic firehose of messages, you can structure topics like: ocpp.v2.station-42.evse1.connector.temperature ocpp.v2.station-42.transactionevent ocpp.v2.station-42.status Now your billing service only subscribes to transaction events. Your analytics service subscribes to metering data. Your ops dashboard listens to error codes. Fully decoupled. Clean. Scalable. This is how you avoid turning your backend into a spaghetti monolith. Reliability Is Not a Feature — It’s the Product Up to 25% of public chargers can be non-functional at any given time. That’s not a UX issue. That’s a systemic architectural failure. A huge reason: vague error reporting. If your charger only reports “Other error,” your operations team has no choice but to roll a truck. That’s expensive. And slow. In the episode, we talk about: * Mandating standardized granular error codes. * Using compliance tools to verify hardware implementations. * Making reliability a contractual requirement in procurement.
Because here’s the uncomfortable truth: The world’s best backend can’t fix bad firmware. Architecture and hardware procurement strategy must align. The Bigger Play: From Charger to Grid Asset We close the episode with something bigger. OCPP 2.1 introduced proper bidirectional charging (V2X). That means EVs stop being passive loads and start becoming active grid assets. If your backend is: * Event-driven * Granular * Secure * High-throughput You’re not just running a charging network. You’re laying the foundation for a virtual power plant. And that only works if the software pipes are designed correctly from day one. If you’re building infrastructure — not just apps — this episode is for you. This wasn’t about theory. It was about architecture decisions that determine whether your network becomes resilient, scalable, and future-proof — or collapses under its own complexity. Thanks for tuning in to Coordinated with Fredrik. More deep dives coming. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com
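To show how the subject hierarchy sketched in these notes keeps services decoupled, here is a short nats-py sketch where each backend service subscribes only to the events it cares about. The station IDs, subjects, and payloads are placeholders, and the wildcard behavior ("*" matches exactly one token) is standard NATS.

```python
# Sketch: per-service wildcard subscriptions over an OCPP subject hierarchy.
import asyncio
import nats

async def main():
    nc = await nats.connect("nats://localhost:4222")

    async def billing(msg):        # billing only ever sees transaction events
        print("billing <-", msg.subject)

    async def analytics(msg):      # analytics only ever sees temperature/metering data
        print("analytics <-", msg.subject)

    await nc.subscribe("ocpp.v2.*.transactionevent", cb=billing)
    await nc.subscribe("ocpp.v2.*.evse1.connector.temperature", cb=analytics)

    # The OCPP edge gateway (with its NATS adapter) would publish validated events like:
    await nc.publish("ocpp.v2.station-42.transactionevent", b'{"eventType": "Started"}')
    await nc.flush()
    await asyncio.sleep(0.1)       # give the callbacks a moment to fire
    await nc.drain()

asyncio.run(main())
```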

    33 min
  6. Special Deep Dive for David: OCPP Without the Marketing Fluff

    27 FEB.

    Special Deep Dive for David: OCPP Without the Marketing Fluff

    It wasn’t for a broad audience. It wasn’t for policymakers or the energy-curious. It was built for one specific person: David — a firmware engineer driving north on the E4 toward Linköping. Long highway. Solo drive. Brain half in traffic, half in state machines. So we decided to go deep into something that actually matters if you’re building charging infrastructure: OCPP — Open Charge Point Protocol. No marketing fluff. No “empowering the green transition.” Just architecture, protocol evolution, firmware headaches, and where this is all going. What OCPP Actually Is Strip it down and OCPP is simple: It is the application-layer language between a charging station (EVSE) and a backend system (CSMS). That’s it. Today, that means: * JSON payloads * WebSockets transport * TLS security * Deterministic state machines It is the pipe between hardware and cloud. And if you’re writing firmware, it is the difference between “works in the lab” and “survives production.” The Early Days: SOAP, XML, and Pain Before OCPP 1.6, we lived in the SOAP era. XML everywhere. Heavy envelopes. Verbose messages. Request/response HTTP only. That created a structural problem: chargers sit behind NATs and cellular modems. Servers couldn’t easily push commands down to them. So chargers had to poll. “Do you have work for me?” “Do you have work for me?” “Do you have work for me?” Inefficient. Expensive. Messy. And for embedded systems? Parsing XML on a constrained MCU was not fun. OCPP 1.6J — The WebSocket Revolution Then came OCPP 1.6J. The “J” mattered. JSON instead of XML. Persistent WebSockets instead of pure HTTP. Bidirectional messaging. Suddenly: * The backend could push commands instantly. * Latency dropped. * Cellular data usage shrank. * Parsing became lighter. For firmware engineers, this was a major quality-of-life upgrade. And 1.6 went everywhere. AC chargers. DC chargers. Public networks. Private networks. It worked. But it wasn’t clean. The Ambiguity Problem OCPP 1.6 had a philosophical flaw: It was ambiguous. Take a simple case. A user plugs in a cable before authorizing. The charger sends: StatusNotification: Preparing What does that mean? * Plug inserted? * Authorized but waiting? * Internal self-check? * Something else? The backend had to infer meaning from sequence patterns. That’s brittle engineering. And when you scale across vendors, those interpretations diverge. OCPP 2.0 → 2.0.1: The Hard Reset OCPP 2.0 tried to fix everything. It introduced a massive architectural shift — including a hierarchical device model. But the specification itself had issues: * Broken schema references * Circular definitions * Inconsistencies You couldn’t strictly validate against it. So 2.0.1 replaced it. And this is important: * 2.0.1 is not backward compatible with 1.6. * It’s a structural rewrite. * If someone says “OCPP 2.0,” they almost certainly mean 2.0.1. Flat World vs Hierarchical World In 1.6, configuration was flat. Key → Value. Like an INI file with a networking layer. In 2.0.1, everything becomes a tree. Charging Station → EVSE → Connector And beyond that: * Cooling systems * Power modules * Converters * Displays * Locks * Subsystems Each defined as: * Component * Variable * Attribute This is no longer just a protocol. It’s a digital twin of the hardware. For firmware engineers, that means: * Internal state model required * Memory allocation planning * Component indexing strategies * Careful RAM management You’re not just flipping relays anymore. You’re modeling the machine.
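To make that device model concrete, here is a rough firmware-side sketch of the Component, Variable, and Attribute hierarchy. The class layout and the specific component names are illustrative assumptions, not a spec-complete rendering of the 2.0.1 data model.

```python
# Rough sketch of the 2.0.1 hierarchical device model (Component -> Variable -> Attribute).
from dataclasses import dataclass, field

@dataclass
class Variable:
    name: str
    attributes: dict = field(default_factory=dict)   # e.g. {"Actual": "41.5", "MaxSet": "80"}

@dataclass
class Component:
    name: str
    evse_id: int | None = None
    connector_id: int | None = None
    variables: list[Variable] = field(default_factory=list)

# Illustrative slice of a station's internal state model.
station_model = [
    Component("ChargingStation",
              variables=[Variable("AvailabilityState", {"Actual": "Available"})]),
    Component("Connector", evse_id=1, connector_id=1,
              variables=[Variable("Temperature", {"Actual": "41.5"})]),
]
```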
    Transactions: From Guessing to Determinism 1.6 had separate messages: * StartTransaction * StopTransaction * MeterValues 2.0.1 consolidates everything into: TransactionEvent Each event includes: * Event type * Trigger reason Now the charger can explicitly say: * CablePluggedIn * Authorized * RemoteStart * EVConnected No guessing. No backend inference. Just a deterministic state machine. This is one of the most important improvements in the protocol’s history. Offline Handling: The Reality of Cellular Networks Cellular drops. Forests exist. Power glitches happen. In 1.6: * Local whitelist for RFID * Vague retry behavior * Data sometimes lost * Billing inconsistencies In 2.0.1: * Defined queuing behavior * Sequential ordering * Transaction sequence numbers * Explicit retry logic But here’s the firmware cost: You now need: * Non-volatile message storage * Flash wear-leveling * Circular buffer logic * Message prioritization This is where embedded engineering meets infrastructure-grade reliability. Ping Is Not Heartbeat One subtle but critical distinction: WebSocket ping keeps the TCP connection alive. OCPP heartbeat confirms the application logic is running. You can have: * Working ping * Deadlocked firmware loop If your watchdog logic monitors only socket state, you will miss application failures. Network uptime ≠ system health. Error Handling: Stop Using “OtherError” In 1.6, the error enum was limited. Anything outside predefined values became: OtherError Operationally useless. 2.0.1 improves this with structured NotifyEvent messages tied to components. Then industry alignment brought standardized Minimum Required Error Codes (MRECs). Instead of “OtherError,” you can send precise fault codes for: * Ground fault * Pilot failure * UI failure * Network outage That difference saves unnecessary truck rolls. And truck rolls are expensive. Security: From Naive to Cryptographic Security evolved dramatically. Profile 1: Plain HTTP. Not acceptable. Profile 2: TLS — server authentication. Profile 3: Mutual TLS — charger and server authenticate each other. 2.0.1 improves: * Certificate lifecycle management * CSR handling * Renewal flows Add secure firmware updates with signature verification, and now you have something defensible. Without that, your charger is just an exposed Linux box with high voltage attached. ISO 15118 and Plug & Charge Plug & Charge requires certificate exchange between: * Vehicle OEM * Mobility operator * Charging infrastructure In 1.6, this required awkward tunneling and vendor extensions. In 2.0.1, it is native. If you are building serious DC fast chargers today, 2.0.1 is not optional. OCPP 2.1: The Grid Starts Moving Released in January 2025, OCPP 2.1 extends 2.0.1 and adds: * Bidirectional charging (V2X) * DER integration * Dynamic payment support * Secure QR-based session flows This is where it gets interesting. When vehicles can discharge power: * V2H (vehicle-to-home) * V2G (vehicle-to-grid) The charger becomes a coordination node. And the backend becomes an orchestrator of distributed batteries. At scale, that means millions of mobile storage units participating in grid balancing. The physical charger becomes infrastructure plumbing. The real asset becomes the battery — moving down the highway.
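To ground the TransactionEvent discussion above, here is an abridged, illustrative payload. The values are placeholders and many fields from the 2.0.1 schema are omitted; treat it as a sketch of the shape, not a validated message.

```python
# Abridged, illustrative OCPP 2.0.1 TransactionEvent payload (placeholder values).
transaction_event = {
    "eventType": "Started",
    "timestamp": "2025-02-27T14:03:12Z",
    "triggerReason": "CablePluggedIn",
    "seqNo": 0,                                   # explicit ordering survives offline queuing
    "transactionInfo": {"transactionId": "tx-000123", "chargingState": "EVConnected"},
    "evse": {"id": 1, "connectorId": 1},
}
```

Because the trigger reason and charging state are explicit, the backend no longer has to infer what "Preparing" meant from message ordering.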
    The Bigger Picture The shift from 1.6 to 2.0.1 and 2.1 is a shift: From ambiguity → determinism. From flat config → digital twin. From best effort → structured reliability. From charging → energy orchestration. We are moving from “start a charging session” to: “Coordinate distributed energy assets dynamically across the grid.” And that is a completely different future. The car is no longer just consuming electricity. The car is becoming part of the grid. David, if you’re reading this — or listening back — may your TLS handshakes succeed. May your heap never fragment. May your sequence numbers stay ordered. And may your latency be low and your uptime high. See you in the next deep dive. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    28 min
  7. The Internet Was Born in Panic

    22 FEB.

    The Internet Was Born in Panic

    The textbook version of how the internet came to be is clean. A few smart people in California connected some computers, email happened, and then we got cat videos. That version is wrong. Or rather, it is so sanitized that it misses what actually makes the story worth telling. I spent this episode going deep on the real source material. DARPA archives, technical memos from the Internet Society, oral histories from the engineers who were actually in the room. And what came out is a story that is far more chaotic, far more human, and far more terrifying than the version you got in school. The fear that started everything The internet does not begin with a desire to share information. It begins with the end of the world. Early 1960s. The Cuban Missile Crisis is fresh in everyone’s memory. The United States and the Soviet Union are sitting on arsenals that can flatten civilization several times over. And in the Pentagon, the generals have a very specific nightmare that has nothing to do with the bombs themselves. It is about what happens after the first bomb lands. The question was simple: if the president picks up the phone to order a counter strike, does the phone actually work? The entire US communications infrastructure at the time ran on the AT&T telephone network. Circuit switched. When you called someone, the system physically connected a series of copper wires and mechanical switches all the way across the continent. You were essentially renting a very long wire for the duration of your conversation. The problem was the switching stations. All those wires funneled through major hubs in major cities. Chicago. St. Louis. Denver. Cities that happened to be top tier targets for Soviet missiles. Take out the switching station in St. Louis and you did not just lose St. Louis. You severed the connection between the entire East Coast and the West Coast. Efficient for peacetime. Incredibly brittle for war. Paul Baran and the fishnet This brittleness is what brought Paul Baran into the picture, and this is a name that really should be on statues. Baran was an engineer at the RAND Corporation, the think tank tasked with thinking about the unthinkable. Around 1960, he started working on the survivability problem. The result was massive. Eleven volumes of what became the On Distributed Communications memorandum. In those papers, he drew three network topologies that remain the most important diagrams in the history of computing. Centralized: a bicycle wheel. All spokes connect to one hub. One bomb, game over. Decentralized: a few bicycle wheels connected to each other. Better. But you can still isolate large chunks of the network by hitting the right nodes. Distributed: a fishnet. No center. No hubs. Every node connected to several of its neighbors. A mesh. And then Baran did something that separates the thinkers from the engineers. He did not just draw it. He proved it mathematically. He ran Monte Carlo simulations where he virtually destroyed nodes in his mesh. He simulated a nuclear attack on his own design. The results were staggering. Even if you destroyed 50% of the nodes, literally wiped half the map off the face of the earth, the remaining nodes could still maintain significant connectivity. That is deeply counterintuitive. You would think losing half the network would kill the whole thing. But the beauty of redundancy is that if the direct route is gone, the message goes left, then down, then right, then up. As long as some path exists anywhere, the message gets through. 
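As a toy illustration of the Monte Carlo result described above, here is a short sketch that builds a fishnet-style mesh (each node linked to up to eight neighbors, in the spirit of Baran's redundant designs), destroys half the nodes at random, and measures the largest surviving connected cluster. It is only an illustration; Baran's actual simulations were far more detailed.

```python
# Toy re-run of Baran's experiment: knock out 50% of a mesh and check what stays connected.
import random

N = 20                                                        # 20 x 20 grid of nodes
nodes = {(x, y) for x in range(N) for y in range(N)}
survivors = set(random.sample(sorted(nodes), len(nodes) // 2))  # "destroy" half the nodes

def neighbors(node):
    x, y = node
    return [(x + dx, y + dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]

# Depth-first search for the largest connected component among the survivors.
seen, largest = set(), 0
for start in survivors:
    if start in seen:
        continue
    stack, size = [start], 0
    while stack:
        current = stack.pop()
        if current in seen:
            continue
        seen.add(current)
        size += 1
        stack.extend(n for n in neighbors(current) if n in survivors and n not in seen)
    largest = max(largest, size)

print(f"{largest} of {len(survivors)} surviving nodes remain in one connected cluster")
```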
But this created a new problem. In the old system you had a dedicated wire. You knew the exact path. In a fishnet where the path changes constantly and cities might get nuked mid-sentence, how does the information know where to go? Hot potato routing and the invention of packets Baran realized you could not use analog voice anymore. You could not send a continuous stream. If the path breaks, the stream breaks. You have to digitize it. Chop it up. He proposed taking data and cutting it into tiny digital chunks of 1,024 bits each. He called them message blocks. Then he introduced a concept he called hot potato routing. A guy working on apocalyptic nuclear survival strategy called it hot potato. And it is the perfect description. Imagine each node is a person. You hand them a message block. They do not store it. They do not think about it. They look at the address, look at their neighbors to see who is still alive and not busy, and they throw it to the best option immediately. Get it out of here. If that neighbor gets destroyed a millisecond later, the next potato goes to a different neighbor. No central commander needed. The network heals itself, packet by packet, node by node. Now here is where the story gets strange. While Baran is doing this work in Santa Monica, fueled by Cold War paranoia, a physicist named Donald Davies at the National Physical Laboratory in the UK is independently arriving at almost the exact same solution. But Davies was not thinking about bombs. He was thinking about efficiency. Davies realized that the phone system was a terrible fit for computer data. Phone calls are continuous. Computers are bursty. You type a command, burst of data, then sit there thinking for twenty seconds. Silence. Then the computer replies. Burst. If you are renting an entire highway for that, you are driving one car down it every ten minutes. Davies proposed chopping data into chunks and weaving different conversations together on the same wire. Like shuffling a deck of cards. And he gave us the word we still use: packets. The wild part is that Baran and Davies, working in total secrecy from each other on different continents with completely different motivations, both settled on 1,024 bits as the optimal packet size. The physics of information transfer pushed them both to the exact same destination. It was not just invented. It was discovered. From theory to refrigerators Theory is cheap though. Building the thing required money, and it required a guy named JCR Licklider. Licklider was a psychologist, which matters. He was not just a hardware person. He was interested in how humans and machines interacted. In 1962 he took over the computer research program at ARPA, and he brought a philosophy that was alien to the military. He did not see computers as ballistic calculators. He saw them as communication devices. In 1963 he wrote a memo to his colleagues with a title that still gives me chills: To the Members and Affiliates of the Intergalactic Computer Network. This man was dreaming of the cloud fifty years too early. Licklider was the evangelist. He passed the torch to Bob Taylor, who got the project approved for the most mundane reason imaginable. Taylor had three terminals in his Pentagon office, each connected to a different mainframe at a different university. None of them talked to each other. He had to physically roll his chair between them. He reportedly said: “Man, it is obvious what to do. 
    If you have these three terminals, there ought to be one terminal that goes anywhere you want to go.” That frustration sparked the ARPANET. Taylor hired Larry Roberts as chief architect, but Roberts immediately hit a wall. The universities he wanted to connect all ran different computers speaking different machine languages. And the universities were hostile to the idea. They did not want to give up their precious computing cycles to run experimental network software. The solution came from Wesley Clark: do not ask the mainframes to run the network. Build a smaller, separate computer to handle the traffic. They called it the IMP, the Interface Message Processor. ARPA sent out a request for quotation to 140 companies. IBM laughed at them. The giants of the industry thought packet switching was unstable. Only 12 bids came back. The winner was a small consulting firm in Cambridge, Massachusetts: Bolt, Beranek and Newman. BBN. The IMP they built was a modified Honeywell DDP-516. A steel refrigerator you could probably drop off a moving truck and it would still boot up. Inside that steel fridge: 12 kilobytes of memory. The code that ran the entire early internet: 6,000 words of assembly language. No room for bloat. And the best side note from this era: when BBN won the contract, Senator Ted Kennedy sent them a congratulatory telegram. He had misread Interface Message Processor as Interfaith Message Processor. Considering they were getting a Honeywell to talk to an IBM, it was basically a religious miracle. Lo October 29, 1969. Boelter Hall, UCLA, Room 3420. A windowless room filled with the hum of cooling fans. Charlie Kline, a 21-year-old grad student, sits at a terminal. 350 miles away at the Stanford Research Institute, Bill Duvall sits at another. They are coordinating over a regular telephone. Kline types L. “Did you get the L?” Duvall checks: “Got the L.” Kline types O. “Got the O.” Kline types G. The system crashes. Buffer overflow on the Stanford side. The first message ever sent on the internet was “LO.” Like lo and behold. It was accidental poetry. The telegraph got “What hath God wrought.” The telephone got “Mr. Watson, come here.” Both scripted and rehearsed. The internet began with a crash and a fragment. Feels honest, somehow. The internet is messy. It started messy. They fixed the bug within an hour. By December they had four nodes running. The constitution of cyberspace The original ARPANET ran on a protocol called NCP. It was designed for a world where ARPANET was the only network and all the hardware was trusted. But by the early 70s, other networks started appearing. Satellite links. Radio networks in Hawaii. And none of them could talk to each other. Enter Vint Cerf and Bob Kahn. In 1974 they published a paper that still runs the world: A Protocol for Packet Network Intercommunication. They laid out four ground rules that I think of as the constitution of cyberspace. Each network stands on its own. Yo

    34 min
  8. The $8.8 Trillion Foundation Nobody Owns

    22 FEB.

    The $8.8 Trillion Foundation Nobody Owns

    There is a number that keeps rattling around in my head since recording this episode: $8.8 trillion. That is the demand-side value of open source software according to a recent Harvard Business School study. Not the market cap of the companies selling software. The value of the code itself. The actual lines sitting in public repositories, running the global economy, maintained by volunteers, hobbyists, and a rotating cast of corporate contributors who might get reassigned next quarter. To put that in perspective, it rivals the GDP of the entire Eurozone. And here is what keeps me up at night: if I buy a physical component for our energy infrastructure, I get a warranty, a supplier, and a paper trail. If it breaks, I know who to call. But the software stack underneath all of it — the grid management, cloud infrastructure, data pipelines — that sits on a foundation that legally comes with zero warranty. None. Express or implied. That tension is what this episode is really about. When software was just the manual We tend to think of software as the product. But go back to the 1950s and 60s, and software was an accessory that shipped with the hardware. Nobody hoarded it because there was no reason to. IBM customers formed a user group in 1955 called SHARE, and their motto was disarmingly simple: “SHARE is not an acronym, it’s what we do.” By 1959, they had collaboratively written an entire operating system. Just engineers helping engineers get million-dollar machines to work better. That culture crystallized into something close to a philosophy at the MIT AI Lab in the 1970s. Code was left open. If you needed a program for your experiment, you walked to a cabinet, copied the source from a paper tape, added what you needed, and put it back. It was communal by default. Then the world changed. A printer jam that reshaped the global economy The proprietary turn started with a legal shift, not a technical one. IBM unbundled software from hardware in 1969 under antitrust pressure, and overnight, code got a price tag. Bill Gates fired the first cultural shot in 1976 with his open letter to hobbyists, essentially arguing that if software has value, creators deserve to get paid. Fair point from a business perspective. But to the hacker community, it felt like someone was putting fences around a public park. The real breaking point, though, was absurdly petty. Richard Stallman wanted to fix a paper jam on a Xerox printer at MIT. He had done it before on their old printer by writing a script that notified users when their print job got stuck. But the new Xerox printer ran proprietary code. When he asked a researcher at Carnegie Mellon for the source, the guy said no — he had signed an NDA. Stallman viewed this as a moral betrayal. That printer jam radicalized him. He launched the GNU project in 1983 and created the GPL, a legal hack that used copyright law against itself. Copyright restricts sharing. Copyleft mandates it. The GPL said: do whatever you want with this code, but if you distribute changes, you must keep them open under the same license. It was viral by design. The hobbyist who built the engine Stallman had the philosophy, the legal tools, and the foundational programs. But by the early 90s, GNU was missing its kernel — the part that actually talks to the hardware. Their kernel project, Hurd, was stuck in architectural perfection debates. 
While they were building a cathedral, a 21-year-old Finnish student named Linus Torvalds posted a casual message on Usenet in August 1991: “I’m doing a free operating system, just a hobby, won’t be big and professional like GNU.” Probably the greatest understatement in the history of technology. Linus licensed his kernel under Stallman’s GPL, not for ideological reasons, but because it was a fair trade: I show you my code, you show me yours. His monolithic kernel was messy, technically “wrong” by academic standards, but it was fast and it worked. History proved that worse is better. A massive, chaotic network of volunteers connected by the internet could iterate faster than any closed corporate team. When the boardrooms noticed Corporate America eventually could not ignore it. But “free software” sounded anti-capitalist and legally terrifying to a CIO in 1997. So in 1998, at a strategy session in Palo Alto, Christine Peterson suggested the term “open source” — stripping away the moral philosophy and replacing it with a pure engineering and business argument. Less vendor lock-in. Shared maintenance costs. Better, faster, cheaper. Microsoft was terrified. Internally, their own engineers admitted Linux was competitive. Publicly, Steve Ballmer called it “a cancer.” Their strategy was embrace, extend, extinguish. But then IBM showed up as the white knight. In 2001, they announced a billion-dollar investment in Linux. Not because they cared about the OS — they made their money on hardware and consulting. It was a move to commoditize the operating system layer and destroy Sun Microsystems’ expensive proprietary Unix business. IBM told every Fortune 500 CIO: Linux is safe. You will not get fired for running it. And once it was deemed safe, it started eating the world. The LAMP stack (Linux, Apache, MySQL, PHP/Python) let startups build for zero licensing cost. Facebook, Wikipedia, WordPress — all built on free infrastructure. By 2002, Apache ran 58% of all websites. The irony peak came in 2018 when Microsoft, the “cancer” company, acquired GitHub for $7.5 billion. They finally realized the cancer was actually the cure for their own irrelevance. The fragility underneath Here is where it gets uncomfortable. XKCD comic 2347 shows all of modern digital infrastructure as a massive tower balanced on one tiny block: “a project some random person in Nebraska has been thanklessly maintaining since 2003.” It is funny until you realize it is basically a documentary. Heartbleed in 2014 showed us that the encryption library securing most of the internet’s traffic was maintained by essentially one person full-time. Log4j in 2021 scored 10 out of 10 on the severity scale and compromised 40% of business networks overnight — a logging library so boring nobody paid attention to it. But the one that should genuinely scare any CEO is the XZ Utils backdoor from 2024. This was not an accidental bug. A persona calling themselves “Jia Tan,” likely a state-sponsored actor, spent two to three years earning the trust of a burned-out volunteer maintainer. They submitted helpful patches, took load off the tired guy’s shoulders, and were gradually granted repository permissions. Once they had the keys, they injected a backdoor designed to subvert SSH authentication — the way administrators securely log into servers. If it had hit stable Linux releases, attackers would have had a master key to millions of servers worldwide. It was caught by pure luck. 
A Microsoft engineer named Andres Freund noticed SSH logins lagging by half a second during routine benchmarking, got curious, decompiled the binaries, and found the most sophisticated supply chain attack in history. Half a second of latency saved us. Maintainer burnout is not just a sad open source HR problem. It is a national security vulnerability. AI breaks the definition And now we have AI, where the very meaning of “source code” falls apart. A traditional open source project is human-readable text files. An LLM is three things: architecture, training data, and weights. When Meta releases Llama and calls it “open source,” they give you the weights but not the training data. It is like handing someone a compiled binary without the original code. You can run it, but you cannot reproduce it or deeply audit it. The geopolitics here are loud. Meta releasing Llama was not charity. It was scorched earth. If capable AI models become a free commodity, OpenAI and Google lose their moat. Meta protects its core ad business by destroying competitors’ margins. It is IBM and Linux all over again, just at a different scale. And now nation states are playing. The UAE funds the Falcon models. France backs Mistral. DeepSeek R1 from China matched GPT-4 reasoning at a fraction of the training cost and briefly crashed NVIDIA’s stock. China is using open source as a competitive wedge against American AI dominance. Satya Nadella made a point that stuck with me: data sovereignty is not just about where your servers sit. It is about tacit knowledge. If you rely entirely on a closed API from an American tech giant, every prompt you send makes their model smarter. You are exporting your own intelligence. But if you download an open-weight model and fine-tune it locally, you keep the weights. You own the brain. For any company running critical infrastructure, this is not a theoretical debate. Grazer or gardener? The episode ends with a question I have been sitting with: when I look at our tech stack, do I see a pile of free resources to consume indefinitely? Or do I see a fragile supply chain that requires active stewardship? Because if you are not contributing back — engineering time, financial support, participating in governance — you are not a neutral user. You are part of the risk profile. You are building skyscrapers balanced on the shoulders of that tired volunteer in Nebraska. Open source is not just code anymore. It is the invisible critical infrastructure of the modern world. And infrastructure requires maintenance. Listen to the full episode of Coordinated with Fredrik wherever you get your podcasts. This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit frahlg.substack.com

    34 min
