The Node (and more) Banter

Platformatic

The Node (and more) Banter is your weekly dose of unfiltered, unscripted conversations between Luca Maraschi and Matteo Collina. We explore the edge cases, the anti-patterns, and the things no one puts in the docs. From distributed architecture to platform pitfalls and how enterprises tackle modern development—nothing’s off-limits. It’s not just Node.js®—it’s everything around it, wrapped in sharp banter, war stories, and real-world insight. The sky’s the limit.

  1. 4 FEB

    Scaling Node.js with the Right Signals: ELU, Kafka, and Kubernetes

    Node.js performance in production isn’t about a single number — it’s about understanding the signals that drive scaling, stability, and cost. Event Loop Utilization (ELU) sounds simple, but once you add Kafka consumers, Kubernetes autoscaling, streams, and worker threads, things get complicated fast.

    In this episode of The Node (& More) Banter, Luca Maraschi and Matteo Collina dig into Node.js metrics through the lens of real-world, event-driven systems. We focus on how ELU behaves in Kafka-heavy workloads, how it correlates with CPU, memory, and I/O, and why choosing the right metrics matters when you’re running Node.js on Kubernetes — especially with architectures like Watt.

    We’ll explore:
    ✅ What Event Loop Utilization really measures — and why it’s a better signal than CPU alone
    ✅ How ELU behaves for Kafka consumers and stream-based workloads
    ✅ The relationship between ELU, memory pressure, and I/O saturation
    ✅ Why Kubernetes autoscalers struggle with Node.js — and where ELU fits in
    ✅ When worker threads help, and how to reason about ELU across workers
    ✅ How Kafka client design impacts event loop health and throughput
    ✅ Why Watt’s architecture aligns naturally with metric-driven scaling in K8s

    The big picture? Metrics shape architecture. If you run Node.js with Kafka on Kubernetes, this episode helps you understand which signals actually reflect load, how to avoid misleading autoscaling decisions, and why Watt was designed around these realities from day one.

    40 min
  2. 21 JAN

    When Recursion Crashes Your App: The Async Hooks DoS Nobody Expected

    JavaScript applications have long relied on a simple assumption: when recursion goes too far, the runtime throws a catchable error and the server survives. But under the hood, that assumption was never guaranteed — and with async context tracking enabled, it completely breaks.

    In this episode of The Node (& More) Banter, Luca Maraschi and Matteo Collina dive into a recently disclosed Node.js bug that turned deep recursion into an unrecoverable process crash. A subtle interaction between stack exhaustion and async_hooks caused Node.js to exit immediately — bypassing try/catch, uncaughtException, and any chance of graceful recovery. The result: a denial-of-service risk silently affecting React Server Components, Next.js, and nearly every APM tool in production.

    We’ll cover:
    ✅ Why stack overflows were never a reliable safety mechanism — even before this bug
    ✅ How AsyncLocalStorage and async_hooks put fatal error handling on the call stack
    ✅ Why React Server Components, Next.js request context, and APM tools were all affected
    ✅ How a single deeply nested request could crash an entire Node.js process
    ✅ What the fix actually changes — and why it’s a mitigation, not a full solution
    ✅ What teams should do now: upgrades, input limits, and safer architectural patterns
    ✅ What we tried that didn’t work

    This wasn’t a React bug. It wasn’t an APM bug. And it wasn’t even really a Node.js security bug. It was a reminder that recovering from resource exhaustion is not a contract, and that modern Node.js architectures increasingly depend on behaviour the runtime never promised. If you run React, Next.js, or any Node.js service with async context or observability enabled, this episode explains what broke, why it mattered, and how to avoid building availability on assumptions that won’t always hold.
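    The "contract" the episode questions looks like this in code — a sketch of the recovery pattern many servers implicitly rely on. On an unaffected runtime the catch block runs; on the buggy versions, with async context tracking enabled, the process aborted before it could:

```javascript
// Deep recursion with no base case exhausts the call stack.
// V8 does not do tail-call optimization, so this cannot loop forever.
function recurse(n) {
  return recurse(n + 1);
}

try {
  recurse(0);
} catch (err) {
  // The long-assumed behaviour: a catchable RangeError
  // ("Maximum call stack size exceeded") and the process survives.
  console.log(err instanceof RangeError); // true on an unaffected runtime
}
```

    The episode's point is that this catch block was never a guarantee: once async_hooks callbacks must themselves run on an already-exhausted stack, there may be no room left to throw at all.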

    35 min
  3. 14 JAN

    The JAMstack Is Dead. Long Live the Runtime!

    Markdown in Git sounds like the simplest possible way to manage content. No CMS. No dashboards. No abstractions. Just files, version control, and AI agents that can grep the codebase. But once content needs to scale — across pages, teams, products, and automation — that simplicity starts to crack. Asset management sneaks back in. Permissions reappear. Content models emerge. Queries get rebuilt. And before long, you’re running a backend again.

    In this episode of The Node (& More) Banter, Luca Maraschi and Matteo Collina unpack the debate sparked by “You should never build a CMS” — and explain why deleting the CMS doesn’t delete the problem, it just moves it into code.

    We’ll cover:
    ✅ Why “content = page” breaks down as soon as content needs to be reused, queried, or governed
    ✅ How markdown + git quietly recreates CMS features — just without calling them that
    ✅ Why git workflows work for code but fall apart for real content collaboration
    ✅ Why AI agents need structured, queryable content — not grep and string matching
    ✅ How the JAMstack model collapses once content becomes dynamic, shared, and automated
    ✅ Why runtime-first architectures (and Node) are unavoidable in modern content systems

    The takeaway? You can delete the CMS UI — but you can’t delete the runtime. The JAMstack era is ending, and what replaces it is content infrastructure built for APIs, agents, and systems that need to reason, not just render.

    40 min
  4. 17 DEC 2025

    Inside the React RCE: What the Flight Vulnerability Really Reveals

    The latest vulnerabilities in React Server Functions and the React Flight Protocol highlight just how fragile modern serialization can be. When insecure prototype access escalates into remote code execution, it’s not just a bug — it’s a wake-up call for anyone building with server-driven React.

    In this episode of The Node (& More) Banter, Luca Maraschi and Matteo Collina break down the newly disclosed React/Next.js RCE vulnerabilities and what they reveal about the complexity hidden inside today’s server-side React architectures. No blame, no sensationalism — just a clear explanation of what happened and why it matters.

    We’ll also touch on why this issue sent shockwaves across the industry. A single, strange-looking payload — now circulating widely — became the centerpiece of an exploit that blended JavaScript’s dynamic nature with a missing safety check in React Flight. Security researchers described it as a “CTF-level puzzle,” a reminder that powerful patterns like promise streaming, prototype inheritance, and dynamic evaluation come with sharp edges.

    We’ll cover:
    ✅ How React Server Functions and the Flight Protocol work — and why their serialization model is so complex.
    ✅ What made reference resolution and prototype access dangerous enough to enable RCE.
    ✅ Why server-driven React expands the attack surface when deserializing client input.
    ✅ How the patch fixes the root issue — and what this means for future React security.
    ✅ What teams should rethink today, from parsing to global state to architectural boundaries.

    Security incidents aren’t just CVEs — they’re blueprints for better engineering. If you run React Server Components, Next.js Server Actions, or any system that deserializes user input, this episode will help you understand the vulnerability, the fix, and the broader lessons for the ecosystem.

    30 min
  5. 10 DEC 2025

    The Node.js (R)evolution Started: AWS Just Made It Official

    Running Node.js in serverless environments should be simple: deploy a function, let AWS scale it, and forget about infrastructure. But when you introduce multi-concurrency, shared worker threads, global state risks, and CPU-bound workloads — it’s not that simple.

    In this episode of The Node (& More) Banter, Luca Maraschi and Matteo Collina break down one of the biggest announcements from AWS re:Invent: the new Node.js runtime for Lambda Managed Instances. AWS is officially validating what Platformatic has been saying for months — Node.js is entering a multi-concurrency era, and most applications are not ready for it. We’re not only deep-diving into what this means for AWS in general, but also exploring how these changes reflect on modern enterprise web workloads, going beyond the headlines to explain why AWS had to move in this direction and what it means for building, scaling, and operating Node.js applications in 2025.

    We’ll cover:
    ✅ What AWS’s new model changes — worker threads per vCPU, async/await concurrency, and 64 parallel requests per environment.
    ✅ How multi-concurrency exposes Node.js weaknesses — shared global state, unsafe DB clients, event-loop contention, and filesystem conflicts.
    ✅ Why these problems show up everywhere — not just in Lambda, but also in Kubernetes, EC2, Fargate, and on-prem deployments.
    ✅ How Platformatic anticipated this shift — and why Watt’s architecture (multi-worker isolation, kernel load balancing, no shared state) aligns with where AWS is steering the ecosystem.
    ✅ The performance implications — how concurrency amplifies latency spikes and failure cascades, and why architecture matters more than raw CPU.

    AWS’s announcement isn’t just a runtime update — it’s a public acknowledgement that the old “one request, one event loop” model of Node.js is gone. If you’re running Node.js today, whether serverless or self-hosted, this episode explains what’s changing under the hood, why it matters for performance, and how to stay ahead of it.
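    The shared-global-state hazard mentioned above is easy to reproduce. A hypothetical handler sketch (handleRequest and its field names are illustrative, not from any real framework): the pattern is safe when one request runs at a time, but races as soon as one process serves requests concurrently:

```javascript
// Module-level mutable state, shared across ALL in-flight requests —
// harmless under "one request, one event loop", broken under concurrency.
let currentUser = null;

// Hypothetical request handler for illustration.
async function handleRequest(user) {
  currentUser = user;                           // request A writes...
  await new Promise(r => setTimeout(r, 10));    // ...yields to the event loop...
  return currentUser;                           // ...then reads whatever was written last
}

async function demo() {
  const [a, b] = await Promise.all([
    handleRequest('alice'),
    handleRequest('bob'),
  ]);
  console.log(a, b); // 'bob' 'bob' — alice's request observed bob's state
}
demo();
```

    The usual remedies are per-request context (e.g. AsyncLocalStorage) or the per-worker isolation the episode describes, where each worker owns its own state instead of sharing module scope.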

    32 min
