The Node (and more) Banter

Platformatic

The Node (and more) Banter is your weekly dose of unfiltered, unscripted conversations between Luca Maraschi and Matteo Collina. We explore the edge cases, the anti-patterns, and the things no one puts in the docs. From distributed architecture to platform pitfalls and how enterprises tackle modern development—nothing’s off-limits. It’s not just Node.js®—it’s everything around it, wrapped in sharp banter, war stories, and real-world insight. The sky’s the limit.

  1. 8 APR

    Skills, Agents & the Setup That Actually Works for AI-assisted JS Development

    Most developers let their AI assistant guess how to write Node.js. No context, no constraints, no history, just vibes and generic output. The result is code that ignores the event loop, skips proper error handling, and looks nothing like how senior engineers actually build. The question is: can you actually teach your AI to code like you? In this episode of The Node (and more) Banter, Luca Maraschi and Matteo Collina dive into the emerging AI toolchain for Node.js development, from Agent Skills to coding agents like OpenClaw and Pi. Matteo shares his own personal skills repository (mcollina/skills), a collection of battle-tested Node.js, Fastify, and TypeScript best practices that any AI coding assistant can load and apply out of the box.

    In this episode, we cover:
    ✅ What Agent Skills are and why the open standard changes how AI-assisted development works
    ✅ Why Matteo got frustrated with AI slop and built his own skills repo, and what's inside it
    ✅ How to install and use mcollina/skills with Claude Code, GitHub Copilot, OpenAI Codex, and more
    ✅ What OpenClaw and Pi are, and how they fit into the Node.js AI toolchain
    ✅ The practical question: what should a JavaScript developer actually install today to code with AI effectively?
    ✅ The difference between prompting from scratch every time vs. encoding your expertise once and reusing it everywhere

    The takeaway? AI-assisted development isn't just about which model you use, it's about the context you give it. Skills are how you stop training your AI from zero on every project and start shipping faster, with fewer corrections. If you're building with Node.js and you're not using skills yet, this episode will change how you set up your workflow.

    43 min
  2. 18 MAR

    Why Node.js Is the Critical Enabler for AI Applications

    AI applications are shifting from experiments toward real-world systems. As teams move to production, common challenges come up, such as managing low latency, handling concurrent requests, and integrating with different data sources and APIs. But beyond the models and prompts, there's an important infrastructure question: which runtime can handle AI workloads at scale and still be easy for developers to use? In this episode of The Node (and more) Banter, Luca Maraschi and Matteo Collina talk about why Node.js has become a key enabler for modern AI applications. Whether it's managing LLM APIs, streaming responses, or building real-time agent systems and scalable AI backends, Node.js is at the heart of many production AI platforms. We'll discuss why Node.js's event-driven design works so well for AI workloads, how developer productivity speeds up AI development, and what enterprise teams should think about when building reliable AI services.

    Here's what we'll cover:
    ✅ Why most AI applications are about orchestration, not simply building models
    ✅ How Node.js manages streaming, concurrency, and real-time AI responses
    ✅ The role JavaScript plays in connecting models, APIs, and user interfaces
    ✅ Why developer speed matters in the fast-changing world of AI
    ✅ What enterprise teams need to think about when running AI workloads in production

    If you're building or leading teams working on AI-powered products, this conversation will show why Node.js is becoming a key part of today's AI stack.
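    The streaming-and-concurrency point above can be sketched in a few lines. A hypothetical LLM client is simulated with an async generator that yields tokens after a delay; no real provider SDK is used, and all names here are made up for illustration.

```javascript
// Sketch: why Node's event loop suits AI orchestration.
// fakeCompletionStream stands in for a streaming model API.
async function* fakeCompletionStream(prompt) {
  const tokens = ["Node", " handles", " streams", " well"];
  for (const token of tokens) {
    // Simulate network latency without blocking the event loop.
    await new Promise((resolve) => setTimeout(resolve, 10));
    yield token;
  }
}

// Consume one response token-by-token, as you would when
// forwarding chunks to an HTTP client.
async function streamCompletion(prompt) {
  let text = "";
  for await (const token of fakeCompletionStream(prompt)) {
    text += token; // in a real server: response.write(token)
  }
  return text;
}

// Fan out several "model calls" concurrently on a single thread:
// while one call awaits the network, the others make progress.
async function main() {
  const results = await Promise.all([
    streamCompletion("prompt a"),
    streamCompletion("prompt b"),
  ]);
  console.log(results);
}

main();
// → [ 'Node handles streams well', 'Node handles streams well' ]
```

    This is the orchestration shape most AI backends have: lots of waiting on external APIs and very little CPU work, which is exactly the workload Node's event loop was designed for.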

    39 min
  3. 11 MAR

    Kubernetes Finally Gets Vercel-Style Deployment Safety

    Every deployment is a gamble. A user mid-session hits a new backend. A renamed field breaks a form. A shared TypeScript interface diverges between client and server, and suddenly your support queue is full and three teams are on a bridge call trying to figure out who broke what. This is version skew, and it's been quietly slowing down engineering teams on Kubernetes for years. In this episode of The Node (and more) Banter, Luca Maraschi and Matteo Collina introduce Skew Protection in Platformatic's Intelligent Command Center (ICC), bringing the deployment safety that frontend teams love about Vercel directly into your existing Kubernetes setup. No migrations, no new CI/CD tooling. Just safer, faster shipping.

    We'll explore:
    ✅ Why version skew is a developer velocity problem disguised as a reliability problem
    ✅ How broken deployments silently erode user trust, and what it costs enterprises at scale
    ✅ How ICC pins users to their session version using cookie-based, version-aware routing
    ✅ The Active → Draining → Expired lifecycle that makes zero-downtime deploys predictable
    ✅ Why immutable per-version Deployments change how teams think about risk
    ✅ How Prometheus traffic monitoring automates cleanup, with no manual rollback babysitting
    ✅ What this means for teams running Next.js, Remix, or monorepos on Kubernetes

    The big picture? Fear of breaking changes leads to bigger, rarer deployments, and that's where velocity goes to die. ICC's skew protection gives enterprise dev teams the confidence to ship smaller, ship faster, and stop making users pay the price for infrastructure gaps. Kubernetes just got a lot less scary.
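    The cookie-pinning and lifecycle ideas above can be sketched as a small routing function. The state names (active, draining, expired) follow the episode description; the data shapes, version IDs, and function names are illustrative assumptions, not ICC's actual API.

```javascript
// Sketch of cookie-pinned, version-aware routing for skew protection.
// Each deployed version carries a lifecycle state.
const versions = new Map([
  ["v41", { state: "expired" }],  // no longer routable
  ["v42", { state: "draining" }], // existing sessions may finish here
  ["v43", { state: "active" }],   // new sessions land here
]);

function activeVersion(versions) {
  for (const [id, meta] of versions) {
    if (meta.state === "active") return id;
  }
  throw new Error("no active version deployed");
}

// Pick a backend version for a request: honor the session cookie while
// that version is still routable (active or draining), otherwise pin
// the session to the current active version.
function routeRequest(cookieVersion, versions) {
  const pinned = cookieVersion && versions.get(cookieVersion);
  if (pinned && pinned.state !== "expired") {
    return { version: cookieVersion, setCookie: false };
  }
  return { version: activeVersion(versions), setCookie: true };
}

console.log(routeRequest("v42", versions));   // mid-session user stays on v42
console.log(routeRequest(undefined, versions)); // new session pinned to v43
console.log(routeRequest("v41", versions));   // expired: re-pinned to v43
```

    The draining state is what makes deploys predictable: a user mid-session keeps talking to the backend their frontend was built against, while all new sessions move to the new version.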

    31 min
  4. 25 FEB

    Double the Density. Half the Memory (with James Snell)

    Node.js performance discussions usually revolve around CPU and latency, while memory often receives less attention. But memory footprint directly affects cost, scalability, cold starts, and container density, and cutting memory usage in half fundamentally changes how efficiently you can run Node.js in production. In this episode of The Node (and more) Banter, Luca Maraschi and Matteo Collina are joined by James Snell, Principal System Engineer at Cloudflare, core contributor to Node.js, and member of the Node.js Technical Steering Committee. Together, they unpack how Node.js memory consumption was reduced by 50 percent and what this reveals about V8 internals, runtime behavior, and modern deployment environments. This conversation goes beyond surface-level tuning. It explores how JavaScript engine design decisions influence real-world infrastructure costs and architectural choices.

    We will explore:
    ✅ How V8 manages memory and where Node.js applications typically waste it
    ✅ What pointer compression is and why it has such a dramatic impact
    ✅ The tradeoffs between memory layout, performance, and compatibility
    ✅ How memory footprint influences Kubernetes density and serverless efficiency
    ✅ Why these optimizations matter for large-scale and edge deployments
    ✅ What this means for the future of Node.js runtime evolution

    The takeaway? Memory is not just a technical detail; it is a strategic lever. If you are running Node.js in containers, serverless platforms, edge environments, or high-density clusters, this episode explains how reducing memory usage can unlock meaningful efficiency gains across your entire stack.
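    If you want to see the numbers the episode is about, Node exposes them directly through the standard process.memoryUsage() API. The helper below is a small sketch for inspecting your own footprint; the 50 percent reduction discussed in the episode comes from engine-level work such as V8 pointer compression, not from anything you toggle in user code.

```javascript
// Sketch: observing a Node.js process's memory footprint.
function memorySnapshot(label) {
  const { rss, heapTotal, heapUsed } = process.memoryUsage();
  const mb = (n) => (n / 1024 / 1024).toFixed(1) + " MB";
  // rss is what your container's memory limit actually sees;
  // heapUsed/heapTotal are the V8-managed portion.
  return { label, rss: mb(rss), heapTotal: mb(heapTotal), heapUsed: mb(heapUsed) };
}

console.log(memorySnapshot("baseline"));

// Allocate ~100k small objects to watch heapUsed move.
const junk = Array.from({ length: 100_000 }, (_, i) => ({ i, s: "x".repeat(64) }));
console.log(memorySnapshot("after allocation"), junk.length);
```

    Watching rss rather than just heapUsed is what connects this to container density: Kubernetes schedules and kills pods based on the whole process footprint, which is exactly the number the pointer-compression work shrinks.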

    35 min
