The Human in the Loop

Enrique Cordero

Welcome to The Human in the Loop, a weekly look at what’s going on in the world of AI. Every week, I go through the biggest stories, the weird experiments, and the stuff that might actually matter in our day-to-day lives.

  1. The Half-Life of a Good Decision

    3 DAYS AGO

    The best practice you followed six months ago might be the technical debt you're cleaning up today. In traditional IT, a best practice can survive a decade. You study it. You argue for it in architecture reviews. You defend it when someone wants to cut corners. In AI, six months is enough to flip one into an antipattern.

    A paper published this week tested multi-agent orchestration frameworks against plain in-context prompting on procedural tasks. The orchestration lost. Same accuracy. More cost. More complexity. More failure modes. Six months ago, multi-agent was the answer you gave when someone asked how to handle complex workflows. Not because it was always right, but because models could not yet follow a long, careful prompt. That was the constraint. The scaffolding was built around it. The constraint changed. The scaffolding stayed.

    This is the part of AI adoption nobody talks about enough. It is not just that things move fast. It is that yesterday's correct decision becomes today's drag. And you cannot always feel it happening. The system still runs. The agents still coordinate. Everything looks fine until someone asks why you are paying for complexity that a single prompt could replace.

    We have approval processes built for risk. We do not have processes built for expiry. What is the half-life of an AI architectural decision right now? Six months? Three?

    This week on The Human in the Loop, I go deep on the paper: what the authors tested, what held up, and what it means for teams running agent pipelines today.

    19 min
  2. Everyone knows the adoption numbers are bad

    22 MAR

    Everyone knows the adoption numbers are bad. Nobody's saying why they're actually bad. 60% of the workforce now has sanctioned AI tools. Only 11% of organizations have moved agentic pilots into production. That gap gets reported every week. What doesn't get said: most organizations are solving the wrong problem. They're asking "which model should we use?" That question is already obsolete.

    This week OpenAI released pricing tiers that looked like a product announcement. They weren't. They were a blueprint for how AI systems are designed from here. A nano model at $0.20 per million tokens isn't priced to be your assistant. It's priced to run as a subagent inside a larger system, handling classification while a more capable model handles reasoning.

    And the gap between 60% and 11% suddenly makes more sense. Organizations are still in "tool selection" mode while the underlying architecture has already shifted to orchestrated systems. It's not that people are resistant. It's that the question they're trying to answer ("which AI should my team use?") doesn't map to the problem anymore.

    The blockers are real: data governance, legacy systems, a workforce that's uncertain rather than resistant. But those are management problems. They require organizational design thinking. The companies that close that gap won't do it by finding a better model. They'll do it by figuring out which model plays which role, and building the systems around that.

    I dig into this (and the rest of what moved this week) in the new episode of The Human in the Loop. #AIAdoption #TechStrategy #TheHumanInTheLoop

    20 min
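The model-per-role pattern from episode 2 (a cheap subagent classifies each request so the expensive model only sees the hard ones) can be sketched in a few lines of Python. Everything here is illustrative: the role names, the price table, and the word-count heuristic standing in for a real nano-model classifier are assumptions for the sketch, not any vendor's API. The $0.20 per million tokens figure is the one mentioned above.

```python
from dataclasses import dataclass

# Illustrative prices in USD per million tokens; "nano" uses the
# $0.20/M figure from the episode, "reasoning" is a made-up stand-in.
PRICES = {"nano": 0.20, "reasoning": 10.00}

@dataclass
class Task:
    text: str

def classify(task: Task) -> str:
    """Cheap subagent role. A real system would call the nano model
    here; a word-count heuristic stands in for it in this sketch."""
    return "reasoning" if len(task.text.split()) > 20 else "nano"

def route(task: Task) -> tuple[str, float]:
    """Pick a model role and estimate cost for ~1,000 tokens of work."""
    role = classify(task)
    est_cost = PRICES[role] * 1_000 / 1_000_000
    return role, est_cost

role, cost = route(Task("Summarize this ticket"))
```

The design point is that routing itself is the architecture: the cheap model's job is not to answer, but to decide which model answers.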
