Practical Innovation w/ Jobs-to-be-Done

Mike Boysen

Mike Boysen shares insights into the evolution of Jobs-to-be-Done, especially in the age of Generative AI. He makes the previously secret process more accessible with new approaches and automated tools that vastly reduce the time, effort, and cost of doing what large enterprises have been investing in for years. This will be especially interesting for earlier-stage, smaller enterprises, and those investing in them, who have always had to rely on a superstar or a guess (or maybe that's the same thing!). So...check it out! www.jtbd.one

  1. 1D AGO

    The $150M Phase II JTBD Gap

    Chapter 1: The Socratic Deconstruction of “Agentic Drug Discovery”

Let’s get brutally honest about the reality of AI in pharma. Generating a novel molecule in three weeks feels like magic, but the FDA doesn’t care how fast your GPUs run. If your “agentic” drug fails in a human liver five years from now, you still burn billions. We need to strip away the Silicon Valley hype, kill the assumptions, and fix the real biological bottleneck.

The “Speed to Clinic” Fallacy vs. The Biological Reality

Core Assertion: Solving the discovery-phase speed problem does not inherently increase the probability of a drug surviving human clinical trials.

Factual Evidence:
* Today, roughly 90% of all clinical-stage drugs fail, and that failure rate has barely budged despite the massive influx of computational biology.
* The primary graveyard is Phase II efficacy testing, where the theoretical mechanisms of action finally collide with chaotic, non-linear human biology.
* Accelerating the pipeline from 36 months to 3 weeks using AI agents only means you get to the FDA tollbooth faster; it doesn’t mean you have the right ticket to pass through.

Implication: CellType is currently selling “speed to clinic” as its primary value proposition. This is a fatal structural flaw. Speeding up the wrong bottleneck just creates a larger, more expensive pileup of failed molecules in Phase I and II. The market doesn’t need more molecules faster; it needs safer, more effective molecules, regardless of how long they take to compute.

The Socratic Breakdown of the Fallacy:
* The false premise: If we test 10,000x more digital variations, we mathematically guarantee a better clinical outcome.
* The biological reality: Digital variations are bounded by our current, imperfect understanding of human biology. If the underlying biological target is flawed, testing 10 million variations of a drug against that target just yields 10 million mathematically perfect, biologically useless compounds.
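The core claim here, that faster discovery barely moves the expected cost per approved drug if clinical success rates stay flat, can be checked with back-of-envelope Python. Only the ~90% clinical failure rate comes from the text; the discovery and Phase II dollar figures below are illustrative assumptions, not CellType data:

```python
# Back-of-envelope: expected total spend per one approved drug.
# P_SUCCESS reflects the ~90% clinical-stage failure rate cited above;
# all dollar figures are illustrative assumptions, not CellType data.

def expected_cost_per_approval(discovery_cost, trial_cost, p_success):
    """Expected discovery + trial spend needed to yield one approval."""
    return (discovery_cost + trial_cost) / p_success

P_SUCCESS = 0.10                 # ~90% of clinical-stage drugs fail
TRIAL_COST = 150_000_000         # assumed Phase II burn per candidate

legacy = expected_cost_per_approval(10_000_000, TRIAL_COST, P_SUCCESS)
agentic = expected_cost_per_approval(100_000, TRIAL_COST, P_SUCCESS)

savings = 1 - agentic / legacy   # trials dominate, so savings stay small
print(f"legacy:  ${legacy:,.0f} per approval")
print(f"agentic: ${agentic:,.0f} per approval")
print(f"savings from near-free discovery: {savings:.1%}")
```

Even with discovery made nearly free, the expected spend per approval drops only a few percent, because the denominator (clinical success probability) never moved.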
Deconstructing the $2.6B R&D Out-of-Pocket Cost

Core Assertion: The true capital burn in biotechnology happens inside human testing infrastructure, not inside early-stage digital molecule generation.

Factual Evidence:
* The widely cited $2.23 to $2.6 billion average cost to bring a new drug to market is heavily back-loaded.
* Early discovery and preclinical testing usually account for less than 20% of the total capitalized cost.
* The crushing financial weight comes from Phase II and Phase III human trials, which can cost anywhere from $50 million to $300 million+ per trial due to patient recruitment, clinical monitoring, and regulatory compliance.

Implication: If CellType only optimizes the cheapest, earliest phase of the drug lifecycle, they are building a feature, not a generational platform. An AI tool that saves Big Pharma $10 million in discovery but still exposes them to a $150 million Phase II failure is a hard sell in a tight 2026 venture market.

Where the money actually burns:
* Patient Recruitment: Identifying and enrolling specific genetic phenotypes takes years and costs thousands of dollars per patient.
* Clinical Site Management: Paying doctors and hospitals to administer and monitor the drug physically.
* Adverse Event Pivot Costs: When a drug shows unexpected toxicity, the trial stops, but the fixed overhead costs continue to burn millions per month.

The “Blind Spot” of In Silico Biological Simulation

Core Assertion: Silicon simulations are currently incapable of perfectly mapping the secondary and tertiary cascading effects of a drug inside a wet, chaotic human system.

Factual Evidence:
* We have real-world 2025/2026 data proving the in silico blind spot. Major AI-first pioneers like Recursion Pharmaceuticals and Insilico Medicine have both faced high-profile clinical hurdles.
* Their algorithms successfully generated novel targets and structures, but when introduced into human trials, the drugs still faced the exact same efficacy and toxicity roadblocks as human-designed drugs.
* The “agentic” workflow often optimizes for binding affinity (how tightly the drug attaches to a target) but fails to account for downstream organ toxicity or solubility.

Implication: “Agentic” workflows are currently generating highly sophisticated false positives. They look mathematically flawless on an AWS GPU cluster but fail unpredictably in a human liver. CellType has to stop treating biology like a deterministic software environment and start treating it like a chaotic physical system.

The limits of current simulation:
* Off-Target Effects: The AI agent predicts the drug will hit Target A, but in the body, it also accidentally binds to Target B, causing severe side effects.
* Metabolic Breakdown: The human liver breaks down the AI-generated molecule before it ever reaches the intended tumor.
* The “Black Box” of Disease: For complex diseases like Alzheimer’s, we don’t even fully understand the mechanism of action. You cannot accurately simulate what you do not fundamentally understand.

Defining What We Know vs. What We Believe About CellType

Core Assertion: To build a survivable strategic architecture, we have to aggressively separate CellType’s proven computational capabilities from its unproven biological assumptions.

Factual Evidence:
* What we KNOW (The Physics): We know that large language models and agentic workflows can write perfect Python. We know that AlphaFold and similar predictive models can accurately map protein structures. We know that cloud compute costs roughly $0.07 to $49.75 per hour depending on the GPU cluster. We know CellType can generate a novel chemical structure in weeks instead of years.
* What we BELIEVE (The Trap): We assume that this novel chemical structure will actually bind safely in vivo. We assume the molecule can be manufactured at scale without degrading. We assume that computational speed translates linearly to clinical trial success.

Implication: By isolating what we know, we realize that CellType is currently a hyper-efficient computational chemistry engine, not a fully integrated drug company. To survive, they need to either completely own the downstream physical validation (Disruptive Inversion) or pivot their engine to markets that don’t require 10-year human trials (Lateral Persona Expansion).

The Socratic Scalpel applied to CellType’s Pitch:
* Pitch: “We are the Agentic Drug Company.”
* Scalpel: No, you are an automated computational chemistry layer.
* Pitch: “We compress the 3-year timeline to 3 weeks.”
* Scalpel: You compressed the cheapest 10% of the timeline. The remaining 90% is still bottlenecked by the FDA and human biology.
* Verdict: The product narrative has to shift from “generating molecules faster” to “killing toxic molecules earlier.”

Chapter 2: The Efficiency Delta & The 2026 ID10T Index

Let’s run the actual math on “agentic” drug discovery. In 2026, the cost to spin up an AWS cluster to generate novel molecules is mathematically zero compared to legacy human labs. But this massive computational advantage is an illusion if the resulting molecule fails. Here is the exact financial physics of the CellType model.

The Numerator (The $78/Hour Benchmark)

Core Assertion: The traditional cost of human-led molecule discovery is artificially inflated by high-priced geographic labor and physical lab overhead.

Factual Evidence:
* The fully loaded cost of a San Francisco-based PhD bench scientist currently sits at a $78.00/hour benchmark. (Note: This is a blended assumption based on standard L2/L3 scientific labor rates, combining base compensation with specialized lab insurance, chemical disposal, and facility amortization).
* A traditional drug discovery team requires 5 to 10 of these highly specialized human executors working continuously for 2 to 3 years just to identify a single viable preclinical candidate.
* The total preclinical research phase alone costs between $300 million and $600 million before a drug ever enters a human trial.

Implication: When CellType pitches Big Pharma, they are aggressively attacking this specific $78/hour human numerator. By replacing years of manual pipetting and educated guesswork with agentic workflows, they can completely obliterate the early-stage CapEx and OpEx burn rate.

The Human Cost Breakdown:
* Manual Target Identification: Humans reading disparate PDFs and genomic data to hypothesize a target.
* Wet Lab Synthesis: The physical, error-prone process of manually combining chemicals.
* Geographic Premium: Paying premium Bay Area or Cambridge salaries for labor that produces a 90% failure rate.

The Denominator (The $39.80/Hour Compute Floor)

Core Assertion: The absolute physics floor of generating a novel molecule is now governed by the spot price of an NVIDIA H200 GPU cluster, not human labor limits.

Factual Evidence:
* In early 2026, AWS officially raised the price of its p5e.48xlarge instances (featuring eight NVIDIA H200 GPUs) to $39.80 per hour globally.
* While $39.80 is less than the $78/hour human benchmark, the true delta lies in output speed. A human might take 100 hours ($7,800) to synthesize and test one variation.
* In that same single hour, a $39.80 compute instance can simulate tens of thousands of molecular variations against a digital target constraint.

Implication: CellType has already reached the absolute limit of the physics floor. They have successfully decoupled molecule generation from human biology constraints, reducing the cost of a digital hit to fractions of a penny. The efficiency delta in Step 1 is solved, but the market value of that solution is collapsing as compute becomes commoditized.
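The per-variation delta described above can be sketched in a few lines. The $78/hr rate, the 100-hour human cycle, and the $39.80/hr instance price come from this section; the 10,000 simulations-per-GPU-hour throughput is an assumed stand-in for "tens of thousands":

```python
# Per-variation cost: SF bench scientist vs. one H200 GPU instance.
# HUMAN_RATE, HUMAN_HOURS_PER_VARIATION, and GPU_RATE come from the text;
# SIMS_PER_GPU_HOUR is an assumption ("tens of thousands" per hour).

HUMAN_RATE = 78.00               # $/hr, fully loaded bench scientist
HUMAN_HOURS_PER_VARIATION = 100
GPU_RATE = 39.80                 # $/hr, AWS p5e.48xlarge (8x NVIDIA H200)
SIMS_PER_GPU_HOUR = 10_000       # assumed throughput

human_cost = HUMAN_RATE * HUMAN_HOURS_PER_VARIATION   # $7,800 per variation
gpu_cost = GPU_RATE / SIMS_PER_GPU_HOUR               # ~$0.004 per variation

print(f"human: ${human_cost:,.2f} per variation")
print(f"gpu:   ${gpu_cost:.5f} per variation")
print(f"delta: {human_cost / gpu_cost:,.0f}x per variation")
```

The hourly rates are within 2x of each other; the roughly million-fold delta comes entirely from throughput per hour, which is why the moat erodes as soon as anyone can rent the same instance.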
The Compute Reality:
* Infinite Scale: You can spin up 1,000 AWS instances simultaneously; you cannot clone 1,000 PhDs.
* The Commoditization Trap: Because anyone can rent a p5e.48xlarge for $39.80, CellType’s core moat is vulnerable if their only value is raw generation speed.
* The Dig

    32 min
  2. 3D AGO

    Stop Guessing: The $0.07 Framework to Predict Customer Needs Before They Happen

    Chapter 1: Why Past Pain Points Guarantee Future Failure

You’ve been lied to about how innovation actually works. Corporate strategists love to dig through old survey data, hunting for customer complaints about existing products as if those gripes hold the secret to the future. They don’t. We’re going to stop obsessing over what went wrong yesterday and start mathematically isolating exactly what your customer is trying to accomplish tomorrow.

The Solution-Bias Trap

Core Assertion: Basing your strategic roadmap on how users interact with current solutions guarantees you will never invent the next paradigm; you will only ever build a slightly less broken version of what already exists.

Factual Evidence: Look at the 2026 market friction data driving enterprise earnings calls. We are currently seeing a lethal 6-month lag time between identifying a consumer trend qualitatively and getting capital approved for a solution. Worse, companies are paying MBB/Big 4 firms $150,000 to $350,000 for 12-week “ethnographic sprints.” What do they get for that money? A massive slide deck detailing exactly how much users hate the current market offerings. These sprints study the solution, not the underlying objective.

Implication: When you study the solution, you optimize the wrong thing. You end up subsidizing your competitor’s R&D by fixing their UI bugs instead of leapfrogging their entire architecture.

Solution bias is the most insidious virus in product development. It infects your roadmap because it feels intuitively correct to ask users what they hate about their current tools. But here is the brutal reality of the Solution-Bias Trap:
* It creates feature bloat: You add patches and band-aids to legacy architecture instead of questioning if the architecture should exist at all.
* It anchors your pricing: If you only build a “better version” of an existing tool, you are locked into the existing price ceiling of that category.
* It blinds you to Pathway C (The Inversion Leap): You cannot execute a CapEx, Labor, or Network inversion if your entire worldview is restricted to optimizing the current system’s constraints.

To break free, you need to ruthlessly separate the activity the user is performing from the technology they are currently using to perform it. We are not here to build a faster caterpillar; we are here to engineer a butterfly.

The Illusion of the “Pain Point”

Core Assertion: A “pain point” is nothing more than friction caused by a specific, flawed solution—it is not a fundamental human need, and solving it rarely leads to disruptive innovation.

Factual Evidence: Consider a 2026 enterprise software user complaining that a legacy compliance tool “requires too many manual data entry clicks.” An L3 Senior Strategist billing at $300/hr will take that feedback, log it as a critical “pain point,” and recommend a multi-million dollar UX redesign to reduce the click count. But the human executor doesn’t fundamentally care about clicking. Their actual, solution-agnostic objective is to minimize the time it takes to verify a client’s regulatory status.

Implication: Solving the pain point (reducing clicks) yields a slightly better, highly expensive compliance tool (Sustaining Innovation). Inverting the problem to solve the underlying objective (automating the verification via API) destroys the need for the UI entirely.

We have been conditioned by legacy consulting frameworks to worship at the altar of the “pain point.” But pain points are deeply deceptive for three reasons:
* They are temporary: A pain point only exists as long as the current technology exists. If the technology changes, the pain point vanishes, taking your entire value proposition with it.
* They are highly subjective: What is a severe pain point to a novice user is often an invisible, accepted reality to a power user. This leads to loud, minor annoyances drowning out massive, systemic inefficiencies.
* They breed incrementalism: If your entire product strategy is just a list of resolved complaints, your competitors can easily clone your feature set. You have no structural moat.

Instead of chasing fleeting pain points, we need to map the permanent, underlying Job. The Job doesn’t change; only the solutions change. When you stop looking at where the user is hurting and start looking at what the user is trying to achieve, the path to a zero-friction, physics-limit solution becomes blindingly obvious.

The Henry Ford Fallacy Re-examined

Core Assertion: Customers are brilliant at evaluating outcomes, but they are terrible engineers. Asking them what they want is a guaranteed path to failure; forcing them to define how they measure success is the key to predictable innovation.

Factual Evidence: We know from the limits of market validation that human synthesis introduces massive heuristic bias. When you ask a user for a solution, they will invariably request an incremental upgrade to what they already know (e.g., “I want a faster horse”). But when we deploy frontier API models to synthesize thousands of interactions at $0.07/kWh, we can extract the underlying metrics that actually drive adoption—metrics that have nothing to do with the user’s stated desires.

Implication: We have to stop relying on users to play inventor. Your customers do not know how to combine CapEx inversions, LLM inference, and new business models. It is your job to engineer the solution; it is their job to define the metrics of success.

The Henry Ford quote about “faster horses” is usually cited by arrogant product managers to justify ignoring customer research entirely. That is the wrong takeaway. The real lesson is that we have been asking the wrong questions. To build a predictable innovation engine, you need to shift your data collection from solutions to metrics. We do this by capturing Customer Success Statements (CSS).
* Wrong Question: “What features do you want in the next update?” (Yields solution bias).
* Right Question: “When you are executing this specific step, what makes the process unacceptably slow, unpredictable, or expensive?” (Yields measurable success criteria).

If you focus on the metrics of the faster horse (minimize the time to transport goods, maximize the reliability of transport in bad weather), you naturally arrive at the combustion engine. The user gives you the mathematical boundaries of success; you use first-principles engineering to obliterate those boundaries.

Defining What We Know vs. Believe

Core Assertion: To architect a highly profitable long-term vision, you have to brutally separate what you factually know from what your corporate culture believes.

Factual Evidence: Big 4 innovation sprints often charge up to $350,000 over 12 weeks simply to package internal corporate beliefs as external market truths. They use $800/hr L4 Partners to validate the existing internal biases of the executive team rather than discovering the raw execution goals of the market. This creates the horrific waste gap that feeds the ID10T Index.

Implication: Applying the Socratic Scalpel (Node 1) strips away this internal solution bias. If we do not zero-base our assumptions and anchor our strategy exclusively on validated, external data, we will confidently build a beautiful product for a user who does not exist.

Before you can build the Unified Validation Engine or map out your Customer Success Statements, you have to clean house. The Socratic Scalpel is an intellectual forcing function designed to destroy assumptions before they cost you money. When analyzing any market opportunity, you need to subject every single claim to this rigorous filter:
* Isolate the Claim: Take the core belief driving your product roadmap (e.g., “Users want more AI in their workflow”).
* Demand the Evidence: Ask exactly how we know this. Is it based on a statistically significant Top-Box Gap, or is it based on the CEO reading a trend report on a flight?
* Separate Fact from Heuristic: A fact is a measurable behavior (e.g., “Users abandon this workflow 42% of the time at step 3”). A heuristic is a guess masquerading as a fact (e.g., “Users abandon step 3 because it’s too complicated”).
* Define the Knowledge Gap: Clearly state what you actually need to find out to turn the heuristic into a fact.

By running your entire strategic premise through the Socratic Scalpel, you instantly vaporize the expensive, heuristic guesswork that props up the legacy consulting model. You stop paying $300/hr for opinions, and you start paying $0.07/kWh for mathematical certainty.

Chapter 2: The ID10T Index: Calculating the True Cost of Legacy Research

We are going to expose the most expensive lie in corporate innovation: the idea that understanding your market requires paying a consulting firm a quarter-of-a-million dollars to run focus groups. We’re ripping apart the actual math behind this legacy process. You’re about to see exactly why relying on human synthesis isn’t just slow, it’s a structural financial failure.

The Numerator: Mapping the Bloated Value Chain

Core Assertion: The traditional ethnographic research model is a bloated, human-heavy value chain designed to maximize billable hours, not to discover mathematical market truths.

Factual Evidence: Current 2026 market data proves that standard MBB/Big 4 innovation sprints are billed at flat rates ranging from $150,000 to $350,000 over agonizing 8-to-12-week timelines. This entire cost structure is propped up by a legacy labor pyramid that forces you to pay top-tier rates for mid-tier manual synthesis.

Implication: You aren’t paying for superior data accuracy; you are subsidizing the massive governance overhead and administrative friction of a legacy labor model. This guarantees a horrifyingly low ROI on your research spend.
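As a rough sketch of the cost gap described above: the $150K-$350K sprint range and 12-week timeline come from the text, while the interview volume and token pricing below are assumptions for illustration only:

```python
# Big 4 ethnographic sprint vs. LLM synthesis of an interview corpus.
# SPRINT_COST and SPRINT_WEEKS come from the text; INTERVIEWS,
# TOKENS_PER_INTERVIEW, and PRICE_PER_MTOK are illustrative assumptions.

SPRINT_COST = 250_000            # midpoint of the $150K-$350K range
SPRINT_WEEKS = 12

INTERVIEWS = 2_000               # assumed corpus size
TOKENS_PER_INTERVIEW = 8_000     # assumed transcript + synthesis tokens
PRICE_PER_MTOK = 5.00            # assumed blended $ per 1M tokens

llm_cost = INTERVIEWS * TOKENS_PER_INTERVIEW * PRICE_PER_MTOK / 1_000_000

print(f"sprint: ${SPRINT_COST:,} over {SPRINT_WEEKS} weeks")
print(f"llm:    ${llm_cost:,.2f} in hours, not weeks")
print(f"delta:  {SPRINT_COST / llm_cost:,.0f}x")
```

Even with generous token budgets, synthesizing the entire corpus costs tens of dollars, which is the structural gap the ID10T Index is built to expose.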
To understand why traditional market research is financially broken, you have to map the exact human executors embedded in the Numerator (the current commer

    33 min
  3. FEB 1

    Your Codebase Has a 99% "Syntax Tax" Rate

    The software industry is stuck in a costly trap where we believe humans must manually type code to create applications. This approach forces us to pay expensive professional rates for typing tasks that AI can perform for less than a penny. To solve this, we must adopt "Vibe Coding," a new method where humans describe their ideas in plain English and AI handles all the technical construction.

Part I: The Syntax Fetish

The Practitioner’s Fallacy

The Stuck Belief: “Coding Equals Typing”

The modern software industry suffers from a collective hallucination: the belief that the manual entry of syntactic characters into a text file is the definition of engineering. This is the Practitioner’s Fallacy—confusing the tool (typing code) with the outcome (logic structure). For the last forty years, we’ve measured developer productivity by “lines of code” or “commit frequency.” This is equivalent to measuring the value of a novel by the number of keystrokes used to write it. In the post-LLM era, this metric is not just obsolete; it’s a liability. The ability to manually manage memory pointers in C++ or memorize the boilerplate for a React useEffect hook is no longer a competitive advantage. It is a Syntax Tax.

The “Syntax Tax” Defined

The Syntax Tax is the measurable gap between Architectural Intent and Executable Reality. It represents the time, energy, and capital consumed by the translation layer.

The Economic Reality:
* The Artifact: A standard SaaS feature (e.g., “Add a user to a database”).
* The Intent Time: 2 minutes (defining the logic).
* The Syntax Time: 4 hours (writing the boilerplate, fighting the linter, debugging the import errors, configuring the environment).
* The Tax Rate: ~99% of the cycle time is waste.
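The ~99% Tax Rate above is just the ratio of syntax time to total cycle time; a minimal sketch using the 2-minute and 4-hour figures from the list:

```python
# The "Syntax Tax" as a rate: share of the cycle spent on translation
# rather than logic. Figures come from the Economic Reality list above.

INTENT_MIN = 2          # defining the logic
SYNTAX_MIN = 4 * 60     # boilerplate, linter fights, import debugging

tax_rate = SYNTAX_MIN / (INTENT_MIN + SYNTAX_MIN)
print(f"Syntax Tax: {tax_rate:.1%} of cycle time is translation, not logic")
```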
According to the ID10T Index (Inefficiency Delta in Operational Transformation)—a metric designed to quantify the gap between current commercial pricing and the theoretical minimum cost of production—traditional coding violates the “Bits Floor.” We’re paying L3 Professional Rates ($150/hr) for a task—syntax generation—that has a theoretical minimum cost of $0.01 per transaction via inference.

Socratic Deconstruction: Dismantling the Belief Chain

To move to Vibe Coding, we must first surgically remove the belief that manual coding is necessary. We apply the Socratic Scalpel, a method of inquiry used to excise “stuck beliefs” by challenging their foundational assumptions.

(A) Clarification
* Inquiry: “What exactly do we mean when we say ‘I coded this app’?”
* Deconstruction: We usually mean “I translated a logical requirements document into a specific, rigid grammar that a compiler understands.” We’re claiming credit for the translation, not the logic.

(B) Challenging Assumptions
* Inquiry: “Why do we assume that a human must be the one to perform this translation?”
* Deconstruction: This assumes that human precision in syntax is superior to machine precision. However, 70% of software vulnerabilities are memory safety errors—literal syntax mistakes made by humans. The assumption that humans are “safer” syntax generators is statistically false.

(C) Evidence & Reasons
* Inquiry: “What evidence do we have that natural language is insufficient for software definition?”
* Deconstruction: Historically, natural language was too ambiguous for compilers. But with the advent of Context-Aware LLMs, the machine can now infer intent from ambiguous language with higher fidelity than a junior engineer can infer intent from a Jira ticket.

(D) Alternative Viewpoints
* Inquiry: “What if the code itself is just an intermediate artifact, like a compiled binary?”
* Deconstruction: We don’t hand-write Assembly anymore; we let C compilers do it. We don’t hand-write C anymore; we let Python interpreters handle the memory. Vibe Coding is simply the next logical step: we should not hand-write Python anymore; we should let the AI handle the syntax.

(E) Implications & Consequences
* Inquiry: “If we stop writing syntax, what happens to the profession of software engineering?”
* Deconstruction: The profession splits. The “Typists” (who rely on syntax for job security) become obsolete. The “Architects” (who understand systems, state, and data flow) become 100x more productive. The barrier to entry drops, but the ceiling for complexity rises.

We are not “dumbing down” programming; we are elevating the level of abstraction. Just as the transition from punch cards to text files allowed for the Operating System, the transition from text files to Natural Language Intent will allow for Software as Malleable Matter. We must stop paying the Syntax Tax. The goal of the Vibe Coder is not to write code. The goal is to architect reality.

Part II: The ID-TEN-T Audit (Statistical Efficiency Gap)

Calculating the Vibe Delta

The Numerator: The Cost of Manual Syntax

To quantify the inefficiency of traditional development, we analyze the cost structure of an L3 Senior Engineer.
* Role: L3 Senior Full-Stack Engineer.
* Market Rate: ~$150/hour (fully burdened cost).
* Constraint: Human typing speed and cognitive load (syntax verification).
* Output: Approximately 50 lines of fully debugged, functional code per hour.
* Cost per Functional Unit: $3.00 per line.

This cost is artificially inflated because the engineer is not just thinking; they are physically typing, linting, and correcting syntax errors—tasks that require zero creativity but high precision.

The Denominator: The Cost of Inference

Now we apply the Robust First Principles Analyst (RFPA) protocol. This framework rejects “Reasoning by Analogy” (benchmarking against competitors) and strictly enforces “Reasoning from First Principles” to identify the physics-limit cost of a transaction.
* Role: Agentic AI (e.g., Claude 3.5 Sonnet or GPT-4o).
* Rate: Marginal cost of compute tokens.
* Constraint: Context window size and inference speed.
* Output: Instant generation of 50+ lines of syntax-perfect code.
* Cost per Functional Unit: ~$0.0002 per line.

The Index Score

The ID10T Index is calculated as the gap between the Current Commercial Price and the Theoretical Minimum Cost.

ID10T Index = $3.00 / $0.0002 = 15,000x

Conclusion: The traditional software development process operates at an ID10T Index of 15,000. We are paying a premium of fifteen thousand times the necessary cost for the privilege of typing the code ourselves. This is arguably the most inefficient high-value process in the modern economy.

The “Bits Floor” Violation

Why Code Should Be Cheap

The Bits Floor is a foundational axiom of information economics. It asserts that any process consisting purely of information manipulation (no atoms involved) should inherently trend toward the marginal cost of compute—approximately $0.01 per transaction. Traditional coding treats software as if it were matter—scarce, hard to move, and expensive to assemble. We treat code like it is made of aluminum or steel, requiring expensive “machining” (typing) to shape it. Vibe Coding restores the physics of software. It treats code as bits. By removing the human from the generation loop and keeping them in the verification loop, we align the cost of production with the marginal cost of compute.

Part III: The Path Choice (Sustaining vs. Disruptive)

The “Copilot” Trap (Sustaining Innovation)

Faster Horses

The industry’s first reaction to LLMs was GitHub Copilot. This represents Path A: Sustaining Innovation.
* The Mechanism: The AI acts as a sophisticated autocomplete.
It predicts the next few lines of code based on the cursor position.
* The Flaw: It optimizes the typing process but maintains the dependency on manual files, git commits, and local environments.
* The Consequence: The developer is still the “Typist in Chief.” They are still liable for every character in the text file. The Syntax Tax is subsidized, but not repealed.

This approach is analogous to putting a motor on a bicycle. It’s faster, but it’s still fundamentally a bicycle.

The “Vibe” Shift (Disruptive Innovation)

The New Operating Model

Path B is Vibe Coding (exemplified by tools like Replit Agent, Cursor Composer, Google Antigravity, and now the OpenClaw abstraction). This is Disruptive Innovation (but may be short-lived because things are changing rapidly).
* The Mechanism: The user defines the state and outcome in natural language. The AI manages the files, the file structure, the imports, and the execution environment.
* The Shift:
  * Old Job: Managing files and syntax.
  * New Job: Managing context and capability.
* The Strategic Implication: The barrier to entry drops from “Years of Study” to “Clarity of Thought.” The developer no longer needs to know how to write a React component; they only need to know what a React component should do and why it is necessary, if that.

Part IV: The Reconstruction (The Natural Language Stack)

The New Stack: Prompt -> Context -> AST

Layer 1: The Prompt (The Intent Layer)

In the Vibe Coding stack, English is the new Source Code. Precision in language replaces precision in syntax. The “Prompt” is no longer a query; it is a specification. The quality of the software is directly downstream of the quality of the prompt.
* Bad Input: “Make it pop.”
* Good Input: “Implement a framer-motion spring animation on the hover state with a stiffness of 300 and damping of 20.”

Much of this will be templatized.

Layer 2: The Context Window (The State Layer)

The Context Window replaces the file system as the primary mental model.
* Traditional IDE: The developer must remember where functions are defined across 50 different files.
* Vibe IDE: The AI holds the entire project structure in “working memory.” The developer manipulates the Context, ensuring the AI has the relevant information to execute the intent.

Layer 3: The Execution (The Binary Layer)

The actual code files (JavaScript, Python, Rust) are demoted to the

    7 min
  4. JAN 29

    Why "Clean Data" Kills Agentic Speed

    Companies are currently failing by forcing fast AI agents to use slow, centralized data warehouses, which creates a massive bottleneck. This traditional approach costs roughly $12,000 per data feed, making it 1.2 million times less efficient than letting an agent query data directly for just one penny. To fix this, businesses must switch to a "Newsroom" model where agents access raw data at the source instead of moving it. This method allows agents to clean data instantly when needed, drastically reducing costs and delays.

PART I: THE DECONSTRUCTION (THE LIBRARY MODEL)

The Collision of Forces

We’re witnessing a violent collision between two opposing forces in the enterprise. On one side, we have the “Library” model of data management—static, centralized, and governed by human committees. On the other, we have Agentic AI—dynamic, distributed, and demanding real-time context. The industry’s trying to force the latter into the former, and it’s failing. This failure isn’t technical; it’s philosophical. It stems from a single, deep-seated misconception that we need to excise before we can build anything new.

The Stuck Belief

“Data Governance is a protective gatekeeping function that requires centralization, rigid schemas, and human oversight to ensure accuracy before consumption.”

The Socratic Inquiry

To dismantle this, we need to apply the Scalpel. We can’t just accept “Accuracy” as a vague good; we have to interrogate it.
* Clarification: What exactly do we mean by “accuracy” in a context where data changes faster than the cleaning cycle? If an agent needs a stock price now to execute a trade, is a “clean” value from yesterday’s batch process accurate? Or is it just “precisely wrong”?
* Challenging Assumptions: Why do we assume data needs to be moved to a central warehouse to be useful? Is this a requirement of physics (like gravity), or is it a legacy artifact of 1990s compute limitations?
* Evidence & Reasons: What evidence supports the belief that human-curated schemas reduce hallucination better than semantic injection at inference time? Have we tested this, or is it just how we’ve always done it? * Implications: If we stick to “The Library,” what breaks? The answer is simple: The Agent. It will either wait for the data (Latency Failure) or it will bypass IT entirely to get what it needs (Security Failure). PART II: THE EFFICIENCY DELTA The Economic Absurdity We can’t argue with sentiment; we have to argue with math. We need to calculate the ID10T Index (Inefficiency Delta in Operational Transformation) for the simple act of integrating a new data feed for an AI agent. The Numerator (Current State: “The Library”) In the traditional model, integrating a new source requires building a “Data Pipeline.” This is a manual construction project. * Process: A Data Engineer writes custom Python/SQL extractors. A Data Steward defines the schema and access policies. A QA team validates the data. * Labor: We’re paying L3 Professionals (Engineers) at $300/hr and L2 Skilled Trades (Stewards) at $75/hr. * Time: Industry average is roughly 40 operational hours to build, test, and deploy a robust feed. * The Cost: 40 hours × ~$300/hr (blended rate) = $12,000 per feed. The Denominator (Physics Limit: “The Newsroom”) Now, look at the physics limit. What’s the theoretical minimum cost for an agent to get that same data? * Process: The agent authenticates via API. It reads the schema documentation (or infers it from the JSON response). It performs Just-in-Time (JIT) reconciliation for the specific query. * Labor: 0 Human Hours. * Compute: 100 tokens of input context + 1 API call. * Physics Floor: The “Bits Floor” (Agentic Limit). * The Cost: $0.01 per interaction. The Efficiency Score $12,000 divided by $0.01 equals 1,200,000. The current approach is 1.2 million times less efficient than the theoretical minimum. 
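The Efficiency Score arithmetic above can be sketched in a few lines of Python. All dollar figures are the article's own estimates, not measured values:

```python
# Sketch of the ID10T Index (Inefficiency Delta) arithmetic from this
# section. Figures are the article's stated estimates.

BLENDED_RATE = 300.0        # $/hr blended engineering rate ("The Library")
HOURS_PER_FEED = 40         # industry-average build/test/deploy time
pipeline_cost = BLENDED_RATE * HOURS_PER_FEED   # cost to hand-build one feed

AGENT_QUERY_COST = 0.01     # $ per interaction, the "Bits Floor" ("Newsroom")

id10t_index = pipeline_cost / AGENT_QUERY_COST
print(f"Pipeline cost: ${pipeline_cost:,.0f} per feed")        # $12,000
print(f"Agent query:   ${AGENT_QUERY_COST} per interaction")
print(f"ID10T index:   {id10t_index:,.0f}x")                   # 1,200,000x
```

The point is not the exact penny cost of an API call; it is the six orders of magnitude separating the pipeline from the physics floor.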
We aren’t just inefficient; we’re practicing digital archaeology. We’re spending professional-grade capital to build permanent infrastructure for transient data needs. PART III: THE PATH CHOICE (OPTIMIZATION VS. DISRUPTION) We’re staring at a 1.2 million-fold gap. We have two ways to close it. Path A: Optimization (The “Better Library”) This is the seductive trap. * The Strategy: Use Generative AI to “code faster.” We build “Co-pilots for Data Engineers” that automate the writing of ETL pipelines and SQL scripts. * The Result: We reduce the time to build a pipeline from 40 hours to 4 hours. We lower the cost from $12,000 to $1,200. * The Fatal Flaw: This violates Command 5 of the First Principles Protocol: “Do not automate an inefficient process.” By choosing Path A, we’re just digging the grave faster. We’re still moving heavy data to the logic (violating Data Gravity). We’re still maintaining rigid schemas that break when a column changes. We’ve optimized a process that shouldn’t exist. * Verdict: REJECT. Path B: Disruption (The “Newsroom”) This is the necessary pivot. * The Strategy: Eliminate the pipeline entirely. Move the logic (The Agent) to the data (The Source). * The Execution: We build a “Semantic Control Plane” that allows agents to query raw APIs directly, utilizing Just-in-Time governance. * The Result: We hit the physics limit of $0.01 per interaction. * Verdict: EXECUTE. PART IV: THE RECONSTRUCTION (THE SEMANTIC CONTROL PLANE) To execute Path B, we need to rebuild our architecture based on physics, not tradition. We rely on these Foundational Axioms: The Physics of Data Gravity Logic Travels, Data Stays. Data is heavy (Terabytes). Logic is light (Kilobytes). It’s always cheaper and faster to send the query to the data than to copy the data to a warehouse. We’re moving to a Zero-Copy architecture where the agent visits the data where it lives. Latency is Accuracy Data that’s “clean” but 24 hours old is functionally incorrect for an autonomous agent. 
Real-time access to “messy” data is superior to delayed access to “perfect” data, provided the agent has the intelligence to filter the noise. Governance is Metadata We stop writing governance policies in PDF documents. Governance rules need to be machine-readable instructions—a “Semantic Constitution”—that the agent consumes at runtime. This isn’t a gate; it’s a lens. PART V: THE EXECUTION (THE NEWSROOM PARADIGM) We’re shifting from “The Library” (Hoarding) to “The Newsroom” (Reporting). Here’s how the new stack functions across the three critical pillars of data. Structured Data: Semantic Binding In the Library, if a Salesforce admin changes cust_ID to customer_identifier, the SQL pipeline breaks. Humans rush to fix it. This is the Fragility Loop. In the Newsroom, we use Semantic Binding. We don’t tell the agent “Look at Column A.” We tell the agent “Look for the Unique Customer Identifier.” The agent scans the schema at runtime, infers that customer_identifier is the target, and writes its own query. We’ve replaced Explicit Reference (brittle) with Semantic Inference (resilient). The ID10T cost of maintenance drops to zero. Unstructured Data: From “Dark Matter” to Fuel 80% of enterprise data is unstructured (PDFs, Emails, Slack). In the Library, this is “Dark Matter”—invisible to SQL. Extracting value requires an L3 Professional ($300/hr) to read the documents. In the Newsroom, this is our primary fuel. * Manual Review: Reading a 50-page contract takes 1 hour. Cost: $300. * Agentic Review: An LLM with a 128k context window ingests the PDF in seconds. Cost: $0.05. This 6,000x cost reduction flips the economics. We don’t need to structure the unstructured; we just need to give the agent RAG (Retrieval-Augmented Generation) access to “interview” the documents. Integration: Just-in-Time (JIT) Reconciliation The biggest objection to Zero-Copy is: “If we don’t centralize it, we can’t clean it.” This is false. We don’t need all the data to be clean all the time. 
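The Semantic Binding pattern described above (resolve a meaning, not a column name, at runtime) can be sketched in a few lines of Python. A real agent would use an LLM for the inference step; the synonym table and every name below are hypothetical stand-ins for that inference:

```python
# Minimal sketch of Semantic Binding. The agent is handed a *meaning*
# ("the unique customer identifier") and resolves it against whatever
# schema exists at query time. SYNONYMS and all column names are
# illustrative, not a real API.

SYNONYMS = {
    "unique customer identifier": {"cust_id", "customer_identifier",
                                   "customer_id", "cust_no"},
}

def bind(concept: str, schema: list[str]) -> str:
    """Resolve a semantic concept to a concrete column in a live schema."""
    wanted = SYNONYMS[concept]
    for column in schema:
        if column.lower() in wanted:
            return column
    raise LookupError(f"no column matches {concept!r}")

# The admin renames the column; the binding survives with no code change.
assert bind("unique customer identifier",
            ["cust_ID", "order_total"]) == "cust_ID"
assert bind("unique customer identifier",
            ["customer_identifier", "order_total"]) == "customer_identifier"
```

Explicit Reference breaks on the rename; Semantic Inference pays a tiny lookup (or inference) cost per query instead.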
We need specific data points to be clean right now. Instead of a nightly batch job that scrubs 10 million records (Just-in-Case), the Agent performs JIT Reconciliation. If it pulls an address from CRM and an address from Billing, and they conflict, the agent resolves that specific conflict in real-time using the Semantic Constitution. We pay for the compute to clean only what we consume. PART VI: CONCLUSION & THE FUTURE (SELF-HEALING GOVERNANCE) The Privacy Pivot: Context-Aware Masking We’re also solving the “Third Rail”: PII. Instead of binary Access Control (You see it or you don’t), we use Context-Aware Masking. We allow the agent to “see” the Social Security Number to perform a verification, but the Semantic Constitution strictly prohibits writing that SSN to the logs or memory. We govern observation, not just access. Self-Healing Governance The ultimate destination isn’t just an agent that reads data; it’s an agent that fixes it. When our “Journalist” agent finds a discrepancy, it doesn’t just error out. It generates a Governance Proposal—a suggestion to update the semantic map or flag a dirty record. The Data Stewards stop being janitors and start being Editors, approving the fixes that the agents propose. We’re done building $12,000 pipelines for $0.01 questions. The Library is closed. The Newsroom is open. If you find my writing thought-provoking, please give it a thumbs up and/or share it. If you think I might be interesting to work with, here’s my contact information (my availability is limited): Book an appointment: https://pjtbd.com/book-mike Email me: mike@pjtbd.com Call me: +1 678-824-2789 Join the community: https://pjtbd.com/join Follow me on 𝕏: https://x.com/mikeboysen Articles - jtbd.one - De-Risk Your Next Big Idea New Masterclass: Principle to Priority Q: Does your innovation advisor provide a 6-figure pre-analysis before delivering the 6-figure proposal? This is

    8 min
  5. JAN 27

    Fire the Patient: The End of Adherence

    The pharmaceutical industry relies on patients to take daily pills, but this manual process fails half the time because human memory is unreliable. This design flaw creates a massive efficiency gap where patients perform unpaid work to manage their health, costing billions in preventable emergencies. Instead of creating apps to nag patients, companies must switch to autonomous implants and injectables that deliver medicine automatically without user effort. This shift guarantees the medicine works and removes the burden of adherence entirely. A First Principles Deconstruction of Medical Compliance Target Audience: Pharma Executives, Digital Health Strategists, Product Architects, Clinical Operations Leads. Part I: The Deconstruction (The Socratic Scalpel) Goal: Dismantle the industry’s “Stuck Belief” that non-compliance is a behavioral flaw rather than a design defect. The “Bad Patient” Myth The pharmaceutical industry operates on a fundamental category error: the belief that non-compliance is a behavioral defect rather than a structural failure. When a system fails 50% of the time—as chronic medication adherence does after six months, according to the World Health Organization—it is not a user error; it is a design flaw. The industry has spent decades trying to “fix the patient” through education, gamification, and behavioral nudges, failing to recognize that the patient is unreliable “human middleware” in a precise chemical delivery loop. We currently rely on biological agents (patients) to execute precise pharmacokinetic schedules, a task for which the human brain is evolutionarily ill-suited. In any other safety-critical industry—aviation, nuclear power, or high-frequency trading—relying on manual human intervention for daily maintenance would be considered negligence. Yet in medicine, we label the failure of this manual process as “non-compliance,” shifting the burden of the system’s design failure onto the end-user. 
We don’t need better patients; we need to fire the patient from the job of drug delivery. The Five Whys of Failure (RFPA Step 1) To understand why the “Adherence Crisis” persists despite billions in investment, we must apply the Robust First Principles Analyst (RFPA) protocol, starting with the Five Whys. This diagnostic chain reveals that the problem is not biological or psychological, but economic. * Why is compliance low? Because the therapeutic loop requires manual human actuation every 24 hours. * Why is the loop manual? Because the Oral Solid Dose (OSD)—the pill—is the dominant standard for medication delivery. * Why is the pill dominant? Because it is exceptionally cheap to manufacture and stable to ship. * Why do we prioritize manufacturing cost over delivery reliability? Because the business model is built on “Volume of Units Sold” rather than “certainty of therapeutic outcome.” * The Root Cause: We are optimizing for Factory Efficiency (the cost to make the pill), not Therapeutic Efficacy (the probability the molecule reaches the receptor). This analysis reveals that “Adherence” is a problem created by the solution itself. We are currently stuck in Path A (Optimization), trying to make “pill swallowing” easier. The First Principles imperative is Path B (Deletion): question the existence of the pill. If the delivery mechanism (the pill) requires a level of consistency that the user (the human) cannot provide, the mechanism must be deleted. The “Nagging” Economy The “Nagging Economy”—the estimated $50 billion sector comprising adherence apps, smart pill bottles, glow-caps, and nurse call centers—is the industrialization of “Sustaining Innovation.” These tools attempt to patch a fundamentally broken user interface (the daily dose) rather than eliminating the friction entirely. In the vocabulary of the Musk Loop, this is “paving the cow path”—automating and optimizing a process that should not exist. 
Consider the “Smart Pill Bottle.” It uses sensors and Bluetooth to track when a patient opens the cap, sending data to a cloud dashboard. While technologically impressive, it is functionally absurd. It adds cost ($50+ per unit), complexity (batteries, syncing, data privacy), and cognitive load (notifications) to a process that is already failing due to friction. It is a Band-Aid Innovation that reinforces the legacy dependence on the daily oral dose. True innovation does not make it easier to remember the pill; it makes the pill unnecessary. We must stop building better alarm clocks and start building autonomous delivery systems. Part II: The ID10T Audit (Efficiency Gap Analysis) Goal: Quantify the economic and physiological cost of the current “Daily Dose” model using the ID10T Index. Calculating the “Compliance Tax” The “Compliance Tax” is the rigorous quantification of the systemic waste generated by the current reliance on manual patient adherence. It is not an abstract concept; it is a direct financial levy on the healthcare system caused by the failure of the “Daily Dose” interface. According to the CDC and NIH, the direct cost of prescription non-adherence in the United States ranges from $100 billion to $300 billion annually. This figure represents the cost of avoidable hospitalizations, emergency room visits, and escalated disease states resulting from patients failing to act as reliable delivery mechanisms. In human terms, this design failure results in approximately 125,000 preventable deaths per year. This is equivalent to a fully loaded jumbo jet crashing every single day. If any other consumer product—a car, a toaster, a phone—had a user interface failure rate that killed 125,000 people annually, it would be recalled immediately. The fact that the “Daily Pill” remains the standard of care is a testament to the industry’s focus on Manufacturing Ease over User Reality. 
The ID10T Index of the Oral Solid Dose The ID10T Index (Inefficiency Delta in Operational Transformation) measures the gap between the Current Commercial Price of a process and its Theoretical Minimum Cost (physics limit). For medication adherence, the inefficiency is driven by “Shadow Labor”—the uncompensated work we force patients to perform. The Numerator (Current Commercial Reality): The true cost of the daily pill is not just the pharmacy price. It includes the cost of the “Compliance Infrastructure” required to prop up the failing system. This includes: * Nursing time spent on adherence counseling. * The cost of “Smart” packaging and reminder apps. * The catastrophic cost of “Rescue Care” (ER visits) when the system fails. The Denominator (The Physics Limit): The theoretical minimum cost of maintaining a therapeutic blood concentration is the cost of the molecule plus the energy required to deliver it, with zero human labor. * The Labor Floor (Shadow Labor): We assign a value to the patient’s time using the Standard L1 Manual Labor Rate ($25/hr). * Task: Remember, Locate, Open, Swallow, Log, Refill. * Time: Conservative estimate of 2 minutes per day. * Calculation: $25/hr * (2/60 hours) * 365 days = $304.17 per patient per year. * The Bits Floor: The cost of digital monitoring (if autonomous) approaches $0.01. The Efficiency Delta: The current system essentially imposes a $304 annual labor tax on every patient for every chronic medication they take. If a patient is on 5 medications, they are performing $1,500 worth of uncompensated labor annually to act as a manual servo in the pharma supply chain. By switching to a Long-Acting Injectable (LAI) or Implant that requires intervention only twice a year (15 minutes of L2 Skilled labor), we reduce the failure points by a factor of 180x (365 events vs. 2 events). The “Entropy of Adherence” Adherence is ultimately a problem of thermodynamics: systems tend toward disorder (entropy) unless energy is applied. 
In the context of daily medication, “Entropy” is the statistical probability of missing a dose. The Rule of Compounding Failure: Every manual step in a process introduces a probability of failure. If a patient has a 90% reliability rate for a single task (high for a human), a daily regimen often involves three distinct micro-steps: (1) Remembering the time, (2) Locating the medication, (3) Physically ingesting it. * Reliability Calculation: $0.9 \times 0.9 \times 0.9 = 0.729$ (72.9% reliability per day). * Over a week, the probability of Perfect Adherence drops to roughly 10%; over a month, it is effectively zero. Conclusion: You cannot “educate” a human to overcome the laws of thermodynamics. You cannot “gamify” your way out of entropy. The only way to increase the reliability of the system is to delete the steps. By moving from a daily oral dose (365 steps/year) to a semi-annual implant (2 steps/year), you structurally eliminate the opportunity for entropy to enter the system. We must stop trying to make the patient more disciplined and start making the therapy more autonomous. Part III: The Path Choice (JTBD Elevation) Goal: Pivot from “Getting Patients to Take Meds” (Path A) to “Ensuring Therapeutic Levels” (Path B). Defining the Job-to-be-Done The Jobs-to-be-Done (JTBD) framework demands we separate the Solution (the pill) from the Job (the biological outcome). The pharmaceutical industry has historically operated at Level 1 Abstraction, defining the job as “Help me remember to take my medication.” This low-level definition inevitably leads to low-level solutions: reminder apps, vibrating caps, and automated pill dispensers. These solutions fail because they assume the patient wants to perform the task of adherence. They do not. To unlock disruptive innovation, we must elevate the job to Level 3: “Maintain therapeutic blood concentrations of MoleculeX within the effective window.” This simple linguistic shift fundamentally alters the design constraints. 
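The two calculations behind this audit, the shadow-labor tax and the compounding-failure reliability, can be checked in a few lines of Python. All rates and step counts are the article's stated assumptions, not clinical data:

```python
# Checking the adherence arithmetic. $25/hr L1 labor, 2 min/day, and
# 90% per-step reliability are the article's assumptions.

L1_RATE = 25.0                        # $/hr "shadow labor" rate
labor_tax = L1_RATE * (2 / 60) * 365  # 2 minutes per day, every day
print(f"Annual labor tax per medication: ${labor_tax:.2f}")   # ~$304

per_step = 0.90            # reliability of one manual micro-step
daily = per_step ** 3      # remember x locate x ingest
print(f"Daily reliability: {daily:.3f}")      # 0.729
print(f"Perfect week:  {daily ** 7:.4f}")     # ~0.11
print(f"Perfect month: {daily ** 30:.2e}")    # effectively zero

print(f"Dose events, daily pill vs semi-annual implant: 365 vs 2 "
      f"(~{365 / 2:.0f}x fewer failure points)")
```

The same three lines of arithmetic underpin both the ~$304 "labor tax" and the ~180x failure-point claim.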
If the job is to “maintain blood concentration,” the patient is no longer a necessa

    7 min
  6. JAN 23

    The Computational Imaging Revolution: Deconstructing the MRI Monopoly

    The medical imaging industry is stuck building massive, expensive "Cathedrals" for MRI machines because they believe better images only come from giant magnets. This old-fashioned thinking makes current scanners 42 times more expensive than necessary, costing millions for hardware when physics says it should cost about $50,000. By replacing expensive copper shielding and super-cold magnets with smart software and artificial intelligence, we can build portable, affordable scanners that plug into a regular wall outlet. This shift turns MRI from a rare, expensive procedure into a common tool that doctors can bring directly to the patient's bedside to diagnose strokes instantly. Executive Summary: The End of the “Cathedral” Model Audience: Healthcare Strategists, Hardware Engineers, Deep Tech Investors The medical imaging industry is currently trapped in a “Hardware Arms Race,” operating on the flawed, linear assumption that diagnostic utility is strictly a function of magnetic field strength … This “Reasoning by Analogy” has produced 7-Tesla “Cathedrals”—immensely expensive, immovable suites requiring liquid helium cooling—that alienate patients from care. This document deconstructs that monopoly. By applying First Principles thinking, we demonstrate that Signal-to-Noise Ratio (SNR) is no longer solely a hardware constraint (Atoms) but a computational one (Bits). The convergence of low-field permanent magnet physics (0.064T), active electromagnetic interference cancellation, and Deep Learning reconstruction (DL-ESPIRiT) enables a scanner with a Theoretical Minimum Cost of ~$50,000 to perform the same Job-to-be-Done as a $2.1 million machine: detecting pathology at the point of care. We are witnessing the shift from MRI as a procedure to MRI as a utility. Part I: The Deconstruction (The “Tesla Cult”) The Stuck Belief: The Tyranny of the Boltzmann Distribution The central dogma of modern radiology is that Image Quality is a function of Magnetic Field Strength. 
To understand why the industry is stuck, we must understand the physics they are optimizing for. MRI works by aligning the protons of hydrogen atoms (mostly in water) with a magnetic field. The clarity of the image depends on how many protons align “up” versus “down.” The ratio of this alignment is governed by the Boltzmann Distribution: $N_{\uparrow}/N_{\downarrow} = e^{\Delta E / kT}$, where $\Delta E = \hbar \gamma B_0$ (the energy difference) is directly proportional to the magnetic field strength. * The Industry Logic: To get more signal (higher SNR), you simply increase magnetic field strength. * The Consequence: This created a linear innovation trajectory. * The Cost Function: While Signal scales roughly linearly with Field Strength, Cost scales quadratically (or exponentially) due to the requirements of superconductivity. This logic is a classic “Reasoning by Analogy” trap. It assumes that the only way to recover structure from data is to increase the volume of the raw signal. In the pre-GPU era, this was true. In the post-Transformer era, it is false. We are effectively paying millions of dollars for “Hardware SNR” when “Software SNR” (reconstruction algorithms) costs pennies per inference. The Socratic Scalpel: Challenging the Hardware Monopoly We must apply the Socratic Inquiry to dismantle the “High-Field” consensus. * Inquiry 1 (Clarification): “What exactly are we buying when we spend $2 million on a 3T magnet?” * The Surface Answer: “We are buying high-resolution images.” * The First Principles Answer: “No. We are buying proton alignment. We are buying a higher probability that a hydrogen nucleus will precess at the Larmor frequency in a detectable way.” * Inquiry 2 (Challenging Assumptions): “Why must signal alignment be physical? Is it possible to reconstruct the structure of the anatomy from a lower signal using probabilistic models?” * The Assumption: “You cannot image what you cannot measure.” * The Counter-Evidence: Modern Deep Learning models (U-Nets, GANs) routinely upscale 480p video to 4K. 
The “texture” of high resolution can be hallucinated mathematically if the underlying “structure” (anatomy) is preserved. * Inquiry 3 (Implication): “If diagnostic confidence can be achieved at 64mT (milliTesla), what happens to the infrastructure?” * The Result: If we drop the field strength, we lose the requirement for superconductivity. If we lose superconductivity, we lose Liquid Helium. If we lose Liquid Helium, the scanner becomes a consumer appliance. The Legacy Artifact: The Helium Hostage Crisis The single greatest barrier to MRI accessibility is Liquid Helium. To maintain a superconducting magnet, the coils must be bathed in liquid helium to reach 4 Kelvin (-452°F). This creates a cascade of physical constraints that define the “Cathedral” model. The Quench Pipe (Infrastructure Cost) If a superconducting magnet loses its cooling (a “quench”), the liquid helium boils instantly, expanding 700:1 in volume. * The Constraint: This requires massive, dedicated cryogenic exhaust pipes (typically 10-12 inch diameter stainless steel) routed directly to the outside of the building. * The Cost: Retrofitting a hospital room with quench pipes typically costs $50,000 - $150,000 alone. This makes mobile deployment impossible; you cannot attach a quench pipe to an elevator. The Supply Chain Shock (Operational Risk) Helium is a non-renewable resource, typically a byproduct of natural gas extraction. * Source Concentration: The majority of the world’s supply comes from the US (Cliffside Field), Qatar, and Russia. * Volatility: Prices have quadrupled in the last decade. Hospitals are frequently placed on “allocation,” meaning they cannot top off their scanners, risking a catastrophic quench. * Conclusion: Building a global healthcare infrastructure on a volatile, non-renewable noble gas is a strategic failure. The cooling system is not a feature; it is a Process Artifact. 
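The Boltzmann argument from Part I can be made concrete with a short Python sketch. The constants are standard proton-NMR values; the field strengths (1.5 T "Cathedral" vs 0.064 T low-field) and body temperature come from the article:

```python
# Sketch of the Boltzmann spin-polarization arithmetic behind the
# "Tesla Cult". Standard physical constants; fields from the article.
import math

GAMMA = 2 * math.pi * 42.577e6   # proton gyromagnetic ratio, rad/s/T
HBAR = 1.0545718e-34             # reduced Planck constant, J*s
K_B = 1.380649e-23               # Boltzmann constant, J/K
T_BODY = 310.0                   # body temperature, K

def polarization(b0: float) -> float:
    """Net spin polarization tanh(dE / 2kT), with dE = hbar * gamma * B0."""
    return math.tanh(HBAR * GAMMA * b0 / (2 * K_B * T_BODY))

for b0 in (1.5, 0.064):
    print(f"B0 = {b0:>5} T -> polarization = {polarization(b0):.2e}")

# In this regime polarization (raw signal) is ~linear in field strength:
print(f"raw-signal gap: {polarization(1.5) / polarization(0.064):.1f}x")
```

The roughly 23x raw-signal deficit at 64 mT is exactly the gap the document argues can be closed with reconstruction software instead of superconductors.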
The Copper Prison: The Faraday Cage High-field MRI systems operate at Larmor frequencies that overlap with commercial FM radio (64 MHz at 1.5T). Because the MRI signal is radio-frequency (RF), external radio waves will ruin the image. * The Legacy Solution: Build a Faraday Cage. A room completely lined with copper shielding. * The Cost: Shielding a standard MRI suite requires tons of copper and specialized labor, costing $30,000 - $50,000. * The Isolation: This cage physically separates the patient from the rest of the ICU. You cannot simply roll a 1.5T scanner next to a ventilator because the ventilator is an RF noise source, and the scanner is an RF receiver. The cage is a “monument to passive engineering.” Part II: The ID10T Audit (Efficiency Gap) To quantify the inefficiency of the current model, we apply the ID10T Index (Inefficiency Delta in Operational Transformation). We compare the Current Commercial Price of the status quo against the Theoretical Minimum Cost dictated by physics. The Numerator: The “Cathedral” Standard (1.5T Fixed Suite) The cost of a standard installation is driven by weight, power, and shielding requirements. These are not “medical” costs; they are “physics management” costs. * Hardware (The Machine): ~$1,500,000. * Superconducting Niobium-Titanium coils. * Cryostats. * High-voltage gradient amplifiers (2000V+). * Site Preparation (The Room): ~$500,000. * Copper Faraday Cage. * Structural reinforcement (floor loading for 5+ tons). * Cryogen exhaust venting (Quench pipe). * Magnetic shielding (Silicon steel to contain the 5-Gauss line). * Operational Opex (The Tax): ~$100,000/year. * Helium top-offs. * Cold-head replacement (mechanical cryocooler). * Electricity (50-100 kW peak power draw). Total Numerator: ~$2,100,000 + Strict Zoning. The Denominator: The “Computational” Standard (Low-Field) The theoretical minimum cost relies on Permanent Magnets (which require zero power) and Compute (which rides Moore’s Law). 
* Atoms (The Magnet): ~$15,000. * Material: Sintered Neodymium-Iron-Boron (NdFeB), Grade N48 or N52. * Configuration: Halbach Array (Self-shielding). * Mass: ~300kg. * Calculation: 300kg $\times$ ~$50/kg (Spot Price) = $15,000. * Bits (The Shield & Reconstruction): ~$1,000. * EMI Sensors: Standard RF antennas. * Compute: NVIDIA Jetson or similar edge inference module. * Reconstruction Cost: ~$0.10 per scan (Energy cost of inference). * Regulatory Floor: ~$75/hr. * Operated by an L2 Skilled Tech or Nurse, rather than an L3 Specialist. Total Denominator: ~$50,000 Hardware Cost. The ID10T Index Calculation: $2,100,000 divided by $50,000 equals 42. The Verdict: The medical imaging industry is operating at 42x inefficiency. It’s clearly not the answer to Life, the Universe, and Everything. * We are paying for Passive Shielding (Copper) instead of Active Cancellation (Algorithms). * We are paying for Hardware Signal (Superconductors) instead of Software Signal (Deep Learning). * Every dollar spent above $50k is a subsidy for “Reasoning by Analogy.” Part III: The Path Choice (JTBD Elevation) Innovation requires elevating the Job-to-be-Done (JTBD) to a level of abstraction that allows for disruptive solutions. We must define the job in terms of the patient’s struggle, not the machine’s capability. Job Definition: De-anchoring from the Scanner * Level 1 (The Trap): “Generating a high-resolution T1-weighted image of the brain.” * Why it fails: This job definition forces you to compete on resolution, which favors high-field magnets. If the job is “resolution,” 7T always wins. * Level 2 (The Shift): “Diagnosing a stroke within the golden hour.” * Context: A stroke patient loses 1.9 million neurons per minute. The constraint is not image quality; the constraint is time. Driving the patient to the “Cathedral” takes 45 minutes. Bringing the scanner to the patient takes 5 minutes. 
* Level 3 (The Elevated Job): “Assessing neurological status at the point of care.” * The Acceptance Criteria: The job is not “make a pretty picture.” The

    8 min
  7. JAN 20

    First Principles of Logistics: The Deconstruction of Parcel Induction

    Warehouses incorrectly use people to place boxes on conveyor belts, believing only human hands can handle the variety of packages. This approach is incredibly wasteful because manual labor is slow and costs nearly 90 times more than using machines for the same amount of work. Instead of trying to make workers faster, companies should remove humans from this task entirely. The best solution is to use automated systems that rely on physics and sensors to sort packages faster and cheaper. Part I: The Deconstruction – The Myth of the Human Singulator Reality: No opportunity landscape needed. Elon Musk never hired an ODI study. The Industry Consensus (The Stuck Belief) In the logistics and warehousing sector, the “Induction Station” is widely accepted as the unavoidable interface between the chaos of inbound receiving and the order of high-speed sorting. The prevailing industry dogma holds that “High-speed induction of heterogeneous parcels requires human dexterity.” This belief chains the industry to a “Human Robot” model. We deploy biological agents (operators) to perform repetitive, low-latency mechanical tasks—picking up a box, orienting it, and timing its placement onto a moving tray or belt. This is a profound misuse of the human engine. We are paying L1 Manual Rates ($25/hr) for a task that utilizes none of the human’s higher-order cognitive faculties (reasoning, complex problem solving) and instead exhausts their weakest subsystem: mechanical endurance. The “Smart Person” Trap in Induction When asked why this process is manual, facility managers (the “Smart People”) often cite the “Heterogeneity Argument”: * “Robots can’t handle the variety. We have polybags, tubes, tires, and boxes. Only a human hand can grasp them all.” * “We need humans to ensure labels are facing up for the scanner.” These are not first-principles constraints; they are technological confessions. 
They confess that the upstream process (receiving) is too chaotic and the downstream process (scanning) is too myopic. The human is merely the Entropy Filter inserted to patch these two systemic failures. The Socratic Scalpel: Severing the Link To deconstruct this, we must apply the Socratic Scalpel to the core assumption: Does singulation actually require dexterity? * Q (Clarification): “What exactly is the ‘job’ of the induction operator?” * A: “To place items one by one on the belt with a gap.” * Q (Challenge Assumption): “Why must they be placed ‘one by one’? Why can’t they flow?” * A: “Because they arrive in a pile (bulk).” * Q (First Principles): “Is separating a pile into a stream a biological problem or a physics problem?” * A: “It is a friction and velocity problem.” * Q (Implication): “If we solve the friction problem using variable speed belts, what is the value of the human in the loop?” * A: “Zero. In fact, negative, because humans are inconsistent.” The Fundamental Truth Human induction is a latency buffer masquerading as a quality control step. The persistence of manual induction is not due to the superiority of the human hand, but due to the historical failure to apply Flow Dynamics to parcel geometry. As demonstrated by the Law of Constraints, any system that relies on human cycle times to feed a machine cycle time (the sorter) will ultimately be capped by human biological limits (~15-20 PPM), rendering the machine’s theoretical capacity (often 60+ PPM) unreachable. The Cost of “Reasoning by Analogy” Warehouses continue to build manual induction lines because they are reasoning by analogy: “Our last facility had manual induction, and it worked, so we will do it here.” This leads to the “Sustaining Innovation” Trap. * Sustaining Path: We buy better anti-fatigue mats. We install vacuum lift assists. We add gamification screens to “motivate” workers to move faster. * Result: We invest capital to make an inefficient process slightly more tolerable. 
We are “digging the grave faster” (RFPA Command 4). The First Principles approach requires us to delete the process entirely. We do not want better manual induction; we want no manual induction. We want the parcels to organize themselves through the application of physics—specifically, centrifugal force (unscramblers) and differential velocity (gapping belts). “The most common error of a smart engineer is to optimize a thing that should not exist.” — RFPA Protocol Part II: The ID10T Audit – Measuring the Efficiency Gap The Audit Failure: Why “Per-Unit” Math Lies Critical Correction: A previous analysis yielded an ID10T Index of 2.06. This is a “False Floor.” It assumed a 1:1 comparison between a human and a machine. * The Flaw: It failed to account for Throughput Density. To achieve high-speed sortation (e.g., 10,000 Parcels Per Hour), you cannot simply “speed up” a human. You must replicate them. * The Correction: We will calculate the ID10T Index based on a Capacity Block of 10,000 PPH. The Numerator: The Cost of “Manual Scale” To process 10,000 PPH manually, we face the Linear Scaling Problem. * Throughput per Human: 800 PPH. * Headcount Required: $10,000 / 800 = 12.5$ → 13 Operators. * Support Staff: 1 Supervisor per 10 staff + 1 “Water Spider” (supplies). Total: 15 Humans. * The Wage Stack: * 13 Operators @ $25/hr = $325/hr. * 2 Support @ $35/hr = $70/hr. * The Hidden “Space Tax”: 13 Induction lanes require ~200 linear feet of conveyor. At industrial lease rates + conveyor depreciation, this adds ~$50/hr in “Real Estate & Capital Waste.” * Total Commercial Price (Numerator): $445.00 per operating hour. The Denominator: The Physics of “Flow Scale” To process 10,000 PPH via physics (Automated Singulation), we operate at the Logarithmic Limit. * The Energy Limit: 10,000 parcels X 49 Joules = 490,000 Joules = 0.136 kWh. * The Information Limit (Revised): * Correction: We previously applied a $0.01 “Cloud API” cost. This is incorrect for local controls. 
A local PLC/Photo-eye loop operates at the speed of light for the cost of electrons.

* Cost: Negligible.
* The Machine Wear: Friction and motor depreciation at high speed.
  * Est: $5.00 per hour.
* Total Physics Denominator: ≈ $5.00 per operating hour (machine wear dominates; energy and information costs are rounding errors).

The True ID10T Score

$445.00 ÷ ~$5.00 ≈ 89x.

What Was Originally Missed?

To reach a multiple of 50+, we had to include three factors that the “Unit Cost” analysis ignores:

* The “Linear Scaling” Penalty: Humans do not scale. To get 10x output, you pay 10x cost. Machines scale efficiently; to get 10x output, you often just run the VFD at 60Hz instead of 30Hz.
* The “Management Tax”: You cannot deploy 13 L1 operators without L3 supervision. This layer of “management to manage the inefficiency” is pure waste.
* The “Opportunity Cost” of Utilization:
  * Scenario: The downstream sorter runs at a fixed speed.
  * Human Feed: 85% Fill Rate (variegated spacing, missed lugs). 15% of the sorter’s capital value is wasted every hour.
  * Machine Feed: 99% Fill Rate.
  * Value: Recapturing that 14% capacity is worth millions annually, dwarfing the hourly labor rate.

Part III: The Path Choice – Elevation & Selection

Job-to-be-Done Elevation

To escape the “Human Robot” trap, we must redefine the job using Level 3 Abstraction.

* Level 1 (The Task - Wrong): “Placing boxes on a conveyor belt.”
  * Result: Better gloves, lift assists.
* Level 2 (The Outcome - Better): “Maximizing sorter utilization.”
  * Result: Faster humans, gamification.
* Level 3 (The Abstraction - Correct): “Harmonizing asynchronous object flow into synchronous sorter injection.”

The Job Definition: The goal is not to “lift boxes.” The goal is to take an asynchronous, chaotic arrival pattern (receiving) and convert it into a synchronous, gapped departure pattern (sorting).

The Path Decision: Constrained vs. Disruptive

Path A: Constrained Optimization (The Dead End)

* Strategy: Keep the human, but make them faster.
* Tactics: “Follow-the-light” pacing systems, ergonomic tilt-tables to reduce reach distance.
* Why it Fails: It accepts the L1 Labor Rate as a fixed constraint. It violates RFPA Command 4 (“If you are digging your grave, don’t dig faster”).

Path B: Disruptive Deletion (The Physics Standard)

* Strategy: Remove the human entirely. Use friction and vision to execute the job.
* Tactics: Automated Singulation (Bulk-to-Stream conversion) + Vision Tunnels (6-sided scanning).
* Verdict: This is the only path that respects the First Principles analysis.

Part IV: The Reconstruction – The RFPA Loop

Step 1: Make Requirements Less Dumb

The Constraint: “Humans must place parcels label-up so the scanner can read them.”
The Interrogation: Who set this requirement? The scanner vendor from 1999?
The Physics Truth: Light travels in straight lines, but mirrors and multiple cameras can capture light from all angles simultaneously.
The Fix: Install 6-Sided Scan Tunnels.
Result: The “orientation requirement” is deleted. A box can be upside down, sideways, or tumbling—the machine still reads it.

Step 2: Delete the Part (The Pick & Place)

The Component: The human hand performing the “Pick and Place” motion.
The Deletion: We replace the discrete action (pick-place) with a continuous process (flow).
The Replacement: Bulk Flow Singulators.

* Instead of picking items out of a cart, we tip the cart into a hopper.
* The hopper feeds an “unscrambler” belt that uses centrifugal force to line items up single-file.
* Result: The “Pick and Place” step is deleted. The L1 labor role is deleted.

Step 3: Simplify & Optimize

The Optimization: Dynamic Gapping.

* Once singulated, items need specific gaps (e.g., 12 inches) to enter the sorter.
* Instead of a human guessing the gap, we use Variable Frequency Drive (VFD) Belts.
* Logic: Belt A runs at 1.0 m/s. Belt B runs at 1.5 m/s. The speed difference creates the exact gap required by the physics of the sorter.
* Result: Precision increases from ±6 inches
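The dynamic-gapping logic above can be checked with simple kinematics: when head-to-tail parcels hand off to a faster belt, the pitch between them stretches by the speed ratio, and the excess over the parcel length is the gap. A minimal sketch; the function name and the hypothetical 24-inch parcel length are our illustrative assumptions, not from the episode.

```python
def gap_after_transfer(parcel_len_in: float,
                       v_feed: float,
                       v_gapping: float) -> float:
    """Gap (inches) created when touching parcels hand off from a feed belt
    at v_feed to a gapping belt at v_gapping (m/s). The parcel-to-parcel
    pitch stretches by the speed ratio; the excess over the parcel length
    is the resulting gap."""
    pitch_on_feed = parcel_len_in                      # touching: pitch = length
    pitch_on_gap = pitch_on_feed * (v_gapping / v_feed)
    return pitch_on_gap - parcel_len_in

# Belt A at 1.0 m/s, Belt B at 1.5 m/s, 24-inch parcels:
print(gap_after_transfer(24.0, 1.0, 1.5))   # 12.0 inches
```

A 1.5x speed ratio turns touching 24-inch parcels into a 12-inch gap deterministically—no guessing, which is why VFD belts beat a human placing by eye.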

    8 min
  8. JAN 16

    Post-Payday Architecture: The Shift from Batch to Stream

Executive Summary

The Problem: The “standard” two-week pay cycle is a relic of 1950s mainframe computing, not a law of economics. This artificial latency forces workers into predatory debt (payday loans, overdrafts) and bleeds employers through massive turnover costs.
The Efficiency Gap: We are paying $35 in overdraft fees or $4,000 in turnover costs to solve a problem that physics says costs $0.01 (a database query).
The Disruption: Stop optimizing “financial wellness” seminars. Delete the latency. Implement Earned Wage Access (EWA) to align compensation speed with work speed.

PART I: THE APEX STRATEGY

Deconstructing the Stuck Belief

Input Subject: The Bi-Weekly Paycheck (The “Batch Processing” Legacy).
The Stuck Belief: “Employees must be paid in batches because calculating payroll takes time and capital.”

Socratic Inquiry (The Scalpel):

* (Clarification): “What exactly prevents real-time payment? Is it a law, or a software limitation?”
* (Challenge): “We stream movies, data, and electricity in real-time. Why is money—which is just data—the only utility that lags by 14 days?”
* (Implication): “If we delete the 14-day lag, do we delete the need for predatory credit entirely?”

Key Insight: The “Payday” concept is an artifact of physical check printing and manual ledger reconciliation. In a digital API environment, it is purely artificial friction.

The Efficiency Delta

We calculate the cost of this artificial latency using the RFPA Protocol.

Numerator (P_market - The Cost of Friction):

* Employee Side: Average Overdraft Fee ($35) or Payday Loan Interest (400% APR).
* Employer Side: Replacement cost of an entry-level employee due to financial-stress turnover (~$4,129 per hire, per SHRM data).

Denominator (P_min - The Physics Limit):

* The Bits Floor: The cost to verify “Hours Worked” and execute a ledger transfer = $0.01 (Digital Transaction Limit).

The Index: $35 ÷ $0.01 = 3,500.

Conclusion: The market is paying a 3,500x premium for “latency” that shouldn’t exist.
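The Efficiency Delta reduces to one ratio. A minimal sketch using the figures quoted above ($35 overdraft, $0.01 physics floor, $4,129 turnover cost); the function name `id10t_index` is our own shorthand, not an API from the episode.

```python
def id10t_index(p_market: float, p_min: float) -> float:
    """Ratio of what the market pays to solve a problem (P_market)
    to the physics-limited minimum cost (P_min)."""
    return p_market / p_min

# Bridging the pay gap with an overdraft fee vs. a real-time ledger transfer:
print(id10t_index(35.00, 0.01))      # 3500.0 -- the 3,500x latency premium
# The employer-side version: turnover cost vs. the same $0.01 transfer:
print(id10t_index(4129.00, 0.01))    # 412900.0
```

The employer-side ratio is even more lopsided than the employee-side one, which is why the essay frames turnover, not fees, as the larger hidden cost.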
Path Selection

* Path A (Optimization - Rejected): “Financial Wellness” apps, budgeting workshops, or annual bonuses. (Paving the Cow Path.)
* Path B (Disruption - Selected): Earned Wage Access (EWA). Streaming liquidity to match labor input.

The Reconstruction (First Principles Solutions)

The Job-to-be-Done (JTBD): “Align cash flow with life flow.”

Foundational Axioms:

* Work is Continuous: Value is created every hour, not every 14 days.
* Money is Data: Moving it should be instant and frictionless.
* Liquidity is Retention: The fastest payer wins the talent war.

The Mechanism:

* A “Zero-Integration” overlay that fronts the capital (no cash-flow hit to the employer).
* Automatic reimbursement on “Cycle Day” (no structural change to payroll processing).

Execution Strategy (Real Options)

* Option to Explore: Audit the “Turnover Tax.” How many exits cite “better pay” or “financial stress”?
* Option to Validate: Pilot EWA with a single department. Measure absenteeism and shift pickup rates.
* Option to Scale: Roll out company-wide as a core “Financial Health” benefit, replacing costly recruitment drives.

PART II: THE DECONSTRUCTION

The Legacy of Latency

Chapter TL;DR: We treat the “bi-weekly paycheck” as an immutable economic law, but it’s actually a technical scar from the 1950s. Modern banking APIs can move money in milliseconds ($0.01 cost), yet we force employees to wait 336 hours (14 days) to access wages they’ve already earned.

The Mainframe Hangover

The bi-weekly pay cycle is not a business requirement; it’s a fossilized software limitation. In the 1950s and ’60s, calculating payroll was a massive computational event. Companies ran mainframes (like the IBM 1401) that required physical punch cards and hours of dedicated processing time. It was computationally expensive to run these “batches.”

* The Constraint: You couldn’t run the payroll “job” every day because the computer time was too valuable and the manual reconciliation took too long.
* The Artifact: To save processing power, companies spaced payments out to every two weeks (or monthly).
* The Reality Today: We carry this “batch processing” mentality into a world of cloud computing where computing power is effectively infinite and free. We aren’t limited by punch cards anymore, but we still pay people as if we are.

Money is Just Data

If we apply Socratic Inquiry to the nature of money today, we hit a fundamental truth: money is information. When you stream a movie on Netflix, you don’t wait 14 days for the data to buffer. You get it the second you request it. Why? Because the cost of transmitting that data is negligible. Wage transfer is identical. It is simply a ledger entry moving from Employer_Account to Employee_Account.

* The Physics of the Transaction: The actual cost to update a database row (which is what a bank transfer is) is fractions of a penny.
* The Artificial Friction: The 14-day delay is purely artificial. It’s an administrative choice to hold capital that rightfully belongs to the worker.

As noted in Real Options Logic, value is created the moment the work is performed. Holding that value back creates a “Liquidity Gap” that forces the employee to seek expensive bridge capital.

The High Cost of “Float”

This latency creates a massive ID10T Index gap. By holding wages for two weeks, employers (and their banks) benefit from the “float”—interest earned on money that has technically already been earned by the worker.

* The Employee’s Reality: Because they can’t access their liquidity, 72% of Americans live paycheck to paycheck. When an unexpected bill hits on Day 10 of the pay cycle, they face a liquidity crisis, despite being solvent on paper (accrued wages).
* The Predatory Bridge: To bridge this 4-day gap, they turn to overdrafts ($35 fee) or payday loans (400% APR).
* The Efficiency Delta:
  * Cost of Real-Time Access: ~$0.01–$0.50 per transaction via modern rails (RTP/FedNow).
  * Cost of Waiting: $35.00 (Overdraft Fee).
* The Gap: We are paying a 3,500x premium for a delay that serves no functional purpose.

Reframing the Narrative

We need to stop calling EWA a “loan.” This is a critical semantic shift.

* A Loan: Money you haven’t earned yet, given against a future promise.
* EWA: Money you have earned, accessed when you need it.

When an employee accesses their earned wages, they are simply reducing the settlement time of a transaction that has already occurred. The labor is delivered; the debt is owed. EWA just clears the ledger.

The ID10T Audit (The Cost of Inaction)

Chapter TL;DR: The market price for bridging the 14-day pay gap is roughly $35 (an overdraft fee), while the actual cost to move the money instantly is $0.01. We accept massive financial penalties as the “cost of doing business,” but this is an unforced error.

Defining the Efficiency Delta

To understand why the 14-day pay cycle is obsolete, we can’t just rely on sentiment; we need to use the ID10T Index. This formula calculates the gap between what the market currently pays to solve a problem (P_market) and the theoretical minimum cost defined by physics (P_min). In the context of payroll, “P” represents the cost of accessing liquidity.

The Numerator: The Market Price of Friction (P_market)

The “Market Price” is the penalty paid by the system because the money is stuck in a 14-day buffer. This cost hits both the employee and the employer.

* The Employee Penalty (The Predatory Tax): When an employee runs out of cash on Day 10, they don’t stop consuming electricity or needing food. They hit a liquidity wall. The market solution is a $35.00 overdraft fee or a payday loan with 400% APR.
  * The Cost: $35.00 per incident.
* The Employer Penalty (The Turnover Tax): Financial stress is a leading driver of employee turnover. When an employee quits to get a signing bonus elsewhere just to pay a bill, the employer pays a replacement cost. According to SHRM data, the cost to replace an entry-level employee is roughly $4,129.
* The Cost: $4,129 per exit.

The Denominator: The Physics Limit (P_min)

Now, let’s look at the “Physics Limit.” What is the absolute irreducible cost to move the money? Since money is data, the cost is the energy required to flip a bit in a ledger plus the marginal cost of the bandwidth to transmit that confirmation.

* The Bits Floor: In a modern API environment (like the FedNow rail or a closed-loop ledger), the marginal cost of a transaction approaches zero.
* The Regulatory Floor: Even adding compliance checks, the cost is negligible.
* The Physics Limit: $0.01 (1 cent).

The Calculation: A 3,500x Efficiency Gap

If we compare the common “Overdraft Solution” ($35.00) to the “Real-Time Solution” ($0.01), the math is staggering: we are paying a 3,500x premium for latency. If you bought a gallon of gas for $3.50 and the station charged you a $12,250 “delivery fee”—the same 3,500x premium—to pump it into your car 14 days later, you would riot. Yet this is exactly how the bi-weekly pay cycle operates. We’re burning capital on fees that purchase absolutely no value—they only purchase access to value that already exists.

The Conclusion: Latency is the Enemy

The ID10T Audit proves that the bi-weekly pay cycle isn’t just “old school”; it’s structurally insolvent. It relies on employees paying a “poverty premium” to banks (via overdrafts) or employers paying a “turnover tax” to recruiters. Eliminating the lag isn’t a perk. It’s an efficiency mandate.

PART III: THE RECONSTRUCTION

The Disruption (Aligning Cash Flow with Work Flow)

Chapter TL;DR: The traditional payroll job (“Batch Processing”) solves the wrong problem. We need to delete the concept of “Payday” and replace it with “Streaming Liquidity.” By overlaying a real-time ledger on top of legacy systems, we align cash flow with workflow.

The Job-to-be-Done (JTBD): “Streaming Liquidity”

To fix the payroll system,

    8 min
