The Innovators Studio with Phil McKinney

Phil McKinney

Forty years of billion-dollar innovation decisions. The real stories, the hard calls, and the patterns that repeat across every organization that's ever tried to build something new. Phil McKinney shares what those decisions actually look like. Phil was HP's CTO when Fast Company named it one of the most innovative companies in the world three years running. He co-founded a company and took it public. Now he runs CableLabs, the R&D engine behind the global broadband industry. This isn't theory. It's what happened. And what you can see coming if you know what to look for. Running since 2005, originally as The Killer Innovations Show, now The Innovators Studio. Tens of millions of downloads. Full archive at killerinnovations.com. New episodes at philmckinney.com.

  1. How to Overcome Expert Bias

    10H AGO

    How to Overcome Expert Bias

    Last June, I was on a business trip in Silicon Valley when a second cardiac device failed. Same problem with a second surgical team six months apart. The full story is on philmckinney.com. What changed everything was one doctor who stopped treating what everyone else had diagnosed and asked whether they even had the right problem. That one question uncovered what two surgical teams had missed. That's the expert trap. And it shows up in your business, your career, and your decisions far more than you'd expect. Before you act on the next expert recommendation you receive, there are three checks almost nobody makes. Stay with me, because one of them is going to feel uncomfortable. That's the one that matters most.

    THE TRAP

    A friend of mine ran a mid-sized manufacturing company, and a few years ago, he hired a well-regarded industry analyst to help him think through where his business was headed. The analyst had data, slide decks, and a client list that made you feel like you were in good company just being in the room. He pointed to three companies in adjacent categories that had shifted to direct-to-consumer sales and won. He was confident, he was credible, and he was paid well to be both. My friend followed the advice. He put together a team, built the infrastructure, and ran the channel for twenty-two months. He lost around four million dollars, and his best wholesale distributors felt abandoned. Some of them never came back. The analyst wasn't wrong. Direct-to-consumer had worked for those other companies. The data was real, and the success stories were real. But nobody in that room ever asked whether any of those success stories involved his specific customer, his specific product, or his specific buying cycle. The companies the analyst cited were consumer brands. My friend's company was in the industrial supplies business. Completely different purchase decision. He'd actually noticed this early on, and something felt off, but he never said it out loud because the expert had already spoken. That's the feeling I'm talking about. You notice something doesn't quite fit, but you don't raise it, because who are you to question the expert? That's the expert trap, and it's one of the most reliable ways your thinking gets replaced without you realizing you handed it over.

    WHAT'S ACTUALLY HAPPENING

    When you perceive someone as having more relevant knowledge than you do, your brain measurably reduces the cognitive effort it puts into evaluating what they're saying. This has been studied, and it's not a weakness or a character flaw. It's a shortcut your brain developed because trusting domain expertise is usually the right call. The cardiologist probably does know more about your heart than you do, and the structural engineer probably does know more about load-bearing walls. The shortcut works often enough that it sticks. The problem is what it skips. It doesn't feel like you're surrendering your judgment. It feels like being informed. And so you follow advice that was right, just not for your situation, your timing, or your constraints. The advice was calibrated for circumstances that don't match yours, and the moment the credential appeared, the evaluation stopped. The wrong takeaway from everything I just said is to become reflexively skeptical, to walk into every expert conversation looking for the angle, ready to push back. That's just a different way to stop thinking. The goal isn't distrust. The goal is to stay in the evaluation while the expert is talking, instead of handing it over. Three checks help you do exactly that, and any serious expert should be able to answer them without hesitation.

    CHECK ONE: CONTEXT

    The first check is one question: where, specifically, has this worked before? Most people ask whether something works, and most experts answer that question confidently. But that's the wrong question. What actually matters is where it worked: what kind of organization, what stage of growth, what kind of customer, what competitive environment, what specific circumstances. Expertise is built on pattern recognition developed inside a specific set of situations. The pattern is real, but whether your situation matches it closely enough to actually apply it is a completely different question, and it's the one nobody asks. Even in medicine, good surgeons will tell you that outcomes from major clinical trials don't always replicate cleanly when the patient profile differs from the trial population. The research is real and the expertise is real, but the fit question is what determines whether any of that expertise is actually useful to you right now. Most advisors don't volunteer this, not because they're hiding anything, but simply because nobody asks. So ask, simply and directly: where have you seen this work, and how does that situation differ from ours? A good expert has thought about this already. The answer comes quickly and it's specific. If they get vague or keep circling back to the general principle instead of the specific situation, slow down, because that vagueness is telling you something.

    CHECK TWO: INCENTIVE

    The second check is the one that's going to feel uncomfortable, but ask it anyway: what does the expert gain from this recommendation? Every expert operates inside incentive structures, and that's just how it works. A surgeon recommends surgery more often than a physical therapist does, not because surgeons are corrupt, but because surgery is the tool surgeons have. A financial advisor who earns commission on certain products is structurally more likely to recommend those products. A consultant whose business model depends on long engagements has different incentives than one whose model is based on outcomes. None of this makes the recommendation wrong. It just makes it something you need to understand before you weight it. The way to surface this without it feeling like an accusation is to ask about the logic rather than the incentive. Ask them to walk you through why they chose this approach over the alternatives they considered. Think about it this way. If a mechanic quotes you a repair and you ask why that repair instead of the simpler one, you expect a real answer. You get that answer from a mechanic you trust. You should expect exactly the same from every expert in your life, regardless of how much more impressive their office is. Before we get to the third check, think about the last significant decision you made based on expert input. Could you answer the context question? Could you answer the incentive question? Most people can't. The checks never happened. The third check is the one I almost never see anyone use, and in my experience it's the most revealing of the three.

    CHECK THREE: FAILURE RATE

    The third check is this: when doesn't this work? Think about what every expert presentation looks like. Track record, success cases, confidence: the whole architecture is built around what worked. What failed almost never comes up unprompted. But any expert who has used a recommendation enough to believe in it has also seen it fail. They know where it falls apart and what the warning signs look like. That knowledge is exactly what you need, and it's almost never volunteered. So ask for it directly: when have you seen this approach not work, and what tends to produce a different outcome? The doctor I mentioned at the top, Dr. West, that's exactly the question he asked. Not how to treat the condition better, but whether they even had the right diagnosis. Every other expert had followed the standard protocol. He asked when the standard fails. He found one paper describing one edge case that had been sitting in the literature for six years. That question uncovered what two surgical teams had missed. That's what the failure rate check does. It doesn't surface doubt, it surfaces evidence. And an expert who can only tell you what worked hasn't really thought carefully about when it doesn't. That's someone selling a recommendation, not helping you make a decision.

    THE SYNTHESIS

    Three checks: context, incentive, and failure rate. What they do together is simple. They require the expert to give you something you can actually examine rather than something you're simply being asked to accept. That's the difference between making a decision and receiving one.

    CLOSE

    You already know which of the three checks you'd struggle to make. That's the one worth starting with. The friend I mentioned at the top, the one who spent twenty-two months and four million dollars on a channel that was never right for his business, I talked to him afterward. He knew something felt off from the beginning. He noticed the mismatch. But the confidence in the room, the slides, the client list, all of it washed that feeling away. He said: "I knew enough to ask the question. I just didn't know I was allowed to." You're allowed to. Drop a comment and tell me which of the three checks is hardest for you to make. I want to know if it splits the way I think it does. See you next week.

    15 min
  2. How to Overcome Confirmation Bias

    MAY 6

    How to Overcome Confirmation Bias

    Confirmation bias is shaping your decisions right now. Not occasionally. Every day. And the unsettling part is that the smarter you are, the harder it is to see it happening. By the end of this episode you'll know exactly what confirmation bias is. How to recognize when it has taken over a room. And three specific practices that actually work. Not borrowed frameworks, but what forty years of high-stakes decisions has taught me. Let's get into it.

    What Is Confirmation Bias?

    Confirmation bias is your brain's tendency to seek out, favor, and remember information that confirms what you already believe, filtering out everything that contradicts it. Most people think that just means seeking out information that agrees with them. That's part of it. But here's what makes it truly dangerous. Once you form a strong belief, three things happen automatically.

    Unequal Evaluation. Picture two studies landing on your desk. One says your strategy is working. One says it isn't. You read the first and nod. You read the second and start looking for the flaw: the methodology, the sample size, the funding source.

    Selective Memory. Your brain doesn't store evidence equally. What supports your belief stays accessible. What contradicts it becomes harder to recall the longer you hold the belief.

    The Backfire Effect. When someone directly challenges a belief you hold, your brain treats it as a threat. The response isn't reconsideration. It's defense. Studies show you actually leave the argument more convinced than when you entered it.

    Together, these three effects mean that the longer you hold a belief and the more it matters to you, the harder it becomes to change, no matter how much evidence says you should.

    Confirmation Bias in Today's World

    Confirmation bias has always been part of human thinking. What's changed is the environment around it. Algorithms feed you content that matches what you already believe. Social media shows you opinions from people who think like you. Search engines rank results based on what you've clicked before. Every system you interact with daily is built to confirm your existing views. Not by accident, but because confirmation keeps you engaged. The result compounds. The more confirming information you consume, the stronger your existing beliefs become. The stronger your beliefs become, the more your brain filters out opposing information. The more that information gets filtered, the harder it becomes to update your thinking, even when updating is exactly what the situation demands. This is mindjacking in action. The systematic replacement of your thinking by systems built to do it for you. And confirmation bias is one of its most powerful tools. It's visible everywhere. In public discourse where people can no longer agree on basic facts. In organizations that keep funding failing strategies long after the evidence says stop. In leaders who build teams designed to tell them what they want to hear. You might assume that smarter, more experienced people are less susceptible to this. The research says otherwise.

    The Smartest Person in the Room Gets It Wrong

    Here's what surprises most people. Confirmation bias doesn't get weaker as you get smarter. It gets stronger. Dan Kahan at Yale ran a study. He gave people a math problem where the correct answer contradicted their political beliefs. The smarter the person, the more likely they were to get the answer wrong, in the direction that protected their belief. More intelligence, applied more effectively, in service of the conclusion they'd already reached. A smart person who has formed a wrong belief is better at defending it. They find flaws in the opposing data faster. They construct more sophisticated arguments. They're more convincing to others and to themselves. I watched this play out in a board meeting. A CEO had championed a major strategy. Three separate analyses came back contradicting it. Each time, he found a different flaw in the methodology. By the end of the meeting he'd convinced the room the data was unreliable. The strategy continued. The outcome was exactly what the data predicted. He wasn't dishonest. He was skilled. His intelligence was working against him. And everyone in that room let it happen. If you're intelligent, experienced, and confident in your judgment, you are not immune to confirmation bias. You are more vulnerable to it. If you know someone who is always the smartest person in the room, send them this episode. They need it more than most.

    How to Overcome Confirmation Bias: What Actually Works

    Knowing about confirmation bias doesn't stop it. I know this from experience, not from research. I've been in rooms where everyone understood exactly what was happening and it happened anyway. What works is different from what you've probably been taught.

    Catch It in Yourself: The Flip Debate

    The moment I've most reliably caught confirmation bias operating in myself hasn't come from a checklist or a framework. It's come from a specific kind of conversation. I keep a small group of trusted advisors, people I call my kitchen cabinet. These aren't peers. They're almost never inside the organization. They have no stake in the outcome and no incentive to tell me what I want to hear. When I'm about to make a significant decision and I feel the pull of certainty, I take it to one of them. The conversation has a specific structure. I argue my position, fully and genuinely, the strongest version I can make. Then I stop. And I argue the opposite. Not a token acknowledgment of the other side. A real debate. I take the side I'm most resistant to and make the best case I can for it. What happens in that second argument is where confirmation bias shows up. The gaps. The assumptions I'd been protecting. The evidence I'd felt the urge to dismiss. When you're forced to argue a case you don't believe, you find the things you didn't want to see when you were arguing the one you do. An outside advisor is essential. Someone who will push back, ask hard questions, and notice when the flip argument is being faked. You can't do this with someone who needs something from you. The absence of stakes is what makes the honesty possible.

    Catch It in a Room: Two Signals to Watch For

    I've learned to watch for two signals that tell me confirmation bias has taken over a room. Both are visible before the decision is made. Almost everyone misses them. The first signal is the unwillingness to debate the other side. When a room has really decided, before the discussion is officially over, nobody wants to argue the opposing position. Not even hypothetically. Raise the other side and watch what happens. Eyes go flat. The conversation moves on. Someone changes the subject. If a room can't genuinely engage with the strongest case against the preferred direction, confirmation bias is driving. The second signal is circular justification. Listen for reasoning that keeps returning to its own starting point. The evidence for the decision is the decision itself. When you can't find an external reason, just a restatement of the conclusion, confirmation bias is driving. When I hear circular justification in a room, I stop the conversation. Not to embarrass anyone. To name what's happening. "We're not evaluating anymore. We're confirming. Let's go back to the evidence." That single intervention has changed the outcome of more decisions than any framework I've ever been taught.

    Change How You Decide: Full Options, Real Challenge

    Here's the most consistent change I've made in my own decision-making, and it comes directly from watching what confirmation bias costs people: I force a full pros and cons analysis on every serious option. Not just the one I'm leaning toward. This sounds obvious. Almost nobody does it. The natural pull is to build the case for the option that already feels right and compare it against the weaknesses of the alternatives. That's confirmation bias disguised as analysis. What I do instead is give every option on the table the same treatment. The best case for it. The best case against it. Without knowing in advance which one I'm going to choose. For decisions that carry real weight, I take it further. I bring in my brain trust: direct reports who will tell me what I don't want to hear, kitchen cabinet advisors, trusted board members. I ask specifically for the challenges. Not validation. Not enthusiasm. The places where the thinking is weak, the assumptions that might not hold, the evidence I might have filtered out. One question has changed how I approach every major decision: what am I not seeing? The answers, from people who have no incentive to protect my view, are exactly where the confirmation bias lives.

    Confirmation Bias Exercise: Try This Today

    This week, before you finalize any decision you've already started leaning toward, do one thing. Find one person outside your organization, someone with no stake in the outcome, and run the flip debate. Argue your position fully. Then stop and argue the opposite, with the same effort and commitment. Don't summarize the other side. Argue it. Make the best case you can for the view you're most resistant to. Notice what comes up in that second argument. The gaps. The assumptions. The evidence you'd been setting aside. That's where your confirmation bias is living. Run that exercise this week. Not once. Every time you feel the pull of certainty on a decision that matters.

    The Benefits of Overcoming Confirmation Bias

    The payoff from these practices compounds over time. Examined beliefs are more reliable than accumulated ones. Decisions that accounted for opposing evidence hold up better than decisions that filtered it out. Judgment that evaluates rather than confirms earns a different kind of trust from the people around you. Beyond your own decisions, catching confirmation bias makes you harder to capture. Every algorithm, every platform, and every persuader around you is built to exploit it.

    15 min
  3. Why Most Organizations Aren't Funding Innovation

    APR 29

    Why Most Organizations Aren't Funding Innovation

    Twelve official definitions for R&D. Zero agreement. The US government publishes at least a dozen distinct official definitions across agencies, accounting standards, tax authorities, and international bodies. Not one agrees with the others on where research ends and development begins. Trillions of dollars flow through R&D budgets every year. Boards approve them. Investors evaluate them. Governments subsidize them. Analysts benchmark them. And the term at the center of all of it has no settled definition. A company can gut its research investment without triggering a single alarm on its income statement. Researchers who gained rare access to confidential federal R&D data found exactly this: when companies face financial pressure, they cut research while leaving development essentially untouched, and the combined number barely moves. Every benchmark, every board conversation, every investment thesis built around the R&D line may be built on sand. Innovation, ideas made real, requires both. Research is how you find the idea. Development is how you make it real. Strip out the research and you're not innovating, you're iterating on what already exists. Strip out the development and you're just experimenting. The problem is that nobody in the room knows which one they're actually funding, because the definition that would tell them doesn't exist. Someone needs to draw the line. This episode is about why nobody has, and the definition I think should replace the chaos. By the end, I'm going to put that definition in front of you and ask you to push back on it. Not to agree. To tell me where it breaks.

    How We Got Here

    Four institutions took a run at defining R&D. Each one got it right for their own purposes. None of them got it right for yours.

    Frascati: Built for Governments

    In June 1963, OECD economists met at a villa in Frascati, Italy, south of Rome, and produced what became the international standard for measuring R&D across nations, now in its seventh edition. The Frascati Manual divides R&D into three tiers: basic research (theoretical work with no application in view), applied research (original investigation toward a specific practical objective), and experimental development (using existing knowledge to produce new products or processes). To qualify, an activity must be novel, creative, uncertain in outcome, systematic, and transferable. Used by governments across roughly 75 countries, it is solid for what it was designed to do: let nations compare R&D investment on consistent terms. What Frascati cannot tell you is whether a specific company's spending is creating competitive advantage. It counts the type of activity. It doesn't assess what the activity produces for the organization doing the spending. A company can satisfy every Frascati criterion investigating something every competitor already knows. The knowledge is new to them. That is enough. The accountants drew a different line, for a different reason, with a different consequence.

    FASB: Built for Accountants

    In October 1974, the Financial Accounting Standards Board issued Statement No. 2, Accounting for Research and Development Costs, now codified as Topic 730. Every public company filing under US GAAP operates under it. The rule: all R&D costs are expensed as incurred. Research, development, basic, applied: one line on the income statement. Their definition: research is a planned search aimed at discovery of new knowledge. Development is the translation of research findings into a plan or design for a new product. The rationale is explicit in the original standard. Future benefits from R&D are, in FASB's language, "at best uncertain." Expense everything immediately. The standard solved the problem it was asked to solve, which was accounting treatment: when to recognize the cost, not whether the cost was strategically sound. The consequence: sustaining engineering, feature maintenance, and incremental product updates all land on the same line as genuine exploratory research. Nobody looking at the income statement from outside can see the difference. The number is technically accurate and analytically opaque. Abraham Briloff, the late accounting professor at Baruch College, put it plainly: "Accounting statements are like bikinis. What they show is interesting, but what they conceal is significant." He was talking about financial reporting broadly. He could have been writing specifically about the R&D line. Researchers at Duke and London Business School spent years tracking corporate scientific output and found that it declined steadily across industries even as headline R&D spending kept rising. The combined number was hiding a substitution. Nobody on the outside could see it. Outside the United States, a different standard governs, and it creates a comparison problem most analysts never account for.

    IFRS: Built for International Investors

    IAS 38 governs R&D under IFRS, and its treatment differs from FASB in one significant way. Research costs are always expensed, same as FASB. But development costs can be capitalized as an asset on the balance sheet once a company can demonstrate technical feasibility, intent to complete, ability to use or sell the result, likely future economic benefit, adequate resources, and reliable cost measurement. A European company that capitalizes its development phase carries those costs as an asset: lower expenses in the period, higher total assets. An identical US company expensing everything under FASB takes the full hit immediately: higher expenses, lower assets. Same underlying investment. Incomparable financial pictures. Run the standard industry benchmark, R&D as a percentage of revenue, and you may conclude the US company is investing more aggressively. You may be comparing the same dollar invested under two different accounting regimes. Roughly 169 jurisdictions use IFRS. The United States does not. India uses an adapted version. Japan maintains its own standards board. The benchmark the industry trusts most is meaningless for cross-border comparison, and almost nobody says so.

    Section 174: Built for Tax Authorities

    The Internal Revenue Code adds another layer. Section 174 governs the deductibility of what the US tax authority calls "research or experimental expenditures," and the definition is not the same as FASB Topic 730. A company's R&D for tax purposes and its R&D for financial reporting can cover different activities and produce different numbers. The Tax Cuts and Jobs Act of 2017 tightened this further: domestic R&D expenses that were previously deductible immediately must now be amortized over five years, international over fifteen. The definition of what qualifies shifted when the timing rules changed. Within one country, one company, three definitional regimes apply simultaneously: Frascati for any government reporting, FASB for the income statement, and Section 174 for taxes. A single dollar of R&D spending can be classified three different ways depending on who's asking.

    The Gap None of Them Fill

    Four frameworks, built by four institutions, for four different purposes. Not one was built for the question that actually matters: is this investment creating new knowledge that gives us a capability nobody else can easily replicate? The gap between them is where innovation decisions actually live. The National Science Foundation recognized the problem clearly enough that it publishes a separate annotated document just to catalog the competing definitions, because they're too inconsistent to assume any two readers are using the same one. That gap isn't an oversight. It's a structural consequence of four institutions doing their own jobs well. The question practitioners need answered was nobody's institutional job. You've been in the room. The R&D number is on the slide. Nobody asks what's inside it, because the accounting standard doesn't require an answer, and the room has learned not to expect one. So it went unanswered. Until now.

    A Better Definition for R&D

    Research is work directed at creating new knowledge where the outcome is genuinely uncertain and the knowledge cannot be readily obtained from existing sources. Development is the translation of that knowledge into products, services, or processes that meaningfully advance an organization's capability in ways competitors cannot easily replicate. Four elements define it:

    Genuinely uncertain outcome. If you know what you're going to get before the work starts, it's engineering execution, not research. The uncertainty doesn't have to be total. Most applied research has a likely direction. But there has to be real doubt about whether the approach works, whether the knowledge emerges.

    Cannot be obtained from existing sources. This is the one nobody puts in writing. If the knowledge is already in the literature, available from a consulting engagement, or present in a competitor's published work, finding it again isn't research. Generating new knowledge and capturing existing knowledge are different activities. Only one belongs here. This criterion alone would reclassify a significant portion of what companies currently call R&D.

    Advances capability competitors cannot easily replicate. Development only qualifies when it translates research into something that genuinely moves the organization forward competitively. Sustaining engineering doesn't pass it. Feature parity doesn't. Competitive catch-up doesn't. All real work, none of it development under this definition.

    Agnostic to accounting jurisdiction. This definition doesn't tell you how to expense or capitalize anything. That's already governed by whichever standard applies. What it does is establish what genuinely belongs in each category, regardless of where the company files. That makes it usable across FASB and IFRS companies without translation.

    There is a simpler way to put it. For any project in your R&D budget, ask two questions. First: are we creating knowledge that cannot be obtained from existing sources? Second: does the result advance a capability competitors cannot easily replicate?
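    To see why the FASB/IFRS difference described above breaks cross-border benchmarks, here is a minimal arithmetic sketch in Python. The $100M spend, the five-year useful life, and the straight-line amortization are illustrative assumptions, not figures from the episode.

    ```python
    # Minimal sketch: the same development spend under FASB full expensing
    # vs. IFRS capitalization. All figures are hypothetical.

    def fasb_expense(dev_spend):
        """US GAAP (Topic 730): all R&D costs expensed as incurred."""
        return {"year_one_expense": dev_spend, "capitalized_asset": 0.0}

    def ifrs_capitalize(dev_spend, useful_life_years=5):
        """IAS 38: development costs meeting the six criteria can be
        capitalized, then amortized (straight-line assumed here)."""
        amortization = dev_spend / useful_life_years
        return {"year_one_expense": amortization,
                "capitalized_asset": dev_spend - amortization}

    spend = 100.0  # $100M of development work, identical in both companies
    print(fasb_expense(spend))     # {'year_one_expense': 100.0, 'capitalized_asset': 0.0}
    print(ifrs_capitalize(spend))  # {'year_one_expense': 20.0, 'capitalized_asset': 80.0}
    # Same dollar invested; the US filer reports five times the year-one
    # expense, which is why cross-border R&D-to-revenue benchmarks mislead.
    ```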

    21 min
  4. R&D Spending Is the Most Misleading Number in Business

    APR 15

    R&D Spending Is the Most Misleading Number in Business

    Every public company's R&D number is a lie hiding in plain sight. Not because anyone falsified it. Because the number was never built to tell the truth. It was built to satisfy an accounting standard written in 1974. And for fifty years, boards, analysts, and CEOs have been making billion-dollar innovation decisions based on a number designed by accountants to solve a different problem entirely. Here's what makes this genuinely strange. The real number exists. The government has been collecting it from every major US company for decades. It would answer the question every innovation leader and investor actually needs answered. And it is locked away by federal law. Confidential. Never published. Never seen by the people who need it most. It's sitting in a federal database right now. And there's a way to estimate it for any public company, without asking anyone's permission. I know it exists because I spent years building it from the inside.

    Why the R&D Signal Was Blurry

    When I was running innovation at HP, we discovered this problem firsthand. We had found a connection between R&D investment and gross margin that held up across decades of HP history. Better than anything Wall Street was using. But the signal was blurry. None of us could figure out why. The answer came from a question someone on the team asked almost as an aside: what if R&D isn't one thing?

    Research and Development Are Not the Same Thing

    Think about what actually lives inside a typical R&D budget. There's a team somewhere investigating whether a new approach could enable a capability that doesn't exist yet. No product defined. No spec written. Asking whether something is even possible. And there's a team building the next version of a product that ships in eighteen months. Spec locked. Timeline set. Engineering executing against a defined target. Both show up on the same line in the budget. Both get called R&D. Both count equally toward the number that gets reviewed every quarter. They are not the same thing. One is Research. The other is Development. Research is the work you do when you don't yet know what you're building. The output is understanding. New knowledge that might enable future products nobody has designed yet. You can't know exactly what you'll find. If you already knew, it wouldn't be research. Development is the work you do when you know exactly what you're building. The spec exists. The product is defined. The question isn't what to make. It's whether it can be made, on time, at cost, at quality. One creates the future. The other delivers the present. And for fifty years, every public company in America has been required to report them as one indistinguishable number. When we split the HP data along that line, Research on one side and Development on the other, the signal sharpened immediately. Research spend, measured against gross margin three to five years later, was a meaningfully stronger predictor than the combined number had ever been. The blur hadn't been in the gross margin data. It had been in the R&D number itself. Two fundamentally different things, averaged together, producing a number that looked precise and predicted almost nothing. But splitting R from D at the company level was only the beginning. The model was still lying to us. Just more quietly.

    Why Company-Level R&D Splits Still Mislead

    Even with the split, something was still soft. HP wasn't one business. It was dozens. Printers, PCs, servers, software, each running on different timelines, different technology cycles, different competitive dynamics. What if the R/D split meant something different depending on where it was applied? We pushed it to the product line level. Then further, to the platform level within product lines. Printers were the clearest example. HP's printer business wasn't one story. There were platforms built on established technology. Mature ink systems, proven print head chemistry, products that had been shipping for years. And there were platforms built on genuinely new core technology. New chemistry. New mechanisms. New approaches to fundamental problems that nobody had solved yet. Research investment by platform told a completely different story than Research investment by product category. The Research going into new technology platforms had a completely different relationship to future margin than Research going into mature platforms. Different time horizons. Different risk profiles. Different margin implications years down the road. Laptops told the same story. A traditional consumer laptop line and a high-performance portable workstation weren't the same investment. One was Development-heavy. Defined product, known market, engineering executing against spec. The other had genuine Research behind it. Unsolved thermal problems, new form factor constraints, and materials questions that hadn't been answered yet. When a single R&D assumption is applied across all of that, treating every dollar the same regardless of what it actually does, the signal disappears into the average. Peanut butter across the portfolio. The model only got honest when it got specific. Research by platform and Development by platform, matched against the margin performance of those specific platforms years later. Which platforms were building future margin? Which ones were running on margin that past Research had already bought? We could see it because we were inside the company. The question is whether anyone on the outside could ever see the same thing.

    The R&D Data the Government Collects and Won't Release

    Outside the internal budget process, everyone sees the same thing: a single line on the income statement. The US government recognized decades ago that the combined R&D number was analytically useless. So they built a system to collect the real one. The National Science Foundation runs a survey called the Business Enterprise Research and Development survey. The BERD survey. Every year, roughly 47,500 US companies are required to report their R&D spending broken into three categories: basic research, applied research, and experimental development. The split that every board and every investor needs to see. Mandatory. Collected. Verified. And then locked away. The firm-level data is confidential under federal law. The NSF publishes only industry-level aggregates. So every company fills out this survey and reports its real R/D split to the government. That data sits in a federal database. And the boards, investors, and analysts who need it most cannot access it. Researchers at Northwestern and Boston University were given rare access to that confidential data. What they found is striking. When companies face financial pressure and cut R&D, they don't cut Development. They cut Research. Almost entirely. Development barely moves. Every earnings squeeze. Every activist campaign. Every cost optimization program. Systematically targeting the one part of R&D that builds future margin. And because the combined number barely moves, nobody on the outside sees it happening. That's not a coincidence. That's the accounting standard doing exactly what it was designed to do: produce one clean number for the income statement. It was never asked to protect the future.

    How to Estimate the Research-to-Development Split Without Inside Access

    So what can actually be done without access to the locked data? More than most people realize.

    Step 1. Find the industry baseline. The aggregate BERD data is public at the sector level. Ask an AI tool for the Research-to-Development ratio for the relevant industry. That's the benchmark. Everything else gets measured against it. A company spending 8% of its R&D on Research in an industry where the average is 25% is telling you something the combined number never would.

    Step 2. Look at the gross margin trend compared to peers. Gross margin over time is the most honest external signal of Research health. A company with a declining margin relative to peers, while reporting flat or growing R&D spend, is almost certainly shifting the mix toward Development. The math works in the other direction, too. An AI tool can pull this comparison for any public company in minutes. This is exactly the signal that was invisible at HP until it was too late.

    Step 3. Look at patent trends compared to peers over time. Patents are an imperfect but useful directional indicator. Not because more patents always means more Research. It doesn't. But a sustained decline in patent output relative to peers, alongside flat R&D spend, suggests the investment is maintaining existing products rather than creating new knowledge. Combined with the gross margin trend, it starts to triangulate where the split actually sits.

    None of these three steps requires access to an internal budget. All of them can be done in an afternoon with public data and an AI tool. Together, they produce a working picture of the R/D split that the income statement was never designed to reveal. A sketch of the three checks appears after this description.

    What the R&D Split Revealed at HP That No One Outside Could See

    When Mark Hurd took over in 2005, HP was spending $3.5 billion on R&D. Roughly 4% of revenue. By 2009, his last full year as CEO, that had dropped to $2.8 billion. Revenue had grown significantly over that period, so the percentage had fallen further still, to under 2.5%. Both the dollar amount and the ratio were declining simultaneously while the company got larger. Wall Street tracked the combined number. The board reviewed it. Nobody raised a structural alarm. The Research component within that total was well below the industry average for comparable technology companies. Not slightly. Significantly. The margin consequences arrived years later. They always do.

    What Happens When the Definition of Research Doesn't Exist

    The R/D split gave us a real predictive signal. We ran with it. The conversations were sharper. But the team kept pulling on a thread that nobody expected. When we looked closely at what was being counted as Research across the company, no two groups were using the same definition.
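    Here is a minimal sketch of the three external checks above, in Python. Every number in it is a hypothetical placeholder, and the 50%-of-baseline threshold is an illustrative assumption; real use would pull sector BERD aggregates, peer gross margins, and patent counts from public sources.

    ```python
    # Minimal sketch of the three checks, with made-up numbers.

    industry_research_share = 0.25  # Step 1: sector baseline from public BERD aggregates
    company_research_share = 0.08   # analyst's estimate for the company under study

    # Step 2: gross margin (%) vs. a peer average over five hypothetical years
    company_margin = [38, 37, 35, 33, 31]
    peer_margin = [36, 36, 36, 37, 37]
    margin_gap = [c - p for c, p in zip(company_margin, peer_margin)]

    # Step 3: annual patent grants vs. a peer average (hypothetical counts)
    company_patents = [120, 115, 95, 80, 70]
    peer_patents = [110, 112, 115, 118, 120]

    flags = []
    if company_research_share < 0.5 * industry_research_share:  # illustrative threshold
        flags.append("Research share far below the sector baseline")
    if margin_gap[-1] < margin_gap[0]:
        flags.append("Gross margin eroding relative to peers")
    if company_patents[-1] < company_patents[0] and peer_patents[-1] >= peer_patents[0]:
        flags.append("Patent output declining while peers hold steady")

    # All three flags together suggest the mix has shifted toward Development.
    print(flags)
    ```

    No single flag proves anything; the point of the triangulation is that the three signals move together when Research is being quietly cut.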

    17 min
  5. The Innovation Metric Bill Hewlett and Dave Packard Used

    APR 1

    The Innovation Metric Bill Hewlett and Dave Packard Used

    Every public company in the technology industry measures innovation spending the same way: R&D as a percentage of revenue. Why? Because Wall Street tracks it. Boards benchmark it. CEOs get fired over it. And it tells you almost nothing about whether the spending is working. Bill Hewlett and Dave Packard knew that. From the very beginning, they measured something different. Something the rest of the industry has been ignoring for seventy years. And the proof was sitting in a paper that Chuck House pulled out and sent to me after a conversation at a Computer History Museum board meeting. By the end of this episode, you'll know what that metric is, why it works, and why the one everyone else uses makes it nearly impossible to tell whether your innovation investment is building the future or just burning cash. Here's how I found it.

    The Question That Wouldn't Let Go

    In the last episode, I talked about the argument with Mark Hurd. The argument was over whether HP should cut R&D as a percentage of revenue to match Acer. I knew Mark was fundamentally wrong. But I couldn't prove it. The only metric on the table was R&D as a percentage of revenue. That's what Wall Street expected. It's what shareholders expected. It's what the board expected. But I couldn't argue against it, because I didn't have the data. I needed a better metric. So I decided to go back to the beginning: HP's complete financial records dating back to the 1940s. Division by division. R&D project by R&D project. The actual operating data. I got access to all of it. The HP archive team gave me direct access to Bill and Dave's original notebooks. Now, data alone wasn't enough. It was mountains and mountains of data, and you're trying to extract the signal. What is the trigger in that data? The conversation that cracked it open happened outside HP.

    The Man with the Medal of Defiance

    I was at a Computer History Museum board meeting, standing next to Chuck House, and I shared with him the struggle I was having. A little context on Chuck. He spent twenty-nine years at HP. He was the Corporate Engineering Director, and he helped launch dozens of products. He's also the recipient, from David Packard himself, of the Medal of Defiance. The medal was given to him because David had told him at one point to kill a product line. Chuck went around that decision, put the product into the catalog, shipped it, and it turned into a phenomenal success. When David gave Chuck the medal, the citation was something along the lines of: "for going above and beyond the stupidity of management and doing what was right." Chuck and Raymond Price co-authored a book called The HP Phenomenon, published by Stanford University Press. It's the deep dive into the history of the innovation culture inside HP, and all of the metrics from the Bill and Dave days that put in place the structure behind HP's success. By the time I was at HP, Chuck had long since moved on. He was running Media X at Stanford, the university's research program on innovation, media, and technology. But we both served on the Computer History Museum board. At that board meeting, I shared the argument I'd had with Mark and the search for a better metric. I had a strong feeling there was something around gross margin. That R&D investment impacted gross margin. But a feeling isn't an argument. I needed data. I needed to correlate R&D spend to margin, and that's extraordinarily hard to do when you've got all these different product lines and divisions. Chuck got this little smile on his face and said, "I need to send you something."

    The Paper and the Whiteboard

    What he sent me was a paper. A journal paper he and a few of his colleagues had written decades before. And it laid out the connection between research investment and margin performance. The correlation I suspected but couldn't prove was right there on the page. I read it that night. The next morning I emailed Chuck, and I was just really excited. What they'd written decades ago matched what I was finding in the data. That email exchange turned into an invitation. I asked Chuck to come to HP Labs. We met in a conference room in Building 3, the main building for HP Labs at the time. And I'll tell you, I look back on this and it makes me smile a little, because this conference room was just down the hall from Bill and Dave's offices. HP preserved those offices exactly as Bill and Dave left them. You can walk in there today, see their desks, see their offices, just as they were on their last day. There's something about being that close to where it all started that makes the history feel less like history and more like unfinished business. Chuck walked up to the whiteboard and drew two things. On the left side: R&D as a percentage of revenue. The metric every company reports. The metric Mark used to argue HP was overspending. Chuck's point was simple. That metric tells you how much you're spending. That's it. Nothing about whether your products are any good. Nothing about whether customers value what you built. It's an input metric pretending to be an output metric. There are two ways to improve the ratio: spend less on research, or sell more of what you've already got. Neither of those is innovation. You can manipulate R&D as a percentage of revenue by cutting your R&D spend, or you can cut prices to drive top-line revenue. But neither has any connection to measuring whether your innovation is actually working. On the right side, he drew gross margin. The distance between the cost to make something and what the customer pays for it. Chuck said: that gap is a direct measure of differentiation. Solve a problem nobody else can solve, and customers will pay for that difference. Margin expands. Build a product that looks like everyone else's, and customers have no reason to pay more. They'll shop you. Margin compresses. Then he drew the line connecting both sides. Research investment flows in. If the research produces differentiated products, gross margin expands. That expanded margin funds the next round of research. A virtuous cycle. But only if you're watching margin. The moment you manage to the spending ratio instead, the cycle breaks. The boardroom conversation stops being about whether research is producing differentiation. It becomes about whether the spending number looks right compared to some peer. That's what happened with Mark. HP's PC group margins were compressing toward commodity levels. The response, driven by that revenue-ratio metric, was to cut research spending to match the compression. Exactly backwards. Compressing margins are the alarm bell. Fix the research pipeline. Fix your innovation. Not just more innovation, but good innovation. Don't defund it.

    Bill and Dave's First Product, and What It Actually Proved

    Standing at that whiteboard, I could see it running through HP's entire history. The HP 200A audio oscillator. 1939. HP's first commercial product. Competitors were selling oscillators for over $200. Bill and Dave were selling theirs for $89.40. Now, that's not because they undercut the market. What Bill figured out as part of his master's degree project at Stanford was that by using a light bulb inside the circuit as a self-regulating component, you could smooth the output in a way competitors couldn't match. Technically superior instrument. Radically cheaper to build. Walt Disney bought eight of a slightly modified version, the 200B, for Fantasia. The founders tracked the gap. Cost versus what customers pay. Not total revenue. That gap is gross margin. And that gap funded everything that came after. A lower-priced product, a higher-quality product, and the margin it generated is what drove HP's ability to continue to reinvest. David Packard codified it. He described what he called the six-to-one ratio. Products at HP were considered genuinely successful only when the profit from a product over time was six times the cost of developing it. If it was lower than that, it wasn't generating enough. And this is also how Bill and Dave decided which product lines to kill off. The ratio determined where research dollars were earning their return and where they weren't. The products that crushed that ratio weren't the ones with the biggest R&D budgets or the most engineers. They were the ones earning the highest return on the research dollar, because customers paid a premium for what the research produced. And here's what this enabled: self-financing. No debt. No banks. No Wall Street ninety-day pressure. That was back before HP was even public. It was the freedom to invest in research on a ten-year horizon, and that's only possible with healthy margins. At HP's margins, R&D spending landed at about eight to ten percent of revenue.

    Why Eight to Ten Percent Is Not a Contradiction

    Now you might hear "eight to ten percent of revenue" and think I'm contradicting myself. I just spent ten minutes telling you that R&D as a percentage of revenue is a useless metric. Here's the difference. Bill and Dave didn't start with the percentage and work backwards. They started with margin. They funded the research that kept margins healthy, and the spending that resulted happened to land at eight to ten percent. The percentage was a byproduct, not a target. The moment you flip that and make the percentage the goal, you've lost the plot. That's the distinction the entire industry missed. Chuck drew all of this in about twenty minutes on a whiteboard. Decades of institutional knowledge, distilled into one diagram. And the thing that hit me hardest wasn't the analysis. It was the realization that HP had already figured this out. The knowledge was in a paper that had been sitting around for decades. The company had just forgotten. What was old had become what was new. HP didn't need a breakthrough. It just needed to remember.

    Confirming the Pattern: Art Fong and John Young

    After the session with Chuck, I reached out to two other people who'd been there.
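    For anyone who wants the two measures above as formulas, here is a minimal sketch in Python. The oscillator's $89.40 price is from the episode; the unit cost, lifetime profit, and development cost are hypothetical figures, purely to show the arithmetic.

    ```python
    # Minimal sketch: gross margin as the differentiation gap, and
    # Packard's six-to-one screen. Costs and profits here are made up.

    def gross_margin(price, unit_cost):
        """The gap between what it costs to make and what customers pay,
        expressed as a fraction of price."""
        return (price - unit_cost) / price

    def passes_six_to_one(lifetime_profit, development_cost):
        """Packard's screen: a product counted as genuinely successful
        only if lifetime profit was at least six times its development cost."""
        return lifetime_profit >= 6 * development_cost

    # HP 200A priced at $89.40 (from the episode); $30 unit cost is hypothetical.
    print(round(gross_margin(89.40, 30.0), 2))   # 0.66

    # Hypothetical product: $550K lifetime profit on $100K of development.
    print(passes_six_to_one(550_000, 100_000))   # False: 5.5x, below the bar
    ```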

    20 min
  6. The R&D Metric Mark Hurd and HP Got Wrong

    MAR 25

    The R&D Metric Mark Hurd and HP Got Wrong

    Twenty years. Nearly one thousand episodes on this show. And starting today, we're going to try something a little different this season. Season 21 is about the decisions that actually determine whether innovation lives or dies inside any organization. The real calls. Not the fluff stuff we read in academic textbooks. I want to actually put you in the rooms where these decisions are happening. What went right. What went wrong. My objective is to expose you to the patterns in innovation decisions so that you can recognize them. Recognize them in yourself, in the people you need to influence, long before you step into any landmines. So let's get into it. The Encounter on the Top Floor of Building 25 Making generational decisions on innovation investment can be a make-or-break moment. What I refer to as a CLM, a Career Limiting Move. In my case, it started with a chance conversation with Mark Hurd, HP's CEO. Let me take you back to 2005. HP headquarters is on Page Mill Road in Palo Alto, referred to internally as Building 25. The top floor is where all of the executive offices are. That's where Mark's office was. I was up there doing some meetings and got snagged by Mark. Now, Mark had a reputation. He was a big numbers guy. He believed in what he called extreme benchmarking. You tore into your competitors' numbers. You knew your own numbers in and out.1 Others had warned me about this. He had a famous quote that everybody shared:  "Stare at the numbers long enough, and they will eventually confess." Mark believed you could not lead a critical role at HP if you did not know your numbers cold, inside and out. Didn't matter whether it was sales, CTO, a function, or a division. It didn't matter. And Mark tested everyone on the leadership team. Not just the leadership team. He would randomly stop employees and ask them for their numbers based on what group they worked in. It was non-stop. It was constant. To where support staff was literally constantly preparing briefing books for managers, VPs, leaders, just in case they got nabbed by Mark. In my case, I happened to be walking past his office. Mark waved me in. I sat down, and he immediately started drilling me on the CTO numbers. The number he focused on was R&D as a percentage of revenue. The Broken Benchmark: R&D as a Percentage of Revenue Now, if you've been a regular listener of this show, you know my opinion of that metric. R&D as a percentage of revenue is a meaningless number.2 It is absolutely meaningless. But every public company CEO at an innovation-dependent company, all the tech companies, AI companies, even automotive, they live by this number. It's a number that Wall Street looks at. You have to report it as part of your quarterlies, and from there it's simple math.3 When Mark grilled me, he was focused specifically on the PC group at HP. HP's number at the time for the PC group was about one and a half percent. R&D as a percentage of the PC group's revenue. Acer, which was a key competitor, was at 0.8%. Less than one percent. Roughly half of HP's number.4 Apple was at four percent.5 Mark's question, and he was really pounding on this, was: How do we get our ratios in line with Acer? Basically, he was saying: how do we cut costs so that our R&D expense as a percentage of revenue equals Acer at 0.8%? This is exactly the problem with choosing the wrong metric. Now I'm going to quote somebody who I think was probably one of the most insightful leaders in the business world. Charlie Munger. 
If you've ever watched any of his talks, he had a really strong opinion on certain metrics. Specifically EBITDA, earnings before interest, taxes, depreciation and amortization. Charlie referred to EBITDA as BS earnings. It was a metric Wall Street swore by, and Munger said it hid more than it revealed. His exact words: "Every time you see the word EBITDA, just substitute the word 'b******t' earnings."6 R&D as a percentage of revenue is the same problem in a different disguise. It's the metric that makes every company look like it's investing when all it's doing is spending. Mark was using a broken instrument to make a generational decision. If you make decisions based on R&D as a percentage of revenue, and then you do comparisons like "let's make our numbers look like Acer," what you are actually deciding to do is cut your R&D. That is generational. You will destroy a company's innovation capability over the next ten to twenty years before you can even have a hope of rebuilding it.7 "We Are Not Apple and We Never Will Be" I looked at him and said: Why aren't we raising our R&D spend to match Apple? Mark didn't hesitate. He said: "We are not Apple and we never will be." I took offense at that. I was offended that he wouldn't even contemplate it. And I pushed back. I pushed back hard. I argued we could be Apple in areas where we had genuine advantage. Here's one example. Go back to September 2004, about a year before my meeting with Mark. Carly Fiorina was still CEO. Carly had just handed Steve Jobs access to the retail shelf space HP spent thirty years building.8 At that time, HP controlled about nine, nine and a half percent of all retail shelf space for consumer electronics, the largest single entity holding in that category. Where did all that come from? It traces back to the calculator days in the 1970s. Those relationships, those stocking slots, that footprint: HP had spent three decades building that access. Apple was launching the iPod.9 It had no retail distribution in consumer electronics. None. And rather than HP taking advantage of that for itself, it actually opened the door and allowed Apple to come in. That is how the iPod got its traction. It bought Apple the time to build out its own retail strategy, which is ultimately what allowed Apple to be where it is today. That wasn't an accident of history. That was HP giving away a structural competitive asset. When I tried to push back on Mark, saying we could be better with the right investment, it didn't land. Mark viewed the PC business as a commodity. And if it's a commodity, you manage expenses. You don't invest in capabilities. Monthly Arguments and the Search for Better Metrics There was no decision made that day. But something shifted in me. That was the first of many monthly arguments I had with Mark. And they were non-stop. What it drove me to do was start looking for better metrics. We had something most companies don't have: HP's complete financial history going all the way back to the 1940s. I had access to the numbers, division by division, for one of the founding companies of Silicon Valley.10 We were getting traction. I was actually getting Mark to align. I was getting the HP board to align. And then what happens? Mark gets removed as CEO and Leo comes in. Then Meg kicked Leo out and she took over. Then the split of HP into two companies. Acer today? Still roughly 0.9% of revenue in R&D.11 Twenty years later, almost exactly where Mark wanted HP to get to. 
What I Would Do Differently: Right Argument, Wrong Language

If I'm being honest about what I would do differently: I had the right argument. I had the wrong language. The job wasn't to prove Mark wrong. Nobody changes their mind when they're being told they're wrong. I needed to stop speaking CTO and start speaking CEO. Meet him where he was. Make the case in the language of margin, risk, and competitive position, the language he already trusted. But that language didn't exist when it came to R&D and innovation. That's the reason I spent the rest of my career building something better. And that is what this season is about.

What Comes Next: The Metrics That Tell the Truth

That conversation with Mark sent me looking. If R&D as a percentage of revenue was the wrong metric, and I believe to my core that it was, and is, then what's the right one? We went back through HP's own numbers. We back-cast all the way to the 1940s, looking at the numbers by division and for the overall organization. And then something unexpected happened. The archive team at HP gave me access to something nobody had looked at in decades: Bill Hewlett and Dave Packard's original notebooks. What I found in there pointed me somewhere nobody had thought to look. In the next episode, we're going to talk about the metrics that actually tell the truth when it comes to R&D and innovation.

If this episode gave you some insights, shifted something, share it with somebody you think needs to hear it. Particularly if you're fighting with senior leaders over R&D investment. And in the comments below, tell me: what's the one benchmark you're required to hit yet have never questioned? Is it the right benchmark? Have you really looked at it? I genuinely would like to know. Show notes and this week's Studio Notes are over at philmckinney.com. Subscribe there; that's where the deeper analysis lives. New episodes post every Monday, and you don't want to miss the next one. I'll see you in the next episode.

    14 min
  7. How to Build a Decision System that Protects Your Thinking

    MAR 10

    How to Build a Decision System that Protects Your Thinking

The best decision-makers aren't better at deciding. They're better at controlling when, where, and how they decide. It took me twenty years to figure that out. Most people spend that time trying harder: more discipline, more willpower, more resolve to think clearly under pressure. It doesn't work. That's when mindjacking wins. Not through force. Through the door you left unguarded. The answer isn't trying harder. It's building systems that protect your thinking before the pressure hits. By the end of this episode, you'll have four concrete strategies for doing exactly that, and a one-page system you'll build before we're done. And I have something else to share at the end. Something I've been working toward for twenty years. Let's get into it.

Why Willpower Fails and Design Works

Ulysses knew his ship would pass the island of the Sirens. He also knew the song was irresistible. Sailors who heard it became incapacitated and drove straight into the rocks. He didn't try to be stronger than it. He had his crew fill their ears with wax and tie him to the mast, with strict orders not to release him, no matter what he said when the music reached him. His calm self setting rules for his compromised self. That's the core of everything in this episode. These are called commitment devices. The decision gets made early, when your thinking is clear, before you're tempted to take the wrong path. Studies tracking self-imposed contracts found that when people added meaningful stakes to their commitments, their follow-through nearly doubled. Not because they became more virtuous, but because they'd taken the choice off the table at the moment they were most likely to get it wrong. Stop asking, "How do I resist?" Start asking, "What can I decide now, so I don't have to decide under pressure?" Before you can build the right commitments, you need to know exactly where your thinking breaks down. Not decision-making in general. Yours.

Finding Your Personal Vulnerability

Think back across the last few months. Where did your thinking most clearly cost you? Some people stall. They keep researching past the point of useful information, using "I need more data" as cover for avoiding a commitment they know they need to make. Others make their worst calls at the end of long days. Saying yes when they mean no, because no requires energy they've already spent. Some get caught by urgency. A deadline appears, the pressure closes off their thinking, and they move fast. Only later do they discover the deadline was manufactured to do exactly that. Others walk into a room with a clear position and walk out agreeing with the loudest voice, unable to explain exactly when they shifted. And some defend decisions past the point where the evidence says stop, because stopping would mean admitting something about themselves they're not ready to face. Identify yours. Write it down before we go further. Your primary vulnerability is a design target, not a character flaw. You can't build around something you haven't named.

Four Strategies for Protecting Your Judgment

Strategy 1: Control When You Decide

Every morning I put on the same thing: a black golf shirt, blue jeans, and cowboy boots. Same brands, same routine, no decisions. My wife tolerates it. I've stopped apologizing for it. It's not a fashion choice. It's a cognitive load choice. Your brain has a finite amount of decision-making capacity each day. Every trivial choice draws from the same reserve you need for the decisions that actually matter.
What to wear, what to eat, which route to take. Eliminating those choices doesn't just save time. It protects the mental fuel you'll need later. Decision-making capacity isn't flat across the day. It peaks early, when you're rested and fresh. It degrades, measurably, as conditions erode. A call made at 8 a.m. and the same call made at the end of your seventh consecutive meeting aren't equivalent. Same person, different machine. Pull up your calendar from the last two weeks. Look at when your biggest decisions actually happened. For most people, it's not in a calm moment with a clear head. It's in the hallway, on a rushed call, in the last fifteen minutes of a meeting that ran over. That's not bad luck. That's the default you haven't changed yet. Write a standing rule: no significant, hard-to-reverse commitments after a certain hour, or after a certain number of back-to-back meetings, without a mandatory pause. Hold it like a policy, not a preference. Because preferences are exactly what disappear under the conditions where you need them most.

Strategy 2: Build Your Kitchen Cabinet

One of the things I credit most for whatever success I've had in my career isn't a framework or a methodology. It's four people. I call them my kitchen cabinet. They've seen my best decisions and my worst ones. They know when I'm rationalizing. They know when I'm avoiding. And they are not afraid to call me out when I'm off the tracks. Here's what surprises people when I describe them. They're not senior executives. They're not peers from inside my industry. They don't work in any organization I've ever worked for. They're a deliberate mix: different backgrounds, different areas of expertise, different ways of seeing the world. One of them has been in my cabinet for nearly thirty years. I trust them completely, and everything we discuss stays between us. That independence is the whole point. The people inside your organization have something at stake in your decisions. Your peers have their own agendas, even when they don't mean to. Your boss has a preferred outcome. None of that makes them bad advisors. It just means they can't give you the one thing you need most when a decision gets hard: a perspective with no skin in the game. Your kitchen cabinet can. Because they have nothing to gain or lose from what you decide, they can ask the question everyone else in the room is avoiding. They can tell you what you don't want to hear. And they'll do it before you've committed, when it still matters, not after the fact, when all they can do is watch. Build yours deliberately. Four to six people is enough. Prioritize independence over seniority. Look for people who will push back, not people who will reassure. And make the relationship reciprocal: you show up for their decisions too. The cabinet only works if the trust runs both ways and the conversations stay private. You don't need them for every decision. You need them for the ones where you're most at risk of fooling yourself.

Strategy 3: Write Your Position Before the Room Fills Up

I've sat in enough rooms where I walked in with a clear position and walked out having said almost none of it. Not because I was wrong. Because by the time the senior voice spoke and the heads started nodding, my own analysis felt less certain than it did twenty minutes earlier. The brain doesn't just nudge your answer when social pressure arrives. It rewrites your perception.
What you saw before entering the room changes to match what the room already believes, before you've consciously registered the pressure. Before any consequential group decision, write down where you stand. Three sentences. What you believe. What evidence supports it. What would genuinely change your mind. A note on your phone is enough. It doesn't need to be formal. It needs to be external, because your memory will quietly revise itself once the social pressure arrives. Those three sentences are a record of what you actually concluded before the room had a chance to work on you. When the discussion moves toward a position, you can then distinguish between "I'm updating because I heard something new" and "I'm caving because the silence is uncomfortable." Without that record, those two experiences feel identical in the moment, and one of them will reliably win.

Strategy 4: Assume the Failure Before You Commit

In August 2016, Delta Air Lines ran a routine scheduled test of the backup generator at their Atlanta data center. A transformer caught fire. Three hundred of Delta's 7,000 servers, improperly connected to a single power source, went dark. They couldn't fail over to backups. The servers that stayed online couldn't communicate with the ones that hadn't. The entire system collapsed: passenger check-in, baggage, websites, kiosks, and airport displays. Gone. Delta cancelled 2,100 flights over three days. $150 million in losses. Thousands of passengers slept on airport floors. The system had redundancy designed in. The backup had been tested. The specific failure mode, servers with no alternate power connection, was a known vulnerability that nobody had ever stopped to question. A year before the fire, cognitive psychologist Gary Klein, the researcher who developed the pre-mortem, had written a thought experiment describing almost this exact scenario. Imagine, he wrote, that an airline CEO gathered top management and asked: "Every one of our flights around the world has been cancelled for two straight days. Why?" People would think terrorism first. The real progress, Klein said, would come from mundane answers: a reservation system down, a backup that didn't activate, a cascade nobody had traced in advance. Delta built what Klein described, without ever running the question that would have found it. The pre-mortem is that question. Before you commit to a significant decision, assume it's six months later, and the decision failed. Not possibly, but definitely. Then ask: What went wrong? What did you know but not say? What did someone sense but find too awkward to raise in the room? "What could go wrong?" produces hedged answers. People soften concerns to preserve harmony. "It failed. What happened?" changes the psychology entirely. You're not being negative. You're being forensic. The things that surface, the concerns that felt impolitic, the risks that seemed too small to mention, are frequently the ones that end up mattering most. Each of these four strategies is a commitment device: a decision made early, while your thinking is clear, so you don't have to make it under pressure.

    25 min
  8. How to Quit Defending Decisions You Know are Wrong

    MAR 3

    How to Quit Defending Decisions You Know are Wrong

Ron Johnson was one of the most successful retail executives in America. He'd made Target hip. He'd built the Apple Store from nothing into a retail phenomenon. So when J.C. Penney hired him as CEO in 2011, expectations were sky-high. Johnson moved fast. He killed the coupons. Eliminated the sales events. Redesigned the stores. When his team suggested testing the new pricing strategy in a few locations first, Johnson said five words that explain everything that happened next: "We didn't test at Apple." Within seventeen months, sales dropped twenty-five percent. He was fired. And here's the part nobody talks about: Johnson had access to all the data. Every week, the numbers told the same story. Customers were leaving. Revenue was collapsing. The board was getting nervous. He could see it all. He just couldn't act on it. Because changing course would mean he wasn't the visionary who reinvented retail. He wasn't making a business decision anymore. He was protecting who he believed he was. That's the identity trap. And it doesn't just happen to CEOs. What if changing your mind didn't have to feel like losing yourself? Let's get into it.

Why Identity Bias Looks Like Your Best Qualities

The trap doesn't target bad thinkers. It targets good ones. Think about the entrepreneur who poured three years and her life savings into a startup. The data says it's failing. The metrics are clear. Her advisors are suggesting it's time to pivot or shut down. She has every analytical tool to evaluate this accurately. And she can't do it. She's plenty smart. The problem is that admitting failure would mean she's "a quitter." And she is not a quitter. That's not who she is. Johnson wasn't stupid either. He was brilliant. His identity as the retail visionary just happened to make him blind to the one thing that could save his company: the possibility that what worked at Apple wouldn't work at Penney's. He experienced his blindness as conviction. As leadership. And that's the disguise. Every other thinking error in this series (uncertainty, depletion, time pressure, social pressure) is something you can feel happening. You know when you're tired. You know when you're rushed. But identity fusion is invisible from the inside. It disguises itself as your best qualities. The entrepreneur calls it perseverance. Johnson called it vision. The investor who won't sell a losing position? He calls it discipline. Your ego doesn't announce that it's taking over. It puts on a costume that looks exactly like your strengths. And your brain? Your brain is in on it.

Why Changing Your Mind Feels Like a Threat

When a belief becomes part of your identity, your brain defends it as it would defend your body. Challenge that belief, and your brain responds the same way it would to a physical threat. Not metaphorically. The same neural circuits that protect you from danger activate to protect you from being wrong. That's why arguments about strategy or direction can generate so much heat and so little light. You're not debating a position anymore. You're defending territory. And sometimes you defend it long past the point where the evidence says stop. A project you've poured months into. A strategy you championed. A hire you fought for. The data says cut your losses, but you keep going, because walking away would mean all that time, all that effort, all that money was wasted. That's the sunk cost fallacy. And most people think it's about the money or the time. But it's not. Sunk cost is about identity.
Think about that manager who spent eighteen months building a new system. The team knows it's not working. She knows it's not working. But scrapping it doesn't just waste eighteen months of budget. It means her judgment failed. It means she led her team down the wrong road for a year and a half. "I've invested too much to quit" sounds like a financial calculation. It's not. It's an identity statement. What she's really saying is: "If I quit, I'm the kind of person who wastes eighteen months of people's lives." The sunk cost isn't financial. It's existential. And suddenly you can see that every time you've held on too long, stayed in something past its expiration date, defended something you knew wasn't working, the force holding you there wasn't logic. It was your self-image refusing to absorb the hit. So how do you loosen the grip once you realize it's there?

Three Warning Signs Your Ego Has Taken the Wheel

Here's what to watch for.

1. Emotional Intensity That Doesn't Match the Stakes

Someone suggests a different approach to a process you built. Not a criticism. Just an alternative. And you feel a flash of heat in your chest. Defensiveness. Maybe irritation. The reaction is way out of proportion to the suggestion. Pay attention to that gap. The intensity isn't about the process. It's about what being wrong would say about you.

2. How You Argue

When someone pushes back on your position, watch what happens. If you find yourself attacking the person instead of engaging their argument, that's identity talking. "You don't understand our industry." "You haven't been doing this as long as I have." The moment you shift from "here's why the evidence supports my position" to "here's why you're not qualified to question it," you've stopped defending a conclusion and started defending yourself. The tell is subtle: you'll feel righteous, not curious.

3. The Evidence Filter

When you're evaluating something objectively, new information can move you in either direction. But when identity is involved, watch what happens. You accept supporting evidence quickly, uncritically, almost with relief. Contradicting evidence? You tear it apart. You find flaws in the methodology. You question the source. You say, "That's just one study." When you're applying completely different standards depending on which direction the evidence points, that's not critical thinking. That's identity protection wearing a lab coat.

How to Loosen the Grip

So what do you do once you recognize the grip? Early in my career, I championed a technology direction that I was convinced was right. The evidence started coming back that it wasn't working. And I was doing exactly what I just described. Scrutinizing the bad data, embracing the good data, and getting irritated when people questioned me. It wasn't until a colleague looked at me and said, "You're not evaluating this anymore. You're defending it," that I realized my identity had completely hijacked my judgment. What helped was a shift in language that sounds simple but changes everything. Stop holding beliefs as part of your identity. Start holding them as a working thesis.

The Reframe

Listen to the difference between these two statements. First: "I believe this company will succeed." Second: "My working thesis is that this company will succeed." The first version fuses the belief to you. If the company fails, you were wrong. You made a bad bet. The second version builds in the expectation that your thinking will evolve. New data doesn't make you wrong. It makes you better informed.
The Proof

That colleague I mentioned? After that conversation, I started framing every strong opinion as a working thesis in my own head. Not out loud at first. Just internally. And the effect was immediate. I stopped feeling attacked when contradicting data came in. I started treating it as an update instead of a threat. The position I was defending? I reversed it completely. And the thing I was most afraid of, looking like I'd wasted everyone's time, never happened. The team was relieved.

The Practice

Next time you find yourself defending a position with more heat than it deserves, pause and restate it starting with "My working thesis is..." Then ask yourself: "What would I need to see to change this?" If you can't answer that question, if there's literally no evidence that could change your mind, that belief has become part of your identity. And your brain will protect it like one.

The Door

The goal isn't to be wishy-washy. Commit fully to your working thesis. Act on it with confidence. The difference is that you've built a door in the wall, and you've given yourself permission to walk through it if the evidence changes. That door is the difference between updating when you're wrong and doubling down until it costs you.

Why Identity Is the Amplifier

The identity trap doesn't operate alone. It recruits every other force we've covered in Part Two of this series. Facing uncertainty? Identity says, "You're not the kind of person who hesitates." Someone manufactures a deadline to pressure you? "Leaders are decisive. Act now." The whole room disagrees with your position? Identity whispers "I'm a team player," or it digs in with "I'm the one who sees what others miss." Identity is the amplifier. It takes every vulnerability from Episodes 10 through 13 and cranks up the volume. That's why we saved it for last. Everything else we've covered in Part Two? Necessary. But not sufficient. Because if you haven't dealt with your identity's grip on your beliefs, those skills have a backdoor that ego walks right through. And this is exactly what mindjacking exploits. I go much deeper in an article I wrote and in my dedicated mindjacking episode; links below. But the core mechanism is this: mindjacking doesn't just offer you convenient conclusions. It attaches those conclusions to who you are. "People like us think this." "Smart people choose this." Once a belief becomes a badge of identity, you'll convince yourself. No external persuasion required.

From Seeing the Trap to Building the Escape

Here's your challenge this week. Pick one belief you hold that you've never seriously questioned. Something professional. Your management philosophy. Your investment thesis. Your view on how your industry works. Something you'd describe as "just who I am." Now find the strongest argument against it. Not a straw man. The real, best case the other side would make. Sit with it.

    16 min