YPO Technology Network AI Brief

Stephen Forte

AI moves fast. Your briefing should move faster. The YPO Technology Network AI Brief is a daily breakdown of the AI developments that actually matter to your business. No hype, no jargon, no filler — just what changed, what it costs you or saves you, and what to tell your team on Monday. Hosted by Stephen Forte for the leaders who don't have time to chase the news but can't afford to miss it.

  1. 18 HR AGO

    Elevate The Adopters. Train The Curious. Phase Out The Refusers.

    There are two workforces inside your company right now, and the gap between them is widening every quarter. Writer's 2026 AI Adoption Survey found that super-users save 4.5x more time, are 5x more productive, and are 3x more likely to be promoted with a raise compared to their non-adopting peers. Same job title. Same company. Same tenure. Stephen makes the case that this is not a productivity bump — it is a different employee — and that the historical PC adoption analog (which took 15 years to show up in productivity statistics) is the wrong mental model. This cycle is moving in months, not decades.

    What's covered:
    - The hard data — Writer's April survey on super-users, Gallup's 50% adoption number, Microsoft's 22-point critical-thinking lift when managers model AI use, and the executive numbers nobody is saying out loud (77% will not promote non-adopters, 60% are planning layoffs of AI refusers, 92% are cultivating an AI elite)
    - What the adopters are actually doing differently — not "they use AI more." They have internalized a different mental model of work: decomposition, iteration, critical evaluation. The thinking skill, not the software skill.
    - Why the PC analog is misleading — Solow's 1987 productivity paradox took 15 years to resolve, and that slow burn was a gift. This cycle is opening gaps in months. The story of a software engineer in his late twenties being measurably outpaced by 23-year-olds who design their workflows around AI from the first keystroke.
    - Three moves CEOs should make, in sequence — (1) elevate the adopters now into broader scope and role redesign, (2) replace generalized AI training with workflow-specific 1:1 coaching that sits next to each employee and shows them what AI does for THEIR Tuesday morning, (3) be honest with the small percentage who will not adapt
    - A note on what this is not — AI fluency is a skill, not a personality test. Most people can acquire it. The bifurcation is between the curious and the refusers, not the brilliant and the average.

    The thesis: This is not about whether AI is the future. That argument is over. This is about whether your company elevates the adopters, trains the curious, and is honest with the refusers — or protects the resisters until it cannot afford to anymore.

    The challenge: Walk the floor this week. Have a real conversation with one super-user about how they work now. Have a real conversation with one refuser about what they think is going to happen. The data you collect on those two walks will tell you more about your company than any AI strategy deck.

    The YPO Technology Network AI Brief is hosted by Stephen Forte for YPO members and senior operating leaders.

    14 min
  2. 1 DAY AGO

    OpenAI Changed The Model. Your Company Didn't Notice. That's The Whole Problem.

    A week ago Tuesday, OpenAI quietly swapped the default ChatGPT model from GPT-5.3 Instant to GPT-5.5 Instant. Most enterprises did not notice. Their sensitive workflows ran on a different model at lunchtime than they did at breakfast — with a different hallucination profile on legal, medical, and financial outputs — and nobody at the C-level was told. Stephen reads the default swap as the cleanest test of where your company sits on a much larger divide: PwC's finding that 74 percent of AI's economic value is being captured by 20 percent of companies.

    What's covered:
    - What actually changed on May 5 — GPT-5.5 Instant becomes the default, GPT-5.3 phased out for paid users in 90 days, real benchmark improvements on hallucination in sensitive domains, and the parallel rollout of GPT-5.5-Cyber for vetted teams
    - The three-question test — which model is our team on, when did it last change, and did anyone evaluate the new one against our workflows? If you cannot answer all three quickly, you are in the 80%.
    - The core reframe — two ways a company can relate to AI right now: consume it as a feature (whatever's in the chat box is what you run) or run it as infrastructure (versioned, evaluated, governed). The 74/20 divide is not about adoption. It is about posture.
    - Three concrete moves the leaders are making — version-controlling the model stack, running an evaluation harness on sensitive workflows, and picking growth use cases on purpose rather than productivity use cases by accident
    - The GPT-5.5-Cyber footnote — why specialty AI procurement is starting to look like the Pentagon's (callback to S1E60), and what that means for the commodity tier most enterprises are buying without realizing it

    The thesis: The companies that noticed last Tuesday's default swap are running infrastructure. The companies that did not are running a chat box and hoping. That is not a tools problem. That is the whole problem.

    The challenge: One engineer, one evaluation harness, one person whose job description includes "tell me when the model changed." That is the gap between the 20 percent and the rest. Run the three-question test this week.

    The YPO Technology Network AI Brief is hosted by Stephen Forte for YPO members and senior operating leaders.

    11 min
  3. 2 DAYS AGO

    Eight AI Vendors. One Customer. The Procurement Lesson Hiding In Plain Sight

    On May 1, the Pentagon signed agreements with eight frontier AI labs — SpaceX, OpenAI, Google, NVIDIA, Reflection, Microsoft, Amazon Web Services, and Oracle — to deploy models on Impact Level 6 and 7 classified networks. Most of the press read it as a defense story or a politics story. Stephen reads it as the procurement playbook most enterprises haven't built yet.

    What's covered:
    - What the Pentagon actually structured on May 1 — eight vendors named, Impact Levels 6 and 7, the $200M Google contract from 2025, the separate $500M Scale AI deal, and Oracle added on the day of the announcement
    - Three things the Pentagon got right — multi-vendor sourcing against a single capability scope, use restrictions written into the contract rather than into policy, and an expandable framework rather than a fixed roster
    - Why Anthropic ended up frozen out — the use-case restrictions it refused to remove, the supply-chain risk classification that followed, and what its absence teaches operators about vendor-customer values alignment
    - Three operator moves for your own AI vendor stack — pull the real list, classify by workflow class rather than by product, and put use-case scoping into the contracts at renewal
    - Why compute reliability is what makes vendor optionality possible in the first place

    The reframe: Most enterprises are running a roster. The Pentagon built a framework. One bar, one contract template, multiple vendors qualified, workloads portable. A new vendor signs and gets in. An old vendor falls behind and gets de-prioritized without a renegotiation.

    The challenge: Probably three weeks of work to build a vendor stack that survives the next model release without an emergency board meeting. The Pentagon did the procurement work at signing time. You can do it at renewal time. It is cheaper either way.

    The YPO Technology Network AI Brief is hosted by Stephen Forte for YPO members and senior operating leaders.

    10 min
  4. 3 DAYS AGO

    From Press Release to P&L: Anthropic's Real Story

    Anthropic's annual conference last week shipped enterprise infrastructure rather than another headline model — Managed Agents, multi-agent orchestration, outcomes-as-rubric, a memory feature called dreaming, and a serious compute expansion. Most of the coverage reads like a product launch recap. Stephen reframes it as a P&L event and walks through the three-stage method for turning announcements like these into a workflow change a CFO will defend in the budget cycle.

    What's covered:
    - What Anthropic actually shipped — Managed Agents, multi-agent orchestration, outcomes (rubric-based self-checks), the dreaming memory feature, and why the compute expansion is the silent variable that turns a fragile experiment into a budget line
    - Why most enterprise AI rollouts stall — not a model problem, a sequencing problem
    - Stage one — Build the bad version in Perplexity Computer. Three patterns show up almost every time: the order is wrong, the agent reads the instruction differently than you wrote it, and the QA step belongs at every stage rather than at the end.
    - Stage two — Run it manually for two weeks with a senior person in the loop and a daily two-line journal that becomes the operating manual
    - The handoff — how Perplexity Computer writes the spec as markdown while you iterate, and how that markdown folder seeds Anthropic's Managed Agents with light tweaks rather than a rewrite
    - Stage three — Move the hardened version into a managed environment with long-running sessions, scoped permissions, persistent memory, and an audit trail

    The thesis: Use Perplexity Computer, or a tool like it, to learn the workflow. Use Anthropic Managed Agents, or a tool like it, to run the workflow. Two different tools for two different jobs. Discover, then operate.

    The challenge: Pick one workflow this quarter — reconciliation, expense triage, sales-order processing, customer onboarding, ticket routing. Build the bad version in a flexible environment over a week. Run it for real for two weeks. Then harden it into a managed environment built to run it every day. Ninety days, end to end. One workflow, demonstrably cheaper, faster, or more accurate than it was the quarter before.

    The YPO Technology Network AI Brief is hosted by Stephen Forte for YPO members and senior operating leaders.

    16 min
  5. 5 DAYS AGO

    Secrets, Identity, And The Blast Radius Of A Helpful Agent

    Weekend Special Edition. The Saturday deep dive on secrets management for AI agents — the unglamorous infrastructure decision that determines how big your blast radius is when something goes wrong. Stephen walks through the BuildClub stack, the patterns we use with clients, and the specific mistakes that cost companies the most.

    The single thesis: Treat your agents like employees, not like scripts. Give them an ID. Give them the minimum access they need. Write down what they have. Revoke it when they leave. It is the same playbook you already run for humans.

    What you will get out of this episode:
    - Why the over-provisioning trap is universal — and why it is not a careless-developer problem
    - The two angles for production deployment: corporate identity in your tenant, and giving the agent its own user account
    - How to structure your secrets vault so a single leak does not own the whole company
    - Where to keep the seed credential — and why GitHub Actions secrets plus OIDC federation beats a static admin key
    - OAuth 1 vs OAuth 2 vs static API keys, explained for a non-technical audience
    - The two practical disciplines that matter most: rotation and revocation
    - BuildClub's offline-first build pattern and why it gives client IT a precise ask instead of a fuzzy one

    Vendors and tools mentioned:
    - Infisical — open-source secrets management; what we run at BuildClub
    - 1Password Service Accounts — a solid alternative if your org already runs 1Password
    - Microsoft Entra Agent ID — first-class identities for AI agents in your tenant
    - GitHub Actions OIDC — short-lived cloud credentials, no long-lived keys
    - GitGuardian — automated secret scanning across your repos

    The two-thing close: If I were sitting in your seat this quarter, I would (1) pull the list of every agent, automation, and integration in your company that holds a credential — just the list, not a project — and (2) rebuild one workflow the right way as the template for everything that follows.

    Listen. Share with a fellow member who is shipping their first agents. Stay sharp. Hosted by Stephen Forte, CEO of BuildClub. The YPO Technology Network AI Brief is a daily podcast for CEOs and senior business leaders.
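    For members who want to see what the OIDC point looks like in practice, here is a minimal sketch of a GitHub Actions job that trades its short-lived OIDC token for temporary AWS credentials instead of storing a static admin key. The account ID, role name, and region below are hypothetical placeholders, not values from the episode; your cloud team would substitute a role configured to trust your GitHub organization.

    ```yaml
    # Sketch only: a deploy job with no long-lived cloud key in any secret store.
    name: deploy
    on: [push]
    permissions:
      id-token: write   # lets the job request an OIDC token from GitHub
      contents: read
    jobs:
      deploy:
        runs-on: ubuntu-latest
        steps:
          # Exchanges the job's OIDC token for temporary AWS credentials.
          # Role ARN is a placeholder; the role must trust GitHub's OIDC issuer.
          - uses: aws-actions/configure-aws-credentials@v4
            with:
              role-to-assume: arn:aws:iam::123456789012:role/agent-deploy
              aws-region: us-east-1
          # Credentials above expire on their own; nothing to rotate or revoke by hand.
          - run: aws sts get-caller-identity
    ```

    The design point is the one from the episode: the seed credential problem disappears because the workflow never holds a durable secret at all.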

    16 min
