AI to ROI (fka Metrics that Measure Up)

Ray Rike

AI to ROI is a podcast that shares how enterprises translate AI investments into measurable business value. Hosted by Ray Rike, Founder and CEO of Benchmarkit, the show features senior enterprise leaders and AI software executives discussing how AI initiatives move from pilots to production, and how ROI is actually measured and achieved. In addition, each week we publish a bonus episode with AI to ROI Newsletter co-author Peter Buchanan to discuss the Big Story of the Week. The AI to ROI podcast is the evolution of the original "Metrics that Measure Up" podcast.

  1. NVIDIA – The Full-Stack Maestro

    3D AGO

    NVIDIA – The Full-Stack Maestro

    Five months ago, Ray and Peter called NVIDIA the maestro of the AI economy. Since then, NVIDIA has not just conducted the orchestra. It has rewritten the music and may be building the entire concert hall. In this episode, Ray and Peter revisit their October thesis, walk through everything NVIDIA unveiled at GTC, and break down what it all means for enterprise AI buyers navigating infrastructure, inference costs, and procurement strategy.

    What we covered in this episode:

    From GPU maker to full-stack AI platform: the transformation is complete. NVIDIA's strategic intent is no longer just selling chips. It is embedding its technology across the entire AI stack and becoming the foundational layer on which the rest of the AI economy rests. Ray draws the only historical parallel he can find: what IBM was to enterprise technology from the 1960s through the 1980s. The difference is that NVIDIA is moving faster, with more cash, and with a software flywheel IBM never had.

    GTC was not a product launch; it was a platform declaration. NVIDIA unveiled the Vera Rubin platform, a fully integrated AI supercomputer with liquid cooling and a two-hour installation window. They licensed Groq's LPU architecture in a $20 billion deal that combines GPU and LPU chips to deliver 35x the token throughput of current Blackwell systems. They launched NemoClaw (an enterprise-grade agent framework already partnered with Adobe, Salesforce, and SAP), Dynamo (an open-source inference operating system), and the Nemotron family of open-source frontier models. Jensen committed $26 billion of free cash flow over five years to build best-in-class frontier models with no outside funding required.

    The financial performance is in a category by itself. Fiscal year 2026 revenue came in at $215.9 billion, up 65% year over year and 8x since 2022. Data center revenue exceeded $190 billion. Free cash flow hit $97 billion, translating to a roughly 45% free cash flow margin. Combined with 65% growth, that is a Rule of 40 score of about 110. Ray notes he has never seen anything like it at scale, and NVIDIA is a hardware company running 80% gross margins. CFO Colette Kress described their inference position as: "right now, we are the king of inference."

    The moat is not hardware; it is ecosystem lock-in. Since 2022, NVIDIA has committed over $50 billion across 170 venture deals, with corporate deal volume growing from 12 deals in 2022 to 67 deals in 2025. Portfolio companies include OpenAI, Anthropic, xAI, CoreWeave, and Lambda. Sovereign AI contracts signed since October total $30 billion across France, the Netherlands, Canada, Singapore, and the Middle East. Hyperscalers still represent roughly 50% of revenue, but the faster-growing segments are sovereign entities, enterprise verticals, and NeoCloud providers, which is exactly the diversification NVIDIA needs as hyperscaler CapEx normalizes.

    The risks are real but manageable from where NVIDIA sits today. Custom ASICs from Google, Amazon, Meta, and Microsoft represent the most credible competitive threat, though those chips are optimized for internal platforms and do not solve multi-cloud or on-premise deployment needs. Export control escalation remains a live risk, with NVIDIA restarting NH200 production for China. TSMC concentration is a structural vulnerability, especially given geopolitical risk around Taiwan. And three hyperscalers account for over half of NVIDIA's receivables, some of whom are actively building competing chips.

    What enterprise AI buyers should do right now. Ray and Peter close with four concrete takeaways for enterprise buyers: evaluate the full infrastructure stack, not just GPU cost; model inference economics carefully before deciding which models to run and where; pursue a strategic partnership with NVIDIA rather than transactional procurement, because partnership creates supply access standard customers do not get; and do not assume custom silicon from hyperscalers solves your problem, because data residency and on-premise requirements often mean NVIDIA needs to be part of the solution regardless. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
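    For readers who want the metric mechanics, the Rule of 40 math behind those figures can be sketched in a few lines. This is a minimal illustration using the revenue and free-cash-flow numbers quoted above; the function name is ours.

```python
# Rule of 40 = revenue growth % + free cash flow margin %.
# Figures below are the fiscal 2026 numbers quoted in the episode summary.

def rule_of_40(revenue_growth_pct: float, fcf_margin_pct: float) -> float:
    """A company 'passes' the rule when growth plus margin exceeds 40."""
    return revenue_growth_pct + fcf_margin_pct

revenue_b = 215.9        # total revenue, $B
free_cash_flow_b = 97.0  # free cash flow, $B
growth_pct = 65.0        # year-over-year revenue growth, %

fcf_margin_pct = free_cash_flow_b / revenue_b * 100  # works out to ~45%
score = rule_of_40(growth_pct, fcf_margin_pct)       # ~110 at this scale

print(f"FCF margin: {fcf_margin_pct:.1f}%, Rule of 40: {score:.0f}")
```

    A typical top-quartile SaaS business clears 40; a score near 110 at hundreds of billions in revenue is why Ray calls it unprecedented.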

    34 min
  2. The AI-Native Services Playbook - with Jake Saper, General Partner - Emergence Capital

    MAY 7

    The AI-Native Services Playbook - with Jake Saper, General Partner - Emergence Capital

    Our host, Ray Rike, sits down with Jake Saper, General Partner at Emergence Capital, to unpack the firm's AI-Native Services Playbook. Jake brings a unique lens: 12 years at Emergence, early-stage bets on companies like Zoom, and seven AI-native services businesses already in the portfolio. The conversation covers what separates AI-native services from SaaS, why the business model is harder to execute than it looks, and the five metrics and structural choices that determine who wins.

    WHAT WE COVER IN THIS EPISODE

    Domain Expertise: Critical, But Not Required from the Founders. AI-native services companies are selling outcomes, not products. That means trust and credibility are the first sales. Domain expertise is non-negotiable, but it does not have to live in the founding team if two conditions are met: the founders go as deep as humanly possible on the service before launch, and they hire senior domain experts early. Emergence portfolio company Hanover Park, an AI-native fund administrator, is the case study. The founder interviewed 150 CFOs before writing a line of code and hired respected fund accounting veterans to sit alongside the AI. That combination unlocked enterprise trust from day one.

    Hire a Product Leader Before You Think You Need One. The biggest structural trap in AI-native services is over-relying on human delivery while the product falls behind. Market pull is strong by design: if you promise faster, better, cheaper outcomes in an existing market, customers will buy. But if delivery is primarily human, you have a services company with venture capital financing and no AI leverage. The fix is a dedicated product leader whose sole KPI is productizing the service. The best AI-native services companies run a tight feedback loop between the doers (service delivery) and the builders (engineering), and the PM owns that loop.

    The Mirage of Product Market Fit. In SaaS, fast growth plus strong net dollar retention meant you had product market fit. In AI-native services, those are necessary but not sufficient. Revenue growth powered by human labor is a false signal. True product market fit requires that AI is delivering the majority of the service value. Jake's framework: track both leading indicators (a North Star product metric showing AI leverage improvement, such as human review time per contract or time to migrate a line of code) and lagging indicators (revenue per FTE trending up quarter over quarter, and gross margin). The leading indicators tell you if you're building leverage. The lagging indicators confirm it.

    Outcome-Based Pricing: The Direction of Travel. AI-native services companies that started with labor-based pricing will need to migrate toward outcome-based pricing over time, and the transition requires patience. Emergence portfolio company Prosper AI, an AI-native healthcare services provider handling prior authorization and benefits verification, navigated this by moving a portion of contracts to resolution-based pricing while keeping the remainder on a per-minute basis. That hybrid approach gave both sides the data and comfort to expand the outcomes-based portion at renewal. Jake's view: as AI does more of the work, downward pricing pressure is inevitable, but upward margin pressure offsets it.

    Revenue Per FTE and Gross Margin: The Two Metrics That Matter Most. Revenue per FTE is the primary signal of AI leverage, but it needs to be benchmarked two ways: against the legacy service provider in the same vertical, and against itself quarter over quarter. The latter is more important. If revenue per delivery FTE is not improving each quarter, the AI is not compounding. On gross margin, the industry is still in the Wild West. Two common errors: allocating service delivery headcount to R&D instead of COGS because the team "helped train the model," and excluding inference spend from COGS. Both understate the true cost of delivery. Customer-specific model training belongs in COGS. Base model training belongs in R&D.

    The Moat Question. Brand trust and proprietary data are the two sources of durable advantage. Brand matters because enterprises buying AI-delivered outcomes need a trusted guarantor. Data matters because high-volume AI-native operations accumulate transaction data that legacy providers, running at lower volume with more human overhead, simply cannot match. Emergence portfolio company Harper, an AI-native insurance broker, is outperforming brokerages ten times its size on placement speed and carrier-risk matching because its data volume is superior.

    LINKS
    Emergence Capital AI-Native Services Playbook: emcap.com

    ABOUT AI TO ROI
    Ray Rike is the Founder and CEO of Benchmarkit, the leading B2B SaaS and AI-native software benchmarking company. The AI to ROI podcast brings a metrics-first lens to enterprise AI adoption, ROI measurement, and the business models being built on top of AI. Subscribe on your favorite podcasting app and connect with Ray on LinkedIn. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
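    The two lagging indicators discussed above can be sketched in a few lines. All figures and names here are illustrative, not from the episode; the point is the shape of the calculation, including which costs land in COGS.

```python
# Sketch of the two lagging indicators for an AI-native services
# business: revenue per delivery FTE (benchmarked against itself
# quarter over quarter) and gross margin with inference spend and
# customer-specific model training allocated to COGS.
# All numbers below are hypothetical.

def revenue_per_fte(revenue: float, delivery_ftes: int) -> float:
    return revenue / delivery_ftes

def gross_margin(revenue: float, delivery_labor: float,
                 inference: float, customer_training: float) -> float:
    # Per the episode, delivery headcount, inference spend, and
    # customer-specific training all belong in COGS; only base model
    # training stays in R&D.
    cogs = delivery_labor + inference + customer_training
    return (revenue - cogs) / revenue

q1 = revenue_per_fte(2_000_000, 20)  # $100,000 per delivery FTE
q2 = revenue_per_fte(2_600_000, 22)  # ~$118,000 per delivery FTE

# If revenue per delivery FTE is not rising each quarter, the AI is
# not compounding.
print(q2 > q1, round(gross_margin(2_600_000, 900_000, 150_000, 50_000), 2))
```

    Misclassifying the delivery or inference costs into R&D would inflate this margin, which is exactly the accounting error Jake warns about.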

    30 min
  3. The Role of the CAIO in a Managed Service Provider - with Jim Piazza, CAIO Ensono

    APR 28

    The Role of the CAIO in a Managed Service Provider - with Jim Piazza, CAIO Ensono

    Ray Rike sits down with Jim Piazza, Chief AI Officer at Ensono, a managed services provider scaling AI across both its internal operations and customer environments. Jim brings a rare combination of deep infrastructure experience, nearly a decade at Meta scaling data center operations with machine learning, and a rigorous framework for connecting AI investments to business outcomes that executive operators can actually measure.

    Key Topics:

    Defining the Chief AI Officer Role in an MSP: Jim describes the CAIO role as a blend of CDO, CIO, and CTO with an AI lens, but with a critical distinction: the job is not to ask what AI can do. It is to identify where AI improves service delivery, customer outcomes, and financial performance. At Ensono, that meant starting small as VP of Predictive Systems, demonstrating results, and earning the mandate to expand. Prioritization, not ideation, is the core skill.

    Building AI Tools That Drive Internal Operational ROI: Ensono developed three production AI systems for internal use. Envision Predictive Engine analyzes telemetry data across systems to predict failures before they cause business impact, including one case where a problem was detected 144 minutes before it would have affected a major logistics customer, outside Ensono's own scope of responsibility. Diagnose Now puts the right diagnostic data in front of engineers at the right moment and has delivered up to a 66% reduction in mean time to repair in A/B testing. ChangeGuardian assigns risk scores to the 8,000-plus changes Ensono executes monthly, auto-generating methods and procedures from a decade of historical change data to reduce both risk and manual effort.

    Structuring AI Governance: The Three Musketeers Model: Jim, the CTO, and the CIO operate as a deliberate leadership triad. The CTO owns the platforms. The CIO owns data quality and structure. The CAIO owns the build-versus-buy decision and solution development. Shared accountability, not siloed ownership, drives alignment. Each business unit also contributes one to two subject matter experts through a formal value stream mapping process to identify where AI should focus first.

    Measuring AI ROI Before Writing a Line of Code: Jim's most consistent lesson: define your value metrics before touching the technology. AI use cases must tie back to core business metrics such as mean time to repair, customer satisfaction, SLA risk reduction, and gross margin improvement. Business unit leaders own the outcome measurement. The CAIO owns the budget and the technology. That separation of responsibility keeps AI programs anchored to results rather than activity.

    The CAIO and CIO Relationship: Where the Lines Get Drawn: For companies bringing in a Chief AI Officer alongside an existing CIO, Jim offers a practical delineation. The CIO owns data infrastructure and quality. The CAIO is a consumer and a builder who depends on that foundation. Without clean, accessible data, AI programs stall regardless of the use case. The CAIO's job is to surface missing or insufficient data and partner with the CIO to close the gap.

    Lessons Learned and Career Advice for the AI Era: Jim's framework for AI program success: start with one or two high-probability use cases where data is already in good shape, build credibility through results, then expand. Avoid the ten-pilot trap. Kill weak use cases early. For early-career professionals, his advice is equally direct: learn to work with AI, not compete with it. Build problem framing, critical thinking, and business judgment. Technical fluency matters, but business judgment is what separates the people AI replaces from the ones AI makes more valuable.

    This episode is essential listening for technology and operations executives navigating the practical reality of AI deployment inside complex enterprise environments. If you are a CIO, CTO, COO, or Chief AI Officer trying to figure out how to structure governance, measure impact, and build internal credibility for AI programs, Jim Piazza gives you a real-world operating model, not theory. For managed services leaders and enterprise buyers evaluating MSP capabilities, the Ensono case studies show what it looks like when an MSP moves from reactive service delivery to predictive, AI-driven outcomes. And for executives still debating whether to hire a Chief AI Officer, this conversation makes a direct case for what the role should own, how it should partner, and what success looks like when it is done right. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
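    Jim's "define your value metrics before touching the technology" discipline lends itself to a concrete sketch. The use cases, baselines, and targets below are hypothetical, not Ensono's; the shape is what matters: each candidate use case is registered against a core business metric with an owner before any AI is built.

```python
# Hypothetical value-metric register, defined before any AI build, in
# the spirit of Jim's framework: every use case ties to a core business
# metric, a baseline, a target, and a business-unit owner who measures
# the outcome (while the CAIO owns budget and technology).

use_cases = [
    {"name": "predictive failure detection",
     "metric": "mean time to repair (minutes)",
     "baseline": 180, "target": 60, "owner": "service delivery"},
    {"name": "change risk scoring",
     "metric": "failed changes per month",
     "baseline": 40, "target": 10, "owner": "change management"},
]

def expected_improvement(uc: dict) -> float:
    """Fractional improvement the business unit has committed to."""
    return (uc["baseline"] - uc["target"]) / uc["baseline"]

for uc in use_cases:
    print(uc["name"], f"{expected_improvement(uc):.0%}")
```

    A register like this also makes it easy to kill weak use cases early: anything without a measurable baseline never makes the list.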

    29 min
  4. On Paper, the SpaceX IPO is Not So Heavenly

    APR 24

    On Paper, the SpaceX IPO is Not So Heavenly

    SpaceX filed for what could be the largest IPO in history, targeting a $1.75 trillion valuation and a $75 billion raise on NASDAQ in June. Ray Rike and Peter Buchanan cut through the narrative and go straight to the numbers, business unit by business unit.

    Key Topics:

    The Launch Services Monopoly: Falcon 9 launches cost roughly $67 million, compared to $110-160 million for competitors. With over 100 launches per year, $4 billion in NASA contracts, and a freshly awarded Space Force contract, SpaceX has no meaningful competitor at scale. The catch: the next-generation Starship rocket, critical to everything else in the bull case, is already five years behind its original commercial timeline.

    Starlink: The $10 Billion Business You Never Think About: Starlink generates nearly $10 billion in annual revenue from 10 million global subscribers, representing 54% of SpaceX's total revenue. The real margin engine is not residential subscribers but aviation and maritime, where per-customer annual revenue runs $300K and $34K respectively. Amazon's Project Kuiper remains far behind, with under 700 satellites versus Starlink's 10,000-plus.

    xAI and X: The Problem Child: SpaceX acquired xAI in February 2026 in an all-stock deal valued at $250 billion. The financial reality is stark. xAI burned $9.5 billion in cash during the first nine months of 2025 on only $210 million in revenue, roughly $35 million per day. A combined 2025 P&L would have shown a $5 billion net loss on $18.5 billion in revenue, reversing SpaceX's standalone $8.5 billion profit in 2024. Grok, its large language model, is described in internal SpaceX memos as clearly behind Anthropic, OpenAI, and Gemini, and Elon Musk himself has said publicly it needs to be rebuilt.

    The IPO Mechanics: Structure, Retail Allocation, and a Controversial NASDAQ Rule Change: Five banks are co-leading the offering with no single lead book-runner, and each was reportedly required to purchase Grok subscriptions as a condition of participation. Retail investors receive a 30% share allocation, three times the typical size. Most controversially, NASDAQ shortened its index inclusion waiting period from 90 days to 15, which could trigger mandatory passive fund buying from vehicles like Invesco's QQQ shortly after listing. Market veterans are calling it structural manipulation.

    The Bull and Bear Case: The bull case requires Starship reaching commercial operations within 18 months, Grok building a real enterprise sales engine beyond Elon's existing relationships, and the vertical integration thesis playing out as planned: Starlink as a global AI distribution layer, Grok trained on real-time X data, and orbital data centers as a structural competitive moat. The bear case is simple: every element depends on Starship staying on schedule, and if it slips again, the entire investment thesis slips with it.

    Executive Takeaways for Technology Leaders: The valuation is not priced on current fundamentals. It is priced on a version of this business that does not exist yet and may not until the early 2030s. For technology executives evaluating SpaceX or xAI as vendors or partners, multi-year contract stability is a real consideration. The NASDAQ rule change also has downstream implications for OpenAI, Anthropic, and other AI companies in the IPO pipeline.

    This episode is designed for B2B SaaS and enterprise AI executives who need to understand where capital is flowing and why it matters in their own strategic context. If you are making decisions about AI vendor relationships, enterprise infrastructure partnerships, or simply need a clear-eyed read on how AI-era IPO valuations are being constructed, Ray and Peter give you the data behind the headlines, not just the hype. No investment advice. Just the numbers, the business model mechanics, and the questions every executive should be asking before the June listing. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
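    The combined P&L math Ray and Peter walk through can be approximated back-of-envelope. The annualization of xAI's nine-month burn below is our own illustrative extrapolation, not a figure from any filing.

```python
# Rough sketch of the combined-entity math discussed above. The
# annualized xAI loss is extrapolated from the nine-month cash burn
# quoted in the episode; the extrapolation itself is illustrative only.

XAI_BURN_9MO = 9.5e9          # xAI cash burned, first nine months of 2025
SPACEX_2024_PROFIT = 8.5e9    # SpaceX standalone 2024 net profit

xai_annualized_loss = XAI_BURN_9MO / 9 * 12   # ~$12.7B per year
daily_burn = XAI_BURN_9MO / (9 * 30.4)        # ~$35M per day

combined = SPACEX_2024_PROFIT - xai_annualized_loss   # ~ -$4.2B
# Directionally consistent with the roughly $5B combined net loss the
# episode cites on $18.5B of combined revenue.
```

    The point of the exercise: even a crude annualization shows the acquisition flipping a solidly profitable standalone business into a multi-billion-dollar loss.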

    34 min
  5. AI's Organizational Impact: McKinsey's State of Organizations 2026 Report

    APR 22

    AI's Organizational Impact: McKinsey's State of Organizations 2026 Report

    Ray Rike and Peter Buchanan dig into McKinsey's 2026 State of Organizations Report, a landmark study drawing on more than 10,000 senior executives across 15 countries and 16 industries. The central finding is both simple and uncomfortable: the vast majority of organizations are actively experimenting with AI, and that same majority reports no meaningful impact on their bottom line. This episode is about closing that gap.

    Topics Covered:

    Three Tectonic Forces Reshaping Every Organization. McKinsey identifies AI and agentic systems, economic and geopolitical fragmentation, and workforce transformation as structural shifts rather than temporary headwinds. Ray and Peter unpack why these forces are interdependent and why three in four leaders say their organizations are not ready to face what is coming, including leaders who describe themselves as optimistic.

    Why AI Initiatives Keep Falling Short. The diagnosis is clear: most organizations are running scattered pilots and point solutions that augment individuals but never transform the enterprise. McKinsey's data shows that organizations redesigning entire domains (marketing, finance, operations) see dramatically greater financial impact than those pursuing isolated use cases. Ray calls this systems thinking and walks through five specific variables required to move from pilot to production at scale.

    Humans and AI Agents: A New Collaboration Model. Only one in four executives expect AI to take on truly agentic, autonomous roles in the next 12 to 24 months. Ray and Peter discuss why senior leaders are more conservative than younger high-potential talent, what the Hitachi and Allianz case studies reveal about workforce redesign versus workforce replacement, and why demand for AI fluency has grown 7x faster than any other skill tracked in job postings.

    Geopolitical Disruption and the Cost of Organizational Rigidity. Three in four leaders report a material impact from geopolitical uncertainty on their organizations. Ray and Peter discuss the Tonies case study, a German toy company that launched a production facility in Vietnam on the same day US tariffs were announced, as a model of what organizational preparedness looks like in practice. Two thirds of surveyed executives also said their organizations are overly complex and inefficient, and McKinsey's diagnosis of why traditional structural fixes are no longer working is worth hearing.

    People and Performance: The Four-Times Multiplier. McKinsey's data shows that organizations investing equally in people development and operational performance are four times more likely to sustain top-tier financial results, grow revenue twice as fast, and carry half the earnings volatility of peers. Ray and Peter connect this to why 80% of leaders leave non-financial motivation levers completely untouched, and to what GE's model of purpose, autonomy, recognition, and growth still gets right.

    Business as Change: The New Operating Condition. McKinsey's closing argument is that transformation is no longer a periodic program with a defined start and end. It is a permanent operating condition. Ray frames four implications for leaders, and Peter adds the critical point that the gap between AI activity and AI impact is an organizational problem, not a technology problem. The tools exist. The redesign is the work.

    Why Listen: This episode is for senior executives who are experiencing growing discomfort between how much their organization is investing in AI and how little of it is showing up in the numbers. Ray and Peter move well beyond summarizing the McKinsey findings. They connect the research to hands-on operating experience, call out where most organizations get stuck, and give listeners a practical framework for thinking about workforce redesign, change management, and leadership accountability. If you are responsible for AI strategy, organizational performance, or the people agenda at a B2B software or enterprise company, this is one of the most data-rich and actionable conversations you will find on the topic. See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    34 min
  6. Beyond OpenClaw - The Rise of Personal AI Agents

    APR 16

    Beyond OpenClaw - The Rise of Personal AI Agents

    In this week's AI to ROI Big Story episode, Ray Rike and Peter Buchanan unpack the OpenClaw phenomenon and what it reveals about the future of personal AI agents for both individuals and enterprises. From a solo developer's side project to 1.5 million active agents in two months, OpenClaw has ignited a new category and forced every major AI company to respond. Ray and Peter break down what is working, what is still broken, and which vendors have the best shot at winning the enterprise.

    Top Insights from This Episode:

    OpenClaw Proved the Market, But Not the Product. Peter Steinberger built OpenClaw in days and attracted 1.5 million users before OpenAI acquired him and opened the codebase. The product validated massive pent-up demand for always-on personal AI agents, but security researchers at Cisco and Northeastern University quickly surfaced serious vulnerabilities, including data exfiltration risks and prompt injection without user awareness. Even the Chinese government restricted its use in state agencies. The pioneer made the promise real; the product is not yet enterprise-safe.

    NVIDIA Jumped In Fast with NemoClaw, But Gaps Remain. NVIDIA wrapped OpenClaw in a three-layer security architecture (OpenShell runtime, privacy router, and governance layer) and launched NemoClaw at GTC with nearly 20 partners, including Box and Cisco. Box demonstrated human-matching permission controls for enterprise file workflows, and Cisco showed a zero-day vulnerability response with a full audit trail. But governance experts noted NemoClaw still lacks basic IT safety features, particularly around rollback, audit trails, and policy enforcement. Fast to market; not yet enterprise-ready.

    Perplexity Made a Quiet Pivot to Enterprise AI Agent Infrastructure. Six months ago Perplexity was an AI search company. Today they are building a three-product personal agent suite: Perplexity Computer for multi-model orchestration across 18-plus AI models, Personal Computer for local 24/7 file and compute access on Mac, and Comet Enterprise as an AI-native browser tying the stack together. Their Samsung Galaxy S26 integration via Bixby gives them significant distribution, and their CEO framed the shift simply: traditional operating systems take instructions; AI operating systems take objectives. The model-agnostic architecture may be their biggest differentiator.

    Anthropic Is Playing a Different, and Potentially Smarter, Game. Rather than shipping a standalone personal agent, Anthropic is embedding agentic capability into existing products. Claude Code scaled to an estimated $2.5 billion in ARR in nine months. Claude Cowork gives Claude direct control of Mac-level tasks with a permission layer built in. And the Microsoft partnership puts Claude Cowork as the multi-step reasoning engine inside Microsoft 365 Copilot Wave 3, branded as Copilot Coworks. A recent survey showed 66 percent of enterprise technical buyers said they purchased Claude first, with ChatGPT in the thirties. Anthropic's enterprise trust advantage may matter more than feature parity.

    Enterprise Adoption Will Be IT-Led and Slow by Design. Unlike SaaS, which grew through decentralized, shadow-IT purchasing that bypassed central IT, personal AI agents require direct access to local files, compute, and company systems. That puts CISOs and IT leaders in the approval seat from day one. Ray and Peter agree the enterprise version of personal AI agents is likely 12 to 24 months away from broad deployment, with adoption following a managed, permission-controlled model rather than the freewheeling consumer version that drove OpenClaw's early growth.

    If you are an executive evaluating whether to allow, enable, or even develop personal AI agents at your company, this episode is a great listen. It might even inspire you to create a personal AI agent of your own! See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    31 min
  7. The Power of Eye Tracking for the Enterprise - with Adam Gross, Co-Founder & CEO of HarmonEyes

    APR 14

    The Power of Eye Tracking for the Enterprise - with Adam Gross, Co-Founder & CEO of HarmonEyes

    Eye tracking has moved far beyond the clinic and the sports performance lab. In this episode, Ray Rike sits down with Adam Gross, co-founder and CEO of HarmonEyes, to explore how AI-powered eye tracking is being deployed in enterprise environments to measure cognitive load, predict performance degradation, and reduce costly employee burnout and attrition before problems occur.

    What You Will Learn:

    What eye tracking actually measures, and why objective, passive, quantifiable eye movement data is more reliable than self-reported assessments for measuring cognitive and attention states

    How AI transforms raw eye data into actionable intelligence, including real-time model inference, individual adaptation across a population normative database of 15 million+ records, and predictive time-to-transition modeling

    Why personalization at scale matters, and how HarmonEyes uses advanced machine learning to adapt its models to individual differences in age, sex, and experience level, making population-level models actually work for every individual

    Enterprise use cases with measurable ROI, including pilot training in flight simulators (shorter time to proficiency), remote operator and call center environments (fatigue and overload intervention before safety incidents), and employee burnout detection over extended time horizons

    The device-agnostic deployment advantage, covering webcams, phone cameras, smart glasses, and vehicle cabin cameras as signal sources that eliminate the need to purchase dedicated hardware

    How team leaders use real-time cognitive state data to shift from reactive management to proactive intervention, reducing performance risk across shifts and high-stress operating environments

    Privacy as a design principle, not an afterthought: HarmonEyes does not collect, store, or record eye tracking data or PII; the prior second of data is destroyed with each new output delivery

    Where to start as an enterprise buyer: the highest-value entry points are high-stress, high-stakes roles where burnout and performance degradation already show up as operational problems with measurable costs

    Career advice for early professionals: the best defense against AI-driven job displacement is not avoidance but mastery; become the human in the loop who knows the technology best

    See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    34 min
  8. Pricing Strategy for AI Software and SaaS: When to Change, Who Should Own It, and the CFO's Role with Dan Balcauski

    MAR 31

    Pricing Strategy for AI Software and SaaS: When to Change, Who Should Own It, and the CFO's Role with Dan Balcauski

    Pricing is one of the most underleveraged strategic levers in B2B SaaS and AI software, and most companies are getting it wrong. In this episode, Ray Rike sits down with Dan Balcauski, founder of Product Tranquility and a 20-year software industry veteran, to cut through the noise around consumption, usage, outcome, and hybrid pricing models. Dan brings a practitioner's perspective on when to review pricing, who should own it, and how the CFO fits into the equation.

    Signs Your Pricing Needs a Review

    Best-in-class companies review pricing at least quarterly -- but review does not always mean change
    Key warning signals include declining net revenue retention and unexpected shifts in win/loss conversion rates
    AI-native companies are iterating on pricing monthly due to rapid competitive dynamics
    Sales cycle length is a practical constraint: a 12-month enterprise cycle limits how frequently you can test and observe pricing changes

    The Role of Customers in Pricing Strategy

    Never anchor your pricing strategy entirely to your existing customer base -- they carry inherent bias
    A practical research mix: roughly one-third existing customers, two-thirds prospects
    Existing customers know your real value; prospects only know what you show them -- both perspectives matter
    When introducing a second product, maintain structural similarity in pricing tiers even if the pricing metric differs

    Pricing Ownership and Governance

    Below $5M ARR, the founder/CEO owns pricing; above $20M it shifts to Product or Marketing -- the gap in between is where ownership gets dangerously vague
    Product Marketing is best positioned to own pricing because it sits at the intersection of positioning and value communication
    Sales owning pricing is a misalignment of incentives -- "like putting Dracula in charge of the blood bank"
    Best practice is a pricing council with a designated decision-maker, not design by committee

    Discounting and the CFO's Role

    Discounting policy is often the easiest and fastest win -- and one of the first places Dan looks with any client
    Enforcement matters as much as policy: without monitoring, no new pricing strategy will ever reach the market as intended
    The CFO plays a dual role -- operational (contracts, billing, deal desk guardrails) and strategic (modeling cash flow and KPI impact when shifting pricing models)
    Caution: a finance-led focus on consistent margin profiles across products can misread how different market segments actually behave

    Outcome-Based Pricing: Hype vs. Reality

    Outcome-based pricing is "the future and always will be" -- it is not new, and it is genuinely difficult to execute
    True outcome pricing only works when you are directly in the revenue or savings transaction, as Stripe is
    A more practical frame is output-based pricing -- Intercom's 99 cents per resolved support ticket is a strong example of measuring a clear, attributable unit of value

    If you are involved in deciding how best to monetize and price your B2B AI or SaaS product, this is a very valuable listen! See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
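    The output-based model Dan points to can be made concrete with a small sketch. The 99 cents per resolved ticket is the Intercom figure cited in the episode; the ticket volume and the comparison seat price are hypothetical.

```python
# Seat-based vs. output-based pricing, sketched. Output-based pricing
# charges for a clear, attributable unit of value (a resolved ticket)
# rather than for access. The $0.99 per resolution mirrors the Intercom
# example above; seat price and volumes are hypothetical.

def seat_based_cost(seats: int, price_per_seat: float) -> float:
    return seats * price_per_seat

def output_based_cost(resolved_tickets: int,
                      price_per_resolution: float = 0.99) -> float:
    return resolved_tickets * price_per_resolution

# A 25-seat support team vs. 3,000 AI-resolved tickets in a month:
monthly_seats = seat_based_cost(25, 99.0)   # $2,475
monthly_output = output_based_cost(3_000)   # $2,970

# Under output pricing, cost scales with value delivered: a slow month
# costs less, a busy month costs more, and attribution stays clean.
print(monthly_seats, monthly_output)
```

    This is also why the unit has to be genuinely attributable: if the buyer disputes what counts as a "resolution," the model collapses back into negotiation.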

    33 min
4.8 out of 5 (41 Ratings)

About

AI to ROI is a podcast that shares how enterprises translate AI investments into measurable business value. Hosted by Ray Rike, Founder and CEO of Benchmarkit, the show features senior enterprise leaders and AI software executives discussing how AI initiatives move from pilots to production, and how ROI is actually measured and achieved. In addition, each week we publish a bonus episode with AI to ROI Newsletter co-author Peter Buchanan to discuss the Big Story of the Week. The AI to ROI podcast is the evolution of the original "Metrics that Measure Up" podcast.
