Price Power

Jacob Rushfinn

The Price Power Podcast is for all things growth, retention, and monetization for subscription mobile apps. We talk with amazing leaders in the industry to help share their knowledge with you. Hosted by Jacob Rushfinn, CEO of Botsi.

  1. 13: The Four Horsemen of Churn w/ Dan Layfield

    1D AGO

    13: The Four Horsemen of Churn w/ Dan Layfield

    Dan Layfield, author of Subscription Index and former product lead at Codecademy and Uber Eats, explains why churn is the silent ceiling on subscription growth, how to diagnose which type of churn is killing your business, and the pricing trick that can double your LTV overnight. Dan walks through his four horsemen framework: payment failures, activation issues, pricing and plan mix, and voluntary cancellation. He shares the bottom-up optimization approach he uses with every company, starting with Stripe settings that take 10 minutes to fix.

    What you'll learn:
    - Why your Stripe retry settings are probably wrong and how to fix them in 10 minutes
    - How to calculate your growth ceiling using churn rate and acquisition numbers
    - Why payment receipts might be reminding users to cancel every month
    - How to price annual plans based on your monthly retention data
    - How to build cancellation flows that save 20% of churning users
    - Why activation experiments are tricky and often produce duds
    - Why quality problems are the easiest growth fixes

    Key Takeaways:
    - Churn dictates your ceiling. New users divided by churn rate equals your max subscribers. 1,000 new users with 20% churn = 5,000 subscriber ceiling. Lowering churn raises that ceiling proportionally.
    - Start at the bottom of the funnel. Stripe settings, dunning emails, and card updaters can be fixed in minutes and win back 5% of churn. Do these before tackling bespoke activation problems.
    - Annual pricing should match monthly LTV plus one or two months. If average retention is five months, price annual at six months. It looks like a steep discount but doubles LTV.
    - Turn off monthly email receipts. Netflix, Spotify, and Amazon don't send them. That monthly reminder is a monthly prompt to cancel.
    - Cancellation flows should solve the underlying problem. Pausing works when the need is temporary. Downgrading works when they're paying for unused features.

    Links & Resources:
    - Subscription Index: https://subscriptionindex.com
    - Dan Layfield on LinkedIn: https://www.linkedin.com/in/layfield/

    Timestamps:
    00:00 Intro and Dan's path from JP Morgan to Codecademy
    04:00 Freemium conversion benchmarks: sub-1% vs. good (3%) vs. great (7%)
    06:30 The growth ceiling formula
    08:00 The four horsemen of churn
    12:00 Bottom-up optimization: start with Stripe settings
    13:30 Cancellation flow tactics: pause, discount, upgrade/downgrade
    19:30 Payment failure quick wins: smart retries, card updater, dunning emails
    22:30 The annual pricing trick that doubled LTV at Codecademy
    30:00 Activation and the Reforge framework
    37:30 Onboarding should show value, not just explain device setup
    42:30 Ethical cancellation flows and click-to-cancel legislation
    49:30 Screenshot audit: where to start when you're stuck
    52:30 Turn off monthly receipts: the easiest churn win
    53:30 Lightning round
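    The two pieces of arithmetic in the takeaways above — the growth ceiling and the annual-pricing rule — can be sketched in a few lines. This is a minimal illustration of the math as stated in the episode; the function names and sample prices are ours, not Dan's:

```python
def subscriber_ceiling(new_users_per_month: float, monthly_churn_rate: float) -> float:
    """Steady state is reached when inflow equals outflow:
    new_users = ceiling * churn_rate, so ceiling = new_users / churn_rate."""
    return new_users_per_month / monthly_churn_rate

def annual_price(monthly_price: float, avg_retention_months: float,
                 buffer_months: float = 1) -> float:
    """Price the annual plan at average monthly LTV plus one or two months."""
    return monthly_price * (avg_retention_months + buffer_months)

# The example from the episode: 1,000 new users/month at 20% monthly churn.
print(subscriber_ceiling(1_000, 0.20))
# Hypothetical $10/month app with five months of average retention.
print(annual_price(10, 5))
```

    Lowering churn from 20% to 10% in this example doubles the ceiling to 10,000, which is why churn work compounds with acquisition rather than competing with it.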

    1 hr
  2. 12: Price Testing for Subscription Apps with Michal Parizek

    MAR 12

    12: Price Testing for Subscription Apps with Michal Parizek

    Michal Parizek, pricing and growth lead at Mojo, explains how to predict long-term revenue from short-term price test data, why Apple's automatic regional pricing is wrong for most apps, and how to sequence pricing, packaging, and paywall tests for maximum impact. Michal walks through the 13-month revenue projection model he built at Mojo, which uses seven-day cancellation rates as a proxy for annual renewal rates. He shares how his team raised yearly prices by 50% in the US and Germany with minimal conversion drop, how they tested free trial lengths and found almost no difference between three-day and seven-day trials, and why the ratio between monthly and yearly plan prices matters more than the absolute price point.

    What you'll learn:
    - How to use seven-day cancellation rates to project 13-month revenue
    - Why Apple's exchange-rate-only pricing leaves money on the table
    - How to sequence price tests: price first, then packaging, then paywall design
    - Why the monthly-to-yearly price ratio drives plan share more than absolute price
    - How hiding the monthly plan pushed yearly share from 60% to 80%
    - Why free trials still matter for new users, despite advice to remove them
    - How three-day trials performed as well as seven-day trials at Mojo
    - Why your first price test should have big price gaps, not small ones
    - How traffic source mix can distort price test results
    - Why a 100% price increase was a short-term winner but long-term loser

    Key Takeaways:
    - Seven-day cancellation rate is a reliable early signal. 20-30% of cancellations happen in the first seven to ten days. Measure that rate per variant, project renewal rates from it, and you can evaluate a price test without waiting months. Mojo validated this against real data and it held.
    - Apple's regional pricing is just exchange rate math. No purchasing power, no local context. Look at your top five markets individually, compare conversion funnels by country, and cross-reference competitor pricing.
    - Pricing and packaging beat paywall design in impact. Changing price points, plan structures, and introductory offers had more effect than design or copy. Start with pricing, then plan mix, then layout.
    - The monthly-to-yearly price ratio drives plan selection. Changing only the monthly price shifted yearly subscriber share significantly. The perceived deal relative to monthly is a strong behavioral lever.
    - Don't remove free trials for new users without testing. Mojo tried it based on popular advice and saw revenue decline. Test it for your app.
    - Start price tests with big jumps. Test $40 vs $60 vs $80, not $50 vs $48 vs $52. Find the zone first, refine later.
    - Revisit cohorts months after shipping. Mojo's 100% price increase looked great short-term but cancellation rates spiked. The 13-month projection caught it.

    Links & Resources:
    - Michal Parizek's Botsi blog post: https://www.botsi.com/blog-posts/pricing-experiments-the-backbone-of-mojos-monetization-success
    - Michal Parizek on LinkedIn: https://www.linkedin.com/in/michalparizek/

    Timestamps:
    0:00 Intro
    1:03 Using seven-day cancellation rates to predict 13-month revenue
    3:25 Building the report template and data pipeline
    6:13 Validating the renewal rate prediction model
    10:03 Benchmarks for new apps without renewal history
    12:09 Why Apple's automatic price tiers are wrong
    13:33 How to research and set regional prices
    17:10 Relationship between pricing, packaging, and paywall design
    21:15 Sequencing: price first, then packaging, then design
    23:55 Why paywall layout tests that touch plan visibility are most impactful
    26:41 Free trial strategy and length testing
    31:03 Paid trial options as an emerging trend
    33:16 The biggest mistake: not having enough data volume
    35:56 Raising prices 50% in the US and Germany
    38:46 Start with big price gaps, refine later
    40:11 Don't be afraid to test prices
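    The episode describes the idea behind Mojo's model — use each variant's seven-day cancellation rate as an early proxy for its annual renewal rate, then compare projected 13-month revenue per subscriber — but not the exact math. The sketch below shows the shape of such a projection; the calibration factor and all numbers are hypothetical, for illustration only:

```python
def projected_revenue_13mo(yearly_price: float,
                           seven_day_cancel_rate: float,
                           cancel_to_nonrenewal: float = 3.0) -> float:
    """Hypothetical projection: scale the early-cancellation rate by a
    calibration factor (which would be fit against historical cohorts) to
    estimate the share of subscribers who won't renew, then compute
    expected revenue per yearly subscriber across 13 months (the initial
    purchase plus one renewal at month 12)."""
    est_nonrenewal = min(1.0, seven_day_cancel_rate * cancel_to_nonrenewal)
    renewal_rate = 1.0 - est_nonrenewal
    return yearly_price * (1.0 + renewal_rate)

# Compare two price-test variants with hypothetical observed cancel rates:
base = projected_revenue_13mo(40.0, seven_day_cancel_rate=0.05)
high = projected_revenue_13mo(60.0, seven_day_cancel_rate=0.15)
print(base, high)
```

    The point of this structure is the one the episode makes about the 100% price increase: a variant can win on immediate conversion yet lose once the cancellation signal is folded into the 13-month view.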

    42 min
  3. 11: Lessons from a Founder: What Sasha Learned Launching a Mental Health App

    FEB 25

    11: Lessons from a Founder: What Sasha Learned Launching a Mental Health App

    Sasha, founder of Anticipate (a mental health app), explains why she accepted an overly broad problem statement during validation, how she used Reforge's product-market fit narrative framework to test hypotheses without building, and what she learned after eight rounds of iteration that still didn't land product-market fit. Sasha came into this with a real edge: years of marketing technology and data consulting for companies like Flo Health gave her the insight to use behavioral data for mental health. But translating deep domain expertise into a focused, sellable product turned out to be a different problem entirely. She walks through the specific moment her PMF interviews led her astray, why the Blue Ocean Strategy canvas revealed she was charging for features users get for free elsewhere, and the five pieces of advice from advisors that finally helped her reframe everything.

    What you'll learn:
    • Why emotionally compelling answers in user interviews can mislead you into solving problems too large to tackle
    • How Reforge's PMF narrative framework structures hypothesis validation before a single line of code is written
    • Why product-market fit interviews need to go past the top-level pain and drill into specific, solvable sub-problems
    • How the Blue Ocean Strategy canvas revealed Sasha was charging for features available for free
    • Why willingness to pay and perceived value are not the same thing, and why conflating them kills monetization strategy
    • How Apple in-app events can give early-stage apps a meaningful boost in rankings and visibility
    • Why Reddit feedback, brutal as it is, beats feedback from friends and family every time
    • How to identify your real competitors by talking to people who don't use any product in your category
    • Why going viral before you understand your retention is more dangerous than growing slowly
    • How Gamma's "ruthless focus on the first 30 seconds" applies to any early-stage product
    • Why "hell yes" should be the bar for every slide in your demand validation deck before you build anything
    • How to layer in analytics tools incrementally rather than setting up a full stack before you need it

    Key Takeaways:
    • Don't take big emotional truths at face value. When Sasha asked users about mental health, they told her they never wanted to experience a crisis again. That's real. But it's so large and ambiguous that no small startup can solve it. She should have pressed further — what specific behaviors or sub-problems sit underneath that fear? One reachable problem beats ten important ones.
    • Sell before you build. A slide deck that walks users through a problem and proposed solution is a much cheaper way to iterate than building product. If you're not getting "hell yes" reactions slide by slide, the product wouldn't have landed either. Change the deck first.
    • Willingness to pay is not the same as value. Some use cases are genuinely valuable to users but they'll never pay for them because they see the data as theirs, or because it's available elsewhere for free. Knowing which features fall into which bucket before you write your pricing page saves a lot of pain.
    • Your real competitors are probably not in your app category. Anticipate doesn't compete with Headspace or Calm. It competes with Apple Health, fitness apps, and the mental math people already do in their heads. Talking to non-customers revealed this, and it completely changed the product strategy.
    • Be deliberate about your first 100 users. A Reddit launch spike or a Product Hunt bump feels like traction, but the signal is noisy. The first users should be chosen for the quality of feedback they can give, not for their contribution to MRR. Get 10 people who genuinely love the product, understand why, then figure out how to find 100 more of them.
    • Virality is math, not magic. If viral growth is part of the strategy, it has to be built into the product and marketing engine from the start. A one-off spike from the wrong audience will tank your retention cohorts and give you data that doesn't mean anything.
    • Build your analytics stack incrementally. Start with your database. Add simple app open events mapped to user IDs. When you know what's missing, layer in Amplitude for product analytics and AppsFlyer for attribution. Don't install tools you don't have a clear use for yet.
    • Prepare for the long run. One piece of advice Sasha received that stuck: figure out how long you can stay in the game without damaging your quality of life. Early-stage building is a long game. Sustainability matters.

    Links & Resources:
    • Reforge (Product-Market Fit Narrative Course): reforge.com
    • Blue Ocean Strategy: blueoceanstrategy.com
    • Rob Snyder / Harvard Innovation Labs (Path to PMF): search "Rob Snyder Harvard Innovation Labs PMF"
    • Prolific (user research panel): prolific.com
    • Amplitude (product analytics): amplitude.com
    • AppsFlyer (mobile attribution): appsflyer.com
    • Gamma (AI presentation tool): gamma.app
    • Anticipate App: https://apps.apple.com/us/app/anticipate-ai-therapy-notes/id6746043684
    • Sasha on LinkedIn: https://www.linkedin.com/in/aliaksandralamachenka/

    Timestamps:
    0:00 Beginning
    1:21 Intro and Sasha's background in MarTech and mental health
    2:20 How the Anticipate idea was born from behavioral data
    4:41 Using Reforge's PMF narrative framework before building
    8:26 The PMF interview mistake: accepting a big ambiguous problem
    14:38 The flight analogy for finding specific, solvable problems
    15:22 Should you research less and build faster?
    20:47 Why you should start with demand, not a product
    21:51 Willingness to pay vs. perceived value in consumer apps
    23:37 Being intentional about your first users
    27:21 Why Reddit feedback is actually valuable
    31:49 Current growth channels and why Sasha paused scaling
    34:51 Five pieces of advice from advisors
    40:10 Blue Ocean Strategy: mapping competitors and finding gaps
    45:21 Why non-consumers are the most important interview group
    47:21 Who Anticipate's real competitors actually are
    56:18 How to set up analytics step by step as a small team
    1:01:15 Gamma's "first 30 seconds" strategy and why it matters
    1:02:51 Sasha's next steps and final advice for founders

    1h 8m
  4. 10: Why the Weird Ad Wins: CEO of Ramdam on Finding UGC Champions | Xavier de Baillenx

    FEB 11

    10: Why the Weird Ad Wins: CEO of Ramdam on Finding UGC Champions | Xavier de Baillenx

    Xavier, CEO and co-founder of Ramdam, breaks down how subscription apps can scale creator ads on TikTok and Meta, why volume beats perfection in UGC testing, and where AI-generated video actually makes sense (and where it doesn't). Xavier spent five years at Match Group working on AI teams after his dating app was acquired. He then launched an app studio and discovered firsthand how painful it was to find winning ad creatives: months of testing 50 different videos just to find one that cut his cost per install by 5x. That frustration became Ramdam, a platform that helps consumer apps produce creator ads at scale. The company now works with Tinder, PhotoRoom, Flo, and other category leaders, delivering over 10,000 creatives per month.

    What you'll learn:
    - Why a 5% success rate on ads is completely normal (and how to structure campaigns around it)
    - How to start a UGC test: 20-40 creators, 4-5 concepts, $20-50K minimum spend
    - Why US English ads often perform in non-English speaking markets
    - How winning apps keep one narrative from ad to paywall
    - Why TikTok carousel ads are massively underrated for dating apps
    - How to structure "test" vs "scale" campaigns to measure both CPI and ROAS
    - When AI-generated video makes sense: hard-to-source personas, scaling winning concepts
    - Why the ad your team wants to reject might get 350 million views
    - How Ramdam uses AI to match briefs with creators and QA videos before delivery
    - Why "happy accidents" from real creators still outperform AI-perfect execution

    Key Takeaways:
    - Volume always wins over perfection. 50 different creators who don't perfectly match your persona will beat 5 who do.
    - You can't predict which ad will work. Even Xavier, after thousands of campaigns, has no idea which ad will succeed when he sees it. The only strategy that works is testing at scale and following the data.
    - Winning ads have a 2-3 week lifespan. Ad fatigue is real. If you're scaling on TikTok or Meta, you need to refuel with new creatives every month. The biggest spenders are producing 1,000+ creatives per month to stay ahead of fatigue.
    - Start broad, then replicate winners. Early briefs should leave room for "happy accidents" where creators interpret the concept in their own style. Once you find a winner, run replicate campaigns: same hook, same narrative structure, but new faces and fresh energy.
    - The ad-to-paywall story must be consistent. Winners keep one promise throughout the entire journey. If the ad says "sleep better in 7 minutes," that same message should appear on the store page, onboarding, and paywall. Breaks in this narrative kill conversion.
    - AI video is a complement, not a replacement. AI-generated creators work for hard-to-source personas (high-income demographics, pregnant women, complex scenes). But they can't produce the weird, human moments that go viral. Find winning concepts with humans, then scale variations with AI.
    - TikTok and Meta behave differently. TikTok rewards short (around 10 seconds), trend-driven content with trending sounds. Meta prefers structured narratives, product demos, and 15-30 second videos. Carousels perform well on both, especially for storytelling.
    - Creator diversity expands reach. Meta and TikTok treat ads with the same creator as nearly identical. Using many different faces helps you reach new audiences. This is why Ramdam assigns one creator per video across their 50K creator network.
    - One ad can change everything. This business follows power law dynamics, similar to the music industry. Most ads do nothing. A small percentage capture all the budget. One viral hit can transform an app's trajectory overnight.

    Bonus for podcast listeners: Xavier can walk you through a fully personalized demo and share creative insights here: https://meetings-eu1.hubspot.com/xavier-de-baillenx/30min?utm_campaign=jacob-post&utm_source=linkedin&utm_medium=social

    Links & Resources:
    - Ramdam: ramdam.io
    - Xavier on LinkedIn: https://www.linkedin.com/in/xavier-de-baillenx/
    - Email: xavier@ramdam.io (mention Botsi Podcast for personalized demo)
    - The TikTok SwipeWipe video: tiktok.com/@vdanielle22/video/7298313654594800942

    Timestamps:
    00:00 Intro/Teaser
    03:00 Xavier's background: Universal Music to Match Group to Ramdam
    05:00 UGC formats explained: Classic, Trends, Carousels
    09:30 Ad lifespan and creative fatigue
    11:30 Why volume and experimentation beat perfection
    15:30 Starting a UGC test: creators, concepts, budget
    19:00 Creator diversity and platform algorithms
    23:00 Balancing authenticity with replication
    26:00 TikTok vs Meta: what works on each
    30:00 Connecting ad performance to product funnels
    36:00 Structuring test vs scale campaigns
    38:00 How Ramdam uses AI for creator matching and QA
    43:00 AI-generated video: use cases and limitations
    49:30 Marketing fundamentals: clarity and authenticity
    51:30 Counterintuitive learnings from UGC

    55 min
  5. 9: Frameworks for Meta's AI-driven advertising w/ Marcus Burke

    JAN 28

    9: Frameworks for Meta's AI-driven advertising w/ Marcus Burke

    Marcus Burke, Meta Ads consultant, explains why blended CPA is misleading, how creative format determines your audience targeting, and what signal engineering means for subscription apps in an AI-driven ad landscape. Marcus breaks down his approach to working with Meta's algorithm rather than against it. He advocates for strategic ad set segmentation based on where different creative formats naturally deliver: static ads to Facebook feed, short-form video to Instagram Reels. The conversation goes deep on the relationship between product design and ad optimization. Marcus explains how your subscription model, trial length, and paywall structure all affect the quality of signal you can send back to Meta. Sometimes optimizing for LTV conflicts with optimizing for ad signal, and growth teams need to navigate that tension intentionally.

    What you'll learn:
    • Why a $10 cost per trial can lose money while a $100 cost per trial can be profitable
    • How to use creative format (static vs. video vs. playable) to control placement distribution
    • Why "broad targeting" often results in narrow reach and high frequency on the same audience
    • How to structure ad sets by expected delivery rather than demographic targeting
    • What value rules are and how to use them for country, age, and gender optimization
    • Why the conversion event you optimize for should determine your account architecture
    • How to connect onboarding survey data with Meta demographic breakdowns
    • Why cold social traffic requires a fundamentally different onboarding approach than search traffic
    • What makes an effective "aha moment" before the paywall
    • How multi-price point strategies enable broader audience targeting
    • Why signal engineering is one of the last remaining levers for growth marketers

    Key Takeaways:
    • Blended CPA hides traffic quality problems. A $10 cost per trial from Instagram Reels represents a completely different audience than $10 from Facebook feed. Break down your metrics by placement to understand what you're actually buying.
    • Creative equals targeting. Your media format determines where your ad delivers. Short-form vertical video goes to Reels; statics go to Facebook feed. This isn't a bug but a feature you can use to control your audience mix without hard targeting.
    • Guide the algorithm, don't force it. Hard targeting gets expensive fast. Instead, use creative segmentation and value rules to nudge Meta toward your high-value audiences while keeping delivery efficient.
    • Your conversion event determines your account structure. If you're optimizing for a shallow event like trial starts, you need more ad sets to compensate for the algorithm's lack of business knowledge. Moving closer to revenue lets you consolidate more.
    • Onboarding should match your traffic source. Paid social users were just doom-scrolling and need to be entertained and re-sold on their problem. Search traffic already has intent. Design your onboarding accordingly.
    • Create an aha moment before the paywall. Prove value in the first session through something tangible: a sample scan, a personalized analysis, an imported recipe. This converts better than promising value during a 7-day trial.
    • Your pricing should match your creative strategy. Young audiences from UGC won't pay $70/year. Older Facebook feed audiences justify higher CPMs. Align your price points with who your ads are actually reaching.

    Links & Resources:
    • Marcus Burke on LinkedIn: https://linkedin.com/in/marcusburke
    • Growth Festival Presentation: https://www.linkedin.com/posts/marcusburke_postmedia-buying-strategies-scaling-meta-activity-7373668067580563456-iziy

    Timestamps:
    00:00 – Intro clips
    01:41 – Introduction and context on Marcus's Growth Festival presentation
    02:12 – Why blended CPA is irrelevant and how it differs from blended ROAS
    04:36 – How placement affects traffic quality: $100 vs $10 cost per trial
    05:11 – Using creative format to control placement distribution
    07:32 – Working with the algorithm vs. forcing targeting
    10:19 – Why "broad targeting" doesn't mean broad reach
    11:06 – Getting placement and demographic data from Meta
    14:47 – Layering complexity: placements, demographics, user goals
    20:09 – Signal engineering and moving closer to business value
    22:30 – Account architecture: stop over-consolidating
    26:49 – Should subscription apps test removing trials?
    31:52 – Value rules: what they are and how to use them
    36:30 – Onboarding for paid social: entertainment over efficiency
    39:36 – Creating aha moments before the paywall
    45:44 – Multi-price point strategies to capture the full demand curve
    48:31 – Wrap-up and where to follow Marcus

    51 min
  6. 8: Shamanth Rao on Subscription Economics, Pricing, and Creative Strategy

    JAN 13

    8: Shamanth Rao on Subscription Economics, Pricing, and Creative Strategy

    Shamanth Rao, founder of Rocketship HQ, explains why subscription economics fundamentally differ from free-to-play, why early ROAS signals are structurally misleading, and why LTV without context means nothing. Drawing from a decade of hands-on experience across gaming and subscription businesses, Shamanth walks through how cash flow determines viable payback periods, why annual plans are the single most powerful lever in subscription growth, and how pricing strategy reshapes your entire acquisition model. He also dives deep into creative strategy: why ads should sell immediate value, not long-term habits; why relevance matters less than attention; and how winning ad narratives should actively inform your product and onboarding.

    What you’ll learn:
    • Why subscription apps don’t produce meaningful early monetization signals
    • Why there is no “correct” payback period
    • Why LTV without time, channel, platform, and geo context is misleading at best
    • Why annual plans dramatically reduce uncertainty and unlock scalable acquisition
    • Why most teams underprice annual plans
    • How trial length should vary by product type, not defaults
    • Why ads should sell speed-to-value, not habit formation
    • How “unrelated” or emotional ads outperform literal product messaging
    • How high-performing ads should influence product pages, onboarding, and roadmap decisions
    • Why quizzes and surveys work as both acquisition hooks and monetization levers
    • Where pay-as-you-go and credit-based pricing models fit — especially for AI apps
    • Why creative fatigue is a risk management problem, not just a volume problem
    • How micro-segmentation should directly shape creative production
    • Why AI-generated ads fail without strong human iteration and judgment

    Key Takeaways:
    • Subscription ≠ gaming economics. Games have uncapped monetization and instant signals; subscriptions have pricing ceilings and delayed feedback. Applying game-style ROAS logic to subscriptions leads to bad decisions.
    • Payback is a cash-flow constraint, not a best practice. The “right” payback window depends on how long your business can afford to wait to get paid back — not what investors or blogs suggest.
    • LTV is not a single number. Without time bounds and context (platform, channel, geo), LTV becomes theoretical and misleading. Payback periods make LTV actionable.
    • Annual plans change everything. They collapse uncertainty, improve cash flow, and simplify acquisition optimization. For most apps, increasing annual plan adoption and pricing has a bigger impact than almost any other lever.
    • Ads are not onboarding. The job of advertising is to interrupt the scroll and sell immediate value, not explain habit formation or long-term effort. That work belongs post-click.
    • Attention beats relevance. Ads don’t need to perfectly reflect the product to work; they need to stop the scroll. Winning narratives should then be reflected in onboarding and product experience.
    • Creative fatigue is a scaling risk. Over-reliance on a single winning creative can crash performance overnight. Diversification across formats, narratives, and micro-segments is essential.
    • AI doesn’t replace taste. It’s easier than ever to generate bad ads at scale. The advantage comes from human judgment, emotional specificity, and iterative refinement — not raw volume.

    Links & Resources:
    • Rocketship HQ: https://www.rocketshiphq.com/
    • Shamanth Rao LinkedIn: https://www.linkedin.com/in/shamanthrao/
    • Intelligent Artifice Newsletter: https://intelligentartifice.kit.com/

    Timestamps:
    00:00 – Cold open: Why subscription economics break common growth advice
    01:06 – Games vs subscriptions: monetization ceilings and delayed signals
    05:12 – Payback periods are cash-flow decisions, not benchmarks
    09:26 – Why LTV without context is misleading
    12:41 – Pricing as the most powerful lever in subscription growth
    15:00 – Why annual plans fundamentally change unit economics
    18:13 – Trial length strategy: short vs long trials
    19:30 – Why ads should sell immediate value, not habits
    25:30 – Why Duolingo is the exception to habit-based advertising
    30:30 – When ads should influence product and onboarding decisions
    37:41 – One-off purchases, pay-as-you-go, and AI monetization models
    40:30 – Creative fatigue and the danger of over-scaling winners
    46:00 – Micro-segmentation, AI ads, and human judgment
    54:20 – Closing thoughts

    55 min
  7. 7: Ekaterina Gamsriegler: How to engineer growth. Again and again.

    12/17/2025

    7: Ekaterina Gamsriegler: How to engineer growth. Again and again.

    - PricePowerPodcast.com
    - AI Pricing for your app: Botsi.com

    Ekaterina Gamsriegler (ex-Mimo, Amplitude Product50’s Top Growth Product Leader) breaks down why most growth teams struggle — not because of a lack of ideas, but because they optimize the wrong things, in the wrong order. Ekaterina walks through real-world examples across onboarding, paywalls, trials, activation, and pricing, showing how user psychology, perceived value, and expectation-setting matter more than dashboards alone.

    📖 Episode Chapters:
    00:00 Growth Does Not Start with an MMP
    01:40 Breaking KPIs into Controllable Inputs
    03:56 Why “Breaking Things Down” Gets You 80% There
    06:30 Product Analytics vs Attribution
    12:00 Onboarding Length vs Paywall Exposure
    16:00 Why Averages Are Always Wrong
    18:10 The Truth About Personalization
    23:30 Why Users Don’t Start Trials
    28:30 Understanding Early Trial Cancellations
    34:45 Why Longer Sessions Can Be a Bad Sign
    38:00 Pricing as a Growth Lever
    42:00 Fix the Story Before the Price
    44:00 Closing Thoughts

    💡 Key Takeaways:
    • Growth is a sequencing problem. Teams fail when they jump straight to solutions instead of first building a usable map of user behavior and breaking metrics into their underlying drivers.
    • Product analytics beats attribution early. You don’t need a perfect funnel — you need a reliable picture of what users actually do after install. MMPs come later.
    • Averages hide the truth. Looking at overall conversion rates masks real issues that only appear when you segment by device, channel, geo, or user intent.
    • More exposure ≠ more revenue. Increasing paywall impressions by removing onboarding screens often lowers trial conversion if user intent isn’t built first.
    • Personalization rarely delivers big wins. Most onboarding and paywall personalization produces single-digit uplifts while adding major complexity and risk.
    • Most early churn is voluntary. Users cancel trials early because they want control, not because they hate the product.
    • Time-to-value matters more than time-in-app. Longer sessions often mean confusion, not engagement.
    • Lowering prices can work — in specific cases. Misaligned mental price categories, lack of localization, missing feature parity, or mission-driven goals can justify it.
    • Pricing issues are often narrative issues. Before changing the price, fix how value is communicated and perceived.
    • Sustainable growth comes from focus. The best teams work on 2–3 high-confidence problems at a time — and say no to everything else.

    Links & Resources Mentioned:
    • Ekaterina on LinkedIn: https://www.linkedin.com/in/ekaterina-shpadareva-gamsriegler/
    • Maven course: https://maven.com/mathemarketing/growing-mobile-subscription-apps
    • Full presentation from Growth Phestival Conference: https://www.canva.com/design/DAGw09v8yIo/lfVoi-Xf4QRm6-ddmtro1A/view
    • Jacob's Retention.Blog

    47 min
  8. 6: Lucas Moscon: Conversion Values, SKAN, Fingerprinting, MMPs, and Mobile Attribution

    12/04/2025

    6: Lucas Moscon: Conversion Values, SKAN, Fingerprinting, MMPs, and Mobile Attribution

    Lucas Moscon, one of the most technically knowledgeable people in mobile attribution, breaks down how post-ATT measurement really works, why most marketers are using outdated mental models, and how to build a modern, resilient measurement stack. Lucas clarifies what’s deterministic vs probabilistic today, exposes where MMPs still add value (and where they absolutely don’t), and explains why IP-based fingerprinting quietly powers 90%+ of attribution today. He also walks through SKAN in plain English, conversion-value strategy, web-to-app pipelines, and why looking at blended ROI beats chasing ROAS illusions on iOS. If you want to understand the actual mechanics behind click → install → revenue pipelines — and why Apple’s privacy tech is failing in practice — this episode is for you.

    What you’ll learn:
    • Why ATT didn’t “kill” attribution — it forced marketers to juggle deterministic, probabilistic, and blended layers
    • How Meta/Google matching actually works (spoiler: 90%+ relies on IP, not magic AI)
    • Why SKAN isn’t enough — and why relying on ROAS on iOS is the least trustworthy metric
    • How to measure effectively without over-reacting to noisy campaign-level data
    • When you truly need an MMP today — and why most apps don’t
    • How to correctly design conversion values for SKAN without over-engineering
    • Why retention determines how many conversion values you even receive
    • How to triangulate data across store consoles, subscription platforms, MMPs, and ad networks
    • Why focusing on payback windows (D60–D180) outperforms optimizing for short-term ROAS
    • Why probabilistic fingerprinting is still powering the ad ecosystem — and why Apple hasn’t stopped it

    Key Takeaways:
    • iOS ROAS is the noisiest metric you can use. Without IDFA, everything is extrapolated. High-confidence decision-making must use blended revenue and cohort ROI, not ad-platform ROAS.
    • Modern attribution = multiple layers. Post-ATT, performance requires triangulating data from SKAN, ad networks, subscription platforms, and product analytics — not trusting a single source of truth.
    • Fingerprinting ≠ complex algorithms — it’s mostly IP. Internal tests showed that greater than 90% of probabilistic matches come from IP alone. All the “advanced modeling” narratives are overstated.
    • Most apps don’t need an MMP anymore. Exceptions: running AppLovin/Unity DSPs, React Native/Flutter SDK support gaps, or complex Web-to-App setups where Google requires certified links. Otherwise, MMPs mostly add cost, not clarity.
    • Retention determines SKAN visibility. If users don’t reopen the app, conversion values won’t update — meaning SKAN under-reports trials/purchases unless retention is strong.
    • Blend deterministic + probabilistic + aggregated signals. The goal isn’t precision — it’s directionally confident decisions across imperfect data. Marketers should work in ranges, not absolutes.
    • Longer payback windows unlock scale. Teams willing to accept D60–D180 payback dramatically out-spend competitors optimizing for D7 ROAS — assuming they have strong early-day proxies to detect failing cohorts.
    • MMPs don’t magically fix discrepancies. Even with one SDK, marketers still see mismatches across networks, stores, and internal analytics. The “one SDK solves it” narrative is outdated.

    Links & Resources:
    • Appstack: https://www.appstack.tech/
    • Appstack library of resources: https://appstack-library.notion.site/
    • Lucas Moscon LinkedIn: https://www.linkedin.com/in/lucas-moscon/

    Timestamps:
    00:00 Opening Hot Take: “Are You Really Saturating Meta?”
    05:00 Early Indicators & Proxy Metrics (D3–D10)
    09:00 Predicting Cohort Success from Day 3–10
    11:00 How Click → Install Attribution Actually Works
    14:00 Web-to-App Infrastructure (Fingerprinting + SDK Flow)
    18:00 Meta/Google Matching: IDFA, AEM, SKAN
    24:30 Fingerprinting Reality: Why IP = 90% of Matches
    27:00 Apple’s Privacy Messaging vs Actual Enforcement
    30:30 How Apple Ads Uses (or Ignores) SKAN
    35:00 Should You Use an MMP in 2025?
    46:00 SKAN Conversion Value Mapping: The 63/62 Strategy
    49:00 Why Retention Determines SKAN Postbacks
    54:00 App Stack Overview + Closing Thoughts

    56 min
