Price Power

Jacob Rushfinn

The Price Power Podcast covers all things growth, retention, and monetization for subscription mobile apps. We talk with amazing leaders in the industry and share their knowledge with you. Hosted by Jacob Rushfinn, CEO of Botsi.

  1. 16: How to Build a Subscription App in the AI Era w/ Alice Muir

    1D AGO

    16: How to Build a Subscription App in the AI Era w/ Alice Muir

    Alice Muir, an independent subscription consultant who has worked with Headspace, VSCO, Adobe, SoundCloud, and MyFitnessPal, explains why "higher engagement equals lower profit" is the new reality for AI apps, how to use strategic friction without choking activation, and why most consumers don't actually care that your app is AI-powered.

    What you'll learn:
    - How strategic friction worked for MyFitnessPal, and what they got wrong by gating barcode scanning too late
    - Why "protect the learning actions, charge for the outcomes" beats free-vs-paid debates
    - How to handle the 90% of installs that never subscribe when free users now cost real money
    - When hybrid monetization actually makes sense and when it just adds complexity
    - Why weekly subscriptions are a proxy for usage-based pricing on novelty AI apps
    - Why margin-qualified acquisition matters more than CAC alone for AI products
    - How Flibbo used persona tiers on the paywall to get users to self-identify their willingness to pay
    - Why a fitness app got a 6x paywall lift by removing strikethroughs, countdown timers, and stacked offers
    - What the Subscription Stack framework needs to add for the AI era
    - The back-of-the-napkin math founders should run before shipping any AI feature
    - Why consumers may actually be turned off by "AI-powered" positioning

    Key Takeaways:
    - Protect learning actions, charge for outcomes. Users should learn what your product does for free. The thing that completes their job is what they pay for.
    - GPU cost is a CAC line item, not a margin problem. If everyone you acquire gets one free generation, that compute cost belongs in your acquisition budget. Run worst-case-scenario math before you ship (see the sketch after these takeaways).
    - Highest engagement is now lowest profit. Traditional subscription thinking inverts when each interaction has a real cost. The Subscription Stack engagement layer needs a full revision for AI apps.
    - Weekly subscriptions are a usage proxy. AI apps see strong initial conversion and terrible retention because users have high intent for short bursts. Annual plans for a song generator are a fantasy.
    - Self-identification at the paywall beats the questionnaire. Flibbo put basic, pro, and max personas on the paywall. Users picked the one that matched their use case.
    - Simpler paywalls outperform copycat paywalls. Stripping countdown timers, strikethroughs, and stacked plan tiles from a fitness web-to-app funnel produced a 6x lift.
    - Consumers don't care about AI as a feature. They care about the outcome. "AI-powered coach" can read as cheap, not premium. Lead with benefit, not technology.
    - Don't add AI just to add AI. If a feature doesn't measurably improve retention or activation, you're paying GPU costs to compress your margin.
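    The "GPU cost is a CAC line item" takeaway comes down to a few lines of arithmetic. A minimal sketch in Python; every number below is a hypothetical placeholder, not a figure from the episode:

    ```python
    # Back-of-the-napkin worst-case math before shipping an AI feature.
    # All inputs are hypothetical placeholders.

    paid_cac = 4.00             # ad spend per install ($)
    free_generations = 3        # free AI generations granted to every install
    cost_per_generation = 0.08  # worst-case GPU/API cost per generation ($)
    subscribe_rate = 0.10       # share of installs that ever subscribe

    # Compute given away to non-payers is an acquisition cost, not a margin line.
    compute_per_install = free_generations * cost_per_generation
    effective_cac = paid_cac + compute_per_install

    print(f"Effective CAC per install:    ${effective_cac:.2f}")                   # $4.24
    print(f"Effective CAC per subscriber: ${effective_cac / subscribe_rate:.2f}")  # $42.40
    ```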
    Links & Resources:
    - Alice Muir on LinkedIn: https://www.linkedin.com/in/alicemuir/
    - Subscription Stack framework: https://phiture.com/resources/subscriptionstack/
    - Andrew Chen on consumer reactions to AI: linkedin.com/posts/andrewchen_when-consumers-dont-care-that-youre-building-activity-7358342997639360512-wqKM
    - Thomas Petit's RevenueCat article on hybrid monetization: https://www.revenuecat.com/blog/growth/ai-hybrid-monetization/

    Timestamps:
    00:00 – Intro
    01:25 – Baby raves in Berlin and the new May Day
    02:58 – How the playbook changed: from acquisition-first to retention-first
    07:25 – Strategic friction and the MyFitnessPal example
    10:43 – Hard paywalls vs letting users discover value
    11:55 – Protect learning actions, charge for outcomes
    17:25 – The 90% problem: monetizing low-intent users
    21:36 – When hybrid monetization actually makes sense
    26:15 – Apple tax, GPU costs, and the AI app profitability squeeze
    28:55 – Why weekly pricing fits novelty AI apps
    33:53 – Margin-qualified acquisition for AI apps
    39:08 – Flibbo's self-identifying paywall personas
    43:55 – The 6x paywall win: stripping out the fluff
    47:56 – Revisiting the Subscription Stack for the AI era
    51:55 – Switching models to protect margin
    53:38 – What founders should get right before adding AI
    56:58 – Hot take: consumers don't care about AI

    1h 5m
  2. 15: How to start with Signal Engineering w/ Shumel Lais

    APR 22

    15: How to start with Signal Engineering w/ Shumel Lais

    Shumel Lais, co-founder of Day30 and previously founder of Appsumer (acquired by InMobi), explains why most subscription apps feed ad platforms the wrong goal, how precision and recall reshape signal selection, and what a realistic measurement maturity ladder looks like in 2026.

    Shumel walks Jacob through the five stages of measurement maturity, from apps that just compare App Store Connect revenue to ad spend, through MMP attribution and cohorted reporting, up to incrementality testing for the largest spenders. He breaks down why signal engineering only makes sense once you have the right foundation in place, shares the 10-conversions-per-campaign-per-day rule of thumb for when to go further down funnel, and unpacks the restaurant booking app mistake that first put him onto the precision/recall framework.

    What you'll learn:
    - Why optimizing to cost-per-trial leaves money on the table for most subscription apps
    - How Meta's 7-day visibility window forces the signal engineering problem
    - Why recall, not precision, is the metric most marketers overlook
    - Why the restaurant booking app example was Shumel's own mistake, and what it taught him
    - How Meta's event-day reporting can hide renewals inside new purchase counts
    - Why server-side events struggle more with matching than client-side events
    - How to decide between revenue-value signals and binary convert/no-convert signals
    - Why subscription apps are years behind gaming on analytics maturity
    - The 10 conversions per campaign per day floor before attempting signal engineering
    - When LTV curves become reliable enough to extend payback from 30 days to 6+ months

    Key Takeaways:
    - Signal engineering is closing the gap between what the platform can see and what you actually care about. Meta sees 7 days. You care about month 3 revenue.
    - Recall is the metric most teams forget to measure. Precision tells you if the users firing your signal convert. Recall tells you what share of your actual converters it captures. A signal with 90% precision and 40% recall tells the algorithm that 60% of your good users are bad (see the sketch after these takeaways).
    - There are five levels of measurement maturity, and most apps skip steps. ASC comparison → platform attribution → MMP → cohorted reporting → incrementality. Signal engineering is a level 3 or 4 exercise. Attempting it earlier wastes the effort.
    - The 10-conversions-per-campaign-per-day rule. Below that, Meta cannot learn from a more selective signal. Above 30 to 40 per day, you are leaving performance on the table by not going further down funnel.
    - Meta reports on event day, not install day. Renewals fire as purchase events, so Meta can claim credit for users who were already paying. Without install-cohorted MMP visibility, you are paying to acquire users you already had.
    - Speed of signal affects matching quality and algorithm learning. Events sent within 24 hours have more matching parameters, and they let Meta decide if a user is good without waiting 7 days for the purchase to come through.
    - The restaurant booking app was Shumel's own mistake. Before Day30, he optimized toward behaviors that correlated with bookings but were not causal. Performance did not move. The fix was cohorts, observation windows, and a binary prediction statement.
    - Measurement problems are not an excuse anymore. In 2026, the tools exist and the playbooks exist. Hiding behind attribution gaps is a choice, as is hiding behind blended CAC when direct CAC is uncomfortable.
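    The precision/recall framing above is standard classification math applied to ad signals. A minimal Python sketch with hypothetical counts that reproduces the 90%-precision, 40%-recall signal described in the takeaways:

    ```python
    # "Fired" = users who triggered the event you send to Meta.
    # "Converted" = users who reached the outcome you actually care about
    # (e.g. month-3 revenue). All counts are hypothetical.

    fired_and_converted = 360  # fired the signal and later converted
    fired_not_converted = 40   # fired the signal but never converted
    converted_not_fired = 540  # converted without ever firing the signal

    precision = fired_and_converted / (fired_and_converted + fired_not_converted)
    recall = fired_and_converted / (fired_and_converted + converted_not_fired)

    print(f"precision = {precision:.0%}")  # 90%: users firing the signal are mostly good
    print(f"recall    = {recall:.0%}")     # 40%: the signal misses 60% of your converters
    ```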
    Links & Resources:
    - Day30: https://day30.ai
    - Shumel Lais on LinkedIn: https://www.linkedin.com/in/shumellais/

    Timestamps:
    00:00 Shumel's background and early mobile agency days
    00:56 The signal engineering framing and how Day30 landed on it
    03:30 A basic example: trials vs trials plus behavior
    05:56 Why signal engineering exists (attribution gap, not just subscriptions)
    08:45 Signal volume as the second dimension after precision
    09:30 Defining recall and the photo storage app example
    15:58 When to send revenue values vs binary convert/not-convert
    16:41 The restaurant booking app mistake and causation vs correlation
    19:33 Experiments are still the only real proof
    20:00 Measurement maturity level 1: no MMP, just ASC
    22:37 Do you actually need an MMP to start?
    23:39 Level 3: why MMP matters (Meta's event-day reporting trap)
    25:37 Level 4: cohorted metrics and aligning on day-30 ROAS
    26:30 Level 5: incrementality and MMM for the largest spenders
    27:35 The 10 conversions per campaign per day threshold
    29:30 Why the MMP matters for signal engineering (measurement, not the signal itself)
    31:03 MMP vs Conversions API for sending signals
    33:04 SDK vs server-side: matching and speed
    36:43 Payback periods and when to extend them
    40:32 Simple inputs for a basic predictive LTV model
    42:52 If you're running Meta to CPT today, what do you change first
    44:41 The quantity vs quality of signal tradeoff
    46:48 Hot takes: no more hiding behind attribution
    48:02 Favorite pricing and packaging tactics seen recently
    50:08 Day30's free signal audit offer

    55 min
  3. 14: Fix Activation Before Growth w/ Daphne Tideman

    APR 8

    14: Fix Activation Before Growth w/ Daphne Tideman

    Daphne Tideman, a growth advisor and consultant for subscription apps, explains why most retention problems are actually activation problems, how to distinguish vanity activation metrics from ones that predict real retention, and why the aha moment should start in your ads, not just your product.

    Daphne walks through her evolution from treating activation as a simple funnel step to seeing it as a layered, behavioral process spanning the first 7 to 30 days. She shares real examples from growth audits where onboarding completion rates looked great but users vanished by day two, and breaks down the "time to first value" vs. "time to core value" framework for thinking about activation in stages. She also makes a case for monthly subscriptions as a faster learning tool for startups, and explains why revenue is a terrible North Star metric.

    What you'll learn:
    - Why onboarding completion is often a vanity metric that hides activation failures
    - How to identify whether your retention problem is actually an activation problem
    - Why "any action vs. no action" comparisons overstate the value of weak activation metrics
    - How to build mini aha moments into onboarding before the paywall
    - How to use the "time to first value" vs. "time to core value" framework
    - Why monthly subscriptions can help startups learn faster about activation
    - How to test whether an activation metric is predictive or just correlated
    - When user interviews beat quantitative analysis for defining activation
    - Why extending onboarding can drop completion rates but improve retention
    - How to diagnose activation vs. retention vs. acquisition problems
    - Why revenue as a North Star metric leads teams to extract value instead of create it

    Key Takeaways:
    - Onboarding completion is a vanity metric. An app had over 90% onboarding completion on both platforms, but most users were gone by day two. The onboarding was too short and easy to click through. When they extended it and built in value-delivering steps before the paywall, completion dropped but retention improved.
    - Your retention problem is probably an activation problem. For most apps, losing users in the first 30 days isn't a retention failure. It's an activation failure. Daphne argues we even mislabel it: "day two retention" and "day seven retention" describe periods when you're still activating users, not retaining them. True retention problems show up when users were active early but trickle off later.
    - Activation should start in the ad. Showing the job to be done and the transformation in your ad creative builds trust before users even open the app. A coding app's best-performing ad showed someone coding in a lift, making viewers think "I could find time for that too."
    - Correlation isn't causation in activation metrics. Any action will always look better than no action. The real work is finding which behaviors, at what volume and timing, predict retention across cohorts and channels.
    - Mini aha moments beat one big moment. Instead of trying to engineer a single big aha moment (which is often technically difficult), build multiple smaller moments of perceived value. These can be as simple as a personalized plan, a visual showing the outcome, or a first small win before the paywall.
    - Monthly plans help you learn faster. For startups without much data, monthly subscriptions force users to make a renewal decision every month, which generates faster signal on who is truly activated vs. who is coasting on inertia.
    - Revenue is a terrible North Star metric. It pushes teams toward extracting value from users rather than creating it. Activation and usage metrics better align the team's incentives with user outcomes.

    Links & Resources:
    - Daphne Tideman's Growth Waves newsletter: https://growthwaves.substack.com/
    - Daphne Tideman on LinkedIn: https://www.linkedin.com/in/daphnetideman/

    Timestamps:
    00:00 Intro and Daphne's path from e-commerce to app growth consulting
    01:20 How activation thinking evolves from 2D to 3D
    04:20 Common activation mistakes: oversimplifying and picking the wrong metric
    05:50 Why standard metrics weren't predicting retention
    07:20 Onboarding completion as a vanity metric: 90% completion, gone by day two
    10:20 Activation vs. monetization: which to fix first
    13:20 Building mini aha moments into onboarding and ads
    17:50 User interviews and the role of emotions in activation
    20:20 Your retention problem is actually an activation problem
    23:20 Time to first value vs. time to core value framework
    27:20 How to test whether an activation metric is real or vanity
    29:20 Starting with user interviews vs. data when you lack scale
    31:50 Correlation vs. causation: finding the right activation threshold
    34:20 Learning from failed experiments
    36:50 Diagnosing activation vs. retention vs. acquisition problems
    39:20 Why activation problems are more common than retention problems
    42:20 Matching subscription models to use cases
    44:50 Biggest activation mistake apps make right now
    45:50 Lightning round: pricing wins, hot takes, and best activation results

    50 min
  4. 13: The Four Horsemen of Churn w/ Dan Layfield

    MAR 25

    13: The Four Horsemen of Churn w/ Dan Layfield

    Dan Layfield, author of Subscription Index and former product lead at Codecademy and Uber Eats, explains why churn is the silent ceiling on subscription growth, how to diagnose which type of churn is killing your business, and the pricing trick that can double your LTV overnight. Dan walks through his four horsemen framework: payment failures, activation issues, pricing and plan mix, and voluntary cancellation. He shares the bottom-up optimization approach he uses with every company, starting with Stripe settings that take 10 minutes to fix.

    What you'll learn:
    - Why your Stripe retry settings are probably wrong and how to fix them in 10 minutes
    - How to calculate your growth ceiling using churn rate and acquisition numbers
    - Why payment receipts might be reminding users to cancel every month
    - How to price annual plans based on your monthly retention data
    - How to build cancellation flows that save 20% of churning users
    - Why activation experiments are tricky and often produce duds
    - Why quality problems are the easiest growth fixes

    Key Takeaways:
    - Churn dictates your ceiling. New users divided by churn rate equals your max subscribers. 1,000 new users with 20% churn = 5,000 subscriber ceiling. Lowering churn raises that ceiling proportionally (see the sketch at the end of these notes).
    - Start at the bottom of the funnel. Stripe settings, dunning emails, and card updaters can be fixed in minutes and win back 5% of churn. Do these before tackling bespoke activation problems.
    - Annual pricing should match monthly LTV plus one or two months. If average retention is five months, price annual at six months. It looks like a steep discount but doubles LTV.
    - Turn off monthly email receipts. Netflix, Spotify, and Amazon don't send them. That monthly reminder is a monthly prompt to cancel.
    - Cancellation flows should solve the underlying problem. Pausing works when the need is temporary. Downgrading works when they're paying for unused features.

    Links & Resources:
    - Subscription Index: https://subscriptionindex.com
    - Dan Layfield on LinkedIn: https://www.linkedin.com/in/layfield/

    Timestamps:
    00:00 Intro and Dan's path from JP Morgan to Codecademy
    04:00 Freemium conversion benchmarks: sub-1% vs. good (3%) vs. great (7%)
    06:30 The growth ceiling formula
    08:00 The four horsemen of churn
    12:00 Bottom-up optimization: start with Stripe settings
    13:30 Cancellation flow tactics: pause, discount, upgrade/downgrade
    19:30 Payment failure quick wins: smart retries, card updater, dunning emails
    22:30 The annual pricing trick that doubled LTV at Codecademy
    30:00 Activation and the Reforge framework
    37:30 Onboarding should show value, not just explain device setup
    42:30 Ethical cancellation flows and click-to-cancel legislation
    49:30 Screenshot audit: where to start when you're stuck
    52:30 Turn off monthly receipts: the easiest churn win
    53:30 Lightning round
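    Both the growth-ceiling formula and the annual-pricing rule from the takeaways fit in a few lines. A quick Python sketch with illustrative numbers (the 1,000/20% figures are from the episode; the prices are placeholders):

    ```python
    # 1. Growth ceiling: at steady state, max subscribers equals
    #    new subscribers per period divided by the churn rate.
    new_users_per_month = 1_000
    monthly_churn = 0.20
    print(f"Ceiling at 20% churn: {new_users_per_month / monthly_churn:,.0f}")  # 5,000
    print(f"Ceiling at 10% churn: {new_users_per_month / 0.10:,.0f}")           # 10,000

    # 2. Annual pricing rule of thumb: charge roughly the average monthly
    #    LTV plus one month.
    monthly_price = 10.00        # hypothetical monthly price ($)
    avg_retention_months = 5     # average months a monthly subscriber stays
    annual_price = monthly_price * (avg_retention_months + 1)
    print(f"Annual price: ${annual_price:.0f} vs. ${monthly_price * 12:.0f} list")  # $60 vs. $120
    ```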

    1 hr
  5. 12: Price Testing for Subscription Apps with Michal Parizek

    MAR 12

    12: Price Testing for Subscription Apps with Michal Parizek

    Michal Parizek, pricing and growth lead at Mojo, explains how to predict long-term revenue from short-term price test data, why Apple's automatic regional pricing is wrong for most apps, and how to sequence pricing, packaging, and paywall tests for maximum impact.

    Michal walks through the 13-month revenue projection model he built at Mojo, which uses seven-day cancellation rates as a proxy for annual renewal rates. He shares how his team raised yearly prices by 50% in the US and Germany with minimal conversion drop, how they tested free trial lengths and found almost no difference between three-day and seven-day trials, and why the ratio between monthly and yearly plan prices matters more than the absolute price point.

    What you'll learn:
    - How to use seven-day cancellation rates to project 13-month revenue
    - Why Apple's exchange-rate-only pricing leaves money on the table
    - How to sequence price tests: price first, then packaging, then paywall design
    - Why the monthly-to-yearly price ratio drives plan share more than absolute price
    - How hiding the monthly plan pushed yearly share from 60% to 80%
    - Why free trials still matter for new users, despite advice to remove them
    - How three-day trials performed as well as seven-day trials at Mojo
    - Why your first price test should have big price gaps, not small ones
    - How traffic source mix can distort price test results
    - Why a 100% price increase was a short-term winner but long-term loser

    Key Takeaways:
    - Seven-day cancellation rate is a reliable early signal. 20-30% of cancellations happen in the first seven to ten days. Measure that rate per variant, project renewal rates from it, and you can evaluate a price test without waiting months. Mojo validated this against real data and it held (see the sketch after these takeaways).
    - Apple's regional pricing is just exchange rate math. No purchasing power, no local context. Look at your top five markets individually, compare conversion funnels by country, and cross-reference competitor pricing.
    - Pricing and packaging beat paywall design in impact. Changing price points, plan structures, and introductory offers had more effect than design or copy. Start with pricing, then plan mix, then layout.
    - The monthly-to-yearly price ratio drives plan selection. Changing only the monthly price shifted yearly subscriber share significantly. The perceived deal relative to monthly is a strong behavioral lever.
    - Don't remove free trials for new users without testing. Mojo tried it based on popular advice and saw revenue decline. Test it for your app.
    - Start price tests with big jumps. Test $40 vs $60 vs $80, not $50 vs $48 vs $52. Find the zone first, refine later.
    - Revisit cohorts months after shipping. Mojo's 100% price increase looked great short-term but cancellation rates spiked. The 13-month projection caught it.
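    A heavily simplified sketch of the projection idea, per test variant. The calibration (what share of first-year cancellations lands in week one) is a placeholder assumption loosely based on the 20-30% benchmark mentioned above, not Mojo's actual model:

    ```python
    subscribers = 1_000    # new yearly subscribers in this price variant
    price_yearly = 49.99   # hypothetical price point ($)
    cancel_rate_7d = 0.10  # share that turned off auto-renew within seven days

    # Placeholder assumption: week-one cancellations are ~25% of all
    # first-year cancellations (episode benchmark: 20-30%).
    share_of_cancels_early = 0.25
    projected_cancel_12m = min(cancel_rate_7d / share_of_cancels_early, 1.0)
    renewal_rate = 1 - projected_cancel_12m

    # Year-one revenue plus projected renewals at month 13.
    revenue_13m = subscribers * price_yearly * (1 + renewal_rate)
    print(f"Projected renewal rate:     {renewal_rate:.0%}")   # 60%
    print(f"Projected 13-month revenue: ${revenue_13m:,.0f}")  # $79,984
    ```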
    Links & Resources:
    - Michal Parizek's Botsi blog post: https://www.botsi.com/blog-posts/pricing-experiments-the-backbone-of-mojos-monetization-success
    - Michal Parizek on LinkedIn: https://www.linkedin.com/in/michalparizek/

    Timestamps:
    0:00 Intro
    1:03 Using seven-day cancellation rates to predict 13-month revenue
    3:25 Building the report template and data pipeline
    6:13 Validating the renewal rate prediction model
    10:03 Benchmarks for new apps without renewal history
    12:09 Why Apple's automatic price tiers are wrong
    13:33 How to research and set regional prices
    17:10 Relationship between pricing, packaging, and paywall design
    21:15 Sequencing: price first, then packaging, then design
    23:55 Why paywall layout tests that touch plan visibility are most impactful
    26:41 Free trial strategy and length testing
    31:03 Paid trial options as an emerging trend
    33:16 The biggest mistake: not having enough data volume
    35:56 Raising prices 50% in the US and Germany
    38:46 Start with big price gaps, refine later
    40:11 Don't be afraid to test prices

    42 min
  6. 11: Lessons from a Founder: What Sasha Learned Launching a Mental Health App

    FEB 25

    11: Lessons from a Founder: What Sasha Learned Launching a Mental Health App

    Sasha, founder of Anticipate (a mental health app), explains why she accepted an overly broad problem statement during validation, how she used Reforge's product-market fit narrative framework to test hypotheses without building, and what she learned after eight rounds of iteration that still didn't land product-market fit.

    Sasha came into this with a real edge: years of marketing technology and data consulting for companies like Flo Health gave her the insight to use behavioral data for mental health. But translating deep domain expertise into a focused, sellable product turned out to be a different problem entirely. She walks through the specific moment her PMF interviews led her astray, why the Blue Ocean Strategy canvas revealed she was charging for features users get for free elsewhere, and the five pieces of advice from advisors that finally helped her reframe everything.

    What you'll learn:
    • Why emotionally compelling answers in user interviews can mislead you into solving problems too large to tackle
    • How Reforge's PMF narrative framework structures hypothesis validation before a single line of code is written
    • Why product-market fit interviews need to go past the top-level pain and drill into specific, solvable sub-problems
    • How the Blue Ocean Strategy canvas revealed Sasha was charging for features available for free
    • Why willingness to pay and perceived value are not the same thing, and why conflating them kills monetization strategy
    • How Apple in-app events can give early-stage apps a meaningful boost in rankings and visibility
    • Why Reddit feedback, brutal as it is, beats feedback from friends and family every time
    • How to identify your real competitors by talking to people who don't use any product in your category
    • Why going viral before you understand your retention is more dangerous than growing slowly
    • How Gamma's "ruthless focus on the first 30 seconds" applies to any early-stage product
    • Why "hell yes" should be the bar for every slide in your demand validation deck before you build anything
    • How to layer in analytics tools incrementally rather than setting up a full stack before you need it

    Key Takeaways:
    • Don't take big emotional truths at face value. When Sasha asked users about mental health, they told her they never wanted to experience a crisis again. That's real. But it's so large and ambiguous that no small startup can solve it. She should have pressed further: what specific behaviors or sub-problems sit underneath that fear? One reachable problem beats ten important ones.
    • Sell before you build. A slide deck that walks users through a problem and proposed solution is a much cheaper way to iterate than building product. If you're not getting "hell yes" reactions slide by slide, the product wouldn't have landed either. Change the deck first.
    • Willingness to pay is not the same as value. Some use cases are genuinely valuable to users but they'll never pay for them because they see the data as theirs, or because it's available elsewhere for free. Knowing which features fall into which bucket before you write your pricing page saves a lot of pain.
    • Your real competitors are probably not in your app category. Anticipate doesn't compete with Headspace or Calm. It competes with Apple Health, fitness apps, and the mental math people already do in their heads. Talking to non-customers revealed this, and it completely changed the product strategy.
    • Be deliberate about your first 100 users. A Reddit launch spike or a Product Hunt bump feels like traction, but the signal is noisy. The first users should be chosen for the quality of feedback they can give, not for their contribution to MRR. Get 10 people who genuinely love the product, understand why, then figure out how to find 100 more of them.
    • Virality is math, not magic. If viral growth is part of the strategy, it has to be built into the product and marketing engine from the start. A one-off spike from the wrong audience will tank your retention cohorts and give you data that doesn't mean anything.
    • Build your analytics stack incrementally. Start with your database. Add simple app open events mapped to user IDs. When you know what's missing, layer in Amplitude for product analytics and AppsFlyer for attribution. Don't install tools you don't have a clear use for yet.
    • Prepare for the long run. One piece of advice Sasha received that stuck: figure out how long you can stay in the game without damaging your quality of life. Early-stage building is a long game. Sustainability matters.

    Links & Resources:
    • Reforge (Product-Market Fit Narrative course): reforge.com
    • Blue Ocean Strategy: blueoceanstrategy.com
    • Rob Snyder / Harvard Innovation Labs (Path to PMF): search "Rob Snyder Harvard Innovation Labs PMF"
    • Prolific (user research panel): prolific.com
    • Amplitude (product analytics): amplitude.com
    • AppsFlyer (mobile attribution): appsflyer.com
    • Gamma (AI presentation tool): gamma.app
    • Anticipate App: https://apps.apple.com/us/app/anticipate-ai-therapy-notes/id6746043684
    • Sasha on LinkedIn: https://www.linkedin.com/in/aliaksandralamachenka/

    Timestamps:
    0:00 Beginning
    1:21 Intro and Sasha's background in MarTech and mental health
    2:20 How the Anticipate idea was born from behavioral data
    4:41 Using Reforge's PMF narrative framework before building
    8:26 The PMF interview mistake: accepting a big ambiguous problem
    14:38 The flight analogy for finding specific, solvable problems
    15:22 Should you research less and build faster?
    20:47 Why you should start with demand, not a product
    21:51 Willingness to pay vs. perceived value in consumer apps
    23:37 Being intentional about your first users
    27:21 Why Reddit feedback is actually valuable
    31:49 Current growth channels and why Sasha paused scaling
    34:51 Five pieces of advice from advisors
    40:10 Blue Ocean Strategy: mapping competitors and finding gaps
    45:21 Why non-consumers are the most important interview group
    47:21 Who Anticipate's real competitors actually are
    56:18 How to set up analytics step by step as a small team
    1:01:15 Gamma's "first 30 seconds" strategy and why it matters
    1:02:51 Sasha's next steps and final advice for founders

    1h 8m
  7. 10: Why the Weird Ad Wins: CEO of Ramdam on Finding UGC Champions | Xavier de Baillenx

    FEB 11

    10: Why the Weird Ad Wins: CEO of Ramdam on Finding UGC Champions | Xavier de Baillenx

    Xavier, CEO and co-founder of Ramdam, breaks down how subscription apps can scale creator ads on TikTok and Meta, why volume beats perfection in UGC testing, and where AI-generated video actually makes sense (and where it doesn't).

    Xavier spent five years at Match Group working on AI teams after his dating app was acquired. He then launched an app studio and discovered firsthand how painful it was to find winning ad creatives: months of testing 50 different videos just to find one that cut his cost per install by 5x. That frustration became Ramdam, a platform that helps consumer apps produce creator ads at scale. The company now works with Tinder, PhotoRoom, Flo, and other category leaders, delivering over 10,000 creatives per month.

    What you'll learn:
    - Why a 5% success rate on ads is completely normal (and how to structure campaigns around it)
    - How to start a UGC test: 20-40 creators, 4-5 concepts, $20-50K minimum spend
    - Why US English ads often perform in non-English speaking markets
    - How winning apps keep one narrative from ad to paywall
    - Why TikTok carousel ads are massively underrated for dating apps
    - How to structure "test" vs "scale" campaigns to measure both CPI and ROAS
    - When AI-generated video makes sense: hard-to-source personas, scaling winning concepts
    - Why the ad your team wants to reject might get 350 million views
    - How Ramdam uses AI to match briefs with creators and QA videos before delivery
    - Why "happy accidents" from real creators still outperform AI-perfect executions

    Key Takeaways:
    - Volume always wins over perfection. 50 different creators who don't perfectly match your persona will beat 5 who do. You can't predict which ad will work. Even Xavier, after thousands of campaigns, has no idea which ad will succeed when he sees it. The only strategy that works is testing at scale and following the data.
    - Winning ads have a 2-3 week lifespan. Ad fatigue is real. If you're scaling on TikTok or Meta, you need to refuel with new creatives every month. The biggest spenders are producing 1,000+ creatives per month to stay ahead of fatigue.
    - Start broad, then replicate winners. Early briefs should leave room for "happy accidents" where creators interpret the concept in their own style. Once you find a winner, run replicate campaigns: same hook, same narrative structure, but new faces and fresh energy.
    - The ad-to-paywall story must be consistent. Winners keep one promise throughout the entire journey. If the ad says "sleep better in 7 minutes," that same message should appear on the store page, onboarding, and paywall. Breaks in this narrative kill conversion.
    - AI video is a complement, not a replacement. AI-generated creators work for hard-to-source personas (high-income demographics, pregnant women, complex scenes). But they can't produce the weird, human moments that go viral. Find winning concepts with humans, then scale variations with AI.
    - TikTok and Meta behave differently. TikTok rewards short (around 10 seconds), trend-driven content with trending sounds. Meta prefers structured narratives, product demos, and 15-30 second videos. Carousels perform well on both, especially for storytelling.
    - Creator diversity expands reach. Meta and TikTok treat ads with the same creator as nearly identical. Using many different faces helps you reach new audiences. This is why Ramdam assigns one creator per video across their 50K creator network.
    - One ad can change everything. This business follows power law dynamics, similar to the music industry. Most ads do nothing. A small percentage capture all the budget. One viral hit can transform an app's trajectory overnight.

    Bonus for podcast listeners: Xavier can walk you through a fully personalized demo and share creative insights here: https://meetings-eu1.hubspot.com/xavier-de-baillenx/30min?utm_campaign=jacob-post&utm_source=linkedin&utm_medium=social

    Links & Resources:
    - Ramdam: ramdam.io
    - Xavier on LinkedIn: https://www.linkedin.com/in/xavier-de-baillenx/
    - Email: xavier@ramdam.io (mention Botsi Podcast for a personalized demo)
    - The TikTok SwipeWipe video mentioned in the episode: tiktok.com/@vdanielle22/video/7298313654594800942

    Timestamps:
    00:00 Intro/Teaser
    03:00 Xavier's background: Universal Music to Match Group to Ramdam
    05:00 UGC formats explained: Classic, Trends, Carousels
    09:30 Ad lifespan and creative fatigue
    11:30 Why volume and experimentation beat perfection
    15:30 Starting a UGC test: creators, concepts, budget
    19:00 Creator diversity and platform algorithms
    23:00 Balancing authenticity with replication
    26:00 TikTok vs Meta: what works on each
    30:00 Connecting ad performance to product funnels
    36:00 Structuring test vs scale campaigns
    38:00 How Ramdam uses AI for creator matching and QA
    43:00 AI-generated video: use cases and limitations
    49:30 Marketing fundamentals: clarity and authenticity
    51:30 Counterintuitive learnings from UGC

    55 min
  8. 9: Frameworks for Meta's AI-driven advertising w/ Marcus Burke

    JAN 28

    9: Frameworks for Meta's AI-driven advertising w/ Marcus Burke

    Marcus Burke, Meta Ads consultant, explains why blended CPA is misleading, how creative format determines your audience targeting, and what signal engineering means for subscription apps in an AI-driven ad landscape.

    Marcus breaks down his approach to working with Meta's algorithm rather than against it. He advocates for strategic ad set segmentation based on where different creative formats naturally deliver: static ads to Facebook feed, short-form video to Instagram Reels.

    The conversation goes deep on the relationship between product design and ad optimization. Marcus explains how your subscription model, trial length, and paywall structure all affect the quality of signal you can send back to Meta. Sometimes optimizing for LTV conflicts with optimizing for ad signal, and growth teams need to navigate that tension intentionally.

    What you'll learn:
    • Why a $10 cost per trial can lose money while a $100 cost per trial can be profitable
    • How to use creative format (static vs. video vs. playable) to control placement distribution
    • Why "broad targeting" often results in narrow reach and high frequency on the same audience
    • How to structure ad sets by expected delivery rather than demographic targeting
    • What value rules are and how to use them for country, age, and gender optimization
    • Why the conversion event you optimize for should determine your account architecture
    • How to connect onboarding survey data with Meta demographic breakdowns
    • Why cold social traffic requires a fundamentally different onboarding approach than search traffic
    • What makes an effective "aha moment" before the paywall
    • How multi-price point strategies enable broader audience targeting
    • Why signal engineering is one of the last remaining levers for growth marketers

    Key Takeaways:
    • Blended CPA hides traffic quality problems. A $10 cost per trial from Instagram Reels represents a completely different audience than $10 from Facebook feed. Break down your metrics by placement to understand what you're actually buying (see the sketch after these takeaways).
    • Creative equals targeting. Your media format determines where your ad delivers. Short-form vertical video goes to Reels; statics go to Facebook feed. This isn't a bug but a feature you can use to control your audience mix without hard targeting.
    • Guide the algorithm, don't force it. Hard targeting gets expensive fast. Instead, use creative segmentation and value rules to nudge Meta toward your high-value audiences while keeping delivery efficient.
    • Your conversion event determines your account structure. If you're optimizing for a shallow event like trial starts, you need more ad sets to compensate for the algorithm's lack of business knowledge. Moving closer to revenue lets you consolidate more.
    • Onboarding should match your traffic source. Paid social users were just doom-scrolling and need to be entertained and re-sold on their problem. Search traffic already has intent. Design your onboarding accordingly.
    • Create an aha moment before the paywall. Prove value in the first session through something tangible: a sample scan, a personalized analysis, an imported recipe. This converts better than promising value during a 7-day trial.
    • Your pricing should match your creative strategy. Young audiences from UGC won't pay $70/year. Older Facebook feed audiences justify higher CPMs. Align your price points with who your ads are actually reaching.
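    The blended-CPA point is easy to see with a toy per-placement breakdown. A minimal Python sketch; all conversion and LTV figures below are hypothetical, not numbers from the episode:

    ```python
    # Identical cost per trial, very different economics per placement.
    placements = {
        # name:            (cost_per_trial, trial_to_paid, ltv_per_payer)
        "instagram_reels": (10.00, 0.05, 40.00),
        "facebook_feed":   (10.00, 0.25, 60.00),
    }

    for name, (cpt, trial_to_paid, ltv) in placements.items():
        cac_per_payer = cpt / trial_to_paid
        margin = ltv - cac_per_payer
        print(f"{name}: CAC/payer ${cac_per_payer:.0f}, LTV ${ltv:.0f}, margin ${margin:.0f}")

    # Blended, both placements report a $10 trial. Broken out, Reels loses
    # $160 per payer while Facebook feed makes $20.
    ```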
    Links & Resources:
    • Marcus Burke on LinkedIn: https://linkedin.com/in/marcusburke
    • Growth Festival presentation: https://www.linkedin.com/posts/marcusburke_postmedia-buying-strategies-scaling-meta-activity-7373668067580563456-iziy

    Timestamps:
    00:00 – Intro clips
    01:41 – Introduction and context on Marcus's Growth Festival presentation
    02:12 – Why blended CPA is irrelevant and how it differs from blended ROAS
    04:36 – How placement affects traffic quality: $100 vs $10 cost per trial
    05:11 – Using creative format to control placement distribution
    07:32 – Working with the algorithm vs. forcing targeting
    10:19 – Why "broad targeting" doesn't mean broad reach
    11:06 – Getting placement and demographic data from Meta
    14:47 – Layering complexity: placements, demographics, user goals
    20:09 – Signal engineering and moving closer to business value
    22:30 – Account architecture: stop over-consolidating
    26:49 – Should subscription apps test removing trials?
    31:52 – Value rules: what they are and how to use them
    36:30 – Onboarding for paid social: entertainment over efficiency
    39:36 – Creating aha moments before the paywall
    45:44 – Multi-price point strategies to capture the full demand curve
    48:31 – Wrap-up and where to follow Marcus

    51 min
