26 min

[Seedcamp Firsts] How to A/B Test Product Changes and Set up Good Data Science Practices
This Much I Know - The Seedcamp Podcast

    • Technology

In a follow-up to their Seedcamp Firsts conversation on data, our Venture Partner Devin Hunt and Candice Ren, Founder of analytics agency 173Tech and a member of the Seedcamp Expert Collective, dive deep into A/B testing and good data science practices.

With new and exciting AI technology emerging around recommendation engines, how can product leads evaluate which solution is better, and how do you actually measure a "better recommendation"?

Focusing on a specific case study - a furniture marketplace - Candice, who has worked on A/B testing and recommendation engines for Bumble, Plend Loans, MUBI, Treatwell, and many others, shares her thoughts on:
- the intricacies of setting up and analyzing an A/B test comparing two different recommendation algorithms
- how to set your hypothesis
- the best way to segment your user base
- how to select the metric you are testing for (e.g. click-through rate)
- how to interpret test results and weigh the impact on broader business metrics.

Candice and Devin also emphasize the importance of granular testing, proper test design, and documentation of test results for informed decision-making within a company's testing framework.
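
As a rough illustration of the kind of analysis the episode walks through, here is a minimal sketch in Python of comparing click-through rates between two recommendation algorithms using a two-proportion z-test. All numbers, variable names, and the choice of test are illustrative assumptions, not something prescribed in the episode.

```python
import math

# Hypothetical results from an A/B test on a furniture marketplace:
# users who were shown recommendations, and how many clicked at least one.
control = {"users": 12000, "clicks": 1440}   # existing algorithm (A)
variant = {"users": 12000, "clicks": 1560}   # new algorithm (B)

def two_proportion_ztest(x1, n1, x2, n2):
    """Two-sided z-test for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                      # pooled CTR under H0
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))          # 2 * (1 - Phi(|z|))
    return p1, p2, z, p_value

ctr_a, ctr_b, z, p = two_proportion_ztest(
    control["clicks"], control["users"], variant["clicks"], variant["users"]
)
print(f"CTR A: {ctr_a:.2%}, CTR B: {ctr_b:.2%}, z = {z:.2f}, p = {p:.4f}")
```

In practice, as the episode stresses, you would set the hypothesis and sample size before running the test, segment the analysis (for example new vs. returning users), and check the impact on broader business metrics before rolling out the winning algorithm.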

