26 min.

[Seedcamp Firsts] How to A/B Test Product Changes and Set Up Good Data Science Practices | This Much I Know - The Seedcamp Podcast

    • Technology

In a follow-up to their Seedcamp Firsts conversation on data, our Venture Partner Devin Hunt and Candice Ren, Founder of analytics agency 173Tech and a member of the Seedcamp Expert Collective, dive deep into A/B testing and good data science practices.

With new and exciting AI technology emerging around recommendation engines, how can product leads evaluate which solution is better and how to really measure a “better recommendation”?

Focusing on a specific case study, a furniture marketplace, Candice, who has worked on A/B testing and recommendation engines for Bumble, Plend Loans, MUBI, Treatwell and many others, shares her thoughts on:
- the intricacies of setting up and analyzing an A/B test comparing two different recommendation algorithms
- how to set your hypothesis
- the best way to segment your user base
- how to select what you are measuring (e.g. click-through rate)
- how to interpret test results and consider the impact on broader business metrics.

Candice and Devin also emphasize the importance of granular testing, proper test design, and documentation of test results for informed decision-making within a company's testing framework.
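As a minimal illustration of the kind of analysis discussed in the episode, the sketch below compares the click-through rates of two recommendation algorithms with a two-proportion z-test. The function name and all the numbers are hypothetical, not taken from the podcast; they assume users were randomly split between a control algorithm A and a candidate algorithm B.

```python
# Hypothetical example: comparing click-through rates (CTR) of two
# recommendation algorithms with a two-proportion z-test.
# All figures are illustrative, not from the episode.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(clicks_a, users_a, clicks_b, users_b):
    """Return (z, two-sided p-value) for H0: CTR_A == CTR_B."""
    p_a = clicks_a / users_a
    p_b = clicks_b / users_b
    # Pooled proportion under the null hypothesis of equal CTRs
    p = (clicks_a + clicks_b) / (users_a + users_b)
    se = sqrt(p * (1 - p) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Control algorithm A vs. new algorithm B (made-up traffic numbers)
z, p = two_proportion_z_test(clicks_a=1200, users_a=10_000,
                             clicks_b=1320, users_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A statistically significant result on click-through rate alone is not the end of the story; as the episode stresses, the winning variant should also be checked against broader business metrics before rolling out.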

