Interview Question
"You're launching a new onboarding flow for your mobile app. How would you design an experiment to test whether it improves user activation?"
This is one of the most common A/B testing questions in product data science interviews. Interviewers want to see whether you can blend product thinking with statistical rigor and communicate a clear, structured approach.
Step-by-Step Answer
1. Clarify the Objective
Start by clarifying the business goal and metric definition.
"The goal is to evaluate whether the new onboarding flow increases user activation. I define activation as a user completing onboarding and performing a key in-app action (e.g., creating a profile, posting content, or sending a message) within 3 days of signup."
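That definition translates directly into a labeling rule. A minimal sketch, assuming hypothetical event names (`complete_onboarding`, `create_profile`, etc.) and a per-user event log of (timestamp, action) pairs:

```python
from datetime import datetime, timedelta

# Assumed constants; real event names depend on your instrumentation.
ACTIVATION_WINDOW = timedelta(days=3)
KEY_ACTIONS = {"create_profile", "post_content", "send_message"}

def is_activated(signup_time, events):
    """Return True if the user completed onboarding AND performed a key
    action within the activation window. `events` is a list of
    (timestamp, action_name) tuples."""
    cutoff = signup_time + ACTIVATION_WINDOW
    completed_onboarding = any(
        action == "complete_onboarding" and ts <= cutoff for ts, action in events
    )
    did_key_action = any(
        action in KEY_ACTIONS and ts <= cutoff for ts, action in events
    )
    return completed_onboarding and did_key_action
```

Pinning the definition down in code like this forces the edge cases (window boundary, which actions count) to be decided up front rather than during analysis.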
2. Define Primary and Secondary Metrics
- Primary Metric: Activation Rate = Activated Users / New Sign-Ups
- Secondary Metrics:
  - Time to activation
  - Drop-off rate during onboarding
  - Retention after 7 or 14 days
  - Bounce rate from app open to onboarding start
Also include guardrail metrics to monitor unintended consequences (e.g., increase in uninstalls or support tickets).
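The metrics above all reduce to ratios over raw counts per experiment arm. A minimal sketch, assuming hypothetical count field names for one arm:

```python
def experiment_metrics(arm):
    """Compute primary, secondary, and guardrail metrics for one arm.
    `arm` is a dict of raw counts; the field names here are assumptions."""
    return {
        # Primary: Activated Users / New Sign-Ups
        "activation_rate": arm["activated"] / arm["signups"],
        # Secondary: share of users who start onboarding but never finish
        "onboarding_dropoff": 1 - arm["completed_onboarding"] / arm["started_onboarding"],
        # Guardrail: watch for unintended harm such as uninstalls
        "uninstall_rate": arm["uninstalls"] / arm["signups"],
    }
```

Computing control and treatment through the same function keeps metric definitions identical across arms, which matters more than the arithmetic itself.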
Summary
| Step | Description |
|---|---|
| 1 | Clarify objective and define activation |
| 2 | Select primary and guardrail metrics |
| 3 | Design randomization and power analysis |
| 4 | Ensure clean user segmentation and data quality |
| 5 | Analyze uplift with statistical tests and CIs |
| 6 | Make a decision grounded in product and business context |
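Steps 3 and 5 above can be sketched with standard formulas: a sample-size calculation for two proportions (step 3) and a two-proportion z-test for the uplift (step 5). A minimal sketch using the normal approximation; the baseline rate and minimum detectable effect below are placeholder numbers, not figures from the source:

```python
import math
from statistics import NormalDist

def sample_size_per_arm(p_base, mde, alpha=0.05, power=0.80):
    """Users needed per arm to detect an absolute lift of `mde` over
    baseline rate `p_base` with the given significance and power
    (two-sided two-proportion test, normal approximation)."""
    p1, p2 = p_base, p_base + mde
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_power * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / mde ** 2)

def two_proportion_ztest(x1, n1, x2, n2):
    """Pooled two-proportion z-test: x activated out of n per arm.
    Returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example (placeholder numbers): 30% baseline activation, detect a 2pp lift.
n_needed = sample_size_per_arm(p_base=0.30, mde=0.02)
z, p = two_proportion_ztest(x1=300, n1=1000, x2=340, n2=1000)
```

In practice you would pair the p-value with a confidence interval on the lift and weigh it against the guardrail metrics before shipping, as step 6 notes.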