Compute sample size + days-to-significance for conversion rate experiments
At 5% baseline conversion, detecting a 10% relative improvement (i.e. 5.00% → 5.50%) at 95% significance with 80% power needs 31,234 visitors in each variant, or 62,468 total.
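The headline number can be reproduced with the standard pooled two-proportion z-test sample-size formula. A minimal sketch in Python using only the standard library (the function name is ours, not part of any tool):

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variant(baseline, relative_mde, alpha=0.05, power=0.80):
    """Visitors needed per variant for a two-sided two-proportion z-test."""
    p1 = baseline
    p2 = baseline * (1 + relative_mde)    # e.g. 5% with a 10% lift -> 5.5%
    p_bar = (p1 + p2) / 2                 # pooled proportion under H0
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # 1.96 for alpha=5%
    z_beta = NormalDist().inv_cdf(power)           # 0.84 for power=80%
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# 5% baseline, 10% relative MDE, alpha=5%, power=80%
n = sample_size_per_variant(0.05, 0.10)
print(n, 2 * n)  # -> 31234 62468
```

Rounding up with `ceil` is deliberate: you need at least the computed sample, so fractional visitors always round toward more data, not less.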
Running an A/B test without computing sample size first is how teams end up with 'inconclusive' results that waste months. This calculator uses the exact formula for two-proportion z-tests to tell you upfront: how many visitors per variant, and how long you'll wait. Fix your MDE first, plan the test, then ship.
Because detecting small improvements at low baselines is statistically hard. At 1% baseline, detecting a 5% relative lift (1.00% → 1.05%) needs roughly 637,000 visitors per variant. Sample size scales with the inverse square of the effect size: halve the lift you want to detect and you need about four times the data.
Industry standard: α=5% (accept a 5% false-positive rate) and power=80% (detect 80% of real effects). More rigorous teams use α=1% and power=90%, but that nearly doubles the required sample.
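The z-scores behind those defaults come straight from the normal quantile function, and the "nearly doubles" claim follows from how they enter the sample-size formula. A quick check with Python's stdlib:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf

# Industry-standard defaults
print(round(z(1 - 0.05 / 2), 2))  # alpha=5%, two-sided -> 1.96
print(round(z(0.80), 2))          # power=80%           -> 0.84

# Stricter settings. The sum of the two z-scores enters the
# sample-size formula squared, so moving from (1.96 + 0.84) to
# (2.58 + 1.28) inflates the sample by roughly (3.86 / 2.80)^2 ~ 1.9x.
print(round(z(1 - 0.01 / 2), 2))  # alpha=1%  -> 2.58
print(round(z(0.90), 2))          # power=90% -> 1.28
```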
The smallest improvement you'd care about. If a 2% lift isn't worth shipping, set MDE=5% and you'll stop the test much sooner. Honest teams set this BEFORE running, not after seeing results.
Fixed-sample frequentist testing (what this tool computes) is still the industry norm: easy to explain to stakeholders and supported by most A/B tools. Bayesian methods are better if you want the option to stop early based on interim looks, but they require specialized tooling.
Either your daily traffic is 0, or the required sample exceeds your traffic times any reasonable run length. Increase traffic, relax your MDE, or test higher-traffic pages.
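The days-to-significance part is simple division once the sample size is known. A sketch of how such a calculator might compute it, assuming every daily visitor enters the test and traffic splits evenly across the two variants (the helper name and the no-traffic behavior are our assumptions):

```python
from math import ceil

def days_to_significance(total_sample, daily_visitors):
    """Days until the test has seen enough total traffic.

    Assumes every daily visitor enters the test and traffic is
    split evenly between the two variants.
    """
    if daily_visitors <= 0:
        return None  # the "can't finish" case: no traffic, no test
    return ceil(total_sample / daily_visitors)

# 62,468 total visitors needed at 5,000 eligible visitors/day
print(days_to_significance(62_468, 5_000))  # -> 13
```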