Many people don’t know enough about A/B testing statistics, yet go on to publish case studies or offer advice riddled with bad math.
The purpose of this site is to raise awareness around proper A/B testing statistics.
Stopping tests early and drawing conclusions from what are effectively imaginary results defeats the whole purpose of testing. Why run a test if you never learn the real outcome?
People say silly things like “stop the test when you reach 95% significance” or “you need 100 conversions to stop a test”, or they simply rely on their tool to call a winner.
That is rubbish. Testing is not about magic numbers, and statistical significance is widely misunderstood, even by many so-called experts.
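To see why “stop at 95% significance” is rubbish, you can simulate it. The sketch below (function name and parameters are illustrative, not from any tool mentioned here) runs A/A tests, where both variants are identical, and lets a naive experimenter peek at a z-test after every batch of visitors, stopping the moment the test crosses 95% significance. Even though there is no real difference, repeated peeking declares a “winner” far more often than the nominal 5% false positive rate.

```python
import random

def peeking_false_positive_rate(n_tests=1000, n_per_arm=2000, peeks=10,
                                base_rate=0.1, seed=42):
    """Simulate A/A tests (both arms have the same true conversion rate)
    and count how often peeking after each batch produces a bogus 'winner'."""
    rng = random.Random(seed)
    z_crit = 1.96               # two-sided 95% significance threshold
    batch = n_per_arm // peeks  # visitors per arm between peeks
    false_positives = 0
    for _ in range(n_tests):
        conv_a = conv_b = 0
        for i in range(1, peeks + 1):
            n = i * batch
            conv_a += sum(rng.random() < base_rate for _ in range(batch))
            conv_b += sum(rng.random() < base_rate for _ in range(batch))
            p_a, p_b = conv_a / n, conv_b / n
            pooled = (conv_a + conv_b) / (2 * n)
            se = (2 * pooled * (1 - pooled) / n) ** 0.5
            # Naive stopping rule: declare a winner at the first 'significant' peek
            if se > 0 and abs(p_a - p_b) / se > z_crit:
                false_positives += 1
                break
    return false_positives / n_tests

rate = peeking_false_positive_rate()
print(f"False positive rate with peeking: {rate:.1%}")
```

With ten peeks at a nominal 5% level, the simulated false positive rate typically lands in the 15–20% range: the chance of a fluke crossing the threshold at *some* peek is much higher than the chance at any single, pre-committed look. That is exactly why fixing the sample size in advance (or using a sequential testing procedure designed for peeking) matters.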