A/B Test
Also known as: split test
Running two versions of a page in parallel and measuring which performs better on a defined metric. The basis of empirical conversion-rate optimization (CRO).
An A/B test runs two variants of a page (or component) simultaneously, splitting traffic between them randomly, and compares performance on a defined success metric — conversion rate, signup rate, time on page. Statistical significance is a required ingredient: a test with too few sessions will return noise, not signal.
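As a rough illustration of what that significance check involves, here is a minimal sketch of a two-proportion z-test on A/B conversion counts. The function name ab_significance and the conversion numbers are hypothetical, chosen only for the example:

```python
from statistics import NormalDist

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test (variant B vs. control A)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pool the conversion rate under the null hypothesis that A and B perform equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Hypothetical result: 480/10,000 conversions on control, 550/10,000 on the variant.
print(f"p = {ab_significance(480, 10_000, 550, 10_000):.3f}")  # ~0.025, below the usual 0.05 threshold
```

A small p-value here only says the difference is unlikely to be noise; it says nothing about whether the difference is large enough to matter for the business.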
Done well, A/B testing is the closest the web has to the scientific method. It surfaces counterintuitive findings (the longer headline outperforming the punchier one; the muted CTA color winning over the saturated one) that would be impossible to predict from design intuition alone. Done badly — too many concurrent tests, inadequate sample size, peeking at results before the test concludes — it produces false positives that erode trust.
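To give a feel for what "adequate sample size" means, the following is a hedged sketch of the standard sample-size estimate for a two-proportion test under a normal approximation. The function name sample_size_per_variant and the 5% baseline / 10% lift figures are illustrative assumptions, not numbers from this glossary:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline: float, relative_lift: float,
                            alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sessions needed per variant to detect a relative lift over the
    baseline conversion rate (two-sided two-proportion test, normal approximation)."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # e.g. 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)           # e.g. 0.84 for 80% power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# Hypothetical: 5% baseline conversion rate, aiming to detect a 10% relative lift.
print(sample_size_per_variant(0.05, 0.10))  # roughly 31,000 sessions per variant
```

Numbers like these are why low-traffic pages often cannot support tests of small effects at all, and why stopping early when the dashboard "looks significant" inflates the false-positive rate.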
The rule of thumb: only test things that matter (the headline, the price, the primary CTA), let each test run to statistical significance even when you’re sure of the result, and document what you learned even if the test was inconclusive.