
    Statistical Significance

    Confidence that test results aren't due to random chance, typically requiring a 95% confidence level

    A mathematical way to determine whether your test results are real or just random luck. Prevents you from making decisions based on coincidence.

    The Core Question

    When Version B performs better than Version A, is that a real improvement or just random chance? Statistical significance answers that question, which makes it vital for A/B testing and incrementality testing.

    The Standard

    95% confidence is the standard threshold. It means that if there were no real difference between the variations, you'd see a gap this large less than 5% of the time. Below 95%, you can't trust that the difference is real.

    Why You Need Enough Data

    Small sample sizes produce unreliable results. You need:

    • 100+ conversions per variation minimum (more is better)
    • At least 1 week of data (accounts for day-of-week patterns)
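    The requirements above can be made concrete with a standard power calculation. This is a minimal sketch, not from the article: it assumes a two-tailed test at 95% confidence and 80% power (the usual defaults), and the example baseline rate and lift are illustrative.

```python
# Sketch: approximate visitors needed per variation before a test
# can reliably detect a given lift. Assumes 95% confidence
# (z = 1.96, two-tailed) and 80% power (z = 0.84).
from math import sqrt

def sample_size_per_variation(baseline_rate, relative_lift,
                              z_alpha=1.96, z_beta=0.84):
    """Standard two-proportion sample-size formula."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + relative_lift)
    p_bar = (p1 + p2) / 2
    delta = abs(p2 - p1)
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / delta ** 2
    return int(n) + 1

# Illustrative numbers: 0.30% baseline rate, hoping to detect a 25% lift
print(sample_size_per_variation(0.003, 0.25))
```

Note how low conversion rates drive the required sample into the tens of thousands of impressions per variation, which is why "100+ conversions" is a floor, not a target.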

    Example

    Version A: 120 conversions from 40,000 impressions (0.30% conversion rate)
    Version B: 150 conversions from 40,000 impressions (0.375% conversion rate)

    B looks 25% better, but a standard two-tailed test puts this at roughly 93% confidence, short of the 95% threshold. You need to run longer.
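    The example can be checked directly. This is a minimal sketch of a pooled two-proportion z-test, one common way such calculators work; the function name is illustrative.

```python
# Sketch: two-tailed confidence that two conversion rates really differ,
# using a pooled two-proportion z-test.
from math import sqrt, erf

def confidence_level(conv_a, n_a, conv_b, n_b):
    """Return 1 - (two-tailed p-value) for the difference in rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # combined rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = abs(p_b - p_a) / se
    # erf(z / sqrt(2)) is P(|Z| <= z) for a standard normal
    return erf(z / sqrt(2))

conf = confidence_level(120, 40_000, 150, 40_000)
print(f"{conf:.1%}")  # below the 95% threshold
```

Running this on the numbers above lands in the low 90s, which is why the apparent 25% lift isn't yet trustworthy.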

    How to Check

    Don't eyeball results. Use an online statistical significance calculator: enter your conversion and impression counts and it will tell you whether you've reached 95%+ confidence.

    Practical Advice

    For major decisions (scaling budgets, choosing platforms), require 95%+ confidence. For minor creative tests, sometimes it's fine to move on rather than wait weeks for marginal significance—focus your time on more impactful tests.