
A/B testing: what is it? + tutorial


Written by Niek van Son MSc on March 12, 2025


Introduction

What if you could increase your online sales by 20% with one simple change? Sounds too good to be true? Yet this is exactly what is possible using A/B testing. But beware: not every test leads to spectacular results. In fact, many experiments yield insights that demonstrate precisely what does not work. Therefore, it is crucial to understand statistical significance in order to properly interpret test results.

In this article you will read what A/B tests are, how they work, and how to use this powerful method effectively to make your website traffic more profitable and strengthen your competitive position.

What is an A/B test?

An A/B test, also known as split testing, is a method in which two versions (version A and version B) of a web page, e-mail or advertisement are shown to different groups of visitors. The goal is to determine which version performs best against predetermined goals such as clicks, purchases or signups. By testing one specific variable at a time (for example, a headline, button color or image), you can find out exactly which adjustment leads to better performance.

How does it work in practice?

You start by formulating a clear hypothesis, for example, "A green button generates more clicks than a red button." Then you randomly divide visitors into two groups: one group sees version A (red button), the other sees version B (green button). During the test, you collect data on visitor behavior, then analyze the results to determine which version performs statistically significantly better. The winning version can then be rolled out to all visitors.
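The random split can be done deterministically by hashing a visitor ID, so the same visitor always sees the same version on repeat visits. A minimal sketch (the function name and visitor IDs are illustrative, not from any specific testing tool):

```python
import hashlib

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to group A or B.

    Hashing the visitor ID gives a stable, roughly 50/50 split:
    the same ID always maps to the same version.
    """
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Example: split a batch of hypothetical visitor IDs.
visitors = [f"visitor-{i}" for i in range(1000)]
groups = [assign_variant(v) for v in visitors]
print(groups.count("A"), groups.count("B"))  # roughly 500 / 500
```

In practice an A/B testing platform handles this for you, but the principle is the same: assignment must be random with respect to visitor behavior, yet stable per visitor.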

The importance of statistical significance

Statistical significance indicates whether the result of your A/B test is reliable, rather than caused by chance. Without statistical significance, you risk making decisions based on random noise. To determine whether a result is significant, you typically use a confidence level (e.g., 95%) and a p-value. The p-value is the probability of seeing a difference at least this large if there were in fact no real difference between the versions; the lower this value, the less likely chance is the explanation. Usually a result with a p-value below 0.05 is considered statistically significant, meaning there is less than a 5% chance of seeing such a difference purely by accident.
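For conversion rates, a common way to compute this p-value is a two-proportion z-test. A self-contained sketch using only the standard library (the conversion numbers below are made up for illustration):

```python
from math import sqrt, erf

def p_value_two_proportions(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for the difference between two conversion
    rates (two-proportion z-test, normal approximation)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Example: 100 conversions out of 1,000 visitors for version A,
# versus 130 out of 1,000 for version B.
p = p_value_two_proportions(100, 1000, 130, 1000)
print(f"p = {p:.4f}", "significant" if p < 0.05 else "not significant")
```

Here the p-value comes out below 0.05, so the 13% versus 10% conversion difference would count as statistically significant at the 95% confidence level.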

How large should your sample size be?

To achieve a statistically significant result (p-value below 0.05), your sample must be large enough. The required size depends on the size of the effect you want to detect: the larger the effect, the smaller the sample needed, while smaller differences require larger samples to measure reliably. You can calculate in advance how many visitors you need using online calculators or statistical tools. A common rule of thumb is to test at least several hundred to several thousand visitors per version, depending on how subtle the difference is that you want to detect.
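You can also do this calculation yourself with the standard normal-approximation formula for comparing two proportions. A sketch (assuming the conventional significance level of 5% and statistical power of 80%; the baseline rates below are illustrative):

```python
from math import ceil, sqrt

def sample_size_per_variant(baseline, mde):
    """Visitors needed per variant to detect an absolute lift of
    `mde` over a `baseline` conversion rate, assuming a two-sided
    significance level of 0.05 and 80% power."""
    z_alpha = 1.96  # z-score for two-sided alpha = 0.05
    z_beta = 0.84   # z-score for power = 0.80
    p1, p2 = baseline, baseline + mde
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / mde ** 2
    return ceil(n)

# Detecting a small lift (5% -> 6%) takes far more traffic
# than detecting a large one (5% -> 10%).
small_lift = sample_size_per_variant(0.05, 0.01)  # thousands per variant
large_lift = sample_size_per_variant(0.05, 0.05)  # hundreds per variant
print(small_lift, large_lift)
```

This illustrates the rule of thumb above: halving the detectable difference roughly quadruples the traffic you need.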

Step-by-step guide to A/B testing

  1. Formulate a clear hypothesis: What do you expect to improve?
  2. Select one variable to test: For example, a button color, title or image.
  3. Create two versions (A and B): Make sure they differ only on the chosen part.
  4. Choose your target audience: Ensure a random distribution to avoid bias.
  5. Determine the required sample size: Use a statistical calculator for this purpose.
  6. Perform the test: Collect sufficient data over a representative period of time.
  7. Analyze the results: Check statistical significance and interpretation.
  8. Implement the winning version: Optimize further based on what you've learned.

Intuition is often a poor guide

Changes to websites are often made because someone suggests something that sounds logical. Although intuition and gut feeling matter, practice shows that such assumptions are frequently wrong, and they are rarely verified. Entrepreneurs are therefore regularly surprised by A/B test results that run completely contrary to their expectations. It is essential to base your decisions not only on feeling, but above all on data and well-executed tests. Especially if you are spending a lot of money bringing in traffic, you want to learn how to get the most return from it. If you don't, your competitor will.

THE AUTHOR

Niek van Son MSc

Marketing Management (MSc, University of Tilburg). 10+ years of experience as an online marketing consultant (SEO - SEA). Occasionally writes articles for Frankwatching, Marketingfacts and B2bmarketeers.nl.

Are the results from your online marketing disappointing?

Request our no-obligation performance scan and we'll tell you where you're going wrong.