
Can you achieve Quality and Quantity from your experimentation programme?

  • 27 April 2021

One of the challenges we face every day as optimisers is how to increase the velocity of our A/B testing programmes whilst retaining the quality and statistical validity of our results. It is a challenge that has recently been exacerbated by the drop in traffic across a number of industries due to Covid.

In fact, a recent study by cxl.com found that almost half of all respondents are only running 1–2 tests per month, with just 9.5% of optimisers performing 20 or more tests per month. For those who wanted to increase the velocity of their testing programmes but couldn't, lack of traffic was the most common challenge. 

[Graph: "Approximately how many online experiments (tests) does your team run each month?" The majority of respondents run 1–2 tests.]

Running tests

Running experiments on sites with lower volumes of traffic makes it harder to achieve statistical power and significance, which increases the risk of both type 1 and type 2 errors.

Type 1 errors are false positives. This means you wrongly conclude that your alternative hypothesis is true even though it isn't. In simple terms: you are seeing an imaginary uplift.

Type 2 errors are false negatives: you conclude that there is no winner between the control version and a variation when there actually is one. In more statistically accurate terms, a type 2 error occurs when the null hypothesis is false and you fail to reject it. In simple terms: not seeing an uplift when there is one.
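The link between traffic and these two error types can be made concrete with a standard sample-size calculation: alpha bounds the type 1 rate and power (1 minus beta) bounds the type 2 rate. The function below is an illustrative textbook approximation for a two-proportion z-test, not any particular platform's formula, and the conversion rates are made-up examples:

```python
import math
from statistics import NormalDist

def sample_size_per_variant(p_control, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-sided z-test
    on conversion rates. alpha bounds the type 1 (false positive) rate;
    power = 1 - beta, where beta is the type 2 (missed uplift) rate."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # critical value for significance
    z_beta = z.inv_cdf(power)            # critical value for power
    variance = p_control * (1 - p_control) + p_variant * (1 - p_variant)
    effect = p_variant - p_control
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)

# A small uplift (3% -> 3.3%) needs tens of thousands of visitors per
# variant, whilst a large uplift (3% -> 4.5%) needs only a few thousand:
print(sample_size_per_variant(0.03, 0.033))
print(sample_size_per_variant(0.03, 0.045))
```

This is why low-traffic sites struggle: the smaller the uplift you want to detect reliably, the more visitors each test consumes, and the longer every test blocks your testing calendar.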

Testing Capacity

Before we explain why increasing velocity is important, we need to cover one very important factor that limits every testing programme: the maximum number of experiments you can run per year, in other words your overall testing capacity. To calculate this we need to look at three factors:

1. The time it takes to reach a statistically valid test

2. The number of pages you can run tests on simultaneously

3. The number of weeks per year you can run tests

For example, if we break this down into a simple equation:

1. It takes 4 weeks on average to reach a statistically valid outcome.

2. There are 5 pages we can test at any one time.

3. We are able to experiment throughout the year.

Based on this information you would have a testing capacity of 65 tests per year (52 / 4 × 5).
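The three factors above reduce to a one-line calculation, sketched here with the article's example numbers:

```python
# Testing capacity = (weeks available / weeks per test) * parallel test slots
weeks_available = 52   # able to experiment throughout the year
weeks_per_test = 4     # average time to a statistically valid outcome
parallel_slots = 5     # pages that can host a test at any one time

capacity = (weeks_available / weeks_per_test) * parallel_slots
print(capacity)  # 65.0 tests per year
```

Plugging in your own numbers makes it easy to see which lever matters most: halving the average test duration doubles capacity, exactly as adding parallel slots would.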

If you are not using your full testing capacity you are losing money and the ability to increase your understanding of your users.

Why is increasing velocity important?

There has been some debate on whether increasing velocity of testing programmes is a good thing or not. At Kin + Carta our philosophy is simple: the more tests you run, the more you learn about your customer and how to grow your business.

Did the disruptors of this world – Airbnb, Uber, Netflix etc. – get to where they are today by running 1 or 2 experiments a month? No! They all have one thing in common: they experiment at high velocity across all of their business units to increase learning and create exponential growth.

Through years of working with a diverse array of optimisation and testing platforms, one contributing factor has opened the door to increasing the velocity of our experimentation programmes whilst retaining the quality we need to have confidence in our results. That is the optimisation platform we use: Optimizely.

 

What is Optimizely?

Optimizely is an enterprise optimisation platform that provides A/B and multivariate testing capability plus website personalisation. Whilst this is a similar offering to other A/B testing platforms, Optimizely has a few tricks up its sleeve to facilitate high velocity testing whilst retaining the statistical quality of results.

The Stats Accelerator

The Stats Accelerator helps you algorithmically capture more value from your experiments by reducing the time to reach statistical significance, so you spend less time waiting for results. It does this by monitoring experiments and using machine learning to adjust traffic distribution among variations. In simple terms, it shows more visitors the variations that have a better chance of reaching statistical significance.

Using this method, it discovers as many significant variations as possible and relies on dynamic traffic allocation to achieve its results. It is important to note that any time you allocate traffic dynamically over time, you run the risk of introducing bias into your results. Left uncorrected, this bias can have a significant impact on your reported results. Stats Accelerator neutralises this bias through a technique called ‘weighted improvement’. While we won't go into detail regarding weighted improvements in this post, it's worth pointing out that these results are used to calculate the estimated true lift. This is vitally important because this filters out bias that would have otherwise been present.
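To make the idea of dynamic traffic allocation concrete, here is an illustrative sketch using Thompson sampling, a well-known bandit-style technique. To be clear, this is a stand-in for explanation only: Optimizely's Stats Accelerator uses its own proprietary algorithm together with the weighted-improvement bias correction described above, and the conversion figures below are invented:

```python
import random

def allocate_traffic(successes, trials, n_visitors, seed=0):
    """Route each incoming visitor to the variation whose conversion
    rate, sampled from a Beta posterior over its observed data, is
    highest. Stronger-looking variations therefore receive more of
    the remaining traffic."""
    rng = random.Random(seed)
    assigned = [0] * len(trials)
    for _ in range(n_visitors):
        draws = [rng.betavariate(s + 1, t - s + 1)     # posterior draw
                 for s, t in zip(successes, trials)]
        assigned[draws.index(max(draws))] += 1
    return assigned

# Variation B (40 conversions from 1000) looks stronger than A (30 from
# 1000), so B receives the bulk of the next 1000 visitors:
print(allocate_traffic([30, 40], [1000, 1000], 1000))
```

Note how this also illustrates the bias risk the paragraph above describes: because later visitors are not split evenly, naively averaging conversion rates over the whole experiment would skew the reported lift, which is why a correction such as weighted improvement is needed.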

Stats Accelerator enables companies to accelerate experimentation and reach statistical significance up to 300% faster through intelligent traffic optimisation. Using machine learning, it automates the flow of traffic to your experiments, so you can gather learnings and iterate more rapidly. A hypothetical experiment that would previously have taken 2 weeks could be completed in only a few days using Stats Accelerator. Alternatively, you could double the number of variations in your experiment and reach statistical significance in the same amount of time as before.

"Reach statistical significance up to 300% faster with experiment stats accelerator." (Whelan Boyd, Optimizely)

The Stats Engine

Optimizely uses a statistical framework that is optimised to enable experimenters to run experiments with high statistical rigour, while making it easy for anyone to interpret the results. Optimizely's Stats Engine differs from the vast majority of other statistical engines, avoiding both Bayesian inference and fixed-horizon hypothesis testing to calculate results. Instead, Optimizely's Stats Engine relies on sequential testing and focuses on the false discovery rate rather than the false positive rate.

This enables its users to confidently check experiments as often as they like, without needing to know an effect size in advance, and test as many variations and goals as desired without worrying about hidden sources of statistical errors. In other words, it helps you make business decisions based on reliable results and makes sure your experiment results pay off.

False positives

With every experiment you run, there is a risk of seeing a result that is, in essence, a false positive. This phenomenon occurs when an experiment reports a conclusive winner, but in reality there is no real difference in behaviour between variations. The risk of generating at least one false positive during a test increases as you add more metrics and variations to your experiment.
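The compounding risk described above follows from basic probability: if each comparison independently has a 5% false positive rate, the chance of at least one false positive across m comparisons is 1 − (1 − 0.05)^m. A quick sketch:

```python
alpha = 0.05  # per-comparison false positive rate

# Chance of at least one false positive across m independent comparisons
# (one comparison per metric-variation pair in an experiment):
for m in (1, 5, 10, 20):
    family_rate = 1 - (1 - alpha) ** m
    print(f"{m:>2} comparisons -> {family_rate:.0%} risk of a false positive")
```

With 10 metric-variation pairs the risk is already around 40%, and with 20 it is roughly 64%, which is why an experiment with many goals and variations needs stricter error control than a simple two-way test.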

False Discovery

Optimizely helps you reduce false positives by taking a more rigorous approach to controlling statistical errors. Rather than focusing on the false positive rate, Optimizely uses a procedure that manages the false discovery rate, which is designed to control the expected proportion of conclusive results that are incorrect.
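For readers unfamiliar with false discovery rate control, the textbook procedure is Benjamini–Hochberg. Optimizely's Stats Engine uses its own sequential variant rather than this classic offline version, so the sketch below is purely illustrative, with invented p-values:

```python
def benjamini_hochberg(p_values, q=0.05):
    """Classic Benjamini-Hochberg procedure: return the indices of
    comparisons declared conclusive, controlling the expected
    proportion of false discoveries at q. Sort p-values ascending and
    find the largest rank k with p <= q * k / m; everything up to that
    rank is a discovery."""
    m = len(p_values)
    ranked = sorted(range(m), key=lambda i: p_values[i])
    cutoff = -1
    for rank, idx in enumerate(ranked, start=1):
        if p_values[idx] <= q * rank / m:
            cutoff = rank
    return sorted(ranked[:cutoff]) if cutoff > 0 else []

# Four metric-variation comparisons tested at once; only the two
# strongest results are declared conclusive:
print(benjamini_hochberg([0.001, 0.02, 0.04, 0.30]))  # [0, 1]
```

The key contrast with naive per-comparison significance: the comparison with p = 0.04 would pass a plain 0.05 threshold, but the FDR procedure rejects it once the number of simultaneous comparisons is taken into account.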

To summarise: Is it possible for quantity and quality to exist in your experimentation programme? The answer is a resounding yes. But make sure that your optimisation platform of choice has the capability to do this before you invest. By using Optimizely over the last few years, we have seen a significant increase in our testing velocity capabilities.

Over the last 12 months, this increase in velocity has made a real difference to our clients and the understanding of their customers, which in turn has allowed us to create growth across the organisations we work with, despite the current environment.

Looking ahead, the next 12 months will present new opportunities as the optimisation industry continues to develop and provide new solutions to problems. At Kin + Carta we are always on the lookout for innovation and new ways to help our clients.

Want to know more?

Click here to get in touch