Planning and forecasting for CRO experimentation can sometimes feel like putting a finger in the wind, but the key to looking forward is to first look backwards. You’ll often have a wealth of historical testing data that can support a data-driven approach to upcoming plans. However, even when you take this approach, there are common missteps in how you handle this information that may lead to errors in your forecasting.
Here we will look at how to factor compounding results into your CRO planning and forecasts to help make smarter decisions.
Looking back to create a solid foundation
It’s really important to record and track your success rate over time to identify trends in test performance and improve the accuracy of your forecasting and planning. Keeping a repository of every test outcome, along with the percentage change each delivered, will help you build a picture of what you can expect from the ‘average’ test you run.
From this, you can start to plan from a data-driven perspective, factoring your test success rate, and the scale of that success, into a plan that achieves the biggest uplift over a defined time period.
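To make that concrete, here’s a minimal sketch of what querying such a repository might look like in Python. The uplift figures and names are purely illustrative, not real test data:

```python
import statistics

# Hypothetical repository: the fractional uplift each concluded test
# delivered (negative values are losing variants).
test_uplifts = [0.22, -0.05, 0.08, 1.10, 0.15, -0.02, 0.31]

def hit_rate(uplifts, threshold):
    """Share of tests that met or beat a given uplift threshold."""
    wins = sum(1 for u in uplifts if u >= threshold)
    return wins / len(uplifts)

print(f"Average uplift: {statistics.mean(test_uplifts):.1%}")
print(f"Hit rate at 15%+: {hit_rate(test_uplifts, 0.15):.1%}")
print(f"Hit rate at 100%+: {hit_rate(test_uplifts, 1.00):.1%}")
```

Numbers like these – your hit rate at a given uplift threshold – are exactly what the scenarios below are built on.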
Putting snowballing into practice
Imagine you’ve been reviewing your conversion rate (CVR) performance and trends in your data, and have noticed that mobile traffic is underperforming compared to desktop. With mobile traffic to your site also increasing, you want to improve mobile CVR to support your business’s overall revenue goals.
Let’s take a look at an example scenario.
Let’s say that this quarter you’re looking to achieve a 100% uplift in CVR for mobile traffic, taking it from 1% to 2%, and you need to decide how to get there. Because you have tracked your test performance over time, you can break this down statistically, giving you a strong foundation for your planning and forecasting.
Scenario one
To start, you might first consider the odds of achieving this with just one really great test. Looking at the data, of the last 92 tests that Croud ran, six resulted in an uplift of over 100% – roughly one in every 15.3 tests.

So theoretically, you would need to plan, design, build and run 16 tests this quarter in order to find one to roll out that would have a big enough impact to achieve your goal all at once.
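The arithmetic behind that figure is simply the historical hit rate inverted and rounded up – a quick sketch:

```python
import math

tests_run, big_wins = 92, 6           # six of the last 92 tests beat a 100% uplift
tests_per_win = tests_run / big_wins  # ~15.3 tests for each 100%+ winner
print(math.ceil(tests_per_win))       # 16 tests to plan for one goal-hitting result
```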
Scenario two
Alternatively, you might think of opting for scenario two instead – would it be easier to find five tests that each deliver a 20% uplift?
Of the same 92 tests at Croud, 26 had an uplift of 20% or greater – one in every 3.5 tests.

Finding five tests that achieve this would require you to run 18 tests this quarter:
5 × (1 win in every 3.5 tests) = 5 wins in 17.5 tests, rounded up to 18
By choosing this route, you’d be planning, designing and developing 18 tests for the quarter, so why not just opt for scenario one instead? Because scenario two as framed rests on a common fallacy: it treats uplifts as additive, when in reality each win compounds on the last. You haven’t factored in the snowball effect of compounding results – five 20% uplifts don’t add up to 100%, they multiply to roughly 149%.
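You can see the fallacy in two lines of arithmetic – the additive assumption versus what compounding actually delivers:

```python
uplift, wins = 0.20, 5

additive = uplift * wins               # naive assumption: 5 x 20% = 100%
compounded = (1 + uplift) ** wins - 1  # reality: 1.2^5 - 1, about 148.8%

print(f"Additive: {additive:.0%}, compounded: {compounded:.1%}")
```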
Actual scenario two
You actually only need a 15% uplift from each of five tests to achieve a 100% uplift overall. Compounding does some of the work for you: the fifth root of 2 is about 1.149, so five wins of roughly 14.9% each are enough to double your CVR.
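A short sketch shows both where the ~15% figure comes from and how five such wins walk a 1% mobile CVR past the 2% target:

```python
# Per-test uplift needed for five compounding wins to double CVR:
# the fifth root of 2 is ~1.149, i.e. ~14.9% per test.
required = 2 ** (1 / 5) - 1
print(f"Required per-test uplift: {required:.1%}")  # 14.9%

cvr = 0.01  # starting mobile CVR of 1%
for test in range(1, 6):
    cvr *= 1.15  # each winning test compounds on the last
    print(f"After winning test {test}: CVR = {cvr:.4%}")
# After winning test 5: CVR = 2.0114%, just past the 2% target
```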

At Croud, 35 of 92 tests had a 15% uplift or greater, one in every 2.6 tests.

So for scenario two, you would actually only need to plan to create 13 tests this quarter.
5 × (1 win in every 2.6 tests) = 5 wins in 13 tests
This would entail around 19% less time spent on test creation, design and development than the 16 tests needed in scenario one to find a single 100% uplift ((16 − 13) ÷ 16 ≈ 19%).
Next steps
Now, it’s perfectly possible that any of these five tests could exceed the 15% uplift required, which would only reduce the number of tests you need, or allow you to overshoot your 100% uplift target. Over time, a policy of more frequent, lower-impact iterative tests stacks the odds in your favour of achieving your goals more efficiently.
As your planned testing becomes a reality, you could and should feed your actual results into your planning, allowing you to reforecast and re-plan with efficiency in mind so that your goals don’t creep away from you, or cost you more effort than needed.
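As a minimal sketch of that reforecasting loop – the two uplift figures below are hypothetical results from winners already rolled out, and the hit rate is the 1-in-2.6 figure from above – you can recompute the remaining wins and tests needed after each rollout:

```python
import math

target = 2.0      # a 100% uplift means doubling CVR
hit_rate = 2.6    # historical: one 15%+ win per 2.6 tests run
per_win = 0.15    # planned uplift per winning test

rolled_out = [0.22, 0.08]  # hypothetical uplifts from winners shipped so far
achieved = math.prod(1 + u for u in rolled_out)  # 1.22 * 1.08 ≈ 1.32

remaining = target / achieved  # multiplier still needed to hit the goal
wins_left = math.ceil(math.log(remaining) / math.log(1 + per_win))
tests_left = math.ceil(wins_left * hit_rate)

print(f"Achieved so far: {achieved - 1:.1%} uplift")
print(f"Wins still needed: {wins_left}; tests to plan: {tests_left}")
```

Because compounding is multiplicative, the wins-needed calculation uses logarithms rather than simple division; an early overachieving result shrinks the remaining plan faster than an additive model would suggest.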
There are, of course, many other factors to consider when designing a programme of CRO testing, such as avoiding collisions between tests and development work, ensuring each test will receive enough traffic to reach a significant result in an appropriate time frame, and securing resource for test ideation, design and development.
If you’re interested in learning more about effectively testing and planning your approach to CRO, please get in touch with our team.