In the previous post in this series, I showed you how to create an A/B split test on your website. Creating and implementing the test is the hard part; the easy part is sitting back and letting the data accumulate so you have something to analyze. In this post, you’ll learn a strategy for reviewing your A/B test results in a smart, comprehensive, and objective way.
NOTE: This post is part of a complete series on Conversion Optimization. This series is intended for the beginner — we’ll start from ground zero and work our way through all of the aspects of a basic conversion rate optimization project.
How Much Data Do You Need For An A/B Split Test?
The tricky part of reviewing your results is knowing when you have enough data. There is no single answer that fits every website and every split test (even your own tests will vary). Usually, you’ll be looking at one of two situations:
1. You’ll have a clear winner and loser.
2. You’ll have very close results.
In scenario #1 (a landslide victory by the “A” or “B” version of whatever you’re testing), the most important thing is to be sure you have enough data before declaring a winner. For example, say you’re testing the “Buy Now” button color on a sales landing page. You find that 75% of people click the BLUE version (“A”) of the button and only 25% click the RED version (“B”).
Immediately, you decide that BLUE is the clear winner and declare your test done.
The tricky part here is knowing what these percentages actually mean. 75% is definitely much higher than 25%, but if your sample size is only 12 people, we’re talking 9 clicks for blue and 3 clicks for red. If 12 is a typical number for your business, then these numbers will have to do. But if you can get more visitors to this landing page, you definitely should. A sample size of 12 is really, really small.
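To put a number on just how small 12 is, here’s a quick sketch of an exact binomial test, using the hypothetical 9-vs-3 click counts above and asking whether that split could plausibly have come from a 50/50 coin flip (the function name and example numbers are mine, not from any particular testing tool):

```python
from math import comb

def two_sided_p(successes, trials):
    """Exact two-sided binomial test against a 50/50 null hypothesis."""
    total = 2 ** trials
    # Probability of a result at least this extreme in either direction.
    upper = sum(comb(trials, k) for k in range(successes, trials + 1)) / total
    lower = sum(comb(trials, k) for k in range(0, successes + 1)) / total
    return min(1.0, 2 * min(upper, lower))

# 9 of 12 clicks went to blue: p is about 0.15, well above the usual
# 0.05 cutoff, so this "landslide" could easily be chance.
print(round(two_sided_p(9, 12), 3))

# The same 75/25 split with 100 clicks is comfortably significant.
print(two_sided_p(75, 100) < 0.05)
```

In other words, the identical 75/25 split goes from "could be luck" to "almost certainly real" purely because of sample size.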
How To Avoid Confirmation Bias
Most of the time, your A/B split tests will be objective and clear. In my example above, blue vs. red is easy to interpret objectively. But occasionally you may find yourself succumbing to confirmation bias — a tendency to look for what you want to see or conclude what you already wanted to conclude.
In the blue vs. red example, maybe you’re strongly attached to the color blue for your “Buy Now” buttons. But being the good Conversion Optimization tester that you are, you decide to put it to the test. However, you unconsciously pit blue against colors you already suspect will lose to blue. This is an extreme example, but it illustrates the point: you must be careful not to get too attached to what you think, assume, or believe works best.
To avoid confirmation bias, be sure to test everything more than once, and get outside your own head. It’s always a good idea to involve someone else in your testing, too.
To Re-Test Or Not To Re-Test…
That is the question, and the answer is obvious! Yes, yes, yes: you need to re-test.
After every A/B test you complete, you should begin working on the next test right away. A/B split testing should be an ongoing part of your overall website optimization strategy. Going from a 1% to a 10% opt-in rate can obviously make a massive difference in your business. But that’s not usually how it goes. Generally, you’ll be looking at much smaller gains at a time.
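Those smaller gains also demand more traffic to detect. One rough way to see this is Lehr’s rule of thumb for sample size (approximately 80% power at a 5% significance level); the baseline numbers below are made up for illustration:

```python
def visitors_per_variant(baseline_rate, lift):
    """Lehr's rule of thumb: n ≈ 16 * p * (1 - p) / lift²,
    where p is the average conversion rate across both variants."""
    p = baseline_rate + lift / 2  # average rate across A and B
    return round(16 * p * (1 - p) / lift ** 2)

# Detecting a big jump (10% -> 20% opt-in) needs only a couple
# hundred visitors per variant...
print(visitors_per_variant(0.10, 0.10))
# ...but a small 1-point gain (10% -> 11%) needs vastly more.
print(visitors_per_variant(0.10, 0.01))
```

The takeaway: as your wins get smaller, each test needs to run longer, which is another reason testing should be a continuous habit rather than a one-off project.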
What questions do you have about reviewing the results of your Conversion Optimization A/B split tests? Leave your comment below.