How to Avoid the Most Typical A/B Testing Mistakes
A/B testing is a complex but highly effective tool for increasing conversions. Because of that complexity, even experts sometimes make mistakes when running A/B experiments.
Since any one of these mistakes can ruin the whole strategy, the only safe approach is to avoid them at all costs. That isn’t easy, but it is possible!
#1 Testing Everything at Once
Trying to test everything at once is one of the biggest mistakes inexperienced testers make. There are two main reasons for it:
- The absence of a clear, reasonable hypothesis to test.
- The desire to get fast results without checking the influence of each individual element.
As a result, you will never know which change caused the increase in CTR, transitions, or profit.
How to avoid?
Test only one hypothesis in each experiment. This is a basic rule of A/B testing, and it implies a systematic approach:
- Analyze the website using web analytics services, user behavior analysis, and surveys.
- Make a list of hypotheses that could solve the problems you found and improve site performance.
- Rank the hypotheses by priority. Then choose the highest-priority hypothesis and launch the test.
#2 Testing Versions One After Another
This old method is highly inaccurate. It works as follows:
- A tester takes two versions of a web page (as an example) and shows them alternately over different periods, recording the basic metrics and comparing the results.
The problem is that the metrics may change because of the season, an advertising campaign, an economic crisis, exchange-rate fluctuations, and many other factors. As a result, the conclusions won’t be accurate or reliable.
How to avoid?
Use special services or scripts that divide all traffic in half. This lets you run the experiment under almost identical conditions for both versions. For example, you can use a free tool from Google Analytics.
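For illustration, here is a minimal Python sketch of how such a splitting script might work (the `user_id` format is an assumption; dedicated services handle this for you):

```python
import hashlib

def assign_variant(user_id: str) -> str:
    """Deterministically assign a visitor to variant A or B.

    Hashing the user ID keeps each visitor in the same variant
    across sessions, so the two groups stay independent.
    """
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# The same visitor always sees the same variant:
assert assign_variant("visitor-42") == assign_variant("visitor-42")

# Over many visitors the split comes out close to 50/50:
counts = {"A": 0, "B": 0}
for i in range(10_000):
    counts[assign_variant(f"visitor-{i}")] += 1
print(counts)
```

Hashing rather than alternating requests is what makes both groups face the same seasons, campaigns, and external conditions at the same time.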
#3 Finishing the A/B Testing Ahead of Time
This is a typical mistake made by those who want fast results from a test. An example:
- A company orders A/B testing from an outside consultant. The consultant analyzes the website, finds a problem, and runs the test. After a few days, the website owner looks at the results and sees that the test version converts twice as well. At this point everything seems fine, so the test is stopped. The website gets the new “tested” version, but after a few weeks, sales fall by half.
How could it happen and how to avoid?
Website visitors behave differently depending on the day of the week. On Monday, for example, they may simply browse, while on Friday night, they buy. That is why it is critical to run the experiment for at least 7 days so it covers every day of the week. This is the required minimum.
If you have time, keep testing for 2 weeks or even longer to get more accurate results. Don’t stop the test until you have enough data to make a decision. For large businesses, this may take several weeks.
#4 Ignoring Statistical Significance
This mistake is especially common when the experiment is run manually using scripts that split the traffic 50/50. Automated services have built-in algorithms that calculate statistical significance and warn testers against ending the experiment before it reaches at least 95%. If the statistical significance is below 95%, the probability of error is too high to rely on the results.
How to avoid?
When running the experiment through a special service, keep it going until the figure reaches 95%. If you test manually, use a statistical-significance calculator to determine the minimum number of participants and conversions needed for the experiment.
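Such calculators typically run a two-proportion z-test under the hood. Here is a minimal sketch in Python (the visitor and conversion counts below are made-up numbers for illustration):

```python
from math import erf, sqrt

def significance(visitors_a, conv_a, visitors_b, conv_b):
    """Two-proportion z-test: return the confidence level (0..1)
    that variants A and B truly differ in conversion rate."""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis (no real difference)
    p = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    z = abs(p_a - p_b) / se
    # Two-sided confidence from the normal distribution
    return erf(z / sqrt(2))

# Hypothetical data: 1000 visitors per variant, 100 vs 130 conversions
conf = significance(1000, 100, 1000, 130)
print(f"{conf:.1%}")  # keep testing until this reaches at least 95%
```

If the two variants convert at the same rate, the function returns 0: with few visitors, even a visible gap in conversion rates can stay below the 95% threshold, which is exactly why stopping early is dangerous.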
#5 Tracking Only One Goal
Let’s take an example:
Imagine your sales funnel consists of 5 stages:
- The catalog of goods.
- Transition to the product card.
- Adding to basket.
- The start of a payment process.
- Completion of payment.
Let’s assume you start the A/B test for the catalog of goods.
- You enlarge the product images to increase the number of transitions to the product card (the second stage). After 2 weeks, you see that transitions have increased.
Everything seems fine, except that the second stage is not the most important one for your store. The crucial stage is the last one, completion of payment. Unless it’s completed, you make no profit.
The problem is that larger images may increase the number of clicks (emotional actions) but reduce the number of people moving on to the next stages of the sales funnel. That is why you should track conversions at every stage of the funnel, not only the one you are optimizing.
I hope you’ll use the advice above to your advantage. Be conservative in testing and read the results in context. If something seems strange or too good to be true, be sure to check it twice.