10 A/B Testing Mistakes You’re Probably Making
If you’re an eCommerce or SaaS business owner who wants to improve your A/B testing practices and make more money…you’re in the right place! Now, when you think about A/B testing, you probably think first about increasing your conversion rate.
“Will my new website changes result in more conversions or fewer?”
This is the most deeply rooted mistake in all of conversion optimization and A/B testing. The focus should not be on conversion rate, nor even on revenue, but on profits! In this guide to A/B testing mistakes you’ll learn:
- Why test?
- 10 A/B testing mistakes that make your wallet cry
- Bonus Ebook: “How to plan A/B tests the right way”
Let’s dig deeper…
Why Use A/B Testing?
Most website owners obviously have mystical powers and simply ‘know’ how to fix their sales funnels: they’ll have an idea, make a change and push it live…
Done. No tests, no data…push and pray!
But smart business owners know that they don’t know; they assume. And to have a good chance of increasing the number of leads and sales their website generates, they test those assumptions.
[Tweet “The focus should not be on #conversion rate, not even on revenue but on profits. – @gilesadamthomas”]
A hypothesis is a statement based on initial data collection and analysis that should be tested and validated. This means that you collected and analysed some data and from this process learnt something about your website or customers that you should now test to confirm its truth. You need a hypothesis to ensure your A/B tests have a clear goal. With a clear hypothesis you can test:
- Key page elements such as value propositions and call to action buttons
- Whole pages to find the best converting design
- Target content for specific visitor segments
Testing is the right thing to do. If you just push new website changes live and hope for the best, you risk negatively impacting your conversion rates and profits, because there is no guarantee that your hypotheses are valid. Blind changes to websites can really hurt your business. Testing also makes it much simpler to roll back to a previous website version or design if a change does hurt your conversion rate.
Mistake #1 – Don’t Just Measure Conversion Rate (Especially for lead gen)
You see, A/B testing results can be very misleading. Let’s say you ran an A/B test on a CPC landing page. You had 6,040 visitors to your landing page throughout the test duration, and the traffic was split 50/50 between the control and your new variation.
You saw no significant increase in the number of leads generated, so you decide your new variation did not win. The test failed 🙁
Or did it!?
Let’s look at the bigger picture. When you look at the profits you see you actually doubled sales!
You must focus on your lead-to-MQL (marketing qualified lead) rate and on profits, not on the micro conversion rate of form submissions.
The test was successful because the variation converted far more qualified leads than the control. Your bank account never lies (unfortunately).
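The arithmetic behind this can be sketched in a few lines. The numbers below are hypothetical (including the assumed $500 profit per qualified lead); the point is that two variations with identical lead counts can generate very different profits once you account for lead quality:

```python
# Hypothetical illustration: same number of leads, different profit.
# The $500 profit-per-qualified-lead figure is an assumption for the example.
PROFIT_PER_QUALIFIED_LEAD = 500

def profit(leads, mql_rate, profit_per_mql=PROFIT_PER_QUALIFIED_LEAD):
    """Profit from a variation: leads * share that qualify * value per MQL."""
    return leads * mql_rate * profit_per_mql

control = profit(leads=100, mql_rate=0.10)    # 100 leads, 10% qualify
variation = profit(leads=100, mql_rate=0.20)  # 100 leads, 20% qualify

print(control, variation)  # 5000.0 10000.0 -- identical lead count, double the profit
```

Judged on form submissions alone the test is a tie; judged on profit, the variation is a clear winner.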
Mistake #2 – Sequential Testing Is Not A/B Testing
Ok so next up, you have the folks who say:
“I don’t need a tool or software to a/b test”
“What if I just make a change and wait to see if the goal conversion rate increases or decreases?”
Testing the conversion rate of design A for one week, then swapping it and testing the conversion rate of design B for one week as a comparison is not real testing.
This is actually called sequential testing.
Sequential testing is not valid because the two designs are not measured against the same pool of visitors. The data sets are different and therefore not comparable.
Your conversion rate can change seasonally, by the day of the week, or due to external factors such as the weather.
Therefore the only valid way to compare is to test at the same time with the same traffic. Never use sequential testing; always test with the same traffic, split at the same time.
[Tweet “Sequential testing: no validity & not the same source of visitors. – @gilesadamthomas”]
Common Mistake #3 – Random Testing
Don’t test randomly; base your tests on data collection, analysis and hypothesis creation. Your test ideas should come from customer insight and digital analytics, not from the highest paid person’s opinion. Remember:
A hypothesis is a statement based on initial data collection and analysis that should be tested and validated.
Not a statement based on random ideas, case studies or competitor analysis.
Common Mistake #4 – Prioritization of tests
Prioritize your tests and remember the conversion hierarchy: test the fastest and cheapest changes with the biggest business impact first.
Common Mistake #5 – Small website changes give small conversion changes
Don’t just test small changes like button copy; big changes and big tests produce bigger results.
Common Mistake #6 – Calling tests early
Don’t stop your tests before they finish. Wait for statistical significance.
Even if your testing tool declares a winner early, wait until you have reached your pre-planned sample size.
Run tests for full weeks, as your conversion rate can fluctuate from day to day. Always run complete weeks for your tests.
[Tweet “Don’t stop your test before they finish. Wait for statistical significance. – @gilesadamthomas”]
Common Mistake #7 – Don’t surprise your regulars
Don’t surprise regular visitors. If you are testing a core part of your website, include only new visitors in the test. You want to avoid shocking regular visitors, especially because the variations may not ultimately be implemented.
Common Mistake #8 – Pre-plan your test’s sample size
Know how much traffic and how many conversions will be roughly needed to achieve a statistically significant test before you start testing.
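One way to pre-plan this is the standard sample-size formula for comparing two proportions. A minimal sketch, assuming a 95% confidence level and 80% power (the z-values 1.96 and 0.84); the baseline and target rates in the example are made up:

```python
import math

def sample_size_per_variation(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Visitors needed per variation to detect a lift from rate p1 to rate p2
    at 95% confidence and 80% power (two-proportion z-test approximation)."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / (p2 - p1) ** 2)

# e.g. baseline 3% conversion rate, hoping to lift it to 4%:
print(sample_size_per_variation(0.03, 0.04))  # roughly 5,300 visitors per variation
```

Notice how quickly the requirement grows as the expected lift shrinks; chasing a tiny improvement can demand a sample size your traffic cannot deliver in a reasonable time.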
Common Mistake #9 – Website-wide consistency for tests
Make your A/B test consistent across the whole website. If you are testing a sign-up button that appears in multiple locations, then a visitor should see the same variation everywhere. Showing one variation on page 1 and another variation on page 2 will skew the results.
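Most testing tools handle this for you, but the underlying idea is deterministic bucketing: derive the variation from a stable visitor ID so the same visitor always lands in the same bucket. A sketch of the technique (the visitor IDs and variation names are illustrative):

```python
import hashlib

def assigned_variation(visitor_id: str, variations=("control", "variation_b")):
    """Deterministically bucket a visitor by hashing a stable ID, so they
    see the same variation on every page and on every return visit."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    return variations[int(digest, 16) % len(variations)]

# The same visitor always lands in the same bucket:
assert assigned_variation("visitor-42") == assigned_variation("visitor-42")
```

Because the assignment is a pure function of the visitor ID, the sign-up button looks the same on page 1, page 2 and next Tuesday, which is exactly the consistency this mistake is about.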
Common Mistake #10 – Segmenting requires a bigger sample size
Segmented testing requires more traffic (a bigger sample size) and more conversions. As a rule of thumb, you need at least 250 conversions per variation for most tests to have a big enough sample size and reach statistical significance, and that requirement applies to each segment you analyse.
Testing requires pre-planning
Make sure you’re avoiding these common testing mistakes, and please pre-plan your tests! Calculate the required number of test participants per variation, then use your unique-visitor stats in Google Analytics to estimate how long the test will take, and don’t stop it early.
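Turning that estimate into a schedule is simple division, rounded up to full weeks so day-of-week swings average out. The figures below (5,300 visitors per variation, 1,000 daily uniques) are assumptions for illustration:

```python
import math

def weeks_to_complete(required_per_variation, n_variations, daily_unique_visitors):
    """Estimate test duration from required sample size and daily traffic,
    rounded up to complete weeks so day-of-week fluctuations average out."""
    total_needed = required_per_variation * n_variations
    days = total_needed / daily_unique_visitors
    return math.ceil(days / 7)

# e.g. 5,300 visitors per variation, 2 variations, 1,000 uniques/day:
print(weeks_to_complete(5300, 2, 1000))  # -> 2 weeks
```

If the answer comes out at months rather than weeks, that is a signal to test a bolder change (Mistake #5) rather than to stop the test early (Mistake #6).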
You can improve your testing practices too
But you have to take action on what you have learned today…
So for those of you who are serious about getting higher conversion rates (and profits!), I’ve put together a free bonus area. What you’ll get:
- Bonus Ebook: “How to plan A/B tests the right way”
- Bonus PDF: 25 Conversion Rate Best Practices to test in your next design
- Bonus PDF: Test prioritization spreadsheet and manual, test the most important things that will make the most money first!