5 Most Common Mistakes When Conducting A/B Tests

September 2, 2014

In her webinar, Nazli Yuzak talked about the 5 mistakes most people make when conducting A/B tests. Nazli is perhaps the best person to talk about these kinds of tests, and why they fail, given her enviable experience in the industry. The first session covered what failed tests actually mean, and how the insight gained from a test can benefit your website.

1. Misunderstanding Client Goals

The first mistake you can make when testing is failing to study the client's goals or objectives. It may sound simple, but this really calls for deep conversations about what your stakeholders are trying to achieve with a particular test. Is there a concept they are trying to prove? Are they going to take steps based on the results? If so, what kind of steps will they take? What is their timeline? You really need to delve a little deeper to understand the details.

If you don't understand your stakeholders' goals and beliefs in detail, you won't be able to create a hypothesis, and in the end you won't be able to give them the insight they need. They may expect to see one thing, but by the time the test is done you could have a completely different result, one they cannot act on. This is not the situation you want to end up in. You really want to contribute to your stakeholders' making good business decisions, and if you can't provide that for them, it counts as a failure on your part.

2. Missing Timelines and Action Plans

When stakeholders see the timeline of a particular project, they determine how much time they have to act on the insight from the test. You need to understand the amount of time you'll need to test, and the cost that goes into developing the test from beginning to end.

Let's say your stakeholders wanted to make a particular change on the product page, and you ran some tests and gave them some insight. Three months later, however, the product page was retired or merged into a completely different path. You invested your time and resources in content that is no longer relevant. Keep this in mind: you really need to understand the timeline of each project and plan accordingly.

3. Weak Data

When forming a hypothesis, make sure it's not just a gut feeling or an executive mandate you're relying upon. Rather, it should be built from a combination of inputs, such as a click-stream analysis from relevant or similar resources. Also look at previous tests, what you learned from them, and what came out of them. Look at competitive applications to see if you can learn anything from them as well.

Additionally, look at your best practices; a combination of all of these will help you build your hypothesis and your goals. Stakeholders will not tell you how to build their hypothesis. If you try to build one based solely on what they've given you, it's highly unlikely you'll achieve what you're looking for, and this too will be considered a failed test.

4. Poor Test Setup

Your testing culture should be reflected in your test setup and your recipes as well. If, for some reason, there are changes to your default that aren't reflected in your test recipe, you can end up testing against something that is no longer the actual default. Consequently, you won't see any healthy numbers at the end of that test, and it too will be considered a failed test. The best thing to do is learn from experience and leverage it to your benefit. At the very least, by paying extra attention, you will not repeat the same mistakes in a similar situation again.

5. Overestimating

Something very common among companies is trying to cram too much into a single test. A whole-page redesign, or testing too many recipes in a single test, rarely turns out well. You need a tremendous amount of traffic to accommodate a test of that magnitude (say, 20 recipes in a single test). And even then, you'll need to run the test in stages: you learn what works, and once you have a winner, you start testing against it. If you want to check out the entire webinar, click here.
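To get a feel for why many recipes demand so much traffic, here is a rough back-of-the-envelope calculation using the standard two-proportion z-test approximation (a minimal sketch; the baseline conversion rate and detectable lift below are illustrative assumptions, not figures from the webinar):

```python
from math import sqrt, ceil

def sample_size_per_variant(p_base, mde_abs, z_alpha=1.96, z_beta=0.84):
    """Approximate visitors needed per variant to detect an absolute
    lift of `mde_abs` over baseline rate `p_base`, at roughly 95%
    confidence and 80% power (two-proportion z-test)."""
    p_alt = p_base + mde_abs
    p_bar = (p_base + p_alt) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p_base * (1 - p_base) + p_alt * (1 - p_alt))) ** 2
    return ceil(numerator / mde_abs ** 2)

# Illustrative: 3% baseline conversion, detect a 0.5-point absolute lift
n = sample_size_per_variant(0.03, 0.005)
print(n)        # visitors needed per recipe (roughly 20,000 here)
print(n * 21)   # total traffic for a control plus 20 recipes
```

With these assumptions, each recipe needs on the order of 20,000 visitors, so a control plus 20 recipes requires roughly 400,000 visitors before the test can be read with confidence, which is why staged testing against the current winner is usually the more practical path.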



Written by Lemuel Galpo

As Customer Content Manager, Lem is responsible for bringing learnings in conversion optimization and testing to the world. He is part of Convert.com's growth team and coordinates all writers, editors and illustrators.

