How to Avoid A/B Test Mistakes

July 16, 2014

The best conversion rate experts in the world get only 1 out of 3 A/B tests right, which means they fail about 67% of the time. Why tests fail, and why that is a good thing, is what Nazli Yuzak, Senior Digital Consultant at Dell, shares in this webinar: why she celebrates failure and is open about it, why you can share failure with your clients, and how to bring the news.

Working With Correct Data

Having accurate data is critically important. Your data should be a combination of all your click-stream analysis, competitive studies, prior test learnings, or anything you can pull from the customer with the help of customer studies such as surveys, focus groups, and so on. In any given organization, you'll find several different layers of information you can draw from. Take advantage of that and make sure you're starting with a really strong hypothesis.
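Part of working with correct data is knowing whether a test's result is real or just noise. As a minimal sketch (the numbers and function name below are hypothetical, not from the webinar), a standard two-proportion z-test can tell you whether a lift in conversion rate is statistically significant:

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) comparing two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled conversion rate
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # two-sided p-value via the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical traffic: 200/10,000 conversions on control vs 240/10,000 on variant
z, p = two_proportion_z_test(200, 10_000, 240, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

With these made-up numbers the variant shows a 20% relative lift, yet the p-value lands just above the conventional 0.05 threshold, which is exactly how a promising-looking test can still count as one of the two out of three that "fail."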

Managing Expectations

The first thing you need to do is establish expectations. This applies to situations where you're delivering test results from your organization to a client. For instance, if a stakeholder or client is looking into redesigning, say, a webpage, and they're expecting you to cram all of that into a single test, you may want to establish the expectation that this may not be a good idea in the first place. You may not be able to get the results or insights they need to take action at the end of it. Otherwise they'll just think that you ran the test and it failed to perform.

In all fairness, the whole setup wasn't good to begin with, and thus it needs to be handled differently. What you need to do is have a talk with the client and make sure they understand what can be done (if you were able to test) and what cannot be done. That is a very important part of the preplanning.

There's also the case of learning from mistakes. In the past, we would run all these tests, come up with some sort of conclusive result, and just let the clients know. But there was all this pre-plan information that would be worth capturing. That's where postmortem analysis really helps, and doing it from two different perspectives is a good idea.

One is the actual test perspective: did you have the correct hypothesis, and did you set up the pages correctly? Ask yourself questions like:

  • Are there quality issues you failed to catch?
  • Were you able to come up with results and take some form of action based on them?
  • If not, why was that the case and how can that be corrected?

You should do this not only for the test operation itself, but also for the larger project. See if there is any learning you can gain from a project operations perspective, then feed that back into your system. As you go about testing and conducting postmortem analysis, you'll learn a lot from the process. You have to figure out a way to capture what you learn and then institutionalize it.
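One way to capture learnings so they can be institutionalized is to record each postmortem in a consistent structure. The shape below is a hypothetical sketch, not a format from the webinar; the field names are illustrative assumptions:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class TestPostmortem:
    """Hypothetical record for capturing learnings from a single A/B test."""
    test_name: str
    hypothesis: str
    outcome: str  # e.g. "winner", "loser", "inconclusive"
    quality_issues: list = field(default_factory=list)
    actions_taken: list = field(default_factory=list)
    learnings: list = field(default_factory=list)

# Example entry with made-up details
record = TestPostmortem(
    test_name="cta-prominence-2014-07",
    hypothesis="A more prominent CTA button lifts clicks on the product page",
    outcome="inconclusive",
    quality_issues=["variant rendered late on mobile"],
    actions_taken=["re-run with mobile traffic segmented out"],
    learnings=["QA each variant on mobile before launch"],
)
print(json.dumps(asdict(record), indent=2))
```

Even a lightweight log like this lets other teams search past tests before spending resources re-testing a concept that already produced an answer.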

Take an example: suppose you've tested call-to-action buttons, seen great results from a particular CTA button, and concluded that it needs to be a permanent element on a page. That insight now needs to be institutionalized. You could approach your design team and ask them, whenever they're designing a new product page with such a CTA, to make sure it's the most prominent element on the page. The next time they design something, they already have that knowledge instead of having to test the concept all over again or spend other resources to gain the same insight.

Passing on the Knowledge

You could also take the opportunity to inform your other stakeholders, depending on your organization. You may be interacting with people like a product team, website team, or research team. Any information or insight you can furnish these teams with can greatly aid them in making better-informed decisions.

Another thing that could be of benefit is learning from others, via best practices or competitive studies, depending on what you're testing. There are tons of resources out there, and it would be great if you could pull them in, observe them, and perhaps discuss them with your stakeholders to establish which could work best for your organization. Your clients are bound to make comparisons between you and other companies, so it would be wise to take what you learn and apply it to your specific case.

For example, there are things the competition could be doing that you may want to introduce to your site. Things like how shopping carts have been set up, how coupon codes have been set up, as well as many other good studies or competitive practices around them that you could draw some inspiration from. See what you could have done differently in some of your own internal projects. It’s good to learn from one’s own mistakes just as it is to learn from the mistakes of others.

Check out the webinar in its entirety here.



Written by Dennis van der Heijden

Co-founder and CEO of Convert.com, passionate about building communities that care and trying to make that happen inside and outside Convert. I love working with my team to make our A/B testing software better for agencies and e-commerce clients.
