This week, Zach Bulygo of KISSmetrics published a blog post that, in my opinion, gives a very limited view of what A/B testing tools can do. In the post, "Before you launch your next A/B test, there’s an important thing you need to know," he writes the following:
"Variants that appear to be winners may not actually be winners when it comes to making you more money. In fact, sometimes they may be losers that make you less money. And, if you aren’t using the right tool, you may unwittingly launch those losing variants. If you were using a tool that tests only to the next conversion step, you would pick the variant as the winner and show 100% of all visitors the variant headline. But as we can see, that would actually be detrimental to your business because it results in fewer customer signups."
The problem I have with this part is that it implies there are A/B testing tools out there that limit the number of goals users can set up. I don’t know of any A/B testing tool that offers only a single goal. Convert Experiments, and every other tool I have seen over the past years, supports a multi-goal setup. Most clients also use this option to measure multiple steps in the funnel, and they arrive at the same insights as the A/B testing report of KISSmetrics.
I invite Zach to name the tool he is referring to when he mentions limits on the goals it can set up. Does it allow fewer than five per experiment? Tracking adding to cart, going to checkout, seeing the payment page, and finally paying (four goals) would usually do the trick, and pretty much every tool also allows you to connect revenue to one of these goals. That matters, because revenue per visitor and products ordered often tell a different story than conversion rates alone.
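To make that concrete, here is a minimal sketch of the kind of multi-goal funnel report described above. All variant names, visitor counts, and revenue figures are invented for illustration; it simply shows how a variation can win on an early goal (add to cart) while losing on completed payments and revenue per visitor.

```python
# Hypothetical multi-goal A/B test report.
# The four funnel goals mirror the steps mentioned in the text:
# add to cart -> checkout -> payment page -> paid.
funnel_goals = ["add_to_cart", "checkout", "payment_page", "paid"]

# Made-up counts per variation, plus total revenue (not real data).
data = {
    "original": {"visitors": 1000, "add_to_cart": 300, "checkout": 180,
                 "payment_page": 150, "paid": 120, "revenue": 6000.0},
    "variation": {"visitors": 1000, "add_to_cart": 360, "checkout": 190,
                  "payment_page": 140, "paid": 100, "revenue": 4500.0},
}

for name, d in data.items():
    print(name)
    for goal in funnel_goals:
        # Conversion rate for each funnel step, measured against all visitors.
        print(f"  {goal}: {d[goal] / d['visitors']:.1%}")
    # Revenue per visitor, the metric that can contradict early-step winners.
    print(f"  revenue per visitor: {d['revenue'] / d['visitors']:.2f}")
```

In this made-up example the variation lifts add-to-cart from 30% to 36%, yet paid conversions drop from 12% to 10% and revenue per visitor falls from 6.00 to 4.50, which is exactly why measuring only the next conversion step is not enough.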
I think KISSmetrics is an awesome tool for cohort analysis: months after a test stops, you are still pushing data to KISSmetrics, and you can look back at the long-term effects of experiments. In addition, this A/B testing report could be really handy when you send other metrics to KISSmetrics that you decided not to send to your A/B testing tool (though most tools can usually handle outside data as well).