Finding suitable KPIs, formulating hypotheses, and ultimately organizing and running the A/B test is challenging enough. The real challenge, however, comes when you analyze the collected data and use it to make your web project more successful. This is the stage where even professionals make mistakes, but you should at least avoid the ones that are easy to avoid, such as these:
Error 6: relying only on the results of the testing tool
The testing tool doesn’t just start the test and visualize the collected data; it also reports whether a variant has produced an improvement, how strongly it affects the conversion rate, and which variant counts as the winner. However, these tools cannot measure KPIs such as absolute sales or returns, so you have to incorporate the corresponding external data yourself. If the results don’t meet your expectations, it is also worth looking at the separate results of your web analytics program, which usually provides a much more detailed picture of user behavior.
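A minimal sketch of what incorporating external data can look like: join the testing tool’s per-user variant export with order data from your own shop backend to evaluate KPIs the tool cannot measure, such as absolute sales or returns. The file names and column names here are assumptions for illustration, not the export format of any particular tool.

```python
import pandas as pd

# Hypothetical exports: variant assignments from the testing tool and
# order data (including returns) from the shop backend.
assignments = pd.read_csv("ab_tool_export.csv")  # columns: user_id, variant
orders = pd.read_csv("shop_orders.csv")          # columns: user_id, revenue, returned

# Keep every tested user, even those who never ordered.
merged = assignments.merge(orders, on="user_id", how="left")
merged["revenue"] = merged["revenue"].fillna(0)
merged["returned"] = merged["returned"].fillna(False).astype(bool)

# Per-variant absolute sales and return rate, which the testing tool itself
# typically cannot report.
per_variant = merged.groupby("variant").agg(
    total_revenue=("revenue", "sum"),
    return_rate=("returned", "mean"),
)
print(per_variant)
```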
Inspecting the individual data points is the only way to identify rogue values and filter them out of the overall result. The following example illustrates why this can be decisive in avoiding a wrong conclusion: the tool reports variant A as the optimal version because it achieved the best results. Closer examination, however, reveals that this is due to a single user’s purchase, and that user happens to be a B2B customer. If you remove this purchase from the statistics, variant B suddenly shows the better result.
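A short sketch of this effect, using hypothetical revenue-per-user figures (the numbers and the filter threshold are invented for illustration): a single B2B bulk order makes variant A look better on average, and removing it reverses the picture.

```python
# Hypothetical revenue per user in each variant; the 2500 entry represents
# a single B2B bulk purchase in variant A.
variant_a = [20, 25, 30, 22, 27, 2500]
variant_b = [40, 45, 50, 42, 48, 44]

def mean(values):
    return sum(values) / len(values)

print(f"Raw means:      A = {mean(variant_a):.2f}, B = {mean(variant_b):.2f}")

# Filter out extreme values above an assumed cap; in practice you would
# justify the threshold from your own data, e.g. by inspecting the distribution.
CAP = 500
a_filtered = [v for v in variant_a if v <= CAP]
b_filtered = [v for v in variant_b if v <= CAP]

print(f"Filtered means: A = {mean(a_filtered):.2f}, B = {mean(b_filtered):.2f}")
```

With the raw data, variant A appears to win by a wide margin; once the single B2B purchase is filtered out, variant B clearly performs better.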
The same example can be applied to the average shopping cart value, the order rate, or various other KPIs. In each case you will notice that extreme values can strongly distort the mean and that false conclusions can quickly follow.
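One way to see how sensitive the mean is: compare it with more robust statistics such as the median or a trimmed mean. The shopping-cart values below are invented for illustration, with one extreme B2B cart.

```python
from statistics import mean, median

# Hypothetical shopping-cart values; 1800 is a single extreme B2B cart.
cart_values = [35, 42, 38, 40, 37, 41, 39, 1800]

def trimmed_mean(values, k=1):
    """Mean after dropping the k lowest and k highest values."""
    ordered = sorted(values)
    return mean(ordered[k:len(ordered) - k])

print(f"Mean:         {mean(cart_values):.2f}")    # pulled up sharply by the outlier
print(f"Median:       {median(cart_values):.2f}")  # barely affected
print(f"Trimmed mean: {trimmed_mean(cart_values):.2f}")
```

If the mean and the median diverge this strongly, it is a signal to look at the individual values before declaring a winner.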