Last week we examined some popular excuses for not doing usability testing. This week, we’ll take a look at the top excuses for avoiding conversion testing (aka A/B and multivariate testing).
1. “I don’t have the budget to purchase testing software.”
Visual Website Optimizer starts at $49 per month. Google Website Optimizer (GWO) is free. Budget is no excuse.
2. “I don’t have the technical expertise to set up a test.”
Though it’s true that technical implementations can get complex, that doesn’t have to be the case. Setting up a simple A/B test on a tool like GWO is easy and can take only minutes. So take the initial plunge with a simple A/B test. For example, test some revised headlines.
3. “I don’t know how to run a statistical analysis on the results.”
There’s no need, as basic statistical analyses are built into all the major testing tools. Sure, you can run more sophisticated analyses on the results and learn more from the data, but you can leave that for later tests.
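For the curious, the analysis built into these tools usually boils down to something like a two-proportion z-test. Here’s a minimal sketch (the visitor and conversion counts are made-up numbers for illustration, not results from any real test):

```python
import math

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variation B's conversion rate
    significantly different from the original A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical test: 500 visitors each; A converts 50 (10%), B converts 75 (15%)
z = z_score(50, 500, 75, 500)
significant = abs(z) > 1.96  # |z| > 1.96 means significant at the 95% level
```

In this made-up example the difference would clear the 95% bar, which is exactly the kind of verdict the testing tool hands you without any manual math.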
4. “I fear that running tests could lower my conversion rate.”
Yes, it’s possible that a poor-performing variation could temporarily lower conversion rates. But all testing platforms allow you to monitor your tests on an ongoing basis, and to either stop the test or disable poor-performing variations. Also, you can reduce risk by specifying that only a certain percentage of your traffic will take part in the test. The greater risk lies in NOT running tests.
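To make that traffic-limiting idea concrete, here’s a rough sketch of how a tool might deterministically route only a fraction of visitors into a test. The `visitor_id` and the hashing scheme are illustrative assumptions, not how any particular platform works:

```python
import hashlib

def in_test(visitor_id: str, fraction: float = 0.2) -> bool:
    """Deterministically place ~fraction of visitors into the test.
    Hashing the visitor ID means the same person always gets the
    same answer, so their experience stays consistent across visits."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map hash to [0, 1)
    return bucket < fraction

# Roughly 20% of a simulated visitor population enters the test
share = sum(in_test(f"visitor-{i}") for i in range(10_000)) / 10_000
```

The point of the deterministic hash (rather than a coin flip per page view) is that a visitor who lands in the test stays in it, which keeps the data clean.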
5. “There’s no guarantee I’ll get a positive result.”
True. But the flipside of that argument is that if you do nothing, it’s guaranteed you won’t get results. Stick with testing, and you’re bound to get positive results soon enough.
6. “I don’t have a UX designer to come up with the alternate variations.”
Then either hire one on contract, or (as mentioned above) test simple text changes: headlines, subheads, bullet points, calls to action, etc.
7. “I don’t need to test, I know what works best for my customers.”
Nobody gets it right every time. I’ve seen lots of tests where everyone’s prediction was wrong and the winner was a total surprise. So never assume that your page is perfect; you might be surprised at the changes that lead to better performance.
8. “I prefer the simplicity of sequential testing.”
Sequential testing (i.e. running one version for a while, then running a revised version and seeing if it performs better) doesn’t have the accuracy of a true split A/B test. (You’ll never know whether external factors, like seasonality or a marketing campaign, influenced the results.) The whole point of testing is to base decisions on data rather than hunches, and that data must be reliable.
9. “My website doesn’t get enough traffic.”
For very low traffic websites, this might actually be a valid excuse. However, testing just one variation against the original does not take all that much traffic, especially if the two versions are dramatically different. It’s not uncommon to achieve statistically significant results with a total of fewer than 1,000 visitors to the page.
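As a rough sanity check on that claim, here’s a back-of-the-envelope sample-size estimate using the standard normal approximation (95% confidence, 80% power). The base conversion rate and the lift are hypothetical numbers chosen to represent a “dramatically different” variation:

```python
import math

def sample_size(base_rate, relative_lift, alpha_z=1.96, power_z=0.84):
    """Visitors needed per variation to detect a given relative lift,
    at 95% confidence and 80% power (normal approximation)."""
    p2 = base_rate * (1 + relative_lift)
    variance_term = base_rate * (1 - base_rate) + p2 * (1 - p2)
    return math.ceil((alpha_z + power_z) ** 2 * variance_term
                     / (p2 - base_rate) ** 2)

# Hypothetical: a 5% base rate and a dramatic 100% lift (5% -> 10%)
n_per_variation = sample_size(0.05, 1.0)
total = n_per_variation * 2
```

Under these assumptions the total comes out under 1,000 visitors across both versions; a more modest lift (say 10–20%) would require far more traffic, which is why dramatic variations are the right starting point for low-traffic sites.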
10. “It’s too early for us.”
If you have a live website with significant traffic, then it’s time to start testing. The sooner the better. New businesses can actually benefit the most from conversion testing, as they’re likely to have a less complete understanding of their customers and what makes them tick.