Jakob Nielsen Is Not Wrong (Pt. 2)

This week’s Alertbox takes a swipe at automated A/B testing and argues that traditional qualitative testing methodologies (i.e., in-person usability testing) are better and more flexible mechanisms for gathering useful design feedback. Briefly, A/B testing involves deploying two different design variations in a production environment and assessing their comparative success on a single identified metric, such as purchase conversion. Both Amazon and Yahoo have at times employed variations of this approach. Nielsen does a good job of pointing out the limitations of A/B testing and defining the narrow situations in which this methodology is most appropriate.
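To make the mechanics concrete, here is a minimal sketch of the comparison an A/B test ultimately reduces to: traffic is split between two variants, and the single chosen metric (here, purchase conversion) is compared across them. The function name and figures below are hypothetical, and real deployments such as Amazon’s or Yahoo’s rest on far more infrastructure for traffic splitting, logging, and analysis.

    import math

    def compare_variants(conversions_a, visitors_a, conversions_b, visitors_b):
        """Compare two design variants on one conversion metric using a
        simple two-proportion z-test (illustrative sketch only)."""
        rate_a = conversions_a / visitors_a
        rate_b = conversions_b / visitors_b
        # Pooled rate under the assumption that the two variants perform equally
        pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
        std_err = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
        z = (rate_b - rate_a) / std_err
        return rate_a, rate_b, z

    # Hypothetical traffic split of 5,000 visitors per variant
    rate_a, rate_b, z = compare_variants(120, 5000, 150, 5000)
    print(f"A: {rate_a:.2%}  B: {rate_b:.2%}  z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level

The sketch also illustrates Nielsen’s central objection: everything hinges on that one metric, and the test says nothing about why one variant performed better.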

We don’t really have a dog in this fight. I suppose, as practitioners of traditional usability testing (just like Jakob Nielsen), we are all in favor of any argument that helps maintain interest in this methodology and broaden its applicability; it is, incidentally, one we believe in passionately as well. But that’s not the reason for this post. The reason for this post is the very high level of interest (judging from our logfiles) in the concept of Multivariate Testing. We posted about Multivariate Testing some months ago, and our post has attracted a lot of attention from people apparently searching for more information about this research strategy. It so happens that Multivariate Testing avoids some of Nielsen’s objections to A/B testing, while many of his points, in turn, can be used as arguments against Multivariate Testing in some situations.

So, even though Nielsen’s comments are not specifically directed at Multivariate Testing, anyone interested in this approach would benefit from his views about automated testing in general.
