Best practices in multivariate testing for b-to-b companies

By Eric Hansen


Testing websites and campaigns is key to improving performance. A/B testing—trying two versions of a website landing page and comparing how they perform with your target audience—is often used for this, but it only allows you to test one factor at a time. Multivariate testing, on the other hand, enables you to test many changes simultaneously. Evaluating the impact of combinations of factors and variations often reveals significant interaction effects that can have a dramatic impact on your company's conversion goals.
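To make the combinatorics concrete, here is a minimal Python sketch of a full-factorial multivariate test: three hypothetical page factors with two variations each yield eight variants, and each visitor is deterministically bucketed into one. The factor names and variations are illustrative, not a specific tool's configuration.

```python
import hashlib
from itertools import product

# Hypothetical factors for a landing page test; names are illustrative.
factors = {
    "headline": ["benefit-led", "feature-led"],
    "cta_text": ["Request a demo", "Start free trial"],
    "hero_image": ["product", "customer"],
}

# Full-factorial design: every combination of every variation (2 x 2 x 2 = 8 variants).
variants = [dict(zip(factors, combo)) for combo in product(*factors.values())]

def assign_variant(visitor_id: str) -> dict:
    """Deterministically bucket a visitor so they see the same variant on every visit."""
    digest = hashlib.sha256(visitor_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

print(assign_variant("visitor-42"))
```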

Do you have a hunch about what's working or not working on your website? Test it and see what happens; the results are often surprising.

To help you get started, here are 10 best-practice recommendations:

1) Form a great testing team. The team must have a mandate for improvement: it should be charged with measurably improving website content and campaigns, and be able to clearly demonstrate and communicate results to stakeholders. It is crucial that the team include your organization's top talent, as well as a technically minded project manager and an executive.

2) Get your stakeholders on board. As with any endeavor, your testing initiative is doomed to fail without management's support. Show exactly what you aim to achieve and the results management can expect, expressed as bottom-line financial predictions.

3) Write a formal testing plan. A written plan makes tests easier to justify and prioritize. It should spell out who requested the test, what is being tested and why, the expected outcome, how success will be measured, the risks of running the test, the resources required and the date by which results are needed.
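For teams that track plans programmatically, such a plan can be captured as a simple structured record. The sketch below uses a Python dataclass; every field name and value is a hypothetical example, to be adapted to what your organization actually tracks.

```python
from dataclasses import dataclass, field

# Illustrative structure for a formal testing plan; all fields are assumptions.
@dataclass
class TestPlan:
    requested_by: str
    hypothesis: str          # what is being tested and why
    success_metric: str      # how success will be measured
    expected_lift: str       # test expectations
    risks: list = field(default_factory=list)
    resources: list = field(default_factory=list)
    results_needed_by: str = ""

plan = TestPlan(
    requested_by="VP Demand Gen",                                  # hypothetical
    hypothesis="A shorter form will raise demo-request conversions",
    success_metric="form completion rate",
    expected_lift="+10% completions",
    risks=["less lead data for sales"],
    resources=["designer", "front-end dev"],
    results_needed_by="end of next quarter",                       # hypothetical
)
print(plan)
```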

4) Think about measurement. Your testing program should integrate with your overall analytics efforts. Ensure that data available through analytics tools, such as audience segments, can be applied to tests. Metrics already established in your Web analytics program should guide which tests you undertake.
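As a rough illustration of what this integration can look like, the sketch below tags each test conversion with an audience segment pulled from an analytics export. The segment lookup and event fields are assumptions for the example, not any particular vendor's API.

```python
# Minimal sketch: applying an analytics-derived audience segment to test results.
segments = {"visitor-42": "enterprise", "visitor-99": "smb"}  # from your analytics tool

def record_conversion(visitor_id: str, variant: str, events: list) -> None:
    """Log a conversion tagged with both the test variant and the analytics segment."""
    events.append({
        "visitor": visitor_id,
        "variant": variant,
        "segment": segments.get(visitor_id, "unknown"),
    })

events = []
record_conversion("visitor-42", "B", events)
print(events)
```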

5) Clearly define success and failure. Success means different things to different stakeholders: it can mean obvious financial gains, increased user engagement or fewer support calls. Even a “failed” test can be counted a success when it teaches you what does not work.

6) Test your test. Testing is not going to solve all the issues of your marketing program in a month, a quarter or even a year. Certain technical implementations of tests may be trickier than others. Make sure you isolate factors—individual changes that you make—so you can determine which ones are responsible for better results.
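One simple way to isolate factors after a test is to estimate each factor's main effect: average the conversion rates of all variants that share a given level of that factor. The sketch below does this over made-up results; the factor names and numbers are illustrative only.

```python
from collections import defaultdict

# Estimate each factor's main effect from per-variant conversion rates.
# The results below are fabricated numbers for illustration only.
results = [  # (variant factors, conversion rate)
    ({"headline": "A", "cta": "demo"}, 0.040),
    ({"headline": "A", "cta": "trial"}, 0.055),
    ({"headline": "B", "cta": "demo"}, 0.035),
    ({"headline": "B", "cta": "trial"}, 0.050),
]

effects = defaultdict(list)
for factors_used, rate in results:
    for factor, level in factors_used.items():
        effects[(factor, level)].append(rate)

for (factor, level), rates in sorted(effects.items()):
    print(f"{factor}={level}: mean conversion {sum(rates) / len(rates):.3f}")
```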

7) Clarify your testing timeline. Testers often forget to take dayparts and weekends into account. For tests running one or two weeks, I recommend a period of seven days plus one more day (or 14 days plus one more day). The extra day at the start of the test ensures the period covers a full weekly cycle and is long enough to yield a statistically significant sample size.
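Before committing to a window, it is worth a back-of-the-envelope check that it can deliver enough traffic. The sketch below approximates the visitors needed per variant for a two-proportion z-test; the baseline rate, hoped-for lift, significance level and power are all assumptions you should replace with your own.

```python
from math import ceil
from statistics import NormalDist

def visitors_needed(p1: float, p2: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate visitors per variant for a two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for significance
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_a + z_b) ** 2 * variance / (p1 - p2) ** 2)

n = visitors_needed(p1=0.04, p2=0.05)  # 4% baseline, hoping for 5%; illustrative
print(f"~{n} visitors per variant; divide by daily traffic to size your test window")
```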

8) Communicate your test results with actionable analysis. The efforts, updates, successes and even failures of the testing team should be broadcast throughout the organization. But ensure this is not merely one-way communication; in-person presentations are always best. Above all, include actionable recommendations along with your results and use this opportunity to suggest additional tests to keep the optimization ball rolling.

9) Test different audience segments. The most revealing tests will be those that use targeted audience segmentation. Knowing that a certain change increased conversions by 5% for all visitors is somewhat helpful, but knowing that the same change resulted in a 20% increase for a key target audience is far more valuable.
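The sketch below shows the arithmetic behind that comparison: the same change produces a modest lift overall but a much larger one for a single segment. The segments and counts are fabricated to mirror the 5%/20% example above.

```python
# Per-segment lift analysis over fabricated counts.
data = {
    # segment: ((control conversions, visitors), (variant conversions, visitors))
    "all visitors": ((400, 10000), (420, 10000)),
    "enterprise":   ((50, 1000), (60, 1000)),
}

for segment, ((c_conv, c_n), (v_conv, v_n)) in data.items():
    control, variant = c_conv / c_n, v_conv / v_n
    lift = (variant - control) / control * 100
    print(f"{segment}: {lift:+.0f}% lift")
```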

10) Mine for deeper opportunities. Once your b-to-b testing program is up and running, and the team has a few wins under its belt, in-depth data analysis and statistical modeling are the next level to strive for. Analyzing offline data or qualitative voice-of-customer data alongside test results can yield insights that are not immediately obvious from the testing or analytics tools alone.
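As one small example of this kind of analysis, the sketch below joins online test assignments with offline CRM outcomes so variants can be compared on closed deals rather than clicks. The lead IDs, field names and data are hypothetical.

```python
# Join online test assignments with offline outcomes (e.g., CRM deal data).
test_results = [
    {"lead_id": "L-1", "variant": "A"},
    {"lead_id": "L-2", "variant": "B"},
]
crm_outcomes = {"L-1": {"closed_won": False}, "L-2": {"closed_won": True}}

joined = [{**row, **crm_outcomes.get(row["lead_id"], {})} for row in test_results]
print(joined)  # now you can compare close rates, not just clicks, by variant
```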

Eric Hansen is founder and CEO of Web optimization company SiteSpect.
