A/B testing boosts email, website development

When Brady Corp. decided to revise its website, the company knew it would have to proceed with caution and test everything. Caution was particularly important because the company, a supplier of workplace safety products such as the signs and systems used to protect workers in manufacturing facilities, wanted to move toward an e-commerce approach and was treading on new ground.

“We're trying to move away from an old-school b-to-b website,” said Craig Madden, senior manager-user experience at Brady. “We're trying to move to a transaction commerce experience on the Web, more of a b-to-c experience.”

Madden said this meant extensive A/B testing during Web development: pitting variations of the redesigned site's elements against one another in real time to see which were most effective.

A/B testing, along with the related multivariate testing, started with trials of subject lines in email campaigns. The goal was simple: find out which subject line enticed the most recipients to open the email. Today, A/B testing is used in vastly expanded ways, touching every facet of both email campaigns and website design and development.

“A/B testing is meant to validate incremental changes that result in improved conversion or a specific desired action,” said Ken Pikulik, director-process and strategy at ResponsePoint Inc., a maker of lead-generation and marketing process software. “Website testing is probably the most prominent use of A/B testing. The goal is to optimize a site through small changes that improve conversion rates.”

While A/B testing may seem simple, it's easy to make major mistakes that render it useless, said Jeff Soriano, director-demand generation at Maxymiser Inc., a testing and conversion company. The first and most obvious is testing the wrong variables.

“The biggest screw-up marketers make is not defining what they want to test,” Soriano said. “They might say, ‘We want to change the subject line to change our open rate.’ And they might think that one open rate is better than another. But did that email have a better click-through rate? Maybe, maybe not. Maybe the recipient identified with the tagline but not with the content of the email. Just because you've got a better open rate doesn't mean the email succeeded.”

Another common mistake is making too many assumptions before a test begins. The point of A/B testing is to let customers tell you what they like, yet Soriano said many marketers still start with their “gut” preferences.

“Sometimes marketers will wonder, ‘Why did this test win?’” he said. “Because more customers picked it, that's why. It's all about taking your gut out of it, and putting a lot of stuff out there and seeing what customers like.”

This is exactly the approach Madden took, with some surprising results. As the company overhauled its website, it developed a hypothesis for each new page and section. Then it tried out multiple versions of the same product page and ran simultaneous tests to see which one customers preferred.

“It didn't matter what I thought,” Madden said. “It was about what the customers thought. In many cases, we'd do a prettier design versus a slimmed-down version that had less noise. The customer didn't care about the prettier page. They wanted a simpler, cleaner page so they could put something in their cart and be done with it.”
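Mechanically, a simultaneous page test like Madden's comes down to two things: splitting traffic consistently between variants and tallying a conversion metric (such as add-to-cart) for each. The sketch below shows that split-and-tally logic in Python; it is purely illustrative, not Brady's or any vendor's actual implementation, and names such as assign_variant and record_visit are hypothetical.

```python
import hashlib

# Variants under test, e.g., a "prettier" design vs. a slimmed-down page.
VARIANTS = ["prettier_design", "slim_design"]

def assign_variant(visitor_id: str) -> str:
    """Deterministically assign a visitor to one variant.

    Hashing the visitor ID keeps the assignment stable across page
    views, so a returning visitor always sees the same version.
    """
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16)
    return VARIANTS[bucket % len(VARIANTS)]

# Tally visitors and conversions (e.g., "put something in their cart")
# separately for each variant.
results = {v: {"visitors": 0, "conversions": 0} for v in VARIANTS}

def record_visit(visitor_id: str, converted: bool) -> None:
    variant = assign_variant(visitor_id)
    results[variant]["visitors"] += 1
    if converted:
        results[variant]["conversions"] += 1

# Usage: log each visit as it happens, then compare conversion rates.
record_visit("visitor-001", converted=True)
record_visit("visitor-002", converted=False)
print(results)
```

Hashing the visitor ID, rather than assigning variants at random on every page view, is one common way to keep the two audiences stable for the duration of the test.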
Testing doesn't have to be run by experts or outsourced. There are plenty of testing platforms from marketing automation companies, and Google Inc. offers a “very simple and effective tool,” Pikulik said. But testing can get very complex very fast, depending on how many variables a company is running and how deeply embedded testing is in the marketing department.

Soriano recommended starting with Web elements tied to customer purchase decisions, near the bottom of the sales funnel; then, as those improve, optimizing elements higher up in the funnel. Ideally, tested versions should run simultaneously with audiences of similar size and be allowed to run long enough to achieve statistical significance.

“Testing never ends,” Soriano said. “As soon as you get a new version that works better than the old one, there are new things you can test.”
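In practice, “long enough to achieve statistical significance” usually means applying a standard proportions test to the two conversion rates. A minimal sketch of a two-proportion z-test, assuming illustrative visitor and conversion counts that are not from the article:

```python
from math import sqrt
from statistics import NormalDist

def ab_significance(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for a two-proportion z-test.

    Tests whether the conversion rates of variants A and B differ by
    more than chance. A p-value below 0.05 is the conventional
    threshold for declaring a winner.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # rate if A and B were identical
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Example: 120 of 2,400 visitors converted on A; 150 of 2,350 on B.
p = ab_significance(120, 2400, 150, 2350)
print(f"p-value: {p:.4f}")  # ~0.04, below 0.05, so unlikely to be chance
```

Running both versions simultaneously with similar-size audiences, as Soriano advises, is what makes a test like this valid: it keeps seasonality and traffic mix from favoring one variant.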