According to a March report by interactive agency eROI, 18% of marketers don’t track their e-mail marketing campaigns.
The report, “Trends and Use of E-mail Analytics,” also found that 12% of companies don’t track conversions. And even when people say they test, they aren’t always testing consistently or testing the right things, the report found.
This is unfortunate, since testing is still one of the best tools marketers can use to improve their e-mail marketing results, said Loren McDonald, VP-industry relations at Silverpop.
“Testing should be an ongoing process, not a one-time event,” he said. “Marketers should try to test at least one thing with every message that goes out.”
McDonald and Eric Groves, senior VP-global market development at Constant Contact, provide these tips to help you get on the testing bandwagon.
• Focus on testing the things that most impact your business goals. It’s not enough to say you’re going to do A/B testing. In order for that testing to make a difference, you actually have to test things that matter. “For a lot of marketers, they think they are doing testing—and they are—but they are testing things like whether the graphic on their e-mail is on the right or the left. What they should be focusing on are things that have an impact on the reason they are actually e-mailing,” McDonald said. For example, if you’re trying to encourage people to download a white paper, you’re going to be testing elements that contribute to that happening: the wording of your call to action, the placement of the “download today” link or the landing page that facilitates the download.
• Go random. Marketers who do their own A/B segmentation are putting themselves at a significant disadvantage. Unless you allow your e-mail marketing program to randomize your list sample, you may be introducing bias into your testing. “The purer your data is, the more actionable your data will be in the end,” Groves said.
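The randomization Groves describes can be sketched in a few lines. This is a minimal illustration, not any vendor's actual implementation; the function name and the example list are hypothetical.

```python
import random

def random_ab_split(subscribers, seed=None):
    """Randomly assign each subscriber to an A or B cell.

    Shuffling before splitting avoids the bias that comes from
    splitting a list in its stored order (e.g., by sign-up date
    or alphabetically), which can cluster similar subscribers
    into the same cell.
    """
    rng = random.Random(seed)       # seed only for reproducible tests
    shuffled = subscribers[:]       # copy so the original list is untouched
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

cell_a, cell_b = random_ab_split(["user%d@example.com" % i for i in range(1000)])
```

Each cell then receives one variant of the message, and results are compared across the two cells.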
• Go local—and global. Companies naturally split their lists to do A/B testing. While this is a good start, don’t stop there. Test using a broad cross section of your list, but also test specific elements within individual segments. “Certain changes might affect various segments differently,” McDonald said. “Understand that sending an e-mail at 3 p.m. is going to affect your overall list differently than it will your West Coast segment. Some elements are going to affect your more frequent customers more than those who have never purchased from you.”
• The best test results don’t always translate into a winner. Consider this: If you are testing two special offers—a 10%-off coupon and a 25%-off coupon—the special offer with the higher discount might appear to win based on overall number of conversions, but when you do the math, your company might benefit more from that 10% offer. “You might actually generate more revenue with the 10% off offer, even though you sell more with the 25% off offer,” said McDonald. “You actually have to do the math and look beyond the test results.”
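McDonald’s “do the math” point can be made concrete with a quick calculation. The conversion rates, list size and price below are hypothetical numbers chosen purely to illustrate how a higher-converting offer can still produce less revenue.

```python
def offer_revenue(list_size, conversion_rate, full_price, discount):
    """Revenue an offer generates: number of buyers times discounted price."""
    buyers = list_size * conversion_rate
    return buyers * full_price * (1 - discount)

# Hypothetical test: 10,000 recipients per cell, $50 full price.
# Suppose the 25%-off cell converts at 2.8% and the 10%-off cell at 2.5%.
rev_25 = offer_revenue(10_000, 0.028, 50.00, 0.25)  # 280 buyers at $37.50 = $10,500
rev_10 = offer_revenue(10_000, 0.025, 50.00, 0.10)  # 250 buyers at $45.00 = $11,250
```

With these numbers the 25%-off coupon “wins” the test on conversions (280 vs. 250), yet the 10%-off coupon brings in $750 more revenue.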