Few marketing channels are more “direct” than email marketing. Calls to action, offers, real-time messaging and personalization options mean every email campaign has the potential to deliver strong direct-response performance. But what happens when it doesn't?
Multinational telecommunications company Level 3 Communications faced that question following a poorly received email campaign conducted in the second half of 2012. To answer that question and to learn from its missteps, the company marshaled its marketing stakeholders, did a lot of soul-searching and performed a lengthy post-mortem of what went wrong and how to improve things.
“If you look at the campaign design, coordinating emails, assets tied to the buyers' journey and timelines, you'd say it all was beautiful,” said Corey Livingston, senior director-global demand center at Level 3. “But what it really came down to was not understanding our target profiles and individual buyer needs. The campaign was misdirected; it didn't go to the right target. The messaging was misaligned with the buyer's pain points, and it lacked [telemarketing] follow-up to respondents at the moment of interest, when they engaged with the campaign.”
Level 3's campaign cost the company “thousands of dollars” to plan and execute, Livingston said, with creative and messaging developed by agency Mason Zimbler, Austin, Texas. Messages went to senior-level decision-makers at both large and midsize enterprises, and a total of 77,972 emails were delivered. The content looked strong; the email layouts were impressive; and the company projected that the program would generate 200 sales engagements and a strong return on marketing investment.
But over the life of the five-month campaign, the open rate was just 7.1%. Only seven forms were submitted, and a meager three inquiries were successfully reached. No inquiries qualified as a lead, and none went to sales.
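To put those results in perspective, the reported figures can be run through a simple funnel calculation. This is a minimal illustrative sketch, not anything Level 3 used; it only assumes the numbers stated in the article (77,972 delivered, a 7.1% open rate, seven forms, three inquiries reached, zero qualified leads).

```python
# Funnel math using the figures reported for the campaign.
delivered = 77_972
open_rate = 0.071
forms_submitted = 7
inquiries_reached = 3
leads_qualified = 0

opens = round(delivered * open_rate)            # estimated opens
form_rate = forms_submitted / delivered         # forms per delivered email
reach_rate = inquiries_reached / forms_submitted

print(f"Estimated opens: {opens:,}")
print(f"Form-submission rate: {form_rate:.4%}")
print(f"Inquiries reached per form: {reach_rate:.0%}")
print(f"Qualified leads: {leads_qualified}")
```

Against a projection of 200 sales engagements, roughly 5,500 opens yielding seven forms and no qualified leads makes the scale of the shortfall plain.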
“Afterward what we did was unique, to step forward and say we failed,” Livingston said. “It took some people by surprise, and put some who contributed to the campaign on the defensive; but the results were black and white. It was critically important to us to unpack each of our failures in order to set up our next programs successfully and not repeat the same expensive mistakes again and again.”
Among the problems that Livingston and her team identified was pushing a topic—in this case the need for “unified communications” services—that wasn't necessarily aligned with the interests of the company's target audience.
“Because we weren't focused on a 'burning-platform' topic important to our key buyers, we weren't able to get attention and attract viable prospects, which manifested in dismal open and engagement rates,” she said.
Segmentation and content also went awry. The messaging was oriented toward a non-technology audience—primarily C-suite and VP-level executives—and subject lines were crafted accordingly. But when recipients clicked through to related content they found it was heavily tech-oriented.
“To these executives, we could only assume it was 'Star Wars'-speak,” Livingston said.
The company organized what Livingston called a “SWAT team” of about a half-dozen staffers who analyzed the campaign for lessons. In addition to the above, other problem areas included:
- While the intended target audience was non-tech decision-makers, 80% of the company's existing database consisted of names associated with highly technical roles.
- Subject lines lacked A/B testing, and messaging needed to be more compelling.
- Tele-qualification needed to respond more quickly and adequately to all responders.
- Clicks led to content and follow-on actions that required 10 steps to convert, an overly complicated process.
“The post-mortem wasn't easy for any of us, but it did take us to a new level of growth and maturity in our marketing,” Livingston said. “We're now much more disciplined about our testing and experimentation strategy to help isolate and pinpoint gaps in our targeting, messaging and offers before we broadly deploy our campaigns.”