How to improve digital customer experience: Test, optimize, repeat
Now that the media buzz over Super Bowl ads is starting to fade, marketers can turn their attention to the least glitzy part of their jobs: testing.
While not a topic that typically wins much chief marketing officer recognition, ignoring testing can lead to lost opportunities, if not a lost job. When framed in the larger context of building a culture of experimentation, testing is a mandatory tactic for the modern marketer.
Carl Tsukahara, CMO of Optimizely, says every aspect of digital marketing and the user experience can and should be tested—not only to optimize ad performance, but also to increase the likelihood that prospective customers will have a fruitful journey. Tsukahara’s bullishness on testing—his company describes itself as “the world's leading experimentation platform”—is not surprising. Still, it’s hard to imagine a CMO who can thrive without some level of testing and experimentation.
What was your mandate when you started at Optimizely?
The company had gotten to a certain point in its growth and was adjusting its strategy to focus on spending more time with enterprise customers. The company had grown as an A/B testing tool, but primarily sold to small groups and practitioners in a bottom-up sense. What we realized is that while it's important to see the market bottom up, and to pay attention to a range of available prospects, pivoting toward bigger enterprise customers requires quite a bit of focus and effort. We had customers like IBM, HP, and other big companies that were saying they wanted to take this experimentation process and implement it. This became a strategic imperative for our organization, and we wanted to focus more on enabling that process adjustment for larger customers.
Where should testing fit into a content marketer’s toolkit?
So many places. A good way to understand is to simply ask questions about your efforts. Let's say you're doing paid search via Google AdWords. I would ask this: Is the messaging in your ads right? Is the language you're using correct? What about the experience that happens when somebody clicks through the link and gets to your landing page? Do you have the right images? Colors? Offer strategy? That's just one example. How do you know you can't get 10 or 20 percent better?
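The questions above come down to comparing variants on live traffic and checking whether an observed lift is real. As an illustration only (an experimentation platform would handle this for you, and all the traffic numbers here are hypothetical), a minimal two-proportion z-test in Python shows the basic arithmetic behind "is variant B's landing page actually better?":

```python
import math

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Compare conversion counts of variants A and B.
    Returns (z statistic, two-sided p-value)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via math.erf)
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical traffic: A = current landing page, B = new headline and imagery
z, p = two_proportion_z_test(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
print(f"z = {z:.2f}, p = {p:.4f}")
```

With these made-up numbers the lift (5.0 percent vs. 6.5 percent conversion) is statistically significant at the conventional 0.05 level; with less traffic, the same lift might not be, which is why sample size matters as much as the creative.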
Can you give another example?
Maybe you have a self-service acquisition model and you get somebody to a paywall. How do you make sure that paywall is right? How many screens in the paywall should you have? Should you have one expandable experience? Should you have three different click-throughs? What's the pricing? Should it be $9.95 for the first month to convert someone from the trial version to a paid user?
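One way to frame those paywall questions concretely is to compare variants on revenue per visitor rather than conversion rate alone, since a higher price can win even with fewer conversions. A minimal sketch, with hypothetical variant names and numbers:

```python
# Hypothetical paywall variants: first-month price, visitors shown, paid conversions
variants = {
    "one_screen_9_95":  {"price": 9.95,  "visitors": 5000, "conversions": 210},
    "three_step_9_95":  {"price": 9.95,  "visitors": 5000, "conversions": 245},
    "one_screen_14_95": {"price": 14.95, "visitors": 5000, "conversions": 160},
}

def revenue_per_visitor(v):
    # First-month revenue per visitor = price x conversion rate
    return v["price"] * v["conversions"] / v["visitors"]

best = max(variants, key=lambda name: revenue_per_visitor(variants[name]))
for name, v in variants.items():
    print(f"{name}: ${revenue_per_visitor(v):.3f} per visitor")
print("winner:", best)
```

In this made-up data, the three-step flow at $9.95 edges out the higher-priced variant on revenue per visitor, even though the $14.95 variant earns more per conversion; only a test would tell you which trade-off your audience actually makes.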
Seems like you could end up testing everything.
There are so many areas where we help the marketer convert. And it may sound like quite a lot, but when there's so much content out there, and so many potential places users can go, you really need to consider every little thing in the experience. You want the person on the other end to feel engaged, and sometimes that's about the subject of the article, sometimes it's about the experience of getting there. Every detail matters.
When it comes to optimizing—is it ever OK to stray from brand to get clicks?
I think brand is extremely important. It just depends on how strong a company thinks its brand is, and what it stands for. We actually tried to think about how you can enforce your brand in the context of a testing program, because I think you have to put some governance and a few guardrails around testing. That's one of the things we really believe in, especially when we're dealing with larger B2B entities. I mentioned IBM, but we have HP and a lot of other big tech companies as customers. With those brands you really have to allow for governance. Part of that governance may be to say, "Look, I'm going to create a brand-safe environment in which you can test. You can put offers out there and other things, but here's how you can create a compliance-oriented framework to test." This comes down to how the organization wants to think about its brand. We want flexibility, but you can't just let testing run wild.
When does testing go off the rails? Why are guidelines important?
Well, this sounds really basic, but you first need to know what you're testing for. Not knowing is a great way to go off the rails. We've seen programs that don't have senior-level sponsorship and don't know what they're shooting for. What are you shooting for? Is it acquisition? Is it share of market? Is it NPS? I think the testing programs gone bad are the ones where you don't really know how they line up with the important objectives of the company. That sounds so basic, but it happens.
Are there other pitfalls to avoid?
Another way the program can go poorly is if you don't have alignment between the people who are doing the testing and the important lines of business, which could be marketing, could be the product team, and in some cases could even be development. Testing is an expertise you can create with the right structure. I would say those are the things I would focus on: alignment with the business objectives, and alignment across departments. Naturally, there are millions of ways things can go wrong, but those two should be basic and yet often get messed up.