Making Online Ads Suck Less in 8 Easy Steps

It's Time to Get More Scientific About Display Ad Creative


By John Young
Banner ads suck. Or not. It depends on what you expect and how you go about delivering. No matter whether your objective is awareness, engagement or click-through, you can improve performance over time. This strategy takes rigor, in three forms: measurement and analytics, multidisciplinary collaboration (strategy, creative, analytics) and a long-term view. Not to mention a client willing to embrace all three.

The idea sounds simple and it may seem obvious -- build a library of banner ads -- but none of the many Fortune 500 clients I've worked with has done it well. Most marketers are reactive and opportunistic, with a short-term vision: if it worked, good; if it didn't, kill it and give me something new. But no one spends the time to really learn much, or to record what they did learn.

To dramatically enhance performance long term, develop a plan following these eight steps:

1. The Create Step
Create a raft of banners, ideally six different ideas, executed in four sizes each. Be sure to use best practices when creating banners (which have been defined through lots of testing and research -- yet there's still plenty of room for creativity). As a creative turk, I'll always declare that the quality of the creative has a huge impact on performance. And it does. But even insanely great creative can benefit from this process.

2. The Measurement Step
Measure the success of the banners. And this starts with strategically smart objectives to make sure you're measuring the right things with appropriate expectations. Impressions count online, especially for awareness. So does engagement, especially to drive home understanding of key benefits (and more). Click-through rates still matter too, especially for e-commerce (even though click-through rates are awful, the cost-per-sale can still be low).
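To see how that math can work, take a purely hypothetical buy: a million impressions at a $3 CPM cost $3,000. A 0.1% click-through rate delivers 1,000 visits, and if 2% of those visitors buy, that's 20 sales -- a cost-per-sale of $150. Awful click-through, workable economics. Run the same arithmetic with your own rates before you judge a banner a failure.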

3. The Selection and Speculation Step
Evaluate each banner and try to assess why some worked and others didn't -- by size, idea, call-to-action, execution and so on. Make some intelligent guesses.

  • Why are the top performers winning?
  • Why are the weak ones failing?
  • Can you tweak mid-level performers?

Set aside those top performers. And plan tweaks for the mid-level group. Be deliberate about the optimization so you can track performance lifts or dips.

4. Codify Learnings and Assumptions
Write this shit down! You wouldn't believe what people don't write down. Different team members will have knowledge, ideas and insights, but it doesn't help much if it's not written down and shared.

  • What do we know?
  • What do we suspect?
  • What patterns are emerging?
  • What should we avoid?
  • What are the surprises?

For one recent banner ad campaign, we saw static banners (ones used where media placements don't take Flash or where users turned off Flash) outperform our high-production animated banners. It shattered our assumptions.
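Purely as an illustration (there's no standard format -- invent one and stick to it), a single library entry for that kind of finding might capture:

  • The banner: concept, size, format (static or animated), placement and flight dates
  • The result: the metric that mattered for its objective -- impressions, engagement or click-through
  • What we know: the static version beat the animated one in this placement
  • What we suspect: a simpler message reads faster on a cluttered page
  • The next test: a stripped-down animated version against the static winner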

Build on industry best practices to develop a set of client-specific and/or audience-specific best practices. Refine and record what is working in this situation.

But continue to question, test and question again. And be willing to take risks.

This is a journey with no final destination.

5. Create and Measure Again
Create a second raft of banners, another six ideas in multiple sizes, maybe killing a size that underperformed.

Consciously leverage what you think you learned as you build the second set.

Obviously you track and measure the banners again.

Again, evaluate the performance, trying to glean learnings and make assumptions.

6. Select Top Performers Again
Where are you now? It's beginning to get scientific. Remember Gregor Mendel and his genetic research on peas from high school biology? Through similar observation and selection, you now have the beginnings of a library.

Again, write down what you learned and what you can only assume (and record which is which). This is how you add strength to future creative rounds, including rounds of optimization based on what you know and what you think you know. That means testing another call-to-action, different colors or images, new offers, different copy.
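For example, an optimization round on one mid-level banner might pit the original (as the control) against a version with a shorter call-to-action, a version with a new offer and a version with different imagery -- one change per version, so you can tell what actually moved the needle. (Those particular variations are just an illustration; pick the ones your own learnings point to.)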

7. Leveraging the Library
Now you can begin to take advantage of what you've learned, what you've built.

  • Rerun top-performing banners from rounds 1 and 2.
  • Run the tweaked or optimized mid-level banners from rounds 1 and 2.
  • Inject new banners into the mix.

And, of course, measure every drop of blood you can squeeze out.

8. Leveraging the Library Long Term
One of the biggest challenges is maintaining the rigor and momentum. It matters. Make it your routine to perform bi-weekly or monthly measurement and analytics reviews between agency and client.

Stay on it, rerun top performers (but don't wear them out), test tweaked banners, add new creative into the mix. It's scientific, but not a science. Take risks. Seek failure. Build on success.

Future creative rounds can focus on seasons, events and special offers -- even audiences and long-tail offers.

Ongoing measurement and analytics, followed by optimization and codified learning, combine to keep driving performance.

Build the library beyond categories to include banner types: expandable, video, static, etc. And have enough that you don't wear out any one high-performing piece of creative.

If you maintain the rigor and vision and take some creative risks, you can develop and grow a powerful library of banner ads that can outperform industry standards and drive your brands.

John Young is executive creative director at Bridge Worldwide.