After a combined 15 years of campaign testing, including work on ads that have launched some of the biggest entertainment properties of the past decade and redefined some of the nation’s leading providers of media and technology, do we call ourselves experts? Not even close.

Campaign testing remains one of the most elusive aspects of market research. Let’s be honest – if anyone had figured it out, they would be the only research shop in existence, and their clients would have no competitors after putting everyone else out of business.

However, we have learned a thing or two in our years behind the curtain. Here are some of our favorites and how they’ve guided our own approach to campaign testing:

Understanding the Additive Quality of Campaigns
Many times, clients want to find and use the TV commercial that scores best across dozens of rounds of ad testing. This is understandable, but problems arise when the digital team is testing its banner ads in a vacuum, while the outdoor team is testing its billboards without knowing that TV commercials are even in development.

It’s a new era, with innovative technologies and channels for getting the message out. In reality, people experience campaigns that span all of these channels. While we understand the importance of testing individual ads and often do just that, a key component of our testing is a campaign walkthrough. Whether it’s a rich online survey, a qualitative assessment (where we literally walk people through each phase of a campaign as they would experience it in reality), or a geo-targeted mobile survey that pings people as they move past various campaign elements after the initial launch, we want to measure not only impact but also cohesion. An amazing poster can be little more than artwork if it doesn’t connect back to the rest of the campaign. A groundbreaking TV ad can become a hindrance when the outdoor channels spread a completely different message. To avoid this, we use a variety of methodologies to understand the whole.

Norms
Some clients live by them; others never want to hear about them. Our team tries to look at benchmarking a little differently. If we can build a rich enough set of norms, on target with the exact type of content we’re evaluating, we’re happy to use them. But things have changed – a television ad is no longer just a television ad, but an experience in and of itself. And because it’s nothing like the ads from years past, using 5- to 10-year-old data from this platform to inform current norms may be unwise.
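To make that idea concrete, here’s a toy sketch (in Python, with made-up scores and hypothetical column names) of the kind of filtering we have in mind: placing a new ad against a norm set restricted to comparable, recent content rather than against everything ever tested.

```python
import pandas as pd

# Hypothetical norm database: one row per previously tested ad.
norms = pd.DataFrame({
    "score":    [62, 71, 55, 80, 67, 74, 59, 69],
    "platform": ["tv", "tv", "digital", "tv", "tv", "digital", "tv", "tv"],
    "year":     [2012, 2019, 2018, 2020, 2013, 2021, 2020, 2019],
})

new_ad_score = 72  # the ad we're evaluating today

# Only benchmark against the same platform and the last few years of data,
# rather than against every ad ever tested.
comparable = norms[(norms["platform"] == "tv") & (norms["year"] >= 2018)]

percentile = (comparable["score"] < new_ad_score).mean() * 100
print(f"New ad sits at roughly the {percentile:.0f}th percentile of the filtered norm set")
```

The point of the sketch is the filter, not the math: the narrower and more current the comparison set, the more the “norm” actually means something.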

With benchmarking, we look for the best comparative tool to give us the strongest sense of how well our client’s ads are driving consideration – and ultimately action. Sometimes this means norms. Sometimes it means a social media analysis to look at volume, sentiment, and engagement. (There’s a big difference between a post that’s been re-tweeted a million times and one that has created a million sparks of unique, thoughtful discussion). Other times it means comparison alongside ads from competitors to see whether the message is overshadowing the competition, or getting lost in its wake. By thinking about campaigns individually and assessing each client’s particular goal, we use unique benchmarks to get a truer sense of how the campaign will perform in reality.
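As an illustration of the social media angle, here’s a minimal sketch (hypothetical data and column names; the sentiment scores are assumed to come from whatever scoring tool is in use) of rolling posts up into the dimensions mentioned above, with raw amplification and genuine discussion tracked separately.

```python
import pandas as pd

# Hypothetical export from a social listening tool: one row per post.
posts = pd.DataFrame({
    "campaign":          ["ours", "ours", "competitor", "competitor"],
    "sentiment":         [0.6, 0.1, -0.2, 0.4],   # scored elsewhere, -1 to 1
    "retweets":          [1200, 40, 300, 900],     # raw amplification
    "unique_commenters": [85, 10, 12, 40],         # proxy for real discussion
})

benchmark = posts.groupby("campaign").agg(
    volume=("sentiment", "size"),             # how much people are posting
    avg_sentiment=("sentiment", "mean"),       # how they feel about it
    amplification=("retweets", "sum"),         # a million retweets...
    discussion=("unique_commenters", "sum"),   # ...vs. a million real conversations
)

print(benchmark)
```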

Furthermore, while norms make us feel “safe” about what we’re putting out there, many times they keep us from taking risks to create truly innovative campaigns that break through. “Innovation” is sometimes hard to measure in a quantitative survey because consumers don’t always know how to rate something they haven’t seen before, which makes a unique and mixed approach to campaign research all the more important.

Mixing Monadic and Sequential
Whoever came up with monadic testing (showing each respondent a single ad in isolation) is a genius. In most cases, people see advertising for one product at a time, so testing a single piece of advertising on its own makes a lot of sense. However, roadblocks can present themselves.

Let’s say we show an ad to 400 people on a specific weekend and gather crystal-clear feedback that couldn’t provide a better roadmap for moving forward. The following weekend, when it’s snowing and everyone’s in a bad mood, we test a second ad with another (and different) 400 people. Then, a few days before that second test, a competitor releases an ad that looks an awful lot like what we developed. The comparison is far from apples-to-apples.

This can be combated by pairing monadic with sequential studies. By testing single ads in isolation in one survey, and testing a variety of ads against each other in another, we help to minimize the pitfalls of each approach and come up with an answer that has a bit more clout (and a lot more reality) behind it.
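To illustrate the pairing, here’s a simplified sketch (made-up responses) that reads a monadic cell and a sequential cell side by side, so a strong isolated score can be checked against how the same ad fares when respondents see the competitive set.

```python
import pandas as pd

# Monadic cell: each respondent rates ONE ad in isolation (0-10 intent score).
monadic = pd.DataFrame({
    "ad":     ["A", "A", "B", "B", "C", "C"],
    "rating": [8, 7, 6, 7, 9, 5],
})

# Sequential cell: each respondent sees ALL the ads and picks a favorite.
sequential = pd.DataFrame({
    "respondent": [1, 2, 3, 4, 5],
    "favorite":   ["A", "C", "A", "B", "A"],
})

summary = pd.DataFrame({
    "monadic_mean":     monadic.groupby("ad")["rating"].mean(),
    "sequential_share": sequential["favorite"].value_counts(normalize=True),
}).fillna(0)

print(summary.sort_values("monadic_mean", ascending=False))
```

An ad that tops the monadic column but loses its share once the competitive set is in view is exactly the kind of finding neither design would surface on its own.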

In truth, there is no single answer when it comes to campaign testing. With clients rolling out marketing efforts that include TV ads, YouTube mini-movies, digital banners, mobile games, static posters, lenticular and animated posters, and much more, we can’t rely on a single methodology or technology to give us all of the answers. Campaign testing works best when there isn’t a set template. Mixing quant, qual, social media, and desktop research is key to a successful approach.

We want to encourage an open dialogue with others about how they are tackling campaigns because, let’s face it, the research community is small. If we didn’t have each other to brainstorm with, we’d likely spend our days staring at the wall!