The most common ad testing research waits until the very last moment to do the fastest and cheapest research possible. Paid study subjects are shown an ad that has been slickly produced and is ready to go. The only purpose of this research is to make sure the advertising doesn’t contain any glaringly offensive mistakes that could ruin someone’s career.
This disaster check always comes after the ad is already scheduled to run.
There is no time built in to allow for changes or refinements so the only choices are to pull the plug or to let it run as is. Study subjects often find typos and screwups. Imperfect as the rest of us, advertising departments have many times let careless mistakes get through to the finished product. These are things that should have been caught in-house but weren’t. The disaster check is no more than a tentative dip of the toe into the bathwater to see if it’s too hot. It is always done too late to do anything except avoid a potential catastrophe. Sharp-eyed readers will note that this type of “ad testing” doesn’t even test ads.
An oft-told legend that could have been avoided with a disaster check.
If researchers had shown Chevrolet Nova ads to Spanish-speaking customers, they would have learned that nova, an astronomical term for a bright star, sounds like no va, Spanish for "it doesn't go." What level of success would you predict for a new car called Clunker, Beater, or Rustbucket? Some say this story is apocryphal, but you get the point.
Disaster check research has a hidden downside with unintended consequences.
It telegraphs study sponsors’ intentions to their contract research providers. When businesses tell vendors they want one or two hurried and cheap ad testing focus groups, the meta message is “Just throw together a study because we don’t care about the composition of the sample, the integrity of the data, or about learning anything useful.”
As an ongoing strategy, hurrying to beat a deadline is a bad one.
A preening colleague bragged he did his best work at the 11th hour. Our resident humorist said “Of course that’s when you do your best work – it’s when you do your only work.” Last minute research like this is a waste of time and money that displays organizational laziness and arrogance.
We were asked to do some copy testing research with tech-savvy tweeners. The client's goal was to learn what would appeal to kids. We played audio recordings for paid study subjects and asked them to talk about what they had just listened to. They universally mocked the ad agency's efforts, saying, "It is so lame when old people try to sound like us."
The old people they were referring to were the twenty-something ad agency experts who believed they were in touch with kids today. Untrained in basic behavioral science, agency experts were unaware that old is a matter of perspective. To a child, 18 is old. To a teenager, 40 is ancient. To a 60-year-old, 80 is elderly. The research team – men and women in their 30s, 40s, and 50s – enjoyed the irony, because the twenty-something experts had been given a dose of their own "too old" medicine and didn't like it. They reacted as most people do when they hear the unvarnished truth. They blamed the research, saying the kids didn't know what they were talking about. Our agency experts believed they had the gift of being able to connect with youngsters because they understood the insider language used by tweens. Creatives with no training in sociolinguistics, they were unaware that the entire point of insider jargon is to speak in code that outsiders don't understand. By the time old people pick up the current slang ("yo, dog"), the kids have already moved on to the next shiny new thing.
When I was a rookie field researcher, advertising agencies hired independent focus group moderators.
Over time, agencies found moderating their own focus groups allowed them to exercise greater control – not just over the sessions, but over the entire study. Free of annoying scientific standards or pesky notions of disciplined objectivity, they recruited who they wanted, asked them the questions they wanted, and interpreted the findings to support decisions they had made beforehand. Ad agency moderators are easy to spot. They are unnaturally attractive, elaborately coiffed, grinning salespeople with no interest in research at all. Anthropologist and sociologist colleagues call them game show hosts. These ad agency moderators are a perfect fit for a very special type of advertising testing that experts call the beauty contest.
This slickly produced version of advertising testing looks like real research but is actually nothing more than a grubby little con game run by actors in tiny video dramas.
The ad agency moderator’s job is to influence and persuade people. This should not come as a shock to anyone who realizes that advertising people are paid to sell things, not to be interested in understanding what makes people tick.
The Beauty Contest is an elaborate ruse.
When advertising agencies are directed to produce several ads for comparative testing, directors, editors, copywriters, designers, artists, web developers, account managers, and salespersons go to work. Collectively, they call themselves creatives, to distinguish them from the rest of us who lack their talent, imagination, and artistic gifts. As a rule, creatives detest the intrusion of science into their world. They see non-advertising people as lowbrow Philistines who have no understanding or appreciation of the arts. A widely held belief is that creative directors in ad departments everywhere would ditch their dreary jobs peddling whatnots for oppressor corporations, except the money is so good and they can't get jobs doing what they really want to do, which is become famous film directors. It's not true that they are all frustrated Spielberg, Scorsese, and Tarantino wannabes, but it's close. Since 1986, I've met only two.
Imagine you’re a paid study subject. This is how it looks:
Which of these do you like best?
It is obvious this contest has been deliberately rigged. The outcome is determined before the voting even begins, much as it would be in a footrace between Usain Bolt and Fat Joe. Scarlett Johansson "wins" because she is glamorous and the others are not. The other contestants "lose" by a wide margin. If Regan MacNeil, the Wicked Witch of the West, and Gladys Ormphby get any votes at all, they're from Frankenstein's Monster, Freddy Krueger, and Arte Johnson's Dirty Old Man.
We see hundreds of thousands of ads.
Only a handful are really memorable, so we all know creatives rarely produce Scarlett Johanssons. Mostly they produce Amy Farrah Fowlers. When they do, the Beauty Contest looks like this:
Here are four choices. Which do you like best?
Of course Amy wins. But the big difference is that Amy wins only because she is the least unappealing, which is something quite different, isn't it? The competition consists of obviously awful choices, which is no competition at all. Which pretty pictures, funny scenes, or dazzling special effects study subjects say they like best is of little import. What counts is how effectively the test ads communicate the intended message, which is why intelligent ad testing doesn't begin with ads.
It begins with exploratory discussions.
Paid study subjects are asked to describe defined products and services in their own words. The researchers’ task is to identify deep-down themes. Good moderators use a variety of techniques and approaches. The best ask the fewest questions possible, often circling patiently around the subject rather than rushing at it recklessly. They are not looking for favorites, but instead are exploring how people react to the test ads and the feelings they evoke. Game show moderators do their job by controlling discussions and selling study subjects the least unappealing choice.
When properly applied, the iterative development process identifies the thoughts and feelings that are most important to consumers for the goods and services our company sells. The ad team uses them as building blocks for preparing several different creative versions of the same factual principles. The creatives are free to express particular themes in any way they wish. For example, if we find the themes of safe, secure, and trusted are important to our target customers, creatives are asked to produce four different ways of conveying safe, secure, and trusted.
Learn, refine, learn, refine.
The iterative development process takes things one step at a time, applies what is learned, and tests again. When researchers come to a dead end, they back out and try another route. Each cycle brings them closer to the optimum solution. Ads are tested not to see which one people like the most, but to determine the extent to which each ad communicates the issues important to customers. Refinements are made, and another round of testing sharpens our focus. When creatives think ad testing of any kind wrongly interferes with their artistic gifts, they put their only real effort into creating the one ad they like best. The rest they grudgingly dash off as cannon fodder.
Grabbing someone’s attention is easy. Getting your point across is lots harder.
Left to their own devices, creatives with no access to objective assessments by outsiders create ads that are attractive and win awards, which is an agency goal, not a study sponsor goal. Detractors say the iterative development process takes too long and costs too much. This is true for up-front costs, but false over the longer term. When the ads we run are ineffective, we've spent a lot of time and money on production and placement that doesn't generate the results that well-developed, thoroughly tested ads do.
What kind of advertising testing research does your company do?
How does your company calculate the ROI of its advertising efforts?