It was early evening at the garden party celebrating the couple’s magnificent new backyard deck. Built into the gentle slope of the land were walkways, conversation areas, and bridges spanning koi ponds, all made from materials of the highest quality. Gentle breezes sighed in the willows, a jazz trio was very cool on the veranda, lanterns swayed in the trees, everywhere the scent of night-blooming jasmine. White-gloved waiters glided about serving champagne and canapés on gleaming silver trays.
A guest, marveling at the beauty of it all, commented that it must have involved an enormous amount of planning and craftsmanship. The hostess replied with a dismissive wave of her bejeweled hand, “No, it’s quite easy, actually. You just get some boards and a husband.”
It’s easy to write a survey, too.
They do it every day on Family Feud. Writing a good survey is a different matter. The only way we can collect accurate and objective data is to rigorously adhere to ALL the principles universities teach in their graduate programs in quantitative behavioral science, not just one or two. When it comes to fundamental truths, you can’t adhere to ‘em if you’re not aware of ‘em.
When a world-renowned medical researcher came to our small shop for a survey, I was flabbergasted.
After all, he was a big shot and we were very small potatoes. He told me the data he collected would provide the raw materials for half a dozen articles he would publish in prestigious journals. He reasoned that as he wrote only one questionnaire every five or six years and we wrote hundreds, he would be a fool to write his own. A rare client, indeed.
The general decline of research standards and practices.
Here are five things contributing to the declining quality of the research we see, particularly survey research:
- Writing your own surveys has never been more popular.
- Most clients demand faster and cheaper surveys, so more shortcuts are taken and more corners are cut.
- Research departments are buried deeply in most organizational hierarchies. Projects go up ladders and down chutes, processed by gatekeepers who don’t know how to tell good surveys from bad and don’t much care, either.
- Research has been made subordinate to marketing. This is fundamentally flawed because marketing wants to influence people and research wants to understand them. Conflict is inevitable, and guess what happens?
- Marketing-led research cares little about rigor. When you have the Count Chocula account, you proudly cite bought-and-paid-for research that shows it is “part of a balanced diet.”
DIY survey sites contain pre-emptive disclaimers warning users that online surveys have to be clear, concise, and unbiased.
Good advice, to be sure, but we can’t help but notice they are telling people what to do without telling them how to do it, or why it’s crucial. How can anyone who lacks the necessary training possibly know everything that is needed to produce surveys that are clear, concise, and unbiased?
There is too much to learn in a workshop or in one university course.
Think about all those years of Spanish classes in high school and college. Did they make you fluent? How could something as technical as the statistical underpinnings of survey research possibly be any less difficult to learn? Mi tía tiene un lápiz azul. (My aunt has a blue pencil.)
Ratings, rankings, categories, and scales are loaded with misinformation traps undetectable by most.
For example, when would you use odd-numbered or even-numbered scales? Did you know that people cannot reliably distinguish between more than five levels of agreement or importance? Did you know that many sophisticated statistical procedures rest on strict assumptions that are typically ignored, making those analyses worthless? And on and on and on.
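One of those ignored assumptions can be made concrete. Rating-scale responses are ordinal: the labels have an order, but the numeric codes attached to them have no guaranteed spacing. The short Python sketch below (with invented numbers, not data from any survey mentioned here) shows why averaging Likert-style codes is risky: a monotone relabeling of the very same answers can flip which group "scores higher" on the mean, while the median's verdict, which depends only on order, survives.

```python
import statistics

# Hypothetical 5-point ratings from two groups (invented for illustration).
group_a = [1, 5, 5]   # one detractor, two enthusiasts
group_b = [4, 4, 4]   # uniformly "agree"

# Treating the codes 1..5 as equally spaced, the mean ranks B above A.
assert statistics.mean(group_a) < statistics.mean(group_b)

# But that spacing is arbitrary. Relabel "strongly agree" from 5 to 10:
# the order of the labels is unchanged, yet the mean now ranks A above B.
recode = {1: 1, 2: 2, 3: 3, 4: 4, 5: 10}
assert statistics.mean([recode[r] for r in group_a]) > \
       statistics.mean([recode[r] for r in group_b])

# The median depends only on order, so its verdict survives the recoding.
assert statistics.median(group_a) > statistics.median(group_b)
assert statistics.median([recode[r] for r in group_a]) > \
       statistics.median([recode[r] for r in group_b])

print("ordinal trap demonstrated")
```

The same trap underlies running means, t-tests, and regressions on scale codes as if they were measurements on a ruler; the untrained survey writer never sees it coming.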
Mark Twain’s observation applies here: “All you need in this life is ignorance and confidence, and then success is sure.”
You can’t tell a book by its cover.
Neither can you tell the quality of the research by what you see in a report as slickly produced as Miranda Priestly’s Runway magazine. Be skeptical of flashy infographics that are there to distract you from the lack of meaningful content.
More surveys measure the wrong things than the right ones.
A career middle manager in a multibillion-dollar division of a highly admired company had been assigned to repeat the same survey her boss had done every year. She told me they tracked several key indicators to determine how well their programs were performing. Each year, her boss included these numbers in his presentation to the president, and they were subsequently featured in the annual report.
It was immediately apparent to me that this survey in no way measured the things she thought it did. And because new, faithful measurements would be entirely different, she would not be able to compare them with any previous numbers.
She had to decide.
If she wanted to measure XYZ, the current survey measuring ABC was worthless and she would need a new one. If she used the same survey as last year, it would be a pointless waste of resources.
She said they would repeat the useless survey.
It would be trouble for her if she changed things now, because her boss had written the survey years ago and was quite proud of it. We declined the project.
This incident is isolated only in that the middle manager was honest enough to admit the situation called for CYA self-preservation over bringing bad news to her boss.
You cannot make an analytical silk purse out of a data sow’s ear.
A potential client brought me a pile of data to analyze. He had written his own survey and collected his own data, then discovered he didn’t know what to do with it.
His questionnaire had violated nearly every principle of good design and the data had been contaminated, so I told him I could not help.
Much like the chain of custody for medical tests and forensic evidence (who handled it, and what did they do with it and to it?), surveys require several interlocking steps, and the failure of any one of them dooms the entire outcome.
It is essential that the integrity of the data be maintained throughout the entire prescribed sequence. When it comes to collecting and handling data, all steps are critical. Start with a bad questionnaire, and the rest of the steps are meaningless.
Let’s say that somehow, against all odds, you have written a good questionnaire and have handled your data properly. You’re still stuck, because most companies have the combined investigative prowess of Inspector Clouseau, Car 54, and the Keystone Kops.
Surveys are made unfriendly for users because they are designed backwards.
Survey response categories and scales are all determined beforehand. The people we are surveying are required to shoehorn their thoughts into our format. This is the same backwards engineering that brought us all those annoying telephone menus and all those damnable sites that send us to endless FAQ pages that answer every question but the one we have.
How many times have you filled out a survey that:
- went into too much detail?
- was too long and tiresome?
- made you rate and rank things that aren’t important to you?
- bored you so badly you answered mindlessly just to get it over with?
- yelled at you to go back and answer those questions you left blank?
I don’t know about you, but when a survey does not allow me to skip a question, I skip the rest of the survey. I wonder what they do with my responses.
Next week, learn how to scrape most customer satisfaction research off your shoe.