When people do the same sort of thing away from work as they do at work, they are said to have taken a busman’s holiday. In the London of the 1800s, horse-drawn “carriages for everyone” were called omnibuses, and their drivers and conductors were called busmen. A busman away on holiday would ride the local omnibuses to see how things were done elsewhere, just as a fireman away from home might visit firehouses or a librarian might visit libraries. Many of us today enjoy the same dual experience of combining a customer’s perspective with an insider’s knowledge. A long-time survey writer, I recently took a busman’s holiday when I filled out the U.S. Census Bureau’s Household Pulse Survey. Because it is published in the public domain, I can freely use it as an example of how I advise people to determine the value of research for themselves.
All of the survey particulars are published online.
They are accessible, but not wholly transparent. Like translucent glass, they allow some light to pass through but prevent objects from being clearly visible. The writing is passive, dense, and jargon-laden, and each of those qualities serves a particular need.
- Passive voice is used to avoid responsibility. Compare “Mistakes were made” with “I made a mistake.”
- Dense writing is thick and difficult to absorb. It makes us work to figure out what is being said. It is often used deliberately to bore us enough to quit reading. Think about the online agreements we sign without reading because they’re too long and too boring.
- Jargon is insider language that is difficult for others to understand. It is often used to intimidate. As a student, I learned there are teachers who hide behind jargon and technical terms to assert their intellectual superiority. The teachers I liked best and learned the most from explained complicated subject matter simply, and I adopted that approach when I taught at Indiana University, the University of Miami, and the University of the West Indies.
Laden with footnotes, asterisks, and fine print, the Household Pulse Survey (HPS) report reads like many of the research reports used by businesses – dense, dreary, semi-explanations that deliberately dull the senses.
Household Pulse Survey objectives.
The U.S. Census Bureau says they designed their survey to help understand the social and economic impacts of COVID-19 on American households. They say “the ability to understand how individuals are experiencing this period is critical to governmental and non-governmental response in light of business curtailment and closures, stay-at-home orders, school closures, changes in the availability of consumer goods and consumer patterns, and other abrupt and significant changes to American life.”
As the designated federal statistical agency conducting this study, the Census Bureau partnered with half a dozen government bureaus, services, and centers (USDA, HUD, and others) to develop a 20-minute online survey. Its purpose was to measure COVID-19’s impact on employment status, consumer spending, stimulus payments, food security, housing, education disruptions, and physical and mental wellness. Collaborations are always difficult to manage and government agencies are known for their lack of efficiency (think of the DMV), so the HPS is off to a stumbling start.
The HPS section on objectives adds another goal in the very last paragraph.
They are seeking to develop a new rapid-response “paradigm of possibility” built on a new delivery method, a new questionnaire, and a new sampling approach. Readers will note that the goal of providing information for policymakers to use in making critical decisions is not exactly compatible with experimenting with new methods, new samples, and a new questionnaire. The Census Bureau’s website says their own experimental data may not meet all of their own quality standards. This escape clause lets them explain away any and all issues later by saying that, in their haste to promote their experimental study, they skipped quality-control steps and ignored the voices of many. They have given themselves blanket permission to fail.
The Centers for Disease Control and Prevention (CDC) was involved with the section of the survey that sought to “rapidly monitor recent changes in mental health.” We will look at the section designed to meet this additional objective more closely in a minute.
Household Pulse Survey methods.
The HPS sample is drawn from phone numbers and e-mail addresses taken from the Census Bureau’s Master Address File. The report says “sampled households will be contacted by e-mail and text if both are available, by e-mail if no cellphone number is available, and by text if no e-mail is available.” Not everyone has a smartphone (Pew says 20% of adults do not) and not everyone is online (Pew says 10% of adults are not), so those people are largely left out of the statistics and analyses. Also absent are those not in the U.S. Census database (the 37% who don’t fill out the Census survey).
After discussing how they will handle addresses and numbers, the HPS authors say they expect much lower than traditional response rates, but they do not say why. There is no discussion of the more than a dozen factors known to affect survey response rates. Here are a few of the big ones:
- The quality of the sample. The HPS uses the e-mail addresses (107 million) and phone numbers (87 million) that the Bureau is able to connect to Census data. All those who are not in the Census database, or who are in it but lack both an e-mail address and a phone number, are not represented in the results of the HPS survey.
- Questionnaire design and layout. User-friendly surveys use simple language and clear instructions and are easy to complete.
- Following up. Data quality always improves when studies follow up with second and third waves. It is faster and cheaper not to bother recontacting those who did not respond.
- Guaranteeing confidentiality. If the HPS has my name, phone number, and e-mail address, how confidential can that be?
The HPS response rate for the e-mail and text surveys was less than 3%.
In comparison, the response rate for the ongoing 2020 U.S. Census is holding steady at 63%. Even though everyone who receives a census form is told the law requires them to fill it out, 37% do not, and surely there are differences between people who fill out surveys and people who don’t. Statisticians will disagree on the details, but most agree the higher the response rate, the better. The HPS admits their 3% is “much lower than traditional response rates.” What they do not say is that they willingly accepted low response rates because their “paradigm of possibility” is built on a timeliness of response paradigm, an implementation efficiency paradigm, and a resource paradigm. If you have a decoder ring, you know this means it’s quicker, easier, and cheaper.
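For readers who like to see the arithmetic, here is a minimal sketch in Python of the response-rate comparison. The invitation and completion counts are hypothetical round numbers chosen only to reproduce the roughly 3% and 63% rates quoted above; they are not actual HPS or Census figures.

```python
# Back-of-envelope response-rate arithmetic. The counts below are hypothetical;
# only the ~3% (HPS) and 63% (2020 Census) rates come from the article.

def response_rate(completes: int, invited: int) -> float:
    """Unweighted response rate: completed surveys divided by invitations sent."""
    return completes / invited

invited = 1_000_000                                              # hypothetical
hps_like = response_rate(completes=30_000, invited=invited)      # roughly 3%
census_like = response_rate(completes=630_000, invited=invited)  # roughly 63%

print(f"HPS-style rate:    {hps_like:.0%}")
print(f"Census-style rate: {census_like:.0%}")

# At a 3% rate, 97% of invited households are silent. Any systematic difference
# between the few who answer and the many who don't (nonresponse bias) flows
# straight into the published estimates.
```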
The six agencies that planned this study willingly chose to trade accuracy for an experimental study that is faster, cheaper, and easier. They say that if they had followed the typical path of a federal study, it would have taken months or even years to do what they did in a few weeks.
Content areas.
These were determined by the needs of the six agencies involved and capped by the 20-minute limit the planners set as one of their goals.
Let’s look at the invitation that arrived via e-mail or text.
The first too-long paragraph does nothing more than establish the survey taker’s preferred language. Oddly, every prospective survey taker has to slog through a windy explanation of how to change that language selection later, something surely applicable to only a handful. The rest is heavy on boilerplate and disclaimers. The language and length of this introduction surely factored into the decisions of 97% of study targets to ignore the survey. Why didn’t the HPS planners take the opportunity to use simple language to entreat prospective survey takers to help out? I also wonder whether their paradigm of possibility considered how readable the survey is on small handheld screens.
The first eight survey questions are demographics; the CDC’s mental health section comes next.
It contains two questions to measure anxiety and two to measure depression:
“Over the last 7 days, how often have you been bothered by the following problems… Feeling nervous, anxious, or on edge? Would you say not at all, several days, more than half the days, or nearly every day? Select only one answer.”
“Over the last 7 days, how often have you been bothered by the following problems… Not being able to stop or control worrying? Would you say not at all, several days, more than half the days, or nearly every day? Select only one answer.”
“Over the last 7 days, how often have you been bothered by the following problems… Having little interest or pleasure in doing things? Would you say not at all, several days, more than half the days, or nearly every day? Select only one answer.”
“Over the last 7 days, how often have you been bothered by the following problems… Feeling down, depressed, or hopeless? Would you say not at all, several days, more than half the days, or nearly every day? Select only one answer.”
Take another look at that last question and notice that it is actually three different questions pretending to be one. Feeling down, feeling depressed, and feeling hopeless are not the same thing, yet survey takers are forced to give a single answer to all three. When multiple questions are lumped together like this, we can never know which survey takers were reporting that things were hopeless (the worst case), which were saying they felt depressed (not as bad as hopeless), and which were just feeling down (a common occurrence, and likely to happen a time or two in any 7-day period).
How is it that after “thorough review” by nine experts and six agencies no one noticed a key question is fatally flawed?
Avoiding double-barreled questions is one of the first things apprentice questionnaire designers are taught as undergraduates. It seems obvious that triple-barreled questions are even worse. How did something so basic get by the Household Pulse Survey’s six-agency collaboration? How did it get by the nine independent experts who reviewed the questionnaire? How did it get by the consensus meeting to discuss comments and come up with recommendations?
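To make the point concrete, here is a small sketch of how a questionnaire designer might split the triple-barreled item into three single-construct items that share the same stem and response scale. The wording of the split items is mine, offered only as an illustration, not an official revision of the survey.

```python
# A sketch of the apprentice designer's fix: split the triple-barreled item
# into three single-construct items sharing the same stem and response scale.
# The split wording is illustrative only.

RESPONSE_SCALE = ["not at all", "several days",
                  "more than half the days", "nearly every day"]

STEM = "Over the last 7 days, how often have you been bothered by..."

# As asked: three constructs, one forced answer.
triple_barreled = {"stem": STEM,
                   "item": "Feeling down, depressed, or hopeless?",
                   "scale": RESPONSE_SCALE}

# As it could be asked: one construct per item, one answer per construct.
single_construct = [
    {"stem": STEM, "item": "Feeling down?", "scale": RESPONSE_SCALE},
    {"stem": STEM, "item": "Feeling depressed?", "scale": RESPONSE_SCALE},
    {"stem": STEM, "item": "Feeling hopeless?", "scale": RESPONSE_SCALE},
]

# With one answer per construct, analysis can tell a respondent who felt down
# once or twice from one who felt hopeless nearly every day.
for question in single_construct:
    print(f"{question['stem']} {question['item']}")
    print(f"  ({' / '.join(question['scale'])})")
```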
CDC mental health findings.
In April of this year, 30% of the adults 18 and over who filled out the survey reported experiencing anxiety or depression, a figure that rose to 36% in July, when the study ended. That is roughly three times the 11% of surveyed Americans who reported feeling anxious or depressed in a similar 2019 study.
Broadly speaking, who has the most reported symptoms of anxiety or depression? Females more than males, lower-educated more than higher-educated, younger more than older, Hispanics more than Blacks, Blacks more than Whites, Whites more than Asians.
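For those curious how percentages like these are produced from the four questions quoted earlier, here is a minimal sketch in Python. It assumes the common two-item scoring convention for these questions (each answer scored 0 through 3, with a subscale total of 3 or more counted as symptoms); the Census Bureau and CDC document their own scoring and weighting, so treat this as an illustration rather than their exact method. The respondents in the example are invented.

```python
# Minimal sketch of computing a "share reporting symptoms of anxiety or
# depression" figure from the four questions quoted earlier. The 0-3 scoring
# and the 3+ cut point per two-item subscale are a common convention assumed
# here for illustration, not necessarily the exact HPS/CDC procedure. All
# respondent data are invented, and the published HPS figures are also
# weighted to population totals, which this sketch ignores.

ANSWER_SCORES = {
    "not at all": 0,
    "several days": 1,
    "more than half the days": 2,
    "nearly every day": 3,
}

ANXIETY_ITEMS = ["nervous", "worrying"]          # the two anxiety questions
DEPRESSION_ITEMS = ["little_interest", "down"]   # the two depression questions


def has_symptoms(answers, items, cutoff=3):
    """True if the summed scores for the given items reach the cutoff."""
    return sum(ANSWER_SCORES[answers[item]] for item in items) >= cutoff


# Three invented respondents and their answers to the four questions.
respondents = [
    {"nervous": "nearly every day", "worrying": "several days",
     "little_interest": "not at all", "down": "several days"},
    {"nervous": "not at all", "worrying": "not at all",
     "little_interest": "not at all", "down": "not at all"},
    {"nervous": "several days", "worrying": "several days",
     "little_interest": "more than half the days", "down": "nearly every day"},
]

flagged = sum(
    has_symptoms(r, ANXIETY_ITEMS) or has_symptoms(r, DEPRESSION_ITEMS)
    for r in respondents
)
print(f"{flagged / len(respondents):.0%} report symptoms of anxiety or depression")
```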
Is the Household Pulse Survey flawed?
To be sure. The HPS’s stated goal of providing “near-real time data” means hasty data dumps with minimal analysis. Is the study fatally flawed? Probably not. Are the findings acceptable? That depends on the standards you apply. Cheap, fast, and easy are fine when casual standards are enough to deliver broad measurements of things that aren’t important. But the HPS itself said “the ability to understand how individuals are experiencing this period is critical to governmental and non-governmental response.”
If what we’re surveying is critical, we need more than broad measurements and more than casual standards. If our survey is not critical, why are we doing it?