Hat trick of survey fail
Of the things you can get wrong with reporting on public opinion, Ohio's Greatest Home Newspaper manages to hit almost all of them in this 1A tale (with charts). In no particular order, they represent failings of writing, statistical presentation and reporting. Let's go:
It might be legal, but that doesn't make it right.
That's generally how we feel about the mosque and Islamic community center proposed for near where New York's twin towers fell nine years ago today, says a new poll of the Columbus area conducted by Saperstein Associates for The Dispatch.
Never report survey results as a reflection of what "we" feel. They are not. That's the cheapest of cheap USA Today-style populism. Surveys show what proportions of a population say at a particular time. It's no more accurate to say "we feel" something than to say "we support" the candidate who took 55% of the vote in an election. And while you're at it, never write ledes of the form "Dead. That's what the man was when they found him," even in Thurber's old hometown.
And 41 percent of us admit that we are more suspicious of those of Arab descent, a 9-point jump after four straight years of decline to an all-time low just a year ago.
No, 41% of "us" admit that "they" are more suspicious; the rest of "us" don't. But did 100% of the people who handled this story not know that "Arab" and "Muslim" aren't the same thing? And did 0% of them think a difference like that doesn't matter?
That's skipping ahead, so let's get back to the numbers for a second. Here's one of the charts that accompany the story inside, and here's a rule you can use for handling this sort of data:
1) Never describe a result as "significant" if you don't know what "significant" means.
2) If you have to ask, you don't.
Basics: A poll is an attempt to generalize from a sample to a population. The bigger the sample, the closer your finding probably is to the real figure in the population. (That "confidence interval" is better known as the margin of sampling error.) There's always a chance your sample is nowhere near the population value; by convention, we accept a 5% likelihood of that -- meaning we test at a "confidence level" of 95%. A "significant" difference in poll results is one in which the two results' confidence intervals don't overlap: add the margin of error to the lower result and subtract it from the higher result, and if the higher result is still higher, your difference is significant. In the case at hand, that would basically require a 10-point gap.
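That back-of-the-envelope rule is easy to sketch. Here's a minimal Python version; the story doesn't give the Saperstein poll's sample size, so the 400-person sample and the 41%-vs.-32% comparison below are illustrative numbers, not the Dispatch's:

```python
import math

def margin_of_error(p, n, z=1.96):
    # 95% margin of sampling error for a proportion p from a sample of n
    return z * math.sqrt(p * (1 - p) / n)

def looks_significant(low, high, moe):
    # The rule from the text: add the margin to the lower result,
    # subtract it from the higher; if the higher is still higher,
    # call the difference significant.
    return (high - moe) > (low + moe)

# Worst case (p = 0.5) for a hypothetical 400-person sample:
moe = margin_of_error(0.5, 400)
print(round(moe * 100, 1))                 # 4.9 -- about 5 points
print(looks_significant(0.32, 0.41, moe))  # False: a 9-point gap falls just short
```

With a margin of roughly 5 points either way, two results need to sit about 10 points apart before the intervals stop overlapping -- which is where the "10-point gap" above comes from.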
On the first of the questions shown in the second table, then, there are not only no significant differences; there are hardly any differences worth mentioning. Two significant differences show up on the second question. Significantly more people think the terrorists are winning the "war against terrorism," and significantly fewer think the US and its allies are. But of the 18 responses in the table (leaving out the DK/NRs), they're the only ones to reach that level. (The bolded numbers represent record lows, and only one of them is significant.)
"Nonsignificant" doesn't mean insignificant (that's another reason you shouldn't use "significant" unless you mean it). There are no practical differences from last year in the proportion of people who are "very worried" or "not too worried" about another terrorist attack.* On the other hand, the decline in the number who have "a great deal" of confidence that the government can protect them, and the increase in the number who report feeling "less safe" under Obama, are real. They're just "real" at slightly less than the arbitrary 95% confidence level used in most social science. "Significant" in this case isn't just inaccurate, it's a poor benchmark for distinguishing relevant findings from irrelevant ones.
This isn't the sort of survey that answers "why" questions, but better attention to the answers would be a step in teasing those questions apart. Here, we start to get into failures of interpretation -- shortcomings in the journalistic process rather than just the writing. Back to the story:
"I think they have the right to build a mosque, but if they are really looking for religious tolerance and sensitivity, they should say perhaps there is a location a little further away that won't stir up this controversy," said one of the poll participants, Eric Young, 48, a video producer in Westerville.
Many respondents probably are using similar reasoning, rather than simply expressing bigotry about the mosque, said Martin D. Saperstein, head of the Columbus firm that conducted the poll.
"People are saying, 'We know you can build it if you want; we're Americans, we get it.' But tolerance is a two-way street," he said. "Some people want them to meet us halfway, 'be as tolerant of us as we are of you.'"
Well, yes and no. We can't sit here and peer into people's souls to determine their true motives, any more than the pollster can. But we can draw some inferences from the way answers co-occur. Having seen the "if they were really sensitive" and "tolerant like us" examples, can you guess what's ahead?
... Although 9/11 was carried out by Muslim extremists, "the problem is that Islam's more-moderate wings have not spoken up to say it was wrong," Hewitt [the next respondent] said.
This raises a Green Cheese question for the journalist: When a source or subject says the moon is made of green cheese, what's your obligation to point out that it isn't -- that, in the cold sunlight of day, the moon is just another lifeless, airless rock? Not only have those "moderate wings" consistently spoken up, but so have some more (certainly by US standards) radical ones. The first time the NYT mentions Feisal Abdul-Rauf after the 9/11 attacks, he's touting a fatwa from Yusuf Qaradawi and other clerics that condemns the attacks and calls it a "duty" to bring the perps to justice.
It's tempting to dump all this into a factor analysis and see what turns up, isn't it? Because if you've been keeping up with the unpleasantness, you might be thinking that all those justifications are just slightly different versions of the same answer. To give the pollster credit, he's right; you can't say they're openly bigoted. But they do tend to be accompanied by dark hints about liberal witch-huntery ("this isn't about racism; it's about policy") and the tyranny of "political correctness." That suggests the Persecuted White Folks theme toward the end:
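For the curious, here's that temptation in miniature: simulate answers driven by one shared underlying attitude plus one unrelated filler item, and look at the correlation matrix -- the raw material a factor analysis formalizes. All the data below are simulated; none of it comes from the Saperstein poll, and the item labels are hypothetical stand-ins for the themes above:

```python
import numpy as np

rng = np.random.default_rng(0)
latent = rng.normal(size=1000)  # the hidden shared attitude
items = np.column_stack([
    latent + rng.normal(scale=0.5, size=1000),  # "move the mosque" (stand-in)
    latent + rng.normal(scale=0.5, size=1000),  # "meet us halfway" (stand-in)
    latent + rng.normal(scale=0.5, size=1000),  # "moderates are silent" (stand-in)
    rng.normal(size=1000),                      # unrelated filler item
])

corr = np.corrcoef(items, rowvar=False)
print(np.round(corr, 2))  # first three items correlate strongly; filler doesn't
```

If "slightly different versions of the same answer" is right, the real data would look like the first three columns here: high correlations with each other, near zero with everything else.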
... She** said she thinks that the mosque should be built "just as soon as we can build a Protestant church or Catholic church right in Baghdad or one of those other places. We are not even allowed to witness there."
Heard it before? It's usually associated with the "Muslim victory dance"/"they're all trained to lie" cluster, and it suggests that there's a much more interesting set of associations out there than the ones we're looking at. Wouldn't it be fun to throw in a few questions about where and how people get their information, then see which themes are associated with which few hours on the AM radio dial?
Leaving the question about Obama's birthplace until Sunday's story, then, seems like a bad decision. There's less of a random burst of attitude change out there than the paper is letting on. Something is making stuff happen, but we don't know what it is.*** Our story raises that question:
What's fueling this uneasy brew as America commemorates another grim anniversary of the 2001 terrorist attack? The rest of the Saperstein survey, which contains several questions that have been asked virtually every year since 9/11, contains strong clues.
... but doesn't get around to answering it, let alone taking a serious run at those "strong clues." That's a failure of journalism: in Hutchins Commission terms, reporting the fact without the truth about the fact. It's harder to fix than a little statistical clumsiness, but imagine the fun if we tried.
* That doesn't show a "statistical dead heat," because there is no such thing. It means the results have a high chance of reflecting noise rather than a change in the population value.
** Slightly off-topic, but did it occur to anyone at the Dispatch to ask why the youngest of the five Real People quoted in this story is 48?
*** Do we, Mr. Ailes?
Labels: editing, green cheese, ledes, polls
3 Comments:
I've never worked at a newspaper, but I do wonder why that doesn't get done. "... he said, although in fact they have." doesn't seem that hard to write.
I'm sorry, Fred, but I continue to believe you're *way* off base in insisting that reporters use only the esoteric jargon sense of the word "significant" rather than the one five hundred million normal English speakers use. Polysemy is a fact of life in this here language community, and statisticians have no more right than systematists do to insist on the supremacy of their jargon over the normal English meanings of words. Since most readers won't take the statistical jargon meaning of "significant," it makes no sense to pretend that you are making a meaningful distinction. (Now maybe if we can fix the educational system so that people ask questions like "what sort of evidence would be required to believe that assertion?" we might be getting somewhere, but I'll still stick up for the general-English sense of "significant" over the arbitrary social-science one when not writing learned monographs.)
Interpretation leaves a lot to be desired in original story. Pollster's assertion that bigotry was not a factor in responses goes unsupported. No poll questions try to tease that out. Story conflates attitudes towards Islam with debate over amending Constitution to restrict citizenship. That is aimed at illegal Mexican immigrants. Poll offers no basis for interpreting one question in terms of the other. Finally, no attempt is made to interpret why fewer feel safer under Obama this year than last. Is it an overall decline in confidence in his leadership, or because GOP is playing the fear card again?