Fun with a purpose
Now that your correspondent is officially procrastinating on three term projects and a one-off, it's time to take a closer look at a burst of journalistic nonsense from last week's screed. Picking on a story this bad is fun, of course, but as the folks at Highlights For Children like to say, it's Fun With A Purpose. Three purposes, actually:
1) Copyeds will almost always lose arguments about whether ledes sound good or bad. They're more likely to win when they can point to specific errors of writing or reasoning.
2) There's a difference between treating sources with respect and courtesy, on the one hand, and sucking up to them on the other.
3) Experts don't get to make things up. It doesn't matter if the expert at hand is your "religion" "editor" and has been covering the subject for 15 years. It doesn't matter if the writer is the pope of Rome.* If the data don't support a conclusion, the conclusion is effectively fiction.
Ready? Here we go, editor's comments in italics:
The latest Charlotte Observer/WCNC News Carolinas Poll confirms what we've known for as long as packed sanctuaries have graced the Carolinas.
Ahem. As noted here last week, that's essentially the lede from the September 2001 take on this annual poll, except then it confirmed "what many believe." Now it's something "we" "know," which is worth some dissecting. Who's "we"? How long have there been packed sanctuaries in the Carolinas -- in other words, have "we" actually known this since the late 18th century? But painful illogic and smarmy self-plagiarism aside, there's a more worrisome point here, which is the writer's (and thus the paper's) belief that packed sanctuaries are unquestionably a good. Given the sorts of things that "packed sanctuaries" have tended to correlate with -- witch-burning, racial intolerance, blowing oneself up in crowds of civilians and the like -- that's a risky conclusion. Not to mention one that suggests an open bias on the part of the writer (and the paper).
This is a region whose residents take religion seriously:
Rule 1 of survey reporting: Polls measure only what they measure. This poll doesn't measure how seriously people take religion. It measures self-reported attendance at religious services. They are not the same thing. Ask a preacher.
• Forty-six percent said they attend a church or other house of worship very often. An additional 23 percent said they attend somewhat often and 22 percent not that often, with only 9 percent saying they never attend.
Though the question was worded differently, the Carolinas Poll in August 2001 found active devotion as well: Fifty-four percent said they attended a service within the past week.
Once again, the poll didn't measure "active devotion." It measured self-reported attendance at a service within the past week. RTFP.
• Worship attendance in 2005 varies little from county to county. Forty-eight percent of those living in the eight-county Mecklenburg region said they attend a house of worship very often, compared with 49 percent in the balance of South Carolina and 45 percent in the balance of North Carolina.
Here, a bit of overstretch turns a usable fact (there's no significant difference between self-reported attendance in the core region and either of the two states as a whole) into a false statement. There's no way of telling from the data whether worship attendance varies "from county to county" (this should be obvious, given a sample size of 923 and 146 counties total). County-by-county rates could vary drastically -- say, 10 percent in County A and 80 percent in same-sized County B -- and still yield an overall mean of 45 percent. You can't guess the range from the mean. Put another way, the normal temperature in St. Louis in October is 58 degrees: What should you wear this afternoon?
Moral: Never go beyond the data. No errors are good, but this one's particularly useless. The writer doesn't even get to ingratiate himself with his sources.
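To make that concrete, here's a minimal sketch in Python with two invented counties. The figures are made up for illustration; nothing here comes from the poll:

```python
def overall_rate(county_rates, county_sizes):
    """Population-weighted overall rate, in percent, across counties."""
    total = sum(county_sizes)
    return sum(r * s for r, s in zip(county_rates, county_sizes)) / total

# Scenario A: two same-sized counties, both at 45 percent.
print(overall_rate([45, 45], [100_000, 100_000]))   # 45.0

# Scenario B: 10 percent in County A, 80 percent in County B.
print(overall_rate([10, 80], [100_000, 100_000]))   # 45.0

# Same overall mean, radically different county-to-county picture --
# which is why a statewide figure can't support a "varies little from
# county to county" claim.
```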
• Women generally worship more than men: Fifty percent of women said they attend a house of worship very often, compared with 43 percent of men.
Three faults here:
* First, sampling error for subgroups is larger than sampling error for the whole sample. Judging from the original n=923, this difference between subgroups isn't going to be statistically significant (see the sketch below).
* Second, we're committing several sins against the numbers:
a) Confusing categorical data with continuous data. "Very often" is probably more often than "somewhat often," but not certainly: your "very" could be my "somewhat," but your "3" can't be my "2."
b) Ignoring the rest of the answers. What happens to the results above if 40 percent of women also report "not that often," but 40 percent of the men report "somewhat often"? Again, since there's no measure of the difference between "somewhat often" and "not that often," we don't know, but we'd at least want to scratch our heads.
* Third, back to Rule 1: The poll doesn't measure how much people worship. It measures (sort of) how much they say they worship. Again, if you can't tell the difference, go ask a preacher.
Here's a better way to state the result: "The survey found no significant difference between the proportions of men and women who report attending services very often." It doesn't sound as interesting, but it has the advantage of being true.
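And here, for the curious, is a back-of-the-envelope sketch in Python. The male/female subgroup sizes are assumptions (the story doesn't publish them), so treat the output as rough bounds rather than a verdict:

```python
import math

def moe(p, n, z=1.96):
    """95 percent margin of sampling error, in percentage points,
    for a proportion p from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n) * 100

# Whole sample: n = 923 at p = 0.5 reproduces the story's "maximum" figure.
print(round(moe(0.50, 923), 1))   # 3.2 points

# Subgroups -- assuming, purely for illustration, a roughly even
# male/female split; the story doesn't report the actual subgroup sizes.
print(round(moe(0.50, 462), 1))   # women at 50 percent: about 4.6 points
print(round(moe(0.43, 461), 1))   # men at 43 percent: about 4.5 points

# Margin on the *difference* between the two reported proportions:
diff = 1.96 * math.sqrt(0.50 * 0.50 / 462 + 0.43 * 0.57 / 461) * 100
print(round(diff, 1))             # about 6.4 points

# The reported 7-point gap barely reaches the edge of that idealized
# margin, and real-world design effects typically widen it -- so without
# the actual subgroup numbers, the difference can't be called significant.
```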
• African Americans worship more than whites: Fifty-four percent of blacks said they worship very often, compared with 46 percent of whites.
Same three offenses. Broken record time: It doesn't matter what your title is. You can't make the statistics say things they don't say. That's called "making stuff up."
• The older you are, the more you go to a church, mosque or synagogue: Fifty-eight percent of those ages 55 and over said they worship very often, compared with 47 percent of those 35 to 54, and 34 percent of those 18 to 34.
Our old friend the fallacy of division. "You" is a specific person, and you can't predict one person's behavior from a poll. Again, a properly reported poll would include margins of sampling error for subgroups, but these differences might be significant. If so, you could say "Respondents ages 55 and older were more likely to report worshipping 'very often' than younger respondents." In any case, please avoid addressing the reader directly (see below).
• The person beside you on the pew is more likely a Democrat or Republican than an independent: Fifty-two percent of Republicans and 51 percent of Democrats said they worship very often, compared with 35 percent of independents.
Not to be tacky, but so is the "person beside you" at the Harris-Teeter or the football game or the cross-burning. In North Carolina, about four times as many voters are registered Republican or Democrat as are registered unaffiliated. It might be an interesting finding (depending on those subgroup stats!) if unaffiliated voters are less likely to report worshipping "very often" than party members. It'd be even more interesting to tease out views on political issues held by "very often," "somewhat often" and the like. But this datum in itself is about as interesting as saying the sun was likely to have risen in the East before you headed off to your church, mosque or synagogue.
Someone also should have protested the use of the second-person pronoun. Has it not occurred to the Observer that some of its readers might not attend services and might consider this particular bit of sucking-up to be, well, exclusionary? That respect and pandering are not the same thing?
The poll is based on 923 confidential telephone interviews conducted Aug. 25-Sept. 19. The maximum sampling error is 3.2 percent.
No it isn't. Margins of sampling error are expressed in percentage points, not "percent." That's a small amount of type, but it isn't a small thing. It's a building block, and if your doubts about the writer's ability to handle numbers had been growing all along, it's kind of the icing on the cake.
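For anyone inclined to shrug that off as pedantry, a quick sketch using the story's own top-line numbers shows how far apart the two readings are:

```python
# The poll's published figures: 46 percent report attending "very often,"
# with a stated maximum sampling error of 3.2 (points, properly).
estimate = 46.0
margin = 3.2

# Read correctly, as percentage points, the interval is additive:
print(estimate - margin, estimate + margin)       # 42.8 to 49.2

# Read literally as "percent" -- 3.2 percent *of* the estimate --
# you get a different, and wrong, interval:
relative = estimate * margin / 100                # about 1.47 points
print(estimate - relative, estimate + relative)   # ~44.5 to ~47.5

# The two readings disagree by more than a point on each end, which is
# why the distinction isn't just pedantry.
```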
Taken together, what we have here is a chunk of marginally interesting data about self-reported religious behavior in the Carolinas. Under better circumstances, it might have added something to the sum of human knowledge. Instead, thanks to expert intervention, it makes the paper look not only stupid but biased. That's a bad outcome.
* Or the pope of Alexandria. HEADSUP-L is all about diversity.
4 Comments:
And remember Strayhorn's Big Law about front pages:
Polls aren't news.
Anyone who ledes a front with a poll story will be shot. In fact, poll stories should be put on the same page as the horoscopes, unless you are willing to print the exact questions and methodology of the poll.
A minor dissent on the note about margins being expressed in points. It remains a matter of debate.
Kathleen Wickham in "Math Tools for Journalists" (2nd Ed.) has a good explanation for why "percent" and not "percentage point" is preferred. (p. 73) [correcting the page reference]
Doug: We don't seem to have the 2nd edn at the library; if you have it at hand, would you mind posting Wickham's explanation?
Judging from the 1st edn (2002), it looks as if she's simply using 'percent' and 'percentage point' interchangeably. Lots of people do that; I think it's misleading. Viz:
[in this example, the candidates have 53% and 47% support, respectively, in a 'random poll of 600 female voters']
"The margin of error ... is plus or minus 4 percent. That means the support for Lax could range between 57 percent (53+4=57) and 49 percent (53-4=49)."
Not quite. Four percent of 53 is 2.12; that'd make the range 55.1% to 50.9%. The classic version of this error hereabouts is referring to a recent increase in the city's hotel tax as a '2 percent increase'; the tax went from 2 percent to 4 percent, which is a 100 percent increase (but 2 percentage points; since 'the 2-percentage-point increase' would be a bit on the clumsy side, I suggest 'the doubling of the tax').
Not to complain too much, but this one's a badly chosen example of whether a poll does or doesn't reflect a significant difference in means based on sampling error. A poll of female voters can only be used to draw inferences about female voters; it can't be extended to the entire voting population. (I also hope "gives the reader a chance to assess the results for themselves" at p. 53 has been fixed.)
"Percent" also has the unpleasant habit of changing, depending on which direction you're going in and other stuff. The example in Darrell Huff's excellent "How To Lie With Statistics" (kids, it's NOT TOO LATE to ask Santa ...) is pay cut and pay raise. If your pay is cut by 25 percent, then raised 25 percent after the advertising crisis is over, are you back at your former salary?
Margin of sampling error is usually expressed in whatever units you're measuring (e.g., inches, if you're studying heights in inches). Until I see a persuasive argument, I'm going to remain a prescriptivist about this one.