The AP's new clues
Here's an interesting item from this week's Ask The Editor collection at the AP Stylebook site:
I have a question about reporting poll results and the margin of error. The stylebook address a poll taken when two candidates are facing off, but I'm not exactly sure how to apply that to a ballot measure that requires a certain percentage to pass. Specifically, a poll found 51 percent of voters plan to vote against a measure, 42 percent are for it and 7 percent are unsure, with a margin of erro
And the response (the nut of it, at least; most of the response is taken up with noting that the questioner seems to have overlooked the character limit on questions):
I'll try to find out, but rpt just this sentence with the missing words ...
I don't think it's really a trick question,* or something unique to the AP. It's broader than that, and that makes it interesting, because it says something about how journalism interprets concepts like expertise and authority. "Margin of error" isn't something you can own, like Yastrzemski's batting average in 1967;** it's something you have to take an expert's word for (once you find an expert). And if you're interested in social science reporting, word choice, audience perceptions and other stuff that comes up around here at times, that's a way of understanding what things like "precise" and "credible" and "objective" mean when we apply them to news decisions.
What's the answer? Well, start with the question. The stylebook doesn't say anything about how to handle a poll "taken when two candidates are facing off" (or putting the gloves on, or taking the gloves off, or doing the horizontal hokey-pokey). It addresses how to discuss a difference between two candidates.*** There's no mention of whether that means two of two candidates or two of five candidates -- or if it's a primary with a 40% threshold, or a ballot issue ("yes" and "no" being about as two-candidate as you can get) with a 60% threshold, or anything like that, because none of those things affect what the margin of sampling error measures. So the answer is simple: Apply the same guidelines.
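That claim is easy to check, since the formula for the margin of sampling error on a proportion contains a sample size, a proportion and a confidence multiplier, and nothing else. Here's a minimal sketch (the poll size of 800 is made up for illustration; the 1.96 is the standard 95-percent-confidence multiplier):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of sampling error at 95% confidence for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 800 voters. Nothing in the formula knows -- or
# cares -- whether the question is a two-candidate race, a five-way
# primary, or a ballot issue with a 60% threshold. The threshold never
# appears as an input.
moe = margin_of_error(800)
print(round(100 * moe, 1), "percentage points")  # about 3.5
```

Note that the threshold to pass isn't an argument to the function; it belongs to the interpretation of the result, not to the calculation of the error.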
That doesn't mean the question is a stupid question. It suggests that the questioner doesn't know much about the domain involved (which is vastly different from being stupid). The problem is that a straightforward question ought to get a straightforward answer, and what it gets is a misleading (I'm trying hard not to say "stupid") one.
Misleading how? Because it pushes a mathematical formula that's only slightly more complicated than "earned-run average" into the domain of wizardly expertise. Journalists can't live there; we can only knock on the door, accept what we're told, and go home to see what the Times is saying. And the whole point of inferential stats is that -- as in baseball -- we all own them. We might use a batting average to support different arguments (you say a .367 average proves that power is irrelevant, and I say it fails to show that the clown doesn't hit with runners on second or third). But we're both using the same proportion, and in both cases it represents the same thing. If someone asks how to calculate batting averages if a player needs 350 plate appearances to qualify for the batting title, we have the same answer: No difference. Do all the same calculations, then go to "select cases" and include 'em out**** if PA is less than 350.
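The batting-title point can be made literal in a few lines. These players and numbers are invented for illustration; the point is that the 350-PA cutoff is a filter applied after the arithmetic, never a change to the arithmetic:

```python
def batting_average(hits, at_bats):
    """Same division whether or not the player qualifies for the title."""
    return hits / at_bats

# Hypothetical players: (name, hits, at_bats, plate_appearances)
players = [
    ("Regular", 180, 520, 600),
    ("Part-timer", 40, 109, 120),
]

# "Select cases": compute everyone's average the same way, then
# include 'em out if PA is under 350. The threshold changes who
# makes the list, not how any average is calculated.
qualified = [(name, round(batting_average(h, ab), 3))
             for name, h, ab, pa in players if pa >= 350]
print(qualified)  # only "Regular" survives the filter
```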
If we want to cure what ails poll reporting, and science reporting in general, that'd be a good place to start. The discussion section -- where you talk about what your statistics mean, and why they do or don't support your theory-driven predictions -- is where experts disagree (and ought to be consulted; there's nothing wrong with talking to smart people, as long as you're talking to the right people about the right stuff). The results section is a slightly more complicated version of what you've been doing since the first time you picked up a Sunday sports section and started pondering the AL Top Ten. You can play there. Any copyed, on any desk, anywhere, can own "sampling error" as thoroughly as the sports desk owns "earned-run average." And once you do, you can challenge any faulty conclusions that rely on your being too ill-informed -- or too cowed by magic -- to challenge.
Is the AP serious about "accountability journalism"? A little more aggression with undergraduate statistics would be a good place to start.
* As in "Hey, Stylebook! If a plane crashed on the border of two countries, where would you bury the survivors?" Though it's close.
** Though since you're reading this on a computer, you probably have a calculator a few clicks away, and the margin of sampling error needs maybe one more keystroke to calculate.
*** To its everlasting credit, the stylebook also notes that there is no such thing as a "statistical tie." Please heed it.
**** Should have been Berra, was probably Goldwyn. Alas.