From man to pig ...
Fair warning: If you aren't interested in another rant about the journalistic misuse of public opinion data, now would be a very good time to check out the comics.
Who wants to hazard a guess at the offense reflected in this McClatchy tale from Washington? (That's Augusta, top, and Wichita, the only ones I can find that fronted it, but if you ran it inside, you're still among the sinners.) Here's a hint: What's the difference between this story -- "New Ipsos-McClatchy online polls find that patients in Canada are indeed much more frustrated by waiting times to see medical specialists than patients in the United States are, and slightly less happy with the waiting times to see their family doctors" -- and "Poll: Americans split on health care," from last week?
All right. To spoil the surprise, "Americans split" was a survey of a random sample of American adults, meaning we can make fairly accurate predictions about the attitudes of the population -- all American adults -- from it. What sort of sample does today's story represent?
The online polls surveyed 1,004 U.S. adults July 9-14 and 1,010 Canadians on June 5-7. They aren't scientific random samples, don't statistically mirror the population and thus have no margin of error. Rather, they resemble large focus groups to help see what people are thinking about a particular issue.
It's nice of McClatchy to say it so prominently, and in so many different ways, but the essence of the methods graf is that the story is a crock. We have no way of knowing which country is more frustrated with waiting times because we don't have the kinds of samples that allow for those generalizations. And if you know enough to write that these samples "resemble large focus groups," you ought to know that.
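For the record, the margin-of-error point isn't mystical; it's arithmetic that only works when the sample is actually random. A quick back-of-the-envelope sketch, using the standard worst-case formula for a simple random sample (nothing here is specific to the Ipsos-McClatchy data):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random sample of size n.
    Worst case is p = 0.5. The formula simply doesn't apply to a
    self-selected online panel, which is the methods graf's point."""
    return z * math.sqrt(p * (1 - p) / n)

# A random sample of about 1,000 adults buys you roughly +/- 3 points:
print(round(100 * margin_of_error(1004), 1))  # prints 3.1
```

That plus-or-minus 3 points is what a real poll of 1,004 adults would earn. An opt-in sample of 1,004 earns nothing, no matter how big it gets.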
There is a larger point here than just standard Washburo carelessness and wire-desk timidity. When we beat on the Fair 'n' Balanced Network for its reporting on public opinion, we aren't complaining about its poll results. Competently run national surveys -- consistent, single-dimension questions, asked of genuine random samples -- tend to produce very similar results, no matter whether they're funded from Mars or Moscow or Rupert Murdoch's private bunker. It's no accident that Fox and the Washington Post reported nearly identical job-approval ratings for Bush in January; random samples really do generalize to populations.
The trouble with Fox is that it cheats. It uses different sets of rules for outcomes it likes and outcomes it doesn't like, and it reports non-polls as if they were the real thing -- say, claiming a "landslide of support for McCain" in the "military vote," based on a self-selected sample from a nonrepresentative population. If McClatchy doesn't want to be mistaken for Fox -- that is, if it doesn't want to be known as the sort of outfit that makes stuff up to support its ideological preferences -- it needs to do a better job of playing by the rules. When you look from man to pig and from pig to man, you'd like to be able to tell the difference.