Monday, August 27, 2018

Oh, stop it

Is it time to complain about the misinterpretation of public opinion data again?  Yes, and Nate Silver is the one overplaying things here.

The point of the complaint seems to be that @MeetThePress is overemphasizing the stability of Trump's approval based on a single poll finding. To be fair, yeah: Most journalists writing about survey data would do well to dial back the loud adverbs, just on principle. But on the whole, there aren't a lot of reasons to reject the null here, and when in doubt, it's always good to err on the side of not jumping up and down for the sake of jumping up and down. Let's have a look at the data and see why.

The NBC/Journal poll (n = 600 registered voters, 44% approve, 52% disapprove) is the only one I know of that was in the field entirely after the (ahem) events of Tuesday. The "margin of error" at 95% confidence is +/- 4 points. As Silver notes, the approval finding is down 2 points from an NBC poll that was in the field Aug. 18-22, and with 2 points being almost exactly the standard error,* he suggests that's a bigger finding than "stable."

I think not, for a couple of reasons. NBC/WSJ surveys from June and July showed 44% and 45% approval, respectively, so one perfectly good interpretation of the approval result is "same as in June." More importantly, though, all samples are estimates. We can't prove that the population value (the actual figure the samples are trying to estimate) has remained stable at 45% since June, but a run of findings like 44-45-46-44 -- June, July, mid-August, and now -- would be a really good representation of such a reality.
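If you want to check that claim, a few lines of simulation will do it. Here's a quick sketch (mine, not anything from the poll reports), in Python, assuming the true figure sat at 45% the whole time and that every survey sampled 600 registered voters -- only the August poll's n is given above:

import random

TRUE_APPROVAL = 0.45   # assumed stable population value since June
N = 600                # sample size, assumed the same for every poll
TRIALS = 10_000        # number of simulated polls

hits = 0
for _ in range(TRIALS):
    # draw one simulated poll and round it to a whole-number approval figure
    approvals = sum(random.random() < TRUE_APPROVAL for _ in range(N))
    estimate = round(100 * approvals / N)
    if 44 <= estimate <= 46:
        hits += 1

print(f"Simulated polls landing on 44-46%: {100 * hits / TRIALS:.1f}%")

A bit more than half of the simulated polls round to 44, 45, or 46, so a string of 44-45-46-44 is about what a flat 45% ought to produce.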

It's also entirely possible (when in doubt, bet the mean) that Trump's approval has inched up and then down since June at exactly the rate the samples indicate, or even a bit more or less. The findings could reflect a change in population value from 48% to 42%. They could also all reflect a steady population value of 48% or 42%; that would be a much less likely, but still possible, result of chance variation (assuming identical sample sizes, which the pdf doesn't spell out). And any of those results could be an outlier; a 95% confidence interval tells you that about one sample in 20 will be, but it doesn't tell you which one (or which 20). The safest conclusion about Trump's approval among registered voters since June is: Yep, pretty stable.
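The one-in-20 point works the same way. Another sketch under the same assumptions (true value fixed at 45%, n = 600 every time): build the usual 95% interval around each simulated poll and count how often it misses the truth.

import random

TRUE_P = 0.45   # assumed fixed population value
N = 600         # assumed sample size
TRIALS = 10_000

misses = 0
for _ in range(TRIALS):
    p_hat = sum(random.random() < TRUE_P for _ in range(N)) / N
    moe = 1.96 * (p_hat * (1 - p_hat) / N) ** 0.5   # the footnote's formula
    if abs(p_hat - TRUE_P) > moe:
        misses += 1

print(f"Intervals that miss the true value: {100 * misses / TRIALS:.1f}%")

It comes out right around 5% -- and nothing in any single poll tells you whether that poll is one of the misses.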

True, a decline of 2 points would be "reasonably big"** if "it held across other polls," and it'd be bigger still if the SE were 1.5 instead of 2, but "other polls" haven't happened yet, and I'd rather poll stories talked about what has been found than about what might be. And "pretty stable" is what this poll has found -- like it or not.

That, I think, is the real issue: the urge to bash the methodology when you dislike the results of a survey. That's normal journalism, in that journalism finds it hard to resist pounding the data into the shape of a story, however hard the data resist. But business as usual -- specifically, the urge to proclaim that Trump's support was "tanking" when the data said no such thing -- runs the risk of making us look both biased and stupid. I'd prefer we aim for "neither."

* For approval, that's the square root of ((.44)(.56))/600, if you're scoring along at home; multiply by 1.96 to get the "margin of error" at 95% confidence. For simplicity, you can use .25 for the product, which gives the maximum margin for any finding based on the whole sample.
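Or, if Python is handier than a calculator, the same arithmetic as a sketch (my translation; the figures are the ones reported above):

n = 600   # registered voters in the NBC/WSJ sample
p = 0.44  # the approval finding

se = (p * (1 - p) / n) ** 0.5   # square root of ((.44)(.56))/600: about 2 points
moe = 1.96 * se                 # the "margin of error" at 95% confidence

se_max = (0.25 / n) ** 0.5      # p(1 - p) tops out at .25 when p = .5
moe_max = 1.96 * se_max         # maximum margin for any full-sample finding

print(f"SE = {100 * se:.1f} points, MoE = +/- {100 * moe:.1f} points")
print(f"Maximum: SE = {100 * se_max:.1f}, MoE = +/- {100 * moe_max:.1f}")

Both versions land on the reported +/- 4 points, and the standard error itself is right around 2.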
** See what I mean about the damn adverbs?
