Wondering about that funny-looking chunk of election-oriented social science that just landed in your basket for tomorrow's paper? Here are some quick tests to help you tell whether it's ready for the front page or the big sleep. And there's a real news story to practice on!
[That's actually the serious part, because this story has been floating around the McClatchy Web site since yesterday, suggesting it's probably headed for print at some shops that use the McClatchy service. It really, really needs to be killed, so if you see it headed for print ...]
Here's the story:
(Several assertions in the hed are either questionable or flatly wrong, but let's move on to the text first)
John McCain struck again on Friday, releasing a Web video suggesting that his Democratic rival, Barack Obama, is "The One," a semi-religious figure sent to save the world. The spot includes footage of Charlton Heston as Moses, parting the Red Sea. The ad was the second released this week by McCain intended to make fun of Obama. Earlier, the campaign issued an ad that likened Obama to Britney Spears and Paris Hilton in an effort to take the shine off the huge crowds Obama drew in Berlin during his European tour.
Friday's ad takes that theme one step further, lampooning Obama's soaring rhetoric and suggesting that the Illinois senator suffers from a Messianic complex.

Wondering when we'll get to the study? That's a good question. First, though, you might want to point out a contradiction between the lede and the second graf. What the McCain camp has done is to release a video. It's not an "ad" until someone buys time or space* for it (which is why we say stuff like "tell him to buy an ad" when a candidate tries to pitch a story about the other candidate's messiah complex). "Ads" also run the risk of disapproval by the outlet's ad-standards department. You can make fun of McCain's Intertube skills all you want, but he's getting some pretty good viral-campaign mileage out of this.
... A small study of people's reactions to the Britney-Paris ad suggested, however, that while people don't like the ad, it caused them to doubt Obama, and small percentages who'd said before viewing the ad that they'd vote for him said afterward that they wouldn't.
OK. Now we're out of "normative" territory (is it appropriate to write a story that, in effect, replicates a dishonest ad?) and into nuts-and-bolts land. If this is a "study," there's a set of questions we need to know the answers to before it can run. First, what kind of "study" is it? Masscomm research breaks down broadly into two kinds:
1) Studies that count stuff, test the resulting numbers and draw inferences, and
2) Everything else
"Everything else" is a huge range of of domains and methods: history, law, Saidian critical discourse analysis, focus-group discussions of pizza ads, and more. You shouldn't sell it short, but today we're talking about the first kind of study.
That settled, we need to ask what kind of quantitative study it is, because methods aren't interchangeable. Content analysis can tell you that the War on Terror® looks different on Fox than on the BBC, but it can't tell you what effect that difference has. Surveys can tell you what people say, but not what sort of content makes them say it. The story isn't complete if it doesn't tell you what "study" means. And when that's settled comes the fun stuff: What did they measure, how did they measure it, and what do the results look like?
Those declines didn't result in more support for McCain; doubting Democrats and Republicans instead moved into the undecided column. Independents who moved away from Obama did say they'd vote for McCain.
The study, of 320 Americans, found that a majority of Republicans were "disturbed, skeptical" and "saddened" after viewing the ad and that 61 percent of Republicans had a negative view of the ad.
...While viewing the ad, participants indicated their levels of agreement by moving their computer mouse from left to right on a continuum. The responses were recorded in quarter-second intervals and reported in the form of curves. Participants were also asked pre- and post-viewing questions.
McClatchy hasn't bothered to say, but at this point you can figure out that the "study" is an experiment, not a survey. That puts the hed in a different light: 320 would be small for a survey,** but it's really big for an experiment. (And it's a study of only one ad, so the hed's wrong on that count too; we can't talk about what the "ads," plural, are doing, because we aren't measuring it.)
Is it a good experiment? Yet another set of questions. For starters, participants aren't randomly assigned to conditions. That's not a deal-breaker (after all, you can't randomly assign people to smoke or be pregnant), but it puts us in the category of "quasi-experiment." It's a single-shot, pretest-posttest design with no control group. That means any conclusions about the effect raise an immediate question: Compared to what?
If the ad does anything, we don't know how it compares to the effect of no ad at all.*** We don't know whether McCain ads have more impact than Obama ads, or whether an "acclaim" (pro-McCain) ad has more impact than an "attack" (anti-Obama) ad. With only one stimulus, we have no idea what element of the ad -- visual, voice, music, content -- might be having the effect. Which puts this ominous paragraph in a whole different light:
But the results that may have been most telling were the changes in whom the participants would vote for and suggested that such advertising could have an impact, especially among independents.
If McClatchy thinks this result is the "most telling," why isn't it the one that the researchers emphasize? Why do the researchers note, to the contrary, that "the ad did not move voters"? That gets to what's measured and how, so let's try to tease some numbers out of the story and the original report and see what we can do with Excel and that nice VassarStats link to the right.
The researchers probably didn't mention this ominous sign of the Power of Evil Ads because it's irrelevant. Put the changes for "who would you vote for today?" in a chi-square and the P value comes out to about .89. In other words, if the ad had no effect at all, you'd still expect shifts at least this big nearly nine times out of ten by chance alone. We don't know where the almost imperceptible change -- for the record, three original Obama voters went to "other" and three to "undecided" -- came from. We do know that whatever is happening on that question (not "doubt," which seems to have been made up by the reporters****) is almost certainly not a result of the experimental treatment.
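If you'd rather check that arithmetic in code than in VassarStats, here's a minimal sketch using scipy. The before/after counts are invented placeholders -- the story tells us only about the six Obama defectors -- so swap in the real table if you can recover it from the report:

```python
# Chi-square on before/after answers to "who would you vote for
# today?" Counts are HYPOTHETICAL, built only to match the fragments
# the story reports (three Obama voters to "other," three to
# "undecided").
from scipy.stats import chi2_contingency

#          Obama  McCain  Other  Undecided
before = [  160,   120,    10,     30]
after  = [  154,   120,    13,     33]

chi2, p, dof, _ = chi2_contingency([before, after])
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.2f}")
# With shifts this small, p comes out around .89 -- noise, not an
# effect of the ad.
```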
That doesn't mean there aren't significant results of the experiment. There are. If you treat "very favorable," "somewhat favorable" and the like as nominal data, the ad has no effect on opinions about Obama but a significant negative effect on opinions about McCain. If you squint a bit and assume that the intervals from 1 (very negative) to 4 (very positive) are equal, the ad makes Republicans significantly more positive about McCain and Democrats and Independents significantly more negative. The mean differences are small (with an N of 320, you don't need much to reach significance, but that's another issue), but they almost certainly didn't come about by chance.*****
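Here's what that "squint and call it interval" test looks like in practice: a paired t-test on before/after scores. The ratings below are made up, standing in for one party's responses; the real ones have to be rebuilt from the report's percentages (see the starred recipe at the end of the post):

```python
# Paired t-test treating the 1 (very negative) to 4 (very positive)
# scale as interval data. Scores are INVENTED for illustration.
from scipy.stats import ttest_rel

before = [4, 3, 3, 3, 2, 2, 2, 2, 1, 1]  # pre-viewing ratings
after  = [4, 3, 3, 2, 2, 2, 1, 1, 1, 1]  # post-viewing ratings

t, p = ttest_rel(before, after)
print(f"t = {t:.2f}, p = {p:.3f}")
# Ten made-up cases won't reach significance, but the same mean
# shift across 104-108 real cases easily can -- which is the point
# about big Ns above.
```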
Whether the "if the election was today" question is a better predictor of voting behavior three months in the future than the "your overall opinion" question is a matter for debate. The results aren't. To the extent we can say anything at all about its impact, the ad affects opinion but not intent. And if the ads "hurt" anyone, they "hurt" McCain.
If you've been following the playbook -- what sort of study, what was measured, what do the numbers say -- the conclusion ought to be pretty clear. Kill the story. Right now. Or ask McClatchy to provide a version that accurately reflects what the study found, rather than what the reporters speculate.
* Unless MCT is donating the space as a public service, like an antismoking campaign, and it really doesn't want to go there.
** MCT has certainly been happy to draw inferences from smaller subsamples in the past, though.
*** With 320 people, you could have 160 watch a 2-minute clip from "The Daily Show" (80 with ad, 80 with no ad) and 160 watch a 2-minute clip from "The O'Reilly Factor" (80 with, 80 without). Now you have a 2 (show) x 2 (ad) x 3 (affiliation) design, and that's going to start being fun in a hurry.
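Sketching the assignment logic for that design makes the contrast with the actual study plain. Everything here -- show names, cell sizes -- is just the footnote's hypothetical, not anything the researchers did:

```python
# Random assignment to a 2 (show) x 2 (ad) design, 80 per cell.
# Party affiliation is measured afterward, supplying the third
# factor in the 2 x 2 x 3 analysis. Purely illustrative.
import random

random.seed(1)                    # reproducible shuffle
participants = list(range(320))
random.shuffle(participants)      # the step the actual study skipped

conditions = [(show, ad)
              for show in ("Daily Show clip", "O'Reilly clip")
              for ad in ("ad", "no ad")]

assignment = {p: conditions[i % 4] for i, p in enumerate(participants)}
# Each of the four cells now holds exactly 80 randomly chosen people,
# so "compared to what?" finally has an answer: the no-ad cells.
```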
**** As does the bit about whether participants "dislike" the ad. If it wasn't measured, you can't say it was. Period.
***** You can, and should, try this at home. To see whether the average "before" response differs from the average "after" response:
1) Find the original data and convert the percentages back to raw numbers
2) Create an Excel sheet with three columns: party (1, 2, and 3, just to make things easier), before and after. The first 104 cases are the Democrats (1 in the "party" column), the next 108 are the Republicans (2) and the next 108 are the Indys (3)
3) In "before" and "after," enter the number of responses that correspond to each level of the variable. Among Democrats, four have a "very favorable" opinion of McCain before seeing the ad, so the first four rows in "before" get a 4.There are 21 "mostly favorable," so the next 21 rows get a 3. In "after," the first 5 rows get a 4 (for "very favorable"), the next 18 get a 3, and so on.
4) Run a paired T-test (under "data analysis") on the "before" and "after" columns, selecting just the Democratic rows first, then the Republican rows, then the independent rows. The test will tell you what the averages are for before and after, and the confidence level of the test statistic -- that is, whether the difference is statistically significant.
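The same recipe runs in a few lines of Python if Excel's Analysis ToolPak isn't handy. Only the Democrats' top two category counts below come from the walkthrough above; the bottom two are placeholders that merely sum to N = 104, to be replaced with the report's actual numbers:

```python
# Rebuild raw 1-4 scores from per-category counts, then run the
# paired t-test from step 4. Counts are ordered (very favorable,
# mostly favorable, mostly unfavorable, very unfavorable); the last
# two Democratic counts are GUESSES.
from scipy.stats import ttest_rel

def expand(counts):
    """Turn category counts into a column of scores, 4 down to 1."""
    scores = []
    for score, n in zip((4, 3, 2, 1), counts):
        scores.extend([score] * n)
    return scores

dem_before = expand((4, 21, 40, 39))   # N = 104
dem_after  = expand((5, 18, 41, 40))   # N = 104

t, p = ttest_rel(dem_before, dem_after)
print(f"Democrats: t = {t:.2f}, p = {p:.3f}")
# The pairing of rows follows the recipe's column construction, just
# as in Excel. Repeat with the Republican and independent counts
# (N = 108 each) to get the party-by-party comparisons.
```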