Our friend Rick Perlstein had a good piece in The American Prospect yesterday about polling and its pitfalls — both its pitfalls as a practice, with its evolving, imperfect methodologies, and as something we political junkies obsess over. The gist of his argument is contained in the subhed: “Presidential polls are no more reliable than they were a century ago. So why do they consume our political lives?” I don’t think Rick quite sustains that claim, and I’m not sure he’s actually trying to. But as is the case with many articles, it’s interesting enough for the points and the bits of history he shares along the way. The recurring pattern is that new methodologies keep working great until they suddenly don’t, and then it’s on to some new methodology. Then there’s the fact that for decades pollsters always seemed to stop polling too soon and miss big shifts at the ends of campaigns.
Meanwhile, there’s this thread from Adam Carlson, one of the numbers-and-polls types I follow on Twitter. It’s another good discussion of what polls are and are not for, and of why and why not to get obsessed with them. He mentions one thing I think about a lot: who’s winning or not winning is the least important part of a poll for an actual campaign. Campaigns look at directional movement in the race, what messaging is and isn’t working, and how best to allocate resources, which is usually downstream of what is and isn’t working. If you’re the campaign and the poll says you’re behind, well, what does that tell you? You need to try harder? I mean, presumably you’re already doing everything you can. The interesting thing is that the part of the polls most people are obsessed with is the part that the people paying for them, or the people using them for their jobs, care about least. There’s probably some message there.
Pollsters and poll analysts like to say that most people don’t really understand how probability works, that they haven’t really internalized that a poll is a snapshot and not a prediction. This is all true, but the funniest, or most perverse, part of poll-obsessive discourse (which includes both the consumers and producers of polls) is that the polling experts don’t really understand them either. Or, to put it more precisely, if you listen to the pollsters and polling experts when they’re describing their polls to the public, jousting with campaign and pollster rivals and generally arguing about the results, they quickly toss aside all that caution about uncertainty and probability themselves.
A lot of us are just living and dying with the polling of the day because we’re obsessed. We’re lost in that human frailty of trying to control the future by understanding it as well as we can, albeit often illusorily, and we just can’t stop.
I was telling someone recently that on a basic level I don’t understand gambling. I don’t get what’s fun about it. Don’t try to explain it to me, because I do understand it at that level. I’m saying it’s never personally had any appeal for me. There’s a part of most people’s brains I don’t have, and so there’s something that clicks for many people about a certain structured kind of risk-taking that is lost on me. The reverse is the case with polling. I can’t put it down. And it’s the same for a lot of other people who are really into politics. It is what it is. As long as you’re just betting lunch money on the weekend game, it’s probably not a huge problem.
Rick describes in his piece a story the Times’ Nate Cohn tells about taking the raw results of a poll and giving them to a handful of highly respected pollsters to weight. Weighting means adjusting the results by reweighting responses from different population groups — men and women, various income levels, racial characteristics, and education levels — to match each group’s percentage of the voting population and its propensity to vote. The topline results Nate got back from the same raw data ranged across 5 or 6 percentage points — from one candidate being up by four to the other being up by one — depending on who was doing the weighting and the decisions they made.
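To make the weighting step concrete, here’s a minimal sketch with invented numbers (these are not Cohn’s data or any real poll; the groups, margins, and electorate shares are purely hypothetical). Two pollsters start from identical raw responses but hold different assumptions about the composition of the electorate, and the topline flips:

```python
# Illustrative only: made-up group margins and electorate shares.
# Two pollsters weight the same raw responses with different
# assumptions about who will actually vote, and the topline flips.

# Candidate A's margin within each demographic group, taken from the
# raw responses (A's share minus B's share, as a fraction).
group_margins = {
    "college":     +0.16,
    "non_college": -0.14,
}

# Two good-faith theories of the electorate: each pollster's guess at
# what share of actual voters each group will make up.
theories = {
    "Pollster 1": {"college": 0.40, "non_college": 0.60},
    "Pollster 2": {"college": 0.50, "non_college": 0.50},
}

def weighted_margin(margins, electorate_shares):
    """Reweight each group's raw margin by its assumed share of the electorate."""
    return sum(margins[g] * share for g, share in electorate_shares.items())

for pollster, theory in theories.items():
    m = weighted_margin(group_margins, theory)
    leader = "A" if m > 0 else "B"
    print(f"{pollster}: candidate {leader} up by {abs(m) * 100:.1f} points")
```

Same raw data, defensible assumptions on both sides, and one pollster shows candidate B up 2 while the other shows candidate A up 1. That’s the Cohn anecdote in miniature.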
It reminds us of the essential point: a significant amount of modern polling is about theories of the electorate. These aren’t random guesses. They’re based on a lot of data and experience. But they are still theories for which the most critical data to confirm or disconfirm them — the results of the election, the outcome of which they are trying to predict — is not yet available. Because they’re not random guesses, good pollsters should all be pretty close in their understanding of the shape of the electorate and thus the results of their polls. But in a close election, the range of the results based on different good faith theories can be bigger than the actual difference in support between the two candidates.