I have a long list of methodological issues with PPP polls. Today, the list goes onto a second page. It turns out that PPP permits respondents who answer "all 1s" to every question, which distorts its results.
I’ve written at length about PPP’s troubling methodological choices. The firm failed to disclose important methodological decisions, offered inconsistent or baffling explanations, and continues to employ an unscientific and inconsistent approach. Altogether, it’s difficult to distinguish PPP’s polling from weighting toward an intended result. Now the question is: how should we use PPP going forward? It’s a tough question.
Last week, I wrote a long piece about PPP’s troubling methodological choices. Some people assumed it was a continuation of the previous day’s fight between Nate Silver and PPP, but it wasn’t. Other people latched onto certain elements at the expense of the rest. Many of those elements are important, like PPP’s baffling incompetence, or its decision to surreptitiously let respondents’ self-reported ’08 vote inform the racial composition of the electorate.
The problem with PPP's methodology
No pollster attracts more love and hate than Public Policy Polling. The Democratic-aligned polling firm routinely asks questions that poke fun at Republicans, like whether then-Senator Barack Obama was responsible for Hurricane Katrina. Not coincidentally, Republicans routinely accuse the firm of being biased toward Democrats. Last fall, PPP was front and center in conservative complaints about allegedly skewed polls. But when the election results came in, PPP’s polls were vindicated and the conspiracy-minded critics were debunked.
Yesterday, my inbox blazed with the news that Obama's standing among African Americans was plummeting in North Carolina.