How many pollsters does it take to screw in a light bulb? Part two.

Posted on August 17, 2011



Previously — How many pollsters does it take to screw in a light bulb? Part one.

———————————-

It's Mr. Hyde who votes, but Dr. Jekyll who answers the poll.

The previous post ended with a look at the way results can get skewed when respondents try to manipulate the polls.

Another form of manipulation is the “social desirability bias,” although here, rather than trying to manipulate the polls, respondents are trying to manipulate their own image by providing answers they believe conform to societal values. This can skew results by making nuanced beliefs less apparent. Suppose, for instance, a candidate runs on a platform to increase funding for affordable housing in order to reduce homelessness — a position the polls show is supported by 70% of the population. On election day, however, he loses to an opponent promising to freeze funding for public housing and increase funding for mental health facilities. Was the candidate’s defeat due to a significant shift in public opinion? Probably not. More likely, a large number of the poll respondents privately believed that, at this point, homelessness was due more to mental problems and drug addiction than to a lack of affordable housing, but were reluctant to say “no” to public housing out of a fear of appearing heartless.

A variation (or suspected variation) of this is known as The Bradley Effect.

Tom Bradley

Back in 1982, when black candidate Tom Bradley was running against white candidate George Deukmejian for Governor of California, polls showed Bradley with a double-digit lead. So confident were many pollsters that they declared him the victor before all the ballots had been counted. Imagine their embarrassment, then, when he lost. Many analysts have attributed this discrepancy to white Deukmejian voters wanting to look progressive by declaring their support for a black candidate. (It should be noted, however, that a private pollster, Gary Lawrence, called the election accurately. Furthermore, some analysts, such as Sal Russo, writing in the Wall Street Journal, believe Bradley lost for reasons unrelated to race, and that the failure to predict the results was a fault of polling methods, not duplicitous respondents.)

Whether or not the Bradley Effect exists, however, the “social desirability bias” is certainly real, and has exerted a strong influence on many polls, including (but not limited to) polls dealing with:

  • Sexual behavior and fantasies
  • Personal income and earnings
  • Feelings of low self-worth and/or powerlessness
  • Excretory functions
  • Compliance with medicinal dosing schedules
  • Religion
  • Patriotism
  • Bigotry and intolerance
  • Intellectual achievements
  • Physical appearance
  • Acts of real or imagined physical violence
  • Indicators of “kindness” or “benevolence”
  • Illegal acts

Naturally, professional pollsters attempt to compensate for this through such methodological devices as the Marlowe-Crowne Social Desirability Scale, but they can’t always be sure of the results. To further complicate matters, things that were socially desirable a decade ago may be socially undesirable now, and vice versa. For instance, the author of the Wikipedia article from which the above list was taken notes that the social desirability bias often causes respondents to sanitize sexual behaviour and fantasies. While this was undoubtedly true at one point, it is less certain in an age of teen sexting, blatantly sexualized lyrics in popular music, and the promotion of sexual experimentation under a banner of feminism. Likewise, although people were once quite hesitant to admit having been victimised, it would now appear to be a badge of honour — as evidenced by the steady parade of victims lining up to tell their stories to newspaper reporters, television interviewers, social workers, and talk show hosts.
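To make that idea a little more concrete, here is a minimal sketch, in Python, of how an analyst might down-weight respondents who answer a short true/false desirability scale in a suspiciously flattering way. The items, the scoring, and the weighting formula are all invented for illustration; they are not the actual Marlowe-Crowne instrument or any real pollster’s adjustment method.

```python
# Illustrative sketch only: the items, scoring, and weighting scheme below are
# invented for demonstration; they are not a real pollster's method.

def social_desirability_score(answers, keyed_true):
    """Count how many true/false items were answered in the 'socially desirable'
    direction (e.g. claiming to have never disliked anyone)."""
    return sum(1 for item, ans in answers.items() if ans == keyed_true.get(item))

def weight_for(score, n_items, max_downweight=0.5):
    """Linearly down-weight respondents as their desirability score rises:
    a score of 0 keeps full weight 1.0; a perfect score drops to 0.5."""
    return 1.0 - max_downweight * (score / n_items)

# Hypothetical three-item mini-scale; True is the "too good to be true" answer.
keyed_true = {"never_lied": True, "always_polite": True, "never_resentful": True}

respondents = [
    {"id": 1, "supports_funding": True,
     "answers": {"never_lied": True, "always_polite": True, "never_resentful": True}},
    {"id": 2, "supports_funding": True,
     "answers": {"never_lied": False, "always_polite": True, "never_resentful": False}},
    {"id": 3, "supports_funding": False,
     "answers": {"never_lied": False, "always_polite": False, "never_resentful": False}},
]

weighted_yes = weighted_total = 0.0
for r in respondents:
    score = social_desirability_score(r["answers"], keyed_true)
    w = weight_for(score, len(keyed_true))
    weighted_total += w
    weighted_yes += w * r["supports_funding"]

raw = sum(r["supports_funding"] for r in respondents) / len(respondents)
print(f"Raw support: {raw:.0%}")                                              # 67%
print(f"Desirability-weighted support: {weighted_yes / weighted_total:.0%}")  # 57%
```

Even in this toy example, the “adjusted” figure depends entirely on the weighting scheme the analyst happens to choose, which is precisely why pollsters can’t always be sure of the results.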

Seriously. Who are you going to believe about medical science: Jenny or some stuffy scientist?

All of these problems (and a host of others) make polls an extremely unreliable source of information — and policies built upon them often fail for exactly this reason. Perhaps the most worrisome aspect, however, is the use of public opinion polls to decide issues more properly decided on scientific criteria alone. Does Dr. Paolo Zamboni’s controversial treatment for MS work? Yes, according to a large percentage of the population. Doctors conducting clinical trials, on the other hand, are less confident. Likewise, magnetic therapy, aside from a few very specific applications, continues to prove completely worthless, no matter how many millions of people believe in the power of magic bracelets.

A compelling example of how much damage opinion-based decisions can inflict on public health is the fallout from Andrew Wakefield’s fraudulent report linking MMR vaccines to autism. First published in 1998, Wakefield’s “theory” has enjoyed several cycles of popularity, each more intense than the last. The support it has gained from such scientific dignitaries as the Dixie Chicks, Jim Carrey, and Playboy model Jenny McCarthy has convinced hundreds of thousands of European and North American parents to refuse inoculations for their children. His paper was thoroughly and convincingly refuted a few years back, and was finally retracted in 2010, but as the following graph shows, the damage he’s caused has been immense.

But this brings up a rather serious question: In a democracy, should issues of public health be decided by medical researchers or public opinion? If a majority of people believed that operations should be scheduled according to astrological principles, should public health workers follow the wishes of the people, or of the doctors?

Before coming solidly down on the side of the doctors, it should be remembered that while public opinion may often be misinformed, ignorant, and even wilfully stupid, it is nevertheless the bedrock of our society. Furthermore, scientific consensus itself is often strongest just before being overturned by a fundamental paradigm shift, and science that has been declared settled has an annoying tendency to become, quite suddenly, very unsettled.

Not all polls are completely worthless, and valuable insights can sometimes be gained from them, but separating the wheat from the chaff requires long and diligent analysis, and the process is prone to many errors along the way.

How interesting it would be to have a ten-year moratorium on polls. This, of course, is not only impractical, but also an outrageous violation of our rights and freedoms. Still, what changes might we witness in elections, public discourse, and government policies? It’s possible that, free from the constraints of being shoe-horned into quantified scales of pre-set answers, even our opinions may subtly change, becoming more nuanced, and our differences less polarised.

On the other hand, probably not. Still, it might stop the phone from ringing quite so much during dinner hours.

And finally — the answer to the riddle

Q: How many pollsters does it take to screw in a light bulb?

A: Just two. The trick is getting them into the light bulb. (Ba dum dum.)
