
Was the NEP Exit Poll Designed to Find a Divided America?

In my last post, I discussed the problems, and maybe even the pointlessness, of exit polls as tools for predicting election outcomes, given both the logistical challenge of assembling a meaningful sample across numerous narrowly fought races around the country and the fact that the true results arrive from the polling places and the states with very little delay, a delay that will only shrink as voting systems become more automated over time.

I wrapped up my critique and dismissal of the exit polls as a worthwhile election result predictor (as opposed to a provider of meaningless numbers to blather about until there is some real news) by mentioning the other role of exit polls:
However, let's at least consider the more hopeful possibility, that this information is extremely valuable for determining the opinion and sentiment of the electorate.
Certainly, this is a sentiment almost uniformly held by exit polling firms themselves, most other pollsters and any professionals or academics involved in public opinion research. Gerald Kosicki, Director of Ohio State's Center for Survey Research, describes the noble role of exit polls and the genuine gravity of their mission in his 2003 essay "Framing Elections":
Exit polls are the main ways that the meanings of elections can be interpreted in an unbiased and non-partisan manner. The goal is to explain the behaviors and intentions of voters and the data are coming from a random sample of those individual voters. Without exit polls, winners would be unbound in terms of their so-called mandate to undertake their favorite programs, whether or not those programs are why people elected them. Exit polls give an independent source of additional information that journalists, scholars and citizens can use to help understand what voters really meant when they voted a certain way on Election Day. These data are important and must be gathered systematically and well....

It may be that people, if they think at all about exit polls, may see them as being about speed. But scholars know that exit polls are about long-term interpretation and analysis in the service of democracy. Indeed, they may be irreplaceable resources to help us understand our elections. Creating these contemporaneous data is important work.

The demise of VNS earlier this year brings Mitofsky back onto center stage in the business of providing data to carry on the important work of facilitating the interpretation of the meaning of the elections in the world’s oldest democracy. Godspeed.
Kosicki's essay is part of a Roundtable that also includes a piece by Michael X. Delli Carpini, the Director of the Public Policy Program at Pew, as well as one from Michael Traugott, Chair of UMich's Communication Department and past President of the American Association of Public Opinion Research. (As an aside, longtime HC readers may have read that AAPOR is the organization whose professional and ethical standards and practices John Zogby tends to ignore.)

Both Delli Carpini and Traugott echo Kosicki's sentiments about the importance of good exit polling to our democracy. Further, all three suggest that this far more important function is unrelated, if not antithetical, to the objective of the media underwriters, which is the rapid, and ideally accurate, prediction of an election's outcome.

While that may be the highest and best use of exit poll data, as I see the "interpretations" of it (mostly just basic two-variable crosstabs) and look at the National Election Pool (NEP) data, available online, I am concerned that it missed the mark even there.

Worse than grotesquely simplifying the components of the most significant decision we as a nation make each quadrennium, I believe the exit poll results, due primarily to question design and wording choices, have caused unnecessary (but understandable) heartache and confusion among people reading the high-level crosstabs, and are being actively referenced by the media to deliver endless news reports about our "divided" nation.

My initial thinking when I started to write this piece was that the trouble with the voter opinion questions stemmed from an excessive focus on getting the race predictions right, and from some hasty trimming of those questions to shorten the questionnaire and make it more likely voters would take the time to respond (thus improving the sample quality and therefore the predictive power for calling races).

However, as I started looking at some of the voter opinion questions, particularly in comparison both to previous exit polls run by NEP's predecessors, Voter News Service (VNS) and Voter Research and Surveys (VRS), and to this year's LA Times exit poll, it seemed less and less likely that hasty or sloppy question design simply accentuated the sense of division that we all certainly have seen and felt. The more I looked at the questions and the formulation of the results, the more probable it seemed that the opinion questions were intentionally designed with a certain disregard for best practices in opinion research, presumably at the direction of the exit poll sponsors: the media networks.

Is the NEP Exit Poll Sample Suitable For Professional Opinion Research?

(To the reader: the bottom line is that it almost certainly can be, or is at least as good as the other exit polls conducted by the networks, which is to say, it's at least better than no information. Feel free to skip this section unless you really want to read about sample validity issues.)

If the most important function of an exit poll is to provide professional and academic researchers visibility into the minds and motivations of the voting public, it is worth asking whether the NEP data are even sufficiently valid to serve this function, particularly in light of the significant skewing that produced their gross failure to deliver on the media's prime goal of predicting the races.

General Sample Validity

The first item to consider in terms of the NEP data validity for opinion research is fairly simple: did they capture a statistically valid sample that reflects the broad diversity of opinion and motivations of that enormous yet amorphous beast, the electorate? I think this is a debatable question.

Certainly the media establishment is beating up the pollsters on the sampling of the exit polls because of the significant skew in the surveyed voting behavior. Of course, the same media organizations are running all kinds of stories about the "moral values" issue and all manner of other indications of the "deep divisions" in the country based entirely on the very same polling data.

However, the predictive flaws exhibited by the data do not prima facie indict the sample as a valid tool for opinion research, although the limited geographic coverage is a bit of a concern. I would need to see more of the site selection methodology, if not the raw data, to get comfortable with the validity of their geostrata sample, but we should give the NEP pollsters the benefit of the doubt: they are very experienced, and we can assume they have done their homework on sample development.

Sample Validity Due to Self-Selected Participation

Of course, another potential concern related to the sample's validity is that exit poll participants are somewhat self-selecting. While an exit interviewer may stop every fifth, tenth or 32nd person, participation is entirely optional. Thus, respondents are those who are willing and able (and maybe even want) to take the time to complete the survey. This may skew the sample in a number of ways, and the most significant skewing may well be in the motivation and opinion results.

Note that I am not suggesting that this self-selection bias showed up as a differential in willingness to participate between Kerry and Bush voters as such, at least not because they were Kerry voters or Bush voters. I imagine it is more likely that the self-selection bias, if it existed, was driven by demographic factors. However, because of some very sharp demographic distinctions between each candidate's voting base, the result would present itself as a differential between candidate supporters.

Sample Validity Can Impact Voter Opinion Analysis of Fresh Data

Here is an example of why sampling error should lead all of us to view the quick crosstabs developed on election day, and possibly even those being reported presently, with a wary eye: in the 1990 off-year election the exit polls worked as the media most wanted them to, calling the race outcomes correctly. However, the early crosstabulations indicated that Republican House candidates won 22% of the black vote, which was stunning; months later, the actual number turned out to be 18%, still a significant jump from the historical figure, but not "wow."
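To make the sampling math concrete, here is a minimal Python sketch of the 95% margin of error around a subgroup proportion; the subgroup sizes are hypothetical, chosen only for illustration:

    from math import sqrt

    def moe(p: float, n: int, z: float = 1.96) -> float:
        """95% margin of error for a simple random sample proportion."""
        return z * sqrt(p * (1 - p) / n)

    # Hypothetical: if black voters are ~10% of a 10,000-respondent exit poll,
    # the subgroup estimate rests on roughly 1,000 interviews.
    print(f"22% +/- {moe(0.22, 1000) * 100:.1f} points at n=1,000")  # +/- 2.6
    # An election-day partial tabulation rests on far fewer still.
    print(f"22% +/- {moe(0.22, 250) * 100:.1f} points at n=250")     # +/- 5.1

At early-day subgroup sizes, a miss like 22% versus the final 18% sits comfortably within ordinary sampling noise, before clustering and non-response effects are even counted.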

Another example of this sampling challenge is more recent. Last year Mitofsky and Lenski at Edison Media Research and the LA Times both conducted exit polls in the California recall election. Mitofsky had a sample of 4,214 voters (Complete results: [HTML] [PDF] [Mitofsky as HTML]) and the LA Times [PDF Results] used 5,205. While they both delivered key outcome data, they did have some noticeable demographic differences as well:
Despite these variations, and numerous trivial demographic variations inside the margins of error, both polls predicted statistically similar election results: Edison was 46-34-12-8; the Times was 49-32-13-6. However, the income and education sampling differences above do appear to have a significant impact when examining aggregate voter opinion. Of the substantially better-educated and somewhat wealthier Times sample, 26% thought the California economy was doing well and 74% thought it was doing badly, as compared to Edison's result of 16% well/84% poorly.
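That ten-point gap in perceptions of the economy is far too large to be sampling noise, which is the point: it reflects who ended up in each sample. A quick two-proportion z-test in Python, using the published sample sizes and (generously) treating both polls as simple random samples, makes this concrete:

    from math import sqrt

    def two_prop_z(p1, n1, p2, n2):
        """z statistic for the difference between two independent proportions."""
        pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
        se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
        return (p1 - p2) / se

    # "California economy doing well": LA Times 26% (n=5,205) vs. Edison 16% (n=4,214)
    print(f"z = {two_prop_z(0.26, 5205, 0.16, 4214):.1f}")  # ~11.7, far past ~1.96

Cluster designs inflate the true variance beyond the simple-random-sample assumption, but not nearly enough to close a gap that size; the difference almost certainly comes from the samples' differing education and income profiles.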

As for the raw NEP data, Washington Post Managing Editor Steve Coll, while discussing his disgust with the exit poll results in an online chat [transcript] commented that "The scale of the flaws in the exit polling -- so great by the early hours of the morning that it called into question every aspect of its analysis of demography and voter preference -- led us to reduce the claims and narrow the focus of those stories as we moved from edition to edition."

Adjustments Can Restore the Sample's Validity for Opinion Research

While the potential for significant sampling error should discourage all of us from blindly accepting early interpretations of various data correlations, the possibility of discerning valid insights from the NEP data is very real. If there is a sufficiently large set of total observations, the sample can be statistically adjusted by weighting or trimming on those dimensions for which we have confidence in the validity of the adjustment, like race, religion, income and gender, as well as the vote itself (the true numbers for all of these are known from the vote counts, voter registration data and the census). This does require some assumptions and judgment, so it may be imperfect, but samples always are: they represent the larger group, but are unlikely to precisely mirror it. And they managed it with the 2002 exit poll data, which was a complete debacle; I can't imagine these data could be any worse. [UPI explains 2002 voter behavior 54 weeks later]
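For the curious, here is a minimal Python sketch of the post-stratification weighting described above; the cell names and all of the numbers are invented for illustration, and real exit poll weighting is considerably more involved:

    # Post-stratification: weight each respondent group so the sample's
    # composition matches known population shares (from vote counts,
    # registration data and the census). All figures are hypothetical.
    sample = [
        # (cell, share of sample, % answering "yes" to some opinion item)
        ("college degree",    0.50, 0.40),
        ("no college degree", 0.50, 0.60),
    ]
    population_share = {"college degree": 0.42, "no college degree": 0.58}

    unweighted = sum(share * yes for _, share, yes in sample)
    weighted = sum(
        (population_share[cell] / share) * share * yes  # weight * cell contribution
        for cell, share, yes in sample
    )
    print(f"unweighted: {unweighted:.1%}")  # 50.0%
    print(f"weighted:   {weighted:.1%}")    # 51.6%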

However, even after the data gets normalized and adjusted to correct the sampling errors, there is a serious and potentially intractable problem with this year's exit polls: the survey instrument itself, that is, the questions.

The Questionnaire Compromises NEP's Validity for Objective Voter Opinion Analysis

...and I'm not talking about hanging chads or punchcard ballots.

2004 Exit Polls Offer Very Limited Insight Into Voter Preferences about Policy

Even if the NEP sample didn't have any issues, the poll didn't solicit nearly enough information to gain real insight into voter opinions about policy. The opinion research reported in these exit polls is not nearly as extensive or informative as the opinion data collected in the 2000 exit polls, which provide much more insight into what was important to voters. [CNN Results] [National Results at LSU]

It is possible that there is more data that just hasn't been vetted and released, but I'm doubtful, as Lenski and Mitofsky blamed long exit poll questionnaires for some of the difficulty in getting a larger sample back when they were handling the 2000 exit polls.

But the worst part is that in the course of trimming down the survey not only were many questions eliminated that could have yielded valuable insights, but the few that remain may have been simplified to the point of being flawed. Observations based on these flawed questions are now unfortunately being presented as insight.

Just How Important is Writing Good Survey Questions, and How Hard Can It Be?

It's incredibly important, and it really is difficult. Just as a quick example of the difference wording can make on even a simple yes-no question: the question "Do you support limiting marriage to heterosexual couples?" polls about ten points higher than "Do you support banning gay marriage?"

The following is an excerpt from the AAPOR's list of best practices for survey and public opinion research. (Italics added for emphasis)
Take great care in matching question wording to the concepts being measured and the population studied.

The ideal survey or poll recognizes that planning the questionnaire is one of the most critical stages in the survey development process, and gives careful attention to all phases of questionnaire development and design, including: definition of topics, concepts and content; question wording and order; and questionnaire length and format. Ideally, multiple rather than single indicators or questions should be included for all key constructs.

Beyond their specific content, however, the manner in which questions are asked, as well as the specific response categories provided, can greatly affect the results of a survey. Concepts should be clearly defined and questions unambiguously phrased. Question wording should be carefully examined for special sensitivity or bias. Techniques should be developed to minimize the discomfort or apprehension of both respondents and interviewers when dealing with sensitive subject matter. Ways should be devised to keep respondent mistakes and biases (e.g., memory of past events) to a minimum, and to measure those that cannot be eliminated. To accomplish these objectives, well-established cognitive research methods (e.g., paraphrasing and "think aloud" interviews) and similar methods (e.g., behavioral coding of interviewer-respondent interactions) should be employed with persons similar to those to be surveyed to assess and improve all key questions along these various dimensions.
What Questions Were Flawed?

In any event, since there isn't much opinion data, the media has to report on what limited information they have, and apparently with only the most trivial level of analysis. The most obviously flawed questions that jumped out at me are the "most important issue" and "most important quality" questions. I am particularly frustrated with this "issue" question, as it has been at the center of the "divided country" story, and is becoming fuel for those in various camps seeking to validate an opinion or support an agenda. Here's the question:
Question: Which ONE issue mattered most in deciding how you voted for president? (Check only one)


Issue            % of Sample   Bush   Kerry   Nader
Taxes                 5         57%    43%     0%
Education             4         26%    73%      *
Iraq                 15         26%    73%     0%
Terrorism            19         86%    14%     0%
Economy/Jobs         20         18%    80%     0%
Moral values         22         80%    18%     1%
Health care           8         23%    77%      *
What problems does this question have? Plenty.

First, it asks a voter to pick a single issue that most influenced their voting choice, which is fairly absurd for anything but a single-issue voter, particularly in this campaign. Further, the question is not equivalent to the more relevant insight into voter opinion: "what do you feel is the most important issue affected by this election?" Imagine a voter with two issues of equal importance, each favoring a different candidate. For example, the voter might feel that terrorism will be better fought by Bush but believe Kerry would be better for domestic job creation. Those two issues cancel each other out, so under the wording of the question, the issue that mattered most in formulating the voter's decision would actually be the voter's third most important issue.
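To make that concrete, here is a tiny Python sketch of such a voter; all of the weights and leanings are invented for illustration:

    # A hypothetical voter whose two equally weighted top issues favor
    # different candidates (+1 = favors Bush, -1 = favors Kerry).
    issues = {
        "terrorism":    (10, +1),  # most important, favors Bush
        "jobs/economy": (10, -1),  # equally important, favors Kerry
        "health care":  (6, -1),   # the third-ranked issue
    }
    score = sum(weight * lean for weight, lean in issues.values())
    print("Kerry" if score < 0 else "Bush")  # Kerry: the third issue broke the tie

Asked which one issue "mattered most in deciding" the vote, this voter should answer "health care," the tiebreaker, even though it is only their third most important issue.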

The other problem is with the choices:
Obviously, when a choice like "moral values" admits multiple interpretations, it is very difficult to determine what preferences voters might really be indicating; and with Bush getting 80% of this group, the ambiguity provides rhetorical fodder for a variety of positions.

In fairness, it is worth noting that the idea of "Moral Values" as an issue of concern makes some sense when one examines the 2000 exit poll questions and results:
It is possible that although the pollsters needed to cut those questions to shorten the survey, they still wanted some ability to gauge the importance of the issue to people. It is also possible that they knew from the 2000 results that, given a sufficiently limited set of choices for the "most important issue," it would be chosen, most likely heavily by Bush voters.

I think the easiest way both to illustrate how screwed up this question is and to resolve the ambiguity is to look at an exit poll that does it better and compare the results. While the media consortium approach to exit polling has all but eliminated any other national exit polling that could provide a cross-check, the LA Times does run a national exit poll. While it was based on a smaller sample of 5,154 voters from 136 polling places, and it does oversample California, its results for the "major issue" question are largely consistent with the NEP exit poll data and provide some guidance, if not clarity, about what the issue of Moral Values means:
Question: Which issues, if any, were most important to you in deciding how you would vote for president today? (UP TO TWO REPLIES ACCEPTED)


Issue                                              Total   Bush   Kerry
Moral/Ethical values                                40%    54%    24%
Jobs/economy                                        33%    18%    48%
Terrorism/Homeland security                         29%    45%    13%
Situation in Iraq                                   16%    11%    21%
Education                                           15%    12%    18%
Social issues such as abortion and gay marriage     15%    14%    15%
Taxes                                                9%    11%     7%
Health care                                          9%     5%    14%
Foreign affairs                                      5%     3%     8%
Social Security                                      5%     3%     7%
Medicare/Prescription drugs                          3%     2%     4%
None of the above                                    2%     2%     3%

The first thing to note here is that the LA Times result largely validates the 22% figure NEP reported for the Moral Values issue (to the extent that 40% of voters considered it one of their two most important issues). The candidate balance also seems reasonably in line with the NEP finding that the Moral Values group is primarily composed of Bush voters, as do some of the other answer options.

An important thing these comparative results show is that voters appear to view the "issue" of Moral Values as fundamentally distinct from the conservative social agenda that includes abortion and gay marriage. That issue polls much lower, with only 15% of voters ranking it in their top two issues, and that 15% is split basically evenly between Bush and Kerry voters, unlike the Moral Values issue, which is more significant to Bush voters in both polls.

Finally, by allowing voters to pick two issues instead of forcing them to pick a single one, the LA Times reveals that rather than being miles apart about what issues are important, both sides are concerned about basically the same main issues. This is not to say that they don't have substantive differences of opinion about the best policy or approach for a particular issue, but the LA Times poll doesn't make it look like both sides are talking past one another. In fact, putting the top six (out of 12) issues for each candidate's supporters in order reveals that supporters of both candidates have fairly common areas of concern:

Rank   Bush Voters' Top Issues                    Kerry Voters' Top Issues
1      Moral/Ethical values                       Jobs/economy
2      Terrorism/Homeland security                Moral/Ethical values
3      Jobs/economy                               Situation in Iraq
4      Social issues/abortion and gay marriage    Education
5      Education                                  Social issues/abortion and gay marriage
6      Situation in Iraq                          Health care


Somehow that just doesn't look like a portrait of a profoundly divided nation to me.

The second offending question was the "most important quality" question:
Question: Which ONE candidate quality mattered most in deciding how you voted for president? (Check only one)


Quality                              % of Sample   Bush   Kerry   Nader
He cares about people like me             9         24%    75%     1%
He has strong religious faith             8         91%     8%      *
He is honest and trustworthy             11         70%    29%     1%
He is a strong leader                    17         87%    12%     0%
He is intelligent                         7          9%    91%     0%
He will bring about needed change        24          5%    95%     0%
He has clear stands on the issues        17         79%    20%     0%
So what's the problem with this question? Of course there is the "choose only one" instruction, but the problem is much deeper than that. Actually, examining this question was what really made me think that the survey was being consciously designed to create the impression of an enormous "gap" between each candidate's supporters.

First, "he will bring about needed change" is a completely inappropriate choice, especially in a two-way race (no offense, Mr Nader). That is, by definition, what a challenger will do, that is not a quality of the challenger as a person. And, because of the large number of Kerry voters who oppose Bush more than support Kerry, this would be the choice they would almost automatically pick. If your skeptical, I just did the math and maybe it's just coincedence, but of the 25% of all voters who said their vote was mostly against the opponent, 70% of them voted for Kerry, which is 17.5% of the total sample, and larger than the percentage of the sample for any other choice for this question.

Second, the inclusion of "clear stands on the issues" appears aimed straight at picking up Bush supporters, given the campaign theme of Kerry's indecisiveness. Obviously, anyone adding the "strong religious faith" choice would know it would skew almost entirely toward Bush; conversely, adding "is intelligent" would skew wildly toward Kerry.

I mention those four items because two of them have never been choices on presidential election exit polls in the past, and they actually bumped off perennial choices like "has good judgment" and "is likeable." And I don't understand why the pollsters wouldn't have been perfectly happy keeping the choice added in 2000, "understands complex issues," rather than switching to intelligence.

Now consider the same question and its choices from the 2000 exit poll. Would this have been a more interesting and informative set of choices to gauge voter sentiment? I think it would have been.


Quality                        % of Sample   Bush   Gore   Nader
Understands complex issues          13        19%    75%    4%
Honest/Trustworthy                  24        80%    15%    3%
Cares about people like me          12        31%    63%    5%
Has experience                      15        17%    82%    1%
Likeable                             2        59%    38%    2%
Strong leader                       14        64%    34%    1%
Good judgment                       13        50%    48%    1%

If you compare the results between the 2000 and 2004 versions of the question, you can observe two things. First, the percentage of voters who selected each quality as the most important is more evenly balanced in the 2000 exit poll, which I take as a good indicator that the 2000 options were a much better set of choices. More significantly, note that the average gap between the candidates' levels of support across the "important quality" choices nearly doubles in the 2004 poll: from 38.7 points to 68.7 points. For the four choices added this year, the gap is 78.5 points.
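Here is that calculation, taken straight from the two tables above, for anyone who wants to reproduce it:

    def avg_gap(results):
        """Average absolute Bush-minus-opponent gap, in percentage points."""
        return sum(abs(b - o) for b, o in results) / len(results)

    # (Bush %, Kerry %) for each 2004 "most important quality" choice
    q2004 = [(24, 75), (91, 8), (70, 29), (87, 12), (9, 91), (5, 95), (79, 20)]
    # (Bush %, Gore %) for each 2000 choice
    q2000 = [(19, 75), (80, 15), (31, 63), (17, 82), (59, 38), (64, 34), (50, 48)]
    # The four choices new in 2004: faith, intelligent, needed change, clear stands
    new2004 = [(91, 8), (9, 91), (5, 95), (79, 20)]

    print(f"2000 average gap: {avg_gap(q2000):.1f} points")    # 38.7
    print(f"2004 average gap: {avg_gap(q2004):.1f} points")    # 68.7
    print(f"new 2004 choices: {avg_gap(new2004):.1f} points")  # 78.5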

Also consider the LA Times choices and results:
Question: What did you like most about your choice for president? (UP TO TWO REPLIES ACCEPTED)


Quality                                                  All   Bush   Kerry
Strong leader                                            37%    55%    19%
Shares my values                                         22%    23%    20%
Cares about people like me                               21%    17%    26%
Has honesty/integrity                                    21%    27%    15%
His overall political ideology                           13%     6%    20%
Best at keeping country safe from terrorism              13%    19%     6%
Will build respect for U.S. around the world             13%     4%    23%
Will strengthen nation's economy                         11%     3%    19%
More effective commander-in-chief                        10%     8%    11%
He will stand firm in the positions he takes              9%    15%     3%
Has a plan for achieving success in Iraq                  3%     3%     3%
His military service and record at the time of Vietnam    2%     1%     4%
He is likable                                             2%     2%     2%
None of the above                                         4%     2%     6%
As with the issues question, the LA Times version is both more genuinely revealing as to voter sentiment and doesn't present as divided a portrait of the electorate as the NEP question does.

What Went Wrong With The Questions?

As I mentioned earlier, I initially thought part of the reason was the well-intended reduction of the length of the exit survey in the hopes of increasing the likelihood of voters participating.

Television-Dominated Consortium Members Dictate Questionnaire Design

Good opinion research starts with good questionnaire planning, and this is clearly a job for a public opinion research professional. The exit polls were managed by two very experienced pollsters, so one would hope they developed and validated a robust questionnaire, right? We can presume they handled the key demographic and voting questions, which are pretty standard each election cycle. Unfortunately, according to the plainest interpretation of the NEP FAQ, the opinion questions themselves appear to have been written by the media firms (the five news networks and the AP that make up the NEP consortium).

If exit poll data is a source for developing a sophisticated understanding of the electorate's motivations, this is very frustrating. Not to be uncharitable, but the broadcast news media can rarely be accused of sophisticated analysis of anything, unless it happens to be a celebrity crime; they often appear incapable of communicating anything but the most basic poll results. Leaving such a critical component of the survey design to anything less than a professional opinion researcher strikes me as a potentially fatal methodological flaw.

In fact, I am coming around to the suspicion (and that's all it is right now) that the television networks chose the questions precisely to paint the picture of a "divided America."

Other Indications That Results Were Engineered to Highlight "The Divide"

Change in Summarization by Education Level from Prior Years

The raw NEP questionnaire offers voters five responses for their education level. This year, the results were as follows:


Education                          % of Sample   Bush   Kerry   Nader
Did not complete high school            4         49%    50%     0%
High school graduate                   22         52%    47%     0%
Some college or associate degree       32         54%    46%     0%
College graduate                       26         52%    46%     1%
Postgraduate study                     16         44%    55%     1%

In the 2000 election reporting, VNS summarized the data between "no college" (the first two responses) and "college-educated" (which includes the "some college" response). This year, the summarization was changed to split between "No College Degree" and "College Degree." Of course, this is not precisely accurate either, because the "Some College" category is really "Some College or Associate Degree," so the "College Degree" group should really be labeled "Bachelor's Degree or Higher" or similar.


Education            % of Sample   Bush   Kerry   Nader
No College Degree         58        53%    47%     0%
College Graduate          42        49%    49%     1%

By moving the largest segment of the sample (the "some college" group) across the line, the report can again be about "the divide," with Bush's support being stronger among those without a college degree. If they had maintained the 2000 breaks (and labels), it would have looked like this:


Education           % of Sample   Bush   Kerry   Nader
No College               26        52%    48%     0%
College-Educated         74        52%    48%      *

That is to say, there would not have been another indication of the deep rift.
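As a sanity check, both sets of summary figures can be reproduced from the five-way table by taking a share-weighted average of each bucket's rows. Everything below comes from the NEP table above, and the results match the published summaries to within a point (the published row percentages are themselves rounded):

    # Rows from the five-way NEP education table: (sample share %, Bush %, Kerry %)
    rows = {
        "no high school": (4, 49, 50),
        "HS graduate":    (22, 52, 47),
        "some college":   (32, 54, 46),
        "college grad":   (26, 52, 46),
        "postgraduate":   (16, 44, 55),
    }

    def combine(names):
        """Share-weighted Bush/Kerry percentages for a merged bucket."""
        total = sum(rows[n][0] for n in names)
        bush = round(sum(rows[n][0] * rows[n][1] for n in names) / total)
        kerry = round(sum(rows[n][0] * rows[n][2] for n in names) / total)
        return total, bush, kerry

    print(combine(["no high school", "HS graduate"]))                  # 2000-style "No College"
    print(combine(["some college", "college grad", "postgraduate"]))   # 2000-style "College-Educated"
    print(combine(["no high school", "HS graduate", "some college"]))  # 2004 "No College Degree"
    print(combine(["college grad", "postgraduate"]))                   # 2004 "College Degree"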

Is There Any Motivation for Television Networks to Highlight Voter Division?

I'm not 100% sure why they would want to create that image so badly it would be worth effectively rigging the voter opinion results. The only rationale that comes to mind is that an America divided along largely demographic lines is a better story for television.

I suppose television as a medium can't compete as strongly with print or online media in the deeper discussion of policy, and that deeper discussion is exactly what our nation needs. If polling revealed the commonsense fact that most Americans, regardless of who they vote for, are generally concerned about the same sorts of things, and that the conflict is really about the philosophical approach to achieving those common goals, then Americans would realize that we need to come together, across party lines and ideologies, and have a dialog about how best to achieve them. That is usually what I try to do here, but I think many people find it easier to believe that "the other guys" are just too far apart from them to even have a conversation.

Believe me, I am not saying that the idea of the divide is a myth; I think we can all see examples of it as we look around. I just think the television networks may be trying to juice up that story.

Update: Someone pointed me to ABC Polling Director Gary Langer's NYT op-ed piece, in which he discusses the "moral values" question choice and his opposition to including it in the poll: "A poorly devised exit poll question and a dose of spin are threatening to undermine our understanding of the 2004 presidential election."

Please, comment away. I'm not a conspiracy theorist; I initially assumed this was just another case of screw-ups with the exit polling, which, based on history, is a far more expected story than the exit polls not having problems. But taken together, these factors just don't look like a mistake.

Sunday, November 07, 2004