Truth in numbers: The reliability of political polling
There's a saying that numbers never lie. But numbers can be deciphered, manipulated, interpreted and misinterpreted, all in an effort to tell one story or another. Just ask a pollster or campaign manager for anyone running for public office, especially right now, as we head into the final days of the 2018 midterm general election.
In the abstract, a pollster is someone who conducts and/or analyzes opinion polls on any matter or topic. A strong pollster provides a snapshot in time: the feelings and state of mind of those who answered the poll. A poll should not predict but reflect, distilling the public's thoughts at the time it was taken and putting them in perspective as one tool, one measurement.
“It is a snapshot in time – it could change tomorrow,” noted Tim Malloy, assistant director of the Quinnipiac University Poll in Connecticut. “The purpose of polling is a bit like reporting, but of the public. It's taking the temperature of the public's feelings, of issues and politicians, and then relating it to government.
“It's not our intent to shape things,” Malloy continued. “Our hope is that politicians and leaders see them (the polls) and understand them, and we hope the general public sees them. It's telling the broader public what we learned from calling other Americans.”
Ed Sarpolus, founder and executive director of Target-Insyght, which has done polling for The Detroit News, Detroit Free Press, Los Angeles Times, Washington Post, and WDIV-TV, is emphatic that the role of a pollster is not just to report what they've polled.
“The nature of being a pollster, your job is to interpret the research, data and statistics,” Sarpolus said. “Sometimes it is looking at the margins of error, and sometimes it's dismissing the margins of error. It's looking at the patterns and trends, because those can be telling me something. I have to study history, other sources, and previous voting patterns (in a specific race). I'm always less interested in what the politically correct response is than in what actually is...what the data tells me.”
Polling is now a big business, with many Americans viewing polls with skepticism and distrust – or interpreting them through partisan filters and self-interest. Part of that distrust has been fostered by President Trump, who has consistently insisted to his followers that the polls from the 2016 election were wrong – when pollsters say they generally were accurate, with a majority of polls showing only that Democrat Hillary Clinton was leading the 2016 presidential contest, and in the final days by just two or three percentage points, which is roughly what she won the national popular vote by.
The Brookings Institution noted, “If you took a public opinion poll about polls, odds are that a majority would offer some rather unfavorable views of pollsters and the uses to which their work is put. Many potential respondents might simply slam down their telephones. Yet if you asked whether politicians, business leaders, and journalists should pay attention to the people's voices, almost everyone would say yes.”
The Brookings Institution asks the same question we all do – “Can we trust the polls? Under the best of circumstances, the answer is 'Not necessarily without a fair amount of detailed information about how they were conducted.'”
The Pew Research Center concurs, noting that the accuracy of a poll depends on how it was conducted.
“It all depends on the poll, and the crafting of the poll,” explained Jen Eyer, senior vice president, Vanguard Public Affairs, which provides public relations, marketing and consulting services, but not polling work, for candidates. This cycle, Eyer is working with 9th District Democratic candidate Andy Levin; the district covers Bloomfield Township, Beverly Hills, Bingham Farms, Franklin, Royal Oak, Huntington Woods, and part of Macomb County.
She said campaigns typically choose their own pollsters. “Polls have to be careful not to skew the results. A carefully crafted, worded poll can help predict the results of a race or ballot initiative, but it's important not to rely just on one poll. Further, it's important (for the candidates and the strategists) to know how the messages are being received by voters. But polls are a guidepost – every candidate I've ever worked with has certain core beliefs they are going to stick with. Polls allow them to know which issues are most important to voters, and what voters want to hear at that time – so you can tell them what you're going to do about certain issues they want to know about right now.”
She said a good example might be learning that tax cuts are not top of mind for the public, even if they are a core belief of a candidate, while health care concerns are very important to voters. A candidate might pivot to speaking about health care after learning that from polling, to appeal to voters' concerns. Once in office, if the candidate prevails, he or she might return to working on tax cuts, along with the health care issue.
“Good pollsters, especially this cycle, are taking deeper dives. They are not just asking questions about candidates, but about voters' feelings. There are pollsters who are engaged in the process,” noted Dennis Darnoi, a political consultant with Densar Consulting. “In 2016, voters said, 'Oh yeah, I'm going to vote,' and were counted as 'likely voters,' but many pollsters did no follow-up. They just took it at face value. They just did not do the deeper dive. This cycle, it's a different electorate and a different time.
“Before, you could just look at someone's voting history – you could just look at that data and not press any harder,” Darnoi said.
“What I'm seeing in '18 from trusted pollsters, is they're really gaining an understanding of the motivations of the voters,” he said. “It's really cheap to do a poll – and it doesn't have to be good or bad.
“The good pollsters, who do live polls, which are very labor intensive and expensive – those are very accurate,” Darnoi said. “But they get lobbed in with the cheap pollsters. Unless there is a clear distinction between cheap pollsters and good pollsters, it's going to be said that polling is obsolete.”
Then there was the 2016 election, when polls seemingly led pundits, media and voters to believe that former Secretary of State Hillary Clinton, the Democrat, was headed for a landslide win in the presidential election, only to have Republican Donald Trump squeak by and win the presidency.
“What occurred in the 2016 general election is that every reliable poll was done in advance – at least 10 days or more in advance,” said Richard Czuba, founder, Glengariff Group. Czuba said he is currently doing the polling for The Detroit News and WDIV-TV. “Polling is a snapshot of a particular moment of when a poll comes out. Yet, races close in the final five days, the weekend before an election. I don't know of any media that can afford to do polls that close – and I wouldn't trust any that is spending money to. It would be influencing the race.”
He said that in 2016, his last public opinion polls were done on October 10 and 11.
“That's four weeks away from the election. There was no way it was meant to predict a race,” he said. “Instead, it was meant to help inform people of which way people were thinking.”
The point of polls, Czuba emphasized, is not prediction, but as a barometer.
“We need to be more responsible with public polls, because it (has the potential to) change the narrative,” he said. “It sears itself into the public mindset, and tells the voter population what is happening.
“Polls should be much more than predicting a horse race.”
This year, for the 2018 November 6 election, Czuba said he and many other pollsters are looking at “how people are viewing races through the lens of how they personally view the president. Rather than just putting out the numbers, we're putting out what is motivating voters.”
In order to accomplish that, Czuba, Sarpolus and other experienced pollsters are working in different ways. Few rely on automated “robo-calls,” which by federal law can only be made to landlines. In an effort to address the ever-growing scourge of telemarketing calls, Congress in 1991 enacted the Telephone Consumer Protection Act (TCPA), administered by the Federal Communications Commission (FCC). The TCPA restricts telemarketing calls and the use of automatic telephone dialing systems and artificial or prerecorded voice messages. The law was updated in 2012 and requires telemarketers to respect Do Not Call lists for landlines; political campaigns and polling are exempt. However, telemarketing calls to wireless phones remain illegal, and it is illegal for anyone, other than for emergency purposes, to use an automatic phone dialing system or an artificial or prerecorded voice message to call a cell phone. That doesn't mean all campaigns follow the law.
“The biggest change in polling in the last several years is people getting rid of their landlines – so the trick is getting them on their cell phones. Anyone looking to save a dime is getting rid of their landline,” said Dave Doyle, executive vice president of Marketing Research Group (MRG). “When there were landlines, you were pretty sure you were getting a certain geographical area – but now with a 248 area code, you could be in Los Angeles, New York, or Mozambique. The key is figuring out the right mix of cell phones, and if you have a large enough area with various demographics.”
Doyle has worked with state Rep. Mike McCready (R-Birmingham) in his three previous races for the House 40th District, as well as in his current state Senate battle for the 12th District (Bloomfield Township, Beverly Hills, Bingham Farms, Franklin, Auburn Hills, Pontiac, Clarkston, Independence Township, Keego Harbor, Sylvan Lake, Oakland Township, Addison Township, Orion Township, Oxford, and Southfield Township). “In an area like Bloomfield Township, you want to make sure you have a good mix of the community,” he said. “It's harder to do with cell phones. You have to ask more questions to make sure you have the right geographic mix. You have to have good live callers, because the best polls are with live callers.”
Doyle said that some firms get around the law prohibiting “robocalls,” or automated surveys only to landlines, by having a live dialer call to a cell phone, and then connecting it to a recording.
“We do not do that,” he emphasized.
“Anyone who is only calling landlines, and not calling cell phones – you're obsolete and irrelevant,” Malloy, of Quinnipiac, said.
Czuba concurred. “Michigan has a lot of schlock pollsters – driven by doing automated polls,” he said, “where they never call cell phones and just push a button.”
Glengariff only uses live callers “talking to real people,” Czuba said.
“Automated polls are known for underrepresenting young people, who primarily have cell phones rather than landlines,” Jill Alper, of Alper Strategies, said to Michigan Information Research Service (MIRS) in early October.
“Polls and surveys have to fold in cell phones as people have cut the cord on landlines,” said Arnold Weinfeld, interim director at Michigan State University's Institute for Public Policy and Social Research, noting that makes it “ever more challenging these days. Response rates are very low right now – people are shying away from surveys and polls right now.”
The advent of Caller ID alerts those receiving a call to the likelihood that it is a poll – or a telemarketer, or someone they don't know and don't care to speak to. Increasingly, people just don't answer the phone, meaning pollsters have to increase the number of calls they make in order to reach a viable survey sample.
Another challenge – “In a tight labor market, it's harder to hire surveyors,” Weinfeld said. “Ours rely on (Michigan State) students (as does Quinnipiac). We are competing with retailers, restaurants, fast food. We have had our own challenges – so we recently raised our wages to be competitive, in order to have enough interviewers to handle the number of calls and polls we have to make.”
Weinfeld said the right number of respondents for an adequate poll depends on what the poll is being conducted for, what methodology is being used, and what the geographic area is that is being covered.
“For our state of the state (polling), we want to get to 900, 1,000 respondents,” he said, meaning they have to make several thousands of calls to reach that many respondents who answer and fit the right demographic mix.
A striking example of how many calls are needed to reach an ever-shrinking pool of people willing to answer is the rolling polling of Michigan's 11th Congressional District (Birmingham, Bloomfield Hills, Rochester Hills, the western Oakland lakes area and western Wayne County) by The New York Times, as reported by MIRS. Of the 66,770 calls the newspaper said it made in the district, MIRS reported it was only able to get 465 people to respond to its poll – a slim 0.7 percent response rate. Results on October 10 showed Democrat Haley Stevens leading Republican Lena Epstein, 45 to 38 percent, with a margin of error of five percent, which pollsters consider high.
In the 8th Congressional District, covering Rochester, Rochester Hills, and northern Oakland County through Livingston County to Lansing and East Lansing, the response rate was slightly higher – 0.9 percent, with 501 people picking up the phone out of 53,590 called. In this race, which many feel will be too close to call through election day, incumbent Rep. Mike Bishop (R) was up over Democratic challenger Elissa Slotkin, 47 to 44 percent. But noted national pollster and pundit Nate Silver of FiveThirtyEight had it much tighter over the same period, MIRS reported – with Bishop at 48.6 percent and Slotkin at 48.3 percent.
To many, that's a dead heat.
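The arithmetic behind those response rates, and behind the number of dials needed to build a sample, is simple to check. A minimal sketch in Python (the function names are ours; the figures are the ones reported above):

```python
import math

def response_rate(respondents, calls):
    """Share of dialed calls that yielded a completed interview."""
    return respondents / calls

def calls_needed(target_respondents, rate):
    """Dials required to hit a target sample at a given response rate."""
    return math.ceil(target_respondents / rate)

# Figures reported for the NYT rolling polls
print(f"11th District: {response_rate(465, 66770):.1%}")  # 0.7%
print(f"8th District:  {response_rate(501, 53590):.1%}")  # 0.9%

# At a 0.7 percent response rate, a 600-person statewide sample takes:
print(calls_needed(600, 0.007))  # 85715 dials
```

At response rates under one percent, even a modest 600-person statewide sample implies tens of thousands of dials, which is part of why live-caller polling is so expensive.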
“Our margin of error is two to three percent,” said Michigan State's Weinfeld. “The lower the margin of error, the more reliable the poll, and the more reliable your entity is seen as a research source.
“In a close race, plus or minus five or nine is too high a rate of error,” he noted. “We're trying to be relevant so people rush out to the polls. We'd rather have as many respondents as possible, with as low a margin of error as possible.”
Doyle, of MRG, said, “The smaller the sample size, the larger the margin of error. It makes it much less reliable. If you get 300 respondents (in a Congressional race or state Senate race), with a plus or minus three percent, once you look at the smaller quantities within that, the actual margins of error go up. For example, if you have 152 within that group, the margin of error goes up. If you have 50 people over the age of 65 voting – the margin of error goes up.
“The reason people do polling is because it's generally reliable – but only if it's done right,” Doyle continued. “It needs to be a geographically appropriate size, with an accurate demographic representation. If you're polling eight precincts in the 40th House District (Birmingham, Bloomfield Hills, Bloomfield Township, the eastern portion of West Bloomfield), you can't talk to 40 percent from West Bloomfield, and they can't all be males.”
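Doyle's point about subgroups can be illustrated with the standard formula for a proportion's margin of error at 95 percent confidence, z·√(p(1−p)/n). This is a hedged textbook sketch; pollsters' published margins can differ once weighting and design effects are factored in:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion p with n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Full samples vs. the subgroup sizes Doyle mentions
for n in (600, 300, 152, 50):
    print(f"n = {n:3d}: ±{margin_of_error(n):.1%}")
```

By this formula, 300 respondents yields roughly ±5.7 percent, a 152-person subgroup close to ±8, and a 50-person subgroup near ±14, which is why crosstabs of small subgroups deserve extra skepticism.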
Bernie Porn, of EPIC/MRA, does primarily live polls, seeking 600 completed interviews statewide, with a margin of error of no more than plus or minus four percent. The mix is approximately 30 percent cell phones, “and it will probably be 35 percent cell phones next cycle. You have to have cell phones to represent all voters.
“If 50 percent said, 'Yeah, it's a beautiful day out today,' that means it could be 54 percent, or down to 46 percent,” Porn said. Robocalls, he said, can be somewhat accurate only in primaries, “when you're only talking to Republicans or Democrats, or in school elections, because younger voters don't vote in those.”
He quoted a recent poll which showed Slotkin up over Bishop by four points – the margin of error.
“If there is a Blue Wave, the Democrats will rise up, and she will capture the Independents, as well, for a solid win,” Porn prognosticated, based on his polling. He said pre-existing conditions are “really hitting home for voters – it's a huge concern.” And that is Slotkin's signature calling card over Bishop.
“The federal tax cuts – people are slightly against them,” he said. “People are realizing the benefits are chump change versus the health care costs they are seeing. If Bishop loses, it will be because of that issue.”
Darnoi, of Densar Consulting, thinks that is the race that is too close to call, and that Bishop could pull it off – but only if “Mike holds on to his female voters in Rochester Hills – women who have typically identified as Republicans but may be identifying as Independents this cycle, and how well he does with ticket splitters in Independence Township.”
Darnoi said the biggest thing being seen this cycle is women who have traditionally been Republicans, supportive of Republican candidates and Republican policies, “are now in the polling, supportive of Independents, and self-identifying as Independents. It's not good for Republicans right now. It shows there's a stain on the Republican brand right now – that people do not want to say, 'I'm a Republican' right now.
“Those voters will dictate and set the tone – just because those voters aren't self-identifying as Republicans doesn't mean they support liberal policies,” Darnoi explained. “Demographics will dictate, district by district.”
He agrees that “it's hard to trust a poll with a margin of error greater than four percent – because if it's five percent or bigger, there's something wrong with the assumptions you have to make. It means your poll is unbalanced somewhere – something is off. There's not enough women, not enough women under 40, not enough people earning X. Then you're making an error in your judgment in the polling. With a large margin of error, it should be discounted – it's not worth the paper it's printed on.”
In 2016, “We said polling nationally was showing that Clinton was up three percent – and it is what she won by,” Porn said. “So the polling was accurate for the popular vote. Where the disconnect was at the time was in the social media impact – emphasizing how unpopular she was, and it may have been where voting or influence was suppressed, or where there was outside influence.
“If not for that, in Michigan, I think she would have won.”
Czuba of Glengariff said they only conduct live operator polls. “We only have live operators talking to people,” he said.
With a plethora of polls being disclosed this election cycle, Czuba's advice is that “if the media is going to report it, I say, 'show me the full poll. Show me the demographics, the region, the age breakdown, racial breakdown, gender – this year you must have accurate age breakdowns. If, as a reporter, you can't see the full demographics, you shouldn't report it. This year, there are too many people putting out polls with an agenda – and those are the polls not to trust. The public has a right to be skeptical of polls – to be really wary of them.”
“Four years ago, in 2014, we only used automated (landline) dialing,” said Steve Mitchell, chairman, Mitchell Research and Communications, who claimed his firm was only off by two percent in the governor and senate races. “In 2016, right at the end, we had Hillary Clinton at plus-three, and we changed our collection method. We did geofencing, which is a form of capturing voters on their cell phones or tablets. It's not a call. It's a banner. It's asking them if they want to answer.”
Geofencing is a location-based service or app which uses GPS, WiFi, or other cellular data to trigger a pre-programmed action when a mobile device enters or exits a virtual boundary set around a geographical area, known as a geofence. Depending on how it is configured, it can prompt mobile push notifications, trigger text messages or alerts, or send targeted advertisements on social media, among other applications.
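At its core, a circular geofence is just a distance test against a virtual boundary. A rough illustrative sketch, not any vendor's implementation; the coordinates and radius below are hypothetical:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    R = 6371000  # mean Earth radius, meters
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def inside_geofence(device, center, radius_m):
    """True if a device's (lat, lon) falls within a circular geofence."""
    return haversine_m(*device, *center) <= radius_m

# Hypothetical 5 km fence around downtown Birmingham, Michigan
fence_center = (42.5467, -83.2113)
print(inside_geofence((42.55, -83.21), fence_center, 5000))  # True: a few hundred meters away
```

When a device crosses the boundary, the service fires the configured action – a push notification, a survey banner, or an ad.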
Mitchell was the only pollster or analyst who mentioned utilizing geofencing as a polling tool.
He said at the end of the polling period in 2016, “I ended up overweighting, and I had the margin (of error) go up to plus-five for Clinton.
“I will not make that same mistake.”
He said this cycle, he is doing a combination of autodials and cell phones, as well as operator-assisted to landlines and cell phones.
“Sometimes people are more honest to a computer than a person – especially Trump voters, because they've been castigated so much that they're racists, bigots. They don't want to tell a person that they're voting for Trump,” Mitchell said.
Alper disagreed. “People are not as likely to profess their real attitudes to an auto dial as they are to a real human being,” she said.
One of the bad raps of 2016, Mitchell added, “is that the polling was wrong. It wasn't wrong. The polling had Hillary Clinton at plus-two – and she did win the popular vote by two percent,” he pointed out. “It was a dead heat.
“Anyone who had it within two to three percent of actual results is a pretty accurate poll.”
“Polls are only a snapshot of the moment they were done. They're not supposed to influence voters,” Czuba said. “We've seen countless times when something happens, an event that changes things. In 2012, Romney was closing in on Obama, but then Hurricane Sandy hit in October, and everything stopped and went back to where it was. In 2016, Hillary Clinton had a healthy lead in Michigan until the Comey letter in the last 10 days. But it was in the last weekend that Trump closed what had become a close race. Races close in the last weekend, and the person who was behind can take the lead on election day – and there's no poll that can predict that.”
Sarpolus, of Target-Insyght, noted another problem in 2016: despite what respondents told pollsters, “more moderate voters, progressives and Independents chose not to vote for the top of the ticket, or did not show up at all.”
He said that election day 2016, at 8 p.m., he was working for MIRS News, and noted that Clinton and Trump were basically tied, with Trump with a slight lead – “which is what happened.”
He pointed out, “The Detroit Free Press had the big error, because they were asking people at the polls who they had voted for. The sample design they had created contained the error. It was based on past elections, using random samplings of precincts – and was not representative of actual voting patterns.
“Over history, theirs was more accurate. But not in 2016,” he said.
Sarpolus chose a different method completely.
“I basically started an automated poll at 5 p.m. and ended at 8 p.m. – with 600 completed interviews. It was more accurate (than the Free Press) because it was of people who actually voted. My sample design was more accurate than their scientific polling. I chose people who had voted, as well as some absentee ballot voters,” he said.
“2016 broke the mold of how they should have designed their polling,” Sarpolus said. “Mine was more random, and caught the differences in actual turnout. It let people answer, versus specifically asking them who they wanted to vote for.”
A pollster has to not just report the data, but to look at it and analyze it, as well. “I'm just a hired gun. Because of my background, I can't be partial. Whatever the data shows me, coupled with my experience, gives me the outcome. As a pollster, I may reject my results,” Sarpolus said. “Throughout 2016, I never said Clinton was winning – just said she was leading. I always said both Clinton and Trump had high negatives. It was not typical of a presidential year.”
He noted that in the polling throughout 2016, “Clinton never broke through 50 percent, which meant I could never say she was winning. It tells me that undecideds are going to break for the challenger – Trump – or just not vote. And that's what happened.”
He asserted that in 2016, it wasn't the polling that was wrong – “It was the talking heads who were wrong, who always said she was winning,” Sarpolus said. “The polls always said she was leading. Further, the polls always said she would win the popular vote – and she did. The polls never said anything about the Electoral College.”
In 2018, he sees similar momentum – or lack thereof – for two of the three statewide ballot measures, for Proposal 1, to Regulate Marijuana Like Alcohol; and Proposal 2, to create an independent citizens redistricting commission in an effort to end gerrymandering.
Sarpolus has been testing the ballot language, and finding that many respondents either do not understand it, or find it unintelligible and will vote no because of that; that there needs to be more education on the issues and the language; or that voters do not feel a need to approve the proposals, especially the marijuana measure.
“For the marijuana proposal, I don't see any marches, any youth rallies,” Sarpolus said. “People are indicating that those who want their marijuana, get their marijuana.”
In typical polling, Sarpolus uses a mix of landlines and cell phones, and does stratified random samples, where calls reflect the population that will be voting that day.
“Pollsters should try to call unlisted numbers as well as listed numbers,” he advised. “You're trying to build your sample in order for it to be a reflection of the community. You call the secondary numbers – if there isn't an answer, you hang up, and it's why you call the neighbors.”
He gave an example of Troy, which he said is divided up into four segments.
“There are bands of conservatives; Indians; whites and Polish; and Chaldeans,” Sarpolus said. “That is what a sample is supposed to reflect. With a mix of landlines and cell phones, I can do similar things as the Census Bureau.”
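Sarpolus's description of Troy is essentially stratified sampling: fix the strata, then allocate interviews in proportion to each stratum's share of the expected electorate. A minimal sketch (the shares below are hypothetical, not Sarpolus's actual figures):

```python
def allocate_stratified(strata_shares, total_n):
    """Proportionally allocate a total sample size across population strata.

    strata_shares: dict mapping stratum name -> share of the population (sums to 1).
    """
    return {name: round(share * total_n) for name, share in strata_shares.items()}

# Hypothetical population shares for four segments of a city like Troy
shares = {
    "conservative band": 0.30,
    "Indian": 0.25,
    "white and Polish": 0.25,
    "Chaldean": 0.20,
}
print(allocate_stratified(shares, 600))
# {'conservative band': 180, 'Indian': 150, 'white and Polish': 150, 'Chaldean': 120}
```

With awkward shares, simple rounding can make the allocation miss the total by a seat or two; production samplers apply largest-remainder or similar corrections.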
The more advanced the sampling, including geo-targeting, the closer a pollster can come to predictability, he said.
He also emphasized that is why the sample size is so important – “because of the margin of error. The size depends on what you do with it.
“One in 20 polls will be off,” Sarpolus asserted, “because it's a snapshot in time, if nothing else changes. That's why you do so many polls, and do a lot to withstand attacks on your polls. It's a science – but more of an art. It's why you have to study history – you have to understand the history of off-year elections, of who votes, the Chaldean community, the Hispanic community, of why males will answer the phone.
“I've predicted every election (I've worked on since 1972) within a half-percentage because of my stratification,” he said.
What are the pollsters forecasting in their crystal balls in the coming weeks? Most foresee not only a Blue Wave of Democrats coming to roost in Michigan, but a Pink Tsunami – not only women candidates winning their races, but suburban women who are angry, disaffected and highly motivated for change voting, and voting for Democrats.
“Trump will be largely responsible for the Blue Wave – he is affecting all the other candidates,” Porn said.
He said that for governor, his polling shows a lock for Democrat Gretchen Whitmer.
“She has 40 percent favorables, and only 26 percent unfavorables,” Porn said. “She is above water and positive. Schuette, however, is underwater. He has 38 percent unfavorables, with 32 percent favorables.”
In Michigan, Porn said, voters are most concerned about infrastructure, roads and water, 27 percent; education, 21 percent; health care, 13 percent; the state's economy and more jobs, 10 percent; controlling local and state spending, eight percent; state and local taxes, seven percent; the environment, seven percent; and controlling crime and drugs, just three percent.
“Of the top three issues, Whitmer leads Schuette,” Porn said. “Schuette keeps pushing on taxes, and that is not a major issue even for Republicans. It's surprising that the sun, moon and stars are setting on the tax issue (for him) when it's just not an issue even for Republicans. It doesn't seem to make a lot of sense.”
“Further, he seems to be stuck on the Nassar (the former Michigan State doctor convicted of multiple counts of sexual abuse) issue and that Whitmer didn't do her job – at the same time when you've got Bill Cosby sentenced to jail, and a Supreme Court justice accused of sexual assault and real issues of sexual harassment,” Porn continued. “Why bring up Nassar and sexual harassment? He's refocusing men and women, and it's not a good idea. He's not going to come out well on this.”
While the attorney general race could be close, he said, otherwise, “it will be a Blue Wave all the way, and it will influence Congressional races, even down to state House races, and likely turn the state House. They will even pick up a few in the state Senate, but it's unlikely to flip the state Senate, but likely enough for the state House.”
His polling and projections show Stevens winning in the 11th District, “and also Elissa Slotkin is looking stronger and stronger – I'd be surprised if she didn't win. Especially with (Speaker of the House Paul) Ryan's PAC pulling its financing (for Bishop). They're in the triage cycle, and they've come to the conclusion that they're not going to win and will put money in more viable races.”
Sarpolus pointed out that the 11th District – once so reliably Republican that it was gerrymandered to be a safe district for former Congressman Thaddeus McCotter – has changed demographically. “Many of the kids of immigrants who lived there, Indians, Chaldeans, have moved back with college degrees. They're voting Democratic. The union people in western Wayne County – Trump has turned that area, the Big Three – they don't like Donald Trump. And a lot of children of Reagan Democrats from Macomb County and Wayne County, college-educated kids, have moved into the 11th. It's really a melting pot.
“Democrats are leading where voters don't necessarily know who the Democratic candidates are – it's all against Trump and Schuette,” Sarpolus said.
Darnoi concurred, and thinks the down-ballot effect could be profound, noting that longtime politician and incumbent state Sen. Marty Knollenberg (R-Birmingham, Bloomfield Hills, Rochester, Rochester Hills) is “under water. His unfavorables are not good, especially for an incumbent. The Republicans are very worried, and it's trickling down to Doug Tietz (running for state House in Troy).”
He said the 12th state Senate seat, usually a safe Republican district, where McCready, the Republican, faces Democratic challenger Rosemary Bayer, “is definitely being eyed. And in McCready's seat, the 40th House District, with (Democrat) Mari Manoogian versus (Republican) David Wolkinson, she's looking in very good shape. It's trending to Mari.
“If there is a Blue Wave, we may see it crash upon the shores of Oakland County,” Darnoi said.