This past Saturday, FiveThirtyEight founder and editor-in-chief Nate Silver came to Duke’s campus for the 22nd John Fisher Zeidman Memorial Colloquium on Politics and the Press. Prior to the colloquium, Silver sat down with DPR’s Jacob Zionce to talk about the 2014 midterm elections, the problem of ‘herding’, and the first eight months of the new FiveThirtyEight. Below is the second installment of Silver’s interview with DPR; you can read part one here. Questions and responses have been edited for brevity and clarity.
DPR: You wrote a piece for FiveThirtyEight on the Virginia Senate race that I thought was fascinating, and it threw out a really interesting theory about people not voting for Mark Warner because they did not feel he needed their vote. I was hoping you could expand a little on that idea, since you only mentioned it halfway through your article.
Silver: I mean, it is a theory, and it probably would be hard to prove, but there have been a few races historically where a candidate’s margin was just large enough that people felt it was safe [not to vote for him or her], and it wasn’t completely safe. But Warner had a high favorability rating, [around] 57% in the exit poll itself, and yet a lot of those people voted against him. [People think] ‘well, this guy’s going to win, I’m unhappy with the direction of the country, why not try Gillespie.’ But this year, I hope, will knock people out of their complacency about the polls always being right. And this is why we have always done things probabilistically, and why we take a long view and don’t just look at ‘2008 as a great year for the polls’; over the long run there have been many years when the polls didn’t do so great… People stereotype the story as ‘hey, just take a look at the polling average and that will always be right,’ and the story is more complicated than that – on average, the poll average is an unbiased estimate, but the bias can be substantial, and you just don’t know which direction it’s going to go.
You have to think about the consequences of what happens if all the polls miss in the same direction. In both 2012 and 2014, the polling community was fortunate that the miss came on the side that was winning already. So Republicans won by more in 2014 than the polls said, [they won] a couple of additional states, but most polls correctly identified that they would win the Senate. And likewise Obama, although all the states were called correctly — except maybe Florida depending on whose forecast you looked at — …won by wider margins in a number of states [than polls indicated]. If that error had gone in the opposite direction, then Romney would have won several states where he was behind in the polls. I’m not sure if he would have won the Electoral College or not, but it would have been very close. But yes, sooner or later, and maybe sooner, there is going to be a year when the polls do not identify the right winner – not just in some state, but in the overall election. From a pure intellectual point of view, you’re supposed to get 70% of your 70-30 calls right, which means you’re supposed to get 30% of them wrong. If you say Democrats are the underdogs but they have a 30% chance of winning, [a Democratic win] is supposed to happen [30% of the time]. If not, then you’ve actually put out a bad forecast. I know that… the Politicos of the world are going to write, ‘this is the demise of FiveThirtyEight and data journalism!’
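Silver’s calibration point, that roughly 30% of a forecaster’s 70-30 calls are supposed to go against the favorite, can be sketched with a quick simulation. The numbers here are entirely hypothetical, not from FiveThirtyEight’s model:

```python
import random

random.seed(42)

# A hypothetical forecaster assigns a 70% win probability to the favorite
# in 10,000 races, and the probabilities are honest: the favorite really
# does win 70% of the time.
n = 10_000
favorite_wins = sum(random.random() < 0.70 for _ in range(n))
hit_rate = favorite_wins / n

print(f"Favorites won {hit_rate:.1%} of the 70-30 calls")
# A calibrated forecaster should get roughly 30% of these calls "wrong";
# a 70-30 caller who never misses is miscalibrated in the other direction.
```

The point is that a string of upsets is not, by itself, evidence of a bad forecast; a forecaster who loses far fewer than 30% of their 70-30 calls was hedging too much.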
Actually, the other point I want to make about the criticism is that if you’re a reader of the site, you’re under absolutely no obligation to treat us fairly. If you like the content, read it; if you hate it, that’s your right. If you’re someone who is a press critic, then you absolutely have the obligation to say ‘let’s actually look at what the site’s doing as a whole.’ I mean, we publish… probably a thousand or so articles; you could absolutely pick our five worst pieces, or our twenty worst pieces, and say ‘boy, this is either bad, or it’s not in the direction that was promised to me.’ As a reader, you can see things that way. But if you’re going to do a review of the site, then you have to be comprehensive, and I hope people would say that there are a lot of things they might not like, but there are also a lot of things that are working really well… The problem is that we feel as though we’re at war against a certain type of bullshit analysis, so the exact people you are most provoking are the bullshitters, and when the bullshitters are covering you, it’s going to be bullshit multiplied.
At the same time, obviously, it’s easy to underestimate how much of an idiot you are until you start actually publishing a site for an audience. And you can whiteboard things as much as you want, and be as theoretical as you want, but you don’t really learn anything until you start actually publishing every day, until you actually know what it’s like to deal with a news story, get feedback from readers, and see what works and what doesn’t.
DPR: So what do you think are the takeaways from this election, from a polling perspective? Do you think we just need more of a focus on state fundamentals?
Silver: If we were redesigning the model… we [would] have dozens of cycles of input, but this [election] would marginally increase the weight you would put on state fundamentals. It would [also] increase the correlation between different states. We already have that [correlation] being fairly high; maybe it’s not high enough… I think of the competitive Senate races: only in New Hampshire, I think, did [Sen. Jeanne] Shaheen do half a point better than her polls. Everywhere else it was right on, or there was a Republican outperformance. We knew that could happen, and we said, ‘hey, Democrats, there could be a polling error, but it might not work in your favor.’
Also, pollsters tend to herd toward certain results and suppress other results; I think that is really important, especially because a site like FiveThirtyEight can potentially make that tendency worse. Now you can go to FiveThirtyEight or RealClearPolitics or whatever else and say, ‘boy, here is what the average is, these guys usually do pretty well, so I’m going to publish [and] regurgitate that average.’ But the whole idea of the wisdom of the crowd is that you are gathering independent information. The crowd is not wise when it starts to think as a crowd; that is a different type of problem. And when polling behaves more that way, it could become more like the patterns you see in the stock market, where the price of a stock is partly fundamentals-based and partly based on sentiment and conventional wisdom, and when it fails, it often fails catastrophically.
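The wisdom-of-the-crowd point can also be sketched numerically: averaging many polls cancels noise only when the polls err independently, while a bias shared across pollsters (the herding scenario) survives the average. A minimal, purely illustrative simulation, with made-up margins and error sizes:

```python
import random
import statistics

random.seed(0)

TRUE_MARGIN = 4.0   # hypothetical true margin, in points
N_POLLS = 20        # polls averaged per trial
TRIALS = 2000


def poll_averages(shared_bias_sd: float) -> list:
    """Average of N_POLLS polls per trial: each poll has its own noise,
    plus a bias shared by every poll in the trial (the herding/common-error
    component)."""
    averages = []
    for _ in range(TRIALS):
        bias = random.gauss(0, shared_bias_sd)  # same direction for all polls
        polls = [TRUE_MARGIN + bias + random.gauss(0, 3) for _ in range(N_POLLS)]
        averages.append(statistics.fmean(polls))
    return averages


def rmse(estimates) -> float:
    """Root-mean-square error of the poll averages against the true margin."""
    return statistics.fmean((x - TRUE_MARGIN) ** 2 for x in estimates) ** 0.5


independent = poll_averages(shared_bias_sd=0.0)
herded = poll_averages(shared_bias_sd=2.0)

print(f"RMSE of the average, independent polls: {rmse(independent):.2f}")
print(f"RMSE of the average, shared-bias polls: {rmse(herded):.2f}")
# Averaging shrinks the independent noise by roughly 1/sqrt(N_POLLS),
# but the shared bias passes through the average untouched.
```

Under these made-up numbers, the shared-bias average misses by several times as much as the independent one: more polls help with noise, not with a common error.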
DPR: But when stocks have been used for predictions, they have done fairly well. In your book you talked about prediction markets like Intrade, where people could buy ‘stocks’ in something happening in the future, and those markets were able to make predictions fairly successfully.
Silver: Look, I think that’s a lot better. Everything is always about how something compares to the alternative. I think that these markets are fairly good compared to the alternative of the conventional wisdom, in part because they force people to try to think in terms of probabilities, and they require people to actually invest something in the outcomes — there’s a tax if you’re just bullshitting, so that’s really valuable. My read of the evidence is that the markets for elections are maybe not better than the best models, but they should be in the long run, because they can say, ‘I’m going to incorporate all this information from the FiveThirtyEight model, or The Upshot’s model, or the Huffington Post’s model; it’s all public. And then if I feel like there’s something they’re missing, I can add information on top of that.’ So in the long run, an efficient market should prevail. I don’t know if there’s the volume of interest to really encourage that, but I hope it’s something we see more of.
DPR: So do you think that you and FiveThirtyEight have a responsibility to stop all these pollsters from putting their thumbs on the scale? Do you feel that by bringing them to light and holding their feet to the fire, you give them the negative press [needed] to change their actions?
Silver: I hope so, and I hope that we create incentives. There are different things we’ve done; one reason we’ve always looked at the last three weeks of polls in calibrating our pollster ratings is that pollsters often do do funny things in the very last poll. And they used to protest about that, but it’s like, ‘well, sorry, this was in the public record; it affected the expectations and the discourse about the election. It affected the press coverage, it affected the models.’ But we’ve moved further away from just [judging]… pollsters based on past results, because you can kind of cheat by that standard, [and toward] saying, ‘let’s look at the things that are methodologically correlated in the long run with good polling practice.’
So we think about that a lot, and we conducted a survey of pollsters, a poll of pollsters, about many things, [including] where they thought the industry was and how they felt about FiveThirtyEight. And, unsurprisingly, how they feel about us is very correlated with how we rate them. But I think there is some responsibility to, like I say, go beyond the simple story of ‘oh, just take the polling average and it’s right.’ It’s not a bad approximation, but you’ve seen in this last election, and I’d argue in 2012 too, that it is an approximation, and if you’re covering this stuff comprehensively then you need a bit more subtlety in how you describe the polls.