From Wikipedia, the free encyclopedia
|Slogan||Nate Silver's Political Calculus|
|Type of site||Opinion poll analysis, political blog|
|Created by||Nate Silver|
|Launched||March 7, 2008|
|Current status||Licensed to The New York Times (since August 2010)|
FiveThirtyEight is a polling aggregation website with a blog created by Nate Silver. Sometimes colloquially referred to as 538 dot com or just 538, the website takes its name from the number of electors in the United States electoral college. The site was established on March 7, 2008, as FiveThirtyEight.com; in August 2010 the blog became a licensed feature of The New York Times online and was renamed FiveThirtyEight: Nate Silver's Political Calculus.
During the U.S. presidential primaries and general election of 2008, the site compiled polling data through a unique methodology derived from Silver's experience in baseball sabermetrics to "balance out the polls with comparative demographic data" and "weighting each poll based on the pollster's historical track record, sample size, and recentness of the poll".
Since the 2008 election, the site has published articles – typically creating or analyzing statistical information – on a wide variety of topics in current politics and political news. These have included a monthly update on the prospects for turnover in the U.S. Senate; federal economic policies; Congressional support for legislation; public support for health care reform, global warming legislation, and gay rights; elections around the world; marijuana legalization; and numerous other topics.
On June 3, 2010, Silver announced that in early August the blog would be "relaunched under a NYTimes.com domain". The transition took place on August 25, 2010, with the publication of Silver's first FiveThirtyEight blog article online in The New York Times.
When Silver started FiveThirtyEight.com in early March 2008 he initially published under the name "Poblano", the same name that he had used since November 2007 when he began publishing a diary on the political blog Daily Kos. Writing as Poblano on Daily Kos, he had gained a following especially for his primary election forecast on Super Tuesday, February 5, 2008. From that primary election day, which included contests in 24 states plus American Samoa, "Poblano" predicted that Barack Obama would come away with 859 delegates, and Hillary Clinton 829; in the actual contests, Obama won 847 delegates and Clinton 834. Based on this result, New York Times op-ed columnist William Kristol cited "Poblano" thus: "And an interesting regression analysis at the Daily Kos Web site (poblano.dailykos.com) of the determinants of the Democratic vote so far, applied to the demographics of the Ohio electorate, suggests that Obama has a better chance than is generally realized in Ohio".
FiveThirtyEight.com gained further national attention for beating out most pollsters' projections in the North Carolina and Indiana Democratic party primaries on May 6, 2008. As Mark Blumenthal wrote in National Journal, "Over the last week, an anonymous blogger who writes under the pseudonym Poblano did something bold on his blog, FiveThirtyEight.com. He posted predictions for the upcoming primaries based not on polling data, but on a statistical model driven mostly by demographic and past vote data.... Critics scoffed. Most of the public polls pointed to a close race in North Carolina.... But a funny thing happened. The model got it right". Silver relied on demographic data and on the history of voting in other states during the 2008 Democratic primary elections. "I think it is interesting and, in a lot of ways, I'm not surprised that his predictions came closer to the result than the pollsters did", said Brian F. Schaffner, research director of American University's Center for Congressional and Presidential Studies.
As the primary season was coming to an end, Silver began to build a model for the general election race. This model, too, relied in part on demographic information but mainly involved a complex method of aggregating polling results. On June 13, 2008, Rasmussen Reports began partnering with FiveThirtyEight.com to incorporate this unique methodology for generating poll averages into its "Balance of Power Calculator". At the same time, FiveThirtyEight.com's daily "Today's Polls" column began to be mirrored on "The Plank," a blog published by The New Republic.
By early October 2008, FiveThirtyEight.com approached 2.5 million visitors per week, while averaging approximately 400,000 per weekday. During October 2008 the site received 3.63 million unique visitors, 20.57 million site visits, and 32.18 million page views. On Election Day, November 4, 2008, the site had nearly 5 million page views.
One unique aspect of the site is Silver's efforts to rank pollsters by accuracy, weight their polls accordingly, and then supplement those polls with his own electoral projections based on demographics and prior voting patterns. "I did think there was room for a more sophisticated way of handling these things," Silver said.
FiveThirtyEight.com weighs pollsters' historical track records through a complex methodology and assigns them values to indicate "Pollster-Introduced Error".
Polls on FiveThirtyEight.com are weighted with a half-life of thirty days, using the formula 0.5^(P/30), where P is the number of days elapsed since the median date that the poll was in the field. The formula is based on an analysis of 2000, 2004, 2006, and 2008 state-by-state polling data.
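The decay rule can be sketched in a few lines. This is an illustrative implementation of the stated formula, not FiveThirtyEight's actual code; the function name and date handling are assumptions:

```python
from datetime import date

def poll_weight(median_field_date, today, half_life_days=30):
    """Exponential-decay weight with a thirty-day half-life:
    0.5 ** (P / 30), where P is the number of days elapsed since
    the poll's median field date."""
    p = (today - median_field_date).days
    return 0.5 ** (p / half_life_days)

# A poll whose median field date was thirty days ago receives half
# the weight of a poll fielded today.
```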
At base, Silver's method is similar to other analysts' approaches to taking advantage of the multiple polls conducted within each state: he averages the polling results. But especially in the early months of the election season, polling in many states is sparse and episodic. The "average" of polls over an extended period (perhaps several weeks) would neither reveal the true state of voter preferences at the present time nor provide an accurate forecast of the future. One approach to this problem was followed by Pollster.com: if enough polls were available, it computed a locally weighted moving average, or LOESS.
However, while adopting such an approach in his own analysis, Silver reasoned that there was additional information available in polls from "similar" states that might help to fill the gaps in information about the trends in a given state. Accordingly, he adapted an approach that he had previously used in his baseball forecasting: using nearest neighbor analysis he first identified "most similar states" and then factored into his electoral projections for a given state the polling information from "similar states". He carried this approach one step further by also factoring national polling trends into the estimates for a given state. Thus, his projections were not simply based on the polling trends in a given state.
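The "most similar states" step can be illustrated with a toy nearest-neighbour ranking. The feature vectors, their scaling, and the Euclidean metric here are assumptions for illustration only, not Silver's actual similarity index:

```python
import math

def most_similar_states(target, states, k=3):
    """Rank the other states by Euclidean distance between
    demographic feature vectors and return the k nearest.
    Each state is a dict with a 'name' and a 'features' list
    (hypothetical format)."""
    others = [s for s in states if s["name"] != target["name"]]
    ranked = sorted(
        others,
        key=lambda s: math.dist(target["features"], s["features"]),
    )
    return [s["name"] for s in ranked[:k]]
```

Polling from the nearest neighbours could then be blended into a sparsely polled state's estimate at reduced weight.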
Furthermore, a basic intuition that Silver drew from his analysis of the 2008 Democratic party primary elections was that the voting history of a state or Congressional district provided clues to current voting. This is what allowed him to beat the pollsters in his forecasts for the Democratic primaries in North Carolina and Indiana, for example. Using such information allowed Silver to estimate vote preferences even in states with few if any polls. For his general election projections for each state, in addition to relying on the available polls in that state and in "similar states," Silver estimated a "538 regression" using historical voting information along with demographic characteristics of the states, treating the resulting estimate as the equivalent of an additional poll. This approach helped to stabilize his projections: in a state with few if any polls, the forecast was largely determined by the 538 regression estimate.
In July 2008, the site began to report projections of 2008 U.S. Senate races. Special procedures were developed relying on both polls and demographic analysis. The projections were updated on a weekly basis.
The site presents an analysis of the swing states, focusing on so-called "Tipping Point States": those states that could tip the outcome of the election from one candidate to the other. In each simulation run, the states carried by the winner are lined up in reverse order of victory margin by percentage. A simple algorithm selects the minimal set of closest states that, if switched to the loser's side, would change the election outcome, then weights that run's significance by the margin of victory in the popular vote. Thus, the closer the popular vote, the fewer the tipping point states and the greater the significance of that run in assessing tipping point importance. For example, by this method the 2004 election's sole tipping point state was Ohio, while in 1960 the tipping point states were Illinois, Missouri, and New Jersey, even though Hawaii was the closest state race.
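One plausible reading of the flipping procedure is the greedy sketch below: switch the winner's states, narrowest margin first, until the winner would fall below the votes needed. The data format and tie-breaking here are assumptions, not FiveThirtyEight's published algorithm:

```python
def tipping_point_states(winner_states, winner_ev, ev_to_win=270):
    """Return the smallest set of the winner's closest states that,
    if switched to the loser, would change the election outcome.

    winner_states: list of (name, margin_pct, electoral_votes) for
    states carried by the overall winner (hypothetical format).
    winner_ev: the winner's total electoral votes.
    """
    flipped = []
    remaining_ev = winner_ev
    # Walk the winner's states from narrowest to widest margin.
    for name, margin, votes in sorted(winner_states, key=lambda s: s[1]):
        if remaining_ev < ev_to_win:  # outcome already flipped
            break
        flipped.append(name)
        remaining_ev -= votes
    return flipped
```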
In the final update of his presidential forecast model at midday on November 4, 2008, Silver projected a popular vote victory of 6.1 percentage points for Barack Obama and an electoral vote total of 349 (based on a probabilistic projection) or 353 (based on fixed projections of each state). Obama won with 365 electoral college votes; Silver's predictions matched the actual results everywhere except in Indiana and the 2nd congressional district of Nebraska, which awards an electoral vote separately from the rest of the state. His projected national popular vote margin was below the actual figure of 7.2 points.
The forecasts for the Senate proved to be correct for every race. But the near stalemate in Minnesota led to a recount that was settled only on June 30, 2009. In Alaska, after a protracted counting of ballots, on November 19 Republican incumbent Ted Stevens conceded the seat to Democrat Mark Begich, an outcome that Silver had forecast on election day. And in Georgia, a run-off election on December 2 led to the re-election of Republican Saxby Chambliss, a result that was also consistent with Silver's original projection.
During the first two months after the election, no major innovations in content were introduced. A substantial percentage of the articles focused on Senatorial races: the runoff in Georgia, won by Saxby Chambliss; recounts of votes in Alaska (won by Mark Begich), and Minnesota (Al Franken vs. Norm Coleman); and the appointments of Senatorial replacements in Colorado, New York, and Illinois.
After President Obama's inauguration, Sean Quinn reported that he was moving to Washington, D.C., to continue political writing from that locale. On February 4, 2009, he became the first blogger to join the White House press corps. After that time, however, he contributed only a handful of articles to FiveThirtyEight.com.
During the post-2008 election period Silver devoted attention to developing some tools for the analysis of forthcoming 2010 Congressional elections, as well as discussing policy issues and the policy agenda for the Obama administration, especially economic policies. He developed a list of 2010 Senate races in which he makes monthly updates of predicted party turnover.
Later, Silver adapted his methods to address a variety of issues of the day, including health care reform, climate change, unemployment, and popular support for same-sex marriage. He wrote a series of columns investigating the credibility of polls by Georgia-based firm Strategic Vision, LLC. According to Silver's analysis, Strategic Vision's data displayed statistical anomalies that were inconsistent with random polling. Later, he uncovered indirect evidence that Strategic Vision may have gone as far as to fabricate the results of a citizenship survey taken by Oklahoma high school students. FiveThirtyEight devoted more than a dozen articles to the Iranian presidential election in June 2009, assessing the quality of the vote counting. International affairs columnist Renard Sexton began the series with an analysis of polling leading up to the election; then posts by Silver, Andrew Gelman and Sexton analyzed the reported returns and political implications.
FiveThirtyEight covered the November 3, 2009, elections in the United States in detail. FiveThirtyEight writers Schaller, Gelman, and Silver also gave extensive coverage to the January 19, 2010 Massachusetts special election to the U.S. Senate. The "538 model" once again aggregated the disparate polls to correctly predict that the Republican Scott Brown would win.
In the spring of 2010, FiveThirtyEight turned its focus to the United Kingdom General Election scheduled for May 6, with a series of more than forty articles on the subject that culminated in projections of the number of seats that the three major parties were expected to win. Following a number of preview posts in January and February, Renard Sexton examined subjects such as the UK polling industry and the 'surge' of the third-party Liberal Democrats, while Silver, Sexton and Dan Berman developed a seat projection model. The UK election was the first time the FiveThirtyEight team did an election night 'liveblog' of a non-US election.
In April 2010, The Guardian published Silver's predictions for the 2010 United Kingdom General Election. The majority of polling organisations in the UK use the concept of uniform swing to predict the outcome of elections. By applying his own methodology, however, Silver produced very different results, which suggested that a Conservative victory might be the most likely outcome. After a series of articles, including critiques and responses to other electoral analysts, his "final projection" was published on the eve of the election. In the end, Silver's projections were off the mark, particularly compared with those of some other organizations, and Silver wrote a post mortem on his blog. Silver examined the pitfalls of the forecasting process, while Sexton discussed the final government agreement between the Conservatives and the Liberal Democrats.
On June 6, 2010, FiveThirtyEight posted pollster rankings that updated and elaborated Silver's efforts from the 2008 election. Silver expanded the database to more than 4,700 election polls and developed a model for rating the polls that was more sophisticated than his original rankings. The new ratings came under criticism by Taegan Goddard in an article in his blog Political Wire on June 9 titled "Where's the Transparency in Pollster Rankings?"
Silver responded on 538: "Where's the transparency? Well, it's here [citing his June 6 article], in an article that contains 4,807 words and 18 footnotes. Literally every detail of how the pollster ratings are calculated is explained. It's also here [referring to another article], in the form of our Pollster Scorecards, a feature which we'll continue to roll out over the coming weeks for each of the major polling firms, and which will explain in some detail how we arrive at the particular rating that we did for each one".
As for why the complete 538 polling database had not been released publicly, Silver responded: "The principal reason is because I don't know that I'm legally entitled to do so. The polling database was compiled from approximately eight or ten distinct data sources, which were disclosed in a comment which I posted shortly after the pollster ratings were released, and which are detailed again at the end of this article. These include some subscription services, and others from websites that are direct competitors of this one. Although polls contained in these databases are ultimately a matter of the public record and clearly we feel as though we have every right to use them for research purposes, I don't know what rights we might have to re-publish their data in full".
Subsequently, on June 11, Mark Blumenthal also commented on the question of transparency in an article in the National Journal titled "Transparency In Rating: Nate Silver's Impressive Ranking Of Pollsters' Accuracy Is Less Impressive In Making Clear What Data Is Used". He noted that in the case of Research 2000 there were some discrepancies between what Silver reported and what the pollster itself reported. Other researchers questioned aspects of the methodology.
On June 16, 2010, Silver announced on his blog that he was willing to give all pollsters whom he had included in his ratings a list of the polls he had in his archive, along with the key information he had used (poll marginals, sample size, dates of administration); he encouraged the pollsters to examine the lists, compare them with their own records, and submit corrections.
On June 3, 2010, The New York Times and Silver announced that FiveThirtyEight had formed a partnership under which the blog would be hosted by the Times for a period of three years. In legal terms, FiveThirtyEight granted a "license" to the Times to publish the blog. The blog would be listed under the "Politics" tab of the News section of the Times. FiveThirtyEight would thus be subject to and benefit from editing and technical production by the Times, while FiveThirtyEight would be responsible for creating the content.
Silver received bids from several major media entities before selecting the Times. Under terms of the agreement, Silver would also write monthly articles for the print version of both the newspaper and the Sunday magazine. Silver did not move his blog to the highest bidder, because he was concerned with maintaining his own voice while gaining the exposure and technical support that a larger media company could provide. "There's a bit of a Groucho Marx quality to it [Silver has said].... You shouldn't want to belong to any media brand that seems desperate to have you as a member, even though they'll probably offer the most cash".
The first column of the renamed FiveThirtyEight: Nate Silver's Political Calculus appeared in The Times on August 25, 2010, with the introduction of U.S. Senate election forecasts. At the same time, Silver published a brief history of the blog. All columns from the original FiveThirtyEight.com were also archived for public access.
When the transition to The New York Times was announced, Silver listed his staff of writers for the first time. However, of the seven listed writers, only three of them had published on 538/New York Times by late December 2010: Silver, Renard Sexton and Hale Stewart. Andrew Gelman contributed again in early 2011. Brian McCabe published his first article in January 2011. Why other writers played only a limited role in FiveThirtyEight/NYT was explained in February 2011 as follows:
"Before his partnership with the Times, Silver had five contributors who wrote about half the posts on the site. Now none of those contributors write regularly for the blog, meaning Silver writes about 85 percent of the posts with occasional guest contributions. The Times, Silver said, wasn't comfortable allowing some of the contributors to continue writing because of their tone or political affiliations.
'That is one of the challenges – getting people that meet the Times' standards of what it means to be a journalist. You can't have too many conflicts of interest and you really have to write in a way that comes across as being not overly opinionated,' Silver said. 'I disagree with some of the decisions the Times made as far as what they considered disqualifying, but the fact is they do have a standard which is both high and in some ways kind of quirky'.
[Jim] Roberts, [Assistant Managing Editor of the Times], "explained that the Times decided contributor Ed Kilgore would stop writing for FiveThirtyEight because his work would have violated the Times' ethics policy. Kilgore is also managing editor of The Democratic Strategist – 'a partisan connection that was a bit too close for comfort, Roberts said'.
Two of the contributors who used to write regularly for FiveThirtyEight – Renard Sexton who covered international politics and Hale Stewart who covered economics – have both written some pieces for the blog since the switch. But they're doing so less frequently, Silver said, because the Times already has reporters who write about the same topics.
'One of the things that I have to think about now that I didn't have to think about before is how FiveThirtyEight's coverage relates to everything else The New York Times is doing,' said Silver, who hopes to add other contributors to balance out the workload and give him time to work on longer-term projects.
On the one hand, that can create opportunities to write about subjects that I might have skipped before. But in other circumstances, there can be issues with duplication or redundancy'".
Beginning in 2011, one writer who emerged as a regular contributor was Micah Cohen. Cohen provided a periodic "Reads and Reactions" column in which he summarized Silver's articles from the previous couple of weeks, along with reactions to them in the media and other blogs, and suggested additional readings related to the subjects of Silver's columns. Silver identified Cohen as "my news assistant". Cohen also contributed additional columns on occasion.
On September 12, 2011, Silver introduced another writer: "FiveThirtyEight extends a hearty welcome to John Sides, a political scientist at George Washington University, who will be writing a series of posts for this site over the next month. Mr. Sides is also the founder of the outstanding blog The Monkey Cage, which was named the 2010 Blog of the Year by The Week magazine".
While politics and elections remained the main focus of FiveThirtyEight, the blog also sometimes addressed sports, including American college athletic conference realignment, professional tennis, the 2011 NCAA Men's Basketball "March Madness" and the 2012 NCAA Men's Basketball tournament selection process, the B.C.S. rankings in NCAA college football, NBA Basketball, and Major League Baseball matters ranging from Derek Jeter's 2011 performance to 2011 attendance at the New York Mets' Citi Field and the historic 2011 collapse of the Boston Red Sox.
In addition, FiveThirtyEight sometimes turned its attention to other topics, such as the economics of blogging, the financial ratings by Standard & Poors, economists' tendency to underpredict unemployment levels, and the economic impact and media coverage of Hurricane Irene (2011).
FiveThirtyEight published a graph showing different growth curves of the news stories covering Tea Party and Occupy Wall Street protests. Silver pointed out that conflicts with the police caused the sharpest increases in news coverage of the protests. And he assessed the geography of the protests by analyzing news reports of the size and location of events across the United States.
Shortly after 538 relocated to The New York Times, Silver introduced his prediction models for the 2010 elections to the U.S. Senate, the U.S. House of Representatives, and state Governorships. Each of these models relied initially on a combination of electoral history, demographics, and polling.
Stimulated by the surprising win of Massachusetts Republican Scott Brown in the special election in January 2010, Silver launched the first iteration of his Senate prediction model a few days later, using objective indicators including polling to project each state outcome in November. This model incorporated some elements of the 2008 presidential model. It was first published in full form in The New York Times on August 25, 2010. It relied basically on aggregating public polls for each Senate race, with some adjustment for national trends in recognition of a correlation in poll movement across state lines; that is, no race can be interpreted as entirely independent of the others.
In addition to making projections of the outcomes of each Senate race, FiveThirtyEight tracked the expected national outcome of the partisan division of the Senate. Just before election day (October 31), the FiveThirtyEight Senate projection was for the new Senate to have 52 Democrats and 48 Republicans. (The model did not address the possibility of party switching by elected candidates after November 2.)
Of the 37 Senate seats contested in the November 2, 2010 elections, 36 were resolved by November 4, including very close outcomes in several states. Of these 36, the FiveThirtyEight model had correctly predicted the winner in 34. One of the two misses was in Colorado, in which the incumbent Michael Bennet (D) outpolled the challenger Ken Buck (R) by less than 1 percentage point. The 538 model had forecast that Buck would win by 1 percentage point. The second miss was in Nevada, in which the incumbent Harry Reid beat challenger Sharron Angle by 5.5 percentage points, whereas the 538 model had forecast Angle to win by 3.0 percentage points. Silver has speculated the error was due at least in part to the fact that polling organizations underrepresented Hispanic voters by not interviewing in Spanish.
In the remaining U.S. Senate contest, in Alaska, the outcome was still undetermined as of November 4, pending a count of the write-in ballots. In the end, the FiveThirtyEight forecast of GOP nominee Joe Miller as the winner proved wrong, as write-in candidate Lisa Murkowski, the incumbent Republican senator, prevailed.
The 538 model had forecast a net pickup of 7 seats by the Republicans in the Senate, but the outcome was a pickup of 6 seats.
The model for projecting the outcome of the House of Representatives was more complicated than those for the Senate and governorships. For one thing, House races are more subject to the force of national trends and events than are the other two. One way to account for this was to take into account trends in the "generic Congressional ballot." Use of such a macrolevel indicator, as well as macroeconomic indicators, is a common approach taken by political scientists to project House elections.
Furthermore, there was much less available public polling for individual House districts than there was for Senate or gubernatorial races. By the end of the 2010 election season, public polls were available for only about 25% of the districts. This is one reason why some analysts rely principally on making global or macro-level projections of the number of seats to be won by each party rather than trying to forecast the outcome in every individual district. Silver's FiveThirtyEight model, however, while weighting the generic partisan division as one factor, focused on developing estimates for each district. For this purpose he used information on past voting in the district (the Cook PVI), the quality of the candidates (in particular whether one was an incumbent), fundraising by each candidate, "expert ratings" of the races, public polls of the given race (if they were available), and, in the absence of public polls, a cautious use of private polls (i.e., polls conducted by or for partisan organizations or a candidate's own campaign organization).
In response to concerns that he was hedging his projection, Silver contended that in his model the uncertainty of the outcome was a feature, not a flaw. In comparison with previous Congressional elections, a far larger number of seats were contested or "in play" in 2010. While his model, which relied on simulating the election outcomes 100,000 times, generated a projected "most likely" net gain of 53 seats by the Republicans (two days before the election), he emphasized that the 95% confidence interval spanned roughly ±29–30 seats: "Tonight, our forecast shows Republicans gaining 53 seats – the same as in recent days, and exactly the same answer you get if you plug the generic ballot average into the simple formula. Our model also thinks the spread of potential outcomes is exceptionally wide: its 95 percent confidence interval runs from a 23-seat Republican gain to an 81-seat one".
On election eve, he reported his final forecast as follows:
Our forecasting model, which is based on a consensus of indicators including generic ballot polling, polling of local districts, expert forecasts, and fund-raising data, now predicts an average Republican net gain of 54 seats (up one from 53 seats in last night's forecast), and a median net Republican gain of 55 seats. These figures would exceed the 52 seats that Republicans won from Democrats in the 1994 midterms.
In final vote tallies as of December 10, 2010, the Republicans had a net gain of 63 seats in the House, 8 more than the total predicted on election eve, though still within the reported confidence interval.
The FiveThirtyEight model for state governors' races also relied basically on aggregating and projecting public polls in each race. However, Silver reported that gubernatorial elections in each state were somewhat more independent of what happened in other states than were either Senate or House of Representatives elections. That is, these races were somewhat more local and less national in focus.
Just before election day (October 31), the FiveThirtyEight projection was that there would be 30 Republican governors in office (counting states where there was no gubernatorial election in 2010), 19 Democratic governors, and 1 (actually 0.8) Other (Lincoln Chafee, who was leading in the polls running as an Independent in Rhode Island).
Of the 37 gubernatorial races, FiveThirtyEight correctly predicted the winner of 36. Only in Illinois, in which the Democratic candidate Pat Quinn defeated the Republican Bill Brady 46.6% to 46.1%, was the FiveThirtyEight prediction wrong.
While FiveThirtyEight devoted a lot of time to coverage of the 2012 Republican party primaries throughout 2011, its first effort to handicap the 2012 Presidential general election was published a year in advance of the election: "Is Obama Toast? Handicapping the 2012 Election" in The New York Times Magazine. Accompanying the online release of this article, Silver also published online "Choose Obama's Re-Election Adventure," an interactive toy that allowed readers to predict the outcome of the election based on their assumptions about three variables: President Obama's favorability ratings, the rate of GDP growth, and how conservative the Republican opponent would be. In February 2012 Silver updated his previous Magazine story with another one, "Why Obama Will Embrace the 99 Percent". This article painted a more optimistic picture of Obama's re-election chances. Another article, "The Fundamentals Now Favor Obama," explained how the model and Obama's prospects had changed between November and February.
On December 13, 2011, Silver published his first version of a primary election forecast for the Republican Party Iowa Caucuses. In this article he also described the basic methodology for forecasting the primaries; his approach relied solely on an adjusted average of state-level polls, and not on any other information about the campaign or on national polls. Silver later analyzed the prospects and results of each Republican caucus and primary. He maintained and regularly updated a set of vote projections, applying his aggregation methodology to the available polls. In keeping with a concern for the uncertainty of the forecasts, his projections showed both a point estimate and a confidence interval of the vote percentage projected for each candidate.
Silver rolled out the first iteration of his 2012 general election forecasting model on June 7, 2012. The model forecasts both the popular vote and the electoral college vote, with the latter being central to the exercise and involving a forecast of the electoral outcome in each state.
The forecast works by running simulations of the Electoral College, which are designed to consider the uncertainty in the outcome at the national level and in individual states. It recognizes that voters in each state could be affected by universal factors – like a rising or falling economic tide – as well as by circumstances particular to each state. Furthermore, it considers the relationships between the states and the ways they might move in tandem with one another. Demographically similar states like Minnesota and Wisconsin, for instance, are more likely to move in the same direction than dissimilar ones like New Hampshire and New Mexico. Although the model – which is distinct from the electoral map put together by The Times's political desk – relies fairly heavily on polling, it also considers an index of national economic conditions.
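A minimal Monte Carlo sketch of such a correlated-state simulation, assuming a multivariate normal distribution over state vote margins; the distributional form, inputs, and function names are illustrative, not the actual FiveThirtyEight model:

```python
import numpy as np

def simulate_electoral_college(mean_margins, cov, electoral_votes,
                               n_sims=10_000, seed=0):
    """Draw correlated state-level vote margins and tally electoral
    votes per simulation run. A positive margin means the state goes
    to candidate A; off-diagonal covariance terms let demographically
    similar states move in tandem, while the diagonal captures
    state-specific uncertainty."""
    rng = np.random.default_rng(seed)
    margins = rng.multivariate_normal(mean_margins, cov, size=n_sims)
    ev = np.asarray(electoral_votes)
    # Sum candidate A's electoral votes in each simulated outcome.
    return (margins > 0) @ ev
```

The share of simulation runs in which candidate A reaches 270 electoral votes then estimates that candidate's win probability.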
In the initial forecast, Barack Obama was estimated to win 291.3 electoral votes, compared to 246.7 for Mitt Romney. This corresponded to a 61.8% chance of Obama winning the electoral vote in November 2012. Obama was forecast to win 50.5% of the popular vote, compared to 49.4% for Romney.
The website provided maps and statistics about the electoral outcomes in each state as well as nationally. Later posts addressed methodological issues such as the "house effects" of different pollsters as well as the validity of telephone surveys that did not call cell phones.
Through the general election campaign, the blog tracked the movement in the projected electoral vote for Mitt Romney and Barack Obama. In the process it drew an enormous amount of traffic to The New York Times. On election night, November 6, it was reported that "Silver’s blog provided a significant—and significantly growing, over the past year—percentage of Times pageviews. This fall, visits to the Times’ political coverage (including FiveThirtyEight) have increased, both absolutely and as a percentage of site visits. But FiveThirtyEight’s growth is staggering: where earlier this year, somewhere between 10 and 20 percent of politics visits included a stop at FiveThirtyEight, last week that figure was 71 percent.... But Silver’s blog has buoyed more than just the politics coverage, becoming a significant traffic-driver for the site as a whole. Earlier this year, approximately 1 percent of visits to the New York Times included FiveThirtyEight. Last week, that number was 13 percent. Yesterday, it was 20 percent. That is, one in five visitors to the sixth-most-trafficked U.S. news site took a look at Silver’s blog".
In a series of posts in 2011 and 2012, FiveThirtyEight criticized forecasting methods that relied on macro-economic modeling of electoral outcomes. According to Silver, models based primarily on the macro-level performance of the economy (such as unemployment, inflation, and the performance of the stock market), presidential approval ratings (when an incumbent is running for re-election), and the ideological positioning of the (potential) opposing candidates were useful for forecasting the election outcome well in advance of election day, but only imprecisely.
One article stating this position, published exactly a year before the 2012 election day, was attacked in an online article in Bloomberg News by Ron Klain, the former chief of staff to Vice President Biden and a political advisor to Barack Obama:
For many years, a group of political scientists, mathematicians and scholars have argued that a handful of factors determine the outcome of presidential elections, irrespective of the campaigns.
Most famous among those thinkers is Allan Lichtman, whose "13 Keys to the White House" model (which looks at factors such as incumbency, the outcome of the previous midterm election and per capita economic growth) has forecast the popular-vote winner in each of the last seven elections.
More recently, the brilliant data analyst Nate Silver has employed a three-factor model (presidential approval rating, economic outlook and opponent's ideology) to forecast the 2012 outcome under a variety of scenarios. At least implicitly, he, too, is suggesting that the campaign itself is irrelevant to the result of the election.
One immediate reason to be skeptical of the models' forecasting prowess is that they point in opposite directions: Lichtman has interpreted his keys to forecast that President Barack Obama will be re-elected in 2012, while Silver rates Obama's chances at less than 50 percent.
As described elsewhere in this article, Silver's actual model for predicting the outcome of the 2012 election was more elaborate than the three-factor model he used for his long-term forecast, which set out a variety of scenarios whose electoral outcomes depended in part on who the Republican Party nominee would be. Silver himself criticized models that rely only on long-term or underlying macroeconomic and macropolitical factors – which Klain calls extrinsic factors – to predict the outcome, including Lichtman's model and those of other political scientists and economists who did not consider conditions more proximate to the election date, especially as reflected in opinion surveys. Klain, by contrast, argued that intrinsic factors are critical to the outcome of elections: in short, campaigns matter, and campaign spending matters.
In a response, Silver began by stating,
Unfortunately, Mr. Klain's article attributes to me a number of views that I am ambivalent about or actively disagree with, so it deserves a fairly long reply. I will also use this opportunity to respond to some criticisms that I have been receiving from political scientists. The irony is that I agree with Mr. Klain more than he realizes.
But let's start with Mr. Klain's central question: how much difference does campaign strategy make in determining the outcome of presidential elections?
Do all the ads, speeches, mailings, debates, online activity and rallies really change minds? Or is the outcome of the election the product of underlying fundamentals that are scarcely affected by such efforts?
This is obviously something of a false juxtaposition. It is extremely unlikely that campaigns don't matter at all. Now and then, you'll see a political scientist come fairly close to expressing this viewpoint, but that is certainly not the majority opinion within the discipline. The question, instead, is how much campaigns matter, and that is a difficult question to answer.
I strongly agree with Mr. Klain that political scientists as a group badly overestimate how accurately they can forecast elections from economic variables alone. I have written up lengthy critiques of several of these models in the past, which suffer from fundamental problems regardless of which variables they choose.
One of the things it took me a long time to learn about forecasting is that there's a difference between fitting data to past results and actually making a prediction. A regression model built from historical data is really just a description of statistical relationships that existed in the past. The forecaster hopes or assumes that the relationships will also apply in the future, but there is often a significant deterioration in performance.... Presidential forecasting models that rely on economic data are likely to be especially susceptible to these problems. Most of them are fit to data from a small sample of 10 to 20 past elections but have a choice of literally hundreds of defensible economic or political variables to sort through.
A more tangible question is how well economic statistics alone can really predict elections. I have written previously that a good assumption is that they can explain perhaps 50 percent of the results. But based on some further research that I will soon publish, I suspect that estimate was too high, and that the answer is more like 30 or 40 percent when the models are applied to make real, out-of-sample forecasts. Economic variables that perform better than that over a small subset of elections tend to revert to the mean or even perform quite poorly over larger samples.
So say that 60 percent of elections cannot be explained by economic variables. Should all of the remaining credit go to campaigns?
No, of course not. First, the fact that widely available published economic statistics cannot explain more than about 40 percent of election results does not mean that the actual living and breathing economy cannot....
After further discussing how and why campaigns and candidates do make a difference, Silver concluded:
I apologize if some of this seems prickly. I lived through the Moneyball wars in baseball and then saw how much progress the sport made once everyone learned how much they had in common.
Baseball games, however, are played 162 times per year, so the learning process is accelerated. But presidential elections are held only once every 4 years, and we make the same mistakes over and over again. The outcome of the election isn't especially predictable right now, but here are four predictions you can take to the bank:
- 1. Next year, the strategists of the winning campaigns will be praised as brilliant.
- 2. Next year, the strategists of the losing campaign will be blamed for a long series of mistakes.
- 3. Next year, some of the political science models will hit the outcome right on the nose.
- 4. Next year, some of the political science models will miss wildly in one direction or another.