
Reasonable Doubt

Doug Pinkham, Public Affairs Council President

If a gambler makes a bad bet, he loses the grocery money. If a financial analyst can’t accurately forecast earnings, he loses his job.

But if a political pundit makes predictions that are outlandish and mostly wrong, he gets better TV ratings. Commentator Dick Morris, who predicted that Mitt Romney would win in a landslide and the Republicans would easily recapture the Senate, provides an obvious example.

(While we’re piling on, I should mention that Morris also predicted the 2008 presidential race would pit Hillary Clinton against Condoleezza Rice. His book on that memorable matchup is still available from Amazon.com.)

Morris is not the only one unskilled at making predictions. It turns out that we might be better off flipping a coin than relying on the opinions of most professional prognosticators.

So said Philip Tetlock in his 2006 book Expert Political Judgment: How Good Is It? How Can We Know? Tetlock conducted a 20-year study that found so-called experts to be incompetent at forecasting the future. That’s because they tend to have a narrow view of the world, which attracts a large audience of true believers. What Tetlock called “boom and doom pundits” were the most unreliable group.

Now, six years later, Nate Silver, author of the FiveThirtyEight blog at The New York Times, has combined mathematical modeling with clever insights to transform the field of political forecasting. Leading up to the November elections, he was criticized by commentators who insisted the Obama-Romney race was a “tossup.” But Silver called all 50 states and the District of Columbia correctly, which was one state better than his record in 2008. His predictions for 2012 Senate races were just as accurate.

Despite Silver’s new celebrity status in Washington, his recent book, The Signal and the Noise: Why So Many Predictions Fail — But Some Don’t, is not really about politics. It’s about the art and science of forecasting. He explains various prediction theories and tells us why some things aren’t terribly difficult to predict (baseball player performance), while other things are frustratingly hard (earthquakes).

Critics, including some pollsters who don’t appreciate his ratings of their performance, have tried to dismiss Silver as a stats geek. That’s an oversimplification. Silver argues that the emergence of “Big Data” is transforming the world in many ways, but data analysis still requires human intervention. “It is when we deny our role in the process that the odds of failure rise,” he writes. “Before we demand more of our data, we need to demand more of ourselves.”

In political forecasting, the real experts understand opinion poll design, historical trends, economic indicators and how different variables relate to each other. For example, while Democratic voter enthusiasm and lower-than-average unemployment in Ohio set the stage for an Obama victory, making that forecast required judgment about the influence of these factors.

Silver also believes that forecasters should talk probabilities rather than make hard-and-fast predictions about the future. Many have pointed out that he called the elections correctly, but few have noted that on Election Day he gave Obama a 90.9 percent chance of winning. He also computed that Romney had a 13.8 percent chance of capturing the popular vote and that there was a 6.4 percent chance of a recount. All of these numbers changed as events unfolded in the days and weeks leading up to Nov. 6.

Silver’s success — and the similar performance of other “master quants” such as Stanford’s Simon Jackman and Emory’s Drew Linzer — has got me thinking about the lack of rigor in the way some companies do their public affairs forecasting. Many engage in scenario planning, but often it’s for the purpose of crisis management (“What might go wrong?”) or general business planning (“What will our competitive environment be?”). Neither approach will necessarily prepare a firm to deal with a dynamic issues landscape.

Just as voter support for a candidate in Ohio can be a predictor of that candidate’s success in Michigan, support for a ballot initiative in one state can influence legislation in nearby jurisdictions. A scandal in one company can affect a whole industry’s reputation. Activism in Europe can spell activism in the U.S. — or not, depending on the issue.

Everyone wants to have certainty — especially CEOs. Often, senior management looks to public affairs executives to predict not only the outcome of elections but also the trajectory of issues. Rather than play fortune-teller, public affairs executives would do better to assess the odds of various scenarios occurring, the factors that would affect those odds and the strategies that would be needed should those scenarios become reality.

This is really just a form of risk management, though the concept of “conditional probability” can also be used to consider the chances of policy changes opening up political opportunities.
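The conditional-probability reasoning described above can be sketched in a few lines of Python. The scenario and every probability below are invented purely for illustration — they are not from Silver's book or any real forecast — but the arithmetic (the law of total probability and Bayes' rule) is the standard machinery:

```python
# Illustrative only: invented probabilities for a scenario-planning exercise.
# P(A)        = chance natural gas prices stay low through the year
# P(B | A)    = chance renewable-energy incentives are cut, given low prices
# P(B | ~A)   = chance the incentives are cut anyway

p_a = 0.40
p_b_given_a = 0.30
p_b_given_not_a = 0.10

# Law of total probability: overall chance the incentives are cut.
p_b = p_a * p_b_given_a + (1 - p_a) * p_b_given_not_a

# Bayes' rule: if the incentives ARE cut, how likely is it that
# low gas prices were the driving condition?
p_a_given_b = p_a * p_b_given_a / p_b

print(f"P(incentives cut)             = {p_b:.2f}")    # 0.18
print(f"P(low prices | cut observed)  = {p_a_given_b:.2f}")  # 0.67
```

Updating the inputs as events unfold — exactly what Silver did daily before Nov. 6 — and re-running the calculation is the whole discipline in miniature.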

How will an improving economy affect concern about environmental issues? How would cutbacks in defense spending influence opinions about national security? How will lower natural gas prices affect incentives for renewable energy production?

These are questions that go beyond simple calculations. As with election poll analysis, they require human intervention. And that’s where the insights and experience of a skilled public affairs executive can make a difference.

The entire article is available on the Public Affairs Council blog.

Don't forget to sign up for K Street Café's mailing list to receive news about exclusive content and events.