Truth in Advertising – Meet Prediction Markets

Most published papers on prediction markets (there aren't many) paint a wildly rosy picture of their accuracy. Perhaps that is because many of these papers are written by researchers affiliated with prediction market vendors.

Robin Hanson is Chief Scientist at Consensus Point. I like his ideas about combinatorial markets and market scoring rules, but I think he over-sells the accuracy and usefulness of prediction markets. His concept of Futarchy is an extreme example of this. Robin loves to cite HP's prediction markets in his presentations. Emile Servan-Schreiber (Newsfutures) is mostly level-headed but still a big fan of prediction markets. Crowdcast's Chief Scientist is Leslie Fine; their Board of Advisors includes Justin Wolfers and Andrew McAfee. Leslie seems to have a more practical understanding than most, as evidenced by this response about the types of questions that Crowdcast's prediction markets can answer well: "Questions whose outcomes will be knowable in three months to a year and where there is very dispersed knowledge in your organization tend to do well." She gets it: prediction markets aren't all things to all people.

An Honest Paper

To some extent, all of these researchers over-sell the accuracy of prediction markets and the range of useful questions they can answer. So it is refreshing to find an honest article about prediction market accuracy. Not too long ago, Sharad Goel, Daniel M. Reeves, Duncan J. Watts, and David M. Pennock published Prediction Without Markets. They compared prediction markets against alternative forecasting methods in three public domains: football games, baseball games, and movie box office receipts.

They found that prediction markets were only slightly more accurate than the alternative forecasting methods. As an added bonus, these researchers considered the principle that prediction market accuracy should be judged by its effect on decision-making. So few researchers have done this! A very small improvement in accuracy is not material (significant) if it doesn't change the decision made with the forecast. This is a well-established concept in public auditing, used when deciding whether an error is significant and requires correction. I have discussed this concept before.
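To make the materiality idea concrete, here is a minimal sketch in Python. The decision rule, threshold, and forecast numbers are all hypothetical (not from the Goel et al. paper); the point is that when both forecasts land on the same side of the decision threshold, the extra accuracy changes nothing downstream.

    # Hypothetical decision rule: launch the product only if forecast
    # demand exceeds 10,000 units. All figures are illustrative.
    DECISION_THRESHOLD = 10_000

    internal_forecast = 12_400  # existing forecasting process
    market_forecast = 12_150    # prediction market (slightly more accurate)
    actual_demand = 11_900

    def decision(forecast):
        return "launch" if forecast > DECISION_THRESHOLD else "hold"

    # The market forecast is indeed closer to the actual outcome...
    print(abs(market_forecast - actual_demand)
          < abs(internal_forecast - actual_demand))  # True

    # ...but both forecasts produce the same decision, so the improvement
    # is immaterial in the auditing sense.
    print(decision(internal_forecast), decision(market_forecast))  # launch launch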

While they acknowledge that prediction markets may have a distinct advantage over other forecasting methods, in that they can be updated much more quickly and at little additional cost, they rightly suggest that most business applications have little need for instantaneously updated forecasts. Overall, they conclude that "simple methods of aggregating individual forecasts often work reasonably well relative to more complex combinations (of methods)."
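For illustration, here is a minimal sketch of the kind of simple aggregation the authors have in mind; the individual forecasts are hypothetical, and an unweighted mean stands in for their "simple methods":

    # A plain, unweighted average of individual probability forecasts.
    # Per Goel et al., methods this simple often perform close to a
    # prediction market's price.
    individual_forecasts = [0.55, 0.70, 0.62, 0.58, 0.66]  # hypothetical

    simple_average = sum(individual_forecasts) / len(individual_forecasts)
    print(round(simple_average, 3))  # 0.622 -- compare to a market price of, say, 0.63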

For Extra Credit

When we compare alternatives, it is usually so that we can select the best one. With prediction markets, it is not safe to assume the choices are mutually exclusive. Especially in enterprise applications, prediction markets depend heavily on the alternative forecasting methods as a primary source of market information. Of course, there are other sources of information, and the markets are expected to minimize bias in order to generate more accurate predictions.

In the infamous HP prediction markets, the forecasts were eerily close to the company's internal forecasts. It wasn't difficult to see why: the same people were involved in both predictions! The General Mills prediction markets showed similar correlations, even when only some of the participants were common to both methods. The implication is that you cannot replace an existing forecasting system with a prediction market and expect the results to be as accurate. The two (or more) methods work together.

Not only do most researchers (Pennock et al. excepted) recommend adopting prediction markets based on insignificant improvements in accuracy; they also fail to consider the effect (or lack thereof) on decision-making in their cost/benefit analyses. Even those who do the cost/benefit math don't do it right.

Where a prediction market depends on other forecasting methods, the marginal cost is the total cost of running the market; there is no credit for eliminating the cost of the alternative forecasting methods, because they must be kept as inputs. The marginal benefit is the value expected from choosing a different course of action than the one that would have been taken based on the less accurate prediction. That is, a slight improvement in prediction accuracy that does not change the course of action has no marginal benefit.
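Here is a minimal sketch of that marginal cost/benefit arithmetic. The costs, payoff, and the assumption that the decision does not change are all hypothetical figures for illustration only:

    # Marginal cost: the full cost of running the market, since the
    # existing forecasting methods must be kept as an input to it.
    market_running_cost = 50_000  # hypothetical annual cost

    # Marginal benefit: value gained only when the market's forecast
    # changes the decision versus the status-quo forecast.
    decision_changed = False            # the common case for "slight" gains
    value_of_better_decision = 400_000  # hypothetical payoff if it did

    marginal_benefit = value_of_better_decision if decision_changed else 0
    net_benefit = marginal_benefit - market_running_cost
    print(net_benefit)  # -50000: the market does not pay for itself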

By this measure, a prediction market that is only "slightly" more accurate than the alternative forecasting approaches is just not good enough. So far, there is little, if any, evidence that prediction markets are anything more than "slightly" better than existing methods. Still, most of our respected researchers continue to tout prediction markets. Even a technology guru like Andrew McAfee doesn't get it, as shown in the little PR piece he wrote shortly after joining Crowdcast's Board of Advisors.

Is it a big snow job or just wishful thinking?

[Cross-posted from Toronto Prediction Market Blog]
