The New Hampshire Democratic primary was one of the few(?) events in which prediction markets did not give an “accurate” forecast for the winner. In a typical “accurate” prediction, the candidate whose contract has the highest price ends up winning the election.
This result, combined with the increasing interest in (and hype about) the predictive accuracy of prediction markets, generated a huge backlash. Many opponents of prediction markets pointed out the “failure” and started questioning the overall concept and the ability of prediction markets to aggregate information.
Interestingly enough, such failed predictions are absolutely necessary if we want to take the concept of prediction markets seriously. If the frontrunner in a prediction market always won, then the markets would be a seriously flawed mechanism. In that case, an obvious trading strategy would be to buy the frontrunner’s contract and simply wait for the market to expire to collect a guaranteed, huge profit. If, for example, Obama was trading at 66 cents and Clinton at 33 cents (indicating that Obama is twice as likely to be the winner), and the markets were “always accurate”, then it would make sense to buy Obama’s contract the day before the election and get $1 back the next day. If this happened every time, the market would not be efficient; it would be a flawed, inefficient market.
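The arithmetic behind this argument can be sketched in a few lines. The 66/33-cent figures are the hypothetical numbers from the example above, not real market data:

```python
# Expected profit from buying a contract at a given price, under two
# assumptions about what that price means. Numbers are the hypothetical
# 66-cent example from the post, not real market data.

def expected_profit(price, win_prob, payout=1.0):
    """Expected profit per contract: payout * P(win) minus the price paid."""
    return payout * win_prob - price

# If the 66-cent price is a calibrated probability, the trade is fair:
fair = expected_profit(0.66, win_prob=0.66)    # zero: no free lunch

# If the frontrunner *always* wins, the same trade is free money:
biased = expected_profit(0.66, win_prob=1.0)   # positive: 1.00 - 0.66

print(fair, biased)
```

When prices equal probabilities, expected profit from buying any contract is zero; a market whose frontrunner always wins hands traders a guaranteed 34 cents per contract, which is exactly the inefficiency the paragraph describes.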
In fact, I would argue that the recent streak of successes in which the markets always picked the winner of the election has been an anomaly, pointing to the favorite bias that exists in these markets. The markets were more accurate than they should have been, according to the trading prices. If the market never fails, then the prices do not reflect reality, and the favorite is actually underpriced.
The other point that has been raised in many discussions (mainly by a mainstream audience) is how we can even define probability for a one-time event like the Democratic nomination for the 2008 presidential election. What does it mean that Clinton has a 60% probability of being the nominee and Obama a 40% probability? The common answer is that “if we repeated the event many times, Clinton would be the nominee in 60% of the cases and Obama in 40% of the cases”. Even though this is an acceptable answer for someone used to working with probabilities, it makes very little sense to the “average Joe” who wants to understand how these markets work. The notion of repeating the nomination process multiple times is an absurd concept.
The discussion brings to mind the ferocious battles between Frequentists and Bayesians over the definition of probability. Bayesians could not accept that a Frequentist approach can define probabilities for such events: “How can we define the probability of success for a one-time event?” A Frequentist would approach the prediction market problem by defining a space of events and would say:
After examining prediction markets for many state-level primaries, we observed that in 60% of the cases the frontrunners whose contracts were priced at 0.60 one day before the election were actually the winners of the election. In 30% of the cases, the candidates whose contracts were priced at 0.30 one day before the election were actually the winners of the election, and so on.
A Bayesian would criticize such an approach, especially when the sample size is small, and would point to the need for an initial belief function (a prior) that should be updated as information signals arrive from the market. Interestingly enough, the two approaches tend to be equivalent given infinite samples, which is, however, rarely the case.
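A minimal sketch of that Bayesian updating, using a Beta prior over the true win rate of contracts priced at 0.60. The prior parameters and the sequence of outcomes are made-up illustrations, not market data:

```python
# Bayesian updating of a belief about how often contracts priced at 0.60
# actually win. A Beta(a, b) prior is conjugate to win/lose observations,
# so each observed outcome simply increments one of two counters.

def update_beta(a, b, won):
    """Update a Beta(a, b) belief with one observed contract outcome."""
    return (a + 1, b) if won else (a, b + 1)

# Start with a prior centered on the market price itself: Beta(6, 4)
# has mean 6 / (6 + 4) = 0.60.
a, b = 6, 4

# Suppose we then watch five such contracts settle: four winners, one loser.
for won in [True, True, False, True, True]:
    a, b = update_beta(a, b, won)

posterior_mean = a / (a + b)   # 10 / 15, roughly 0.667
print(f"posterior mean win rate: {posterior_mean:.3f}")
```

With only five observations the posterior barely moves off the prior, which is the Bayesian's point: small samples alone should not dominate the estimate.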
Crossposted from my blog
All well and good in theory. But this does not really deal with the question of market inefficiency. Nor does it take into account the innumerable occasions when the probabilities in the market do not sum up to 1.
True, efficiency is an issue, and you know that I am not convinced that the markets are efficient.
However, we should separate two things: market efficiency and market accuracy. Efficiency is the rate at which the market incorporates new information and eliminates arbitrage opportunities. Accuracy is the probability with which the market predicts the correct outcome of an event. The main claim to fame for the markets is that they self-report their accuracy, and that “the prices are probabilities”.
We can measure the accuracy of the market by following the outline discussed above. One axis is the price of the contract at time t before its expiration, and the other axis is the rate at which the corresponding event happens (…in 60% of the cases the event that trades at 0.6 happens, in 30% of the cases the event that trades at 0.3 happens, and so on…). A perfectly accurate market should yield a straight diagonal line as t gets close to 0. Any deviation of the experimental results from that line indicates an accuracy bias. There are many papers documenting the favorite-longshot bias in these markets (the favorite is underpriced, the longshots are overpriced), so there is no need to repeat this here. An interesting question is how large t can be while the market still has reasonable accuracy. Furthermore, if the biases are systematic and robust, we can use a calibration function that adjusts the market prices, compensating for the biases, so that they reflect real-life probabilities.
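The measurement described above can be sketched as a calibration curve: bin contracts by their price at time t and compare each bin's average price with the fraction of those contracts that actually won. The (price, won) pairs below are fabricated for illustration:

```python
# Calibration curve for a set of settled contracts: group them into price
# bins and compute the empirical win rate per bin. Perfect calibration
# means win rate tracks price; a higher win rate means the contracts in
# that bin were underpriced.

from collections import defaultdict

def calibration_curve(observations, n_bins=10):
    """observations: list of (price, won) with price in [0, 1], won a bool.
    Returns a list of (mean price, empirical win rate), one per non-empty bin."""
    bins = defaultdict(list)
    for price, won in observations:
        bins[min(int(price * n_bins), n_bins - 1)].append((price, won))
    curve = []
    for idx in sorted(bins):
        prices = [p for p, _ in bins[idx]]
        wins = [w for _, w in bins[idx]]
        curve.append((sum(prices) / len(prices), sum(wins) / len(wins)))
    return curve

# Fabricated example data: a favorite bin near 0.60 and a longshot bin near 0.30.
data = [(0.62, True), (0.58, True), (0.61, False),
        (0.31, False), (0.28, True), (0.33, False)]
for mean_price, win_rate in calibration_curve(data):
    print(f"price ~ {mean_price:.2f}  ->  win rate {win_rate:.2f}")
```

A systematic gap between the two columns is exactly the kind of robust bias that a calibration function could then correct for.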
Measuring efficiency is a trickier concept. The general definition of efficiency is that “the market immediately incorporates all available information”. Being able to predict price movements indicates inefficiency; so do prices for a set of mutually exclusive, exhaustive outcomes that sum to anything other than 1. However, it is difficult to prove definitively that a market is efficient. We can only say that “we were not able to spot inefficiencies”; it is very difficult to prove that “the market is efficient”.
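The sum-to-1 check mentioned above is the easiest of these to make concrete: if contracts over mutually exclusive, exhaustive outcomes sum to less than the payout, buying one of each locks in a riskless profit, since exactly one contract pays out. The prices below are made up:

```python
# Riskless profit from buying one of every contract in a book whose prices
# sum to less than the $1 payout. Exactly one outcome occurs, so the set of
# contracts always pays out exactly $1. Prices are illustrative.

def arbitrage_profit(prices, payout=1.0):
    """Guaranteed profit per full set of contracts purchased."""
    return payout - sum(prices)

underpriced_book = [0.55, 0.30, 0.10]   # sums to 0.95
profit = arbitrage_profit(underpriced_book)
print(f"guaranteed profit per set: ${profit:.2f}")
```

An efficient market should arbitrage such a book back toward a sum of 1 almost immediately; a persistent gap is direct evidence of the inefficiency described above.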
The two metrics are, of course, highly connected close to the expiration of the contract. If the market is not efficient, then it will not be accurate either: if material information becomes available just before expiration, an inefficient market will not have incorporated it.
Are you a Bayesian or a Frequentist? (Or Bayesian Statistics 101)
The truth about prediction markets