[…] In virtually every case, the prediction market forecast is closer to the official HP forecast than it is to the actual outcome. Perhaps these markets are better at forecasting the forecast than they are at forecasting the outcome! Looking further into the results, while most of the predictions have a smaller error than the HP official forecasts, the differences are, in most cases, quite small. For example, in Event 3, the HP forecast error was 59.549% vs. 53.333% for the prediction market. They’re both really poor forecasts. To the decision-maker, the difference between these forecasts is not material.
There were eight markets that had HP official forecasts. In four of these (50%), the forecast error was greater than 25%. Only three of the prediction market forecast errors were greater than 25%, which can hardly be a ringing endorsement of the accuracy of prediction markets (at least in this study). […]
To the despair of the Nashville imbecile, Paul's analysis is quite similar to mine (circa February 14, 2009):
The prediction market technology is not a disruptive technology, and the social utility of the prediction markets is marginal. Number one, the aggregated information has value only for the totally uninformed people (a group that comprises those who overly obsess with prediction markets and have a narrow cultural universe). Number two, the added accuracy (if any) is minute and, anyway, doesn't fill the gap between expectations and omniscience (which is how people judge forecasters). In our view, the social utility of the prediction markets lies in efficiency, not in accuracy. In complicated situations, the prediction markets integrate expectations (informed by facts and expertise) much faster than the mass media do. Their efficiency is their uniqueness. It is their velocity that we should put to work.
Prediction markets are not a disruptive technology, but merely another means of forecasting.
Go read Paul's analysis in full.
I would like to add two things to Paul's conclusion:
One bit of criticism about my pamphlet (The Truth About Prediction Markets) goes like this: Velocity without accuracy is dumb.
That is not true.
Let's imagine, for the sake of the exercise, that Barack Obama does not pick Kathleen Sebelius to head HHS. The velocity argument remains valid: fed by the vertical media (in this case, Yahoo News republishing the Associated Press), the prediction markets integrated expectations (informed by facts and expertise) much faster than the mass media did.
The argument about the velocity of the prediction markets cannot be contradicted. No way.
I have spent several hours re-reading the 2004 AEI-Brookings book, "Information Markets" (by which they mean "prediction markets"). It is a collection of unenlightening research articles, except for the IEM article, which is outstanding on both the factual and theoretical sides.
In the conclusion of their introduction, Robert Hahn and Paul Tetlock wrote that they want their readers to contemplate the idea that prediction markets could make a "big" difference and "revolutionize public- and private-sector decision-making". Well, four years later, it is clear that those big dreams didn't pan out. Not a single mass media outlet has praised the public prediction markets for their work on the 2008 US presidential election (I am talking about a post-mortem analysis of Election Day, not the primaries). Not a single one. (Not even Justin Wolfers.) And the number of corporations using enterprise prediction markets is still minute. The thinkers who wrote this book ("Information Markets") all made the mistake of putting the emphasis on accuracy instead of efficiency. That was the foundational flaw. We should reset and reboot the field of prediction markets.
Previously: The truth about prediction markets
I am re-reading a 2007 article by Region Focus's Vanessa Sumo:
"Ask The Market – Companies are leading the way in the use of prediction markets. The public sector may soon follow." (PDF)
Here is what I see on the front page:
– "one or two weeks in advance"
– "even up to five weeks in advance"
Marketing-wise, velocity is a much more potent argument than accuracy. Who cares about an added accuracy of +2.7% (and even that is debated)? If anything, that's peanuts.
You cannot make a case against velocity. Impossible.
UPDATE: Put the PDF link in the address box of your browser (as opposed to clicking on it, or right-clicking on it).
Come to the wonderful world of collective intelligence, wisdom of crowds, and prediction markets!… The sun shines bright, the market-generated predictions are vastly superior to the polls as election predictors, and the track record of the public prediction markets stretches as far as the eye can see. There are opportunities aplenty in the field of prediction markets, and the trading technology is cheap. Every working enterprise can have its own internal prediction exchange, and inside every exchange, a set of enterprise prediction markets that correctly predicts the future of business, which their happy, all-American CEO listens to. Life is good in the magic world of prediction markets… it's paradise on Earth.
Ha! ha! ha! ha!… That's what they tell you, anyway, because they are selling an image (just as Bernie Madoff did). They are selling it through their vendor websites, vendor conferences, vendor-inspired articles in blogs, newspapers and magazines, and interviews of vendor data-fed professors in the media.
The prediction market technology is not a disruptive technology, and the social utility of the prediction markets is marginal. Number one, the aggregated information has value only for the totally uninformed people (a group that comprises those who overly obsess with prediction markets and have a narrow cultural universe). Number two, the added accuracy (if any) is minute and, anyway, doesn't fill the gap between expectations and omniscience (which is how people judge forecasters). In our view, the social utility of the prediction markets lies in efficiency, not in accuracy. In complicated situations, the prediction markets integrate expectations (informed by facts and expertise) much faster than the mass media do. Their efficiency is their uniqueness. It is their velocity that we should put to work.
Here's our definition of prediction markets:
A prediction market is a market for a contract that yields payments based on the outcome of a partially uncertain future event, such as an election. A contract pays $100 only if candidate X wins the election, and $0 otherwise. When the market price of an X contract is $60, the prediction market believes that candidate X has a 60% chance of winning the election. The price of this event derivative represents the imputed perceived likelihood of the partially uncertain future outcome (i.e., its aggregated expected probability). A 60% probability means that, in a series of events each with a 60% probability, the favored outcome is expected to occur 60 times out of 100, and the unfavored outcome is expected to occur 40 times out of 100.
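The price-to-probability reading described above can be sketched in a few lines. This is an illustrative sketch, not code from the original text; the function names and the $100 winner-take-all payout are the assumptions stated in the definition:

```python
# Sketch: treating a winner-take-all contract price as the market's
# implied probability, assuming the contract pays $100 if the event
# occurs and $0 otherwise (as in the definition above).

def implied_probability(price_dollars, payout_dollars=100.0):
    """Convert a contract price into the market's implied probability."""
    return price_dollars / payout_dollars

def expected_payout(probability, payout_dollars=100.0):
    """Expected value of holding the contract to expiration."""
    return probability * payout_dollars

p = implied_probability(60.0)   # a $60 contract on candidate X
print(p)                        # 0.6 -> a 60% chance of winning
print(expected_payout(p))       # fair price equals the expected payout
```

The round trip makes the definition concrete: a $60 price implies a 60% chance, and a 60% chance implies a $60 fair price.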
Each prediction exchange organizes its own set of real-money and/or play-money markets, using either a continuous double auction (CDA) or a market scoring rule (MSR) mechanism, with or without an automated market maker.
Prediction markets enable us to attain collective intelligence. Prediction markets produce dynamic, objective probabilistic predictions on the outcomes of future events by aggregating the disparate pieces of information that the traders bring when they agree on prices. The event derivative traders are informed by the primary indicators (i.e., the primary sources of information), such as the polls. These informed speculators then execute their transactions based on their anticipations about the future, anticipations that will be either confirmed or disproved.
The value of a set of prediction markets consists in the added accuracy that these prediction markets provide relative to the other meta predictive mechanisms, times the value of accuracy in improved decisions, minus the cost of maintaining these prediction markets, relative to the cost of the other meta predictive mechanisms. A highly accurate set of prediction markets has little value if some other meta predictive mechanism(s) can provide similar accuracy at a lower cost, or if very few substantial decisions are influenced by accurate predictions on its topic.
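The value equation in the paragraph above can be written out with numbers. All figures below are invented purely for illustration; nothing in the original text supplies them:

```python
# Hypothetical figures, only to illustrate the value equation above:
# net value = (added accuracy x value of accuracy in improved decisions)
#             - added cost of running the markets.

def net_value(added_accuracy_points, dollars_per_point, added_running_cost):
    """Net value of a set of prediction markets versus the next-best
    meta predictive mechanism (all inputs are assumptions)."""
    return added_accuracy_points * dollars_per_point - added_running_cost

# Say the markets are 2 points more accurate than the next-best mechanism,
# each point of accuracy is worth $10,000 in better decisions, and the
# markets cost $15,000 (or $25,000) more to run than the alternative:
print(net_value(2, 10_000, 15_000))   # 5000 -> worth running
print(net_value(2, 10_000, 25_000))   # -5000 -> not worth running
```

The second call shows the point of the paragraph: even an accurate set of markets has negative value when a cheaper mechanism gets close enough.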
PS: I am updating the content of this webpage a bit over time, so as to finesse the message.
As I write this, Intrade gives the advantage to McCain over Obama and has the Republican party even with the Democratic party to win the election, whereas all the other prediction markets (IEM, Betfair, and the NewsFutures play-money kind) still favor a Democrat in the White House. That disconnect prompted Chris to wonder aloud whether Intrade is faster than the other markets to incorporate the latest polls, perhaps because of its "bigger liquidity".
That's an interesting reaction on several levels.
First, reactivity and accuracy are not to be confused for one another. Given that market prices are supposed to be more accurate and more stable than fickle U.S. raw polls (Berg et al., 2008), one should not necessarily be impressed by the market that is quickest to mirror the latest polls. I very much doubt that traders in the "other" markets have not heard about the latest polls giving McCain an edge. Rightly or wrongly (it is too soon to tell), they just gave those polls less weight than the Intrade traders apparently did.
Second, the argument from "bigger liquidity" does not hold. Recently, Paul Tetlock analyzed Tradesports data in depth and found that more liquidity may in fact make the market dumber. He concludes: "In both sports and financial prediction markets, the calibration of prices to event probabilities does not improve with increases in liquidity, and the forecasting resolution of market prices actually worsens with increases in liquidity."
My personal theory is that Intrade has a hair-trigger Republican bias which is not found in the other markets, because Intrade appeals to, and is marketed to, the more Republican-leaning segments of the U.S. population. In my opinion, the Intrade/Tradesports Republican bias was already evident in the 2004 election, as this analysis shows.
Of course, I may be completely wrong. In any case, I find today's dual disconnect (between the polls and most of the markets, on the one hand, and between Intrade and the other markets, on the other) to be two very interesting data points that should be duly recorded, so we can come back to them later, with hindsight.
Panos Ipeirotis in a comment here:
[W]e should try to separate two things: market efficiency and market accuracy. Efficiency is the rate at which the market incorporates new information and prevents any arbitrage opportunities. Accuracy is the probability that the market predicts the correct outcome of an event. The main claim to fame for the [prediction] markets is that they self-report their accuracy, and that "the prices are probabilities".
We can measure the effectiveness of the market by following the outline discussed above. One axis is the price of the contract at time t before the expiration of the contract, and the other axis is the rate at which the event happens (in 60% of the cases, the event that trades at 0.60 happens; in 30% of the cases, the event that trades at 0.30 happens; and so on). A perfectly accurate market should produce a straight line as time t gets close to 0. Any deviation in the experimental results indicates an accuracy bias. Many papers document the favorite-longshot bias in these markets (underpricing the favorites, overpricing the longshots), so there is no need to repeat this here. An interesting question is how big the bias can be while the market still has reasonable accuracy. Furthermore, if the biases are systematic and robust, we can use a calibration function that adjusts the market prices, compensating for the biases, to reflect real-life probabilities.
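The calibration check described above can be sketched in a few lines of Python. The price/outcome history below is made up purely for illustration:

```python
# Bucket contracts by their price shortly before expiration, then compare
# each bucket's price with the observed frequency of the event.
from collections import defaultdict

# (price close to expiration, did the event happen?) -- invented data
history = [
    (0.6, True), (0.6, True), (0.6, False), (0.6, True), (0.6, False),
    (0.3, False), (0.3, True), (0.3, False), (0.3, False), (0.3, False),
]

def calibration(observations):
    """Observed event frequency for each traded price."""
    buckets = defaultdict(list)
    for price, happened in observations:
        buckets[price].append(happened)
    return {price: sum(events) / len(events)
            for price, events in buckets.items()}

for price, freq in sorted(calibration(history).items()):
    print(f"priced at {price:.2f} -> happened {freq:.0%} of the time")
```

In this toy data the 0.60 bucket is perfectly calibrated (the event happens 60% of the time), while the 0.30 bucket happens only 20% of the time: an overpriced longshot, the direction of bias the text mentions.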
Measuring efficiency is trickier. The general definition of efficiency is that "the market immediately incorporates all available information". Being able to predict price movements indicates inefficiency. Having prices for an event summing up to anything other than 1 indicates inefficiency. However, it is difficult to have definite proof that a market is efficient. We can only say that "we were not able to spot inefficiencies"; it is very difficult to prove that "the market is efficient".
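The sum-to-1 test above is concrete enough to sketch. Prices are in cents here to keep the arithmetic exact, and the books of prices are invented for illustration:

```python
# If prices over mutually exclusive outcomes sum to more than $1, selling
# every contract locks in a riskless profit; if they sum to less than $1,
# buying every contract does. Either gap is an arbitrage opportunity,
# hence evidence of inefficiency.

def arbitrage_gap_cents(prices_in_cents):
    """Deviation (in cents) of a book of prices from summing to $1."""
    return sum(prices_in_cents) - 100

print(arbitrage_gap_cents([50, 30, 20]))   # 0 -> no arbitrage spotted
print(arbitrage_gap_cents([55, 30, 20]))   # 5 -> sell all three outcomes
```

As the text notes, a zero gap only means no inefficiency was spotted by this particular test, not that the market is efficient.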
The two metrics are, of course, highly connected close to the expiration of the contract. If the market is not efficient, then it will not be accurate either: if any material information becomes available just before the expiration of the contract, the market will not have incorporated it.
The New Hampshire Democratic primary was one of the few(?) events in which prediction markets did not give an "accurate" forecast for the winner. In a typical "accurate" prediction, the candidate whose contract has the highest price ends up winning the election.
This result, combined with increasing interest in (and hype about) the predictive accuracy of prediction markets, generated a huge backlash. Many opponents of prediction markets pointed out the "failure" and started questioning the overall concept and the ability of prediction markets to aggregate information.
Interestingly enough, such failed predictions are absolutely necessary if we want to take the concept of prediction markets seriously. If the frontrunner in a prediction market always won, then the markets would be a seriously flawed mechanism. In such a case, an obvious trading strategy would be to buy the frontrunner's contract and simply wait for the market to expire to collect a guaranteed, huge profit. If, for example, Obama was trading at 66 cents and Clinton at 33 cents (indicating that Obama is twice as likely to be the winner), and the markets were "always accurate", then it would make sense to buy Obama's contract the day before the election and get $1 back the next day. If this happened every time, then this would not be an efficient market. This would be a flawed, inefficient market.
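The arbitrage argument above reduces to simple arithmetic. A sketch with the 66-cent example, in cents so the numbers stay exact (the function names are mine, not from the text):

```python
# If the frontrunner ALWAYS won, buying the frontrunner's contract would
# be free money -- something an efficient market cannot allow.
# Prices in cents; each contract pays 100 cents if the candidate wins.

def guaranteed_profit_cents(price_cents):
    """Per-contract profit if the frontrunner's win were certain."""
    return 100 - price_cents

def expected_profit_cents(price_cents, true_win_probability):
    """Expected per-contract profit when the outcome is uncertain."""
    return true_win_probability * 100 - price_cents

print(guaranteed_profit_cents(66))       # 34 -> riskless profit: broken market
print(expected_profit_cents(66, 0.66))   # a fair price has no expected edge
```

The second call is the healthy case: when the 66-cent price really does reflect a 66% chance, the expected profit from buying is zero, and the occasional frontrunner loss is exactly what the price predicted.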
In fact, I would like to argue that the markets' recent streak of successes in always picking the winner of the elections has been an anomaly, indicating the favorite bias that exists in these markets. The markets were more accurate than they should have been, according to the trading prices. If the market never fails, then the prices do not reflect reality, and the favorite is actually underpriced.
The other point that has been raised in many discussions (mainly by a mainstream audience) is how we can even define probability for a one-time event like the Democratic nomination for the 2008 presidential election. What does it mean that Clinton has a 60% probability of being the nominee and Obama a 40% probability? The common answer is that "if we repeated the event many times, in 60% of the cases Clinton would be the nominee and in 40% of the cases it would be Obama". Even though this is an acceptable answer for someone used to working with probabilities, it makes very little sense to the "average Joe" who wants to understand how these markets work. The notion of repeating the nomination process multiple times is an absurd concept.
The discussion brings to mind the ferocious battles between Frequentists and Bayesians over the definition of probability. Bayesians could not accept that we can use a Frequentist approach for defining the probabilities of events: "How can we define the probability of success for a one-time event?" The Frequentist would approach the prediction market problem by defining a space of events and would say:
After examining prediction markets for many state-level primaries, we observed that in 60% of the cases, the frontrunners who had a contract priced at 0.60 one day before the election were actually the winners of the election. In 30% of the cases, the candidates who had a contract priced at 0.30 one day before the election were actually the winners of the election, and so on.
A Bayesian would criticize such an approach, especially when the sample size is small, and would point to the need for an initial belief function, which should be updated as information signals come in from the market. Interestingly enough, the two approaches tend to become equivalent in the presence of infinite samples, which is, however, rarely the case.
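A minimal sketch of the Bayesian alternative, with invented numbers: encode the initial belief as a Beta prior (a standard choice for a win/lose probability, not something the text prescribes) and update it primary by primary.

```python
# Toy Bayesian updating (all numbers invented for illustration): a
# Beta(a, b) belief about how often frontrunners at a given price
# actually win, updated after each observed primary.

def update(a, b, frontrunner_won):
    """One Bayesian update of a Beta(a, b) belief on a win/lose outcome."""
    return (a + 1, b) if frontrunner_won else (a, b + 1)

def mean(a, b):
    """Point estimate: the mean of the Beta(a, b) belief."""
    return a / (a + b)

a, b = 6, 4                     # prior: frontrunners win ~60% of the time
print(round(mean(a, b), 3))     # 0.6

for won in [True, False, True, True]:   # four observed primaries
    a, b = update(a, b, won)
print(round(mean(a, b), 3))     # 0.643 -> belief nudged up by the data
```

With only four observations the prior still dominates, which is the Bayesian's point against the small-sample Frequentist count; with infinitely many observations, the data would swamp the prior and the two approaches would agree.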
Crossposted from my blog