The truth about (enterprise) prediction markets

Paul Hewitt:

[…] In virtually every case, the prediction market forecast is closer to the official HP forecast than it is to the actual outcome. Perhaps these markets are better at forecasting the forecast than they are at forecasting the outcome! Looking further into the results, while most of the predictions have a smaller error than the HP official forecasts, the differences are, in most cases, quite small. For example, in Event 3, the HP forecast error was 59.549% vs. 53.333% for the prediction market. They’re both really poor forecasts. To the decision-maker, the difference between these forecasts is not material.

There were eight markets that had HP official forecasts. In four of these (50%), the forecast error was greater than 25%. Even though only three of the prediction market forecast errors were greater than 25%, this can hardly be a ringing endorsement for the accuracy of prediction markets (at least in this study). […]
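
As I read it, the error figures Paul compares are plain absolute percentage errors against the actual outcome. Here is a minimal sketch of that arithmetic, with illustrative numbers chosen only so that the output reproduces the Event 3 percentages (the real HP inputs are in Paul’s post):

```python
# Absolute percentage error of a forecast against the actual outcome.
# The inputs below are illustrative only; the real figures are in Paul's post.
def abs_pct_error(forecast: float, actual: float) -> float:
    return abs(forecast - actual) / abs(actual) * 100

actual = 150.0        # hypothetical actual unit sales for Event 3
hp_forecast = 239.3   # hypothetical official HP forecast
pm_forecast = 230.0   # hypothetical prediction market forecast

print(f"HP error: {abs_pct_error(hp_forecast, actual):.1f}%")  # ~59.5%
print(f"PM error: {abs_pct_error(pm_forecast, actual):.1f}%")  # ~53.3%
```

Which is exactly Paul’s point: shaving six points off an error above 50% is not a material difference for a decision-maker.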

To the despair of the Nashville imbecile, Paul’s analysis is quite similar to mine (circa February 14, 2009):

The prediction market technology is not a disruptive technology, and the social utility of the prediction markets is marginal. Number one, the aggregated information has value only for the totally uninformed people (a group that comprises those who overly obsess with prediction markets and have a narrow cultural universe). Number two, the added accuracy (if any) is minute, and, anyway, doesn’t fill the gap between expectations and omniscience (which is how people judge forecasters). In our view, the social utility of the prediction markets lies in efficiency, not in accuracy. In complicated situations, the prediction markets integrate expectations (informed by facts and expertise) much faster than the mass media do. Their accuracy/efficiency is their uniqueness. It is their velocity that we should put to work.

Prediction markets are not a disruptive technology, but merely another means of forecasting.

Go read Paul’s analysis in full.

I would like to add two things to Paul’s conclusion:

  1. We have been lied to about the real value of the prediction markets. Part of the “field of prediction markets” (a term that encompasses more people and organizations than just the prediction market industry) is made up of liars who live by the hype and will die by the hype.
  2. Prediction markets have value in specific cases where it can be demonstrated that an information aggregation mechanism is the appropriate method to put to work in those cases (and not in others). Neither the Ivory Tower economic canaries nor the self-described prediction market “practitioners” have done this job.

12 thoughts on “The truth about (enterprise) prediction markets”

  1. Bentley207B said:

    Hi Chris…

    We should keep in mind that the HP test prediction markets were run over a decade ago. I would think that we have a better understanding of prediction markets, today. Their constraints may have been more onerous than they disclosed in their paper. They confined the trials to a small number of participants, and it appears they were selected mainly from the marketing functional area. Maybe their definition of diversity was much too narrow.

    The authors don’t tell us which products’ sales were being forecast. It could be that the markets with the largest errors were those for relatively new products (much more difficult to forecast market acceptance). It would have been nice to know this, but they were probably restricted in the information that could be disclosed in the paper.

    Still, the paper was a good start. We did learn a great deal from their trials. It is a shame that there are so few published examples. There is a great opportunity to learn from each other and improve the forecasts for everyone.

  2. Bentley207B said:

    I happened to be on Inkling’s web site to see what information they provide about their corporate case studies (http://inklingmarkets.com/homes/howtouse). Even though there aren’t many (9), most of them are still running up to two years later. Not much information is provided as to the usefulness or accuracy of the predictions, but the fact they’re still running indicates that they must be useful, given the cost of running the markets.

    What is really interesting is the number of participants in the markets – they all had substantially more participants than any published prediction market. One had 5,000 and two others were over 1,000 participants. Even the market with the fewest traders still had 220 – far more than the small numbers used in other published markets. And, as far as we know, these markets would have utilized an automated market maker.

    The conclusion is that, even when liquidity is not expected to be an issue, these corporate decision-makers decided (correctly) to utilize a substantial number of traders to ensure that they would get the best possible results. It would be very interesting to see how accurate their predictions were.

  3. Jed Christiansen said:

    The HP paper took place a decade ago with a handful of people and a handful of markets. I would hardly extrapolate any major conclusions from that across the last ten years of PM development!

    Regarding # of participants in a market, that can be interpreted in many different ways. Was it the number of people that had an account in the market, or the number of people active? Of the people with an account, was it registered for them (aka from a corporate directory automatically) or did they have to take action? And then, how many people were trading in each market?

    And Inkling does use an automated market maker; virtually any corporate PM has to for UI reasons.
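
    For readers who have not seen one: the market maker most often cited for prediction markets is Hanson’s logarithmic market scoring rule (LMSR). I can’t speak to any particular vendor’s internals, but a minimal sketch of the idea, with an arbitrary liquidity parameter b, looks like this:

    ```python
    import math

    B = 100.0  # liquidity parameter; purely illustrative (larger b = slower-moving prices)

    def cost(q, b=B):
        """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
        return b * math.log(sum(math.exp(qi / b) for qi in q))

    def price(q, i, b=B):
        """Instantaneous price of outcome i; prices sum to 1 across outcomes."""
        denom = sum(math.exp(qj / b) for qj in q)
        return math.exp(q[i] / b) / denom

    def buy(q, i, shares, b=B):
        """Amount a trader pays to buy `shares` of outcome i; mutates q."""
        before = cost(q, b)
        q[i] += shares
        return cost(q, b) - before

    q = [0.0, 0.0]                    # two-outcome market, no trades yet
    print(price(q, 0))                # 0.5 before any trading
    paid = buy(q, 0, 40.0)            # a trader buys 40 shares of outcome 0
    print(round(paid, 2), round(price(q, 0), 3))  # cost of the trade, new price (~0.599)
    ```

    The usability point is exactly this: a trader always has a counterparty and a quoted price, which matters far more in a thin corporate market than in a public one.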

    Again, I found in my research that as few as 16 people actively trading in a market can produce calibrated results (where a 20% estimate occurred 20% of the time, etc.). Sure, more people are always better, but you can still get good results from a small active trader base.
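
    To make “calibrated” concrete: you bucket the market’s predicted probabilities and check how often the predicted events actually happened. A rough sketch of that check, on made-up numbers:

    ```python
    from collections import defaultdict

    # Rough calibration check: group forecasts into probability buckets and compare
    # the average predicted probability with the observed frequency of the event.
    # The (probability, outcome) pairs below are made up purely for illustration.
    forecasts = [
        (0.20, 0), (0.22, 0), (0.18, 1), (0.21, 0), (0.19, 0),   # ~20% bucket
        (0.78, 1), (0.82, 1), (0.80, 0), (0.79, 1), (0.81, 1),   # ~80% bucket
    ]

    buckets = defaultdict(list)
    for prob, happened in forecasts:
        buckets[round(prob, 1)].append((prob, happened))

    for bucket in sorted(buckets):
        rows = buckets[bucket]
        avg_pred = sum(p for p, _ in rows) / len(rows)
        freq = sum(h for _, h in rows) / len(rows)
        print(f"predicted ~{avg_pred:.0%}, happened {freq:.0%} of the time (n={len(rows)})")
    ```

    With enough resolved markets per bucket, the predicted and observed percentages line up, which is what I found even with small active-trader pools.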

  4. Chris F. Masse said:

    The bulk of Paul’s HP analysis is timeless. Analyze today’s EPMs and you would end up with the same conclusions. Do send some modern-day data to Paul, do ask him to look into it, and let’s see.

  5. Bentley207B said:

    Jed, I was looking into the, admittedly old, study on HP’s markets, because they continue to be referenced (positively) by just about every consultant, academic, presenter and journalist, when talking about prediction markets. Once it gets repeated enough, it becomes “true”. I just decided to go back to the paper and look at the numbers myself.

    The authors of the paper did come to conclusions regarding those prediction markets. I’m not faulting them for having done so; maybe I’m just saying they should have reported them as “observations” rather than conclusions.

    As for the Inkling corporate case studies, I’m sure that not all of the participants really participated, but it does indicate that they believed they would need a large number of people to arrive at useful results (or at least a reasonable level of trading). I have followed your research (at least in basic terms), but I would be concerned with the consistency of the predictive accuracy of small groups. Given the relatively low cost of using additional participants in these markets, why focus on using fewer traders (and run the risk of losing “diversity” in the crowd)?

    Believe it or not, I do think that prediction markets can be accurate and useful in corporate decision-making. It would be nice to have more data on actual applications, so that everyone could learn more about what works and what does not.

  6. Jed Christiansen said:

    Regarding low numbers in prediction markets, I agree that it’s always better to have more people trading than fewer. The point of my study was to try and put a lower bound on the number of people needed for a calibrated forecast. In some situations, getting additional participants in markets is actually *very* costly.

    I may sound like a broken record, but accuracy is only one of many potential benefits of prediction markets. Sometimes it’s a LOT better than current forecasts (I’ve seen a case of error rates getting cut in half), and other times it is only a marginal increase. That has to be balanced with the costs of forecasting errors, which may mean that even a 1% increase is worth the costs.

    I do hope that more papers with real-world case studies are written; unfortunately many companies still don’t want to release any of that information.

  7. Chris F. Masse said:

    “unfortunately many companies still don’t want to release any of that information.”

    The PM industry could set up private prediction markets pertaining to the industry and to each PM operator, and then release the data for everyone to see.

  8. Bentley207B said:

    I know that it can be very costly to add large numbers of participants, but unless there is a sufficient number, the mechanics of the aggregation method break down. Then, you are more likely to have inconsistent results. As you noted, Jed, sometimes the predictions are a lot better and sometimes they’re just marginally better. Sometimes they’re *worse* than the current forecast. The problem is we don’t know which is going to be true, and that is one of the things we’re trying to determine with prediction markets.

    Sometimes a 1% improvement in a forecast is valuable, other times it’s not. Sometimes a 5-10% improvement will be irrelevant (e.g. if you’re still out by 40-50%). It depends on the issue being predicted.

    Maybe we should be trying to get these prediction markets working as well as possible before we try to make them more cost effective (with fewer participants). As it stands, apart from generalities, we really don’t know what it takes to make prediction markets *consistently* useful predictors of the future.

  9. Jed Christiansen said:

    I think one of the hangups we get when talking about prediction markets is comparing them to other forecasting methods. Yes, that’s one measurement of success, but it’s rather limiting.

    Some of the more interesting markets are on topics that were previously thought to be “un-forecastable.” (I often ask people to think about PM’s on project management RAG status, i.e. red/amber/green.) There’s nothing to compare a PM forecast to (other than reality), so any decent result is a 100%+ improvement! Todd Proebsting’s first PM at Microsoft is a classic example of this exact idea.

    In my experience, there are a LOT of factors that go into making a successful PM. I’ve run PM’s where I got hundreds of people to register without the promise of any cash or gift reward and got great results. I’ve run PM’s where it was like pulling teeth to get 20+ people involved (from a list of several hundred) when they were getting paid to participate. As always, I like to refer to what Emile of NewsFutures talks about: Rewards, Recognition, and Relevance. There’s no “magic mix” of these, but fundamentally that’s what the success of each market comes down to. (NOT the number of traders…)

    Getting a good answer only works if you’re asking a sensible question to a sensible group of people. To me, PM’s have the most difficulty with the sensible-question aspect of the equation.

  10. Bentley207B said:

    I couldn’t agree more on the need for sensible questions, ones whose answers are actually worth predicting. Several conference presenters and papers have stressed this, but no one goes into the details of how to select appropriate questions. Maybe we should focus on that.

    I think we would all be interested in your comments on what did and did not work in motivating trading behaviour in the actual PMs that you operated. Let’s look at the data and get as many answers as we can. There may not be hard and fast rules, but there may be approaches that are more likely than others to succeed.

    One thing is for sure. There is a great hunger here for practical advice on how to operate effective prediction markets. I mean details, not generalities. There is, however, a severe lack of such advice.

    I applaud your efforts to operate markets with the smallest number of active participants, but (I can’t resist) I still think you need larger groups to ensure adequate diversity within the crowd. I’ll keep an open mind, however.

  11. Chris F. Masse said:

    Gartner: The “benefit” of enterprise prediction markets is “moderate” and “early users, who have begun to overestimate their accuracy and overall usefulness, are now somewhat disillusioned with the technology.”

    http://www.midasoracle.org/200…..kets-2008/

  12. Jed Christiansen said:

    In response to Paul, I think that it’s very difficult to discuss these factors in a meaningful way. It’s almost like trying to answer the question “What makes a good piece of art?” Each prediction market is unique based on what it’s trying to accomplish, whom it’s trying to accomplish it with, and the tools used to do that.

    Unfortunately most of the projects that I’ve worked on are covered under NDA’s, so I can’t release data or findings from them. I would assume this is likely the case for most of the PM software vendors.

    I think more and more of the PM software companies are going to be providing deeper consulting services specifically because of this; they’re the central node of knowledge and experience across many prediction markets. Not only do their clients purchase their software, but they often need to use their consulting services to help make sure the markets run as well as they can.
