Assessing the usefulness of enterprise prediction markets


Do you need to have experience in running an enterprise prediction exchange in order to assess the pertinence of enterprise prediction markets?

Paul Hewitt:

Hi Jed…

As for qualifications, I have been making business decisions for almost 30 years. I am a chartered accountant and a business owner. Starting in university and continuing to this day, I have been researching information needs for corporate decision making. As Chris points out, I’m not a salesperson for any of the software developers. In fact, if I have a bias, it is to be slightly in favour of prediction markets. That said, I still haven’t seen any convincing evidence that they work as promised by ANY of the vendors.

As for whether I have ever run or administered a prediction market, the answer is no. Does that mean I am not qualified to critique the cases that have been published? Hardly. You don’t have to run a PM to know that it is flawed. Those that do end up trying to justify minuscule “improvements” in the accuracy of predictions. They also fail to consider the consistency of the predictions. Without this, EPMs will never catch on. Sorry, but that is just plain common sense.

The pilot cases that have been reported are pretty poor examples of prediction market successes. In almost every case, the participants were (at least mostly) the same ones that were involved with internal forecasting. The HP markets, yes, the Holy Grail of all prediction markets, merely showed that prediction markets are good at aggregating the information already aggregated by the company forecasters! They showed that PMs are only slightly better than other traditional methods – and mainly because of the bias reduction. Being slightly better is not good enough in the corporate world.

I think I bring a healthy skepticism to the assessment of prediction markets. I truly want to believe, but I need to be convinced. I am no evangelist, and there is no place for that in scientific research. Rather than condemn me for not administering a PM, why not address the real issues that arise from my analyses?

Paul Hewitt’s blog

Previously: The truth about CrowdClarity’s extraordinary predictive power (which impresses Jed Christiansen so much)

The truth about CrowdClarity’s extraordinary predictive power (which impresses Jed Christiansen so much)


Paul Hewitt:

At first blush, it appears that we finally have a bona fide prediction market success! If we’re going to celebrate, I’d suggest Prosecco, not Champagne, however.

There are a number of reasons to be cautious. These results represent only a couple of markets. We don’t know why Urban Science people appear to be so adept at forecasting GM sales in turbulent times. There is no information on the CrowdClarity web site to indicate why the markets were successful, nor how their mechanism might have played a role in the PM accuracy. I’m guessing that it would have been really easy to beat GM’s forecasts in November, as they would likely have been even more biased than usual, mainly for political reasons. I’m not sure how Edmunds.com’s forecasts may have been biased, or why their predictions were not accurate. Maybe they are not so good at predicting unless the market is fairly stable.

The CrowdClarity web site boasts that a few days after the markets were opened, the predictions were fairly close to the eventual outcome. This is a good thing, but, at this point it is not useful. No one knew, at that time, that those early predictions would turn out to be reasonably accurate. As a result, no one would have relied upon these early predictions to make decisions.

I’m even more skeptical of the company’s contention that markets can be operated with as few as 13 participants. Here we go again, trying to fake diversity.

It is interesting that a prediction market comprised of participants outside of the subject company did generate more accurate predictions than GM insiders (biased) and Edmunds.com (experts). The question that needs to be answered is why. Clearly, Urban Science people did have access to better information, but why?

Unless we know why the prediction markets were successful at CrowdClarity, it is hard to get excited. There are too many examples of prediction markets that are not significantly better than traditional forecasting methods. This one could be a fluke.

I’ll have more to say, soon, when I write about the prediction markets that were run at General Mills. There, the authors of the study found that prediction markets were no better than the company’s internal forecasting process.

Paul Hewitt’s analysis is more interesting than Jed Christiansen’s naive take.

Paul Hewitt’s blog

Next: Assessing the usefulness of enterprise prediction markets


Finally, a positive corporate prediction market case study… well, according to Jed Christiansen


Jed Christiansen:

To recap, the prediction market beat the official GM forecast (made at the beginning of the month) easily, which isn’t hugely surprising considering the myopic nature of internal forecasting. But the prediction market also beat the Edmunds.com forecast. This is particularly interesting, as Edmunds would have had the opportunity to review almost the entire month’s news and data before making their forecast at the end of the month. […]

Assume that, even with three weeks’ early warning, Chevrolet was only able to save 10% of that gap: it’s still $80 million in savings. Even if a corporate prediction market for a giant company like GM cost $200,000 a year, that would still be a return on investment of 40,000%. And again, that’s in the Chevrolet division alone. […]
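The ROI figure in the quote can be reproduced with a quick back-of-the-envelope calculation. Note that the $80 million savings and $200,000 annual cost are the quote’s own assumed figures, not measured results:

```python
# Back-of-the-envelope check of the ROI claim in the quote above.
# The $80M savings and $200k cost are the quote's own assumed figures.
savings = 80_000_000   # 10% of the forecast gap, per the quote
cost = 200_000         # assumed annual cost of running the market
roi_percent = savings / cost * 100  # simple savings-to-cost ratio, as quoted
print(f"ROI: {roi_percent:,.0f}%")  # prints "ROI: 40,000%"
```

The arithmetic holds, but it inherits every assumption in the quote: that the market’s early signal is actionable three weeks out, and that 10% of the gap is actually recoverable.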

Make up your own mind by reading the whole piece.

Next: The truth about CrowdClarity’s extraordinary predictive power (which impresses Jed Christiansen so much)

Next: Assessing the usefulness of enterprise prediction markets