The truth about CrowdClarity's extraordinary predictive power (which impresses Jed Christiansen so much)


Paul Hewitt:

At first blush, it appears that we finally have a bona fide prediction market success! If we're going to celebrate, I'd suggest Prosecco, not Champagne, however.

There are a number of reasons to be cautious. These represent only a couple of markets. We don't know why Urban Science people appear to be so adept at forecasting GM sales in turbulent times. There is no information on the CrowdClarity web site to indicate why the markets were successful, nor how their mechanism might have played a role in the prediction market's accuracy. I'm guessing that it would have been really easy to beat GM's forecasts in November, as they would likely have been even more biased than usual, mainly for political reasons. I'm not sure how the other forecasts may have been biased, or why their predictions were not accurate. Maybe prediction markets are not so good at predicting unless conditions are fairly stable.

The CrowdClarity web site boasts that a few days after the markets were opened, the predictions were fairly close to the eventual outcome. This is a good thing, but at that point it was not useful. No one knew, at the time, that those early predictions would turn out to be reasonably accurate. As a result, no one would have relied upon these early predictions to make decisions.

I'm even more skeptical of the company's contention that markets can be operated with as few as 13 participants. Here we go again, trying to fake diversity.

It is interesting that a prediction market comprised of participants outside of the subject company did generate more accurate predictions than biased GM insiders and outside experts. The question that needs to be answered is why. Clearly, Urban Science people did have access to better information, but why?

Unless we know why the prediction markets were successful at CrowdClarity, it is hard to get excited. There are too many examples of prediction markets that are not significantly better than traditional forecasting methods. This one could be a fluke.

I'll have more to say, soon, when I write about the prediction markets that were run at General Mills. There, the authors of the study found that prediction markets were no better than the company's internal forecasting process.

Paul Hewitt's analysis is more interesting than Jed Christiansen's naive take.

Paul Hewitt's blog



Finally, a positive corporate prediction market case study… well, according to Jed Christiansen


Jed Christiansen:

To recap, the prediction market beat the official GM forecast (made at the beginning of the month) easily, which isn't hugely surprising considering the myopic nature of internal forecasting. But the prediction market also beat the Edmunds forecast. This is particularly interesting, as Edmunds would have had the opportunity to review almost the entire month's news and data before making their forecast at the end of the month. […]

Assume that even with three weeks' early warning Chevrolet was only able to save 10% of that gap; that's still $80 million in savings. Even if a corporate prediction market for a giant company like GM cost $200,000 a year, that would still be a return on investment of 40,000%. And again, that's in the Chevrolet division alone. […]
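Christiansen's back-of-envelope figure is easy to reproduce. A minimal sketch of the arithmetic, assuming the implied forecast gap is $800 million (since the quoted $80 million is 10% of it) and using his hypothetical $200,000 annual cost:

```python
# Back-of-envelope ROI check for the figures quoted above.
# All inputs are Christiansen's illustrative assumptions, not audited data.
gap = 800_000_000      # implied Chevrolet forecast gap: $80M is 10% of this
savings = 0.10 * gap   # assume only 10% of the gap is actually recovered
cost = 200_000         # hypothetical annual cost of a corporate prediction market
roi_pct = savings / cost * 100   # ROI expressed as a percentage of cost

print(f"savings = ${savings:,.0f}; ROI = {roi_pct:,.0f}%")
# prints: savings = $80,000,000; ROI = 40,000%
```

Note that the 40,000% figure treats ROI as savings divided by cost; it only holds if the assumed 10% recovery and the $200,000 cost are anywhere near right, which is exactly what Hewitt's skepticism above calls into question.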

Make up your own mind by reading the whole piece.

