ABC News, in 2008:
Business leaders rely on metrics and data to inform decisions around new products and opportunities, but traditional forecasting methods suffer from bias and lack of first-hand information. That’s why business forecasting is an ideal target for the application of crowd wisdom. While bets are made anonymously, some prediction market software applications have built-in reward systems for accurate forecasters. And the accuracy of prediction markets over traditional forecasting methods is proven again and again. […] Prediction markets will then aggregate this knowledge to produce actionable, people-powered forecasts. The result is an ultra-rich information source that will lay the foundation for smarter, better-informed company decisions. […]
– Paul Hewitt comments on Robin Hanson's blog. Many exchanges with Robin Hanson. Read it all.
– Paul Hewitt:
[…] My point is that the case for prediction markets has not been made, at all. There is a tiny bit of proof that they are as good as alternative methods, and in a very few cases, very slightly better. Also, you need to be aware that even the slightly better prediction markets had the benefit of the alternative forecasting institution available to it. That is, the official forecasters at HP were also participants in the ever-so-slightly better prediction markets. […]
→ I personally stay away from any discussion about conditional prediction markets (and futarchy). I prefer focusing on the 'simple' prediction markets.
"The wisdom of crowds" has apparently seeped a bit into popular culture, or at least the geekier end of it.
On the heels of British illusionist Derren Brown's invoking of "the wisdom of crowds" as a (false) part of his explanation of how he appeared to predict winning lottery numbers, last night a character in the American TV show House invoked the wisdom of crowds as part of an explanation for how he obtained a diagnosis of his medical condition.
(The character – a highly intelligent, geeky, successful video game designer – posted his medical symptoms on the internet and offered $25,000 for a successful diagnosis. Then, invoking "wisdom of crowds"-based reasoning, he concluded that the most frequent diagnosis appearing in the emailed responses was likely correct. As the story turned out, the crowd-sourced diagnosis was incorrect. Instead, the correct diagnosis was submitted by series main character Greg House, working from home after quitting his job at the hospital. The "wisdom of crowds" element doesn't make it into the official episode summary.)
Although the crowd was wrong (the better to highlight how clever our main character is when, later, he provides the correct diagnosis), at least the basic "wisdom of crowds" logic illustrated in the episode was correct. As a fan of the show, I appreciate that it doesn't insult my intelligence by dressing up clever cons with misleading science-based patter.
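The aggregation rule the episode describes, take the most frequent answer from the crowd, is the simplest "wisdom of crowds" estimator: a plurality vote. A minimal sketch in Python (the diagnosis strings below are invented for illustration, not taken from the episode):

```python
from collections import Counter

def crowd_diagnosis(responses):
    """Return the most frequent answer among crowd responses,
    along with its vote count (plurality rule)."""
    counts = Counter(responses)
    answer, votes = counts.most_common(1)[0]
    return answer, votes

# Hypothetical emailed diagnoses:
emails = ["lupus", "sarcoidosis", "lupus", "lyme disease", "lupus"]
print(crowd_diagnosis(emails))  # → ('lupus', 3)
```

Note that, as in the episode, a plurality of guesses can still be wrong: the rule aggregates opinions, not evidence, so it only helps when the crowd's errors are independent and roughly unbiased.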
Do you need to have experience in running an enterprise prediction exchange in order to assess the pertinence of enterprise prediction markets?
As for qualifications, I have been making business decisions for almost 30 years. I am a chartered accountant and a business owner. Starting in university and continuing to this day, I have been researching information needs for corporate decision making. As Chris points out, I’m not a salesperson for any of the software developers. In fact, if I have a bias, it is to be slightly in favour of prediction markets. That said, I still haven’t seen any convincing evidence that they work as promised by ANY of the vendors.
As for whether I have ever run or administered a prediction market, the answer is no. Does that mean I am not qualified to critique the cases that have been published? Hardly. You don’t have to run a PM to know that it is flawed. Those that do, end up trying to justify minuscule “improvements” in the accuracy of predictions. They also fail to consider the consistency of the predictions. Without this, EPMs will never catch on. Sorry, but that is just plain common sense.
The pilot cases that have been reported are pretty poor examples of prediction market successes. In almost every case, the participants were (at least mostly) the same ones that were involved with internal forecasting. The HP markets, yes, the Holy Grail of all prediction markets, merely showed that prediction markets are good at aggregating the information already aggregated by the company forecasters! They showed that PMs are only slightly better than other traditional methods – and mainly because of the bias reduction. Being slightly better is not good enough in the corporate world.
I think I bring a healthy skepticism to the assessment of prediction markets. I truly want to believe, but I need to be convinced. I am no evangelist, and there is no place for that in scientific research. Rather than condemn me for not administering a PM, why not address the real issues that arise from my analyses?
At first blush, it appears that we finally have a bona fide prediction market success! If we're going to celebrate, I'd suggest Prosecco, not Champagne, however.
There are a number of reasons to be cautious. These results represent only a couple of markets. We don't know why Urban Science people appear to be so adept at forecasting GM sales in turbulent times. There is no information on the CrowdClarity web site to indicate why the markets were successful or how their mechanism might have played a role in the PM accuracy. I'm guessing that it would have been really easy to beat GM's forecasts in November, as they would likely have been even more biased than usual, mainly for political reasons. I'm not sure how Edmunds.com's forecasts may have been biased or why their predictions were not accurate. Maybe they are not so good at predicting unless the market is fairly stable.
The CrowdClarity web site boasts that a few days after the markets were opened, the predictions were fairly close to the eventual outcome. This is a good thing, but, at this point it is not useful. No one knew, at that time, that those early predictions would turn out to be reasonably accurate. As a result, no one would have relied upon these early predictions to make decisions.
I'm even more skeptical of the company's contention that markets can be operated with as few as 13 participants. Here we go again, trying to fake diversity.
It is interesting that a prediction market comprised of participants outside of the subject company did generate more accurate predictions than GM insiders (biased) and Edmunds.com (experts). The question that needs to be answered is why. Clearly, Urban Science people did have access to better information, but why?
Unless we know why the prediction markets were successful at CrowdClarity, it is hard to get excited. There are too many examples of prediction markets that are not significantly better than traditional forecasting methods. This one could be a fluke.
I'll have more to say, soon, when I write about the prediction markets that were run at General Mills. There the authors of the study found that prediction markets were no better than the company's internal forecasting process.
To recap, the prediction market beat the official GM forecast (made at the beginning of the month) easily, which isn’t hugely surprising considering the myopic nature of internal forecasting. But the prediction market also beat the Edmunds.com forecast. This is particularly interesting, as Edmunds would have had the opportunity to review almost the entire month’s news and data before making their forecast at the end of the month. […]
Assume that even with three weeks' early warning Chevrolet was only able to save 10% of that gap, it's still $80 million in savings. Even if a corporate prediction market for a giant company like GM cost $200,000 a year, that would still be a return on investment of 40,000%. And again, that's in the Chevrolet division alone. […]
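The quoted return-on-investment figure can be checked directly from the two numbers given ($80 million in assumed savings against an assumed $200,000 annual cost); a quick sketch:

```python
savings = 80_000_000  # 10% of the forecast gap, per the quote (an assumption)
cost = 200_000        # assumed annual cost of a corporate prediction market

# ROI as a percentage of cost, net of the cost itself
roi_pct = (savings - cost) / cost * 100
print(f"{roi_pct:,.0f}%")  # → 39,900%
```

Netting out the cost gives 39,900%, which rounds to the 40,000% in the quote; the gross ratio (savings divided by cost) is exactly 400×, i.e. 40,000%.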
Make up your own mind by reading the whole piece.