Why did all the prediction markets get the Olympic decision to reject Chicago so wrong?


The blogger at Sabernomics sees “this as a win for prediction markets, not a failure.”

I don’t share his views, but I wanted to link to his piece so you can make up your own mind about the issue.


Never try to divine the IOC’s decisions on Olympic venues, Mike.

Prof Michael Giberson,

No “careful observer knew this in advance” (about Chicago being a lemon), for the simple reason that if anyone had known, they would have downgraded Chicago on the InTrade and BetFair prediction markets, and Ben Shannon would not have bet $6,000 on Chicago.

I look forward to your contrite correction on the front page of Knowledge Problem, in bold, and with a link to Midas Oracle, stating that “Midas Oracle is the only website in the world to have told you *not* to bet on Chicago, and to stay (far) away from any Olympics venue prediction market.”

My thesis holds: The International Olympic Committee (IOC) is a closed, aristocratic group that does not leak good information.


Previously: The Chicago candidacy, which was favored by the prediction markets (and gullible bettors like Ben Shannon), is the one that fared the worst.

Previously: Chicago won’t have the Olympics in 2016.

ADDENDUM:

– BetFair’s event derivative prices:

[Chart: chicago-olympics-betfair]

– InTrade’s event derivative prices:

[Chart: chicago-olympics-intrade]

– HubDub’s event derivative prices:

Who will receive the winning bid to host the 2016 Olympics?
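For readers unfamiliar with these exchanges, the prices in such charts translate directly into implied probabilities. Below is a minimal sketch of the conversion; the quotes in it are hypothetical examples, not the actual values from the charts above.

```python
def intrade_probability(points: float) -> float:
    """InTrade binary contracts trade between 0 and 100 points;
    the price in points is the implied probability in percent."""
    return points / 100.0


def betfair_probability(decimal_odds: float) -> float:
    """BetFair quotes decimal odds; the implied probability
    is the reciprocal of the odds."""
    return 1.0 / decimal_odds


# Hypothetical quotes, not the actual chart values:
print(intrade_probability(60.0))   # 0.6 -> a 60% implied chance
print(betfair_probability(2.5))    # 0.4 -> a 40% implied chance
```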

Could we have divined that Chicago was a lemon?


Prof Michael Giberson:

Chris, isn’t it odd for you to state “Chicago had not the slightest chance to begin with”? The phrase implies you believe that the probability of Chicago’s selection was near zero all along, but you have been claiming that it is impossible to predict anything about the outcomes of IOC selection processes.

Also, the NYT article reports on backbiting and disarray at the USOC. While the article was published after the IOC decision, presumably any careful observer knew this in advance [*], and you are suggesting it was relevant to the outcome of the IOC market, i.e., you are suggesting it is a reason to have believed the Chicago selection was particularly unlikely. Again, this suggestion is contrary to your earlier view that IOC decisions are unpredictable because there is no good information to aggregate.

I look forward to your correction!

[*] You presume too much, doc.

If, as you said quite cockily, “any careful observer knew this in advance”, then the (mass or vertical) media would have reported on it, and, logically:

  1. the prediction market traders would have downgraded Chicago early on;
  2. Ben Shannon (who is a smart man and a well-informed bettor) would not have bet 6,000 bucks on Chicago.

The proof is in the pudding, doc.

You are wrong and I am right.

Assessing the usefulness of enterprise prediction markets


Do you need to have experience in running an enterprise prediction exchange in order to assess the pertinence of enterprise prediction markets?

Paul Hewitt:

Hi Jed…

As for qualifications, I have been making business decisions for almost 30 years. I am a chartered accountant and a business owner. Starting in university and continuing to this day, I have been researching information needs for corporate decision making. As Chris points out, I’m not a salesperson for any of the software developers. In fact, if I have a bias, it is to be slightly in favour of prediction markets. That said, I still haven’t seen any convincing evidence that they work as promised by ANY of the vendors.

As for whether I have ever run or administered a prediction market, the answer is no. Does that mean I am not qualified to critique the cases that have been published? Hardly. You don’t have to run a PM to know that it is flawed. Those that do end up trying to justify minuscule “improvements” in the accuracy of predictions. They also fail to consider the consistency of the predictions. Without this, EPMs will never catch on. Sorry, but that is just plain common sense.

The pilot cases that have been reported are pretty poor examples of prediction market successes. In almost every case, the participants were (at least mostly) the same ones that were involved with internal forecasting. The HP markets, yes, the Holy Grail of all prediction markets, merely showed that prediction markets are good at aggregating the information already aggregated by the company forecasters! They showed that PMs are only slightly better than other traditional methods – and mainly because of the bias reduction. Being slightly better is not good enough in the corporate world.

I think I bring a healthy skepticism to the assessment of prediction markets. I truly want to believe, but I need to be convinced. I am no evangelist, and there is no place for that in scientific research. Rather than condemn me for not administering a PM, why not address the real issues that arise from my analyses?

Paul Hewitt’s blog

Previously: The truth about CrowdClarity’s extraordinary predictive power (which impresses Jed Christiansen so much)

The truth about CrowdClarity’s extraordinary predictive power (which impresses Jed Christiansen so much)


Paul Hewitt:

At first blush, it appears that we finally have a bona fide prediction market success! If we’re going to celebrate, I’d suggest Prosecco, not Champagne, however.

There are a number of reasons to be cautious. These represent only a couple of markets. We don’t know why Urban Science people appear to be so adept at forecasting GM sales in turbulent times. There is no information on the CrowdClarity web site to indicate why the markets were successful, nor how their mechanism might have played a role in the PM accuracy. I’m guessing that it would have been really easy to beat GM’s forecasts in November, as they would likely have been even more biased than usual, mainly for political reasons. I’m not sure how Edmunds.com’s forecasts may have been biased or why their predictions were not accurate. Maybe they are not so good at predicting unless the market is fairly stable.

The CrowdClarity web site boasts that a few days after the markets were opened, the predictions were fairly close to the eventual outcome. This is a good thing, but, at this point, it is not useful. No one knew, at the time, that those early predictions would turn out to be reasonably accurate. As a result, no one would have relied upon these early predictions to make decisions.

I’m even more skeptical of the company’s contention that markets can be operated with as few as 13 participants. Here we go again, trying to fake diversity.

It is interesting that a prediction market comprised of participants outside of the subject company did generate more accurate predictions than GM insiders (biased) and Edmunds.com (experts). The question that needs to be answered is why. Clearly, Urban Science people did have access to better information, but why?

Unless we know why the prediction markets were successful at CrowdClarity, it is hard to get excited. There are too many examples of prediction markets that are not significantly better than traditional forecasting methods. This one could be a fluke.

I’ll have more to say, soon, when I write about the prediction markets that were run at General Mills. There, the authors of the study found that prediction markets were no better than the company’s internal forecasting process.

Paul Hewitt’s analysis is more interesting than Jed Christiansen’s naive take.

Paul Hewitt’s blog

Next: Assessing the usefulness of enterprise prediction markets


Finally, a positive corporate prediction market case study… well, according to Jed Christiansen


Jed Christiansen:

To recap, the prediction market beat the official GM forecast (made at the beginning of the month) easily, which isn’t hugely surprising considering the myopic nature of internal forecasting. But the prediction market also beat the Edmunds.com forecast. This is particularly interesting, as Edmunds would have had the opportunity to review almost the entire month’s news and data before making their forecast at the end of the month. […]

Assume that, even with three weeks’ early warning, Chevrolet was only able to save 10% of that gap; that is still $80 million in savings. Even if a corporate prediction market for a giant company like GM cost $200,000 a year, that would still be a return on investment of 40,000%. And again, that’s in the Chevrolet division alone. […]
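His return-on-investment arithmetic is easy to verify. Here is a short sketch restating his own hypothetical figures; the $800 million gap is implied by his numbers (since 10% of it is $80 million), and none of these are measured results.

```python
# Restating Jed Christiansen's back-of-the-envelope figures as plain
# arithmetic. The $800M gap is implied by his own numbers (10% of it
# being $80M); these are his hypotheticals, not measured results.
gap = 800_000_000            # implied size of the forecast miss, in dollars
savings = 0.10 * gap         # assume only 10% of the gap is recoverable
cost = 200_000               # his assumed yearly cost of the market
roi = savings / cost * 100   # simple return on investment, in percent
print(f"savings: ${savings:,.0f}, ROI: {roi:,.0f}%")
# -> savings: $80,000,000, ROI: 40,000%
```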

Make up your own mind by reading the whole piece.

Next: The truth about CrowdClarity’s extraordinary predictive power (which impresses Jed Christiansen so much)

Next: Assessing the usefulness of enterprise prediction markets

Derren Brown’s lottery win = A split camera trick disguised as “wisdom of crowds”


Derren Brown: How to Win the Lottery (Channel 4 in the U.K.)

[Image: derren-brown]

On 9 September 2009, [British illusionist] Derren Brown conducted a live TV broadcast in which he suggested that he had successfully predicted the winning National Lottery numbers prior to them being drawn. During the broadcast a number of blank lottery balls were displayed on a glass stand in clear view of the camera, and after the lottery draw had been made, the balls were rotated to reveal the winning numbers. It was claimed by Derren Brown that the only other people in the studio were two camera operators, to avoid legal issues, and that the stunt had been authorised by Camelot, the National Lottery operators.

Great Britain is buzzing like crazy about the stunt.

He claimed it was based on an old trick in which a crowd of people at a country fair accurately estimated the weight of an ox when their guesses were all averaged out. He gathered a panel of 24 people who wrote down their predictions after studying the last year’s worth of numbers. Then they added up all the guesses for each ball and divided the total by 24 to get the average guess. On the first go they got only one number right, on the second attempt they managed three, and on the third they guessed four. By the time of last week’s draw they had honed their technique to get six correct guesses, and these were the numbers shown on the Wednesday night programme. [Derren] Brown claims that the predictions were correct because of the “wisdom of the crowd” theory, which suggests that a large group of people making average guesses will come up with the correct figure as an average of all their attempts. He also suggested that if the people were motivated by money, it may not work.

Well, we know a lot about the “wisdom of crowds“, here, as Midas Oracle specializes in collective intelligence. The idea of the “wisdom of crowds” is to aggregate bits of information that are dispersed in a population of independently minded individuals. The result of that information aggregation is a predictive power slightly superior (on average, over the long term) to what one single individual can produce —even a gifted one. However, the “wisdom of crowds” is not powerful enough to predict the future with 100% certainty. For that, you would have to reverse the psychological arrow of time —so as to remember the future as opposed to the past. Physicists tell us this is impossible in our universe. Hence, Derren Brown used a trick [WATCH THE 3RD VIDEO BELOW] —and concealed it with some blahblah about the “wisdom of crowds”.
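To make the distinction concrete, here is a minimal simulation (with made-up numbers) of why averaging works for the ox-weight problem: each guess is an independent, noisy reading of a real quantity, so the errors largely cancel out. Lottery guesses carry no such information, so there is nothing for the average to converge on.

```python
import random

# A minimal simulation of crowd averaging, with made-up numbers.
random.seed(42)

true_weight = 1198  # the ox's actual weight, in pounds (illustrative)
crowd = [random.gauss(true_weight, 100) for _ in range(800)]  # noisy guesses

avg_individual_error = sum(abs(g - true_weight) for g in crowd) / len(crowd)
crowd_estimate = sum(crowd) / len(crowd)  # the "wisdom of crowds" number

print(f"average individual error: {avg_individual_error:.1f} lb")
print(f"crowd estimate: {crowd_estimate:.1f} lb "
      f"(off by {abs(crowd_estimate - true_weight):.1f} lb)")

# Lottery balls are different: each guess is pure noise about a random
# draw, so averaging 24 guesses per ball converges on nothing at all.
```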

“Check the ball on the right after Derren Brown says ‘23’. Notice it mysteriously jumps up and is slightly higher than the other 5 balls. (Apologies for the camera wobble, but my camera is on a tripod; the wobble is from the camera on the show, which is programmed to wobble so you can’t see the switch of the balls.) So no magic, NLP, psychology or mind-tricks. Just good old-fashioned camera trickery.”

How Derren Brown ‘divined’ the lotto numbers:

For the tip, thanks to Emile Servan-Schreiber of NewsFutures —we’re impatient to see the new version of their software / prediction market website.

Next: Why did illusionist Derren Brown invoke the “wisdom of crowds” in his lottery win explanation?

UPDATE:

His next event: Trying to beat the casino.

Science Of Scams – Advert from Phillis Dorris on Vimeo.

Another video

Another video


Do businesses need enterprise prediction markets?

Competitive advantage can be obtained either by differentiation or by low cost. Enterprise prediction markets certainly don’t foster the innovation process, and they are surely not the cheapest forecasting tool. EPMs require special software, the hiring of consultant(s), the participation of all, and a budget for the prizes. EPMs are costly, and they take time to deliver. As of today, I can’t see why any sane CEO would implement EPMs as a decision-making support. On the contrary, I would say that any sane CEO should fire any employee who tried to sneak in internal prediction markets, and should dismantle any existing corporate prediction exchange. Right now.

It has been suggested that EPMs helped Best Buy get it right on the ‘HD-DVD versus Blu-Ray’ issue. It’s a boatload of bullsh*t. I know a lot about technology intelligence. It should be done by a smart and curious operator. There is no need for enterprise prediction markets to do this task. The tools you need consist of a bunch of IT news aggregators and a good search engine. Consider this:

The Inevitable Move Of iTunes To The Cloud

In the ‘cloud’ piece above, there are facts and there are speculations. You get much more technology intelligence from reading the ‘cloud’ piece above than you would from a crude, plain and simple prediction market. Gimme a break with EPMs. They make no sense at all.

Contrast EPMs (which are costly) with public prediction markets (a la InTrade or BetFair), where probabilistic predictions are offered for free. That makes all the difference, because the added accuracy brought by prediction markets is very small. Market-generated odds are handed out for free to journalists, and still, few of them take the bait. The market-powered crystal ball is worth peanuts.

The reason CEOs are paid millions is that only a small percentage of business administration managers have the ability to cut through the nonsense, and the balls to cut the cost of the nonsense. It is a rare skill. I am calling on CEOs to end EPMs. Right now.

The Singularity University looks at prediction markets and collective intelligence.


David Orban:

In its ten tracks, Singularity University (SU) tries to cover as much as possible of a vast amount of material. The specifics are steered by the track chairs, with a lot of input from the students, the teaching fellows, and sometimes from the outside. The Futures Studies & Forecasting track does indeed cover prediction markets, and yes, if not through a proper market, tasks, ideas, and group activities are often evaluated using group ranking tools within SU.

David

David Orban
Advisor & European Lead, Singularity University
NASA Ames, Bldg 17 Moffett Field, CA 94035, USA
http://www.singularityu.org/david
