InTrade on the elimination of Osama Bin Laden – [ANALYSIS]

Mike Giberson:

How do we know, now, that Intrade’s market price was not an accurate estimate of the probability bin Laden was killed or captured by September 2011? Is a prior estimate of 50 percent likelihood that a tossed coin will come up heads wrong if the coin comes up as “100 percent” heads (and not half-heads and half-tails)?

I’m not buying Chris’s implied definition of success and failure.

However, one might ask Robin Hanson about what the Intrade market’s performance implies about the usefulness of his Policy Analysis Market idea.

Note that I was contrasting the InTrade-Bin-Laden failure with the high expectations set by Robin Hanson, Justin Wolfers and James Surowiecki.

Also, other than statisticians, most people don’t take a probabilistic view of InTrade’s predictions. That’s the big misunderstanding, and it is one part of the big fail of the prediction markets.
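Giberson’s coin-toss analogy is the standard argument for scoring probabilistic forecasts: a single resolved event cannot tell you whether the stated probability was wrong; only calibration across many forecasts can. Here’s a minimal Python sketch of that idea using the Brier score, with made-up numbers rather than actual InTrade data:

```python
# Minimal sketch: a single 0/1 outcome cannot settle whether a probability
# estimate was "wrong" -- only many resolved forecasts can. Numbers are illustrative.
import random

random.seed(42)

def brier(forecasts_and_outcomes):
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in forecasts_and_outcomes) / len(forecasts_and_outcomes)

# One coin toss forecast at 50%: the outcome always resolves to "100% heads" or
# "100% tails", so the single-event score is 0.25 either way and says nothing
# about whether 50% was a good estimate.
single = [(0.5, 1)]
print(brier(single))  # 0.25 regardless of which way the coin lands

# Over many tosses, a calibrated 50% forecaster converges to the best achievable
# score for a fair coin (0.25), while a confident-but-wrong forecaster does worse.
tosses = [random.randint(0, 1) for _ in range(10_000)]
print(brier([(0.5, o) for o in tosses]))   # ~0.25: calibrated
print(brier([(0.9, o) for o in tosses]))   # ~0.41: overconfident "90% heads" forecasts
```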

InTrade’s prediction markets on secretive events are just an Irish scam. – [ANALYSIS]

Paul Hewitt:

How do we know the Intrade price was not accurate? Well, the raid wasn’t just executed on a whim. It had been planned for quite some time. Therefore, the true likelihood must have been much higher than the four or five percent chance the market was telling us.

Many people knew of the plans, albeit they were very high ranking, sworn-to-secrecy types. Since the market did not reflect the potential success of the planned raid, either the market was inefficient or the market, in an aggregate sense, did not possess enough information to make a reasonably informed prediction. In this case, I have to believe the market participants (every last one of them) knew next to nothing about the outcome being predicted.

The Intrade market prediction was nothing more than an aggregation of guesses. This is very different from an accurate prediction (based on calibration) that turns out to be wrong.

Markets such as these have no use, whatsoever, in decision-making. The useful information was that gathered by the SEALs and other secret services, and that was the information provided to the real decision-maker, The President. I would argue that these types of markets have no place as betting markets either. There is no way to test the calibration, so we don’t know whether they are “fair” markets (unlike the accurate calibration of horse races and casino games).

In other words, stop wasting our time operating and analyzing these markets. They are never going to be useful.
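For readers unfamiliar with the calibration test Hewitt mentions, here is a minimal Python sketch of how it works when repeated, comparable events exist (as in horse racing or casino games). The (price, outcome) records below are hypothetical; the point is that one-off contracts on secretive events never accumulate enough resolved outcomes to fill these buckets:

```python
# Sketch of a calibration check over a hypothetical log of resolved contracts.
# Each record is (market price in [0, 1], outcome 0 or 1).
from collections import defaultdict

def calibration_table(records, bucket_pct=10):
    """Group resolved contracts into price buckets and compare price to hit rate."""
    buckets = defaultdict(list)
    for price, outcome in records:
        lo = int(price * 100) // bucket_pct * bucket_pct   # e.g. 0.05 -> 0, 0.70 -> 70
        buckets[lo].append(outcome)
    return {lo: (sum(v) / len(v), len(v)) for lo, v in sorted(buckets.items())}

# Hypothetical data: contracts priced around 5 cents should pay out roughly 5% of
# the time if the market is well calibrated.
records = [(0.05, 0)] * 95 + [(0.05, 1)] * 5 + [(0.70, 1)] * 68 + [(0.70, 0)] * 32
for lo, (hit_rate, n) in calibration_table(records).items():
    print(f"priced {lo}-{lo + 10} cents: resolved YES {hit_rate:.0%} of {n} contracts")
```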

British electors don’t want to change their electoral system. – [PREDICTION MARKET]

Question asked on May 5, 2011 (United Kingdom Alternative Vote referendum):

At present, the UK uses the “first past the post” system to elect MPs to the House of Commons. Should the “alternative vote” system be used instead?

Here’s BetFair’s prediction market on the YES side:

Tipped by Mike Robb.

InTrade was not able to predict the elimination of Osama Bin Laden. – [PREDICTION POST-MORTEM]

I already blogged about the big fail of the prediction markets. Here’s more from the NYT, Eddy Elfenbein, and Barry Ritholtz.

Why you should never trust John Battelle’s opinion on tech – [PREDICTION POST-MORTEM]

John Battelle pumped up “Color” at inception, and now we have confirmation that it is a lemon.

John Battelle badmouthed the iPad at inception, and now we know that it is revolutionizing computing.

U.S. Supreme Court Prediction Market – [PAPER]

Recently posted to SSRN: FantasySCOTUS: Crowdsourcing a Prediction Market for the Supreme Court, a draft paper by Josh Blackman, Adam Aft, & Corey Carpenter assessing the accuracy of the Harlan Institute’s U.S. Supreme Court prediction market, FantasySCOTUS.org. The paper compares and contrasts the accuracy of FantasySCOTUS, which relied on a “wisdom of the crowd” approach, with the Supreme Court Forecasting Project, which relied on a computer model of Supreme Court decision making. From the paper’s abstract:

During the October 2009 Supreme Court term, the 5,000 members made over 11,000 predictions for all 81 cases decided. Based on this data, FantasySCOTUS accurately predicted a majority of the cases, and the top-ranked experts predicted over 75% of the cases correctly. With this combined knowledge, we can now have a method to determine with a degree of certainty how the Justices will decide cases before they do. . . . During the October 2002 Term, the [Forecasting] Project’s model predicted 75% of the cases correctly, which was more accurate than the [Supreme Court] Forecasting Project’s experts, who only predicted 59.1% of the cases correctly. The FantasySCOTUS experts predicted 64.7% of the cases correctly, surpassing the Forecasting Project’s Experts, though the difference was not statistically significant. The Gold, Silver, and Bronze medalists in FantasySCOTUS scored staggering accuracy rates of 80%, 75% and 72% respectively (an average of 75.7%). The FantasySCOTUS top three experts not only outperformed the Forecasting Project’s experts, but they also slightly outperformed the Project’s model – 75.7% compared with 75%.

You can download a copy of the draft paper here.
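As a back-of-the-envelope check on the “not statistically significant” claim, here is a two-proportion z-test in Python. The 81-case count comes from the abstract; the Forecasting Project’s sample size is not reported in the excerpt, so the second n = 81 below is an assumption made purely for illustration:

```python
# Rough sketch of why a ~65% vs ~59% accuracy gap over a few dozen cases is hard
# to distinguish from noise. n1 = 81 is from the abstract; n2 = 81 is an assumed
# placeholder, since the Forecasting Project's case count is not given here.
from math import sqrt, erf

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided z-test for the difference between two independent proportions."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))   # two-sided normal tail
    return z, p_value

z, p = two_proportion_z(0.647, 81, 0.591, 81)
print(f"z = {z:.2f}, p = {p:.2f}")   # roughly z = 0.73, p = 0.46: not significant
```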

[Crossposted at Agoraphilia, Midas Oracle, and MoneyLaw.]