This contract will settle (expire) at 100 ($10.00) if:
– Dominique Strauss-Kahn is found guilty of at least one of the seven counts in a trial by jury or judge
– Dominique Strauss-Kahn pleads guilty to at least one of the seven counts
The contract will settle (expire) at 0 ($0.00) if (including, but not limited to):
– Dominique Strauss-Kahn is found not guilty of all seven counts in a trial by jury or judge
– All counts are dropped
– The case is dismissed
– A mistrial is declared (for all counts)
– Dominique Strauss-Kahn pleads guilty to lesser charges only as part of a plea agreement (please note that if Strauss-Kahn pleads guilty to any of the original charges as part of a plea agreement, the contract will expire at 100)
This market covers only the charges covered in the complaint. Any other criminal or civil cases will not be considered when expiring this market.
The contracts will be paused if possible when a verdict is about to be announced and will be expired as soon as that decision is made public.
The end date on the contract is for indication purposes only. This date may be extended if necessary.
Any changes to the result after the contract has expired will not be taken into account – Contract Rule 1.4
Due to the nature of this prediction market contract you are obligated to read Contract Rule 1.7 (Unforeseen Circumstances) and Contract Rule 1.8 (Time Protection). Intrade may invoke these rules in its absolute discretion if deemed appropriate.
Please contact the exchange by emailing firstname.lastname@example.org if you have any questions regarding this contract or interpretation of these contract specific rules, related exchange news articles or exchange rules before you place an order to trade.
How do we know the Intrade price was not accurate? Well, the raid wasn’t just executed on a whim. It had been planned for quite some time. Therefore, the true likelihood must have been much higher than the four or five percent chance the market was telling us.
Many people knew of the plans, though they were high-ranking, sworn-to-secrecy types. Since the market did not reflect the potential success of the planned raid, either the market was inefficient or the market, in an aggregate sense, did not possess enough information to make a reasonably informed prediction. In this case, I have to believe the market participants (every last one of them) knew next to nothing about the outcome being predicted.
The Intrade market prediction was nothing more than an aggregation of guesses. This is very different from an accurate prediction (based on calibration) that turns out to be wrong.
Markets such as these have no use whatsoever in decision-making. The useful information was that gathered by the SEALs and other secret services, and that was the information provided to the real decision-maker, the President. I would argue that these types of markets have no place as betting markets either. There is no way to test their calibration, so we don’t know whether they are “fair” markets (unlike the accurately calibrated odds of horse races and casino games).
In other words, stop wasting our time operating and analyzing these markets. They are never going to be useful.
How do we know, now, that Intrade’s market price was not an accurate estimate of the probability bin Laden would be killed or captured by September 2011? Is a prior estimate of 50 percent likelihood that a tossed coin will come up heads wrong if the coin comes up as “100 percent” heads (and not half-heads and half-tails)?
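The coin-toss point can be made concrete with a small simulation (a hypothetical illustration, not data from the Intrade market): a well-calibrated low-probability forecast will still "fail" on most individual events, so a single resolved outcome cannot, by itself, show the probability estimate was wrong.

```python
import random

random.seed(0)

# Hypothetical illustration: a forecaster assigns a 5% probability to
# 10,000 independent events, each of which truly occurs 5% of the time.
trials = 10_000
p = 0.05
hits = sum(random.random() < p for _ in range(trials))

# The forecast is well calibrated: roughly 5% of the events occur.
print(f"empirical frequency: {hits / trials:.3f}")

# Yet every single event resolves to either 0% or 100% -- like one coin
# toss, one outcome tells us almost nothing about calibration.
```

Judging calibration requires many comparable forecasts; a one-off event like the bin Laden raid offers exactly one.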
I’m not buying Chris’s implied definition of success and failure.
However, one might ask Robin Hanson what the Intrade market’s performance implies about the usefulness of his Policy Analysis Market idea.
Note that I was contrasting the InTrade bin Laden market’s failure with the high expectations set by Robin Hanson, Justin Wolfers and James Surowiecki.
Also, other than statisticians, most people don’t take a probabilistic view of InTrade’s predictions. That’s the big misunderstanding, and it is one part of the larger failure of the prediction markets.
Recently posted to SSRN: FantasySCOTUS: Crowdsourcing a Prediction Market for the Supreme Court, a draft paper by Josh Blackman, Adam Aft, & Corey Carpenter assessing the accuracy of the Harlan Institute’s U.S. Supreme Court prediction market, FantasySCOTUS.org. The paper compares and contrasts the accuracy of FantasySCOTUS, which relied on a “wisdom of the crowd” approach, with the Supreme Court Forecasting Project, which relied on a computer model of Supreme Court decision making. From the paper’s abstract:
During the October 2009 Supreme Court term, the 5,000 members made over 11,000 predictions for all 81 cases decided. Based on this data, FantasySCOTUS accurately predicted a majority of the cases, and the top-ranked experts predicted over 75% of the cases correctly. With this combined knowledge, we can now have a method to determine with a degree of certainty how the Justices will decide cases before they do. . . . During the October 2002 Term, the [Forecasting] Project’s model predicted 75% of the cases correctly, which was more accurate than the [Supreme Court] Forecasting Project’s experts, who only predicted 59.1% of the cases correctly. The FantasySCOTUS experts predicted 64.7% of the cases correctly, surpassing the Forecasting Project’s Experts, though the difference was not statistically significant. The Gold, Silver, and Bronze medalists in FantasySCOTUS scored staggering accuracy rates of 80%, 75% and 72% respectively (an average of 75.7%). The FantasySCOTUS top three experts not only outperformed the Forecasting Project’s experts, but they also slightly outperformed the Project’s model – 75.7% compared with 75%.
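The abstract's claim that a 64.7% vs 59.1% accuracy gap is "not statistically significant" can be sketched with a standard two-proportion z-test. The case counts below are assumed round numbers for illustration, not the paper's actual data:

```python
from math import sqrt

def two_prop_z(p1, n1, p2, n2):
    """Two-proportion z-statistic using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# 64.7% vs 59.1% accuracy, with a hypothetical 80 cases per group.
z = two_prop_z(0.647, 80, 0.591, 80)
print(f"z = {z:.2f}")  # roughly 0.73, well below the 1.96 threshold
```

With samples of this size, the z-statistic falls far short of 1.96 (the two-sided 5% critical value), which is consistent with the paper's conclusion that the experts' edge could easily be noise.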