# How Business Insider got it wrong with the Tiger Woods divorce cost odds

PaddyPower is a bookmaker, not a prediction exchange. Hence, the Tiger Woods divorce cost odds are computed by an analyst, not by the market.

1. It is not the "punters" who have fabricated the odds, but a PaddyPower employee.

2. It is not a set of "market odds", but a set of bookmaker odds.

3. The bookmaker analyst does not have access to any confidential contract. So, the PaddyPower press release is aimed at suckers in the media.

# Implied Probability of an Outcome – BetFair Edition

"Does prediction market guru [= Chris Masse] understand probabilities?", asks our good friend Niall O'Connor.

---

Let's ask economics PhD Michael Giberson:

Yes, I think you are right. I just looked at your exchange with Niall and Niall's post, and haven't thought through just how the over-round may affect things.

But it seems okay to do it just the way you say, because the digital odds imply a precise numerical prediction, and that prediction can be stated in the form of a probability. Call the calculated number an implied probability of the event, and then you don't have to worry that a complete group of related market prices doesn't add to 100 percent.

If a trader believes that event X should be trading at 70 percent and sees current digital odds of 1.56 at Betfair (=> 64.1 percent), he should buy (considering fees, etc.). If the digital odds move to 1.4 (=> 71.4 percent), then sell or at least don't buy.

Niall may be hung up on using a pure concept of probability. The purity is not useful; your explanation is useful. You win.

(Feel free to quote from this email, should you wish.)

-Mike
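Giberson's decision rule can be sketched in a few lines. This is a minimal illustration, not trading advice; the 70% belief and the 1.56 / 1.40 odds are the figures from his example, and fees are ignored:

```python
def implied_probability(decimal_odds: float) -> float:
    """Convert decimal (digital) odds to an implied probability."""
    return 1.0 / decimal_odds

def trade_signal(believed_prob: float, decimal_odds: float) -> str:
    """Compare a trader's belief with the market's implied probability."""
    implied = implied_probability(decimal_odds)
    if believed_prob > implied:
        return "buy"   # the market underprices the event relative to the belief
    elif believed_prob < implied:
        return "sell"  # the market overprices the event
    return "hold"

# Figures from the example above:
print(trade_signal(0.70, 1.56))  # implied ~64.1% < 70% belief -> "buy"
print(trade_signal(0.70, 1.40))  # implied ~71.4% > 70% belief -> "sell"
```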

---

UPDATE: Michael Giberson clarifies his comment…

Niall, I agree that Professor Sauer's presentation explains how to estimate true probabilities from odds that do not sum to one. I was taking Chris Masse to be explaining a related, but slightly different task: the conversion of the digital odds that Betfair quotes to an implied probability.

The point of my slightly snide comment concerning purity reflects the pragmatic view that a trader could use the method Chris describes to convert from digital odds to an implied probability (which may be easier for some traders to think with and trade on). A single quote of digital odds implies a particular probability estimate. Chris's math gets the trader from the one number to the other. (= useful to traders)

To get to the estimate of true probabilities, as you have explained, a trader must have a complete set of odds for all possible outcomes for an event. This additional information requirement would completely stymie a trader wishing to arrive at the true probability estimates in cases in which some of the data is unavailable. (= not as useful to traders)
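The true-probability estimate Giberson refers to can be sketched as follows: given decimal odds for every possible outcome of an event, the raw implied probabilities sum to something other than 100% (the over-round), and dividing each one by the total removes it. A minimal sketch; the odds values are made up for illustration:

```python
def normalized_probabilities(decimal_odds: list[float]) -> list[float]:
    """Estimate true probabilities from a complete set of decimal odds
    by removing the bookmaker's over-round."""
    raw = [1.0 / o for o in decimal_odds]  # implied probabilities; sum != 1
    overround = sum(raw)
    return [p / overround for p in raw]

# Hypothetical two-outcome market:
odds = [1.56, 2.80]
probs = normalized_probabilities(odds)
print(probs)  # normalized so that the probabilities sum to 100%
```

Note that this method needs odds for all outcomes, which is exactly the extra information requirement mentioned above.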


# BetFair Digital Odds = BetFair Probabilities

Odds that Hillary Clinton gets the 2008 Democratic nomination = 1.56 (digital odds taken at 9:15 AM EST)

To get the implied probability expressed in percentage:

• Take the number "1"
• Divide it by the digital odds (here, "1.56")
• Then multiply the result by 100
• 64.1% = ( 1 / 1.56 ) x 100

The BetFair-generated implied probability is not far from InTrade's 62.1%.
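The steps above amount to a one-line conversion. A minimal sketch, using the quoted Clinton odds:

```python
def implied_probability_pct(decimal_odds: float) -> float:
    """The steps above: 1 divided by the digital odds, times 100."""
    return (1.0 / decimal_odds) * 100.0

print(round(implied_probability_pct(1.56), 1))  # 64.1
```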

Psstt… This post was prompted by Niall O'Connor, who puts all his faith in the BetFair instant "over-round", which indeed doesn't add up to the virgin and perfect "100%" that Niall is seeking (like the Monty Python knights were seeking the Holy Grail). Good luck with your quest, Niall.

Your mother was a hamster and your father smelt of elderberries!

---

External Resource: Interpreting Prediction Market Prices as Probabilities – (PDF file) – by Justin Wolfers and Eric Zitzewitz

---

# Assessing Probabilistic Predictions 101

Lance Fortnow:

[…] Notice that when we have a surprise victory in a primary, like Clinton in New Hampshire, much of the talk revolves on why the pundits, polls and prediction markets all "failed." Meanwhile in sports when we see a surprise victory, like the New York Giants over Dallas and then again in Green Bay, the focus is on what the Giants did right and the Cowboys and Packers did wrong. Sports fans understand probabilities much better than political junkies—upsets happen occasionally, just as they should.

Previously: Defining Probability in Prediction Markets – by Panos Ipeirotis – 2008

[…] Interestingly enough, such failed predictions are absolutely necessary if we want to take the concept of prediction markets seriously. If the frontrunner in a prediction market was always the winner, then the markets would have been a seriously flawed mechanism. […]

Previously: Can prediction markets be right too often? – by David Pennock – 2006

[…] But this begs another question: didn’t TradeSports call too many states correctly? […] The bottom line is we need more data across many elections to truly test TradeSports’s accuracy and calibration. […] The truth is, I probably just got lucky, and it’s nearly impossible to say whether TradeSports underestimated or overestimated much of anything based on a single election. Such is part of the difficulty of evaluating probabilistic forecasts. […]

Previously: Evaluating probabilistic predictions – by David Pennock – 2006

[…] Their critiques reflect a clear misunderstanding of the nature of probabilistic predictions, as many others have pointed out. Their misunderstanding is perhaps not so surprising. Evaluating probabilistic predictions is a subtle and complex endeavor, and in fact there is no absolute right way to do it. This fact may pose a barrier for the average person to understand and trust (probabilistic) prediction market forecasts. […] In other words, for a predictor to be considered good it must pass the calibration test, but at the same time some very poor or useless predictors may also pass the calibration test. Often a stronger test is needed to truly evaluate the accuracy of probabilistic predictions. […]
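The calibration test Pennock mentions can be sketched as follows: group forecasts by their predicted probability and compare each group's empirical win rate. A toy illustration with made-up forecast data:

```python
from collections import defaultdict

def calibration_table(forecasts):
    """forecasts: list of (predicted_probability, outcome) pairs,
    where outcome is 1 if the event happened and 0 otherwise.
    Groups predictions into buckets (rounded to one decimal) and
    reports the empirical frequency of the event in each bucket."""
    buckets = defaultdict(list)
    for prob, outcome in forecasts:
        buckets[round(prob, 1)].append(outcome)
    return {b: sum(o) / len(o) for b, o in sorted(buckets.items())}

# Made-up data: a well-calibrated forecaster's 0.6 predictions
# should come true about 60% of the time.
data = [(0.6, 1)] * 6 + [(0.6, 0)] * 4 + [(0.3, 1)] * 3 + [(0.3, 0)] * 7
print(calibration_table(data))  # {0.3: 0.3, 0.6: 0.6}
```

As the quote warns, passing this test is necessary but not sufficient: a forecaster who always predicts the base rate is calibrated yet uninformative, which is why a stronger test (such as a proper scoring rule) is often needed.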

# Defining Probability in Prediction Markets

The New Hampshire Democratic primary was one of the few(?) events in which prediction markets did not give an "accurate" forecast for the winner. In a typical "accurate" prediction, the candidate that has the contract with the highest price ends up winning the election.

This result, combined with an increasing interest/hype about the predictive accuracy of prediction markets, generated a huge backlash. Many opponents of prediction markets pointed out the "failure" and started questioning the overall concept and the ability of prediction markets to aggregate information.

Interestingly enough, such failed predictions are absolutely necessary if we want to take the concept of prediction markets seriously. If the frontrunner in a prediction market was always the winner, then the markets would be a seriously flawed mechanism. In such a case, an obvious trading strategy would be to buy the frontrunner's contract and then simply wait for the market to expire to get a guaranteed, huge profit. If, for example, Obama was trading at 66 cents and Clinton at 33 cents (indicating that Obama is twice as likely to be the winner), and the markets were "always accurate", then it would make sense to buy Obama's contract the day before the election and get $1 back the next day. If this happened every time, the market would not be efficient; it would be a flawed, inefficient market.

In fact, I would like to argue that the recent streak of successes in which the markets always picked the winner of the elections has been an anomaly, indicating the favorite bias that exists in these markets. The markets were more accurate than they should have been, according to the trading prices. If the market never fails, then the prices do not reflect reality, and the favorite is actually underpriced.

The other point that has been raised in many discussions (mainly from a mainstream audience) is how we can even define probability for a one-time event like the Democratic nomination for the 2008 presidential election. What does it mean that Clinton has a 60% probability of being the nominee and Obama a 40% probability? The common answer is that "if we repeated the event many times, in 60% of the cases Clinton would be the nominee and in 40% of the cases it would be Obama". Even though this is an acceptable answer for someone used to working with probabilities, it makes very little sense for the "average Joe" who wants to understand how these markets work. The notion of repeating the nomination process multiple times is an absurd concept.

The discussion brings to mind the ferocious battles between Frequentists and Bayesians over the definition of probability. Bayesians could not accept that we can use a Frequentist approach for defining probabilities for events. "How can we define the probability of success for a one-time event?" The Frequentist would approach the prediction market problem by defining a space of events and would say:

After examining prediction markets for many state-level primaries, we observed that in 60% of the cases, the frontrunners who had a contract priced at 0.60 one day before the election were actually the winners of the election. In 30% of the cases, the candidates who had a contract priced at 0.30 one day before the election were actually the winners of the election, and so on.

A Bayesian would criticize such an approach, especially when the sample size is small, and would point to the need for an initial belief function that is updated as information signals come from the market. Interestingly enough, the two approaches tend to be equivalent in the presence of infinite samples, which is, however, rarely the case.
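The contrast above can be sketched numerically. With a small sample, the frequentist estimate is the raw win frequency, while a Bayesian with a uniform Beta(1, 1) prior reports the posterior mean; as the sample grows, the two converge. A sketch with hypothetical counts:

```python
def frequentist_estimate(wins: int, trials: int) -> float:
    """Raw frequency: wins / trials."""
    return wins / trials

def bayesian_estimate(wins: int, trials: int, alpha=1.0, beta=1.0) -> float:
    """Posterior mean under a Beta(alpha, beta) prior (uniform by default)."""
    return (wins + alpha) / (trials + alpha + beta)

# Small sample: the prior pulls the Bayesian estimate toward 0.5.
print(frequentist_estimate(3, 4))       # 0.75
print(bayesian_estimate(3, 4))          # ~0.667
# Large sample: the two approaches nearly agree.
print(frequentist_estimate(600, 1000))  # 0.6
print(bayesian_estimate(600, 1000))     # ~0.6
```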

Crossposted from my blog