Tax Futures, In Real Life


I am very pleased to announce the world's first tax futures on Intrade. I thank John Delaney and everyone there for their help and enthusiasm in getting these off the ground.

The contracts will forecast the highest marginal single-filer federal tax rates for 2009, 2010 & 2011. I expect trading to be concentrated in the 2011 contracts, as Bush's 2001 tax cuts are scheduled to expire that year, reverting the rate in question from 35% to 39.6%, while the lower bracket rates each increase by three percentage points. While it is less likely, Congress may also alter the Bush tax cuts for tax years before 2011, but such changes would probably affect 2011 as well.

If reasonable liquidity can be sustained in these markets, I hope that contracts will be added to predict corporate taxes, and other factors that contribute to individual effective tax rates, like the Alternative Minimum Tax and the social security cap. Given the tremendous hedging utility of such markets, maintaining a liquid two-way market might be tricky, although there are some obvious ways for any market-makers to hedge what might become a position more short of taxes than usual.
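To make the hedging use concrete, here is a minimal sketch. The contract terms (a binary contract paying $10 if the 2011 top rate is at least 39.6%), the price, and the income figures are all assumptions for illustration, not the actual Intrade specifications.

```python
# Hypothetical hedge of higher 2011 taxes with a binary tax-rate contract.
# Assumed terms (for illustration only): the contract pays $10 if the 2011 top
# marginal rate is 39.6% or higher, $0 otherwise, and currently trades at $6.00.

income_in_top_bracket = 500_000                            # taxable income exposed to the top rate (assumed)
extra_liability = (0.396 - 0.35) * income_in_top_bracket   # extra tax owed if the cuts expire

payout = 10.0                                    # payout per contract if rates revert (assumed)
price = 6.0                                      # current contract price (assumed)
contracts = extra_liability / (payout - price)   # size the hedge to offset the extra tax

print(f"Extra tax if rates revert:      ${extra_liability:,.0f}")
print(f"Contracts needed to offset it:  {contracts:,.0f}")
print(f"Upfront cost of the hedge:      ${contracts * price:,.0f}")
```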

Please read the last post on "Policy Event Derivatives" for some background on the potential benefits of such markets. I should add that while I am confident in their long-term value for making better group decisions and sharing risk, I am sensitive to some foreseeable pathologies, and don't want to give the impression of being too cavalier at this point. There are potential problems and side effects stemming from the use of such markets that will be addressed later.

[Cross-posted from Risk Markets And Politics]


Merger Markets on Microsoft-Yahoo


HP began to explore prediction markets in 1996, but did not even consider applying them to the 2002 HP-Compaq merger. Similarly, Yahoo and Microsoft are two of the companies mentioned most often as being involved in prediction markets (along with their main competitor Google), but I'll bet neither is considering the by-far-most-valuable markets they could create: markets on their just-announced proposed merger.

Decision markets could say whether this merger is good for shareholders, by estimating the combined stock price given a merger, and given no merger. Similarly, decision markets could say whether this merger is good for these firms' customers, by estimating the price and/or quantity of web ads given a merger, and given no merger. This might help convince regulators to approve the merger.
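A sketch of how such a reading could be taken off conditional contracts; the prices, the probability, and the contract convention (pay the combined stock price in one state, nothing in the other) are assumptions for illustration, not markets that actually exist:

```python
# Hypothetical decision-market readout for the proposed merger (all numbers invented).
# Assumed convention: each conditional contract pays the combined post-deal stock price
# in its state and zero in the other state, alongside a binary contract on the deal closing.

p_merger = 0.55                  # price of the binary "merger closes" contract (assumed)
price_pay_if_merger = 18.20      # pays the stock price if the merger closes, else 0 (assumed)
price_pay_if_no_merger = 13.10   # pays the stock price if the deal is called off, else 0 (assumed)

# Dividing each state-contingent price by the probability of its state recovers
# the market's conditional estimate of the stock price in that state.
est_given_merger = price_pay_if_merger / p_merger
est_given_no_merger = price_pay_if_no_merger / (1 - p_merger)

print(f"Estimated stock price given merger:    {est_given_merger:.2f}")    # ~33.1
print(f"Estimated stock price given no merger: {est_given_no_merger:.2f}") # ~29.1
```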

My main doubt here is whether ad price and quantity are good enough measures of the merger's social benefits – what other outcomes could such markets estimate, to speak more clearly? And this is a very clear demonstration that these companies are just not serious about finding the highest value applications of prediction markets.

Cross-posted from Overcoming Bias.

Implied Probability of an Outcome – BetFair Edition

"Does prediction market guru [= Chris Masse] understand probabilities?", asks our good friend Niall O'Connor.

[Image: Probability]

Let's ask economics PhD Michael Giberson:

Yes, I think you are right. I just looked at your exchange with Niall and Niall's post, and haven't thought through just how the over-round may affect things.

But it seems okay to do it just the way you say, because the digital odds imply a precise numerical prediction, and that prediction can be stated in the form of a probability. Call the calculated number an implied probability of the event, and then you don't have to worry that a complete group of related market prices doesn't add up to 100 percent.

If a trader believes that event X should be trading at 70 percent and sees current digital odds of 1.56 at Betfair (=> 64.1 percent), he should buy (considering fees, etc.). If the digital odds move to 1.4 (=> 71.4 percent), then sell, or at least don't buy.

Niall may be hung up on using a pure concept of probability. The purity is not useful; your explanation is useful. You win.

(Feel free to quote from this email, should you wish.)

-Mike

---

UPDATE: Michael Giberson clarifies his comment…

Niall, I agree that Professor Sauer's presentation explains how to estimate true probabilities from odds that do not sum to one. I was taking Chris Masse to be explaining a related, but slightly different task: the conversion of the digital odds that Betfair quotes to an implied probability.

The point of my slightly snide comment concerning purity reflects the pragmatic view that a trader could use the method Chris describes to convert from digital odds to an implied probability (which may be easier for some traders to think with and trade on). A single quote of digital odds implies a particular probability estimate. Chris's math gets the trader from the one number to the other. (= useful to traders)

To get to the estimate of true probabilities, as you have explained, a trader must have a complete set of odds for all possible outcomes for an event. This additional information requirement would completely stymie a trader wishing to arrive at the true probability estimates in cases in which some of the data is unavailable. (= not as useful to traders)
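A short sketch of both calculations: the single-quote conversion Chris describes, and the normalization of a complete book (the over-round) that Niall and Professor Sauer's presentation refer to. The 1.56 and 1.40 quotes come from the exchange above; the two-outcome book is invented.

```python
# Convert a single Betfair-style decimal (digital) odds quote to an implied probability,
# and normalize a complete book of odds whose implied probabilities exceed 100%
# (the over-round) into estimates of "true" probabilities.

def implied_probability(decimal_odds: float) -> float:
    """One quote -> one implied probability (all a single trader needs)."""
    return 1.0 / decimal_odds

def normalized_probabilities(odds_book: dict) -> dict:
    """A complete set of odds for all outcomes -> probabilities that sum to 1."""
    implied = {outcome: 1.0 / o for outcome, o in odds_book.items()}
    total = sum(implied.values())            # > 1 when there is an over-round
    return {outcome: p / total for outcome, p in implied.items()}

print(implied_probability(1.56))   # ~0.641, the 64.1 percent in the exchange above
print(implied_probability(1.40))   # ~0.714

# Invented two-outcome book with a small over-round, for illustration only.
book = {"X happens": 1.56, "X does not happen": 2.60}
print(normalized_probabilities(book))
```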


NewsFutures does *NOT* favor event derivative management by traders.

Emile Servan-Schreiber of NewsFutures:

January 29th, 2008 at 7:25 am

Another important difference with NewsFutures (where people have been “trading news” since 2000) is that Hubdub doesn’t give away any prizes to performers. That, perhaps, is a direct consequence of the false good idea of letting people create their own markets. This not only creates many opportunities for fraud (if, for instance, the creator of the market also controls the outcome), but it also encourages incoherent outcome definitions, unverifiable outcomes, and duplicate or junkyard markets. Same problems that Inkling’s public markets suffer from.

Also, the mere idea of a Web 2.0 makeover of prediction markets is laughable. To paraphrase a good ol’ song from the 90’s, prediction markets were web 2.0 before web 2.0 was cool.

Emile Servan-Schreiber's criticism is pertinent. However, his conclusion ('no' to self-management of event derivatives) is too radical. Without Inkling Markets, we wouldn't have had Michael Giberson, who loves experimenting with play-money prediction markets. Somebody will come up, one day, with the right technology (e.g., a reputation system) patching the flaws that EJSS addresses. One day, in the future, we will be able to enjoy both worlds, because they will have merged into one: the libertarian prediction exchanges and the disciplined prediction exchanges.

My good doctor Emile, remember JFK, who pushed his country to do things "not because they are easy, but because they are hard"… and succeeded. :-D Just because event derivative management by traders is problematic does not mean that we should give up right now. Kudos to Inkling Markets and HubDub for trying, and for acknowledging criticism from veterans. :-D

---

UPDATE: Emile Servan-Schreiber comments…

No one has a monopoly on user-driven content. Every exchange out there lets people propose their own ideas for markets that might be of interest to themselves and others. For instance, on NewsFutures, a lot of the general forum discussion is back-and-forth between the users and the admins about which markets to create next. Where NF differs from FX, Inkling and Hubdub is that the NF admins (who, by the way, are recruited from the user base itself) have final control over wording as well as settlement. That's what guarantees the coherence of the exchange, which in turn means we are able to offer prizes, whereas the likes of FX, Inkling and Hubdub likely cannot, because they give away too much control to unknown entities to guarantee the fairness of the contest.

I like NewsFutures, and I get all that. But I'm saying that, nowadays, on the Web, people want DIY tools. That's why HubDub and Inkling Markets are appealing to them. They don't have to discuss "back-and-forth". They create the event derivative they want. Straight from the producer to the trader.


Fundamentals of Prediction Markets: Probabilities, Prediction Timescale, and Absolute & Relative Accuracy


Jed Christiansen has produced the best explainer on prediction markets I've seen in years. Go read it.

– Fundamentals of Prediction Markets
– Different types of Prediction Markets
– Problem #1 – Understanding Probabilities
– Problem #2 – Prediction timescale
– Problem #3 – Assessing accuracy
– Problem #4 – Compared to what?
– Summary – How have the political prediction markets really performed?

Assessing Probabilistic Predictions 101


Lance Fortnow:

[…] Notice that when we have a surprise victory in a primary, like Clinton in New Hampshire, much of the talk revolves around why the pundits, polls and prediction markets all "failed." Meanwhile in sports, when we see a surprise victory, like the New York Giants over Dallas and then again in Green Bay, the focus is on what the Giants did right and the Cowboys and Packers did wrong. Sports fans understand probabilities much better than political junkies—upsets happen occasionally, just as they should.

Previously: Defining Probability in Prediction Markets – by Panos Ipeirotis – 2008

[…] Interestingly enough, such failed predictions are absolutely necessary if we want to take the concept of prediction markets seriously. If the frontrunner in a prediction market was always the winner, then the markets would have been a seriously flawed mechanism. […]

Previously: Can prediction markets be right too often? – by David Pennock – 2006

[…] But this begs another question: didn’t TradeSports call too many states correctly? […] The bottom line is we need more data across many elections to truly test TradeSports’s accuracy and calibration. […] The truth is, I probably just got lucky, and it’s nearly impossible to say whether TradeSports underestimated or overestimated much of anything based on a single election. Such is part of the difficulty of evaluating probabilistic forecasts. […]

Previously: Evaluating probabilistic predictions – by David Pennock – 2006

[…] Their critiques reflect a clear misunderstanding of the nature of probabilistic predictions, as many others have pointed out. Their misunderstanding is perhaps not so surprising. Evaluating probabilistic predictions is a subtle and complex endeavor, and in fact there is no absolute right way to do it. This fact may pose a barrier for the average person to understand and trust (probabilistic) prediction market forecasts. […] In other words, for a predictor to be considered good it must pass the calibration test, but at the same time some very poor or useless predictors may also pass the calibration test. Often a stronger test is needed to truly evaluate the accuracy of probabilistic predictions. […]
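One example of a "stronger test" is a proper scoring rule such as the Brier score; the quote above does not name one, so this sketch, with invented forecasts and outcomes, is my addition:

```python
# Brier score: mean squared error between forecast probabilities and 0/1 outcomes.
# Lower is better. A forecaster who always says 50% can be perfectly calibrated in a
# coin-flip world, yet scores worse than one who assigns sharp, correct probabilities.

def brier_score(forecasts, outcomes):
    return sum((p - y) ** 2 for p, y in zip(forecasts, outcomes)) / len(forecasts)

# Invented example: a sharp forecaster vs. an uninformative but calibrated one.
outcomes        = [1, 0, 1, 1, 0, 0, 1, 0]
sharp_forecasts = [0.9, 0.1, 0.8, 0.7, 0.2, 0.3, 0.9, 0.2]
flat_forecasts  = [0.5] * len(outcomes)

print(brier_score(sharp_forecasts, outcomes))  # ~0.04, much better
print(brier_score(flat_forecasts, outcomes))   # 0.25
```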

Robin Hanson's concept of… Info Value


Robin Hanson:

Info Value = the added accuracy the markets provide relative to other mechanisms, times the value that accuracy can give in improved decisions, minus the cost of maintaining the markets, relative to the cost of other mechanisms.

A highly accurate market has little value if other mechanisms can provide similar accuracy at a lower cost, or if few substantial decisions are influenced by accurate forecasts on its topic.

Wow, great formula. [BTW, I have slightly edited RH’s first sentence.]
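Written out symbolically (my own transcription of that sentence; the symbols are not Hanson's):

```latex
\text{Info Value} =
\underbrace{\left(A_{\text{market}} - A_{\text{other}}\right)}_{\text{added accuracy}}
\times
\underbrace{V_{\text{decision}}}_{\text{value of that accuracy in improved decisions}}
-
\underbrace{\left(C_{\text{market}} - C_{\text{other}}\right)}_{\text{relative cost of maintaining the markets}}
```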

I'm sure Mike Giberson will write another blog post for Midas Oracle about that formula, all that for free. Crowd-sourcing works for me. :-D

Better Pricing for Tournament Prediction Markets


Last year while working out a few thoughts on arbitrage opportunities in basketball tournament prediction markets at Inkling, it occurred to me that the Inkling pricing mechanism was just a little bit off for such applications. The question is whether something better can be done. An answer comes from the folks at Yahoo Research: yes.

Inkling’s markets come in a couple of flavors, so far as I know all using an automated market maker based on a logarithmic market scoring rule (LMSR). In the multi-outcome case – for example, a market to pick the winner of a 65-team single elimination tournament – the market ensures that all prices sum to exactly 100. If a purchase of team A shares causes its share price to increase by 5, then the prices of all 64 other team shares will decrease by a total of 5.

The logic of the LMSR doesn’t tell you exactly how to redistribute the counter-balancing price decreases. In Inkling’s case they appear to redistribute the counter-balancing price movements in proportion to each team’s previous share price (so, for example, a team with an initial price of 10 would decrease twice as much as a team with a previous price of 5). While for generic multi-outcome prediction markets this approach seems reasonable, it doesn’t seem right for a tournament structure. (I raised this point in a comment posted here at Midas Oracle last September, and responses in that comment thread by David Pennock and Chris Hibbert were helpful.)
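For reference, here is a minimal sketch of the generic LMSR price rule from Hanson's papers (not Inkling's actual code); it shows why the counterbalancing decreases come out proportional to each outcome's previous price, exactly the behavior described above:

```python
import math

# Standard logarithmic market scoring rule (LMSR) prices for a multi-outcome market:
#   p_i = exp(q_i / b) / sum_j exp(q_j / b)
# where q_i is the number of shares of outcome i sold so far and b sets liquidity.

def lmsr_prices(shares, b=100.0):
    exps = [math.exp(q / b) for q in shares]
    total = sum(exps)
    return [e / total for e in exps]

b = 100.0
shares = [0.0, 0.0, b * math.log(2), b * math.log(4)]   # initial prices 12.5, 12.5, 25, 50 (in cents)
before = lmsr_prices(shares, b)

shares[0] += 50.0                                        # someone buys shares of outcome 0
after = lmsr_prices(shares, b)

# Outcome 0's price rises; the others fall, and since their exponentials are unchanged,
# each falls by the same fraction of its previous price -- the outcome priced at 50
# absorbs twice the decrease of the outcome priced at 25, as described above.
for i, (p0, p1) in enumerate(zip(before, after)):
    print(f"outcome {i}: {100*p0:5.2f} -> {100*p1:5.2f}  (change {100*(p1-p0):+.2f})")
```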

The problem arises for pricing tournament markets because the tournament structure imposes certain relationships between teams that the generic pricing rule ignores. Incorporating the structure into the price rule in principle seems like the way to go. Robin Hanson, in his original articles on the LMSR, suggests a Bayes net could be used in such cases. Now three scientists at Yahoo Research have shown this approach works.

In “Pricing Combinatorial Markets For Tournaments,” Yiling Chen, Sharad Goel and David Pennock demonstrate that the pricing problem involved in running a LMSR-based combinatorial market for tournaments is computationally tractable so long as the shares are defined in a particular manner. In the abstract the authors report, “This is the first example of a tractable market-maker driven combinatorial market.”

An introduction to the broader research effort at Yahoo describes the “Bracketology” project in a less technical manner:

Fantasy stock market games are all the rage with Internet users…. Though many types of exchanges abound, they all operate in a similar fashion.

For the most part, each bet is managed independently, even when the bets are logically related. For example, picking Duke to win the final game of the NCAA college basketball tournament in your online office pool will not change the odds of Duke winning any of its earlier round games, even though that pick implies that Duke will have had to win all of those games to get to the finals.

This approach struck the Yahoo! Research team of Yiling Chen, Sharad Goel, George Levchenko, David Pennock and Daniel Reeves as fundamentally flawed. In a research project called “Bracketology,” they set about to create a “combinatorial market” that spreads information appropriately across logically related bets.…

In a standard market design, there are only about 400 possible betting options for the 63-game [sic] NCAA basketball tournament. But in a combinatorial market, where many more combinations are possible, the number of potential combinations is billions of billions. “That’s why you’ll never see anyone get every game right,” says Goel.…

At its core, the Bracketology project is about using a combinatorial approach to aggregate opinions in a more efficient manner. “I view it as collaborative problem solving,” Goel explains. “This kind of market collects lots of opinions from lots of people who have lots of information sources, in order to accurately determine the perceived likelihood of an event.”

Now that they know they can manage a 65-team single elimination tournament, I wonder about more complicated tournament structures. For example, how about a prediction market asking which Major League Baseball teams will reach the playoffs? Eight teams total advance, three division leaders and a wild-card team from the National League and the same from the American League. The wild-card team is the team with the best overall record in the league excepting the three division winners.

In principle the MLB case seems doable, though it would be a lot more complicated than a mere 65-team tournament that has only billions of billions of possible outcomes.
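The scale gap behind the "billions of billions" remark is easy to verify; the arithmetic below is my own (and the exact count of "standard" betting options depends on which simple markets an exchange lists), but the order of magnitude is the point:

```python
# Independent per-game bets vs. full bracket outcomes (my arithmetic, not the Yahoo paper's).
games = 63                     # games in the 64-team bracket, ignoring the play-in game
simple_bets = 2 * games        # back either side of every game: 126 options,
                               # a few hundred once round and champion markets are added
full_brackets = 2 ** games     # every way the whole bracket could resolve

print(simple_bets)             # 126
print(f"{full_brackets:.2e}")  # ~9.22e+18 -- billions of billions
```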

[NOTE: A longer version of this post appeared at Knowledge Problem as “At the intersection of prediction markets and basketball tournaments.”]

Prediction Market Efficiency vs. Prediction Market Accuracy


Panos Ipeirotis in a comment here:

[W]e should try to separate two things: market efficiency and market accuracy. Efficiency is the rate at which the market incorporates new information and prevents any arbitrage opportunities. Accuracy is the probability with which the market predicts the correct outcome of an event. The main claim to fame for the [prediction] markets is that they self-report their accuracy, and that “the prices are probabilities”.

We can measure the accuracy of the market by following the outline discussed above. One axis is the price of the contract at time t before the expiration of the contract, and the other axis is the rate at which this event happens. (…in 60% of the cases the event that trades at 0.6 happens, in 30% of the cases the event that trades at 0.3 happens, and so on…). A perfectly accurate market should have a straight line as an outcome when time t gets close to 0. Any deviation in the experimental results indicates an accuracy bias. There are many papers that document the favorite-longshot bias in such markets (underpricing the favorites, overpricing the longshots), so there is no need to repeat this here. An interesting thing is to see how big the bias can be while the market still has reasonable accuracy. Furthermore, if we have systematic and robust biases, then we can use a calibration function that adjusts the market prices, compensating for the biases, to reflect real-life probabilities.
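A sketch of the calibration check described above: bucket the contracts by their price at some time t before expiration and compare each bucket's average price with the realized frequency. The data here are invented; with real data, a favorite-longshot bias would show up as frequencies above the diagonal for favorites and below it for longshots.

```python
# Calibration check: group contracts by their price shortly before expiration and
# compare each group's average price to the fraction of those events that occurred.
# A well-calibrated market lies on the 45-degree line.

from collections import defaultdict

def calibration_table(prices, outcomes, n_bins=10):
    bins = defaultdict(list)
    for p, y in zip(prices, outcomes):
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    table = []
    for b in sorted(bins):
        ps, ys = zip(*bins[b])
        table.append((sum(ps) / len(ps), sum(ys) / len(ys), len(ys)))
    return table  # (mean price, observed frequency, count) per bin

# Invented example data: contract prices at time t and whether each event happened.
prices   = [0.92, 0.88, 0.65, 0.60, 0.35, 0.30, 0.10, 0.08, 0.72, 0.28]
outcomes = [1,    1,    1,    0,    0,    1,    0,    0,    1,    0]

for mean_price, freq, n in calibration_table(prices, outcomes):
    print(f"avg price {mean_price:.2f}  observed freq {freq:.2f}  (n={n})")
```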

Measuring efficiency is a trickier concept. The general definition of efficiency is that “the market immediately incorporates all available information”. Being able to predict price movements indicates inefficiency. Having the prices for an event sum to anything other than 1 indicates inefficiency. However, it is difficult to have definite proof that the market is efficient. We can only say that “we were not able to spot inefficiencies”. It is very difficult to prove that “the market is efficient”.

The two metrics are, of course, highly connected close to the expiration of the contract. If the market is not efficient, then it will not be accurate, as it will not have incorporated all the available information if any material information arrives just before the expiration of the contract.

Panos Ipeirotis

Defining Probability in Prediction Markets


The New Hampshire Democratic primary was one of the few(?) events in which prediction markets did not give an "accurate" forecast for the winner. In a typical "accurate" prediction, the candidate that has the contract with the highest price ends up winning the election.

This result, combined with increasing interest in (and hype about) the predictive accuracy of prediction markets, generated a huge backlash. Many opponents of prediction markets pointed out the "failure" and started questioning the overall concept and the ability of prediction markets to aggregate information.

Interestingly enough, such failed predictions are absolutely necessary if we want to take the concept of prediction markets seriously. If the frontrunner in a prediction market were always the winner, then the markets would be a seriously flawed mechanism. In such a case, an obvious trading strategy would be to buy the frontrunner's contract and then simply wait for the market to expire to collect a guaranteed, huge profit. If, for example, Obama was trading at 66 cents and Clinton at 33 cents (indicating that Obama is twice as likely to be the winner), and the markets were "always accurate", then it would make sense to buy Obama's contract the day before the election and get $1 back the next day. If this happened every time, then this would not be an efficient market. It would be a flawed, inefficient market.
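The arithmetic behind that argument, spelled out with the 66-cent example from the paragraph above:

```python
# If the frontrunner always won, buying the frontrunner at its quoted price would be
# a risk-free trade; if the price equals the true probability, it is merely a fair bet.

price = 0.66            # frontrunner contract price (the Obama example above)
payout_if_wins = 1.00

return_if_always_right = (payout_if_wins - price) / price    # ~51.5% overnight, "free money"
expected_value_at_fair_price = 0.66 * payout_if_wins - price  # 0: no edge if price = probability

print(f"{return_if_always_right:.1%}")
print(f"{expected_value_at_fair_price:+.2f}")
```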

In fact, I would argue that the recent streak of successes in which the markets always picked the winner of the election has been an anomaly, indicating the favorite bias that exists in these markets. The markets were more accurate than they should have been, according to the trading prices. If the market never fails, then the prices do not reflect reality, and the favorite is actually underpriced.

The other point that has been raised in many discussions (mainly from a mainstream audience) is how we can even define probability for a one-time event like the Democratic nomination for the 2008 presidential election. What does it mean that Clinton has a 60% probability of being the nominee and Obama a 40% probability? The common answer is that "if we repeat the event many times, in 60% of the cases Clinton will be the nominee and in 40% of the cases it will be Obama". Even though this is an acceptable answer for someone used to working with probabilities, it makes very little sense for the "average Joe" who wants to understand how these markets work. The notion of repeating the nomination process multiple times is an absurd concept.

The discussion brings to mind the ferocious battles between Frequentists and Bayesians over the definition of probability. Bayesians could not accept that we can use a Frequentist approach for defining probabilities of events. "How can we define the probability of success for a one-time event?" A Frequentist would approach the prediction market problem by defining a space of events and would say:

After examining prediction markets for many state-level primaries, we observed that in 60% of the cases the frontrunners who had a contract priced at 0.60 one day before the election were actually the winners of the election. In 30% of the cases, the candidates who had a contract priced at 0.30 one day before the election were actually the winners, and so on.

A Bayesian would criticize such an approach, especially when the sample size is small, and would point to the need for an initial belief function that is updated as information signals come from the market. Interestingly enough, the two approaches tend to be equivalent in the presence of infinite samples, which is, however, rarely the case.
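A small illustration of that convergence, with entirely invented data: a frequentist point estimate of "how often does a 0.60-priced frontrunner actually win?" next to a Bayesian posterior mean under a Beta prior, as the number of observed primaries grows.

```python
# Frequentist vs. Bayesian estimates of the same frequency, converging as the sample grows.
# The Beta(6, 4) prior (mean 0.6) and the win counts are invented for illustration.

def frequentist(wins, n):
    return wins / n if n else float("nan")

def bayesian(wins, n, prior_a=6, prior_b=4):
    return (prior_a + wins) / (prior_a + prior_b + n)  # posterior mean of a Beta-Binomial model

for n, wins in [(3, 3), (10, 7), (100, 63), (10_000, 6_012)]:
    print(f"n={n:>6}: frequentist {frequentist(wins, n):.3f}   bayesian {bayesian(wins, n):.3f}")
```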

Crossposted from my blog