Why collect and synthesize the dispersed available information?

Sean Park (after a long, boring introduction to the subject):

[…] The ‘failure’ of New Hampshire was the result of primarily two factors:

  1. It wasn’t a failure. No market is always right. More importantly markets reflect the information available to and the interests of their participants. Basically markets are very efficient mechanisms (I would claim the most efficient) for processing information. No more, no less.
  2. In this particular instance, the probability of the market producing an erroneous forecast was high due to the lack of liquidity. This is a problem of all political markets in the US. Show me a market on the New Hampshire primaries with tens of thousands of participants and millions of dollars traded and I will show you a market that creates more valuable information. BUT it would still on occasion be ‘surprised.’

Basically I guess what I’m trying to say is the expectations seem to be set all wrong by many inside the community. I think “prediction markets” – creating markets in information and outcomes is a wonderfully important and valuable thing to do. Equally however I think that anyone that represents such markets as being able to predict the future is a charlatan. What they can do is collect and synthesize powerfully and efficiently all the dispersed available information – using money as the relevance filter. This is very valuable in its own right and is defensible. Promoting prediction markets to true sceptics (ie mainstream American politicians) on the basis that they are a Delphic Oracle is surely a path to certain tears and ultimately is almost guaranteed to fail. [*]

Markets don’t compute unknown unknowns. That doesn’t mean they are useless, just that they have to be understood in context.

[*] How should prediction markets be promoted, then? As information-collecting tools? And who should use these tools: experts or laymen? Sean Park does not elaborate further. None of these questions is answered.

HubDub will redefine the play-money exchange landscape.

In “private beta”.

So I can’t say anything.

Or, next thing, I’m a dead blogger.

As soon as you catch this post, RUSH THERE AND TAKE A VIRTUAL TOUR. Awesome. Nigel Eccles is the man. John Delaney, David Jack, Adam Siegel and Emile Servan-Schreiber can return to the locker room. Robin Hanson and Justin Wolfers are history artifacts, starting today. The whole world will look completely different after the HubDub launch.

How come nobody got that idea (news aggregation + prediction exchange) before HubDub???

Have Google’s enterprise prediction markets been accurate?

Justin Wolfers:

So we decided to move beyond asking, “Do prediction markets work?” and instead use them as a tool for better understanding how information flows within a (very cool) corporation.

I am more interested in the accuracy of the enterprise prediction markets than in corporate micro-geography issues.

Related Links: Using Prediction Markets to Track Information Flows: Evidence From Google – (PDF file) – by Bo Cowgill (Google economic analyst), Justin Wolfers (University of Pennsylvania) and Eric Zitzewitz (Dartmouth College)

Robin Hanson is not convinced by the Google experiment with enterprise prediction markets, to say the least.

Robin Hanson in a comment on Marginal Revolution:

This is important work for organizational sociology, but not for prediction markets, as this does little to help us find and field high value markets.

Finally, somebody who speaks the truth.

See also the comment of economist Michael Giberson.

Related Links: Using Prediction Markets to Track Information Flows: Evidence From Google – (PDF file) – by Bo Cowgill (Google economic analyst), Justin Wolfers (University of Pennsylvania) and Eric Zitzewitz (Dartmouth College)

ROBIN HANSON TELLS THE TRUTH ON GOOGLE’S ENTERPRISE PREDICTION MARKETS.

Robin Hanson:

Yes prediction markets are cool, Google is cool, and it is cool that Google had location data to show how location influences trading. But cool need not be useful. People are not asking the hard questions here: what value exactly is Google getting out of these markets, aside from helping them look cool?

Robin Hanson is a modern-day hero. Speaks the truth. Has a clear vision. Doesn’t mind acting as a contrarian, now and then. Like Winston Churchill. Is a real leader.

Related Links: Using Prediction Markets to Track Information Flows: Evidence From Google – (PDF file) – by Bo Cowgill (Google economic analyst), Justin Wolfers (University of Pennsylvania) and Eric Zitzewitz (Dartmouth College)

Prediction Market Efficiency vs. Prediction Market Accuracy

Panos Ipeirotis in a comment here:

[W]e should try to separate two things: market efficiency and market accuracy. Efficiency is the rate at which the market incorporates new information and prevents any arbitrage opportunities. Accuracy is the probability with which the market predicts the correct outcome of an event. The main claim to fame for the [prediction] markets is that they self-report their accuracy, and that “the prices are probabilities”.

We can measure the effectiveness of the market by following the outline discussed above. One axis is the price of the contract at time t before the expiration of the contract and the other axis is the rate in which this event happens. (…60% of the cases the event that trades at 0.6 happens, 30% of the cases the event that trades at 0.3 happens, and so on…). A perfectly accurate market should have a straight line as an outcome when time t gets close to 0. Any deviation of the experimental results indicates an accuracy bias. There are many papers that indicate the favorite-longshot biases in the market (underprice the favorite, overprice the longshots) so there is no need to really repeat this here. An interesting thing is to see how big it can be and still have reasonable accuracy. Furthermore, if we have systematic and robust biases, then we can use a calibration function that will adjust the market prices, compensating for the biases, to reflect real-life probabilities.

Measuring efficiency is a trickier concept. The general definition of efficiency is that “the market immediately incorporates all available information”. Being able to predict price movements indicates inefficiency. Having prices for an event summing up to anything other than 1, indicates inefficiency. However, it is difficult to have a definite proof that the market is efficient. We can only say that “we were not able to spot inefficiencies”. It is very difficult to prove that “the market is efficient”.

The two metrics are, of course, highly connected close to the expiration of the contract. If the market is not efficient, it will not be accurate: if any material information becomes available just before the expiration of the contract, the market will not have incorporated it.

Panos Ipeirotis
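
Both checks Ipeirotis distinguishes can be sketched in a few lines of plain Python. This is my own illustration, not code from his comment; the function names, binning scheme, and fee parameter are all assumptions.

```python
def calibration_curve(prices, outcomes, n_bins=10):
    """Accuracy check: bucket contracts by their price at some time t
    before expiration, then compare each bucket's average price with
    the empirical frequency of the event. A perfectly calibrated
    market puts every (price, frequency) pair on the diagonal
    (contracts trading at 0.6 resolve 'yes' 60% of the time, etc.)."""
    buckets = [[] for _ in range(n_bins)]
    for p, won in zip(prices, outcomes):
        i = min(int(p * n_bins), n_bins - 1)  # clamp p == 1.0 into last bin
        buckets[i].append((p, won))
    return [
        (sum(p for p, _ in b) / len(b), sum(w for _, w in b) / len(b))
        for b in buckets if b
    ]

def arbitrage_gap(prices, fee=0.0):
    """One efficiency check: prices of mutually exclusive, exhaustive
    outcomes should sum to 1. A sum below 1 means buying one share of
    every outcome locks in a riskless profit; above 1, selling the
    full set does. Returns the profit left after fees (0 if none)."""
    gap = abs(sum(prices) - 1.0)
    return max(0.0, gap - fee)
```

Systematic deviations of the calibration pairs from the diagonal, such as the favorite-longshot bias, are exactly what the calibration function Ipeirotis proposes would then correct for.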

Better Pricing for Tournament Prediction Markets

No Gravatar

Last year while working out a few thoughts on arbitrage opportunities in basketball tournament prediction markets at Inkling, it occurred to me that the Inkling pricing mechanism was just a little bit off for such applications. The question is whether something better can be done. An answer comes from the folks at Yahoo Research: yes.

Inkling’s markets come in a couple of flavors, so far as I know all using an automated market maker based on a logarithmic market scoring rule (LMSR). In the multi-outcome case – for example, a market to pick the winner of a 65-team single elimination tournament – the market ensures that all prices sum to exactly 100. If a purchase of team A shares causes its share price to increase by 5, then the prices of all 64 other team shares will decrease by a total of 5.

The logic of the LMSR doesn’t tell you exactly how to redistribute the counter-balancing price decreases. In Inkling’s case they appear to redistribute the counter-balancing price movements in proportion to each team’s previous share price (so, for example, a team with an initial price of 10 would decrease twice as much as a team with a previous price of 5). While for generic multi-outcome prediction markets this approach seems reasonable, it doesn’t seem right for a tournament structure. (I raised this point in a comment posted here at Midas Oracle last September, and responses in that comment thread by David Pennock and Chris Hibbert were helpful.)
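
For reference, the proportional redistribution Inkling appears to use is exactly what the basic LMSR price formula produces on its own. A minimal sketch (the liquidity parameter b and the share quantities are arbitrary choices of mine):

```python
import math

def lmsr_prices(q, b=100.0):
    """Instantaneous LMSR prices: p_i = exp(q_i / b) / sum_j exp(q_j / b).
    The prices across all outcomes always sum to exactly 1."""
    m = max(q)  # subtract the max before exponentiating, for numerical stability
    w = [math.exp((qi - m) / b) for qi in q]
    s = sum(w)
    return [wi / s for wi in w]

q = [0.0, 0.0, 0.0, 0.0]      # outstanding shares in a 4-outcome market
before = lmsr_prices(q)       # uniform prices: 0.25 each
q[0] += 20.0                  # a trader buys 20 shares of outcome 0
after = lmsr_prices(q)
# outcome 0's price rises; every untraded outcome's price scales down
# by the same factor, so the decreases are proportional to the
# previous prices, and the new prices still sum to 1
```

In this generic rule a team priced at 10 falls twice as much as a team priced at 5, which matches the Inkling behavior described above; the point of the post is that a tournament-aware rule should instead concentrate the adjustment on logically related teams.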

The problem arises for pricing tournament markets because the tournament structure imposes certain relationships between teams that the generic pricing rule ignores. Incorporating the structure into the price rule in principle seems like the way to go. Robin Hanson, in his original articles on the LMSR, suggests a Bayes net could be used in such cases. Now three scientists at Yahoo Research have shown this approach works.

In “Pricing Combinatorial Markets For Tournaments,” Yiling Chen, Sharad Goel and David Pennock demonstrate that the pricing problem involved in running a LMSR-based combinatorial market for tournaments is computationally tractable so long as the shares are defined in a particular manner. In the abstract the authors report, “This is the first example of a tractable market-maker driven combinatorial market.”

An introduction to the broader research effort at Yahoo describes the “Bracketology” project in a less technical manner:

Fantasy stock market games are all the rage with Internet users…. Though many types of exchanges abound, they all operate in a similar fashion.

For the most part, each bet is managed independently, even when the bets are logically related. For example, picking Duke to win the final game of the NCAA college basketball tournament in your online office pool will not change the odds of Duke winning any of its earlier round games, even though that pick implies that Duke will have had to win all of those games to get to the finals.

This approach struck the Yahoo! Research team of Yiling Chen, Sharad Goel, George Levchenko, David Pennock and Daniel Reeves as fundamentally flawed. In a research project called “Bracketology,” they set about to create a “combinatorial market” that spreads information appropriately across logically related bets.…

In a standard market design, there are only about 400 possible betting options for the 63-game [sic] NCAA basketball tournament. But in a combinatorial market, where many more combinations are possible, the number of potential combinations is billions of billions. “That’s why you’ll never see anyone get every game right,” says Goel.…

At its core, the Bracketology project is about using a combinatorial approach to aggregate opinions in a more efficient manner. “I view it as collaborative problem solving,” Goel explains. “This kind of market collects lots of opinions from lots of people who have lots of information sources, in order to accurately determine the perceived likelihood of an event.”
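
For scale, the “billions of billions” figure in the quote can be reproduced directly from the bracket structure, using the 63-game count the Yahoo write-up gives:

```python
games = 63                 # games in a single-elimination bracket
brackets = 2 ** games      # each game can go one of two ways
print(brackets)            # prints 9223372036854775808, about 9.2 quintillion
```

That is why, as Goel says, nobody ever gets every game right: a standard market quotes only the few hundred team-level contracts, while the combinatorial market is implicitly pricing all 2^63 full brackets.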

Now that they know they can manage a 65-team single elimination tournament, I wonder about more complicated tournament structures. For example, how about a prediction market asking which Major League Baseball teams will reach the playoffs? Eight teams total advance, three division leaders and a wild-card team from the National League and the same from the American League. The wild-card team is the team with the best overall record in the league excepting the three division winners.

In principle the MLB case seems doable, though it would be a lot more complicated than a mere 65-team tournament that has only billions of billions of possible outcomes.

[NOTE: A longer version of this post appeared at Knowledge Problem as “At the intersection of prediction markets and basketball tournaments.”]

Robin Hanson’s concept of… Info Value

Robin Hanson:

Info Value = the added accuracy the markets provide relative to other mechanisms, times the value that accuracy can give in improved decisions, minus the cost of maintaining the markets, relative to the cost of other mechanisms.

A highly accurate market has little value if other mechanisms can provide similar accuracy at a lower cost, or if few substantial decisions are influenced by accurate forecasts on its topic.

Wow, great formula. [BTW, I have slightly edited RH’s first sentence.]
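
Restated as a toy calculation (every number below is hypothetical, and the function name is mine):

```python
def info_value(added_accuracy, value_per_unit_accuracy, market_cost, alt_cost):
    """Hanson's formula: (added accuracy relative to other mechanisms)
    times (the value that accuracy yields in improved decisions),
    minus the cost of the markets relative to other mechanisms."""
    return added_accuracy * value_per_unit_accuracy - (market_cost - alt_cost)

# hypothetical: 5 points of extra accuracy, each worth $10k in better
# decisions; the market costs $50k to run vs. $30k for the alternative
print(info_value(5, 10_000, 50_000, 30_000))  # prints 30000
```

A negative result captures Hanson’s caveat: a highly accurate market is worth nothing if a cheaper mechanism gets close, or if no substantial decision hinges on the forecast.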

I’m sure Mike Giberson will write another blog post for Midas Oracle about that formula, all that for free. Crowd-sourcing works for me. :-D

Rushkoff on Crowd Sourcing

Douglas Rushkoff answering this year’s Edge question:

The Internet. I thought that it would change people. I thought it would allow us to build a new world through which we could model new behaviors, values, and relationships. … For now, at least, it’s turned out to be different. … The open source ethos has been reinterpreted through the lens of corporatism as “crowd sourcing” – meaning just another way to get people to do work for no compensation.

Unfortunately, that’s close to the truth for most play-money prediction market business plans.