Why CrowdCast ditched Robin Hanson's MSR as the engine of its IAM software



Leslie Fine of CrowdCast:


As Emile points out, in 2003 I started experimenting with (and empirically validating) alternatives to the traditional stock-market metaphor that would be more viable in corporate settings. We found that the confusion about, and lack of interest in, the usual fare led to a death spiral of disuse and inaccuracy. BRAIN was a first stake in the ground in prediction market mechanism design with usability as a fundamental premise.

When I joined Crowdcast (then Xpree) in August of 2008, Mat and the team already recognized the confusion around, and consequent poor adoption of, the MSR mechanism. The number of messages I fielded in my first month here asking me to explain pricing, shorting, how to make money, etc., was astounding. We all knew that we had to start from scratch and rebuild a mechanism that was easy to use, expressive both in the questions one can ask and the message space in which one can answer, and engaging for users. We have abandoned the MSR in favor of a new method that users are already finding much simpler, and that requires less participation and sophistication than the usual stock-market analogy.

I wish I could go into more detail. However, we need to keep a little bit of a lid on things for our upcoming launch. I can only beg your patience a little while longer, and I hope you will judge our offering worth the wait.


Nota Bene: IAM = information aggregation mechanism

UPDATE: They are out with their new collective forecasting mechanism.


Nigel Eccles (the CEO of HubDub) and Robin Hanson (the inventor of MSR) have some explaining to do about the extreme zigzagging of the Barack Obama event derivative (in blue on this static compound chart). Look at the right end of the chart.



Nigel Eccles:

There was a bug in that chart which is now fixed. However the excess volatility is still there. The problem is that our early markets were created with a liquidity parameter which was too low. That is fixed with more recent markets. However we are also looking at modifying the MSR in some significant ways.
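Eccles's point about the liquidity parameter is easy to see in the textbook LMSR price formula, where prices are a softmax of outstanding share quantities scaled by the liquidity parameter b: the smaller b is, the further a fixed-size trade moves the price. A minimal Python sketch (the trade sizes and b values here are illustrative, not HubDub's actual settings):

```python
import math

def lmsr_prices(q, b):
    """Instantaneous LMSR prices: softmax of share quantities q scaled by b."""
    m = max(x / b for x in q)  # shift by the max for numerical stability
    exps = [math.exp(x / b - m) for x in q]
    total = sum(exps)
    return [e / total for e in exps]

# Price impact of the same 10-share purchase in a two-outcome market,
# under a small and a large liquidity parameter.
for b in (10, 100):
    before = lmsr_prices([0, 0], b)[0]
    after = lmsr_prices([10, 0], b)[0]
    print(f"b={b}: price moves {before:.3f} -> {after:.3f}")
```

With b=10 the trade swings the price from 0.50 to about 0.73; with b=100 it barely reaches 0.53, which is why raising the liquidity parameter damps the volatility Eccles describes.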


– the latest InTrade predictions

– Emile Servan-Schreiber's post on market arbitrage



If US laws were gambling compatible, would a FaceBook betting application solve the chicken-and-egg problem that any brand-new prediction exchange is facing? (Short sellers will come to the exchange only if there are enough backers, who will come only if there is enough liquidity, etc.) Could MySpace, FaceBook and LinkedIn (who have registered people by the millions, already) provide a starting launch for future prediction exchanges?

YooPick for FaceBook

YooPick @ FaceBook

PS: Yet another hit for Robin Hanson's MSR. :-D

The best presentations from the world's best conference on enterprise prediction markets, ever


Awesome slides in bold.

Brought to you by Koleman Strumpf (circa November 2007):

Henry Berg, Microsoft [slides]
Discussant: Robin Hanson (George Mason Department of Economics) [slides]

Christina Ann LaComb, GE (The Imagination Market; abstract is free, text is gated) [slides]
Discussant: Marco Ottaviani (Kellogg School of Management, Management and Strategy) [slides]

Dawn Keller, Best Buy (Best Buy’s TAGTRADE Market) [slides]

Bo Cowgill, Google (Putting Crowd Wisdom to Work) [slides]

Jim Lavoie, Co-Founder and CEO, Rite-Solutions [slides]

David Perry, Co-Founder and President, Consensus Point [slides]

Mat Fogarty, Founder and CEO, Xpree Inc [slides]

Tom W. Bell, Chapman University School of Law [slides]

Better Pricing for Tournament Prediction Markets


Last year while working out a few thoughts on arbitrage opportunities in basketball tournament prediction markets at Inkling, it occurred to me that the Inkling pricing mechanism was just a little bit off for such applications. The question is whether something better can be done. An answer comes from the folks at Yahoo Research: yes.

Inkling’s markets come in a couple of flavors, so far as I know all using an automated market maker based on a logarithmic market scoring rule (LMSR). In the multi-outcome case – for example, a market to pick the winner of a 65-team single elimination tournament – the market ensures that all prices sum to exactly 100. If a purchase of team A shares causes its share price to increase by 5, then the prices of all 64 other team shares will decrease by a total of 5.

The logic of the LMSR doesn’t tell you exactly how to redistribute the counter-balancing price decreases. In Inkling’s case they appear to redistribute the counter-balancing price movements in proportion to each team’s previous share price (so, for example, a team with an initial price of 10 would decrease twice as much as a team with a previous price of 5). While for generic multi-outcome prediction markets this approach seems reasonable, it doesn’t seem right for a tournament structure. (I raised this point in a comment posted here at Midas Oracle last September, and responses in that comment thread by David Pennock and Chris Hibbert were helpful.)
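One way Inkling's apparent proportional redistribution can arise is directly from the textbook LMSR price formula (a softmax over outstanding share quantities): when one outcome's quantity rises, every other outcome's price shrinks by the same multiplicative factor, i.e. in proportion to its previous price. A short Python sketch with made-up quantities and liquidity parameter:

```python
import math

def lmsr_prices(q, b=100.0):
    """Textbook LMSR prices: softmax of share quantities q scaled by b."""
    m = max(q) / b  # shift by the max for numerical stability
    exps = [math.exp(x / b - m) for x in q]
    total = sum(exps)
    return [e / total for e in exps]

# Three-outcome market with unequal prices (quantities are illustrative).
q = [100.0, 30.0, 0.0]
before = lmsr_prices(q)
q[0] += 20.0  # buy shares of outcome 0
after = lmsr_prices(q)

# Prices sum to 1 before and after; the two untouched outcomes
# fall by the same multiplicative factor (equal ratios).
print(sum(before), sum(after))
print(after[1] / before[1], after[2] / before[2])
```

Under this formula the higher-priced outcome loses more in absolute terms, matching the "price 10 drops twice as much as price 5" behavior described above.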

The problem arises for pricing tournament markets because the tournament structure imposes certain relationships between teams that the generic pricing rule ignores. Incorporating the structure into the price rule in principle seems like the way to go. Robin Hanson, in his original articles on the LMSR, suggests a Bayes net could be used in such cases. Now three scientists at Yahoo Research have shown this approach works.

In “Pricing Combinatorial Markets For Tournaments,” Yiling Chen, Sharad Goel and David Pennock demonstrate that the pricing problem involved in running a LMSR-based combinatorial market for tournaments is computationally tractable so long as the shares are defined in a particular manner. In the abstract the authors report, “This is the first example of a tractable market-maker driven combinatorial market.”

An introduction to the broader research effort at Yahoo describes the “Bracketology” project in a less technical manner:

Fantasy stock market games are all the rage with Internet users…. Though many types of exchanges abound, they all operate in a similar fashion.

For the most part, each bet is managed independently, even when the bets are logically related. For example, picking Duke to win the final game of the NCAA college basketball tournament in your online office pool will not change the odds of Duke winning any of its earlier round games, even though that pick implies that Duke will have had to win all of those games to get to the finals.

This approach struck the Yahoo! Research team of Yiling Chen, Sharad Goel, George Levchenko, David Pennock and Daniel Reeves as fundamentally flawed. In a research project called “Bracketology,” they set about to create a “combinatorial market” that spreads information appropriately across logically related bets.…

In a standard market design, there are only about 400 possible betting options for the 63-game [sic] NCAA basketball tournament. But in a combinatorial market, where many more combinations are possible, the number of potential combinations is billions of billions. “That’s why you’ll never see anyone get every game right,” says Goel.…

At its core, the Bracketology project is about using a combinatorial approach to aggregate opinions in a more efficient manner. “I view it as collaborative problem solving,” Goel explains. “This kind of market collects lots of opinions from lots of people who have lots of information sources, in order to accurately determine the perceived likelihood of an event.”
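The "billions of billions" figure from the excerpt is just 2^63: each of the 63 games has two possible winners, so a complete bracket is one of 2^63 outcomes. A quick back-of-the-envelope check (the excerpt's "about 400 betting options" presumably counts more option types than the per-game sides tallied here):

```python
# A 64-team single-elimination bracket has 63 games (the excerpt's
# "63-game" figure; the NCAA field of 65 adds one play-in game).
games = 63

# Standard design: each game is an independent two-sided bet.
independent_bets = 2 * games

# Combinatorial design: every complete bracket is a distinct outcome.
complete_brackets = 2 ** games

print(independent_bets)   # 126
print(complete_brackets)  # about 9.2 quintillion, i.e. billions of billions
```

At roughly 9.2 x 10^18 possible brackets, the chance of any one entrant picking every game correctly is negligible, which is Goel's point.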

Now that they know they can manage a 65-team single elimination tournament, I wonder about more complicated tournament structures. For example, how about a prediction market asking which Major League Baseball teams will reach the playoffs? Eight teams advance in total: three division leaders plus a wild-card team from the National League, and the same from the American League. The wild-card team is the team with the best overall record in the league excepting the three division winners.

In principle the MLB case seems doable, though it would be a lot more complicated than a mere 65-team tournament that has only billions of billions of possible outcomes.

[NOTE: A longer version of this post appeared at Knowledge Problem as “At the intersection of prediction markets and basketball tournaments.”]