[…] Prediction markets are gaining interest because the Internet allows greater worldwide access to them, as well as to the ever-increasing amount of data stored on any topic imaginable (which theoretically allows participants to make more informed predictions, individually and in aggregate). These factors, plus the enormous amount of computing power that will make it possible to instantly calculate exponentially small odds, are stimulating new research on advanced computational models in prediction markets. These models could be capable of analyzing entire events such as the annual NCAA collegiate basketball tournament, which begins a 63-game schedule with 2^63 possible outcomes by the tournament's end. […]
Growing opportunities in internal private-sector prediction markets are also revealing divergent philosophies among the markets' designers. Many of the public markets feature price-adjustment algorithms built around discrete multiple-choice outcomes, such as which candidate will win an election or whether a product will launch in month x, y, or z. […]
IEM steering committee member Thomas Rietz, a professor of finance at the university, says the aggregate zero-risk design of the IEM allows the markets to perfectly reflect the aggregate forecast opinions of its participants. By aggregate zero-risk, Rietz explains that when a trader enters a particular bilateral (either/or) market, he or she must buy one share of each choice, called a bundle, for a total cost of $1. If the trader holds the bundle until the market concludes, there is neither profit nor loss. If the trader guesses the outcome successfully and sells the losing unit of the bundle to another trader while the market is running, he or she recovers the original $1 bet plus whatever price was agreed upon for the losing share. If the trader instead holds onto the loser and sells the eventual winner, the held share pays nothing at market close, so the trader keeps only the winner's sale price against the original $1 cost. At any given time, the number of eventual winning shares and losing shares is equal and held by the traders. So, the university bears no counterparty risk and there is no need to provide hedging margins that irrationally affect outcomes. "The price you would be willing to buy or sell at today is your expectation of its value in the future; the prices can be directly interpreted as a forecast," Rietz says. "In ordinary futures markets, there is a long-standing debate, going back to John Maynard Keynes in the 1930s, over whether prices can legitimately be used as forecasts, and it all hinges on whether or not people demand a return or face a risk in aggregate when they're investing in these contracts." […]
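The bundle mechanics Rietz describes reduce to simple arithmetic. Here is a minimal sketch (my own toy model for illustration, not IEM code) of the payoff cases: a $1 bundle holds one share of each outcome, the winner pays $1 at close, and the loser pays $0.

```python
# Illustrative model of the IEM "unit bundle" arithmetic described above.
# A bundle costs $1 and contains one share of each of the two outcomes;
# at market close the winning share pays $1 and the losing share pays $0.

def trader_profit(sold_share_price, sold_share_wins, bundle_cost=1.0):
    """Net profit for a trader who buys one bundle, sells one share while
    the market runs, and holds the other share to payout."""
    held_share_payout = 0.0 if sold_share_wins else 1.0
    return sold_share_price + held_share_payout - bundle_cost

# Sell the eventual loser for $0.30, hold the winner: profit of $0.30.
# Sell the eventual winner for $0.70, hold the loser: loss of $0.30.
# Hold the whole bundle to the end: $1 payout against $1 cost, break even.
```

Because every dollar paid out to a winning share is matched by a dollar lost on its paired losing share, the traders' gains and losses net to zero in aggregate, which is the sense in which the university bears no risk.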
One enduring research problem in combinatorial markets is mitigating the tendency of a virtually unlimited spectrum of outcomes to create markets so thin in trades that they fail their purpose of aggregating information. In such markets, which might resemble an enterprise prediction market in that there are not enough participants to provide a statistically valid spread of opinion, Pennock says a market-maker algorithm might serve as a price setter within widely acceptable limits. "I believe that approximation algorithms will be fine for the market maker, because people don't really care about making bets on things that are incredibly unlikely, like a 10^-6 chance," Pennock says. "But as long as you're betting on something with a 10% chance of happening, we'll be able to approximate pretty quickly with a market-maker price." […]
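The excerpt does not name a specific market-maker algorithm, but Hanson's logarithmic market scoring rule (LMSR) is a standard automated market maker for exactly this thin-market role, so a purely illustrative sketch may help. The liquidity parameter `b` and the starting quantities here are my own assumptions:

```python
import math

# Minimal LMSR (logarithmic market scoring rule) automated market maker.
# Illustrative only: the article does not say which algorithm is intended.

def lmsr_cost(q, b=100.0):
    """Cost function C(q) = b * ln(sum_i exp(q_i / b)), q = outstanding shares."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_price(q, i, b=100.0):
    """Instantaneous price of outcome i; prices across outcomes sum to 1."""
    denom = sum(math.exp(qj / b) for qj in q)
    return math.exp(q[i] / b) / denom

def buy_cost(q, i, shares, b=100.0):
    """What a trader pays the market maker for `shares` of outcome i."""
    q_new = list(q)
    q_new[i] += shares
    return lmsr_cost(q_new, b) - lmsr_cost(q, b)

q = [0.0, 0.0]                 # fresh two-outcome market
p0 = lmsr_price(q, 0)          # starts at 0.5 with no trades
cost = buy_cost(q, 0, 10.0)    # buying outcome 0 pushes its price up
```

The market maker always quotes a price, so even a lone trader can move the market toward his or her belief; the combinatorial-outcome research Pennock alludes to concerns computing such prices quickly when the outcome space is exponentially large.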
David Pennock's website and blog
I've developed a combinatorial betting tech that lets a few or many users edit an always-coherent joint probability distribution over all value combinations of some set of base variables. Far-future base variables might include the years of important tech milestones, population, wealth, or mortality values at particular future dates, etc. Each user edit would be backed by a bet, a bet invested in assets paying competitive interest/returns. This combo bet tech worked well in published lab tests, several firms have used it, and I'm now working with Consensus Point to deliver a robust commercial implementation. More on the tech here, here, and here.
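To make "editing an always-coherent joint probability distribution" concrete, here is a rough sketch of my own (not Hanson's actual implementation): store the joint explicitly over a few binary base variables, and let each edit move one variable's marginal while rescaling both halves of the joint so it still sums to one and the conditional structure among the other variables is preserved.

```python
from itertools import product

# Toy model of coherent joint-distribution editing; real combinatorial
# systems avoid this explicit 2^n table, which is exactly where the
# approximation research mentioned above comes in.

def uniform_joint(n):
    """Uniform joint distribution over n binary variables."""
    states = list(product([0, 1], repeat=n))
    return {s: 1.0 / len(states) for s in states}

def marginal(joint, var, value=1):
    return sum(p for s, p in joint.items() if s[var] == value)

def edit_marginal(joint, var, new_p):
    """Move P(var=1) to new_p (old marginal must be strictly between 0 and 1);
    each half of the joint is rescaled, so the result still sums to 1."""
    old_p = marginal(joint, var)
    out = {}
    for s, p in joint.items():
        scale = new_p / old_p if s[var] == 1 else (1 - new_p) / (1 - old_p)
        out[s] = p * scale
    return out

j = uniform_joint(3)
j = edit_marginal(j, 0, 0.8)   # one user edit, backed by a bet
```

In the betting version, the edit is scored against the eventually observed outcome, so a user profits only to the extent the edit improved the distribution.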
See the explainer from David Pennock, which we will link to again later on.
I previously wrote that that San Francisco vendor conference is not worth the $400 they are asking. However, in all honesty to my readers, I should note that they have just made one (small) change that goes in the right direction. World's #1 prediction market researcher Robin Hanson is now scheduled to talk about combinatorial prediction markets (a very hot topic these days), instead of how to quantify prediction market value (too theoretical an issue for business people).
A vendor conference with no editorial line is unlikely to be a source of the unvarnished truth about enterprise prediction markets. Vendors (four will be present) do oversell.