Do businesses need enterprise prediction markets?

Competitive advantage can be obtained either by differentiation or by low cost. Enterprise prediction markets certainly don't foster the innovation process, and they are surely not the cheapest forecasting tool. EPMs require special software, the hiring of consultants, the participation of all, and a budget for the prizes. EPMs are costly, and they take time to deliver. As of today, I can't see why any sane CEO should be implementing EPMs as decision-making support. On the contrary, I would say that any sane CEO should fire any employee who tried to sneak in internal prediction markets, and should dismantle any existing corporate prediction exchange. Right now.

It has been suggested that EPMs helped Best Buy get it right on the 'HD-DVD versus Blu-Ray' issue. It's a boatload of bullsh*t. I know a lot about technology intelligence. It should be done by a smart and curious operator. There is no need for enterprise prediction markets to do this task. The tools you need consist of a bunch of IT news aggregators and a good search engine. Consider this:

The Inevitable Move Of iTunes To The Cloud

In the 'cloud' piece above, there are facts and there are speculations. You get much more technology intelligence from reading the 'cloud' piece above than you would from a crude, plain and simple prediction market. Gimme a break with EPMs. They make no sense at all.

Contrast EPMs (which are costly) with public prediction markets (à la InTrade or BetFair), where probabilistic predictions are offered for free. That makes all the difference, because the added accuracy brought by prediction markets is very small. Market-generated odds are handed out for free to journalists, and still, few of them take the bait. The market-powered crystal ball is worth peanuts.

The reason CEOs are paid millions is that only a small percentage of business administration managers have the ability to cut through the nonsense and the balls to cut the cost of the nonsense. It is a rare skill. I am calling on CEOs to end EPMs. Right now.

Prediction markets: sticking to the letter of the contract versus interpreting intent

Chris Hibbert (of Zocalo):

I disagree, Chris. Much experience on FX has shown that interesting questions (those that aren’t routine repetitions of previous questions) often result in realities that diverge from the obvious expectations of nearly everyone involved in describing the possibilities. In those situations, we’ve found that trying to interpret intent leads to more confusion than sticking to the letter of the question as asked.

If a [prediction market] sticks to its written description of what the claims mean, then careful readers are rewarded, and they learn that they have a good chance to predict how the judge will interpret the question and events in the world. If questions are determined based on “intent”, then everyone has to spend time deciding which aspect of the question the judge will decide was more important, when reality decides not to conform to the question’s expectations.

Sometimes (as you argue was the case with the North Korea question) the result is surprising and disappointing, but choosing the other approach leads to much less participation as people who see that something surprising is preparing to happen or has happened back out of their bets rather than waiting to find out what the judge decides is important. I’m much happier when the participants spend their time figuring out what will happen in the world, rather than when they have to spend their time predicting how the judge will react. Strict construction gives us a predictable world.

See also Jason Ruspini's comment on the same topic…

CrowdCast = market mechanism = binary spreads with a market maker

Leslie Fine (CrowdCast Chief Scientist) to me:

Actually, our mechanism is a market, it's just not a stock market. We use an automated market maker to efficiently price every bet, adjust crowd beliefs, and price an interim sell. In essence, participants trade binary spreads with the market maker.

Because our new version was not yet market-ready, I did not enter the markets vs. non-markets debate when you were having it some months ago. However, among other reasons, we avoid collective forecasting because it is too similar to collaborative forecasting, which is key in supply chain. Honestly, when all is said and done, our clients care not what the mechanism is. They care that we can efficiently gather team intelligence and translate it into actionable business intelligence. That is our mission.

CrowdCast website

Previously: CrowdCast = Collective Forecasting = Collective Intelligence That Predicts

Can prediction markets help improve economic forecasts?

At VOX, David Hendry and James Reade examine the question, "How should we make economic forecasts?" Among the ideas discussed is whether prediction markets could be used to improve economic forecasting. It is an interesting suggestion that seems worthy of additional exploration, but the authors don't go too deep here. Instead, they assert that "prediction markets can be viewed as a form of … model averaging," and then drift into a discussion of forecast averaging. I'm not sure that forecast averaging is a good way to look at prediction markets.

Here is what they say:

Prediction markets can be viewed as a form of forecast pooling or model averaging, a common forecast technique (Bates and Granger 1969, Hoeting et al 1999 and Stock and Watson 2004). That is, forecasts from different models are combined to produce a single forecast. In prediction markets, each market participant makes a forecast based on his or her own forecasting model, and the market price is some function of each of these individual forecasts.

Since the "prediction" implied by a prediction market is set by the marginal transaction, it depends not at all on the distribution of earlier trades, nor on the valuations of parties priced out of the market at the current price.

For example, consider two event markets: in the first, 999 contracts trade at $0.50 and the 1,000th and final trade is at $0.75; in the second, 999 contracts trade at $0.76 and the 1,000th and final trade is at $0.75. In the typical interpretation of prediction markets, the event is "predicted" to occur with a 75 percent probability in both cases. However, averaging the different predictions doesn't get you that result.

(Well, strictly speaking, the market price is "some function" of the prices, namely one in which all trades but the last are weighted zero and the last trade is weighted one. You can call this "averaging," but that isn't the most useful explanation of the function.)
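To make the arithmetic concrete, here is a quick sketch of the two hypothetical markets above, comparing the last-trade "prediction" with a naive average over all trades:

```python
# Two hypothetical event markets from the example above.
market_1 = [0.50] * 999 + [0.75]   # 999 trades at $0.50, final trade at $0.75
market_2 = [0.76] * 999 + [0.75]   # 999 trades at $0.76, final trade at $0.75

for name, trades in (("market 1", market_1), ("market 2", market_2)):
    last_trade = trades[-1]                  # what the market "predicts"
    average = sum(trades) / len(trades)      # naive forecast averaging
    print(f"{name}: last trade = {last_trade:.2f}, average = {average:.4f}")
```

Both markets "predict" 75 percent off the marginal transaction, while the averages (about 0.50 and 0.76) tell two very different stories — which is the point: the market price is not an average of the trades.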

I'm not arguing that forecast averaging might not be a good idea in many situations, just that averaging doesn't seem like a good way to explain what a prediction market is doing.

The Accuracy Of Prediction Markets

A Lesson in Prediction Markets from the Game of Craps – by Paul Hewitt

Why Public Prediction Markets Fail – by Paul Hewitt

Both articles are required reading for Jed Christiansen and Panos Ipeirotis (alias "Prof Panos"). :-D

Why CrowdCast ditched Robin Hanson's MSR as the engine of its IAM software

Leslie Fine of CrowdCast:

Chris,

As Emile points out, in 2003 I started experimenting with (and empirically validating) alternatives to the traditional stock-market metaphor that will be more viable in corporate settings. We found the level of confusion and lack of interest in the usual fare led to a death spiral of disuse and inaccuracy. BRAIN was a first stake in the ground in prediction market mechanism design with usability as a fundamental premise.

When I joined Crowdcast (then Xpree) in August of 2008, Mat and the team already recognized the confusion around, and consequent poor adoption of, the MSR mechanism. The number of messages I fielded in my first month here asking me to explain pricing, shorting, how to make money, etc. was astounding. We all knew that we had to start from scratch, and rebuild a mechanism that was easy to use, expressive both in terms of the question one can ask and the message space in which one can answer, and provided a high level of user engagement. We have abandoned the MSR in favor of a new method that users are already finding much simpler and that requires a lower level of participation and sophistication than the usual stock market analogy.

I wish I could go into more detail. However, we need to keep a little bit of a lid on things for our upcoming launch. I can only beg your patience a little while longer, and I hope you will judge our offering worth the wait.

Regards,
Leslie

Nota Bene: IAM = information aggregation mechanism
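For context on the mechanism being abandoned: Hanson's Market Scoring Rule, in its common logarithmic form (the LMSR), uses a cost function so that an automated market maker can always quote a price without needing a counterparty. Below is a minimal sketch for a binary claim — this is the textbook LMSR, not CrowdCast's replacement mechanism, and the parameter values are illustrative:

```python
import math

def lmsr_cost(q_yes, q_no, b=100.0):
    """LMSR cost function: C(q) = b * ln(exp(q_yes/b) + exp(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price_yes(q_yes, q_no, b=100.0):
    """Instantaneous price of a YES share (also its implied probability)."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

# A fresh market starts at 50/50; buying shares moves the price.
# The cost of buying 10 YES shares is the change in the cost function.
cost_of_10_yes = lmsr_cost(10, 0) - lmsr_cost(0, 0)
new_price = lmsr_price_yes(10, 0)
```

The liquidity parameter `b` controls how far each trade moves the price; the market maker's worst-case loss on a binary market is bounded by `b * ln(2)`. The pricing is mechanical, which is exactly what Leslie Fine says confused corporate users: shares, shorting, and "how do I make money" all have to be explained.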

UPDATE: They are out with their new collective forecasting mechanism.

Flawed New Hampshire polls = Inaccurate New Hampshire prediction markets

The most comprehensive analysis ever conducted of presidential primary polls:

"a handful of methodological missteps and miscalculations combined to undermine the accuracy of predictions about presidential primary winners in New Hampshire and three other states."

Via Mister the Great Research Scientist David Pennock, who is an indispensable element of the field of prediction markets.

As I have blogged many times, prediction markets react to polls… See the addendum below… [UPDATE: See also Jed's comment.] Prediction markets should not be hyped as crystal balls, but simply as an objective and continuous way to aggregate expectations. So, if you think about it, their social utility is much smaller than what the advocates of the "idea futures", "wisdom of crowds", and "collective intelligence" concepts told us. Much, much, much, much smaller… They all make the mistake of putting accuracy forward. (By the way, somewhat related to that issue, please go read the dialog between Robin Hanson and Emile Servan-Schreiber.)

Addendum

California Institute of Technology economist Charles Plott:

What you're doing is collecting bits and pieces of information and aggregating it so we can watch it and understand what people know. People picked this up and called it the "wisdom of crowds" and other things, but a lot of that is just hype.

New Hampshire – The Democrats

The Hillary Clinton event derivative expired at 100.

[Charts: Dem NH Clinton, Dem NH Obama, Dem NH Edwards]

New Hampshire – The Republicans

The John McCain event derivative expired at 100.

[Charts: Rep NH McCain, Rep NH Romney, Rep NH Huckabee, Rep NH Giuliani]

Blogging Against The Hype

[Chart: Gartner hype cycle]

I have been blogging a lot about the damage done by some Ivory Tower economics professors and some commercial practitioners who exaggerate the benefits of prediction markets. (Some people are not very happy with what I said. :-D ) The Gartner consultants have a word for that: "hype". Hyping is defined as the act of publicizing in an exaggerated and often misleading manner. The way Internet citizens can guard against hype is to read bloggers and journalists (whatever you call them) who publish high-quality reports and opinions about brand-new products and fresh startups. It is a difficult task. It requires solid expertise and a way to deflect commercial influence and pressure (e.g., from some professionals who think that bloggers shouldn't publish anything without their prior "consent"). If you want a role model for such an impartial journalist, I recommend looking at search engine expert Danny Sullivan of Search Engine Land. If you have two minutes, you could go there and scan his hype-bursting talking points.

Addendum:

[Chart: Gartner hype cycle, 2008]

The truth about (enterprise) prediction markets

Paul Hewitt:

[…] In virtually every case, the prediction market forecast is closer to the official HP forecast than it is to the actual outcome. Perhaps these markets are better at forecasting the forecast than they are at forecasting the outcome! Looking further into the results, while most of the predictions have a smaller error than the HP official forecasts, the differences are, in most cases, quite small. For example, in Event 3, the HP forecast error was 59.549% vs. 53.333% for the prediction market. They’re both really poor forecasts. To the decision-maker, the difference between these forecasts is not material.

There were eight markets that had HP official forecasts. In four of these (50%), the forecast error was greater than 25%. Even though only three of the prediction market forecast errors were greater than 25%, this can hardly be a ringing endorsement for the accuracy of prediction markets (at least in this study). […]

To the despair of the Nashville imbecile, Paul's analysis is quite similar to mine (circa February 14, 2009):

The prediction market technology is not a disruptive technology, and the social utility of the prediction markets is marginal. Number one, the aggregated information has value only for the totally uninformed people (a group that comprises those who overly obsess with prediction markets and have a narrow cultural universe). Number two, the added accuracy (if any) is minute, and, anyway, doesn't fill up the gap between expectations and omniscience (which is how people judge forecasters). In our view, the social utility of the prediction markets lies in efficiency, not in accuracy. In complicated situations, the prediction markets integrate expectations (informed by facts and expertise) much faster than the mass media do. Their efficiency is their uniqueness. It is their velocity that we should put to work.

Prediction markets are not a disruptive technology, but merely another means of forecasting.

Go read Paul's analysis in full.

I would like to add two things to Paul's conclusion:

  1. We have been lied to about the real value of prediction markets. Part of the "field of prediction markets" (a term that encompasses more people and organizations than just the prediction market industry) is made up of liars who live by the hype and will die by the hype.
  2. Prediction markets have value in specific cases where it can be demonstrated that an information aggregation mechanism is the appropriate method to put to work in those cases (and not in others). Neither the Ivory Tower economic canaries nor the self-described prediction market "practitioners" have done this job.

The fact that Inkling needs five bullet points and a graph to explain short selling is a good indication it’s too complicated.

That was Jason Trost's comment.

But see, first, Chris Hibbert's comment:

My main complaint about using the "short-selling" terminology in prediction markets is that it uses a term from finance that describes a complicated scenario to describe a simple scenario it doesn't apply to. In financial markets, short selling means that you accrue money in order to take on a conditional obligation. When you bet against a proposition (on InTrade, Foresight Exchange, or (I think) Inkling), you spend money and gain a conditional asset. In the prediction market case, you don't have any further obligation; there's no possibility of a margin call. The asset has a non-negative value.

I actually think the way NewsFutures describes binary outcomes is the simplest. They never talk about selling unless you already own the asset. If you don’t own any of the asset, you can either buy it, or click a button to see the opposite view, which you can also buy. They don’t have “yes” and “no”, they just have complementary wordings and titles for opposing outcomes.
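Hibbert's equivalence can be checked in a few lines. A hypothetical sketch: shorting a YES contract at price p (the finance framing) and buying the complementary NO contract at price 1 - p (the NewsFutures-style framing) produce identical net payoffs, but the second trader only ever holds a non-negative asset, so no margin call is possible:

```python
def short_yes_payoff(p, event_occurred):
    """Finance framing: sell a YES contract at price p; it settles at 1 if
    the event occurs, else 0.  Net payoff = p - settlement, which can be
    negative, hence the need for margin."""
    return p - (1.0 if event_occurred else 0.0)

def buy_no_payoff(p, event_occurred):
    """Complement framing: buy the NO contract at price (1 - p); it pays 1
    if the event does NOT occur.  The position is an asset worth 0 or 1,
    never a liability."""
    payout = 0.0 if event_occurred else 1.0
    return payout - (1.0 - p)

# The two framings give the same net result for any price and outcome.
for p in (0.25, 0.60, 0.90):
    for occurred in (True, False):
        assert abs(short_yes_payoff(p, occurred) - buy_no_payoff(p, occurred)) < 1e-12
```

Same economics, simpler mental model: you only ever buy something, which is exactly the NewsFutures presentation Hibbert praises.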

Go read all the comments there.