Boycott the $400 vendor conference on prediction markets.


I renew my call to boycott the $400 vendor conference on prediction markets. Don't pay $400 to listen to prediction market software vendors. (They should pay you $400, rather, to listen to their marketese.) They grossly exaggerate the usefulness of prediction markets (and of enterprise prediction markets, in particular). They don't have a single use case that demonstrates their usefulness. There is no way you will get any ROI out of those vendor conferences held in a phone booth.

Plus, the SF organizer of these conferences is a guy who has the detestable habit of hiding his identity behind many pseudonyms (a female secretary, a "legal assistant", a foundation director, etc.). This guy is a mythomaniac. Stay away.

Read Paul Hewitt's blog, instead. It is free, and it tells the truth.

Assessing the usefulness of enterprise prediction markets


Do you need to have experience in running an enterprise prediction exchange in order to assess the pertinence of enterprise prediction markets?

Paul Hewitt:

Hi Jed…

As for qualifications, I have been making business decisions for almost 30 years. I am a chartered accountant and a business owner. Starting in university and continuing to this day, I have been researching information needs for corporate decision making. As Chris points out, I’m not a salesperson for any of the software developers. In fact, if I have a bias, it is to be slightly in favour of prediction markets. That said, I still haven’t seen any convincing evidence that they work as promised by ANY of the vendors.

As for whether I have ever run or administered a prediction market, the answer is no. Does that mean I am not qualified to critique the cases that have been published? Hardly. You don’t have to run a PM to know that it is flawed. Those that do, end up trying to justify minuscule “improvements” in the accuracy of predictions. They also fail to consider the consistency of the predictions. Without this, EPMs will never catch on. Sorry, but that is just plain common sense.

The pilot cases that have been reported are pretty poor examples of prediction market successes. In almost every case, the participants were (at least mostly) the same ones that were involved with internal forecasting. The HP markets, yes, the Holy Grail of all prediction markets, merely showed that prediction markets are good at aggregating the information already aggregated by the company forecasters! They showed that PMs are only slightly better than other traditional methods – and mainly because of the bias reduction. Being slightly better is not good enough in the corporate world.

I think I bring a healthy skepticism to the assessment of prediction markets. I truly want to believe, but I need to be convinced. I am no evangelist, and there is no place for that in scientific research. Rather than condemn me for not administering a PM, why not address the real issues that arise from my analyses?

Paul Hewitt's blog

Previously: The truth about CrowdClarity’s extraordinary predictive power (which impresses Jed Christiansen so much)

The truth about CrowdClarity's extraordinary predictive power (which impresses Jed Christiansen so much)


Paul Hewitt:

At first blush, it appears that we finally have a bona fide prediction market success! If we're going to celebrate, I'd suggest Prosecco, not Champagne, however.

There are a number of reasons to be cautious. These represent only a couple of markets. We don't know why Urban Science people appear to be so adept at forecasting GM sales in turbulent times. There is no information on the CrowdClarity web site to indicate why the markets were successful, or how its mechanism might have played a role in the PM accuracy. I'm guessing that it would have been really easy to beat GM's forecasts in November, as they would likely have been even more biased than usual, mainly for political reasons. I'm not sure how Edmunds.com's forecasts may have been biased, or why their predictions were not accurate. Maybe they are not so good at predicting unless the market is fairly stable.

The CrowdClarity web site boasts that a few days after the markets were opened, the predictions were fairly close to the eventual outcome. This is a good thing, but, at this point it is not useful. No one knew, at that time, that those early predictions would turn out to be reasonably accurate. As a result, no one would have relied upon these early predictions to make decisions.

I'm even more skeptical of the company's contention that markets can be operated with as few as 13 participants. Here we go again, trying to fake diversity.

It is interesting that a prediction market comprised of participants outside of the subject company did generate more accurate predictions than GM insiders (biased) and Edmunds.com (experts). The question that needs to be answered is why. Clearly, Urban Science people did have access to better information, but why?

Unless we know why the prediction markets were successful at CrowdClarity, it is hard to get excited. There are too many examples of prediction markets that are not significantly better than traditional forecasting methods. This one could be a fluke.

I'll have more to say, soon, when I write about the prediction markets that were run at General Mills. There, the authors of the study found that prediction markets were no better than the company's internal forecasting process.

Paul Hewitt's analysis is more interesting than Jed Christiansen's naive take.

Paul Hewitt's blog

Next: Assessing the usefulness of enterprise prediction markets


Finally, a positive corporate prediction market case study… well, according to Jed Christiansen


Jed Christiansen:

To recap, the prediction market beat the official GM forecast (made at the beginning of the month) easily, which isn’t hugely surprising considering the myopic nature of internal forecasting. But the prediction market also beat the Edmunds.com forecast. This is particularly interesting, as Edmunds would have had the opportunity to review almost the entire month’s news and data before making their forecast at the end of the month. […]

Assume that even with three weeks' early warning Chevrolet was only able to save 10% of that gap, it's still $80 million in savings. Even if a corporate prediction market for a giant company like GM cost $200,000 a year, that would still be a return on investment of 40,000%. And again, that's in the Chevrolet division alone. […]
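The ROI arithmetic in the quote above can be checked with a quick back-of-the-envelope sketch. The $800 million gap is implied by the quote (10% of it being the $80 million saved); the $200,000 annual cost is Christiansen's own assumption:

```python
gap = 800_000_000          # implied gap between forecast and actual (in dollars)
savings = gap // 10        # 10% of the gap saved thanks to early warning: $80M
annual_cost = 200_000      # assumed yearly cost of a corporate prediction market

# Simple return-on-cost, computed the way the quote does (return / cost).
roi_percent = savings * 100 // annual_cost
print(f"{roi_percent:,}%")  # 40,000%
```

Note that this is return divided by cost; netting out the cost first would give 39,900%, which changes nothing about the order of magnitude.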

Make up your own mind by reading the whole piece.

Next: The truth about CrowdClarity’s extraordinary predictive power (which impresses Jed Christiansen so much)

Next: Assessing the usefulness of enterprise prediction markets

Do businesses need enterprise prediction markets?

Competitive advantage can be obtained either by differentiation or by low cost. Enterprise prediction markets certainly don't foster the innovation process, and they are surely not the cheapest forecasting tool. EPMs require special software, the hiring of consultant(s), the participation of all, and a budget for the prizes. EPMs are costly, and they take time to deliver. As of today, I can't see why any sane CEO would implement EPMs as a decision-making support. On the contrary, I would say that any sane CEO should fire any employee who tried to sneak in internal prediction markets, and should dismantle any existing corporate prediction exchange. Right now.

It has been suggested that EPMs helped Best Buy get it right on the 'HD-DVD versus Blu-Ray' issue. It's a boatload of bullsh*t. I know a lot about technology intelligence. It should be done by a smart and curious operator. There is no need for enterprise prediction markets to do that task. The tools you need are a bunch of IT news aggregators and a good search engine. Consider this:

The Inevitable Move Of iTunes To The Cloud

In the 'cloud' piece above, there are facts and there are speculations. You get much more technology intelligence from reading that piece than you would from a crude, plain and simple prediction market. Gimme a break with EPMs. They make no sense at all.

Contrast EPMs (which are costly) with public prediction markets (a la InTrade or BetFair), where probabilistic predictions are offered for free. That makes all the difference, because the added accuracy brought by prediction markets is very small. Market-generated odds are handed out for free to journalists, and still, few of them take the bait. The market-powered crystal ball is worth peanuts.

The reason CEOs are paid millions is that only a small percentage of business managers have the ability to cut through the nonsense, and the guts to cut the cost of that nonsense. It is a rare skill. I am calling on CEOs to end EPMs. Right now.

Why CrowdCast ditched Robin Hanson's MSR as the engine of its IAM software



Leslie Fine of CrowdCast:

Chris,

As Emile points out, in 2003 I started experimenting with (and empirically validating) alternatives to the traditional stock-market metaphor that will be more viable in corporate settings. We found the level of confusion and lack of interest in the usual fare led to a death spiral of disuse and inaccuracy. BRAIN was a first stake in the ground in prediction market mechanism design with usability as a fundamental premise.

When I joined Crowdcast (then Xpree) in August of 2008, Mat and the team already recognized the confusion around, and consequent poor adoption of, the MSR mechanism. The number of messages I fielded in my first month here asking me to explain pricing, shorting, how to make money, etc. was astounding. We all knew that we had to start from scratch, and rebuild a mechanism that was easy to use, expressive both in terms of the question one can ask and the message space in which one can answer, and provided a high level of user engagement. We have abandoned the MSR in favor of a new method that users are already finding much simpler and that requires a lower level of participation and sophistication than the usual stock market analogy.

I wish I could go into more detail. However, we need to keep a little bit of a lid on things for our upcoming launch. I can only beg your patience a little while longer, and I hope you will judge our offering worth the wait.

Regards,
Leslie

Nota Bene: IAM = information aggregation mechanism
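For context on what CrowdCast walked away from: Hanson's market scoring rule, in its common logarithmic form (LMSR), prices outcomes through a cost function, and a trade's price depends on how it moves the outstanding share quantities. The sketch below is a generic LMSR illustration; the liquidity parameter b and the quantities are assumptions, not CrowdCast's actual settings:

```python
import math

def lmsr_cost(q, b=100.0):
    # Cost function C(q) = b * ln(sum(exp(q_i / b))) over outstanding shares q.
    # A trade moving the market from q to q2 costs C(q2) - C(q).
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    # Instantaneous price (probability estimate) of each outcome; sums to 1.
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

# Fresh two-outcome market: both outcomes priced at 0.50.
prices = lmsr_prices([0.0, 0.0])

# Buying 10 shares of outcome 0 costs slightly more than 10 * 0.50,
# because the price rises as you buy.
trade_cost = lmsr_cost([10.0, 0.0]) - lmsr_cost([0.0, 0.0])
```

The confusion Leslie Fine describes (pricing, shorting, "how do I make money") comes precisely from this stock-market framing: users must reason about share quantities and path-dependent costs rather than stating a forecast directly.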

UPDATE: They are out with their new collective forecasting mechanism.


The truth about (enterprise) prediction markets


Paul Hewitt:

[…] In virtually every case, the prediction market forecast is closer to the official HP forecast than it is to the actual outcome. Perhaps these markets are better at forecasting the forecast than they are at forecasting the outcome! Looking further into the results, while most of the predictions have a smaller error than the HP official forecasts, the differences are, in most cases, quite small. For example, in Event 3, the HP forecast error was 59.549% vs. 53.333% for the prediction market. They’re both really poor forecasts. To the decision-maker, the difference between these forecasts is not material.

There were eight markets that had HP official forecasts. In four of these (50%), the forecast error was greater than 25%. Even though, only three of the prediction market forecast errors were greater than 25%, this can hardly be a ringing endorsement for the accuracy of prediction markets (at least in this study). […]
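Paul's materiality point can be made concrete with a quick sketch. The actual outcome of 100 units is a hypothetical scaling, chosen only so the forecast errors match the Event 3 figures quoted above:

```python
def pct_error(forecast, actual):
    # Absolute forecast error as a percentage of the actual outcome.
    return abs(forecast - actual) / actual * 100

actual = 100.0                         # hypothetical outcome, for illustration
hp_error = pct_error(159.549, actual)  # 59.549%, the HP official forecast error
pm_error = pct_error(153.333, actual)  # 53.333%, the prediction market error

# The "improvement" is about 6.2 percentage points, on forecasts that
# both missed by more than half the actual outcome.
improvement = hp_error - pm_error
```

A decision-maker choosing between two forecasts that are both off by over 50% gains nothing actionable from the smaller error, which is exactly Paul's point.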

To the despair of the Nashville imbecile, Paul's analysis is quite similar to mine (circa February 14, 2009):

The prediction market technology is not a disruptive technology, and the social utility of the prediction markets is marginal. Number one, the aggregated information has value only for the totally uninformed people (a group that comprises those who overly obsess with prediction markets and have a narrow cultural universe). Number two, the added accuracy (if any) is minute, and, anyway, doesn't fill up the gap between expectations and omniscience (which is how people judge forecasters). In our view, the social utility of the prediction markets lies in efficiency, not in accuracy. In complicated situations, the prediction markets integrate expectations (informed by facts and expertise) much faster than the mass media do. Their accuracy/efficiency is their uniqueness. It is their velocity that we should put to work.

Prediction markets are not a disruptive technology, but merely another means of forecasting.

Go read Paul's analysis in full.

I would like to add two things to Paul's conclusion:

  1. We have been lied to about the real value of the prediction markets. Part of the "field of prediction markets" (a terminology that encompasses more people and organizations than just the prediction market industry) is made up of liars who live by the hype and will die by the hype.
  2. Prediction markets have value in specific cases where it can be demonstrated that an information aggregation mechanism is the appropriate method to put to work in those cases (and not in others). Neither the Ivory Tower economic canaries nor the self-described prediction market "practitioners" have done this job.

Paul Hewitt on enterprise prediction markets


– Here, in response to Jed Christiansen. (Scroll down.)

– Here, on his own blog.

Interesting. (Paul should learn to pepper his posts with external links, though. Otherwise, a web visitor out of the loop can't get the background of the issue being discussed. The foundation of the Web is hyperlinking, Paul.)

The world's #1 resource on enterprise prediction markets


– Do not waste $400 on a "prediction market conference" run by a San Francisco clown and attended by suckers.

– Quit listening to the Ivory Tower economic canaries who over-hype prediction markets, and who have no experience whatsoever in the field of forecasting.

– Instead, do read this Inkling Markets resource, and do grill Adam Siegel on the phone. It is free, and he is the Real McCoy. [I hope that NewsFutures and CrowdCast will soon provide the same kind of EPM dossier on their respective websites.]


Did Florian Riahi of Texodus Predictions really read those academic papers about prediction markets?


I described in a previous post why I delisted his company from my list of prediction market consultants.

I want to share a remark with you today. Here is a man from Holland who recruited, by e-mail, some US-based "advisors" one ocean away. One curious online recruit is Professor Christopher Wlezien, co-author of an academic paper… that claims that prediction markets are *NO* better than damped polls:

For now, our results suggest the need for much more caution and less naive cheerleading about election markets on the part of prediction market advocates.

I bet that Florian Riahi didn't read that paper, and I bet that Professor Christopher Wlezien accepted the advisory slot in order to make the simple point that the "prediction market advocates" are just a bunch of baloney peddlers who don't read academic papers. :-D

Previously:

– How that prediction market consultant in Holland attracts economic advisers on the cheap

– I bet that those academic scholars…