HP began to explore prediction markets in 1996, but did not even consider applying them to the 2002 HP-Compaq merger. Similarly, Yahoo and Microsoft are two of the companies mentioned most often as being involved in prediction markets (along with their main competitor Google), but I’ll bet none are considering the by-far-most-valuable markets they could create, on their just-announced proposed merger.
Decision markets could say whether this merger is good for shareholders, by estimating the combined stock price given a merger, and given no merger. Similarly, decision markets could say whether this merger is good for these firms’ customers, by estimating the price and/or quantity of web ads given a merger, and given no merger. This might help convince regulators to approve the merger.
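The decision-market logic described here is simple to state: run one conditional market per possible decision, then compare the conditional estimates. A minimal sketch, with made-up numbers standing in for market prices (nothing below is real trading data):

```python
# Minimal sketch of a decision market's recommendation rule.
# Each key is a possible decision; each value is the market's estimate
# of the outcome measure (e.g. combined share price) conditional on
# that decision being taken. The numbers are purely hypothetical.

def recommend(conditional_estimates):
    """Pick the decision whose conditional market estimate is highest."""
    return max(conditional_estimates, key=conditional_estimates.get)

estimates = {
    "merge": 31.50,     # traders' estimate of share price if the merger proceeds
    "no_merge": 29.75,  # traders' estimate if the merger is called off
}

print(recommend(estimates))  # -> merge
```

The same rule works for the customer-facing version: replace share price with estimated ad price or quantity conditional on each decision.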
My main doubt here is whether ad price and quantity are good enough measures of the merger’s social benefits – what other outcomes could such markets estimate, to speak more clearly? And this is a very clear demonstration that these companies are just not serious about finding the highest value applications of prediction markets.
Cross-posted from Overcoming Bias.
“[T]his is a very clear demonstration that these companies are just not serious about finding the highest value applications of prediction markets.”
Robin: I realize that one reason you make these comments is to provoke someone into disclosing an application you might anoint as “high value.” Bravo for that. But I’m going to pretend that you’re being serious. If a company were using a ‘high value’ market for something — why would Robin Hanson know about it?
Although PM experiments make a company look ‘cool,’ disclosing specific high profile applications isn’t necessarily in the firm’s interests. I can vouch for this based on my own experience. Of a firm’s major stakeholders, we don’t know who would find a particular application unfair or irresponsible. This is particularly true if the application will create winners and losers — as would often be the case with high value markets.
An email from a friend who is administering prediction markets inside a large company: “I’ve been told that *the existence of the market on certain topics* should be regarded as a secret. On other occasions, I’ve been told NOT to run a certain market AT ALL because of the likely response of the press and/or partners if the market’s existence was disclosed.”
If the *existence* of these markets is regarded as sensitive, wouldn’t it be even *more sensitive* for a company to say it is heavily relying on the markets’ decision?
There was an interesting quote in this article about political data mining: “It doesn’t benefit our clients for them to see a newspaper story about how great our technology is. Every campaign that we work with wants you to believe that it’s shoe leather that wins the race, or great issues, or the love of the people, but the fact of the matter is a lot of it is the nitty-gritty organization.”
The same could be said of markets. The clients of these markets would rather we believe in the power of their business acumen — not in the power of collective judgment.
There are a number of reasons a firm might not want to disclose its specific high-value uses of PMs. I’m sure you’ve noticed that PM vendors and consultants often do not disclose their clients’ identities. Speculate about why these clients prefer to remain unnamed. Then ask yourself if any of these reasons might also apply to MSFT/YHOO/GOOG announcing a decision market about this merger — or any other of your anointed “high value” applications of prediction markets.
Well I guess it would be fun to have some prediction markets on whether it will ever be revealed that Google, Yahoo, or Microsoft now has markets on the effects of a Yahoo-Microsoft merger. I’d initially bet against it, but would defer to strong trading on the other side.
Also, private markets won’t persuade regulators to allow or prohibit the merger.
Robin: Last time I bumped into you, you were talking about implementing a market about which applicants would be getting into schools.
No offense intended, but I took this as evidence that you just aren’t serious about finding the highest value applications of prediction markets.
The high stakes in academia are articles published, faculty appointments, departmental budgets, salaries and donations to endowments. Where is your market for those? Not to mention: Supposedly you “invented” prediction markets, and you’re only now getting around to setting them up in your own domain?
I’m starting to wonder if you’re just trying to look cool, or if you’re “serious” about trying to find the highest value prediction markets.
Bo, I said companies were not serious, not the employees. I am happy to grant that you personally are serious. I will similarly grant that no university I know is serious about finding the highest value applications of prediction markets, including my university. Even more than high tech companies, universities are far more interested in associating with cool tech than in risking disruptions by actually applying it internally.
We don’t have good indicators or definitions of whether ‘companies are serious.’ You’re not being helpful by pointing out that PMs aren’t being used for everything in corporations, or even the applications that seem obvious to you.
Let’s think about things we might be able to measure. For example, we might be able to measure the % of employees at a company who have had a prediction market run on a topic directly relevant to their day-to-day job. Without disclosing the topics: If there are a lot of employees working on projects related to the markets, it may be safe to assume that markets are being run on important topics for the company.
What % would you select?
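The proposed heuristic reduces to a single ratio. A quick sketch, with invented head-count figures (the threshold and numbers are hypothetical, not drawn from any real company):

```python
# Bo's proposed "seriousness" heuristic: the fraction of employees whose
# day-to-day job has had a prediction market run on a directly relevant topic.
# All figures below are invented for illustration.

def pm_coverage(employees_covered, total_employees):
    """Fraction of the workforce covered by a job-relevant prediction market."""
    return employees_covered / total_employees

coverage = pm_coverage(1200, 8000)  # e.g. 1,200 covered out of 8,000 employees
print(f"{coverage:.0%}")  # -> 15%
```

Whatever threshold one picks, the point of the metric is that it can be computed without disclosing any market topics.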
Bo, by “serious” I meant trying to field the highest value applications. That is naturally measured in accounting terms – value minus cost. Measures of popularity or familiarity would not at all be the same thing.
If you re-read my comment, you will see that I did not say anything about the popularity of the markets. I proposed measuring the number of employees working in a job that has a prediction market on it. This is a function of the firm’s decision to implement markets on a wide variety of relevant topics for the business.
I know this is a heuristic, and there would be greater value in trying to measure the value added in dollars. However, a monetization study is likely to be 1) more time consuming to produce, and 2) just as unreliable.
We aren’t likely to get honest answers from people by asking them to estimate the value of additional information at various specific hypothetical moments in the past. People are not good at making this type of estimation, especially when there is no incentive to get it right (and a lot of reasons to get it wrong).
On the costs side: Even if we had good data about how much time individual employees were spending browsing the site, etc. — it is unclear how to price their time. Is it work, or is it leisure? Should we model this as employees having an hourly rate (even though most are salaried)? I know you had some thoughts on this before which I don’t remember (do feel free to share), but I’m not convinced these issues can be resolved in a persuasive way.
Once people realize what’s going on with the methodology of such research, they’ll realize what a totally hackable and unreliable study it is — and it will lose its persuasive value at Google as well as externally.
In summary: Doing a value-of-information calculation on prediction markets itself does not seem to offer very much information value. Because of the low rigor, it would offer little additional persuasive value at a great cost.
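For concreteness, here is the textbook value-of-information calculation being argued over: the expected payoff of deciding with perfect information, minus the expected payoff of deciding without it. The decision, prior, and payoffs below are entirely hypothetical; the disagreement above is precisely about whether such inputs can be estimated honestly after the fact.

```python
# Textbook expected-value-of-perfect-information calculation (illustrative only).
# One decision ("launch" vs "hold"), two states of the world, a prior belief.

p_good = 0.6                      # prior probability the project succeeds
payoffs = {                       # payoffs[action][state]
    "launch": {"good": 100, "bad": -50},
    "hold":   {"good": 0,   "bad": 0},
}

def expected(action):
    """Expected payoff of an action under the prior."""
    return p_good * payoffs[action]["good"] + (1 - p_good) * payoffs[action]["bad"]

# Without information: commit to the single best action under the prior.
ev_without = max(expected(a) for a in payoffs)            # launch: 40.0

# With perfect information: pick the best action separately in each state.
ev_with = (p_good * max(v["good"] for v in payoffs.values())
           + (1 - p_good) * max(v["bad"] for v in payoffs.values()))  # 60.0

voi = ev_with - ev_without
print(voi)  # -> 20.0
```

The arithmetic is trivial; as the comment above argues, the hard part is that the priors and payoffs must come from retrospective human estimates, which is where the hackability lies.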
Also, just wanted to drop this link for Chris, who seems to think that you invented value-of-information calculations in the shower one morning a few weeks ago and released it to the world via Midas Oracle.
I have publicized the Wikipedia link, as you ordered, my good Lord.
I can’t have much optimism about a business practice whose proponents aren’t even willing to try to offer a cost-benefit calculation. You could count how many employees had ever gone to a TQM meeting, but that wouldn’t tell you if TQM is valuable or not.