Beyond the Continuous Double Auction – Part II, Existing alternatives


This is Part II of a series of posts examining alternatives to the Continuous Double Auction market. Part I, posted at my CASTrader Blog, examines the problems with Continuous Double Auctions. This Part II examines some existing alternatives to them.

The invention of new types of markets has experienced a relative renaissance in recent years (mainly among the people who hang around Midas Oracle), many of which I surveyed before on my blog, and new types of markets are being invented as we speak (PDF). It's hard to keep up, and I will caution that I am by no means any kind of expert on market design. That said, let's examine an old-school market, as well as one of the new inventions.

Call Auction Market. Ironically, what the New York Stock Exchange replaced in the 1800s when it adopted CDAs has some interesting properties and advantages. A call market is typically organized as a price scan auction, which basically amounts to this: poll every market participant and ask how much they would tentatively offer to buy or sell at a given price. The scan continues until buy/sell demand is balanced, at which price the market is cleared. The call market was rightly abandoned in the 1800s as markets grew, owing to the impossibility of managing them in the pre-electronic era. I imagine it was mind-numbingly boring for traders as well. Recently, though, other types of call markets, such as crossing networks that batch orders at prices set in CDA markets, have re-emerged, and some researchers have called for a major revival of the old call market (you may want to turn your sound off before clicking that link). Call markets have the interesting property of higher liquidity and lower short-term volatility relative to a CDA. When you realize CDAs are a sequential operation and call markets are a batch operation, it's easy to see why this is true. In a batch operation, buy and sell orders are likely to offset, keeping price movement to a minimum versus a bunch of sequential order fills alternating at the bid/ask. The advantage shifts towards price-takers, resulting in more trading and better price discovery, in my opinion. It's not hard to see that a call auction where everyone's offer was secret (and treated equally) would eliminate many of the shenanigans of CDAs. The following was said of call auction markets:

"Recent advances in computer technology have considerably expanded the call auction's functionality. We suggest that the problems we are facing concerning liquidity, volatility, fragmentation and price discovery are largely endemic to the continuous market, and that the introduction of electronic call auction trading in the U.S. would be the most important innovation in market structure that could be made."

That was said over 10 years ago, and it's not hard to see why call markets are attractive, so why aren't they taking over? Aside from call market proponents' conspiracy theories that the bad boys of Wall Street would lose out, there is the major problem of immediacy. You just can't trade in a call market whenever you want to. The now-defunct Arizona Stock Exchange was a call market that cleared once a day, for example. While some might argue that clearing once a day is a more productive use of people's time, so long as some other market is clearing the same securities continuously, trading will flow to the other market, because there are simply more opportunities there.
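The price-scan clearing described above lends itself to a very short sketch. The helper below is illustrative only (the order representation, function names, and tick grid are my assumptions, not any exchange's actual algorithm): it scans candidate prices and clears at the one where batched buy and sell interest best balance, maximizing executed volume.

```python
def clearing_price(buy_orders, sell_orders, ticks):
    """Scan candidate prices and return the one maximizing executed volume,
    i.e. where tentative buy and sell demand best balance.

    buy_orders / sell_orders: lists of (limit_price, quantity).
    ticks: iterable of candidate prices to scan.
    """
    best_price, best_volume = None, -1
    for p in ticks:
        # At price p, buyers willing to pay >= p and sellers asking <= p participate.
        demand = sum(q for limit, q in buy_orders if limit >= p)
        supply = sum(q for limit, q in sell_orders if limit <= p)
        volume = min(demand, supply)  # executable (batched) volume at p
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume

buys = [(10.2, 100), (10.1, 50), (10.0, 200)]
sells = [(9.9, 120), (10.0, 80), (10.1, 150)]
price, volume = clearing_price(buys, sells, ticks=[9.9, 10.0, 10.1, 10.2])
# Here the market clears at 10.0, where 200 shares can trade in one batch.
```

Because all orders meet in one batch at a single price, offsetting interest nets out instead of walking the book, which is the mechanical reason for the lower short-term volatility noted above.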

Hanson's Combinatorial Market. I've been fascinated by Hanson's combinatorial market, although I must admit I don't fully understand all of its intricacies. This market doesn't use a limit order book at all (unless you want one because you want it to scale well), liquidity is always present via a market maker, and there is no bid-ask spread, because the price you pay is a continuous function of how much you want to buy or sell. The smaller the amount, the less your price will diverge from the last trade price. In the absence of other friction, the smallest trades are possible, even efficient, because liquidity is continuous (via a market maker with a continuous price function). What's more, you can have a functioning market with just a few traders, unlike a CDA. Hanson's market is designed to function well in thin markets. Unfortunately, all of these characteristics are provided by one market maker adapted more for event markets than securities markets. This market maker decides what the price will be, rather than the market players themselves. Furthermore, the market maker can be set up with different behaviors (scoring rules) and is subject to losing money (although the bound on the loss is known ahead of time). While Hanson's market maker may be an ideal way to subsidize liquidity in a fledgling prediction market, it doesn't appear to me to be adaptable to a securities market.
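Hanson's market maker is commonly implemented as a logarithmic market scoring rule (LMSR). As a minimal sketch (the liquidity parameter `b` and the quantities below are illustrative assumptions, not anyone's production settings), this shows the continuous price function, the spreadless cost of a trade of any size, and the bounded worst-case loss mentioned above:

```python
import math

def cost(quantities, b):
    """LMSR cost function C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(q / b) for q in quantities))

def price(quantities, b, i):
    """Instantaneous price of outcome i: exp(q_i/b) / sum_j exp(q_j/b)."""
    denom = sum(math.exp(q / b) for q in quantities)
    return math.exp(quantities[i] / b) / denom

def trade_cost(quantities, b, i, shares):
    """What a trader pays to buy `shares` of outcome i: C(q') - C(q).
    The cost is a continuous function of trade size, so there is no
    bid-ask spread and arbitrarily small trades are always possible."""
    q_after = list(quantities)
    q_after[i] += shares
    return cost(q_after, b) - cost(quantities, b)

b = 100.0                # liquidity parameter (illustrative)
q = [0.0, 0.0]           # no shares sold yet: both outcomes priced at 0.5
p0 = price(q, b, 0)
small = trade_cost(q, b, 0, 1)    # a tiny trade pays roughly the current price
large = trade_cost(q, b, 0, 100)  # a big trade moves the price against you
worst_case_loss = b * math.log(len(q))  # the market maker's known loss bound
```

The `worst_case_loss` line is the "bound on the loss is known ahead of time" property: with two outcomes and `b = 100`, the subsidy can never exceed `100 * ln 2`.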

The ideal market. An ideal dark market for CASTrader would be one that operates continuously, scales well, and is general-purpose like a CDA, while having the efficiency, low volatility, and trade-encouraging characteristics of a call market, combined with the continuous liquidity and ability to function in thin markets of Hanson's combinatorial market. To boot, I'd like it to be fair to traders of all sizes, as well as easy to program relative to a CDA. Is that possible? See Part III, over at CASTrader.

Saudi Arabia prediction markets, anyone??


King Abdullah of Saudi Arabia has said he does not want to go down in history as Mr. Bush’s Arab Tony Blair.

New York Times


Awad Awad/Agence France-Presse — Getty Images
King Abdullah of Saudi Arabia in Riyadh in March, during a meeting of Arab heads of state in which he called the United States presence in Iraq “an illegal foreign occupation,” infuriating the White House.

—

Saudi Arabia prediction markets, anyone??

Thanks, but no thanks. Difficult to get info. Saudi Arabia is like a giant cult, with a two-tier population: the people, and the rulers (the multi-married "princes", who proliferate like rabbits).

That's why I think Robin Hanson made an error when he picked the Middle East as the geopolitical target of his DARPA Policy Analysis Market. Mid-East politics is too arcane for us Westerners. And the whole Iraq war mess shows you that the Americans, in particular, don't get the Arabo-Muslim world.

Would Mid-East prediction markets with strong incentives and high participation improve our intelligence? Yes, in theory. But, as we have seen on Midas Oracle in recent weeks, the scholars have great ideas for brand-new event derivatives, but… as I am wont to ask… how many divisions? They have no traction.

The field of prediction markets is where great ideas meet their coffin. The emphasis is put on a bunch of aloof scholars just because they can use a scientific calculator. We need thinkers/managers who can both make us dream with socially relevant event derivatives and understand the practicalities of prediction markets.

Business Intelligence & Prediction Markets


—

Wikipedia (on Business Intelligence):

Business intelligence (BI) is a business management term which refers to applications and technologies which are used to gather, provide access to, and analyze data and information about their company operations. Business intelligence systems can help companies have a more comprehensive knowledge of the factors affecting their business, such as metrics on sales, production, internal operations, and they can help companies to make better business decisions. Business Intelligence should not be confused with competitive intelligence, which is a separate management concept.

—

Wikipedia (on Competitive Intelligence):

"Competitive Intelligence (CI) is both a process and a product. The process of Competitive Intelligence is the action of gathering, analyzing, and applying information about products, domain constituents, customers, and competitors for the short term and long term planning needs of an organization. The product of Competitive Intelligence is the actionable output ascertained by the needs prescribed by an organization."

Key points of these definitions:
1) Competitive Intelligence is an ethical and legal business practice. (This is important, as CI professionals emphasize that the discipline is not the same as industrial espionage, which is both unethical and usually illegal.)
2) The focus is on the external business environment.
3) There is a process involved in gathering information, converting it into intelligence and then utilizing this in business decision making. CI professionals emphasize that if the intelligence gathered is not usable (or actionable) then it is not intelligence.

The term is often viewed as synonymous with Competitor Analysis, but Competitive Intelligence is more than analyzing competitors: it is about making the organization more competitive relative to its existing set of competitors and potential competitors. Customers and key external stakeholders define the set of competitors for the organization and, in so doing, describe what could be a substitute for the business, votes, donations, or other activities of the organization. The term is often abbreviated as CI, and most large businesses now have some Competitive Intelligence function, with the staff involved often being members of professional associations such as the Society of Competitive Intelligence Professionals. CI activities often use a "Competitive Intelligence Solution", usually via their intranet and internal alerts, which can also lead to a Competitive Response Solution.

The Society of Competitive Intelligence Professionals (SCIP) is an organization for those who are interested in learning more about Competitive Intelligence. Established in 1986, it provides education and networking opportunities for business professionals, as well as up-to-date market research and analysis. "Members of the SCIP have backgrounds in market research, strategic analysis, science and technology."

&#8212-

Wikipedia (on Business Intelligence):

When implementing a BI programme one might like to pose a number of questions and take a number of resultant decisions, such as:
Goal Alignment queries: The first step determines the short and medium-term purposes of the programme. What strategic goal(s) of the organization will the programme address? What organizational mission/vision does it relate to? A crafted hypothesis needs to detail how this initiative will eventually improve results / performance (i.e. a strategy map).
Baseline queries: Current information-gathering competency needs assessing. Does the organization have the capability of monitoring important sources of information? What data does the organization collect and how does it store that data? What are the statistical parameters of this data, e.g. how much random variation does it contain? Does the organization measure this?
Cost and risk queries: The financial consequences of a new BI initiative should be estimated. It is necessary to assess the cost of the present operations and the increase in costs associated with the BI initiative. What is the risk that the initiative will fail? This risk assessment should be converted into a financial metric and included in the planning.
Customer and Stakeholder queries: Determine who will benefit from the initiative and who will pay. Who has a stake in the current procedure? What kinds of customers/stakeholders will benefit directly from this initiative? Who will benefit indirectly? What are the quantitative / qualitative benefits? Is the specified initiative the best way to increase satisfaction for all kinds of customers, or is there a better way? How will customers' benefits be monitored? What about employees, shareholders, and distribution channel members?
Metrics-related queries: These information requirements must be operationalized into clearly defined metrics. One must decide what metrics to use for each piece of information being gathered. Are these the best metrics? How do we know that? How many metrics need to be tracked? If this is a large number (it usually is), what kind of system can be used to track them? Are the metrics standardized, so they can be benchmarked against performance in other organizations? What are the industry standard metrics available?
Measurement Methodology-related queries: One should establish a methodology or a procedure to determine the best (or acceptable) way of measuring the required metrics. What methods will be used, and how frequently will the organization collect data? Do industry standards exist for this? Is this the best way to do the measurements? How do we know that?
Results-related queries: Someone should monitor the BI programme to ensure that objectives are being met. Adjustments in the programme may be necessary. The programme should be tested for accuracy, reliability, and validity. How can one demonstrate that the BI initiative (rather than other factors) contributed to a change in results? How much of the change was probably random?

Recession probability index rises to 16.9%


The Bureau of Economic Analysis reported today that U.S. real GDP grew at an annual rate of 1.3% in the first quarter of 2007, moving our recession probability index up to 16.9%. This post provides some background on how that index is constructed and what the latest move up might signify.

What sort of GDP growth do we typically see during a recession? It is easy enough to answer this question just by selecting those postwar quarters that the National Bureau of Economic Research (NBER) has determined were characterized by economic recession and summarizing the probability distribution of those quarters. A plot of this density, estimated using nonparametric kernel methods, is provided in the following figure (the figures here are similar to those in a paper I wrote with UC Riverside Professor Marcelle Chauvet, which appeared last year in Nonlinear Time Series Analysis of Business Cycles). The horizontal axis of this figure corresponds to a possible rate of GDP growth (quoted at an annual rate) for a given quarter, while the height of the curve on the vertical axis corresponds to the probability of observing GDP growth of that magnitude when the economy is in a recession. You can see from the graph that the quarters in which the NBER says the U.S. was in a recession are often, though far from always, characterized by negative real GDP growth. Of the 45 quarters in which the NBER says the U.S. was in recession, 19 were actually characterized by at least some growth of real GDP.

chauvet3.gif

One can also calculate, as in the blue curve below, the corresponding characterization of expansion quarters. Again, these usually show positive GDP growth, though 10 of the postwar quarters that are characterized by NBER as part of an expansion exhibited negative real GDP growth.

chauvet4.gif

The observed data on GDP growth can be thought of as a mixture of these two distributions. Historically, about 20% of the postwar U.S. quarters are characterized as recession and 80% as expansion. If one multiplies the recession density in the first figure by 0.2, one arrives at the red curve in the figure below. Multiplying the expansion density (second figure above) by 0.8, one arrives at the blue curve in the figure below. If the two products (red and blue curves) are added together, the result is the overall density for GDP growth coming from the combined contribution of expansion and recession observations. This mixture is represented by the yellow curve in the figure below.

chauvet5.gif

It is clear that if in a particular quarter one observes a very low value of GDP growth such as -6%, that suggests very strongly that the economy was in recession that quarter, because for such a value of GDP growth, the recession distribution (red curve) is the most important part of the mixture distribution (yellow curve). Likewise, a very high value such as +6% almost surely came from the contribution of expansions to the distribution. Intuitively, one would think that the ratio of the height of the recession contribution (the red curve) to the height of the mixture distribution (the yellow curve) corresponds to the probability that a quarter with that value of GDP growth would have been characterized by the NBER as being in a recession. Actually, this is not just intuitively sensible; it in fact turns out to be an exact application of Bayes' Law. The height of the red curve measures the joint probability of observing GDP growth of a certain magnitude and the occurrence of a recession, whereas the height of the yellow curve measures the unconditional probability of observing the indicated level of GDP growth. The ratio between the two is therefore the conditional probability of a recession given an observed value of GDP growth. This ratio is plotted as the red curve in the figure below.

chauvet6.gif
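The red-curve-over-yellow-curve ratio can be sketched numerically. The normal densities below are illustrative stand-ins for the kernel-estimated recession and expansion distributions (their means and standard deviations are my assumptions, not the paper's estimates):

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def recession_probability(gdp_growth, p_recession=0.2):
    """P(recession | GDP growth): weighted recession density over the mixture,
    i.e. the red curve divided by the yellow curve, via Bayes' Law."""
    f_rec = normal_pdf(gdp_growth, mean=-1.0, sd=3.0)   # recession density (assumed)
    f_exp = normal_pdf(gdp_growth, mean=4.0, sd=3.0)    # expansion density (assumed)
    joint_rec = p_recession * f_rec                      # "red curve"
    mixture = joint_rec + (1 - p_recession) * f_exp      # "yellow curve"
    return joint_rec / mixture

recession_probability(-6.0)  # very low growth -> recession almost certain
recession_probability(6.0)   # strong growth -> recession very unlikely
```

With these stand-in densities, -6% growth yields a recession probability above 0.9 while +6% yields one below 0.1, matching the intuition in the text.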

Such an inference strategy seems quite reasonable and robust, but unfortunately it is not particularly useful: for most of the values one would be interested in, the implication from Bayes' Law is that it's hard to say from just one quarter's value for GDP growth what is going on. However, there is a second feature of recessions that is extremely useful to exploit: if the economy was in an expansion last quarter, there is a 95% chance it will continue to be in expansion this quarter, whereas if it was in a recession last quarter, there is a 75% chance the recession will persist this quarter. Thus suppose, for example, that we had observed -10% GDP growth last quarter, which would have convinced us that the economy was almost surely in a recession last quarter. Before we saw this quarter's GDP number, we would have thought in that case that there's a 0.75 probability of the recession continuing into the current quarter. In this situation, to use Bayes' Law to form an inference about the current quarter given both the current and previous quarters' GDP, we would weight the mixtures not by 0.2 and 0.8 (the unconditional probabilities of this quarter being in recession and expansion, respectively), but rather by magnitudes closer to 0.75 and 0.25 (the probabilities of being in recession or expansion this period conditional on being in recession the previous period). The ratio of the height of the resulting new red curve to the resulting new yellow curve could then be used to calculate the conditional probability of a recession in quarter t based on observations of the values of GDP for both quarters t and t - 1. Starting from a position of complete ignorance at the start of the sample, we could apply this method sequentially to each observation to form a guess about whether the economy was in a recession at each date given not just that quarter's GDP growth, but all the data observed up to that point.

One can also use the same principle, which again is nothing more than Bayes' Law, working backwards in time: if this quarter we see GDP growth of -6%, that means we're very likely in a recession this quarter, and given the persistence of recessions, that raises the likelihood that a recession actually began the period before. The farther back one looks in time, the better the inference one can arrive at. Seeing this quarter's GDP numbers helps me make a much better guess about whether the economy might have been in recession the previous quarter. We then work through the data iteratively in both directions: start with a state of complete ignorance about the sample, work through each date to form an inference about the current quarter given all the data up to that date, and then use the final value to work backwards to form an inference about each quarter based on GDP for the entire sample.
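The forward (sequential) pass described above can be sketched as follows. Each quarter, the prior comes from last quarter's filtered probability combined with the transition persistences (0.75 stay-in-recession, 0.95 stay-in-expansion), and Bayes' Law then updates on that quarter's GDP growth. The normal densities are again illustrative stand-ins, not the paper's kernel estimates:

```python
import math

P_STAY_REC, P_STAY_EXP = 0.75, 0.95  # transition persistences from the text

def normal_pdf(x, mean, sd):
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def filter_recession_probs(gdp_series, p0=0.2):
    """Return P(recession in quarter t | GDP growth for quarters 1..t)."""
    probs, p_rec = [], p0  # start from the unconditional 20% recession weight
    for g in gdp_series:
        # Prior for this quarter, propagated from last quarter's inference.
        prior_rec = p_rec * P_STAY_REC + (1 - p_rec) * (1 - P_STAY_EXP)
        # Bayes' Law update on this quarter's observed growth.
        joint_rec = prior_rec * normal_pdf(g, -1.0, 3.0)       # recession branch
        joint_exp = (1 - prior_rec) * normal_pdf(g, 4.0, 3.0)  # expansion branch
        p_rec = joint_rec / (joint_rec + joint_exp)
        probs.append(p_rec)
    return probs

# Two strong quarters, a run of contraction, then recovery (made-up data):
probs = filter_recession_probs([3.5, 4.0, -2.0, -4.0, -1.0, 3.0, 4.5])
```

Because the prior carries information forward, a single weak quarter moves the probability only moderately, while a sustained run of negative growth drives it high; a backward smoothing pass over `probs` would sharpen the dating further, as the text describes.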

All this has been described here as if we took the properties of recessions and expansions as determined by the NBER as given. However, another thing one can do with this approach is to calculate the probability law for observed GDP growth itself, not conditioning at all on the NBER dates. Once we've done that calculation, we can infer parameters such as how long recessions usually last and how severe they are in terms of GDP growth directly from GDP data alone, using the principle of maximum likelihood estimation. It is interesting that when we do this, we arrive at estimates of the parameters that are in fact very similar to the ones obtained using the NBER dates directly.

What's the point of this, if all we do is use GDP to deduce what the NBER is eventually going to tell us anyway? The issue is that the NBER typically does not make its announcements until long after the fact. For example, the most recent release from the NBER Business Cycle Dating Committee was announced to the public in July 2003. Unfortunately, what the NBER announced in July 2003 was that the recession had actually ended in November 2001: they were telling us the situation a year and a half after it had happened.

Waiting so long to make an announcement certainly has some benefits, allowing time for data to be revised and for enough ex-post data to accumulate to make the inference sufficiently accurate. However, my research with the algorithm sketched above suggests that it performs quite satisfactorily if we just wait for one quarter's worth of additional data. Thus, for example, with the advance 2007:Q1 GDP data just released, we form an inference about whether a recession might have started in 2006:Q4. The graph below shows how well this one-quarter-delayed inference would have performed historically. Shaded areas denote the dates of NBER recessions, which were not used in any way in constructing the index. Note, moreover, that this series is entirely real-time in construction: the value for any date is always based solely on information as it was reported in the advance GDP estimates available one quarter after the indicated date.

rec_prob_midas.gif

Although the sluggish GDP growth rates of the past year have produced quite an obvious move up in the recession probability index, it is still far from the point at which we would conclude that a recession has likely started. At Econbrowser we will be following the procedure recommended in the research paper mentioned above: we will not declare that a recession has begun until the probability rises above 2/3. Once it begins, we will not declare it over until the probability falls back below 1/3.

So yes, the ongoing sluggish GDP growth has come to a point where we would worry about it, but no, it's not yet at the point where we would say that a recession has likely begun.

[James Hamilton is professor of economics at the University of California, San Diego. The above is cross-posted from Econbrowser.]

Leading political indicators


American politics does not suffer from a shortage of polls. Zogby. Gallup. Rasmussen. SurveyUSA. Mason-Dixon. Polimetrix… In an information-glutted world, what matters is not the supply of sources, but the ability to glean trustworthy information from the larger swath of poor data.

Different polling organizations have different strengths and weaknesses. Some use "tight screens" to scope out likely voters; others simply sample registered voters, without making any attempt to tighten the survey base to "likely voters." Tight screening is especially crucial for gauging the true state of a primary, when committed base opinion can diverge significantly from that of less engaged moderate voters and, more importantly, influence those moderates over time to converge to the more partisan perspective. Some pollsters use human interviewers, although recently that has given way to IVR (Interactive Voice Response) polls (the kind where a computer talks to you and asks you to "press 1 if you will definitely support X, 2 if probably…").

I have found tight-screen IVR polling to be the most reliable. IVR not only has no marginal cost, but it also eliminates the biases that result from respondents trying to give the most pleasant-sounding answer possible (the "sexy grad student effect" that exaggerated Kerry's margin by 15 points in Pennsylvania's 2004 exit polling, for example). Possible IVR responses can also be randomly rotated from respondent to respondent to eliminate recency biases (the first and last responses in a list are exaggerated because those are at the forefront of a person's memory of the list, not because he or she will vote that way).

The poster child of IVR tight-screen polling success is Scott Rasmussen's Rasmussen Reports. I have only tracked them over the last two election cycles (2004 and 2006), but considering that 2004 was a GOP wave and 2006 a Democratic wave election, I think the data is sufficient to form a valid judgment. Rasmussen's track record is simply stupendous. It predicted 49 out of 50 states correctly in 2004, usually within two percentage points of the actual outcome. In 2006, Rasmussen achieved similarly impressive results, all the more impressive when you consider that most polling models tend to err in favor of one party or the other. ("Likely voter" models tend to favor Republicans, and registered-voter-based models tend to exaggerate Democratic strength.)

My other favorite sources include Gallup and Mason-Dixon. Gallup comes closer to the "registered voter" model than the tighter Rasmussen model, so Gallup usually lags tighter-screen polls. By election eve, however, the two models usually converge. Gallup's election-eve congressional generic vote is hands-down the best in the business. However, their numbers for party primaries have poor predictive value, because they don't make much effort to hunt down likely voters.

Differing survey methods can yield very different results. Rasmussen has long shown a much closer Democratic nomination race than most established "registered voter" pollsters; most recently, it showed a 32-32 tie between Clinton and Obama, with Edwards wallowing 15 points behind. Gallup's latest numbers tightened drastically to a 31-26 race between Clinton and Obama (Gallup's numbers are also hard to compare with Rasmussen's because Gallup includes Gore).

Many smart Democrats, notably MyDD's Chris Bowers, believe that Gallup and others are mistakenly including lots of "low information voters" who simply lag the opinions and thought processes of more-attuned Democratic partisans.

Now that more establishmentarian polling firms are coming in line with Rasmussen's results, one can infer that the likely-voter/Chris Bowers theory has gotten the better of the argument.

A survey of pollsters wouldn't be complete without knowing which ones to stay away from. Stay away from Zogby and CNN polling. James Carville's and Stan Greenberg's DemocracyCorps polling outfit is not trustworthy either: in an October 2006 survey, for example, they doubled the percentage of blacks in the sample to bump the Democrats' generic advantage by 5 points, reinforcing the Democratic narrative of a building wave.

Lastly, partisan pollsters in a competitive election season should always be taken with a grain of salt; they will use heuristic subtleties to create the best impression possible for their party's candidates. Strategic Vision, a Republican outfit, deserves a three- or four-point handicap. Franklin Pierce generated a dubious Romney result for New Hampshire right after its lead pollster, Rich Killion, went to work for the Romney campaign. Such polls should be trusted only as a last resort.

For those of us who wish to divine movements in politics futures, discerning trustworthy data from bad data is paramount. Poll-rigging is the high art of Washington, DC, and as any interest group or candidate knows, it's easier than easy to produce a poll that diverges wildly from reality, if the heuristics are threatening enough.

(cross-posted from my blog, The Tradesports Political Maven)

BetFair vs. TradeSports-InTrade


"Anonymous" to Patri Friedman (thanks to Jason Ruspini for the link):

Have you tried Betfair? www.betfair.com . Chris Masse over at Midas Oracle www.midasoracle.com (the main prediction markets blog) tends to support them over InTrade. The interface is not as nice as InTrade's, though (they give punter odds rather than decimal probability prices).

#1. BetFair won't let any U.S. resident open an account, because BetFair has decided to abide by US laws. (See: "We wish to reiterate our well documented and long-standing policy of not accepting US customers, funds, or bets.") That said, some US residents have managed to open BetFair accounts with the complicity of British friends. As you all know, TradeSports-InTrade was created and became successful on two premises: number one, BetFair won't enter the US market until it is legal to do so, and, number two, US-based prediction exchanges (betting exchanges, event futures exchanges other than hedging-oriented ones) are not legally allowed to operate. Thus, the situation we have today: TradeSports-InTrade is the de facto monopoly in the US market of unregulated event derivatives. (MatchBook is trying to pierce the market with a marketing strategy a la TradeBetX. And, of course, online sportsbooks are TradeSports-InTrade's competitors.)

#2. There are many real-money prediction exchanges (betting exchanges, event futures exchanges). To trade or to get probabilities, you should select the one that has the most volume on the (regulated or unregulated) event derivative you're interested in. For US politics, it's TradeSports-InTrade. For British and Irish politics, it's BetFair.

#3. BetFair is indeed a formidable operator: big, powerful, ethical, with a fantastic technical team and robust, sophisticated software. Over the very long term, BetFair is going to overtake TradeSports-InTrade in the US.

#4. The prices (which economists allow us to interpret as probabilities when they come from prediction exchanges, as opposed to bookmakers) can be expressed in four ways: 0-100, American, fractional, or decimal. They are all equivalent. For instance, you take the number "1", divide it by BetFair's "last price matched" expressed as decimal odds, multiply it by "100", and you get your 0-100 price/probability.

BetFair: Republican Nicolas Sarkozy as next French President

Total matched on this event: $728,806
Betting summary – Volume: $463,083
Last price matched: 1.32 [“1” divided by “1.32” and multiplied by “100” = 75.8%]

BetFair explainer on decimal odds:

What are Decimal Odds?
All prices quoted on Betfair are 'Decimal' Odds. Decimal Odds differ from the Odds traditionally quoted in the UK in that they include your stake as part of your total return. If you place a bet of £10 at Decimal Odds of 4.0 and win, then your total return (including stake) is £40. In the UK this would be quoted as 3/1, returning to you winnings of £30 plus your original stake of £10.
Decimal Odds are simpler to use than Traditional Odds, and are the most common form of Odds quoted in countries outside the UK. In addition, for the mathematically minded, Decimal Odds relate more closely to probability: in a race with four equally-matched horses, the probability of each horse winning is 25%. Each horse will have Traditional Odds of 3/1 or Decimal Odds of 4.0. Hence, the probability of an outcome equals 1 divided by its Decimal Odds (1 / 4.0 = 25%).
Decimal Odds also offer many more prices – Betfair offer every price between 1.01 and 2.0 to two decimal places. With no margins to protect, our customers deserve to see every price available.
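The relationship in BetFair's explainer (probability = 1 divided by the Decimal Odds) is a one-line computation. Here is a minimal sketch in Python; the function name is mine, for illustration:

```python
def decimal_odds_to_probability(decimal_odds: float) -> float:
    """Implied probability, as a 0-100 percentage, from decimal odds.

    Decimal odds include the stake, so probability = 1 / odds,
    i.e. a 0-100 price of 100 / odds.
    """
    return 100.0 / decimal_odds

# BetFair's example: four equally matched horses at decimal odds 4.0
print(decimal_odds_to_probability(4.0))             # 25.0 (%)

# The Sarkozy market above: last price matched 1.32
print(round(decimal_odds_to_probability(1.32), 1))  # 75.8 (%)
```

This matches the worked example above: 1 divided by 1.32, multiplied by 100, gives a 75.8% price/probability.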

#5. Any software for prediction markets (and betting exchanges) should be able to convert prices among these four formats, on top of being translated into many foreign languages.
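As a sketch of what such a conversion layer might look like, here are the standard formulas relating the four formats, routed through decimal odds as the common representation (the function names are hypothetical, my own):

```python
def fractional_to_decimal(numerator: int, denominator: int) -> float:
    """UK fractional odds a/b -> decimal odds (which include the stake): 1 + a/b."""
    return 1.0 + numerator / denominator

def american_to_decimal(moneyline: int) -> float:
    """American (moneyline) odds -> decimal odds.

    Positive moneyline (+150): winnings per 100 staked, so 1 + m/100.
    Negative moneyline (-110): stake needed to win 100, so 1 + 100/|m|.
    """
    if moneyline > 0:
        return 1.0 + moneyline / 100.0
    return 1.0 + 100.0 / abs(moneyline)

def decimal_to_price(decimal_odds: float) -> float:
    """Decimal odds -> 0-100 price/probability: 100 / decimal odds."""
    return 100.0 / decimal_odds

# UK-style 3/1 is decimal 4.0, i.e. a 0-100 price of 25
print(decimal_to_price(fractional_to_decimal(3, 1)))              # 25.0

# A US-style -110 line is decimal ~1.91, i.e. a price of ~52.4
print(round(decimal_to_price(american_to_decimal(-110)), 1))      # 52.4
```

Converting back from decimal to fractional odds is lossier, since it means choosing a nearby conventional fraction (4.0 maps exactly to 3/1, but 1.32 has no tidy traditional equivalent).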

#6. British consultant wannabe Jed Christiansen, freshly minted "from the London School of Economics", has a new blog post out (he blogs on a monthly basis) on BetFair versus TradeSports-InTrade (BetFair being "betting", and InTrade-TradeSports being "financial"), which is the most ridiculous statement I have read since Paris Hilton declared to the world that she was going to morph herself into a "savvy business titan".

– Jed Christiansen tries to divine the "psychological approach" of BetFair and TradeSports-InTrade. Oh, mon Dieu!… Jed Christiansen's thinking is rotten from the start. Just as beauty is in the eye of the beholder, the marketing "approach" is conditional on its adoption by the customers/consumers. Any popular product (here, event derivatives, prediction markets) belongs to its customers/consumers (here, the traders and the info consumers), and the "approach" is then to listen to the improvements they suggest to the service. Any incremental innovation of your prediction market software is just the anticipation of future traders' needs. Event derivative traders on both sides of the Atlantic have the same needs. The traders are in the driver's seat. If they are satisfied, they patronize the exchange and come en masse (no pun intended), and if they are not, they leave. Traders don't give a fig about the "psychological approach" of the prediction exchanges. You can't spin the British and Irish traders one way, and the American and Canadian traders another way. Traders' needs are imperial and universal. It's the traders who shape the prediction exchanges (betting exchanges) their way.

– Jed Christiansen makes a big fuss out of tiny differences between BetFair and TradeSports-InTrade. For instance, BetFair outputs prices as "decimal odds", and, of course, the Grand Inquisitor views it as a sign from God that BetFair is "betting". But that's bullshit, as the readers of Midas Oracle all know. With a simple computation, you can transform the "decimal odds" into 0-100 prices. (You take the number "1", divide it by BetFair's "last price matched" expressed as "decimal odds", multiply it by "100", and you get your 0-100 price/probability.)

– BetFair uses the words "back" and "lay" (instead of "bid" and "ask"), and the Grand Inquisitor views it as yet another sign from God that BetFair is "betting" (as opposed to "financial").

– Jed Christiansen states that the "psychological approach" of BetFair (being "betting") is dictated by the fact that its competitors are the British bookies (and the online bookmakers, I will add). The main competitors of InTrade-TradeSports are the US illegal bookmakers and the offshore sportsbooks. BetFair and InTrade-TradeSports both have the same kind of competitors: the fixed-odds bookmakers. (Being illegal in America, InTrade-TradeSports can't market to the sophisticated US horse race bettors.)

Jed Christiansen has opted for religion over theory. He religiously believes that BetFair is "betting" and InTrade-TradeSports is "financial", and he will seize on any insignificant piece of evidence to make his case.

---

External Link: Nisan Gabbay on BetFair

Previous: BetFair Case Study – Betting Exchange – Prediction Markets

[…] Betfair on the other hand was built like a stock market exchange, where odds functioned as the share prices. […]

UPDATE: Yahoo! research scientist David Pennock comments:

I think Jed Christiansen is correct to a large degree. Betfair speaks the punter’s (gambler’s) language. TradeSports speaks Wall Street’s language. I have a bookie friend who upon first look at TradeSports couldn’t make heads or tails of it. Chris is right in that both betfair and TradeSports perform the same service. However their target audience, at least initially, is different.

NEXT: User Interface & Target Audience: BetFair, TradeSports-InTrade, MatchBook, etc.

UPDATE: Jed Christiansen:

That was the point I was trying to make. In the end, both types of sites accomplish the exact same thing: an event futures market. But I was pointing out the differences in how the sites work that come from their positioning in the marketplace.

In my perfect world, a trader could choose how they interacted with an exchange. They could choose a basic interaction, or a user interface with lots of options. They could choose to see contracts in decimal odds or percentages, etc. It’s not as easy as it sounds, which is why we probably haven’t seen it yet.

UPDATE: David Stalcup:

In fact, you can see odds listed in decimal, 0-100 prices, or moneyline format (-110) at TradeSports. You just make that choice at TradeBetX with your TradeSports login info. You have the option of viewing odds in any format. TradeBetX and TradeSports are the same company, just different branding and options at TradeBetX.

Jack Welch to Alex Forshaw: FOSTER YOUR INTERPERSONAL SKILLS.

No Gravatar

The Sentence Of The Day:

When Jack Welch gave a guest lecture at MIT's Sloan School of Management in 2005, someone in the crowd asked, "What should we be learning in business school?" Welch's reply: "Just concentrate on networking. Everything else you need to know, you can learn on the job." Sloan's dean, Richard Schmalensee, was stunned because "Jack was essentially saying a graduate business degree was a waste of time." […]

Iraq WMDs prediction markets

No Gravatar

The interesting John De Palma writes to me:

In response to the comment you made beneath your Iraq War blog entry:

Wolfers/Zitzewitz wrote: "… the public information on the probability of weapons of mass destruction in Iraq appears to have been of dubious quality, so it is perhaps unsurprising that both the markets were as susceptible as general public opinion to being misled."
(The paper is posted here – PDF file. Figure 5 charts the historical pricing of the TradeSports WMD contracts.)

Also, Wolfers/Zitzewitz/Snowberg wrote in a separate paper: "Figure 5 shows the price of a contract on whether or not weapons of mass destruction (WMD) will be found in Iraq. Note that at some points the value of the contract exceeded 80%, yet weapons were never found. It is likely that this market performed poorly since the cost of gaining new information was quite high. Since WMD can be non-existent almost everywhere, but still exist somewhere, it was difficult to bet against the strong case made by the White House, at least initially." Paper: PDF file.

An article in the NY Times last year reminded me of the TradeSports WMD market. From the New York Times:
"The Iraqi dictator was so secretive and kept information so compartmentalized that his top military leaders were stunned when he told them three months before the war that he had no weapons of mass destruction, and they were demoralized because they had counted on hidden stocks of poison gas or germ weapons for the nation's defense."
This assertion in the New York Times article suggests that even Iraqi &#8220-top military leaders&#8221- would have been on the wrong side of the TradeSports WMD trade.

A few years ago, when Prof. Wolfers spoke at Columbia University, he used the WMD market as an example of an inefficiently priced one, consistent with what he wrote in those papers. It seemed to me that the footprint of what he was saying could show up in contracts of that nature being priced too high. (Though I would think it difficult to support that point empirically, since even if contracts like that are historically priced too richly, it could reflect the posted margin required to go short.)

Thanks.

WMDs prediction markets

From the Wolfers/Zitzewitz paper: Prediction Markets – 2004 – PDF file

MIDAS ORACLE PROCLAIMS REPUBLICAN NICOLAS SARKOZY AS THE FRENCH PRESIDENT-ELECT.

No Gravatar

#1. Results of the first round: Republican Nicolas Sarkozy (31.2%), Socialist Segolene Royal (25.9%) (updated)

Results First Round French Presidential Election

Then come Centrist Francois Bayrou (18.6%), Right-Wing Extremist Jean-Marie Le Pen (10.4%), and then the "small" candidates.

#2. The second round will be in two weeks: Republican Nicolas Sarkozy vs. Socialist Segolene Royal

All the polls show that Republican Nicolas Sarkozy will beat her easily.

---

PREDICTION MARKETS: The market-generated probability of the Socialist candidate has increased. It is easy to understand why: there was uncertainty about the Socialist candidate ("Will she make it to the second round?"), and now that uncertainty is over. Blogger Mike Smithson presents it as big news, but that's bullshit.

French Republican Nicolas Sarkozy: 77% at BetFair.

---

The Economist (Apr 12th 2007):

No French presidential election in 50 years has looked as unpredictable as this year’s, the first round of which takes place on April 22nd. […]

Complete bullshit. Give us the market-generated probabilities, instead of the "sentiment" of the journalos.

Previous: WHY THE ECONOMIST SHOULD ADD MARKET-GENERATED PROBABILITIES NEXT TO ITS CONTENT. + NEXT SUNDAY, THE FRENCH WILL ELECT REPUBLICAN NICOLAS SARKOZY AS THEIR PRESIDENT.

---

TAKEAWAY: French President Nicolas Sarkozy will reform the French economy. France is a socialist country right now.

UPDATE: Emile Servan-Schreiber of NewsFutures has published an update about the 2007 French presidential election on the French group blog "AgoraVox".

The pictures of the BetFair Bet-o-Mobile

No Gravatar

BetFair Bet-o-Mobile: a vehicle with 19 TV screens, wireless Internet access and live satellite TV feeds

BetFair Bet-o-Mobile

AV Interactive:

[Betting exchange (real-money prediction exchange, event derivative exchange)] Betfair has commissioned a mobile display unit that will tour UK sporting events. The vehicle, which uses 19 TV screens (including a 50in plasma), has wireless internet access and live Sky feeds. It was built for Betfair by Event Marketing Solutions (EMS), which delivered the vehicle in 12 weeks, ready for a first appearance at the Grand National. The unit is designed not just to let punters watch events, but to help introduce them to Betfair's betting exchange concept.

Here are pictures of the BetFair Bet-o-Mobile (with members of the BetFair team at Aintree). Pictures courtesy of the P.R. department at BetFair. (Thanks for sharing. :) )

BetFair Bet-o-Mobile 5

BetFair Bet-o-Mobile 9

BetFair Bet-o-Mobile 1

BetFair Bet-o-Mobile 3

BetFair Bet-o-Mobile 4

BetFair Bet-o-Mobile 6

BetFair Bet-o-Mobile 7

BetFair Bet-o-Mobile 2

BetFair Bet-o-Mobile 8

And we end the series of pictures with the token blonde ("la blonde de service"). :)

SMOKIN'…!!!… as yelled by "The Mask" (played by Jim Carrey) in that movie.

---

More On BetFair:

BetFair’s explainer on betting exchanges – at Midas Oracle & BetFair – 2006-12-23

Trading bets at BetFair – Betting exchange explainer – Backing vs. Laying – at Midas Oracle & BetFair – 2007-01-23

BetFair multiples – at Midas Oracle & BetFair – 2007-01-20

Prediction markets timeline – by Chris Masse (at Midas Oracle) –

BetFair vs. TradeSports-InTrade – by Chris Masse (at Midas Oracle) – 2007-04-24

X Groups & X Universes – by Chris Masse (at Midas Oracle) – 2007-02-13

---