- Slow innovation: Aside from a few cosmetic tweaks, reliability improvements, and the Starting Price feature, Betfair hasn’t innovated much over the last few years. For a company that boasts several hundred developers, it should be able to ship more major new features. Betfair gets very little traffic from organic search and has no social features apart from a forum.
- Tax on top traders: About a year ago, Betfair introduced a “Premium Charge” on its most successful traders, taxing their profits by up to 20%. This runs contrary to typical volume-rebate schemes, where the more one trades, the lower the transaction costs one incurs. The company claims the charge offsets the cost of bringing new punters to the platform, but to outsiders it looks like a straightforward move to boost revenue by exploiting Betfair’s near-monopoly position.
- Expensive transaction costs: Betfair takes 5% of traders’ winnings. If a trader bets £100 and wins £1,000, Betfair will charge £50 for the transaction. This is very expensive in a world of $8 online stock executions. As betting exchanges become more financial in nature, these transaction costs will shrink substantially.
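For illustration, the commission arithmetic above fits in a few lines of Python. This is a minimal sketch assuming a flat 5% rate on net winnings per market (the function name is made up for this example; Betfair’s real schedule also involves discount rates and the Premium Charge discussed above):

```python
def exchange_commission(net_winnings, rate=0.05):
    """Commission charged on net winnings from a market; losing markets pay none."""
    return max(net_winnings, 0.0) * rate

# The example from the text: £100 staked, £1,000 in winnings -> £50 charged.
print(exchange_commission(1000.0))  # 50.0
print(exchange_commission(-200.0))  # 0.0 (no commission on a net loss)
```

Note that the charge scales with winnings rather than with trade count, which is why it compares so unfavorably with a flat $8 stock execution.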
- Market Size and Competition: As Greg Wood wrote recently in the Guardian, horse-racing liquidity has hit a ceiling. Will Betfair be able to maintain its revenue growth? With high costs and a smaller profit margin than Paddy Power, Betfair finds itself in a bit of a “grow or die” situation. It will need to find ways to entice more customers to join its platform and spend their betting money with it. Betfair is looking to new sports, particularly football, and to overseas markets like the US, China, and India as opportunities for growth.
- Headcount: Betfair has a tech team of close to 500 people. While there is strength in numbers at times, the most successful tech projects in history started with small, nimble teams. The more tech people involved in a product, the less agile a company can be. Adapting to changing tech trends can be a crucial ingredient in remaining competitive in today’s internet startup world.
Ben Lewis: “Great works of art are still being made today, but the great contemporary art bubble will surely go down in history as the epitome of the vanity and folly of our age.”
I highly recommend you watch this 2009 documentary.
EMH (at least the interesting version) says prices are our best estimates, so to deny EMH is to assert that prices are predictably wrong. And for EMH violations to be relevant to regulatory policy, price errors must be so systematic that a government agency could follow some bureaucratic process to identify when prices are too high versus too low, and act on that info.
So the clearest way for EMH skeptics to show they are right is to compile a track record showing that they can predict, ahead of time, when prices are too high versus too low. There’s little point in picking out some year-old event and saying, “see, that price drop was too big.” Monday-morning quarterbacking is way too easy.
But if just before a price drop you’d been on record saying the price was too high, or if just after you’d said the price was too low, well then we could include your purported error in an EMH-skeptic track record. And with enough skeptics identifying enough purported price errors, it wouldn’t take long to collect enough data to see if EMH skeptics really do have a system for identifying price errors. (Of course some would do well just by chance, so we’d need to look at the whole set.)
With a proven skeptic track record, we could then begin a conversation about whether their system was the sort that regulators should embody in some official government process, in order to improve our financial system. (Or whether skeptics should just post their errors, and let speculators fix prices.)
But all this continual harping, year after year, on how EMH is obviously wrong, based on selective stories of past prices you say were obviously wrong, sounds awfully suspicious when you don’t bother to publicly flag price errors at the time, much less collect and publicize a track record of such error flags. (E.g., care to declare which prices are wrong today?) What’s up with that?
Download this post to watch the video, if your feed reader does not show it to you.
Sean Park is minding a conference presentation. Go there and exchange ideas with him (well, if you have any).
Leslie Fine (CrowdCast Chief Scientist) to me:
Actually, our mechanism is a market; it’s just not a stock market. We use an automated market maker to efficiently price every bet, adjust crowd beliefs, and price an interim sell. In essence, participants trade binary spreads with the market maker.
Because our new version was not yet market-ready, I did not enter the markets vs. non-markets debate when you were having it some months ago. However, among other reasons, we avoid collective forecasting because it is too similar to collaborative forecasting, which is key in supply chain. Honestly, when all is said and done, our clients care not what the mechanism is. They care that we can efficiently gather team intelligence and translate it into actionable business intelligence. That is our mission.
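Fine doesn’t say which automated market maker CrowdCast uses, but Hanson’s logarithmic market scoring rule (LMSR) is the textbook example of one that prices binary claims and updates crowd beliefs with every trade. The sketch below is purely illustrative of how such a market maker works, not a description of CrowdCast’s actual mechanism:

```python
import math

def lmsr_cost(q_yes, q_no, b=100.0):
    """Hanson's LMSR cost function; b controls liquidity and bounds the maker's loss."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price_yes(q_yes, q_no, b=100.0):
    """Instantaneous price of the YES side, readable as an implied probability."""
    e_yes = math.exp(q_yes / b)
    return e_yes / (e_yes + math.exp(q_no / b))

def buy_yes(q_yes, q_no, shares, b=100.0):
    """Amount a trader pays the market maker for `shares` of YES."""
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)

# A fresh market opens at 0.5; buying YES moves the price (belief) upward.
p0 = lmsr_price_yes(0, 0)      # 0.5
cost = buy_yes(0, 0, 50)       # the trader's payment to the market maker
p1 = lmsr_price_yes(50, 0)     # above 0.5 after the purchase
```

Because the cost function is convex, one-sided buying gets progressively more expensive, which is how the automated market maker protects itself while still quoting a price for every bet.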
A usable market is: a market where you exert the least effort for the greatest understanding possible, that allows you to comfortably engage at the level (and in the role) you wish, results in your maximum possible satisfaction, and where your actions in the market feed back positively into the market.
You can jot down your thoughts on the Usable Markets blog…
Prediction markets failed to accurately predict the unexpected effect a few tears had on the New Hampshire primaries, and some analysts rushed to blame the tool and undermine its reliability and applicability. Let me restate some fundamentals and my view, in a snapshot:
- Markets are not prophets; prophets do not exist.
- A mechanism’s forecastability should not be judged against a hypothetical fool-proof prophet; we’d better compare it with other existing or widely used mechanisms, and, to my partial and context-bound knowledge, markets outperform all of those.
- Markets are the only tool that intrinsically states its own probability of failure. If Obama’s stock is traded at 70 cents, this suggests there is a 30% probability of Obama losing. I’d say markets are modest by character, and no fanfare has any place in describing their suggestions.
- Markets are primarily an aggregation/meta mechanism; as such, garbage-in-garbage-out effects are to be expected, so we’d need to keep the focus on minimizing garbage rather than blaming the market/compiler.
- The maturity of the mechanism and its use, as well as trading volume (in real-money Intrade, for example), have not yet reached a fully efficient level (more on this to come soon), but the resulting inefficiencies create significant profit opportunities, so I expect things to just keep getting better.
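The price-to-probability reading in the bullets above can be made concrete. This is a sketch assuming a binary contract normalized to pay out one unit if the event occurs (real-money exchanges quote in points or cents, but the mapping is the same):

```python
def implied_probability(price, payout=1.0):
    """Implied event probability of a binary contract trading at `price`."""
    return price / payout

# A contract on Obama winning trades at 70 cents on a $1 payout:
p_win = implied_probability(0.70)   # 0.70
p_lose = 1.0 - p_win                # 0.30 -- the market states its own odds of failure
```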
cross-posted from my blog