Well, the Oscars have come and gone, and although most pundits are chattering about Slumdog’s amazing 8 wins and Hugh Jackman’s dulcet tones, I’m more impressed by Hubdub’s amazing success.
Out of the major races, we got EVERY SINGLE ONE RIGHT. We were also the only major prediction exchange to correctly predict the Best Actor race (Betfair, HSX, InTrade, Newsfutures and even Nate Silver all gave the gold to Mickey Rourke). Also, in five of the big 6 races, we showed higher confidence than InTrade predictors.
| Award | Hubdub | InTrade |
|---|---|---|
| Best Actor | 63% | 33.5% (wrong) |
| Best Sup Actor | 100% | 95% |
| Best Sup Actress | 64% | 58.8% |
From the complete 24-award lineup, we nailed 19, generally by impressive confidence margins. Check out all of our settled markets here.
Not only did Hubdubbers have a successful night; each of my personal Oscar predictions was correct, and I added another 40 thousand Hubbucks to my coffers. Award season is now finally behind us, but American Idol is just getting started!
Crossposted from Newspundits
Sorry, but according to your numbers you should have expected to get 5 out of 6 correct, not 6 out of 6.
Congrats HubDub! I suspect that you may have had more liquidity driving your outperformance.
However, I hypothesize that, all else being equal, real money outperforms play money where liquidity (i.e., the number of participants) is held constant.
That is what everyone believes, indeed… but all this has to be documented.
Guessing all the frontrunners correctly is something to brag about ONLY if the reported confidences are high enough. If they are not and you get them all correct, then the markets have biases and are NOT accurate.
The set of Hubdub traders could also exhibit certain biases which led them to prefer Milk. Alternatively, a few large hubdub traders could have set the price.
Jason Ruspini was thinking about HubDub having more women than the other prediction exchanges. Interesting hypothesis.
Or more gays. Ha! ha! ha!
Or more progressive liberals. Either way it’s likely a bias, and will not always play out the way Hubdub wants. Then again, a few sharp traders with immense liquidity could have moved the price as well.
“Or more progressive liberals.” Yeah. Or more Californians.
I can think of a number of users who would riot if you tried to claim that Hubdub had a higher percentage of liberals. Women are a definite minority, but I’m not sure how our demos compare to the other prediction exchanges. I am not convinced that women would be any more capable of evaluating the Best Actor market than men though.
The bias argument strikes me as ridiculous with a sample size of 6. It’s like getting 3 heads out of 4 coin flips and declaring it an unfair coin. The argument also seems to assume some independence in the markets, which is clearly wrong.
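The coin-flip arithmetic is easy to check. A minimal sketch (a plain binomial calculation for a fair coin; the helper name is mine, not from the thread):

```python
from math import comb

def binom_pmf(k, n, p=0.5):
    """Probability of exactly k successes in n independent trials,
    each succeeding with probability p (fair coin by default)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# 3 heads out of 4 flips of a fair coin is quite ordinary:
print(binom_pmf(3, 4))   # 0.25 -- hardly evidence of an unfair coin
# even 6 heads out of 6 still happens 1 time in 64:
print(binom_pmf(6, 6))
```

So a run of 6 is far too small a sample to distinguish bias from luck.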
At first glance it seems absurd to say that a prediction market which has fewer correct predictions is “more correct” as claimed at http://behind-the-enemy-lines……ilure.html. After looking into it a little more, first impressions are indeed correct. The analysis is completely false.
First, it’s not too hard to come up with hypothetical prediction markets that are completely wrong but with the same analysis would be presumed to be better markets. Example: put the probability of Heath Ledger at 100% and all other probabilities at 0. That would certainly be a terrible prediction market. But there was a 100% chance that it got 1/6 correct, as it predicted!
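The degenerate-market example can be worked out explicitly. The sketch below (my own helper, and it assumes independent races, an assumption debated elsewhere in this thread) computes the distribution of the number of frontrunners that win, taking the market's stated probabilities at face value:

```python
from itertools import product as cartesian

def n_correct_dist(frontrunner_probs):
    """Distribution of how many frontrunners win, assuming the stated
    probabilities are correct and the races are independent."""
    dist = {k: 0.0 for k in range(len(frontrunner_probs) + 1)}
    # enumerate every win/lose pattern across the races
    for outcome in cartesian([0, 1], repeat=len(frontrunner_probs)):
        p = 1.0
        for win, q in zip(outcome, frontrunner_probs):
            p *= q if win else (1 - q)
        dist[sum(outcome)] += p
    return dist

# Degenerate market: Ledger at 100%, every other race's frontrunner at 0%.
degenerate = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(n_correct_dist(degenerate))  # all mass on "exactly 1 correct"
```

Under its own (terrible) probabilities, this market predicts 1/6 correct with certainty, exactly as the comment says.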
Second, the probabilities cited are conditional probabilities and therefore not comparable. Presumably the better market is the one whose probabilities are more nearly correct. If A means Hubdub’s probabilities are correct (or nearly so) and B means Intrade’s probabilities are correct, then the question is whether P(A) or P(B) is greater.
According to the analysis at the link, the probability of getting 5/6 correct for Intrade was about 43%, assuming the market probabilities were correct. The probability of getting 6/6 correct for Hubdub was about 26%, assuming the Hubdub probabilities were correct. But these are CONDITIONAL probabilities, so they can’t be used to compare P(A) and P(B) without some more information. Conditional probability ( http://en.wikipedia.org/wiki/C…..robability ) requires us to also know the probability that both Hubdub’s probabilities are correct and Hubdub gets 6/6 correct. So that’s where the trouble is… this overlapping probability is impossible to determine!
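The conditional probabilities being cited reduce to simple products, assuming independent races. A sketch (the six-entry confidence vector is hypothetical; only the 63%, 100%, and 64% figures appear in the post above, so don't expect it to reproduce the linked 26% exactly):

```python
from math import prod

def p_all_correct(probs):
    """P(every frontrunner wins), taking the market's stated
    probabilities as correct and the races as independent."""
    return prod(probs)

def p_all_but_one(probs, miss):
    """P(every frontrunner wins except the race at index `miss`)."""
    return prod(q if i != miss else (1 - q)
                for i, q in enumerate(probs))

# Hypothetical confidence vector for six races:
hubdub_like = [0.63, 1.00, 0.64, 0.80, 0.85, 0.90]
print(p_all_correct(hubdub_like))
print(p_all_but_one(hubdub_like, 0))
```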
For a given outcome O, P(A) = P(A and O) / P(O|A). In the absence of any knowledge about P(A and O) vs. P(B and O), the smaller value of P(O|A) = 26%, as compared to P(O|B) = 43%, actually provides evidence that, if anything, it may be MORE LIKELY that Hubdub’s predictions are correct.
In any case, winning 5/6 or 6/6 is just anecdotal evidence. You don’t want to assume you have a tricky nickel just because you get two heads in a row, but 6 in a row may get you thinking, and a lot more than that… But I’m guessing the Steelers don’t care what anybody said their probability of winning the Superbowl was. Unless somebody wants to make a prediction markets on prediction markets, the best thing to say is simply Congratulations Hubdub!
“The bias argument strikes me as ridiculous with a sample size of 6”
So why claim higher accuracy than every other major prediction exchange with a sample size of 6? It’s like getting 6 heads out of 6 coin flips while the others got 5 out of 6, and then claiming that your “coin” is better than anyone else’s coin.
Looking again, I guess we should be comparing P(A|O) to P(B|O), since the outcome is known. By Bayes’ Theorem the ratio is P(A|O)/P(B|O) = [P(O|A)/P(O|B)] × [P(A)/P(B)] = 0.6 × P(A)/P(B)… P(O) cancels out. So now we’re up a creek, since P(A) and P(B) cannot be calculated… same story: not enough data to say anything meaningful.
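Plugging in the two likelihoods cited above (0.26 for Hubdub going 6/6 under its own probabilities, 0.43 for Intrade going 5/6 under its own), the Bayes ratio works out as follows; the prior ratio P(A)/P(B) remains unknown, which is the whole point:

```python
# Likelihoods taken from the comment thread above:
p_O_given_A = 0.26   # P(observed outcome | Hubdub's probabilities correct)
p_O_given_B = 0.43   # P(observed outcome | Intrade's probabilities correct)

# By Bayes' Theorem, P(A|O)/P(B|O) = likelihood_ratio * P(A)/P(B);
# P(O) cancels out of the ratio.
likelihood_ratio = p_O_given_A / p_O_given_B
print(f"P(A|O)/P(B|O) = {likelihood_ratio:.2f} x P(A)/P(B)")
```

Without the priors P(A) and P(B), the 0.6 factor alone decides nothing.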
Just to repeat my comment from another thread:
What is ridiculous is celebrating the “absolute success” of having *all* the frontrunners be the actual winners. This “absolute success” is itself *not* the most probable event! So HubDub failed in the same way that InTrade/BetFair/etc. failed to get the “correct outcome” in one out of the 6 markets.
I’m merely trying to understand why Hubdub was more accurate in pricing the Best Actor award. I am guessing that it is the work of a few sharp traders. It could also be the result of a population bias that I do not know of.
Surely there is *some* independence in the markets. If they were completely interdependent then the pricing would be much closer. Or am I clearly wrong?
More importantly, your article brags about how accurate Hubdub is, and then you analogize its picks to flipping a coin. A dubious analogy, considering none of your confidences were 50%.
I wasn’t at all suggesting a bias but the exact opposite, as a result of greater diversity among traders (not that cognitive diversity would necessarily follow, etc) But I would also guess that it was the work of a few traders with relatively large accounts. The fact that Rourke and Penn were initialized at the same price didn’t hurt either.
Jenni, whatever the cause in the end, it was fun to listen to some hypotheses.
“your article brags about how accurate hubdub is, and then you analogize its picks to flipping a coin.”
You score a point, Daniel.
– Once your first comment is approved on Midas Oracle, all your further comments are published automatically without any need to get approval from the administrator (moi).
– However, when the commenter uses his/her OpenID (as Panos does), it seems that I have to approve the comment manually each time. It is not a problem for moi, but I just wanted to jot down that note so that Panos understands why his comments are “held in moderation” each time. I will investigate the issue, but I don’t think I can do anything about it. I will try to approve Panos’ comments ASAP. Sorry for the inconvenience, Panos.
Actually I was comparing the *argument* to flipping coins, not our markets. I think my point was that it was a bad premise to start with… but triathematician addressed it more mathematically.
But I do wonder how the numbers would stack up if you ran the full 24 awards?
OK, Jenni. Thanks.
“I do wonder how the numbers would stack up if you ran the full 24 awards?”
Well, whatever analysis you want to do, do it, and publish about it.
Predictions that are too good to be true? – Statistical Modeling, Causal Inference, and Social Science
Following your logic, then, I think InTrade should have truly shown superiority over all other prediction markets by getting all 6 of the major awards wrong!
This is truly an astounding piece of good news! Just think: under your paradigm, mediocrity is to be rewarded, while success will be penalized! I can’t wait to share what I’ve learned here with my poker pals; I don’t know that I’ll be able to convince them of your to-win-is-to-lose-and-to-lose-is-to-win strategy, but it’s worth a shot, no?
(Are you familiar with the Superman comics? If not, you might want to take a look at Bizarro World, a cubical planet obviously modeled on math very similar to yours.)
“Following your logic, then, I think InTrade should have truly shown superiority over all other prediction markets by getting all 6 of the major awards wrong!”
Did you even read the posting?
* Prediction markets are supposed to report _probabilities_
* It is wrong to expect that _all_ events trading with probability less than 1 will end up being correct.
* Celebrating such “successes” leads to backfire when the markets end up *not* being correct.
* When the markets “fail”, it is lame to remember to bring up the excuse “prices are probabilities”. You have to have a consistent message in the good and bad times.
“But I do wonder how the numbers would stack up if you ran the full 24 awards?”
This is the analysis that I actually wanted to perform to figure out whether there is a longshot-favorite bias in the Oscars markets. But it was too much work to collect the data.