In summary, “momentum” can exist, but the places where you’ll see it are races where current public opinion is out of step with the best predictions. The mere information that a race has a 5-point swing is not enough to predict a future shift in that direction. As Nate emphasizes, such a prediction is only appropriate in the context of real-world information, hypotheses of “factors above and beyond the direction in which the polls have moved in the past.”
Tag Archives: statistics
Pollster John Zogby attacks statistician Nate Silver. – You take other people’s polls, compare records for predictions, add in some purely arbitrary (and not transparent) weights, then make your own projections and rankings.
Don’t Create Standards You Will Find Hard to Maintain Yourself.
Be Honest.
Understand That There’s Much More to Being a Good Pollster.
Appreciate Innovation.
Do Some Polling.
UPDATE: Prof. Andrew Gelman’s take.
Felix Salmon wins the ASA 2010 Excellence in Statistical Reporting Award
American Statistical Association:
ALEXANDRIA VA, MAY 14, 2009 – Felix Salmon, a well-known financial blogger who writes extensively about statistics, has been named the recipient of the 2010 Excellence in Statistical Reporting Award (ESRA) of the American Statistical Association (ASA). Salmon does quantitative, statistically minded reporting on topics ranging from the costs of counterfeiting to bank fraud to Nigerian spammers.
ASA’s ESRA Committee selected Salmon “for his body of work, which exemplifies the highest standards of scientific reporting,” according to the award citation. “His insightful use of statistics as a tool to understanding the world of business and economics, areas that are critical in today’s economy, sets a new standard in statistical investigative reporting.”
Salmon came to the United States in 1997 from England, where he worked at Euromoney magazine. He also wrote daily commentary on Latin American markets for the former news service Bridge News, freelanced for a variety of publications, helped set up the New York bureau of a financial web site, and created the Economonitor blog for Roubini Global Economics. He has been blogging since 1999 and wrote the Market Movers blog for Portfolio.com. Salmon currently blogs at Thomson Reuters. ( http://blogs.reuters.com/felix-salmon/ ). He is a graduate of the University of Glasgow.
Previous winners of the ESRA include Sharon Begley, Newsweek magazine; Mark Buchanan, freelance science writer; Gina Kolata, New York Times; and John Berry, Bloomberg News.
The ESRA was created to encourage and recognize members of the communications media who have best displayed an informed interest in the science of statistics and its role in public life. The award can be given for a single statistical article or for a body of work. In selecting the recipient, consideration is given to:
Correctness, clarity, fairness, brevity, and professionalism of the communication
Importance, relevance and overall effectiveness in impacting the intended audience
Impact on the growth and national or regional exposure of statistics
Appreciation and emphasis of the statistical aspects of a particular issue or event
Excellent coverage of research on statistics or statistical issues
About the American Statistical Association
The American Statistical Association (ASA), a scientific and educational society founded in Boston in 1839, is the second oldest continuously operating professional society in the United States. For 170 years, ASA has been providing its 18,000 members serving in academia, government, and industry and the public with up-to-date, useful information about statistics. The ASA has a proud tradition of service to statisticians, quantitative scientists, and users of statistics across a wealth of academic areas and applications. For additional information about the American Statistical Association, please visit the association’s web site at http://www.amstat.org or call 703.684.1221.
Nate Silver explains how he builds his forecasting models. – [VIDEO]
ACCUSATION: Nate Silver over-uses Wikipedia. – [VIDEO]
SXSW: Nate Silver explains how he approached political forecasting for the 2008 US presidential elections. – [VIDEO]
In part #2, he speaks about the books he is writing.
Time-Series Forecasting
– The different kinds of forecasting software, including time-series forecasting (a minimal illustrative example follows the links below).
– Some forecasting startups make use of open-source and/or SaaS models. I need your feedback on all this. Please comment below, e-mail me, wave me, or tweet me.
– Lokad – (concurrent time-series forecasting) – Video
– Data Applied – (neural network algorithm)
External Links: Lokad – Data Applied
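As a minimal illustration of the kind of calculation such time-series forecasting tools automate (a sketch only; the demand numbers and the smoothing factor are made up and do not describe any particular product's method):

```python
def exponential_smoothing(series, alpha=0.3):
    """Smooth the series and return the one-step-ahead forecast (the final level)."""
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level
    return level

weekly_demand = [120, 130, 125, 140, 135, 150]   # hypothetical sales history
print(f"next-week forecast: {exponential_smoothing(weekly_demand):.1f}")
```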
Climate Stats = Sausage Making
How to Make Your Own Hockey Stick – Required reading for our good friend Caveat Bettor.
More info on “climategate” at Memeorandum
“Hide the decline”
Defining Probability in Prediction Markets
The New Hampshire Democratic primary was one of the few events in which prediction markets did not give an “accurate” forecast of the winner. In a typical “accurate” prediction, the candidate whose contract has the highest price ends up winning the election.
This result, combined with increasing interest in (and hype about) the predictive accuracy of prediction markets, generated a huge backlash. Many opponents of prediction markets pointed out the “failure” and started questioning the overall concept and the ability of prediction markets to aggregate information.
Interestingly enough, such failed predictions are absolutely necessary if we want to take the concept of prediction markets seriously. If the frontrunner in a prediction market were always the winner, then the markets would be a seriously flawed mechanism. In such a case, an obvious trading strategy would be to buy the frontrunner’s contract and then simply wait for the market to expire to collect a guaranteed, huge profit. If, for example, Obama was trading at 66 cents and Clinton at 33 cents (indicating that Obama is twice as likely to be the winner), and the markets were “always accurate,” then it would make sense to buy Obama’s contract the day before the election and get $1 back the next day. If this happened every time, this would not be an efficient market; it would be a flawed, inefficient one.
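To make the arbitrage argument concrete, here is a minimal sketch (an illustration using the hypothetical 66-cent price from above; the win probabilities are assumptions, not market data):

```python
def expected_profit(price, win_prob, payout=1.0):
    """Expected profit per contract bought at `price`, paying `payout` if the candidate wins."""
    return win_prob * payout - price

# If prices are calibrated (price == true win probability), buying the
# frontrunner's contract has zero expected profit:
print(f"{expected_profit(price=0.66, win_prob=0.66):.2f}")   # 0.00

# If the frontrunner *always* won, the same contract would be a risk-free
# profit of 0.34 per contract, roughly a 52% one-day return on the stake --
# which is why a market that never "fails" cannot be pricing efficiently.
profit = expected_profit(price=0.66, win_prob=1.0)
print(f"profit {profit:.2f}, return {profit / 0.66:.1%}")    # profit 0.34, return 51.5%
```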
In fact, I would like to argue that the recent streak in which the markets always picked the winner of the election has been an anomaly, indicating the favorite bias that exists in these markets. The markets were more accurate than they should have been, according to the trading prices. If the market never fails, then the prices do not reflect reality, and the favorite is actually underpriced.
The other point that has been raised in many discussions (mainly by a mainstream audience) is how we can even define probability for a one-time event like the Democratic nomination for the 2008 presidential election. What does it mean that Clinton has a 60% probability of being the nominee and Obama a 40% probability? The common answer is that “if we repeated the event many times, in 60% of the cases Clinton would be the nominee and in 40% of the cases it would be Obama.” Even though this is an acceptable answer for someone used to working with probabilities, it makes very little sense to the “average Joe” who wants to understand how these markets work. The notion of repeating the nomination process multiple times is an absurd concept.
The discussion brings to mind the ferocious battles between Frequentists and Bayesians over the definition of probability. Bayesians could not accept that we can use a Frequentist approach for defining probabilities of events: “How can we define the probability of success for a one-time event?” A Frequentist would approach the prediction market problem by defining a space of events and would say:
After examining prediction markets for many state-level primaries, we observed that in 60% of the cases the frontrunners whose contracts were priced at 0.60 one day before the election were actually the winners of the election. In 30% of the cases, the candidates whose contracts were priced at 0.30 one day before the election were actually the winners, and so on.
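A rough sketch of that frequentist “space of events” check, using made-up prices and outcomes rather than real market history: bucket contracts by their price the day before the election and compare each bucket’s price to its observed win frequency.

```python
from collections import defaultdict

# (contract price one day before the election, 1 if that candidate won, else 0)
# -- illustrative numbers only, not actual market data
outcomes = [(0.62, 1), (0.58, 1), (0.61, 0),
            (0.31, 0), (0.29, 1), (0.33, 0)]

buckets = defaultdict(list)
for price, won in outcomes:
    buckets[round(price, 1)].append(won)      # group into 0.1-wide price bins

for bin_price in sorted(buckets):
    wins = buckets[bin_price]
    print(f"priced ~{bin_price:.1f}: won {sum(wins)}/{len(wins)} "
          f"= {sum(wins) / len(wins):.2f}")
```

If prices are calibrated, each bin’s win rate should sit near the bin’s price; a market whose frontrunners always won would show win rates of 1.0 regardless of price, which is exactly the favorite bias described above.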
A Bayesian would criticize such an approach, especially when the sample size is small, and would point to the need for an initial belief function (a prior) that is updated as information signals arrive from the market. Interestingly enough, the two approaches tend to become equivalent given infinite samples, which is, however, rarely the case.
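A minimal sketch of that Bayesian updating, with an assumed Beta prior and hypothetical outcomes: the belief about the frontrunner’s true win rate is updated conjugately after each observed race, and with only a handful of observations the posterior stays close to the prior.

```python
def beta_update(alpha, beta, won):
    """Conjugate update of a Beta(alpha, beta) belief after one win (1) or loss (0)."""
    return (alpha + 1, beta) if won else (alpha, beta + 1)

alpha, beta = 2.0, 2.0           # weak prior centered on 0.5 (an assumption)
results = [1, 1, 0, 1]           # hypothetical frontrunner outcomes

for won in results:
    alpha, beta = beta_update(alpha, beta, won)
    print(f"posterior mean win rate: {alpha / (alpha + beta):.2f}")
```

As the number of observed races grows, the posterior mean and the frequentist frequency estimate converge, which is the large-sample equivalence noted above.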
Crossposted from my blog