Betting

Last substantive revision: 2016-11-12

Just some notes for now. My thoughts on this topic haven’t been stress-tested.

From Superforecasting:

Each team would effectively be its own research project, free to improvise whatever methods it thought would work, but required to submit forecasts at 9 a.m. eastern standard time every day from September 2011 to June 2015. By requiring teams to forecast the same questions at the same time, the tournament created a level playing field—and a rich trove of data about what works, how well, and when. Over four years, IARPA posed nearly five hundred questions about world affairs.

One of the things that confuses me about EA/rationalist betting is that there is no central authority generating questions on which to bet, so each person bets on whatever they want. As far as I know, the same is true of PredictionBook (I’ve never used the site).

This makes one’s calibration easy to game: one need only predict “appropriately” on a few obvious things to nudge the calibration statistics back to a good value. For example, suppose that of the events one has predicted to occur with 50% probability, 90% have actually occurred so far (i.e. one is underconfident at the 50% level). To bring that 90% down toward 50% and so appear “well-calibrated”, one need only find some events that are very unlikely to happen (i.e. with much less than a 50% chance of occurring) and state that they will happen with 50% probability.
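To make the arithmetic concrete, here is a minimal sketch of the gaming strategy. The numbers are hypothetical (matching the 90%-at-50% example above), and the padding events are assumed to each have a true 5% chance of occurring.

```python
# A minimal sketch of the calibration-gaming arithmetic above.
# Hypothetical numbers: 20 past predictions stated at the 50% level,
# of which 18 resolved true (an observed frequency of 90%).
# The forecaster now states 50% on events they privately believe are
# only ~5% likely, purely to drag the 50% bin back toward 0.5.

past_stated = 20      # predictions already made at the 50% level
past_true = 18        # how many of those resolved true (90% observed)
padding_prob = 0.05   # true chance of each deliberately mis-stated event

# How many padding predictions are needed (in expectation) so that the
# observed frequency in the 50% bin falls to 0.5?  Solve for n in:
#   (past_true + padding_prob * n) / (past_stated + n) = 0.5
n = (past_true - 0.5 * past_stated) / (0.5 - padding_prob)
print(f"padding predictions needed: {n:.1f}")  # ~17.8

# Check the resulting observed frequency in the 50% bin
n = round(n)
observed = (past_true + padding_prob * n) / (past_stated + n)
print(f"observed frequency at the 50% level: {observed:.2f}")  # ~0.50
```

So roughly 18 deliberately mis-stated predictions would be enough to make this forecaster look well-calibrated at the 50% level, without any improvement in actual forecasting skill.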

Other thoughts:

Not all of the above points are unique to betting.

I plan to write more about why I like betting later.


CC0
The content on this page is licensed under the CC0 1.0 Universal Public Domain Dedication.