Betting
Just some notes for now. My thoughts on this topic haven’t been stress-tested.
From Superforecasting:
Each team would effectively be its own research project, free to improvise whatever methods it thought would work, but required to submit forecasts at 9 a.m. eastern standard time every day from September 2011 to June 2015. By requiring teams to forecast the same questions at the same time, the tournament created a level playing field—and a rich trove of data about what works, how well, and when. Over four years, IARPA posed nearly five hundred questions about world affairs.
One of the things that confuses me about EA/rationalist betting is that there is no central authority generating questions to bet on, so each person bets on whatever they want. The same is true of PredictionBook as far as I know (I’ve never used the site). To game one’s calibration, one need only predict appropriately on some obvious things to “nudge” the calibration back to a good value. For example, suppose that among events one has predicted at 50%, the actual fraction that occurred so far is 90% (i.e. one is underconfident). Then, to bring that fraction down to a “well-calibrated” 50%, one need only find some events that are very unlikely to happen (i.e. with much less than a 50% chance of occurring) and state that one predicts them at 50%.
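The arithmetic of this gaming trick can be sketched in a few lines of Python (the counts are made-up illustrative numbers, not from any real forecaster):

```python
# Suppose 10 past predictions were stated at 50%, and 9 resolved true,
# so the observed frequency in the 50% bucket is 90% ("underconfident").
bucket = [True] * 9 + [False] * 1

def observed_freq(outcomes):
    """Fraction of outcomes in this probability bucket that came true."""
    return sum(outcomes) / len(outcomes)

print(observed_freq(bucket))  # 0.9

# Now add 8 events the forecaster privately thinks are near-impossible,
# but *states* at 50%. They duly resolve false.
bucket += [False] * 8

print(observed_freq(bucket))  # 0.5 -- the 50% bucket now looks perfectly calibrated
```

The point is that self-selected questions let a forecaster steer each bucket’s observed frequency toward its stated probability without any real forecasting skill.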
Other thoughts:
At least in some cases, betting in the EA/rationalist community seems to create a “winner/loser” mentality, where people are hyperfocused on the result of a particular bet (example: people posting how much they won or lost after the 2016 US presidential election). If betting is truly about rationality, it’s the cumulative bets and overall track record that matter. I think the case is similar to the argument for avoiding the news. Tyler Cowen is quoted making the same point here.
I think it’s interesting that many rationalists are against news, but end up betting on news-worthy topics (but are these just the ones more likely to show up in my Facebook feed?). Personally I find news boring, and don’t bother to follow along on world events, and don’t know why I should expend so much effort just to prove that I am well-calibrated or whatever. A lot of interesting topics aren’t even on the betting table.
Bryan Caplan: “What’s the most important thing I learned from this twelve-year bet? This: While my commitment to betting has great cognitive benefits, it also has great emotional costs. During the last two years, I have spent far too much time wondering if I could salvage my perfect betting record, and far too much time checking electionbettingodds.com. This bet interfered with my inner harmony, my commitment to detachment from this corrupt society. From now on, I’ll remember these costs before I bet.” (This can also be a separate point about the under-acknowledged emotional costs of betting.)
I think even worse than self-selecting which bets to take is announcing some bet one is willing to make, in an effort to lure people into taking the other side.
I have some concern that people are to some extent offering and taking on bets because it looks “cool” and “everyone else is doing it”.
Similarly to the point above, there are various motivations that lead people to bet.
Once a bet has been made, there are various new incentives to affect the outcome of the bet. I don’t think these are acknowledged very often, although as long as people are betting on relatively “large” events that are difficult to affect, this point is minor.
Claims that betting “taxes bullshit” or “fights hyperbole” rely on the penalty of losing being greater than the benefit of making hyperbolic claims in the first place. And, again, one cannot just look at the outcome of a single bet and claim “I fought hyperbole and won”.
Until frequent betting becomes the norm, betting does not seem especially lucrative. It requires finding people willing to take the other side at favorable enough odds, and it carries the cost of staying informed about the thing being bet on, the cost of due diligence to ensure one’s opponent will pay up, etc. Not that being a lucrative opportunity is especially important, since betting is claimed to have other benefits.
A possible scenario I want to think more about: a world with a constant stream of low-stakes bets (from some authority?) on which one is required to take a position. The authority then matches you with someone taking the opposite position. Or perhaps people submit their subjective probabilities and the authority matches people up with suitable odds (as long as the subjective probabilities differ, the authority can devise odds such that each side expects a profit by its own probability?).
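The parenthetical claim checks out in a toy case. Here is a minimal sketch (names and probabilities are hypothetical): if two people assign different probabilities to an event E, even a simple even-odds bet can have positive expected value for both sides, each computed under that person’s own probability.

```python
# Alice thinks E is likely; Bob thinks it is unlikely.
p_alice = 0.7  # Alice's subjective P(E)
p_bob = 0.3    # Bob's subjective P(E)

stake = 1.0  # each side stakes $1 at even odds

# Alice takes "E happens": by her own probability she wins Bob's stake
# with probability 0.7 and loses hers with probability 0.3.
ev_alice = p_alice * stake - (1 - p_alice) * stake

# Bob takes "E doesn't happen": by his own probability he wins with
# probability 1 - 0.3 = 0.7.
ev_bob = (1 - p_bob) * stake - p_bob * stake

print(ev_alice, ev_bob)  # both positive (each about +$0.40 per bet)
```

More generally, whenever the two subjective probabilities differ, the authority can pick odds strictly between them so that both sides see a positive subjective expectation; at even odds this works whenever one probability is above 0.5 and the other below.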
Calibration graphs as presented in places like PredictionBook don’t give the “real” calibration because they assume all predictions are independent. Vipul Naik makes the same point in a Facebook post announcing some predictions he has made.
Not all of the above points are unique to betting.
I plan to write more about why I like betting later.
I understand some of the aversion to a Bayesian framework. Bayesians do [tend] to fetishize bets. When offered the two bets in Sir Percy’s coin toss, there is a certain appeal to refusing both bets. Bets often come with stigma and this (when paired with loss aversion) can make both bets seem unappealing despite the fact that we are told a Bayesian reasoner always prefers one bet or the other.
But the thing is, a bounded Bayesian reasoner may also prefer not to take the bets. If I expect my credence for H to vary wildly then I may delay my decision as long as possible. Furthermore, if the bets are for money (rather than utility) then I’m all for risk aversion.