Wei Dai
Something I find unfortunate is the apparent lack of enthusiasm for Wei Dai and his online content in the effective altruism community. His posts and comments on LessWrong seem to me to be clear, important, and relevant to effective altruism. And yet I rarely see his content referenced in the community (especially when compared to people like Nick Bostrom, Carl Shulman, and Eliezer Yudkowsky).
I wonder what the reasons are for this. Some ideas I can think of are:
- Wei is not really on board with altruism. You can see posts like “Shut Up and Divide?” and comments like this, where he says “I think a highly rational person would have high moral uncertainty at this point and not necessarily be described as ‘altruistic’.” I don’t think this is common knowledge, though, so people who encounter only his philosophical ideas (without going through a lot of his output and possibly being disappointed by his views on altruism) should still be enthusiastic about them.
- He often emphasizes uncertainty, so EAs might think there aren’t many “actionable” insights in his writings. But EAs themselves often seem uncertain, so I don’t think this really explains anything.
- Although Wei writes about philosophical problems, he doesn’t write much about empirical issues in global poverty and health.
- He doesn’t seem to do a lot of self-promotion.
- Many of his insights are only in comments scattered around LW/EA Forum, rather than easily-citable posts. (HT Pablo Stafforini)
- He seems like a “quiet nerd” based on this post about fashion. But isn’t this true for many other people popular in EA?
- His anti-academia bent: he doesn’t really write papers (even though he has an impressive body of online output).
- He does not comment on Facebook very much. (Edit: this is now changing…)
- Maybe it’s not so surprising that people aren’t especially enthusiastic about any particular person. There are probably other people like this whom even I am not paying attention to.
- I’m wrong about one of my assumptions (that Wei’s writings are relevant, or that EAs aren’t paying attention).
Overall I’m still pretty confused about this.
“Yes, you’re a freak and nobody but you and a few other freaks can ever get any useful thinking done” (Eliezer to Wei; source)
February 2019 update: My impression is that since I wrote this page (in January 2018), Wei has been commenting a lot more on LessWrong (he hardly commented at all between September 2017 and the end of February 2018, when he started commenting again). During 2018, LessWrong 2.0 also left beta. My feeling is that Wei’s comments have been getting a lot of upvotes and that people are paying attention to him. I’m therefore tempted to conclude that the reason was “He doesn’t seem to do a lot of self-promotion” (namely, self-promotion in the form of leaving public comments). If this really is the case, it’s depressing to me that this sort of continual reminder is necessary in order for people to figure out who is “smart”.
September 2019 update: Re-reading this page, I realized that I was talking about the effective altruism community, not LessWrong. It seems like the effective altruism community still hasn’t really “discovered” Wei Dai. To give three recent examples: first, this answer by Pablo Stafforini mentions Scott Alexander, Nick Beckstead, Nick Bostrom, Paul Christiano, Katja Grace, Robin Hanson, Eliezer Yudkowsky, Holden Karnofsky, Carl Shulman, and Brian Tomasik (all of whom I think are good inclusions) but not Wei Dai. Second, the GPI research agenda: looking at the cited literature, I see Eliezer Yudkowsky, Carl Shulman, Nick Beckstead, Holden Karnofsky, Katja Grace, Paul Christiano, Robin Hanson, Brian Tomasik, Toby Ord, etc., but again no Wei Dai. Third, the EA sequences document (viewed on 2019-10-01) compiled by Richard Ngo (with help from others?): I can find Eliezer, Carl, Nick Bostrom, Nick Beckstead, Paul, Toby Ord, Holden, Brian Tomasik, Robin Hanson, etc. (interestingly, no Katja), but again no Wei.
January 2022 update: Although people take him seriously on LW, and he seems to be well-known among AI safety people specifically, it still seems like his views (about e.g. human safety problems and the importance of metaphilosophy) are “fringe”, in the sense that people don’t take his views seriously enough to change their plans/research agendas based on them.
In other news, Pablo Stafforini has informed me that his list now includes writings from Wei Dai!