Effective altruism
Effective altruism (EA) is “a philosophy and social movement which applies evidence and reason to determining the most effective ways to improve the world”. On this page I describe my personal involvement with and thoughts on EA.
Personal involvement
I’ve been involved with the effective altruism community for some time. I first heard about EA through LessWrong (I think it was a post about GiveWell), probably in 2011 or 2012. I remember reading Holden Karnofsky’s critique of the Singularity Institute (now MIRI) right when it came out (May 2012), and I remember it reaching the status of “most upvoted LW post ever”. Although I was intellectually in agreement with effective altruism, I didn’t actually do anything about it, thinking that working hard in school would be a form of altruism (i.e. that in the long term, working hard in school and having more opportunities would allow me to best contribute to the world).
In January 2014, I contacted Cognito Mentoring for the first time. Although I myself didn’t ask directly about EA (looking back at our correspondence), a friend contacted them regarding effective career choice, and this roused my interest in effective altruism as well.
In July 2014, I attended a Seattle Effective Altruists meetup; although I didn’t contribute much to the discussion, my interest in EA increased. Part of the reason I attended was my interest, but part of it was also that I was working on a research project at the University of Washington over the summer, and the July meetup was conveniently located near campus – so the activation energy to attend was considerably lowered. The topic of the meetup was also “Donating vs. Working directly for impact”, which was of particular interest to me going into college.
Following my first meetup was a period of several months during which I frequently attended Seattle EA meetups. I also became more involved in online discussions of EA, and eventually in November 2014 started the Cause Prioritization Wiki as a place to store my research on cause prioritization. Also around this time, I tried to start an effective altruism group at the University of Washington. The group didn’t get much traction, and as of August 2016 it had held only one meetup, in November 2014. During the 2016–2017 school year, Rohin Shah and Ethan Bashkansky restarted the group, with meetings taking place throughout the school year. I attended multiple meetings but played only a minor role in this revival. As of the following school year, the group seems to be dead again.
I continue to be involved in online discussions of EA, but have since become much less involved in Seattle EA meetups.
From around March 2016 to May 2017, I did more concentrated work in global health, working with Vipul Naik. As part of this work, I created several Wikipedia pages related to global health.
Starting in May 2017, I’ve been working (again with Vipul) on broader topics including infrastructure and economic growth.
Beliefs
My thoughts on the EA movement have gone through several revisions over the years.
Initially, probably during 2011–2013, EA was merely a curiosity: I had only heard about it, or was at most a “lurker”. (I was more interested in LessWrong/rationality than in helping the world specifically.)
The next phase, during 2014–2015, is when I became most excited about EA, attending meetups, consuming a lot of the “standard” texts, and learning the standard arguments of the movement.
The next phase, during 2016 or so, is when my view of EA became more refined and somewhat negative. I started to think that while the movement had a lot of interesting people associated with it, the most interesting/smart people tended to be on the periphery of EA (without necessarily calling themselves “effective altruists”) rather than directly involved in it at the center, and that most of the “intellectual work” was being done by a few of the “top” people. Under this view, there were a few Serious People associated with the EA movement, but the vast majority of people who associated with the movement were followers/promoters more than originators of ideas.
I also started to think that EA uses a clever definition to make it irrefutable in some sense, which makes discussion and criticism of it difficult.
The general feeling was that maybe effective altruism just does to philanthropy what Bastiat does to economics (and EA might not even be as new as I used to think).
Then, during 2017–2018, I somewhat revised my previous thinking (or rather, how I felt about what I already believed). Looking outside of my filter bubble, I reflected more on how people outside the EA/rationality sphere often could not even parse basic arguments, and how they seemed oblivious to ideas that I considered obvious (e.g. standard EA talking points, some things in the Sequences).
It was still true that I had only been consistently impressed with a few people associated with the movement, but that no longer felt like such a bad thing. A movement where the vast majority of people cannot generate interesting ideas but can at least follow tricky arguments started to seem remarkable rather than disappointing.
During 2019, my feelings about effective altruism (and the rationality community) again became more negative. I can’t say my beliefs are so different from those of the previous couple of years, but my emotions and internal monologue are quite different. One aspect of this is that I made more updates where my opinion of someone went down than where it went up. There were many people who seemed kind of smart but where I didn’t have enough evidence to say definitively whether they were smart. This year, I feel like I got the additional evidence needed to rule out many of these “potentially smart people”. I don’t want to give away who these people are, so I won’t give many details, but at a high level here are some things I observed: making an obviously false statement; making a decision that I think is reasonable, but where I think the reasoning is not that good; praising or endorsing blog posts that I think aren’t that novel or interesting; claiming a higher level of understanding of a topic than they are demonstrating; giving the impression of knowing something other people don’t, when in fact there wasn’t anything.
I also met many people involved in EA/rationality (in Europe), and updated towards thinking that the vast majority of these people don’t really do anything interesting with their lives. Prior to meeting people, there was some plausible deniability: the fact that I hadn’t come across them online could have meant that they were privately working on interesting projects, building up an accurate world model, etc. But generally speaking, I came away pretty disappointed. There was one notable exception. (Usual disclaimers about small and potentially biased sample size, people not revealing all of their accomplishments, etc., apply. I’m always happy to update my views if I come across contrary evidence.)
My internal monologue started to frequently have thoughts like “people are bad”, “we are so screwed”, “the world is going to end”, “we are all going to die”, “I hate everything”, “Grognor was right about everything”, etc. (Again, it’s not like my opinions here have changed this year, but the pattern of thoughts has definitely changed.)
Overall, I’m not comfortable considering myself part of the “EA movement”. However, I’m happy to consider myself an “effective altruist” in the sense of “someone who spends a lot of time trying to figure out how to best help the world” and to interact and collaborate with people who do consider themselves part of the movement.
2020: aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa
2021: I posted an answer on the EA Forum about my reservations about identifying as an effective altruist.
See also
- Reasons for altruism
- I have a collection of Effective altruism links that might interest people.
- Effective altruism and Asperger syndrome
External links
- Google Custom Search with an effective altruism label that I maintain
- Bryan Caplan weighs in on the rationality community (the rationality community has close ties with the EA community)