Reasons for altruism

It occurs to me that there are many reasons (including many weird reasons) for altruism, so I’ll try writing some of them down. For this page, I assume moral anti-realism. I’m pretty confused about the implications of various philosophical ideas, so I’m likely to disagree with many of the points below in the future. (At the moment, I’m not even sure what I believe.)

I’ll also list some reasons why people may appear to behave altruistically (i.e. they will sometimes take actions that help others), even if they don’t terminally care about the people who are helped:2

I think one reason to pay attention to the variety of possible reasons for altruistic/cooperative behavior is that when you learn something new about the world, you can decide whether that changes how you should act, and how you change your behavior can depend on the kind of altruism you have. For example, suppose you find out a way in which being selfish is coherent (when you previously thought it was incoherent). If that apparent incoherence was one of the main reasons you were being altruistic, you now have a reason to act more selfishly; whereas if you just cared about the well-being of others, you wouldn’t really change how you act.

More quotes (to be added to the right place later):

Abram Demski (https://www.greaterwrong.com/posts/h9qQQA3g8dwq6RRTo/counterfactual-mugging-why-should-you-pay/answer/iFcnSYrvsp7Xn2WrN):

When most people say “I am selfish”, they mean something like “the things that happen to this physical instantiation of this algorithm, over the next 50 years or so”. That worked well in the ancestral environment, but under no principled understanding of the world and how identity works does that make sense. So I think selfish people have a lot of work to do, if they don’t want to suck at being selfish.


  1. “Wei_Dai comments on Max Tegmark on our place in history: ‘We’re Not Insignificant After All’”. LessWrong. Retrieved March 7, 2018.

    What strikes me about our current situation is not only are we at an extremely influential point in the history of the universe, but how few people realize this. It ought to give the few people in the know enormous power (relative to just about anyone else who has existed or will exist) to affect the future, but, even among those who do realize that we’re at a bottleneck, few try to shape the future in any substantial way, to nudge it one way or another. Instead, they just go about their “normal” lives, and continue to spend their money on the standard status symbols and consumer goods.

    ↩︎
  2. It’s not entirely clear to me whether there is a fundamental difference between “caring about someone intrinsically/as a terminal value” and “caring about someone instrumentally/for cooperation”. (I think there are operationalizations one could choose that would make them different; e.g. you could observe whether someone continues to behave altruistically once it becomes obvious that the other side can’t pay them back.) Looking at the causal history, it seems like the former arose because the latter happened to work well, and “baking in” the values as terminal values was a particularly effective way of getting the cooperation to work (or something like that).↩︎