It occurs to me that there are many reasons (including many weird reasons) for altruism, so I’ll try writing some of them down. For this page, I assume moral anti-realism. I’m pretty confused about the implications of various philosophical ideas, so I’m likely to disagree with many of the points below in the future. (At the moment, I’m not even sure what I believe.)
- The typical reason seems to be that you value the welfare of others and the discount factor you apply to their welfare (for not being you) isn’t too low. So when making decisions, you take your own welfare into account, plus the aggregate (discounted) welfare of others.
- Worst-case analysis: the worst that could happen from being too altruistic is that your own life could have been better. But how much better? If you already live a comfortable life, living a much better life seems doable but perhaps not straightforward or easy. On the other hand, the stakes of being too selfish seem quite high.
- Even if you discount the welfare of others to be arbitrarily low, as long as you have moral uncertainty and use something like the “expected moral value” approach, and there are sufficiently many other beings you could help, swamping could occur. This is because worldviews that subscribe to altruism think there is “more at stake” in the world, so even a small credence in them can dominate the calculation (see the toy calculation after this list).
- You could use a Moral Parliament framework to make a “veil of ignorance” or FDT-like deal in which, in worlds where you can have a lot of impact, you act essentially like an altruist, whereas in worlds where you can’t have much impact, you act basically like a selfish person. And it appears that in this world we can have a lot of impact.[1]
- Decision theoretic “categorical imperative”-type ideas, as suggested by Gary Drescher (in Good and Real) and others; see this page for more people who argue for the relevance of logical decision theories to everyday life.
- Even weirder decision theoretic reasons. See e.g. “On SETI”: “The preceding analysis takes a cooperative stance towards aliens. Whether that’s correct or not is a complicated question. It might be justified by either moral arguments (from behind the veil of ignorance we’re as likely to be them as us) or some weird thing with acausal trade (which I think is actually relatively likely).” (Discussion.) See also “When is unaligned AI morally valuable?”
- If one of the Big World theories is correct, there could be many distant copies of you across the universe or multiverse. If you care about these copies, you might take actions to help them (e.g. acausal trades) that end up helping many others. So here you don’t necessarily care about non-copies, but you end up helping them anyway.
- A variant of the previous one is that even if the World is small, you might care about the pseudo-copies of you that other people create as they model your behavior. See this post for a similar idea.
- You share many memories/experiences/decision algorithms/genes with many other people. If what you value about yourself is one of these things, you might care about the instances of these things even when they are in other people. Similarly, you could take a “pan-everythingism about everything” approach, where many other things in the world are you to varying degrees. And presumably other humans are more like you than, say, rocks. This is apparently similar to Parfit’s view on identity and egoism vs. altruism; see e.g. the notes here: ‘Much of ethics is concerned with questions of egoism and altruism. Should I be concerned about myself alone, or others to some degree? But all of such considerations are based on assumptions of clear personal identity. If my future self is me only to some degree; if it has some psychological connectedness with my current self, but only to some degree; then it seems that egoism rests on a mistake. As a result, I should be attracted more towards accounts that weigh all interests equally, over those that tell me to weigh my interests alone or more heavily – for after all, those future interests are “mine” only to some variable degree.’ (I haven’t actually read Parfit.)
- Other reasons for cooperation (e.g. in Newcomb-like situations) might produce altruistic actions.
- From Stuart Armstrong’s ADT paper: “In the author’s view, the confusion illustrated by this section shows that selfishness is not a stable concept: those agents capable of changing their definitions of selfishness will be motivated to do so.”
- Paul Christiano, in a post about UDASSA: “I don’t understand continuity of experience or identity, so I am simply not going to try to be selfish (I don’t know how).”
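To make the swamping point above more concrete, here is a toy expected moral value calculation (the credences, stakes, and costs are made-up numbers for illustration only, not from any of the sources above). Suppose you give 99% credence to pure egoism, only 1% credence to a worldview on which the welfare of a million others is at stake, and the altruistic act costs you 1 unit of personal welfare:

$$
\mathbb{E}[\text{moral value of the act}] = \underbrace{0.99}_{\text{egoism}} \times (-1) + \underbrace{0.01}_{\text{altruistic view}} \times 10^{6} \approx 10^{4} > 0.
$$

Even a tiny credence in the altruistic worldview can dominate, because that worldview claims vastly more is at stake; this is the sense in which swamping occurs.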
[1] “Wei_Dai comments on Max Tegmark on our place in history: ‘We’re Not Insignificant After All’”. LessWrong. Retrieved March 7, 2018.
> What strikes me about our current situation is not only are we at an extremely influential point in the history of the universe, but how few people realize this. It ought to give the few people in the know enormous power (relative to just about anyone else who has existed or will exist) to affect the future, but, even among those who do realize that we’re at a bottleneck, few try to shape the future in any substantial way, to nudge it one way or another. Instead, they just go about their “normal” lives, and continue to spend their money on the standard status symbols and consumer goods.