Reasons for altruism
It occurs to me that there are many reasons (including many weird reasons) for altruism, so I’ll try writing some of these down. For this page, I assume moral anti-realism. I’m pretty confused about the implications of various philosophical ideas so I’m likely to disagree with many of the points below in the future. (At the moment, I’m not even sure what I believe.)
The typical reason seems to be that you value the welfare of others and the discount factor you apply for them (for not being you) isn’t too low. So when making decisions, you take your own welfare into account, plus the aggregate (discounted) welfare of others. I might be quite unusual (at least, compared to EAs) in thinking that the typical human life isn’t all that valuable. (I think some human lives are very valuable.) So far, this doesn’t seem to have affected how much of my life I devote to altruistic stuff, but it has changed how I think about cause prioritization.
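To make that decision rule concrete, here is a minimal sketch (the function, the discount factor, and the welfare numbers below are all made up for illustration; they aren't meant to describe my actual weights):

```python
# A minimal sketch of the decision rule described above: my own welfare plus
# the discounted aggregate welfare of others. The discount factor and the
# welfare numbers are purely illustrative assumptions.

def total_value(own_welfare, others_welfare, discount):
    """Own welfare plus the discounted sum of everyone else's welfare."""
    return own_welfare + discount * sum(others_welfare)

# Two hypothetical actions with made-up welfare outcomes:
selfish_action = total_value(own_welfare=10, others_welfare=[0, 0, 0], discount=0.5)
altruistic_action = total_value(own_welfare=4, others_welfare=[8, 8, 8], discount=0.5)

print(selfish_action)     # 10.0
print(altruistic_action)  # 16.0
# As long as the discount for "not being me" isn't too low, the altruistic
# action can come out ahead.
```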
Here’s one way of thinking about why the typical human life might not be all that valuable:
When I introspect, I do not feel especially complex or unique or more than the product of the inputs over my life. I feel I am the product of a large number of inbuilt & learned mechanisms, heuristics, and memories, operating mechanistically, repeatably, and unconsciously. Once in a great while, while reading old blog posts or reviewing old emails, I compose a long reply, only to discover that I had written one already, which is similar or even exactly the same almost down to the word, and chilled, I feel like an automaton, just another system as limited and predictable to a greater intelligence as a Sphex wasp or my cat are to me, not even an especially unique one but a mediocre result of my particular assortment of genes and mutation load and congenital defects and infections and development noise and shared environment and media consumption
My life has had some good moments in it, but the vast majority of it is pretty mediocre and a lot of it is pretty terrible. Given that there are so few interesting people in this world, and that my life doesn’t seem so bad (if you look at it from the outside), I’m inclined to think most people’s lives are worth even less than mine. It’s impolitic to go around asking people “So what makes you think your life is so valuable?”, but that’s how I feel about it.
When I consume an emotionally moving story, I can sometimes catch glimpses of how mechanical my emotional response is. I am just responding in a pretty predictable way to a few simple story-telling tricks. See also Spoiler test of depth.
Worst-case analysis: the worst that could happen from being too altruistic is that your own life could have been better. But how much better? Even if you already live a comfortable life, I think it’s pretty straightforward to live a much better one; most people don’t put that much effort into seriously thinking about how to make their lives better. On the other hand, the stakes of being too selfish seem even higher.
Even if you discount the welfare of others to be arbitrarily low, as long as you have moral uncertainty and use something like the “expected moral value” approach, and there are sufficiently many other beings you could help, swamping could occur. This is because worldviews that subscribe to altruism think there is “more at stake” in the world.
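Here is a toy version of the swamping argument (all the credences and stake sizes are made-up numbers, and this ignores the hard problem of making value comparisons across worldviews):

```python
# Toy illustration of swamping under an "expected moral value" approach.
# Every number here (credences, stakes) is made up for illustration, and
# this glosses over the problem of comparing value across worldviews.

credence_altruism = 0.01       # low credence in the altruistic worldview
credence_egoism = 0.99         # high credence in the egoist worldview

beings_you_could_help = 10**9  # what the altruistic worldview says is at stake
value_per_being_helped = 1
value_of_own_life_going_better = 100  # what egoism says is at stake for you

ev_altruistic_action = credence_altruism * beings_you_could_help * value_per_being_helped
ev_selfish_action = credence_egoism * value_of_own_life_going_better

print(ev_altruistic_action)  # 10000000.0
print(ev_selfish_action)     # 99.0
# Even at 1% credence, the altruistic worldview dominates the calculation,
# because it claims there is vastly more at stake.
```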
You could use a Moral Parliament framework to make a “veil of ignorance” or FDT-like deal in which you act essentially like an altruist in worlds where you can have a lot of impact, and basically like a selfish person in worlds where you can’t have much impact. And it appears that in this world we can have a lot of impact.1 (Actually, this sort of reasoning seems really tricky to me and I have no idea how to do it. For instance, you might update based on cosmological information to stop caring about astronomical waste, which pushes in the selfishness direction. You could imagine taking into account more and more such trades. Where do you end up in the end, after “doing all the trades”? I have no idea.)
Really weird veil of ignorance/ex-ante reasons. See e.g. “On SETI”: “The preceding analysis takes a cooperative stance towards aliens. Whether that’s correct or not is a complicated question. It might be justified by either moral arguments (from behind the veil of ignorance we’re as likely to be them as us) or some weird thing with acausal trade (which I think is actually relatively likely).” (Discussion.) See also “When is unaligned AI morally valuable?”
If one of the Big World theories is correct, there could be many distant copies of you across the universe or multiverse. If you care about these copies, you might take actions to help them (e.g. acausal trades). Also, these actions may end up helping many others. So here you altruistically care about your copies, and non-altruistically help non-copies (see separate list below).
A variant of the previous one is that even if the World is small, you might care about the pseudo-copies of you that other people create as they model your behavior. See this post for a similar idea.
You share many memories/experiences/decision algorithms/genes with many other people. If what you value about yourself is one of these things, you might care about instances of these things even when they are in other people. Similarly, you could take a “pan-everythingism about everything” approach, where many other things in the world are you to varying degrees. And presumably other humans are more like you than, say, rocks. You might also just be more confident about caring about preserving these kinds of intuitions, thought processes, patterns, memories, algorithms, or whatever, than about preserving your subjective experience.
- This is apparently similar to Parfit’s view on identity and egoism vs altruism, see e.g. the notes here: ‘Much of ethics is concerned with questions of egoism and altruism. Should I be concerned about myself alone, or others to some degree? But all of such considerations are based on assumptions of clear personal identity. If my future self is me only to some degree; if it has some psychological connectedness with my current self, but only to some degree; then it seems that egoism rests on a mistake. As a result, I should be attracted more towards accounts that weigh all interests equally, over those that tell me to weigh my interests alone or more heavily – for after all, those future interests are “mine” only to some variable degree.’ (I haven’t actually read Parfit.)
You don’t know how to be selfish or you think selfishness is incoherent.
- From Stuart Armstrong’s ADT paper: “In the author’s view, the confusion illustrated by this section shows that selfishness is not a stable concept: those agents capable of changing their definitions of selfishness will be motivated to do so.”
- Paul Christiano, in a post about UDASSA: “I don’t understand continuity of experience or identity, so I am simply not going to try to be selfish (I don’t know how).”
- Ozzie Gooen: “I personally find selfishness to be somewhat philosophically incoherent, so it’s difficult to say what exactly the maximum number of QALYS per year could hypothetically be experienced by one selfish person.”
I’ll also list some reasons why people may appear to behave altruistically (i.e. will sometimes take actions that help others), even if they don’t terminally care about the people who are helped:2
- Reducing x-risk is useful if you want to figure out your actual terminal values and you believe figuring this out is very difficult (e.g. it requires billions of years of reflection).
- Conscious or subconscious desire to appear altruistic, for instrumental reasons (like gaining status).
- Unreflective evolved adaptations that happen to help others (e.g. kin altruism, reciprocal altruism).
- Decision theoretic “categorical imperative”-type ideas, as suggested by Gary Drescher (in Good and Real) and others; see this page for more people who argue for the relevance of logical decision theories to everyday life. For example, using UDT-like reasoning in Newcomb-like situations or the prisoner’s dilemma with a copy can produce behavior that helps others (see the sketch after this list).
- You want there to exist more smart/interesting/compassionate/good/whatever people for instrumental or aesthetic reasons.
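Here is the sketch referred to in the decision theory item above: the prisoner’s dilemma against an exact copy. The payoff numbers are the standard illustrative ones, and the assumption doing the work is that the copy runs the same decision procedure, so it makes whatever move I make.

```python
# Toy prisoner's dilemma against an exact copy of yourself. The payoff
# numbers are standard illustrative values; the assumption doing the work
# is that the copy runs the same decision procedure, so its move always
# matches yours.

# (my_payoff, copy_payoff) indexed by (my_move, copy_move)
PAYOFFS = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def my_payoff_against_copy(my_move):
    copy_move = my_move  # the copy mirrors my decision procedure
    return PAYOFFS[(my_move, copy_move)][0]

print(my_payoff_against_copy("C"))  # 3
print(my_payoff_against_copy("D"))  # 1
# Cooperating leaves me better off, and it also benefits the copy:
# behavior that helps another agent without terminally valuing them.
```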
I think one reason to pay attention to the variety of possible reasons for altruistic/cooperative behavior is that when you learn something new about the world, you can decide whether that changes how you should act, and the way you change how you act can depend on the kind of altruism you have. For example, if you find out a way in which being selfish is coherent (when you previously thought it was incoherent), that can give you a reason to act more selfishly if the incoherence was one of the main reasons you were being altruistic, whereas if you just cared about the well-being of others you wouldn’t really change how you act.
More quotes (to be added to the right place later):
Abram Demski https://www.greaterwrong.com/posts/h9qQQA3g8dwq6RRTo/counterfactual-mugging-why-should-you-pay/answer/iFcnSYrvsp7Xn2WrN:
- “One way of appealing to human moral intuition (which I think is not vacuous) is to say, what if you know that someone is willing to risk great harm to save your life because they trust you the same, and you find yourself in a situation where you can sacrifice your own hand to prevent a fatal injury from happening to them? It’s a good deal; it could have been your life on the line.”
- “Reciprocal altruism and true altruism are kind of hard to distinguish in human psychology, but I said “it’s a good deal” to point at the reciprocal-altruism intuition. The point being that acts of reciprocal altruism can be a good deal w/o having considered them ahead of time. It’s perfectly possible to reason “it’s a good deal to lose my hand in this situation, because I’m trading it for getting my life saved in a different situation; one which hasn’t come about, but could have.””
When most people say “I am selfish”, they mean something like “the things that happen to this physical instantiation of this algorithm, over the next 50 years or so”. That worked well in the ancestral environment, but under no principled understanding of the world and how identity works does that make sense. So I think selfish people have a lot of work to do, if they don’t want to suck at being selfish.
“Wei_Dai comments on Max Tegmark on our place in history: ‘We’re Not Insignificant After All’ ”. LessWrong. Retrieved March 7, 2018.
What strikes me about our current situation is not only are we at an extremely influential point in the history of the universe, but how few people realize this. It ought to give the few people in the know enormous power (relative to just about anyone else who has existed or will exist) to affect the future, but, even among those who do realize that we’re at a bottleneck, few try to shape the future in any substantial way, to nudge it one way or another. Instead, they just go about their “normal” lives, and continue to spend their money on the standard status symbols and consumer goods.
It’s not entirely clear to me whether there is a fundamental difference between “caring about someone intrinsically/as a terminal value” vs “caring about someone instrumentally/for cooperation” (I think there are operationalizations one can choose that would make them different, e.g. you could observe whether someone continues to behave altruistically if it became obvious that the other side couldn’t pay back). Looking at the causal history, it seems like the former arose because the latter happened to work well, and it seems like “baking in” the values as terminal values was a particularly effective implementation of getting the cooperation to work (or something like that).↩︎