Generalist tendencies in people with a single cause area focus


I describe something I’ve been puzzling over for several months, along with some possible solutions. I still feel confused about this and I think some of the solutions I list aren’t that good (but I list them for completeness). I consider this page to be rough notes based on scattered musings.

The basic idea is to consider people who have a “single cause area focus”, i.e. people who have “locked in” to a single cause area or who are “diehard” about their cause area, and then to ask how “generalist” we should expect them to be. I should mention that I’m most interested in people who have “locked in” to AI safety/alignment as a cause area, and the examples below will mostly be from there.

When I try to think of what my a priori view would be, I expect people to be less generalist than they actually are. The basic feeling is something like: if this really is the One True Cause, then shouldn’t one want to devote almost all of one’s resources to it? So the puzzle for me is to reconcile my a priori view with what is actually going on. Obviously this question interests me because I am wondering how generalist I should be.

Here are some reasons I’ve thought of for why we might see generalist tendencies in people with a single cause area focus:

Could this sort of generalist tendency be bad? In other words, why might it make sense not to become a generalist? I think the main arguments against are things like division of labor and economies of scale. There is also the fact that in some fields you nowadays need to “climb higher” to reach the frontier, so the only way to make novel contributions is to specialize.


  1. See e.g. “Don’t be afraid to learn things outside your field”, although this is just about mathematics.

    Also related are the following quotes from “Augmenting Long-term Memory”:

    The world isn’t divided up into neatly separated components, and I believe it’s good to collide very different types of questions. One moment Anki is asking me a question about the temperature chicken should be cooked to. The next: a question about the JavaScript API. Is this mixing doing me any real good? I’m not sure. I have not, as yet, found any reason to use JavaScript to control the cooking of a chicken. But I don’t think this mixing does any harm, and hope it is creatively stimulating, and helps me apply my knowledge in unusual contexts.

    and:

    But for creative work and for problem-solving there is something special about having an internalized understanding. It enables speed in associative thought, an ability to rapidly try out many combinations of ideas, and to intuit patterns, in ways not possible if you need to keep laboriously looking up information.

    From “Anki Tips: What I Learned Making 10,000 Flashcards”:

    Take the production of insight, for instance. I find that insight often arises when two ideas that have been recently activated in memory collide and I think, “Oh, wait, that’s related to that.”

    If everything is carefully partitioned, you limit opportunities for this serendipity. Topic organization says “ideas about computer science don’t belong with those about economics,” except applying ideas across disciplines is precisely where the insights are likely to be most fertile.

    ↩︎