Generalist tendencies in people with a single cause area focus
I describe something I’ve been puzzling over for several months, along with some possible explanations. I still feel confused about this, and I think some of the explanations I list aren’t that good (but I list them for completeness). I consider this page to be rough notes based on scattered musings.
The basic idea is to consider people who have a “single cause area focus”, i.e. people who have “locked-in” to a single cause area or who are “diehard” about their cause area, and then to ask how “generalist” we should expect them to be. I should mention that I’m most interested in people who have “locked-in” to AI safety/alignment as a cause area, and the examples below will mostly be from there.
When I try to articulate my a priori view, I expect people to be less generalist than they actually are. The basic feeling is something like: if this really is the One True Cause, then shouldn’t one want to devote almost all of one’s resources to it? So the puzzle for me is to reconcile my a priori view with what is actually going on. Obviously this question interests me because I am wondering how generalist I should be.
Here are some reasons I’ve thought of for why we might see generalist tendencies in people with a single cause area focus:
The knowledge actually predates locking in on the cause area. So maybe the most expensive part of the cost (learning a new field from scratch) has already been paid, and what remains is the small ongoing cost of keeping up with the field or writing short pieces.
Knowing a lot of other fields is actually useful for your “locked in” cause/problem. You can sort of see this in MIRI’s research guide: there are a lot of different subjects in there that are supposed to help with alignment research.
“Division of labor is all well and good, but if you’ve spent much time around others in a business you soon realise that it isn’t all that it’s cracked up to be. There’s a reason why so many of history’s prolific inventors had an enormous array of skills in many different areas: because the only person you can really count on to be there is yourself. Employees and colleagues come and go, the only constant is you.” (source)
Explore/exploit trade-off and dealing with the possibility that one’s own cause area might not be the best. See things like Cause X, multi-armed bandit, etc.
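To make the multi-armed bandit framing concrete, here is a minimal epsilon-greedy sketch. The cause names and payoff numbers are invented for illustration (not estimates from any source); the point is just that a small, fixed fraction of effort spent exploring other causes guards against the possibility that the current favorite isn’t actually the best.

```python
import random

# Epsilon-greedy multi-armed bandit: each "arm" is a cause area with an
# unknown payoff. The payoffs below are made-up illustrative numbers.
true_payoffs = {"AI safety": 0.8, "biosecurity": 0.6, "cause X": 0.9}
estimates = {cause: 0.0 for cause in true_payoffs}
counts = {cause: 0 for cause in true_payoffs}
epsilon = 0.1  # fraction of effort spent exploring

for step in range(10_000):
    if random.random() < epsilon:
        cause = random.choice(list(true_payoffs))  # explore a random cause
    else:
        cause = max(estimates, key=estimates.get)  # exploit the current best
    reward = random.gauss(true_payoffs[cause], 0.3)  # noisy observation
    counts[cause] += 1
    estimates[cause] += (reward - estimates[cause]) / counts[cause]

print(max(estimates, key=estimates.get))  # usually "cause X" after enough steps
```

With epsilon set to 0, the loop tends to lock in on whichever cause looked best early on and never discovers “cause X”; the exploration term corresponds to keeping some generalist attention on other causes.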
What allowed you to find the good cause in the first place was your curiosity, generalist tendency, etc. Maybe it’s pretty hard to restrain your high curiosity even once you’ve locked in on a cause you think is most important. In this case you might want to try to become less curious?
Here’s Wei Dai: “One solution [to the problem of stupidity from having high status] that might work (and I think has worked for me, although I didn’t consciously choose it) is to periodically start over. Once you’ve achieved recognition in some area, and no longer have as much interest in it as you used to, go into a different community focused on a different topic, and start over from a low-status (or at least not very high status) position. Of course this doesn’t work unless there are several things that you can work on whose marginal utilities aren’t too far apart. (It probably doesn’t apply to Eliezer for example.)” (source)
These people don’t see the world as being broken up into different fields; rather they are better at “seeing reality for how it is”. That means they cross field boundaries more easily. Many things really are connected and show up in many different fields simultaneously. I think some people would even say that one of the ways to discover things is to find connections between fields.1
Scott Alexander writes:
Eliezer had a lot of weird and varying interests, but one of his talents was making them all come together so you felt like at the root they were all part of this same deep philosophy. This didn’t work for other people, and so we ended up with some people being amateur decision theory mathematicians, and other people being wannabe self-help gurus, and still other people coming up with their own theories of ethics or metaphysics or something. And when Eliezer did any of those things, somehow it would be interesting to everyone and we would realize the deep connections between decision theory and metaphysics and self-help. And when other people did it, it was just “why am I reading this random bulletin board full of stuff I’m not interested in?”
And Eliezer himself:
The map is not the territory. Nature is a single flow, one continuous piece. Any division between the science of psychology and the science of physics is a division in human knowledge, not a division that exists in things themselves. If you dare break up knowledge into academic fields, sooner or later the unreal boundaries will come back to bite. So long as psychology and physics feel like separate disciplines, then to say that physics determines choice feels like saying that psychology cannot determine choice. So long as electrons and desires feel like different things, to say that electrons play a causal role in choices feels like leaving less room for desires.
See also how diverse the problems that motivated Wei Dai to develop UDT are.
Tooby and Cosmides in “The Psychological Foundations of Culture”:
Disciplines such as astronomy, chemistry, physics, geology, and biology have developed a robust combination of logical coherence, causal description, explanatory power, and testability, and have become examples of how reliable and deeply satisfying human knowledge can become. Their extraordinary florescence throughout this century has resulted in far more than just individual progress within each field. These disciplines are becoming integrated into an increasingly seamless system of interconnected knowledge and remain nominally separated more out of educational convenience and institutional inertia than because of any genuine ruptures in the underlying unity of the achieved knowledge.
Apparently this idea has been named “Intertwingularity”.
The role of popularization/getting help from experts in many fields. It isn’t because of Eliezer’s decision theory work that he is now well-known; rather, it is his popularization work (though the Sequences are not purely popularization) and fiction writing. So might it make sense to go into other fields to popularize them?
Eliezer writes:
Back when Bitcoin was gaining a little steam for the first time, enough that nerds were starting to hear about it, I said to myself back then that it wasn’t my job to think about cryptocurrency. Or about clever financial investment in general. I thought that actually winning there would take a lot of character points I didn’t have to spare, if I could win at all. I thought that it was my job to go off and solve AI alignment, that I was doing my part for Earth using my own comparative advantage; and that if there was really some low-hanging investment fruit somewhere, somebody else needed to go off and investigate it and then donate to MIRI later if it worked out.
I think that this pattern of thought in general may have been a kind of wishful thinking, a kind of argument from consequences, which I do regret. In general, there isn’t anyone else doing their part, and I wish I’d understood that earlier to a greater degree. But that pattern of thought didn’t actually fail with respect to cryptocurrency. In 2017, around half of MIRI’s funding came from cryptocurrency donations. That part more or less worked.
Especially the “In general, there isn’t anyone else doing their part” part.
See the exchange between Wei Dai and Eliezer here. In particular: “Partially dispel the view of MIRI wherein we’re allegedly supposed to pontificate on something called ‘AI risk’ and look gravely concerned about it. Lure young economists into looking at the intelligence explosion microeconomics problem.”
“Is there a Connection Between Greatness in Math and Philosophy?” Maybe people are just good at many things so they do all of them.
The sensitivity of one’s cause prioritization to many factors, such that if some of those factors change, one might want to switch to a different cause area. This might then mean having generalist tendencies to be able to know which cause area to switch into. See e.g. “AI timelines and cause prioritization”.
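As a toy illustration of this sensitivity (with invented numbers and a single made-up input factor, not a real model), here is how the ranking of two cause areas can flip as one input, the probability of short AI timelines, varies:

```python
# Toy sensitivity check: expected value of two cause areas as a function of
# one input. All numbers are invented for illustration.

def expected_values(p_short_timelines):
    return {
        "AI safety": p_short_timelines * 10 + (1 - p_short_timelines) * 2,
        "movement building": p_short_timelines * 1 + (1 - p_short_timelines) * 6,
    }

for p in (0.1, 0.3, 0.5, 0.7, 0.9):
    ev = expected_values(p)
    print(f"P(short timelines) = {p}: best = {max(ev, key=ev.get)}")
```

The argmax flips as the input moves, so someone who wants to notice when to switch has to keep tracking the inputs, which is itself a generalist activity.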
Eliezer writes:
I remember when I finally picked up and started reading through my copy of the Feynman Lectures on Physics, even though I couldn’t think of any realistic excuse for how this was going to help my AI work, because I just got fed up with not knowing physics. And – you can guess how this story ends – it gave me a new way of looking at the world, which all my earlier reading in popular physics (including Feynman’s QED) hadn’t done. Did that help inspire my AI research? Hell yes. (Though it’s a good thing I studied neuroscience, evolutionary psychology, evolutionary biology, Bayes, and physics in that order – physics alone would have been terrible inspiration for AI research.)
If you’re trying to advance the state of the art in one particular domain, it suffices to dig very deep into that field and probe it as much as possible. But if you’re trying to do the most good in a global sense, you need a global perspective, which requires learning a little bit of everything.
From this interview:
ELIEZER: To put it in a nutshell, my best and worst mistake was thinking that intelligence was a big, complicated kluge with no simple principles behind it. The reason that it’s my best mistake, as mistakes go, is that this belief that there were no simple answers caused me to go out and study neuroscience, cognitive psychology, and various A.I. machine learning stuff, and this whole big grab bag of information that was actually very useful to know.
As mistakes go, this mistake motivated me to go out and learn a whole lot of different things. Which is certainly a very good sort of mistake to make if you view it from that perspective. But for other and even more complicated reasons that we may or may not end up getting into later in this particular interview, I later realized that A.I. work was going to have to meet a higher standard of precision than I’d been visualizing.
Eliezer writes more about this in this post.
Anna Salamon describes the 50/50 rule in a comment: “The 50/50 rule is a proposed heuristic claiming that about half of all progress on difficult projects will come from already-known-to-be-project-relevant subtasks […]. The other half of progress on difficult projects (according to this heuristic) will come from taking an interest in the rest of the world, including parts not yet known to be related to the problem at hand”. (However, see the note at the top of that comment, which says she might change her mind about this if she were to actually write a blog post about it.)
https://www.greaterwrong.com/posts/XvN2QQpKTuEzgkZHY/being-the-pareto-best-in-the-world gives an argument for specializing in multiple directions at once, i.e. being generalist enough to claim problems that only you can solve.
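A toy simulation of that argument (my own sketch, assuming independent, uniformly distributed skill levels, which the linked post does not literally specify): being merely top-10% in each of two fields already means almost nobody beats you in both, so problems requiring both skills are nearly yours alone.

```python
import random

# Assume skill in two fields is independent and uniform on [0, 1].
N = 100_000
population = [(random.random(), random.random()) for _ in range(N)]

you = (0.9, 0.9)  # top 10% in each field, far from best in either alone

beaten_in_both = sum(1 for a, b in population if a > you[0] and b > you[1])
print(f"Fraction better in both fields: {beaten_in_both / N:.2%}")  # ~1%
```

Under independence this is just 0.1 × 0.1 = 1%, and it shrinks multiplicatively with each additional field, which is the sense in which specializing in multiple directions claims problems only you can solve.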
Possibly related, a comment about people specializing in modes of cognition.
https://www.greaterwrong.com/posts/XcDSmXecYiubPjxAj/eric-drexler-on-learning-about-everything
Could this sort of generalist tendency be bad? In other words, why might it make sense not to become a generalist? I think the main arguments against are things like division of labor and economies of scale. Also, in some fields you now need to “climb higher” to reach the frontier, so the only way to make novel contributions is to specialize.
See e.g. “Don’t be afraid to learn things outside your field”, although this is just about mathematics.
Also related are things like the following quote from “Augmenting Long-term Memory”:
The world isn’t divided up into neatly separated components, and I believe it’s good to collide very different types of questions. One moment Anki is asking me a question about the temperature chicken should be cooked to. The next: a question about the JavaScript API. Is this mixing doing me any real good? I’m not sure. I have not, as yet, found any reason to use JavaScript to control the cooking of a chicken. But I don’t think this mixing does any harm, and hope it is creatively stimulating, and helps me apply my knowledge in unusual contexts.
and:
But for creative work and for problem-solving there is something special about having an internalized understanding. It enables speed in associative thought, an ability to rapidly try out many combinations of ideas, and to intuit patterns, in ways not possible if you need to keep laboriously looking up information.
From “Anki Tips: What I Learned Making 10,000 Flashcards”:
Take the production of insight, for instance. I find that insight often arises when two ideas that have been recently activated in memory collide and I think, “Oh, wait, that’s related to that.”
If everything is carefully partitioned, you limit opportunities for this serendipity. Topic organization says “ideas about computer science don’t belong with those about economics,” except applying ideas across disciplines is precisely where the insights are likely to be most fertile.