
How learning efficiently applies to alignment research

Since we are trying to optimize for actually solving the problem, we should not fall into the trap of learning just to learn. Instead, we should focus on learning efficiently with respect to how it helps us generate insights that lead to a solution to alignment. This is also the framing to keep in mind when building tools to augment alignment researchers.

With the above in mind, I expect the value of learning efficiently to come from some of the following:

  • Efficient learning involves being hyper-focused on identifying the core concepts of a field and how they relate to one another. Approaching things this way helps us attack the core of alignment much more directly and bypass months or years of work on things that are only tangential.
  • Developing a foundation in a field seems key to generating useful insights. The goal is not to learn everything, but to build a foundation that keeps you from spending far too long on sub-optimal sub-problems or dead ends. Part of the point of foundation-building is to reduce the time it takes to shape you into an exceptional alignment researcher rather than a knower-of-things.
  • As John Wentworth says with respect to the Game Tree of Alignment: "The main reason for this exercise is that (according to me) most newcomers to alignment waste years on tackling not-very-high-value sub-problems or dead-end strategies."
  • Lastly, many great innovations have not come from uniquely original ideas. There is an iterative process passed among researchers, and the greatest ideas often seem to come from simply merging ideas that were already lying around. Learning efficiently (and storing those learnings for later use) increases the number of ideas you can merge. To do that efficiently, you need to get better at identifying which ideas are worth storing in your mental warehouse for future merging.

Link to LW comment:

jacquesthibs’s Shortform - LessWrong
Comment by jacquesthibs (the comment links to “The First Filter”: https://www.lesswrong.com/posts/smBkyR2GkzMtrKuwK/the-first-filter)