(Linkpost) Results for a survey of tool use and workflows in alignment research
On March 22nd, 2022, we released a survey with an accompanying post for the purpose of getting more insight into
How learning efficiently applies to alignment research
As we are trying to optimize for actually solving the problem, we should not fall into the trap of learning
Differential Training Process: Delaying capabilities until inner aligned
I've been ruminating on an idea ever since I read the section on deception in "The Core
Near-Term AI capabilities probably bring low-hanging fruits for global poverty/health
I'm an alignment researcher, but I still think we should be vigilant about how models like GPT-N could
Is the "Valley of Confused Abstractions" real?
Epistemic Status: Quite confused. Using this short post as a signal for discussion.
Here's a link to the
Foresight for AGI Safety Strategy
For discussion: Link to LessWrong post. Link to EA Forum post.
This post is about why I think we should
Notes on Cicero
Link to YouTube explanation by Yannic Kilcher:
Link to paper (sharing on GDrive since it's behind a paywall)
Detail about factual knowledge in Transformers
This post is currently in the appendix of a much longer post I'm editing and waiting for
Current Thoughts on my Learning System
TL;DR of what I've been thinking about lately:
* Learning is a set of skills. You need to practice
What does "Effective" in EA mean to you?
In the lead-up to EAG SF, I took some time to think about what EA means to me.
When I