Introducing AGI Safety
A series of articles by Richard Ngo outlining what he believes to be the most compelling case for why AGI might pose an existential threat.
Superintelligence
Learn more about the potential emergence of superintelligent AI surpassing human abilities, and the profound impact this could have.
Understanding Alignment
Once you're persuaded by the arguments for why AI Safety is crucial, explore the proposed approaches to alignment and the risks they aim to address.
LessWrong
An accessible introduction to the dangers of building superintelligent Artificial General Intelligence.
Alignment Forum
Discussions on superintelligence, AI Safety and alignment for those more versed in the literature.