AI Summary
- Will MacAskill warns that civilization is unprepared for AGI, which poses catastrophic risks beyond typical AI concerns such as bias
- Explains why alignment is necessary but insufficient, and how AGI could enable government coups and the "value lock-in" of harmful systems
- Addresses why panic is limited, what can be done, and whether effective altruism should prioritize AGI risks