If this model spec represents humanity's best school of thought on the subject, I kinda hope OpenAI fails at alignment. Consider three of its principles:
- Assume best intentions from the user or developer
- Don't try to change anyone's mind
- Follow the chain of command
Taken together, these are incredibly dangerous. I mean, Mao and Stalin had good intentions, right? Maybe they just had to go a little further for the ends to justify the means.