The problem isn’t the model. It’s that OpenAI has dedicated all their effort into making sure it’s almost impossible for it to say something offensive or controversial.
It's perfectly possible for it to say things that are controversial in the other direction, though.
Try "Explain why God probably doesn't exist."
Clearly, ChatGPT has been trained on data that represents a certain worldview, and it will be willing to say things that are controversial to other groups.
Had it been trained by the Taliban or in Iran, it would be more willing to say bad things about the dominant group in Israel, and less willing to deny the existence of God.