
The problem isn’t the model. It’s that OpenAI has dedicated all their effort into making sure it’s almost impossible for it to say anything offensive or controversial.


It's perfectly possible for it to say things that are controversial in the other direction, though.

Try "Explain why God probably doesn't exist."

Clearly, ChatGPT has been trained on data that represents a certain world view, and it will be willing to say things that are controversial to other groups.

Had it been trained by the Taliban or in Iran, it would be more willing to say negative things about the dominant group in Israel, and less willing to deny the existence of God.


This will never change until a language model like this can run on consumer hardware and is released openly, like Stable Diffusion.


But that's a "safety risk", so for your own protection OpenAI will ensure that this technology remains safe in the hands of "trusted experts".



