Hacker News

Reasoning model prompting is relevant here. OpenAI's docs state that giving detailed, step-by-step instructions often hinders reasoning models. It's better to clearly define the desired outcome along with what you're working with, then let the model fill in the steps.
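A hypothetical sketch of the contrast (the prompt wording, the `build_messages` helper, and the chat message shape are all assumptions for illustration, not taken from OpenAI's docs):

```python
# Hypothetical illustration of the two prompting styles for a reasoning model.
# Prompt text below is invented for the example, not quoted from any docs.

# Over-specified: step-by-step instructions that can hinder a reasoning model.
step_by_step_prompt = (
    "Refactor this function. First, extract the validation logic. "
    "Second, rename the variables. Third, add type hints. "
    "Fourth, write a docstring."
)

# Outcome-focused: state the goal and the context, let the model plan.
outcome_prompt = (
    "Goal: make this function easier to test and maintain. "
    "Context: it's part of a request handler; behavior must not change."
)

def build_messages(prompt: str, code: str) -> list[dict]:
    """Assemble a chat-style message list (shape assumed) pairing the
    prompt with the material the model is working with."""
    return [{"role": "user", "content": f"{prompt}\n\n{code}"}]

messages = build_messages(outcome_prompt, "def handler(req): ...")
print(messages[0]["role"])  # user
```

The point is only the shape of the prompt: the outcome-focused version states the goal and constraints and leaves the plan to the model.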

