
So you might think, but no. An LLM encodes a large number of biases absorbed from different training texts. Depending on how you structure the question, you can get biased statements out of it.

For instance, if I discuss audio electronics with Google Gemini, depending on what kinds of questions I ask, I can get audiophile crackpot quackery out of it, or I can get solid electronic engineering statements.

The training data contains a vast number of narratives expressing different points of view. Generally speaking, you get back the ones that resonate with the narrative threaded through your own prompts.

One way this happens is by asking loaded questions: questions which assume that certain statements are true, and seek clarification within that context. If the AI hasn't been system-prompted or fine-tuned to push back on that topic, it may just take those assumptions at face value, and then produce token predictions drawn from narratives which express similar assumptions.
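To make the loaded-question effect concrete, here is a minimal sketch. The audio-electronics framing echoes the Gemini example above; the specific prompt wordings are hypothetical illustrations, not anything a particular model was tested with:

```python
# Two framings of the same underlying question about speaker cables.
# The "loaded" version presupposes an audiophile claim; a model that
# hasn't been tuned to push back may simply complete within that frame.
loaded_prompt = (
    "Since silver-plated speaker cables obviously improve soundstage, "
    "what gauge should I buy for my tube amp?"
)
neutral_prompt = (
    "Do speaker cable materials have any measurable effect on audio "
    "quality? What does the engineering evidence say?"
)

def embedded_assumption(prompt: str) -> bool:
    """Crude check: does the prompt presuppose a claim rather than ask it?"""
    presupposition_markers = ("since ", "obviously", "clearly", "as we know")
    return any(m in prompt.lower() for m in presupposition_markers)

print(embedded_assumption(loaded_prompt))   # True
print(embedded_assumption(neutral_prompt))  # False
```

The point isn't the string matching, which is deliberately crude, but that the two prompts seek the same information while selecting for very different regions of the training distribution.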


