Depends on your use case. If you don't need them to be the source of truth, then they work great, but if you do, the experience sucks because they're so unreliable.

The problems start when people start hyperventilating because LLMs can generate tests for a function, and conclude from that that they'll be replacing engineers soon. They're only suitable for generating output you can easily verify to be correct.
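
To make the "easily verifiable output" point concrete, here's a hypothetical sketch (the function and test names are made up, not from any real project): a generated unit test is the kind of artifact where every assertion can be checked by eye, so you don't have to trust the model at all.

    import re

    def slugify(title: str) -> str:
        """Lowercase a title and replace runs of non-alphanumerics with '-'."""
        return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

    # The kind of test an LLM might generate: each case is trivial to verify.
    def test_slugify():
        assert slugify("Hello, World!") == "hello-world"
        assert slugify("  Already-slugged  ") == "already-slugged"
        assert slugify("123 Go") == "123-go"

    if __name__ == "__main__":
        test_slugify()
        print("all assertions passed")

If the model hallucinates a wrong expected value here, the test simply fails and you notice; contrast that with asking it for a fact you can't check, where the failure is silent.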



Indeed, isn’t that the point?

LLM training is designed to distill a massive corpus of facts, in the form of token sequences, into a much, much smaller bundle of information that encodes (somehow!) the deep structure of those facts minus their particulars.

They’re not search engines, they’re abstract pattern matchers.
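
A rough sense of scale helps here. This is a hedged back-of-the-envelope sketch; every number below is an assumption for illustration, not a claim about any particular model:

    # Back-of-the-envelope: training corpus size vs. model size.
    # All figures are rough assumptions, not measurements.

    training_tokens = 15e12       # assumed: ~15 trillion training tokens
    bytes_per_token = 4           # assumed: ~4 bytes of text per token
    parameters = 70e9             # assumed: a ~70B-parameter model
    bytes_per_parameter = 2       # assumed: 16-bit weights

    corpus_bytes = training_tokens * bytes_per_token
    model_bytes = parameters * bytes_per_parameter

    print(f"corpus ~{corpus_bytes / 1e12:.0f} TB, model ~{model_bytes / 1e9:.0f} GB")
    print(f"compression factor ~{corpus_bytes / model_bytes:.0f}x")

With numbers in that ballpark the weights are hundreds of times smaller than the training text, so verbatim retention is impossible; some lossy abstraction of the corpus is forced by the arithmetic alone.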



