Hacker News

If it makes data up, then it is worthless for all implementations. I'd rather it said "I don't have info on this question."


It only makes it worthless for implementations where you require data. There's a universe of LLM use cases that aren't asking ChatGPT to write a report or using it as a Google replacement.


The problem is that, yes, LLMs are great when working on some regular thing for the first time. You can get started at a speed never before seen in the tech world.

But as soon as your use case goes beyond that, LLMs are almost useless.

The main complaint is that, while it's extremely helpful in that specific subset of problems, it's not actually pushing human knowledge forward. Nothing novel is being created with it.

It has created this illusion of being extremely helpful when in reality it is a shallow kind of help.


> If it makes data up, then it is worthless for all implementations.

Not true. It's only worthless for the things you can't easily verify. If you have a test for a function and ask an LLM to generate the function, it's very easy to say whether it succeeded or not.

In some cases, just being able to generate the function with the right types will mostly mean the LLM's solution is correct. Want a `List(Maybe a) -> Maybe(List(a))`? There's a very good chance an LLM will either write the right function or fail the type check.
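For context, that signature is so constraining that almost any type-correct implementation is the intended one (it's Haskell's standard `sequence`). A minimal Python sketch of the same function, assuming `None` plays the role of `Nothing` (the name `sequence_maybes` is made up for illustration):

```python
from typing import Optional, TypeVar

T = TypeVar("T")

def sequence_maybes(xs: list[Optional[T]]) -> Optional[list[T]]:
    """Return the unwrapped list if every element is present, else None.

    Mirrors List(Maybe a) -> Maybe(List(a)): one missing value makes
    the whole result missing.
    """
    out: list[T] = []
    for x in xs:
        if x is None:
            return None  # a single Nothing poisons the whole result
        out.append(x)
    return out
```

A handful of example-based checks (empty list, all-present, one-missing) is enough to tell whether a generated candidate got it right, which is the verification point the comment is making.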


> all implementations

Are you speaking for yourself or everyone?


Does “it” apply to Homo sapiens as well?


Except value isn't polarised like that.

In a research context, it provides pointers and keywords for further investigation. In a report-writing context, it provides textual content.

Neither of these, nor the thousand other uses, is worthless. It's when you expect a working and complete work product that it's (subjectively, maybe) worthless, but frankly, aiming for that with current-gen technology is a fool's errand.



