Hacker News

> You're using them wrong. Everyone is though I can't fault you specifically.

If everyone is using them wrong, I would argue that says more about the tools than about the users. Chat-based interfaces are what kicked LLMs into mainstream consciousness and started the trajectory we’re on now. If this is the wrong use case, everything the author said is still true.

There are still applications made better by LLMs, but they are a far cry from AGI/ASI in terms of being all-knowing problem solvers that don’t make mistakes. Language tasks like transcription and translation are valuable, but I would argue they come nowhere near accounting for the billions of dollars being spent on these platforms.



LLM providers actually have an incentive not to write literature on how to use LLMs optimally, as that adds friction, which means less engagement/money spent on the provider. There's also the typical tin-foil-hat explanation: "it's bad so you'll keep retrying to get the LLM to work, which means more money for us."


Isn't this more a product of the hype though? At worst you're describing a product marketing mistake, not some fundamental shortcoming of the tech. As you say "chat" isn't a use case, it's a language-based interface. The use case is language prediction, not an encyclopedic storage and recall of facts and specific quotes. If you are trying to get specific facts out of an LLM, you'd better be using it as an interface that accesses some other persistent knowledge store, which has been incorporated into all the major 'chat' products by now.
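The "interface over a persistent knowledge store" idea above is essentially retrieval-augmented prompting. A minimal sketch, assuming a toy in-memory store and keyword-overlap retrieval (both hypothetical placeholders, not any real product's API):

```python
# Toy sketch: use the LLM as a language interface over a knowledge
# store, rather than trusting its weights for fact recall.
# KNOWLEDGE_STORE, retrieve(), and the prompt format are illustrative
# assumptions only.

KNOWLEDGE_STORE = {
    "moon landing": "Apollo 11 landed on the Moon on July 20, 1969.",
    "python release": "Python 3.0 was released in December 2008.",
}

def retrieve(question: str) -> list[str]:
    """Return stored facts whose key words overlap the question."""
    q_words = set(question.lower().split())
    return [fact for key, fact in KNOWLEDGE_STORE.items()
            if q_words & set(key.split())]

def build_prompt(question: str) -> str:
    """Ground the model in retrieved facts instead of free recall."""
    facts = retrieve(question)
    context = "\n".join(f"- {f}" for f in facts) or "- (no matching facts)"
    return (f"Answer using only these facts:\n{context}\n\n"
            f"Question: {question}")

print(build_prompt("When was the moon landing?"))
```

Real "chat" products swap the dict for vector search or web search, but the shape is the same: retrieve first, then let the model phrase the answer.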


Surely you're not saying everyone is using them wrong. Let's say only 99% of them are using LLMs wrong, and the remaining 1% creates $100B of economic value. That's $100B of upside.

Yes, the costs of training AI models these days are really high too, but now we're making a quantitative argument, not a qualitative one.

The fact that we've discovered a near-magical tech that everyone wants to experiment with in various contexts is evidence that the tech is probably going somewhere.

Historically speaking, I don't think any scientific invention or technology has been adopted and experimented with so quickly and on such a massive scale as LLMs.

It's crazy that people like you dismiss the tech simply because people want to experiment with it. It's like some of you are against scientific experimentation for some reason.


“If everything smells like shit, check your shoe.”




