Hacker News

It really depends on the task. Like Sabine, I'm operating on the very frontier of a scientific domain that is extremely niche. Every single LLM out there is worse than useless in this domain: it spits out incomprehensible garbage.

But ask it to solve some leet code and it’s brilliant.



The question I ask afterwards then is: is solving some leet code brilliant? Is designing a simple inventory system brilliant if they've all been accomplished already? My answer tends towards no, since they still make mistakes in the process, and it keeps newer developers from learning.


It's a manner of speech. That said, I have seen LLMs do brilliant work. There are just some things, like the hard sciences, where their understanding is only surface deep.


They fail at tasks that aren't extremely niche as well.

I should start collecting examples, if only for threads like this. Recently I tried to get an LLM to write a tsserver plugin that treats lines ending with "//del" as empty. You can only imagine all the sneaky failures in the chat and the total uselessness of the results.
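For what it's worth, the core transform such a plugin needs is simple to state; it's the tsserver plugin wiring around it (wrapping the language service host so the blanked lines are what the compiler actually sees) where models tend to invent APIs. A minimal sketch of just the transform, with the "//del" marker taken from the comment above and everything else assumed:

```typescript
// Hypothetical core of the plugin described above: blank out any line
// ending in "//del". Lines are replaced with empty strings rather than
// removed, so line numbers in diagnostics still match the original file.
function stripDelLines(source: string): string {
  return source
    .split("\n")
    .map(line => /\/\/del\s*$/.test(line) ? "" : line)
    .join("\n");
}
```

In a real tsserver plugin this would have to be applied inside a wrapped `getScriptSnapshot`, which is exactly the part the LLM kept getting wrong.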

Anything that does not appear literally millions (billions?) of times in the training set is doomed to be fantasized about by an LLM, in various ways, tones, etc. After many such threads I came to the conclusion that the people who find it mostly useful are simply treading water, as they probably have done for most of their careers. Their average product is a React form with a CRUD endpoint, and excitement about it. I can't explain their success reports otherwise, because it rarely works on anything beyond that.


LLMs are basically a search engine for Stack Overflow and Github that doesn't suck as bad as Google does.

If your job is copy-pasting from Stack Overflow then LLMs are an upgrade.


Welcome to the new digital divide people, and the start of a new level of "inequality" in this world. This thread is proof that we've diverged and there is a huge subset of people that will not have their minds changed easily.


Surely you understand why an LLM that has no knowledge of your niche wouldn't be useful, right?


Hallucinating incorrect information is worse than useless. It is actively harmful.

I wonder how much this affects our fundraising, for example. No VC understands the science here, so they turn to advisors (which is great!) or to LLMs… which has us starting off on the wrong foot.


Good thing humans never make mistakes.


Good scientists and engineers know how to say “I don’t know.”


I work in a field that is not even close to a scientific niche — software reverse engineering — and LLMs will happily lie to me all the time, for every question I have. I find it useful for generating some initial boilerplate, but... that's it. AI autocompletion has saved me an order of magnitude more time, and nobody is hyped about it.


How many actual humans are useful in your niche scientific domain?

And how many actual humans, with a fair bit of training, can become a little bit less than useless?

I mean, my parents used to have this dog that would just look at you like "go get your own damn ball, stupid human" if you threw a ball for him.

--edit--

and, yes, the dog also made grammatical mistakes.


Sabine is Lex Fridman for women. Stay in your lane of quantum physics and stop trying to opine on LLMs. I'm tired of seeing the huge amount of FUD from her.


What she is saying is correct about the utility of LLMs in scientific research though.

When your user says that your product doesn’t work for them, saying they’re using it wrong is not an excuse.


I think Lex Fridman is far, far worse.



