
Sure, lots of drudgery, but none of your examples are things that you could trust an LLM to do correctly when correctness counts. And correctness always counts in science.

Edit to add: regardless, I'm less interested in the "LLMs aren't ever useful to science" part of the point. The more important point is that actual LLM usage in science will mostly be in cases where the models seem useful but actually introduce subtle problems. I have observed this happening with trainees.



I have also seen trainees introduce subtle problems when they think they know more than they do.



