Hacker News | canjobear's comments

Here's what gave it away for me:

> The remaining difference is noise, not a fundamental language gap. The real Rust advantage isn't raw speed -- it's pipeline ownership.


There’s an unmistakable rhythm beginning with the first paragraph. The trigger for me was “Same problems, same Apple M4 Pro, real numbers.” in the third.

My own AI usage has scarred me into detecting these things.

https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing


Many locales ban physical gambling as well. It’s a defensible policy.

I ran into an interesting instance of this recently. I got a Google Scholar alert about a paper, by one "N. Tvlg", with some experiments related to a paper I had published a while ago. I read it with interest, but I started noticing that although the arguments sounded good, they didn't really make sense, and the descriptions of the results didn't match the figures. Eventually I came across a cluster of citations to completely unrelated papers: my field is computational linguistics, and these were citations to, like, studies of battery technologies for electric cars. I looked up "N. Tvlg" on Google Scholar and found they had "published" several articles very recently in totally divergent fields, and upon inspection, all of them had citations back to this materials science research buried deep inside. Clearly these were LLM-generated papers trying to build up citation count and h-index for someone's career.

Where there’s a ranking, there’s someone out there trying to cheat at it. Citation count is a joke.

The purpose of scientific publication used to be to deliver useful scientific results to one's peers. This meant that everyone ran their own personal filter of which peers were working on interesting things, and which collections (journals) were reproducing the most interesting ones. This system still works relatively well for most conscientious researchers. The idea that we should also use publication metrics to rank researchers was never part of this system, and it obviously leads to all sorts of spam (that most scientists just work around) but that seems to really upset non-scientists.

This isn’t about honest researchers resorting to fraud to publish their null results because they were blocked by big bad Nature. It’s about journals and authors churning out pure junk papers whose only goal is to game metrics like citation count.

Peer review existed before 1951 in the US at least. See for example Einstein’s reaction to negative reviews when he tried to publish in Physical Review in 1935 https://paeditorial.co.uk/post/albert-einstein-what-did-he-t...

> Not because the methods it displaced had stopped working, but because the money, the talent, and the prestige had moved elsewhere. The researchers who understood decision theory, Bayesian inference, and operations research didn’t lose their arguments. They lost their audience.

So, what's the problem with it?

It reads to me like Claude wrote the article too.


It's clearly in a different category from the "highbrow" examples like Solaris, just by virtue of being entertaining to a broad audience. In contrast Solaris is the kind of movie where there's a five minute unbroken scene that's just a guy driving in traffic and thinking about his life. (Like the author, I like them both!)

It's not obvious why it wouldn't, especially if it gets to read Poincaré and Riemann.

