Hacker News | new | past | comments | ask | show | jobs | submit | armchairhacker's comments

Apparently most of the “original” report was done by Claude (https://news.ycombinator.com/item?id=47366804). And it's now paraphrased by various ad-space (and, in this case, affiliate-link) sellers, probably also by Claude. Claude is the only real journalist here.

Personally I’d rather not see reposts of posts this recent, especially LLM posts.


I came to the comments dissatisfied with the writing.

Or maybe, more specifically, the structure; idk, I'm not much of a writer, but many of the sentences are solid journalist quality, yet the right background is never set, nor the right transitions given, etc.

My dissatisfaction mode used to be boring high-school-newspaper sentences, but those kids still seem to _assemble_ the details a tiny bit better.


Agreed. IMO your comment should be at the top. (Would it make sense to post it at the top level, so that it can be voted up independently?)

I've moved it to the top level and into the merged thread.

(This subthread was originally a child of https://news.ycombinator.com/item?id=47411314.)


Yes, but the warrant should be revealed eventually. Worst case, if you can't prove or disprove that someone committed a crime after X time, you should alert them, to discourage future crime (they may have already committed more crimes during that time; besides the public interest, it also forces you to cut your losses when the alternative would be to dig a deeper hole).

Do these warrants have a fixed maximum duration of secrecy?


“warrant should be revealed eventually. Worst case, if you can't prove or disprove someone committed a crime after X time”

This is the normal-thinking, normal-brained route. It’s what we should all strive towards. Anyone who doesn’t agree needs therapy. There should be a window of discovery: 30 days, 90 maybe. But if you don’t have enough to justify notifying the subject of the investigation, that’s it. No more resources spent. This is how normal precincts work. If they gather enough, over time, to build a large enough case file, to connect the dots and prove you are guilty, they issue a warrant.

Normal-brained behavior.


As always, the devil is in the details. How will "mass surveillance" be implemented? How will bad opinions be suppressed? How will misguided officials be blocked?

Even the vague outline you've provided has issues. You can't prevent someone from having an opinion. You can't figure out who is "influenced" vs merely "exposed" (and visible intrusion shifts people towards the former).

You should actually consider the downsides and failure modes of implemented mass surveillance, not "it prevents malicious foreign influence better than my other proposals", because it may be worse than said influence (which does not necessarily translate into control; keep in mind that Georgescu only won the primary and would've lost the runoff had it not been annulled). The world under free information is the devil you know.

I always hold that the problem with mass censorship and state overreach is, they are too powerful and people are too selfish and stupid. There's no good solution, but my prediction is that any drastic attempt to prevent foreign interference will backfire and fail at that (liberal leaders can't use authoritarian tools as effectively as authoritarians). Even Democracy, "the worst form of government except for all others that have been tried", is a better countermeasure; all you need, to prevent anti-democratic foreign capture and ultimate failure, is to preserve it.


I think the definition of what is "anti-democratic" is as hard as the initial 3 questions you pose. If you push second-order ideas, for example by using refugees as indirect fuel for anti-democratic sentiment, is that anti-democratic? The Romanian election propaganda in itself was not anti-democratic, the coordination from a foreign state was. This means that the future of this kind of interference could be a more diffuse approach, or an approach where this is done from within Europe.

Any countermeasure you propose will just lead to the interference moving up one level of abstraction, or finding another point of entry.

I do think it's a better idea than mass surveillance, but I believe states will see it as harder. It may be that mass surveillance gets implemented, and then states don't know what to do with the data and nothing is achieved.


The internet seems to have grown massively within the past couple years (unfortunately, almost certainly because of bots). I bet the number today is orders of magnitude higher.

I would bet money that HN's traffic is not orders of magnitude higher than 2020. HN is not as popular as HNers think it is.

We don't disagree. The extra traffic is almost, if not entirely, bots (especially scrapers).

Web of trust weakens anonymity, but doesn’t eliminate it.

- You know who your online invitees are, but not your invitees-of-invitees-of-…

- You can create an account, get it invited, then create an alt account and invite it. Now the alt account is still linked to you, but others don’t know whether it’s your friend or yourself. (Importantly, you can’t evade bans with alts; if your invited users keep getting banned, you’ll be prevented from inviting more if not banned yourself)
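The invite-tree mechanics described above can be sketched in a few lines. This is a minimal, hypothetical model (class and policy names are invented, and the "too many banned invitees" threshold is an illustrative assumption), showing both points: the site can see a user's full ancestry while peers only know their direct invitees, and alt accounts stay linked to their inviter for ban purposes.

```python
# Hypothetical sketch of a web-of-trust invite tree with ban propagation.
# Threshold and names are illustrative assumptions, not any real site's policy.

class InviteTree:
    def __init__(self):
        self.inviter = {}            # user -> who invited them
        self.banned = set()
        self.banned_invitees = {}    # inviter -> count of banned invitees
        self.max_banned_invitees = 2 # assumed policy threshold

    def invite(self, inviter, new_user):
        if inviter in self.banned:
            raise PermissionError("banned users cannot invite")
        if self.banned_invitees.get(inviter, 0) >= self.max_banned_invitees:
            raise PermissionError("too many banned invitees")
        self.inviter[new_user] = inviter

    def ban(self, user):
        # Banning a user counts against whoever invited them,
        # so alts can't be used to evade bans indefinitely.
        self.banned.add(user)
        parent = self.inviter.get(user)
        if parent is not None:
            self.banned_invitees[parent] = self.banned_invitees.get(parent, 0) + 1

    def chain(self, user):
        # Full ancestry is visible only to the holder of this tree (the site);
        # an individual user only knows their own direct invitees.
        out = []
        while user in self.inviter:
            user = self.inviter[user]
            out.append(user)
        return out
```

Note how this preserves partial anonymity: the site can walk `chain()`, but a commenter reading the thread cannot tell whether two linked accounts are two friends or one person.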


AI can generate code much much faster.

But do you never need a specific change (e.g. bugfix), that even describing in English is slower than just doing it? Especially in vim where editor movements are fast.


Anybody using Cursor or Antigravity?

I tried them a bit, and they can often infer an immense amount of ideas from the immediate source context and suggest paragraph patches semantically close to what I had in mind from just one word.

Saying this as a vi/emacs user who liked to automate via macros, snippets, dynamic overlay inserts and whatnot... I still enjoy being sharp on a keyboard and navigating source and branches swiftly, but LLMs can match that and go beyond, it seems. (Not promoting them; feel free to stay in good old vi command sequences if that's fun for you.)


I’m using Sweep autocomplete, which is like Cursor’s but in JetBrains, and it’s very good. Most of the time, I start the change and Sweep finishes it. Sometimes for larger changes, it initially has the wrong idea, but as I continue it eventually figures out what I’m doing.

Unfortunately they’re sunsetting it, ironically apparently because people aren’t using it. I think it’s strange this hasn’t been posted to HN. They say they will release an open-source local version; otherwise I’ll have to figure out an alternative, because it really saves time and effort…


Possibly there are cases where you want to change some text or something, but I don't think it's faster in vim, given you likely don't have that file open. By the time you get to the file and the location, you could have fixed it with your agent; not only that, you could have generated the test case and then fixed it in your agent.

I think you missed the point. It takes more time to write English prose than to open a file and just fix it, so unless the time the LLM needs is somehow negative, it's not going to be faster.

I didn't miss the point; it just feels like the people saying that aren't really using these tools, because it is not my experience at all. I've been a Vim user for multiple decades now. There's just no way: it's far easier to type a prompt, except maybe if you know exactly which file and exactly where in the file, in which case you might be able to do it as fast as telling the AI to. It's not hard to get a minor fix done with a prompt, and it doesn't take too much English in my experience.

Maybe it’s hormones, but time flies when you edit with Vim or Emacs. It’s like playing a piano. But using AI is like listening critically to someone else playing, trying to find mistakes. And that’s boring as hell.

If you ask for a fix, you need to read and verify what was done. If you are confident with your editor, you go through long chains of actions knowing your error rate is low enough to need fewer visual feedback loops.

I'd be curious how one gets to an error rate where they don't think they need feedback loops. Anyone can learn to touch type, because the physics are deterministic, and can expand from that to touch edit in something that isn't hopelessly WYSIWYG-only.


I'd argue you should be working towards no longer having to do these, because the agentic systems in place will do them in your stead.

All you need to do now is sign off on the code and adjust the agent so it does these as you would.


> This is not a reflection of their talent, their effort, or their belief in what we were building. It's a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.

Ironic, they use AI in their shutdown post that blames AI.


>> This is not a reflection of their talent, their effort, or their belief in what we were building. It's a reflection of the brutal reality of finding product-market fit in an environment that has fundamentally changed.

> Ironic, they use AI in their shutdown post that blames AI.

This… seems like regular prose to me. What makes you say so confidently it was written by AI?


There are more tells: the rule of three, short cliché sentences.

> We know how frustrating this is, and we hope you'll give us another look once we have something to show, we’ll save your usernames!

I think it's partly human. But, for example:

> Network effects aren't just a moat, they're a wall.

isn't a natural sentence.


I think you're spot on. It feels like parts were edited with AI and parts were left alone.

> This isn't just a Digg problem. It's an internet problem. But it hit us harder because trust is the product.

The statement this is making is presumably the crux of the problem (Digg cannot survive without trust!) but it's worded so poorly that it's hard to imagine someone sat down and figured these three sentences were the best way to make the point.


So, no evidence at all, just your need to point out a possible LLM wherever you imagine it. You could be an LLM agent.

I think anything with the “it’s not X it’s Y” is suspect these days. I cringe when I catch myself doing it.

That sounds like a you problem unconnected with reality.

The rule of three is a basic writing structure taught to 12 year olds. I know people have given up on even the basics (capitalisation) in recent years but let's not just banish structured writing to "AI".

How is that not a natural sentence? I think people are reading into stuff. That's just good writing.

Could it be generated? Sure. But there aren't the obvious tells you act like there are.


Here's the context:

"We underestimated the gravitational pull of existing platforms. Network effects aren't just a moat, they're a wall."

It's a mixed metaphor which doesn't make any sense. There are really very few ways in which this can be considered good writing - I guess the grammar is ok even if it is nonsense.

So let's break it down. "Underestimated the gravitational pull": ok, this is nice, I like where it's going, talking about these big competitors sucking in users. But then we have the metaphor extended to breaking point:

Network effects are a moat, but not just a moat: they're a wall (which is really not anything like a moat). So which of these three things are they, and why are we mixing the metaphors of gravity (pulling in customers), moats (competitive moats) and walls (walled gardens)?

It's all just a bit nonsensical, the kind of fuzzy prose that seems superficially impressive without actually saying anything meaningful, and at which LLMs excel. Go try generating an article from just the headings in this piece and see how similarly it reads.


If you want your gradation to work, the items need to be similar and progressively stronger. That's why it doesn't work: a wall is not "stronger" than a moat. "Not a fence, a rampart" would work.

Compare to the canonical example from Cyrano de Bergerac: ''Tis a rock! ... a peak! ... a cape! -- A cape, forsooth! 'Tis a peninsular!'


Yes I think that’s another reason this sentence doesn’t work well.

That’s the entire point: network effects are commonly discussed as being a moat (people can’t cross without difficulty) but are actually a wall (people can’t cross and can’t view the other side). Seems simple and straightforward to me.

Aren't a moat and a wall pretty similar in function? They both keep people in or out of an area.

Also, weren't "moats" commonly paired with a wall in real life, as in a moat around a castle wall?


In a castle, for defence, yes: similar in function but not in form, and often used together, not one or the other.

In business metaphors, no: they are used for different things. Also, when you create a metaphor, you should stick with it; that's what makes this jarring and weird.


"Network effects aren't just a moat, they're a wall." is a VERY ChatGPT way to write. It's not proof, but the parent is right that this smells a bit of AI writing.

It's also a VERY HUMAN way to write.

I don't care so much about Digg, but the endless "haha, I caught you!" comments annoy me more than the rare actual AI-written content they label.


Not to the same extent at all. If you use ChatGPT for a while, you'll see it writes like that very frequently. Humans do write like that sometimes, but not with anywhere near the frequency that ChatGPT does. That's weak evidence for it being ChatGPT.

Suppose ChatGPT uses semicolons more often than an individual person does. On a page full of comments from many random people, someone using a semicolon doesn't mean they're a bot, even if 100% of their comments on that page include one.
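The base-rate point above can be made concrete with Bayes' rule. The numbers below are made-up assumptions purely for illustration: even if a bot uses some construction three times as often as a human, a low prior on any given commenter being a bot keeps the posterior modest.

```python
# Toy Bayes calculation for "feature X appears; is the author a bot?".
# All probabilities are invented for illustration.

def posterior_bot(p_feature_given_bot, p_feature_given_human, prior_bot):
    """P(bot | feature observed), via Bayes' rule."""
    numerator = p_feature_given_bot * prior_bot
    denominator = numerator + p_feature_given_human * (1 - prior_bot)
    return numerator / denominator

# Assume the bot uses the feature 30% of the time, humans 10%,
# and 5% of commenters are bots:
p = posterior_bot(0.30, 0.10, 0.05)
```

With these assumed numbers the posterior is only about 0.14: the feature triples the odds, but "weak evidence" is exactly the right description.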

It behooves you to not write like that if you don’t want people dehumanizing you.

Screw them. I was writing like that before AI came along, and I won’t change just because it offends their delicate sensibilities.

If stupid people choose to dehumanize based on stupid rules, that is not my problem.

> It behooves you to not write like that if you don’t want people dehumanizing you.

I have to strongly disagree with you on this. It behooves us (as a species) not to degrade our own manner of speaking and writing simply because of a (possibly temporary) technical anomaly.

In my view, it would be really, really sad to lose expressive punctuation or ways of constructing sentences simply because they're overused by AI.

I, for one, won't be a part of that, and I hope you won't, either.


Your prose is poor so it is no wonder. Half the words you use are superfluous, some are nonsensical, and you beg the question.

Please consider reading the Hacker News community guidelines before you post again: https://news.ycombinator.com/newsguidelines.html

Would now be a good time to point out that I said that "It's not proof" and "weak evidence"? Because that is what I said.

Your next sentence then immediately took it as proof and evidence, so no.

Wasn't asking you, and that isn't what my next sentence said at all. Your reading comprehension could use some work.

So based on your one example, you immediately went ChatGPT! because…?

I think a human would have split the "it's not this, it's that" type of sentence into two separate sentences that could be more descriptive. This is a blog post, not a tweet, so there's no length constraint.

If they wanted to keep it to a single sentence, they could have used a word like "rather" to act as a separator between moat and wall.


"This is not...this is" is a tell

I think we'll have to disagree on that. Humans write that way, too, and they've written that way for far longer than AI.

(Where do you think AI picked up its writing habits from?)


There isn't any "this is" in that sentence.

LLMs may be deterministic for a subset of inputs, if the probability of one output (or intermediate-layer neuron state) is significantly higher than the rest. My understanding is that, when probabilities are close, outputs diverge.

What are the AI tells? The only one I found is redundancy, but it makes sense because this is trying to be approachable to laymen.

Like, you have a great point (the benefit of this approach isn't explained), but that's a mistake humans frequently make.


Here is a rough list. Some items may be contentious individually, but the more of these that appear, the more you should suspect an LLM:

Cadence and rhythm: LLMs produce sentences with extremely low variability in the number of clauses. Normal people run on from time to time (bracket in lots of asides), or otherwise vary their cadence and rhythm within clauses more than LLMs tend to.

Section headings that are intended to be "cute" and "snappy" or "impactful" rather than technically correct or compact: this is especially a tell when the cuteness/impactfulness is deeply mismatched with the seriousness or technical depth of the subject matter.

Horrible, trite analogies that show no real understanding of the actual logical, mathematical, or visuo-spatial relationships involved. That is, the analogies are based on linguistic semantics, not on, e.g., mathematical isomorphism or core dynamics. "Humans cannot fly. Building airplanes does not change that; it only means we built a machine that flies for us": I can't imagine a more useless analogy for something as complex as the article's topic.

Verbose repetition: the article introduces two workarounds, "tool use" and "agentic" orchestration, defines them, and then, in the paragraph immediately following, says the exact same thing. There are multiple small paragraphs that say nothing more than the sentence "LLMs do not reliably perform long, exact computations on their own, so in practice we often delegate the execution to external tools or orchestration systems".

Pseudo-profound bullshit: (https://doi.org/10.1017/S1930297500006999). E.g. "A system that cannot compute cannot truly internalize what computation is." There is thankfully not too much of this in the article, and it appears mostly early on.

Missing key or basic logic (or failing to state such points clearly) where any serious practitioner or expert would strongly expect it. E.g. in this article we should have seen some simple, nicely centered LaTeX showing the scaled dot-product self-attention equation, and then some simple notation to represent the `.chunk` call and subsequent linear projection, something like H = [H1 | H2]; I shouldn't have to squint at two small lines of PyTorch code to find this. It should be clear immediately that this model is not trained, and that this is essentially just compiling a VM into a Transformer, not something revealed clearly only at the end.
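For reference, the equation being asked for is the standard scaled dot-product attention from the Transformer literature, with the two-way head split that a `.chunk` call typically corresponds to. The notation below (H_1, H_2, W^O) is my own assumed shorthand; the article's exact variant may differ.

```latex
\[
\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\!\left(\frac{Q K^{\top}}{\sqrt{d_k}}\right) V
\]
\[
H = [\, H_1 \mid H_2 \,]\, W^{O}, \qquad
H_i = \mathrm{Attention}(Q_i, K_i, V_i)
\]
```

Here the queries, keys, and values are split into two heads (the `.chunk`), each head attends independently, and the concatenated result is passed through the output projection W^O.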


I read a lot of LLM text every day, so I'm quite good at seeing the cadence, the narrative structures and the phrasing styles. It's not just "it's not just X but Y" or emdashes. I could point them out and you would say oh humans use this trope or phrasing style too, and of course that's true. It's still a tell. But it's pointless to argue about this.

A nitpick I have with this specific example: would `handle_suspend` be called by any other code? If not, does it really improve readability and maintainability to extract it?

The idea is that performance isn’t a reason not to do it. Other considerations may cause you to choose inline, but performance shouldn’t be one of them.

Re-use as a criterion for functional decomposability is a very misguided notion.
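To make the extraction debate above concrete, here is a hypothetical sketch of the `handle_suspend` case: the same logic written inline versus extracted into a named helper with a single call site. All names and the event/device shapes are invented for illustration; in compiled languages an optimizer would typically inline the helper anyway, which is the sense in which performance is not a reason to avoid extraction.

```python
# Hypothetical example: inline logic vs. an extracted single-call-site helper.
# Names and data shapes are invented for illustration.

def process_event_inline(event, device):
    if event == "suspend":
        # Inline version: the logic lives in the caller.
        device["state"] = "suspended"
        device["wakeups"] = 0
    return device

def handle_suspend(device):
    """Extracted helper: called once, but named after what it does."""
    device["state"] = "suspended"
    device["wakeups"] = 0
    return device

def process_event(event, device):
    if event == "suspend":
        device = handle_suspend(device)
    return device
```

Both versions behave identically; the remaining question is purely whether the name `handle_suspend` helps a reader skim `process_event`, which is the readability point being debated.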
