Hacker News | jojomodding's comments

Nonvoters implicitly consent to the outcome ahead of time. Which means that they can carry part of the blame--if they didn't like it they could have voted against it.

Is pdf.js the renderer that VSCode uses? You know, the one where everything becomes extremely blurry when you zoom in for absolutely no reason at all except (I would imagine) developer incompetence?

It's the one Firefox uses, and it works well.

> are free to sell 0day for profit.

This is not true in many jurisdictions.


Anyone can sell a vuln in any jurisdiction and never be caught. Let's not pretend the law is actually worth a damn here.

We need an anonymous bounty system.


Are you claiming that if I sell a 0day through a broker to the national government of a given jurisdiction, that jurisdiction's national government is going to criminally penalize me?

If so, that's a bit naive. In the actual world, that buyer wants to buy more stuff from me, not penalize me.


I sympathise with the author and the argument. I know the text is a rant, so I can understand that the proposed consequences might not make sense. Still, there is a fun game you can play, where you replace "AI" with "chess engine" and you get a text that would have been fitting for a late-90s chess grandmaster but would seem totally anachronistic today:

"Chess players who don't use engines will be left behind", they say. I can't emphasize enough how much I hate it when I hear/read shit like that because I'm pretty sure, in fact, that what will happen is the exact opposite.

People who rely on engines are the ones who will be left behind. They'll forget how to think, how to move the pieces, how to solve a simple straightforward mate in 3, how to tell victory from stalemate... they'll forget how to fucking LEARN. I think that's the part that makes me the saddest. What a beautiful thing it is just to play chess.

If you think Deep Blue can do better than you, why would you just let it? Why wouldn't you aim to be better, to learn how to be or do something that a chess computer would never do?


The problem is that AI is being pushed and used as the equivalent of a chess engine telling you the best move during a match.

Maybe there's a way AI can be used to make developers better, but it mostly just seems to be the equivalent of grandmasters saying how great vibe playing is because now they can play 1000x more games every day. But don't worry, they're still steering the games.


Sounds like something Magnus Carlsen might say. I hear he's doing quite well out of the game of chess, and pointedly not playing how a computer would play, even though Deep Blue is clearly capable of winning more than he is and from more difficult positions.

Also, the world isn't as trivially solved by computation as a game of chess, so maybe delegating your job or how to be a better human to ChatGPT isn't as much of a winning strategy as getting the computer to suggest chess moves.


That is a really accurate analogy.

Deeper reasoning, longer term planning, and more efficient solutions have always separated amateurs from experts. That experience cannot be applied asynchronously or reduced to supervision. It has to be "in the loop" and there is always a lot of out-of-band information that only an experienced eye would notice and can't be trained into a model.


> "Chess players who don't use engines will be left behind"

Unfortunately this is absolutely true for classical chess at the professional level, w.r.t. preparation.

Not detracting from your point though, for the other 99.9% of chess players.


I do not get why not needing proof objects is desirable. It seems good to have a defined way to store proofs that has a very tight spec and can thus have competing implementations, like in https://arena.lean-lang.org/. The LCF approach couples the proof format to the module system of a programming language.

Occasionally, inspecting that proof term is useful to see what happened in a proof.

Then again, I also like dependent types.


You can use reflection in dependently-typed proof systems to avoid storing proof objects for expensive computations, replacing them with a proof about a general algorithm instead (much like in the LCF approach: the correctness proof of the general algorithm plays the role of the compiler checking that an LCF tactic's use of the module system is sound).
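As a minimal sketch of proof by reflection in Lean 4 (names from core Lean; the example itself is illustrative, not from the thread):

```lean
-- `decide` invokes a verified decision procedure. The stored proof term is
-- essentially "run the algorithm and appeal to its correctness theorem"
-- plus one kernel computation, rather than a large step-by-step derivation.
example : 2 ^ 10 = 1024 := by decide
```

The proof object stays small because the heavy computation happens during kernel evaluation, not inside the stored term.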

It's software where you type your maths proofs in and a "yes" comes out. Except if your proof is broken, then a "no" comes out. Of course, sometimes the computer is just a bit dumb at intuiting the intermediate steps, so you need to explain them like you would to a 10-year-old child (or else you get a "no", having failed to convince the computer). Storing all these explanations takes memory. And you have to speak the somewhat idiosyncratic language of the computer, which you can imagine a bit like LaTeX but more easily parseable and less ambiguous.

The blog article author is saying that Lean is like a younger child (it needs more intermediate steps explained), stores proofs more inefficiently (taking more memory), and that its computer proof language is harder for humans to read. Additionally there is a technical point about dependent types, which are the principle around which the computer proof language is organized (the same way a programming language might be organized around object orientation).
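The "yes/no" interaction can be sketched in Lean 4 (the theorem names are from core Lean; the examples are mine, not the commenter's):

```lean
-- A proof the checker accepts: a "yes" comes out.
theorem add_comm_example (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b

-- Sometimes you must spell out intermediate steps the computer
-- won't intuit: here, rewriting away each `+ 0` explicitly.
example (a : Nat) : a + 0 + 0 = a := by
  rw [Nat.add_zero, Nat.add_zero]
```

Delete a step (say, one of the rewrites) and the checker answers "no": you failed to convince it.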


Thanks for taking the time to explain, rather than drive by downvote. It doesn't seem like this would be useful to me, or certainly I have no use for it that I can think of.

Giving something that has no internal concept of time (or identity for that matter) a prison sentence of n years seems kinda ineffectual.

Prison sentence? For writing sloppy code? Now that's an interesting idea...

“Generate 100,000 tokens about why you feel bad.” :P

Nothing is stopping you from using LLMs when contributing to their project (I think). One reason might simply be that they would rather spend the (very sparse) donation money on anything but tokens.

https://asahilinux.org/docs/project/policies/slop/

They do currently ban LLM-assisted submissions. To be honest, even if LLMs are technically capable of writing code that assists the project, this at least helps keep the floodgates closed against the kind of low-quality PRs that other open-source projects are getting.


Tell your kids about FUCK, the most fun way to play with them bricks...

I was going to say something about the choice of acronym for the project, but, well, a couple people figured it out. :D

Do you mean leaving them on a hard floor so a parent can injure their heel stepping on one at night?

Special high intensity training!

SHIT... I figured that one out.

Actually, his uncle.


Very cliquey at the top. Rabin is also the name of a former Israeli PM, so I wouldn't be surprised if they were related.

