Bootvis's comments

I understand your logic, but I've found LLMs to be quite strong at C#. They make few mistakes, and the mistakes seem related to the complexity of what I'm doing, not the language itself.


See https://news.ycombinator.com/item?id=46586312.

I cannot speak much for C#, but you may be right. Claude Opus is really good.


I agree this is easy enough to follow but I'd like to quibble about something else:

The comments should answer why you are not using some kind of hash set and doing a single pass over the data, and why it's OK to reorder the strings. One could reasonably expect Dedupe to return first occurrences in order.
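For illustration, the single-pass, order-preserving version I have in mind would look roughly like this (a Python sketch of the idea; the article's Dedupe is presumably in another language, so this is the shape, not its actual code):

    def dedupe(strings):
        # One pass: a hash set tracks what we've seen, and the
        # output list keeps each string's first occurrence in order.
        seen = set()
        result = []
        for s in strings:
            if s not in seen:
                seen.add(s)
                result.append(s)
        return result

    # dedupe(["b", "a", "b", "c", "a"]) == ["b", "a", "c"]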


This can have another explanation as well: the moment a block is found, the miner starts building on top of it but hasn't constructed a full block of transactions yet, as that costs a bit of time to compute and distribute. In that period, a new, nearly empty block can be found.
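Roughly, the timeline looks like this (a toy Python sketch with made-up helpers, not real Bitcoin code; the real merkle root pairs and double-SHA256es transaction ids):

    import hashlib

    def merkle_root(txs):
        # Toy stand-in for the real pairwise double-SHA256 tree.
        return hashlib.sha256(b"".join(txs)).digest()

    def block_header(parent_hash, txs, nonce):
        # Toy header: parent hash + merkle root + nonce.
        return parent_hash + merkle_root(txs) + nonce.to_bytes(4, "little")

    # The instant block N arrives, a miner can start hashing
    # block_header(hash_of_N, [], nonce) -- an empty template --
    # while transactions for the full template are still being
    # selected and distributed. A solution found in that window
    # yields an empty (or near-empty) block.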


Blocks are Merkle trees; only the head transaction contains the global seed. So, to mine a block, one needs to walk the Merkle tree up from the head and then finish the work with a small amount of data in the block header.

Thus, the time spent mining a block depends directly on the logarithm of the number of transactions in the block.

If one can mine a block with 3000 transactions (11-12 hashes to the header) in 10 minutes, one can mine a block with one transaction (1 hash to the header) about ten times as fast.

The construction of the block is negligible compared to the complete block mining time.


This is incorrect. The miner just changes the header of the block and rehashes. The transaction set is fixed for many tries.


>If one can mine a block with 3000 transactions (11-12 hashes to the header) in 10 minutes, one can mine a block with one transaction (1 hash to header) about ten times as fast.

Huh? Surely the attempts for both take exactly the same amount of time once you've constructed the block; you're calculating only a single hash for each attempt.
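To make that concrete, here's a toy Python sketch (my own, not from the article) showing that each attempt is one double-SHA256 of the same fixed-size header, so the per-attempt cost doesn't depend on the transaction count:

    import hashlib, os, time

    def attempts_per_second(header):
        # Bitcoin headers are 80 bytes regardless of how many
        # transactions the block holds; the merkle root is already
        # baked in. Each attempt just swaps the nonce and rehashes.
        n, start = 0, time.time()
        while time.time() - start < 1.0:
            nonce = n.to_bytes(4, "little")
            hashlib.sha256(hashlib.sha256(header[:76] + nonce).digest()).digest()
            n += 1
        return n

    # Same per-attempt work whether the block has 1 tx or 3000:
    print(attempts_per_second(os.urandom(80)))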


The whole series is excellent, and as someone who doesn't regularly use assembly, I learned a ton.


Maybe, but buffer overflows would occur in assembly written by experts as well. C is a fine portable assembler (it could probably be better with the knowledge we have now), but programming is hard. My point: you can roughly expect an expert C programmer to produce as many bugs per unit of functionality as an expert assembly programmer.

I think it likely that the C programmer would even write the code faster and better because of the useful abstractions. An LLM will certainly write the code faster, but it will contain more bugs (IME).


Use AI for that :)


Not kidding: I bet LLMs are excellent at triaging these reports. Humans, in a corporate setting, apparently are not.


It does if you’re a clumsy operator and those are not rare.


Yes, but the machine itself is deterministic and logically sound.


>Yes, but the machine itself is deterministic and logically sound.

Because arithmetic itself, by definition, is.

Human language is not. Which is why being able to talk to our computers in natural language (and have them understand us and talk back) now is nothing short of science fiction come true.


Even worse is if it's in the other room and your fingers can't reach the keys. It delivers no answers at all!


My point is, needing to use something with care doesn't prevent it from becoming wildly successful. LLMs are wrong way more often but are also more versatile than a calculator.


> LLMs are wrong way more often but are also more versatile than a calculator.

LLMs are wrong infinitely more than calculators, because calculators are never wrong (unless they're broken).

If you input "1 + 3" into your calculator and get "4", but you actually wanted to know the answer to "1 + 2", the calculator wasn't "wrong". It gave you the answer to the question you asked.

Now you might say "but that's what's happening with LLMs too! It gave you the wrong answer because you didn't ask the question right!" But an LLM isn't an all-seeing oracle. It can only interpolate between points in its training data. And if the correct answer isn't in its training data, then no amount of "using it with care" will produce the correct answer.


What about a strict subset of C#? The use case for F# seems to be shrinking because MS is putting all its energy into C#.


This isn't true, because more is not always better.

C# has lots of anti-features that F# does not have.


This seems like an actually useful computation to do, unlike earlier results. Is that a reasonable reading of this article?


No, it’s still completely useless for the real world. Also not actually verifiable.


Classic quantum


Now verifiably useless in real life


An AI-powered meal planner that helps you create recipes, plan your weeks, and manage your groceries:

https://github.com/bobjansen/mealmcp

There is a website too so you don’t actually need to use MCP:

https://meals.bobjansen.net/

