I understand your logic, but I've found LLMs to be quite strong at C#. They make small mistakes, and the mistakes seem related to the complexity of what I'm doing, not the language itself.
I agree this is easy enough to follow but I'd like to quibble about something else:
The comments should answer why you aren't using some kind of hash set and doing a single pass over the data, and why it's OK to reorder the strings. One could reasonably expect Dedupe to show first occurrences in order.
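To make that concrete, here's roughly the single-pass version I'd expect by default (I'm guessing at Dedupe's signature here; treat it as a sketch, not the code under discussion):

```csharp
using System.Collections.Generic;

static class DedupeSketch
{
    // Single pass; keeps the first occurrence of each string in its original position.
    public static List<string> Dedupe(IEnumerable<string> items)
    {
        var seen = new HashSet<string>();
        var result = new List<string>();
        foreach (var item in items)
        {
            // HashSet<T>.Add returns false when the item is already present.
            if (seen.Add(item))
                result.Add(item);
        }
        return result;
    }
}
```

If the real Dedupe sorts or otherwise reorders instead, a comment explaining why (a later binary search, memory constraints, whatever the reason was) is exactly what I'd want to see.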
This can have another explanation as well: the moment a block is found, the miner starts building on top of the previous block but hasn't constructed a new full block of transactions yet as that costs a bit of time to calculate and distribute. In this period, a new block could be found.
Blocks are Merkle trees, and only the head transaction contains the global seed. So, to mine a block, one needs to walk the Merkle tree up from the head and then finish the work with a small amount of data in the block header.
Thus, the time spent mining a block is directly dependent on the logarithm of the number of transactions in the block.
If one can mine a block with 3000 transactions (11-12 hashes to the header) in 10 minutes, one can mine a block with one transaction (1 hash to header) about ten times as fast.
The construction of the block is negligible compared to the complete block mining time.
>If one can mine a block with 3000 transactions (11-12 hashes to the header) in 10 minutes, one can mine a block with one transaction (1 hash to header) about ten times as fast.
Huh? Surely the attempts for both take exactly the same amount of time after you've initially constructed the block; you're calculating only a single hash for each attempt.
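In sketch form (simplified proof-of-work, not real Bitcoin code; the nonce offset and the leading-zero-byte difficulty check are simplifications I'm assuming for illustration): the Merkle root is computed once when the block is assembled, and after that every attempt hashes the same fixed-size 80-byte header, no matter how many transactions are in the block.

```csharp
using System;
using System.Security.Cryptography;

static class PowSketch
{
    // Each attempt hashes only the 80-byte header (version, prev-block hash,
    // Merkle root, timestamp, bits, nonce), so the per-attempt cost is constant
    // regardless of how many transactions the Merkle root summarizes.
    public static uint Mine(byte[] header80, int difficultyZeroBytes)
    {
        using var sha = SHA256.Create();
        for (uint nonce = 0; ; nonce++)
        {
            // The nonce occupies the last 4 bytes of the header.
            BitConverter.GetBytes(nonce).CopyTo(header80, 76);

            // Bitcoin applies SHA-256 twice to the header; either way the
            // input size per attempt doesn't grow with the transaction count.
            var digest = sha.ComputeHash(sha.ComputeHash(header80));

            // Simplified target check: require N leading zero bytes.
            bool ok = true;
            for (int i = 0; i < difficultyZeroBytes; i++)
                if (digest[i] != 0) { ok = false; break; }
            if (ok) return nonce;
        }
    }
}
```

So the 11-12 Merkle hashes are a one-time cost when building the block, not something paid on every attempt.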
Maybe, but buffer overflows would occur in assembler written by experts as well. C is a fine portable assembler (it could probably be better with the knowledge we have now), but programming is hard. My point: you can roughly expect an expert C programmer to produce as many bugs per unit of functionality as an expert assembly programmer.
I believe it to be likely that the C programmer would even write the code faster and better because of the useful abstractions. An LLM will certainly write the code faster, but it will contain more bugs (IME).
>Yes, but the machine itself is deterministic and logically sound.
Because arithmetic itself, by definition, is.
Human language is not. Which is why being able to talk to our computers in natural language (and have them understand us and talk back) now is nothing short of science fiction come true.
My point is, needing to use something with care doesn't prevent it from becoming wildly successful. LLMs are wrong way more often but are also more versatile than a calculator.
> LLMs are wrong way more often but are also more versatile than a calculator.
LLMs are wrong infinitely more than calculators, because calculators are never wrong (unless they're broken).
If you input "1 + 3" into your calculator and get "4", but you actually wanted to know the answer to "1 + 2", the calculator wasn't "wrong". It gave you the answer to the question you asked.
Now you might say "but that's what's happening with LLMs too! It gave you the wrong answer because you didn't ask the question right!" But an LLM isn't an all-seeing oracle. It can only interpolate between points in its training data. And if the correct answer isn't in its training data, then no amount of "using it with care" will produce the correct answer.