
"Reasoning" will be disproven for this again within a few days I guess.

Context: o1 does not reason, it pattern matches. If you rename the variables, it suddenly fails to solve the same problem.
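The renaming perturbation being described can be reproduced mechanically. A minimal sketch (using Python's standard `ast` module; the placeholder scheme `v0`, `v1`, ... is my own choice, not from any particular benchmark): rewrite every identifier to an opaque name so the program is semantically identical but carries no naming hints.

```python
import ast

class Renamer(ast.NodeTransformer):
    """Rewrite every variable name to an opaque placeholder (v0, v1, ...)."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        # Assign placeholders in first-seen order so the rewrite is deterministic.
        if node.id not in self.mapping:
            self.mapping[node.id] = f"v{len(self.mapping)}"
        node.id = self.mapping[node.id]
        return node

src = "total = price * quantity\ndiscounted = total - coupon"
renamed = ast.unparse(Renamer().visit(ast.parse(src)))
print(renamed)
# v0 = v1 * v2
# v3 = v0 - v4
```

Feeding the original and the renamed version to a model and comparing accuracy is the kind of test the claim rests on: the two programs compute the same thing, so any performance gap is attributable to the surface-level names.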



The 'pattern matching' happens at complex layers of abstraction, constructed out of combinations of pattern matching at prior layers in the network.

These models can and do work okay with variable names that have never occurred in the training data. Though sure, choice of variable names can have an impact on the performance of the model.

That's also true for humans: go fill a codebase with misleading variable names and watch human programmers flail. Of course, the LLM's failure modes are sometimes pretty inhuman; it's not a human, after all.


Rename to equally reasonable variable names, or to intentionally misleading or meaningless ones? Good naming is one of the best ways to make reading unfamiliar code easier for people; I don't see why even actual AGI wouldn't also get tripped up there.


Can't we sometimes expect more from computers than from people, especially for something that compilers have done for decades?


Perhaps, but over enough data pattern matching can become generalization...

One of the interesting DeepSeek-R1 results is using a first-generation (RL-trained) reasoning model to generate synthetic data (reasoning traces) to train a subsequent one, or even to "distill" into a smaller model (by fine-tuning the smaller model on this reasoning data).
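The distillation step described above amounts to ordinary supervised fine-tuning on teacher-generated traces. A hypothetical sketch of how such training examples might be packaged (the field names, `<|user|>`/`<|assistant|>` markers, and `<think>` tags are illustrative assumptions, not DeepSeek's actual format):

```python
# Sketch: turn teacher-generated reasoning traces into SFT examples
# for fine-tuning a smaller student model. All template strings here
# are assumptions for illustration, not a real chat template.

def make_sft_example(question: str, reasoning: str, answer: str) -> dict:
    """Concatenate prompt, reasoning trace, and final answer into one training text."""
    prompt = f"<|user|>{question}<|assistant|>"
    completion = f"<think>{reasoning}</think>{answer}"
    return {"text": prompt + completion}

traces = [
    {"question": "What is 12 * 13?",
     "reasoning": "12 * 13 = 12 * 10 + 12 * 3 = 120 + 36 = 156.",
     "answer": "156"},
]
dataset = [make_sft_example(**t) for t in traces]
print(dataset[0]["text"])
```

The student is then fine-tuned on `dataset` with a standard next-token loss, so it learns to emit the trace before the answer rather than being RL-trained from scratch.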

Maybe "Data is all you need" (well, up to a point) ?


Reasoning is pattern matching at a certain level of abstraction.



