
As a programmer (and GOFAI buff) of 60 years who was initially highly critical of the notion of LLMs being able to write code because they have no mental states, I have been amazed by the latest incarnations being able to write complex, functioning code in many cases. There are, however, specific ways in which their not being reasoners is evident ... e.g., they tend to overengineer because they fail to understand that many situations aren't possible. I recently had an example where one node in a tree was being merged into another, so the child list of the absorbed node was appended to the child list of the kept node. Without explicit guidance, the LLM didn't "understand" (that is, its response did not reflect) that a child node can only have one parent, so collisions between the two child lists weren't possible. A sketch of the merge follows.
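For illustration, here's a minimal sketch of that merge (the Node class and names are hypothetical, not my actual code). The point is that each node has exactly one parent, so the two child lists are necessarily disjoint and no collision check is needed:

    class Node:
        def __init__(self, key):
            self.key = key
            self.parent = None
            self.children = []

    def merge_into(kept, absorbed):
        # Every node has exactly one parent, so no child of `absorbed`
        # can already appear in `kept.children`; guarding against
        # "collisions" here is exactly the overengineering described above.
        for child in absorbed.children:
            child.parent = kept
            kept.children.append(child)
        absorbed.children = []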

> proof that we’re living in 2025, not 1967. But the more commercialised it gets, the more mythical and misleading the narrative becomes

You seem to be living in 2024 or 2023. People generally have far more pragmatic expectations these days, and the companies are doing a lot less overselling ... in part because it's harder to come up with hype that exceeds the actual performance of these systems.


