Hacker News | aklein's comments

I wrote a clone of this game for the HP-48 as a teen in the '90s. You can still find it if you Google hard enough. Good times.


I still have my GX from the late 90s, which managed to outlast my Metakernel-equipped, overly rubberized 49G. If I were to dig it out, I'd find the serial cable too.

And a printout of the SysRPL guide, which makes for quite a thick stack of paper.


Until you learn to trust the system and free up mental capacity for more useful thinking. At some point compilers became better at writing assembly instructions than humans; it seems inevitable the same will happen here. Caring about the details and knowing the details are two different things.


LLMs lie constantly. There should be no trust in that system. And no I don't think they will "get better".


How do they "lie constantly"? We are specifically talking about code here, not LLMs writing legal documents.


I've had the LLM "lie" to me about the code it wrote many times. But "lie" and "hallucinate" are incorrect anthropomorphisms commonly used to describe LLM output. The more appropriate term would be garbage.


Just a basic sanity check: did the LLM have the tools to check its output for lies, hallucinations, and garbage? Could it compile, run tests, linters, etc., and still manage to produce something that doesn't work?


I've frankly given up on LLMs for most programming tasks. It takes just as much time (if not more) to coddle one into producing anything useful, with frustration added on top, and I could have written far better code myself in that time. I already have 40 years of programming experience, so I don't really need a tin can to do it for me. YMMV.


Compilers are deterministic tools. AI is not deterministic; it will tell you this if you ask it. AI, then, is not a tool but an aide. It is not a tool in the sense that a compiler, IDE, or editor is.


https://chatgpt.com/share/699346d3-fcc0-8008-8348-07a423a526...

Interesting. If you probe it for its assumptions you get more clarity. I think this is much like those tricky "who is buried in Grant's tomb" phrasings, which are not good-faith interactions.


This article highlights how experts disagree on the meaning of (non-human) intelligence, but it dismisses the core problem a bit too quickly, IMO:

“LLMs only predict what a human would say, rather than predicting the actual consequences of an action or engaging with the real world. This is the core deficiency: intelligence requires not just mimicking patterns, but acting, observing real outcomes, and adjusting behavior based on those outcomes — a cycle Sutton sees as central to reinforcement learning.” [1]

An LLM itself is a form of crystallized intelligence, but it does not learn and adapt without a human driver, and to me that is a key component of intelligent behavior.

[1] https://medium.com/@sulbha.jindal/richard-suttons-challenge-...


Humans are social-emotional beings who assign “irrational” value to things for social signaling and emotional (self-)gratification.


Not a lawyer, but my understanding is that civilian casualties are not unlawful under international law when the target is legitimate (on the theory that it would otherwise be impossible to lawfully fight a war against an enemy that hides behind its citizens). To be clear, this is not to say war crimes are not also happening.


I noticed you interface with the native code via ctypes. I think cffi is generally preferred (eg, https://cffi.readthedocs.io/en/stable/overview.html#api-mode...). Although you'd have more flexibility if you built your own Python extension module (eg, using pybind), which would free you from a simple/strict C ABI. Curious whether this strict separation of C and Python was a deliberate design choice.
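For readers unfamiliar with the ctypes approach being discussed: it binds against a shared library's ABI at runtime, with no compile step. A minimal sketch (using libc's `abs` as a stand-in, since the project's actual library isn't shown here):

```python
# ABI-mode binding with ctypes: load a shared library at runtime
# and declare the signature of one function by hand.
import ctypes
import ctypes.util

# Locate and load the C standard library (name varies by platform).
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# Declare argument and return types; without this, ctypes guesses.
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

print(libc.abs(-42))  # → 42
```

The convenience is that nothing needs compiling; the trade-off, raised downthread, is per-call overhead, since every call goes through libffi's generic dispatch.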


Yes, when I designed the API I wanted to keep a clear distinction between Python and C. At some point I had two APIs, one in Python and the other in high-level C++, and they both shared the same low-level C API. I find this design quite clean and easy to work with when multiple languages are involved. When I get to perf I plan to experiment a bit with nanobind (https://github.com/wjakob/nanobind) and see if there's a noticeable difference wrt ctypes.


The call overhead of using ctypes vs nanobind/pybind is enormous:

https://news.ycombinator.com/item?id=31378277

Even if the number reported there is off, it's not far off, because ctypes just calls out to libffi, which is known to be the slowest way to do FFI.
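The per-call overhead is easy to measure yourself. A rough sketch with `timeit`, comparing a ctypes call against an equivalent pure-Python function (libc's `abs` stands in for any bound C function; absolute numbers vary by platform, but the point is that the ctypes call is dominated by dispatch overhead, not by the work itself):

```python
# Measure approximate per-call cost of a ctypes FFI call vs. a plain
# Python call. Numbers are machine-dependent; only the ratio matters.
import ctypes
import ctypes.util
import timeit

libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.abs.argtypes = [ctypes.c_int]
libc.abs.restype = ctypes.c_int

def py_abs(x):
    return x if x >= 0 else -x

n = 100_000
t_ctypes = timeit.timeit(lambda: libc.abs(-1), number=n) / n
t_python = timeit.timeit(lambda: py_abs(-1), number=n) / n
print(f"ctypes: {t_ctypes * 1e9:.0f} ns/call, "
      f"pure Python: {t_python * 1e9:.0f} ns/call")
```

A compiled binding (nanobind/pybind11/cffi API mode) removes the libffi trampoline, which is where most of the gap reported in the linked thread comes from.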


Thanks for pointing this out! I'll definitely have to investigate other approaches. nanobind looks interesting, but I don't need to expose complex C++ objects; I just need the fastest way of calling into a C API. I guess the go-to for this is CFFI?


It's the same thing: both nanobind and cffi compile the binding. The fact that nanobind lets you expose C++ doesn't prevent you from exposing only C. And IMHO nanobind is better because you don't need to learn another language to use it (ie, you don't need to learn cffi's DSL).


Nowadays, with more and more economic output coming from knowledge work (IP), this depreciation-and-amortization approach feels hopelessly out of date. I don't know what a good replacement would be, but I do know that software, and IP more generally, shouldn't be treated like a material good at all.


What is MCP?


Model Context Protocol. It is a way to give an LLM access to an API. There's a lot of hype about it right now, and, thus, a great many half-baked articles floating around. https://www.anthropic.com/news/model-context-protocol


Can’t wait for a better TSC Doom framerate.

