Hacker News — ripped_britches's comments

Strange, I had the same thought about doing this exact exercise this weekend.

I think the overall percentage is the wrong approach here.

It’s easy to say a lot of things that are factually true or predictions that are inevitably true.

However, the more salient point about Gary Marcus is the one unforgivable claim he was wrong about and continues to double down on: that deep learning is hitting a wall.

Starting in early 2022 and going through today, there is still so much low hanging fruit with deep learning.

Today's LLM progress is mostly being made in RL. But world models are also still so early, and they're deep learning all the way down.

It would be nice if he would just admit he was wrong.


Depends on how you look at it. In terms of overcoming fundamental limitations, I would argue it has indeed hit a wall. ChatGPT is how old, but LLMs still can't actually count?

But then, to your point, what does it matter, if they're still as useful as they are? Even at this stage, Claude Code makes Jira halfway bearable.

Of course, we have to consider the devil's advocate as well. Most CEOs don't seem to be reporting great ROI on their "AI" investments.


Is “world models” even a real thing, or just the latest AI buzzword?

A world model is an attempt at ensuring your hallucinations are compatible with reality (https://www.nvidia.com/en-us/glossary/world-models/). Usage of the term seems to be correlated with GPU sales.

https://news.ycombinator.com/item?id=47232306


I'll add one more point. If you scroll through his Substack, a lot of his posts are incredibly negative and unproductive. I was (and continue to be) someone who cares deeply about responsible AI... But there's a difference between working on AI responsibly and pushing the debate forward, versus simply criticizing everything that is done as folly, useless, crap, etc.

Too funny that the subcontractor working for Meta is "sama".

No surprise to have not heard anything from xAI

> Traditional cloud providers got zero primary picks

Good - all of them have a horrible developer experience.

The final straw for me was trying to put GHA runners in my Azure virtual network and spending two weeks on it.


Looks extremely impressive! Genuine question - why are you sharing your methods openly? I am grateful for it, but just curious about your motivations.

Giving back to the research community! Releasing and talking about research helps everyone.

Maybe I'm not understanding, but what is different from just using existing Wolfram tools via an API? What is infinite about CAG?

Yep this has been my experience as well


I think local inference is great for many things - but this stance seems to conflate two claims: that you can't have privacy with server-side inference, and that you can't have nefariousness with client-side inference. A device that does 100% client-side inference can still phone home unless it's disconnected from the internet. Most people will want internet-connected agents, right? And server-side inference can be private if engineered correctly (strong zero-retention guarantees, maybe even homomorphic encryption).
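To make the homomorphic-encryption point concrete: here is a toy sketch of an additively homomorphic scheme (a minimal Paillier cryptosystem with small hard-coded primes - purely illustrative, not secure, and not any particular provider's implementation). The server multiplies ciphertexts to add the underlying plaintexts without ever decrypting them.

```python
import math
import random

def keygen(p=1789, q=1861):
    """Toy Paillier keygen with small hard-coded primes (insecure, demo only)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1
    # With g = n + 1, L(g^lam mod n^2) = lam mod n, so mu = lam^-1 mod n.
    mu = pow(lam, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    """Encrypt m: c = g^m * r^n mod n^2, with random r coprime to n."""
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    """Decrypt: m = L(c^lam mod n^2) * mu mod n, where L(x) = (x-1) // n."""
    n, _ = pub
    lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pub, priv = keygen()
c1 = encrypt(pub, 42)
c2 = encrypt(pub, 58)
# Homomorphic addition: multiplying ciphertexts adds the plaintexts.
c_sum = (c1 * c2) % (pub[0] ** 2)
print(decrypt(pub, priv, c_sum))  # → 100
```

The point is only that the party doing the compute never sees 42 or 58; it operates on ciphertexts alone. Real systems (and fully homomorphic schemes that support multiplication too) are far more involved, and still costly compared to plain server-side inference with retention guarantees.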


@dang is this allowed?


Nope. We've banned the account.


I am flattered it chose my comment to respond to, but insulted I had to read through it ...


You too are going to have to change the name! Walked right into that one.

