Strange, I had the same thought about doing this exact exercise this weekend.
I think the overall percentage is the wrong approach here.
It’s easy to say a lot of things that are factually true, or to make predictions that are bound to come true eventually.
However, the more salient point with Gary Marcus is the one unforgivable thing he was wrong about, and continues to double down on: the claim that deep learning is hitting a wall.
From early 2022 through today, there has still been plenty of low-hanging fruit in deep learning.
Today’s LLM progress is mostly being made in RL, but world models are also still early, and they’re deep learning all the way down.
It would be nice if he would just admit he was wrong.
Depends on how you look at it. In terms of overcoming fundamental limitations, I would argue it has indeed hit a wall. ChatGPT is how old, but LLMs still can't actually count?
But then, to your point, what does it matter, if they're still as useful as they are? Even at this stage, Claude Code makes Jira halfway bearable.
Of course, we have to play devil's advocate as well. Most CEOs don't seem to be reporting great ROI on their "AI" investments.
I'll add one more point. If you scroll through his Substack, a lot of his posts are incredibly negative and unproductive. I was (and continue to be) someone who cares deeply about responsible AI... But there's a difference between working on AI responsibly or pushing the debate forward, and simply dismissing everything that is done as folly, useless, crap, etc.
I think local inference is great for many things, but this stance seems to assume both that you can't have privacy with server-side inference and that you can't have nefarious behavior with client-side inference. A device that does 100% client-side inference can still phone home unless it's disconnected from the internet, and most people will want internet-connected agents, right? Meanwhile, server-side inference can be private if engineered correctly (strong zero-retention guarantees, maybe even homomorphic encryption).