I see a difference between seeing them as valuable in their current state vs being "bullish about LLMs" in the stock market sense.
The big problem with being bullish in the stock market sense is that OpenAI isn't selling the LLMs that currently exist to their investors, they're selling AGI. Their pitch to investors is more or less this:
> If we accomplish our goal we (and you) will have infinite money. So the expected value of any investment in our technology is infinite dollars. No, you don't need to ask what the odds are of us accomplishing our goal, because any percent times infinity is infinity.
Since OpenAI and all the founders riding on their coattails are selling AGI, you see a natural backlash against LLMs that points out that they are not AGI and show no signs of asymptotically approaching AGI. They're asymptotically approaching something that will be amazing and transformative in ways that are not immediately clear, but what is clear to those watching closely is that they're not approaching Altman's promises.
The AI bubble will burst, and it's going to be painful. I agree with the author that that is inevitable, and it's shocking how few people see it. But also, we're getting a lot of cool tech out of it and plenty of it is being released into the open and heavily commoditized, so that's great!
I think that people who don't believe LLMs to be AGI are not very good at Venn diagrams. Because they certainly are artificial, general, and intelligent according to any dictionary.
Good grief. You are deeply confused and/or deeply literal. That's not the accepted definition of AGI in any sense. One does not evaluate each word as an isolated component when testing the truth of a statement containing an open compound word. Does your "living room" have organs?
It is that, or you can't recognize a tongue-in-cheek comment on goalpost shifting. The wiki page you linked has the original definition of the term from 1997; dig it up. Better yet, look at the history of that page in the Wayback Machine and see with your own eyes how the ChatGPT release changed it.
For reference, the 1997 original:

> By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.

The 2014 wiki requirements:

> reason, use strategy, solve puzzles, and make judgments under uncertainty; represent knowledge, including commonsense knowledge; plan; learn; communicate in natural language; and integrate all these skills towards common goals.
No, it's really not. Joining words into a compound word enables the new compound to take on new meaning and evolve on its own, and if it becomes widely used as a compound it always does so. The term you're looking for if you care to google it is an "open compound noun".
A dog in the sun may be hot, but that doesn't make it a hot dog.
You can use a towel to dry your hair, but that doesn't make the towel a hair dryer.
Putting coffee on a dining room table doesn't turn it into a coffee table.
Spreading Elmer's glue on your teeth doesn't make it toothpaste.
The White House is, in fact, a white house, but my neighbor's white house is not The White House.
I could go on, but I think the above is a sufficient selection to show that language does not, in fact, work that way. You can't decompose a compound noun into its component morphemes and expect to be able to derive the compound's meaning from them.
You wrote so much while reading so little:
> in most cases
What do you think will happen if we start comparing the lengths of the list ["hot dog", ...] and the list ["blue bird", "aeroplane", "sunny March day", ...]?
No, I read that, and it's wrong. Can you point me to a single compound noun that works that way?
A bluebird is a specific species. A blue parrot is not a bluebird.
An aeroplane is a vehicle that flies through the air at high speeds, but if you broke it down into morphemes and tried to reason it out that way you could easily argue that a two-dimensional flat surface that extends infinitely in all directions and intersects the air should count.
"Sunny March day" isn't a compound noun; it's a noun phrase.
Can you point me to a single compound noun (that is, a two-or-more-part word that is widely used enough to earn a definition in a dictionary, like AGI) that can be subjected to the kind of breaking apart into morphemes that you're doing without yielding obviously nonsensical re-interpretations?