So many people put up expectations about models just to knock them down. There are infinite reasons to critique them.
Please dispense with anyone's "expectations" when critiquing things! (Expectations are not a fault or property of the object of the expectations.)
Today's models (1) do things that are unprecedented. Their generality of knowledge, and ability to weave completely disparate subjects together sensibly, in real time (and faster if we want), is beyond any other artifact in existence. Including humans.
They are (2) progressing quickly. AI has been an active field (even through its famous "winters") for several decades, and it has never moved forward this fast.
Finally and most importantly (3), many people, including myself, continue to find serious new uses for them in daily work that no other tech, or sea of human assistants, could cover cost-effectively.
The only way I can make sense of anyone's disappointment is to assume they simply haven't found the right way to use them for themselves, or are unable to fathom that what is not useful to them is useful to others.
They are incredibly flexible tools, which means a lot of value, idiosyncratic to each user, only gets discovered over time with use and exploration.
That they have many limits isn't surprising. What doesn't? Who doesn't? Zeus help us the day AI doesn't have obvious limits to complain about.
> Their generality of knowledge, and ability to weave completely disparate subjects together sensibly, is beyond any other artifact in existence
Very well said. That’s perhaps the area where I have found LLMs most useful lately. For several years, I have been trying to find a solution to a complex and unique problem involving the laws of two countries, financial issues, and my particular individual situation. No amount of Googling could find an answer, and I was unable to find a professional consultant whose expertise spans the various domains. I explained the problem in detail to OpenAI’s Deep Research, and six minutes later it produced a 20-page report—with references that all checked out—clearly explaining my possible options, the arguments for and against each, and why one of those options was probably best. It probably saved me thousands of dollars.
Are they progressing quickly? Or was there a step-function leap about 2 years ago, and incremental improvements since then?
I tried using AI coding assistants. My longest stint was 4 months with Copilot. It sucked. At its best, it did the same job as IntelliSense, but slower. Other times it insisted on trying to autofill 25 lines of nonsense I didn't ask for. All the time I saved using Copilot was lost debugging the garbage Copilot wrote.
Perplexity was nice to bounce plot ideas off of for a game I'm working on... until I kept asking for more and found that it'll only generate the same ~20 ideas over and over, rephrased every time, and half the ideas are stupid.
The only use case that continues to pique my interest is Notion's AI summary tool. That seems like a genuinely useful application, though it remains to be seen if these sorts of "sidecar" services will justify their energy costs anytime soon.
Now, I ask: if these aren't the "right" use cases for LLMs, then what are they, and why do these companies keep putting out products that aren't the "right" use case?
This might appear to be a shallow answer, but I do not think it is. AI has taken a very long road from early conceptions, by Turing and others, to a tool whose value we can argue about, but which is getting attention and use everywhere.
The mere fact that "are they progressing rapidly?" is even a question is a testament to an incredible uptick in the speed of progress.
"Is AI progressing quickly?" is the new "Are we there yet?"
Have you tried it recently? o3-mini-high is really impressive. If you ease into talking to it about your intent and outline the possible edge and corner cases, it will write nuanced Rust code 1,000 lines at a time, no problem.
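To make that concrete, here is a hypothetical sketch (my own illustration, not actual model output) of the kind of edge-case-aware Rust you tend to get when the prompt names the corner cases up front, e.g. "handle empty input, even-length input, and overflow":

    // Hypothetical example: a median function where the prompt called
    // out the edge cases in advance: empty input, even-length input,
    // and overflow when summing the two middle values.
    fn median(values: &[i64]) -> Option<f64> {
        if values.is_empty() {
            return None; // edge case: no data
        }
        let mut sorted = values.to_vec();
        sorted.sort_unstable();
        let mid = sorted.len() / 2;
        if sorted.len() % 2 == 1 {
            Some(sorted[mid] as f64)
        } else {
            // Even length: average the two middle values, widening to
            // f64 before adding so the sum cannot overflow i64.
            Some((sorted[mid - 1] as f64 + sorted[mid] as f64) / 2.0)
        }
    }

    fn main() {
        assert_eq!(median(&[]), None);
        assert_eq!(median(&[3, 1, 2]), Some(2.0));
        assert_eq!(median(&[1, 2, 3, 4]), Some(2.5));
        println!("all edge cases covered");
    }

The point isn't this particular function; it's that stating the corner cases before asking for code is what gets them handled in the output.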
The use cases I listed all fall within the past 8 months. One of the things that drove me away from Copilot and chatbots is that I just write better code faster than they can. I could sit there for an hour, fiddling with prompts and copy-pasting output into a text editor, or I could just write the damn code.