I think you have an interesting point of view and I enjoy reading your comments, but it sounds a little absurd and circular to dismiss people's negativity about LLMs by saying it's their own fault for using an LLM for something it's not good at. I don't buy the strawman characterization of people handing LLMs incredibly complex problems and then being unreasonably judgmental about the unsatisfactory results.

I work with LLMs every day. Companies pay me good money to implement reliable solutions that use these models, and it's a struggle. Currently I'm working with Claude 3.5 to analyze customer support chats. For every impressive, nuanced judgment it makes, it fails at a simple, trivial one. For every prompt it follows to a tee, there's another where it forgets or ignores important instructions. So the problem for me is that it's incredibly difficult to know, for a given input, when it will succeed and when it will fail.

Am I unreasonable for having these frustrations? Am I unreasonable for doubting the efficacy of LLMs on problems that many believe are already solved? Can you understand my frustration at seeing people characterize me that way because ChatGPT once made a really cool image for them?