On the other hand, when it works it's darn near magic.
I spent like a week trying to figure out why a livecd image I was working on wasn't initializing devices correctly. Read the docs, read source code, tried strace, looked at the logs, found forums of people with the same problem but no solution, you know the drill. In desperation I asked ChatGPT. ChatGPT said "Use udevadm trigger". I did. Things started working.
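For anyone hitting the same wall: the usual shape of that fix, in an init or livecd build script, is something like the fragment below. (The exact invocation depends on your init setup; `--action=add` and the `settle` step are my additions here, not something the comment specified.)

```shell
# Replay kernel uevents so udevd (re)processes devices that appeared
# before the daemon started -- a common gap in hand-rolled livecd init.
udevadm trigger --action=add

# Block until the resulting event queue drains before continuing boot.
udevadm settle
```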
For some problems it's just very hard to express them in a googleable form, especially if you're doing something weird almost nobody else does.
i started (re)using AI recently. it (and i) mostly failed until i settled on a rule.
if it's "dumb and annoying" i ask the AI, else i do it myself.
since then AI has been saving me a lot of time on dumb and annoying things.
also a few models are pretty good at basic physics/modeling stuff (getting basic formulas, fetching constants, doing some calculations). i recently used one for ventilation/CO2 stuff in my room and the calculations matched observed values pretty well. then it handed me a formula in broken desmos syntax, i fixed that by hand, and we were good to go!
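for context, this kind of ventilation/CO2 estimate usually comes down to the standard well-mixed single-zone balance. a sketch in Python (the room volume, air-change rate, and the ~0.018 m^3/h per-person CO2 figure below are illustrative assumptions, not the numbers from my room):

```python
import math

def co2_ppm(t_hours, volume_m3, ach, gen_m3_per_h,
            outdoor_ppm=420.0, initial_ppm=420.0):
    """Well-mixed single-zone CO2 model.

    ach: air changes per hour (airflow / room volume)
    gen_m3_per_h: CO2 generated by occupants, in m^3/h
    """
    q = ach * volume_m3                        # airflow, m^3/h
    steady = outdoor_ppm + 1e6 * gen_m3_per_h / q
    # exponential approach from the initial level to the steady state
    return steady + (initial_ppm - steady) * math.exp(-ach * t_hours)

# One adult at rest (~0.018 m^3/h of CO2) in a 30 m^3 room
# ventilated at 0.5 air changes per hour, after 8 hours:
print(round(co2_ppm(8, 30.0, 0.5, 0.018)))   # → 1598
```

the nice part is that the steady-state term alone (`outdoor_ppm + 1e6 * G / Q`) is usually enough to sanity-check a CO2 meter reading against your ventilation rate.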
---
(dumb and annoying thing -> time-consuming to generate with no "deep thought" involved, easy to check)
> For some problems it's just very hard to express them in a googleable form
I had an issue where my Mac would report that my tethered iPhone's battery was running low when it was in fact fine. I had tried googling an answer, and found many similar-but-not-quite-the-same questions and answers. None of the suggestions fixed the issue.
I then asked my question to the 'MacOS Guru' model for ChatGPT, and one of its suggestions worked. I feel like I learned something about ChatGPT vs Google from this: an LLM's ability to match my plain-English question, without a precise match for the technical terms, is clearly superior to a search engine's. I think Google etc. try synonyms for words in the query, but to me it's clear that isn't enough.
Google isn't the same for everyone. Your results could be very different from mine. They're probably not quite the same as months ago either.
I may also have accidentally made it harder by using the wrong word somewhere. A good part of the difficulty of googling for a vague problem is figuring out how to even word it properly.
Also of course it's much easier now that I tracked down what the actual problem was and can express it better. I'm pretty sure I wasn't googling for "devices not initializing" at the time.
But this is where I think LLMs offer a genuine improvement -- being able to deal with vagueness better. Google works best if you know the right words, and sometimes you don't.