> There comes a point where you have to do so much work just to get a good output, that LLMs cease to be more productive than just writing something out yourself.
I think this gets to the core of the problem with LLM workflows and why there are so many disagreements about effectiveness
Maybe I overestimate my skills or underestimate how long things would take, but I constantly feel like using AI takes more time, not less
My suspicion is that if you could create a second version of me, give one copy an LLM, and have the other solve the problem normally, the copy without the LLM would finish first
But many people love these tools and feel more productive, so what gives? The problem is that it's impossible to really measure, because we don't have convenient parallel-universe clones to test against. It's all just vibes and made-up numbers