Hacker News

> Part of the argument I'm developing in my writing here is that LLMs should enable us to write better code, and if that's not happening we need to reevaluate and improve the way we are putting them to use. That chapter is still in my drafts.

So you see, after so much hype and so many hard and soft promotion efforts (I count your writing in the latter category), you'd think it should not be "us" figuring it out - should it not be the people who are shoving this crap down our throats?

> That's still a very tiny portion of the software developer population. I know that because I talk to people - there is a desperate need for grounded, hype-free guidance to help the rest of our industry navigate this stuff and that's what I intend to provide.

That's a very arrogant position to assume - on the one hand, there is no big secret to using these tools, provided you can express yourself at all in written language. However, some people - I suspect mostly those who wandered into this profession as "coders" in recent years from other, less well-paid disciplines, who lack a basic understanding of computers and are motivated purely extrinsically, by money - may treat these tools as wonder oracles and may be naive enough to think the problem is their "prompting" and not the inherent unreliability of LLMs. But everyone else - those of us who understand computers at a somewhat deeper level - does not want to fix Sam's and Dario's shit LLMs. These folks promised us no less than superintelligent systems: doing this, doing that, curing cancer, writing all the code in six months (or is it five months now), creating a society where "work is optional", etc. So again - where TF is all of this stuff promised by the people sponsoring your soft promotion of LLMs? Why should we develop a dependence on tools built by people who obviously don't know WTF they are talking about and who have been fundamentally wrong on several occasions over the past few years? Whatever you are trying to do, whether you honestly believe in it or not, I am afraid it is a fool's errand at best.




> you'd think it should not be "us" figuring it out - should it not be the people who are shoving this crap down our throats?

If they're "shoving this crap down our throats", why should we expect them to help here?

More to the point: a consistent pattern over the last four years has been that the AI labs don't know what their stuff can do yet. They will openly admit that. They have clearly established that the best way to find out what models can do is to put them out into the world and wait to hear back from their users.

> That's a very arrogant position to assume - on the one hand there is no big secret to using these tools provided you can express yourself at all in written language. However some people for various reasons, I suspect mostly those who wandered into this profession as "coders" in the last years from other, less-paid disciplines, and lacking in basic understanding of computers

I can't take you calling me "arrogant" seriously when in the very next breath you declare coding agents trivial to use and suggest that anyone having trouble with them is a mere "coder" rather than a proper software engineer!

A hill I will happily die on is that LLM tools, including coding agents, are deceptively difficult to use. If you accepted that was true yourself, maybe you would be able to get better results out of them.


> If they're "shoveling this crap down our throats" why should we expect them to help here?

No, no, no - they are not supposed "to help". They own this entire timeline of LLM promises. Dario Amodei has said, several times over, that agents will be writing ALL CODE in six months. We are now at least one month into his latest instance of this promise. He also babbled a lot about "PhD-level" intelligence, just like the other ghoul at that other company. THEY are the ones who promote the supposed superintelligence creeping closer each day, with whatever benchmarks they push out alongside each new release. But we should cut them some slack, accept that we are stupid for not wanting to burn our brains in multi-hour sessions with LLMs, and just try to figure it out? We should not accept explaining this away as mere cheap "hype". These people are not some C-list celebrities. They are billionaire CEOs running companies supposedly worth high hundreds of billions of dollars, making huge market-influencing statements. I expect those statements to be true. Because if they are not - and they are smart people who will know if they are pushing out untruths on purpose - well, that's just criminal behaviour. Now tell me more about how "we" should figure it out.

> A hill I will happily die on is that LLM tools, including coding agents, are deceptively difficult to use. If you accepted that was true yourself, maybe you would be able to get better results out of them.

:) No mate, please drop that "getting good results" nonsense. I have been getting good results too, if I babysit them - and for the record, I have done a bit more with them than just try various model use cases. The issue for me and a lot of other people is that, with a lot of care and safeguarding and attention, yes, you can even build something to deploy in production - and my team and I have done so - but they are not worth all the babysitting, and especially not the immense mental fatigue that comes from working with them continuously over a longer time span. At the end of the day, for complex projects it is actually faster if I short-circuit my thinking machine to my code-writing executors and skip the natural-language bollocks altogether (save for the original spec). Using LLMs is like putting additional friction between my brain and my hands.



