Exactly this. Having spent almost three decades in an enterprise context, I see a lot of reinvention of something like a poor man's, unstructured enterprise architecture - because AI agents.
I keep repeating "what is good for humans in an organization is also good, or even required, for AI agents".
Imagine every new instance of an AI agent as a new employee.
With humans it's OK to slowly accumulate knowledge through word of mouth and trial and error, and the general inertia of larger orgs almost seems built for this kind of (un)structured knowledge transfer.
AI agents will never be useful in high-value operations in larger orgs without organizational knowledge that is available and reliable.
Most coding tasks take place outside of pure tech companies, if I’d venture a guess.
And let's be honest, enterprises in general do not value that quality - and they face very little in terms of technical challenges that can't be solved by code on Stack Overflow or GitHub.
What most enterprises lack, though, is knowledge about themselves; that is more a business problem than a technical one.
This is most likely due to the fact that it is really bad at resetting the blinker when the steering wheel is straight-ish again.
Extremely annoying as any other car is much more sensitive (and sensible).
In a Tesla, an on-ramp onto a straight highway is rarely enough to cancel the blinker, something I've never experienced in any other car.
Couple this with, IMO, the best baseline speaker system of any manufacturer… I’ve been driving with the blinker on for several kilometers at times!
Even if we still make a mess, I think centralizing the mess is better than distributing it - what I mean is that polluting cities where millions sleep, eat, drink and breathe will probably be worse, in net effect, than containing energy pollution to select places.
Running EVs in densely populated regions is probably a lot better for the population on the whole even if the net pollution would stay the same, IMO.
Still, no EV at all is even better, but we've created a world where transport is often required, so one step at a time, I guess.
Using AI doesn't really change the fact that keeping ones and zeroes in check is like trying to hold quicksand in your hands and shape it.
Shaping a codebase is the name of the game - this has always been, and still is, difficult. Build something, add to it, refactor, the abstraction doesn't sit right, refactor, the semantics change, refactor, etc, etc.
I'm surprised at how few seem to get this. Working in enterprise code, I see many 10-20 year old codebases that could just as well have been produced by LLMs.
We've never been good at paying down debt, and you kind of need a bit of OCD to keep a codebase in check. LLMs exacerbate a lack of continuous moulding, as iterations can be massive and quick.
I was part of a big software development team once, and the necessity I felt there - being able to let go of the small details and focus on the big picture - is even more important when using LLMs.
The problem is most likely not writing the actual code, but rather understanding an old, fairly large codebase and how it’s stitched together.
SO is (was?) great for when you were thinking about how a nice recursive reduce function could replace the mess you'd just cobbled together, but language X just didn't yet flow naturally for you.
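For illustration (the names and shape are my own, not from any particular SO answer), the kind of tidy recursive reduce meant here might look like this in Python:

```python
def reduce_rec(fn, items, acc):
    """Recursively fold `items` into `acc` using `fn`."""
    # Base case: nothing left to fold, the accumulator is the result.
    if not items:
        return acc
    # Fold the head into the accumulator, then recurse on the tail.
    return reduce_rec(fn, items[1:], fn(acc, items[0]))

# Replaces a hand-rolled loop that sums a list:
total = reduce_rec(lambda a, b: a + b, [1, 2, 3, 4], 0)
print(total)  # 10
```

Whether this is actually nicer than the cobbled-together loop is exactly the kind of taste question SO answers used to argue about.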
The argument is perhaps "enshittification", and that becoming reliant on a specific provider, or even a set of providers, for "important thing" will become problematic over time.
As Go feels like a straitjacket compared to many other popular languages, it's probably very suitable for an LLM in general.
Thinking about it - was this not the idea of Go from the start? Nothing fancy, to keep non-rocket-scientists away from foot-guns, and to have everyone produce code that everyone else can understand.
Diving into a Go project, you almost always know what to expect, which is a great thing for a business.
Same here, but Azure. About 90% saved, with a very similar stack.
It is a big-cloud play to make enterprises reliant on competency in their weird service abstractions, which slowly drains away the quite simple ops story an enterprise usually needs.