>but medium and long term we need to figure out how to build systems in a way that it can keep up with this increased influx of code.
Why? Why do we need to "write code so much faster and quicker" to the point we saturate systems downstream? I understand that we can, but just because we can doesn't mean we should.
But that's the point of TFA, no? Now that writing code is no longer the bottleneck, the upstream and downstream processes have become the new bottlenecks, and we need to figure out how to widen them.
As I see it, the end goal for all of this is generating software at the speed of thought, or at least at the speed of speech. I want a digital butler to whom I can just say "I'm not happy with the way things happened today, please change it so that from here on, it'll be like x" - and it'll just respond with "As you wish", and I'll have confidence that it knows me well enough, and is capable enough, to have implemented the best possible interpretation of what I asked for, and that the few miscommunications that do occur would be easy to fix.
We're obviously not close to that yet, but why shouldn't we build towards it?
If we want to continue to ship at that speed we will have to. I’m not sure if we should, but seemingly we are. And it causes a lot of problems right now downstream.
We were already rushing and churning out products and code of inferior quality before AI (consider, e.g., the sorry state of macOS and Windows over the past decade).
Using AI to ship more and more code faster, instead of to make code more mature, will make this worse.
I'm betting on it meaning product quality going down - and technical debt increasing, which will be dealt with using more AI, in a downward spiral. Meanwhile college CS majors won't ever bother learning the basics (as AI will handle their coursework, and even their hobby work). Then future AI will train on previous AI output, with the degradation that brings...
Less code isn't as important as it used to be, because the cost of maintaining (simple) code has gone down as well.
With coding agent projects I find that investing in DRY doesn't really help very much. Needing to apply the same fix in two places is a waste of time for a human. An agent will spot both places with grep and update them almost as fast as if there was just one.
It's another case where my existing programming instincts appear to not hold as well as I would expect them to.
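To make the trade-off concrete, here's a toy sketch (names and logic are hypothetical, not from the thread): the same one-line bounds check is written out in two places instead of being extracted into a shared helper.

```python
# Hypothetical sketch: the same 0-100 clamp, deliberately duplicated
# rather than extracted into a shared helper.

def clamp_volume(level: int) -> int:
    """Audio settings: keep level within 0-100."""
    return max(0, min(100, level))

def clamp_brightness(level: int) -> int:
    """Display settings: same 0-100 clamp, written out again."""
    return max(0, min(100, level))

# A fix (say, raising the ceiling to 255) means editing both sites.
# For a human that's an easy way to miss one copy; an agent can find
# every copy with a plain text search (e.g. grep for "min(100, level)")
# and patch them almost as fast as it patches one.
print(clamp_volume(150), clamp_brightness(-5))  # -> 100 0
```

The duplication costs an agent almost nothing to maintain, which is the point being made above.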
When you talk about maintaining code, do you mean having the LLM do it and you maintain a write-only codebase? Because if you're reading the code yourself and you have a bloated tangled codebase it would make things much harder right?
Is the goal basically a codebase where your interactions are mediated through an LLM?
The goal is "good code" based on my list of criteria, which includes both "simple and minimal" and "the design affords future changes".
A bloated, tangled codebase isn't good code, because it's harder to understand and make changes to than the equivalent non-bloated codebase.
But... bloat does look a little bit different when you no longer need to optimize code for saving humans typing time.
Much of the confusing code I've encountered during my career has been confusing because it had too many layers of indirection, which happened because someone applied DRY too aggressively, not wanting to duplicate even the smallest piece of logic in more than one place.
Good coding agents will only DRY like that if you tell them to.
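A toy illustration of the indirection problem described above (all names are hypothetical): to avoid duplicating one line of arithmetic, the "DRY" version routes a trivial computation through a strategy class and a factory, so a reader needs three hops to see what actually happens.

```python
# Over-abstracted: a trivial discount hidden behind layers of indirection.
class PriceStrategy:
    def apply(self, amount: float) -> float:
        raise NotImplementedError

class DiscountStrategy(PriceStrategy):
    def __init__(self, rate: float):
        self.rate = rate

    def apply(self, amount: float) -> float:
        return amount * (1 - self.rate)

def make_pricer(strategy: PriceStrategy):
    """Factory wrapping the strategy in a closure -- one more hop."""
    def price(amount: float) -> float:
        return strategy.apply(amount)
    return price

member_price = make_pricer(DiscountStrategy(0.10))

# The "WET" alternative: the logic is duplicated wherever it's needed,
# but visible at a glance.
def member_price_direct(amount: float) -> float:
    return amount * 0.90  # 10% member discount, written out in place

print(member_price(100.0), member_price_direct(100.0))
```

Both give the same answer; the difference is how much code someone (or something) has to read to understand it.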