Before AI, we discussed how to solve a problem with teammates. Even if we didn't remember exactly what we wrote six months ago, we at least remembered the general idea.
After AI, that understanding often disappears, to the point where we can't even direct the AI to fix the problem because we don't know what's wrong.
AI also tends to change code only in the context of the current problem, so fixing one bug can introduce new ones.
Maybe companies will lean more on in-house solutions as code gets cheaper, building their own walled ecosystems: fine-tuning their internal models on their own findings and keeping all the knowledge to themselves.
On the contrary, I think groups that adopt a share-alike approach will, counterintuitively, deepen their moat by increasing the amount of effective world-historical knowledge reflected in their systems. I think this will be true for the same reason that conditional cooperation is the optimal strategy in most iterated Prisoner's Dilemma games.
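The conditional-cooperation intuition can be illustrated with a toy simulation in the spirit of Axelrod's tournaments. This is a minimal sketch, not a rigorous argument: the strategy names, payoff values, and round count are illustrative choices, and "tit-for-tat" stands in for conditional cooperation generally. In a round-robin (including self-play), tit-for-tat tends to outscore both the unconditional cooperator and the unconditional defector:

```python
# Toy iterated Prisoner's Dilemma round-robin.
# Standard payoffs: mutual cooperation = 3, mutual defection = 1,
# sucker's payoff = 0, temptation to defect = 5.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def always_cooperate(history):
    return "C"

def always_defect(history):
    return "D"

def tit_for_tat(history):
    # Cooperate first, then mirror the opponent's previous move:
    # conditional cooperation.
    return history[-1] if history else "C"

def play(a, b, rounds=200):
    # Each side's history records the *opponent's* past moves.
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = a(hist_a), b(hist_b)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return score_a, score_b

strategies = {
    "cooperate": always_cooperate,
    "defect": always_defect,
    "tit_for_tat": tit_for_tat,
}

totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        if name_a <= name_b:  # each pairing once, self-play included
            sa, sb = play(strat_a, strat_b)
            totals[name_a] += sa
            totals[name_b] += sb

print(totals)  # tit_for_tat comes out on top
```

The defector exploits the pure cooperator, but tit-for-tat never loses badly to anyone and profits fully from every cooperative pairing, which is the sense in which sharing-conditioned-on-reciprocity deepens the moat rather than eroding it.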