"But what if the real issue is an N+1 query pattern that the index would merely mask? What if the performance problem stems from inefficient data modeling that a different approach might solve more elegantly?"
In the best case you would have to feed all of the important information into the context: these are my indexes, this is my function, these are my models. After that the model can find the problematic code. So the main problem is to give your model all of the important information; that is what has to be fixed if the user isn't doing that part. (Obviously that doesn't mean the LLM will find the correct problem, but it can improve the results.)
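To make the N+1 pattern from the quote concrete, here is a minimal sketch in Python using the stdlib `sqlite3` module. The table and column names (`users`, `posts`, `user_id`) are hypothetical, purely for illustration: the first function issues one query per user (the N+1 shape that an index would only mask), while the second fetches the same data with a single JOIN.

```python
import sqlite3

# In-memory demo schema (hypothetical names) to illustrate the N+1 query pattern.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts (id INTEGER PRIMARY KEY, user_id INTEGER, title TEXT);
    INSERT INTO users VALUES (1, 'ada'), (2, 'bob');
    INSERT INTO posts VALUES (1, 1, 'first'), (2, 1, 'second'), (3, 2, 'hello');
""")

def titles_n_plus_one(conn):
    # N+1 shape: 1 query for the users, then 1 query per user for their posts.
    result = {}
    for user_id, name in conn.execute("SELECT id, name FROM users"):
        rows = conn.execute(
            "SELECT title FROM posts WHERE user_id = ?", (user_id,)
        ).fetchall()
        result[name] = [title for (title,) in rows]
    return result

def titles_joined(conn):
    # Same data in a single JOIN: query count no longer grows with user count.
    result = {}
    rows = conn.execute(
        "SELECT u.name, p.title FROM users u JOIN posts p ON p.user_id = u.id"
    )
    for name, title in rows:
        result.setdefault(name, []).append(title)
    return result

print(titles_n_plus_one(conn) == titles_joined(conn))  # → True
```

Both functions return the same result; the difference only shows up in query count, which is exactly the kind of detail the model cannot see unless the schema and calling code are in its context.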