Yes, that is true. But metrics cannot pinpoint the cause of a problem, which is where the engineering approach breaks down. You cannot measure the stress levels of individual developers the way you can with force plates. You cannot measure developers taking shortcuts in their code the way you can measure gear slippage. Likewise, you cannot predict the critical load of a particular configuration of developers the way you do with beams in bridge construction, nor can you measure the weight of a feature request the way you can weigh a vehicle. And you cannot measure the "velocity" of two developer teams working on different codebases and assume the numbers are comparable just because you are measuring the same quantity.
The only way software development metrics are useful is as an indication of the performance over time of the same team working on the same codebase. That can reveal overall trends, but when the numbers start going down, how will you determine the cause? Will you treat the developer team as a black box and insert more probes, or will you talk to them and rely on their qualitative assessments after all?