I can see where you're going, but I don't think it's correct. There are clearly cases where an error will mess you up whether or not it goes as far as it would in C. Other times, the language or tooling prevents the error before runtime, ensuring correctness. Are you aware of the benefits of Design-by-Contract (Ada/Eiffel), static proving (SPARK), or dependent types (ATS), though? The correctness criteria you can encode in them can outright prevent errors at the interface or algorithmic level. Three of these have been used for low-level code, two of them often in real-time systems, and one even on an 8-bitter. Depending on how much automation or interactive proving is involved, the errors caught at compile time can go far past what a basic, low-level type system can do.
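To make the contracts point concrete, here's a minimal SPARK sketch (the package and function names are mine, purely for illustration). The precondition rules out exactly the two inputs whose division is undefined behavior in C, and GNATprove checks at compile time that no call site can violate it:

```ada
package Safe_Math with SPARK_Mode is

   --  Hypothetical example: the Pre excludes division by zero
   --  and Integer'First / -1 (which overflows). GNATprove
   --  discharges these checks statically, so a caller that
   --  could pass Y = 0 simply fails to prove.
   function Div (X, Y : Integer) return Integer is (X / Y)
     with Pre  => Y /= 0
                  and then not (X = Integer'First and Y = -1),
          Post => Div'Result = X / Y;

end Safe_Math;
```

Plain Ada would check the same contracts at runtime; SPARK's prover moves them to compile time, which is where those extra classes of errors get knocked out.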
So, we already know we can knock out extra classes of errors with such languages; it's been proven both in theory and in the field. Using or improving them is just good engineering. We can also keep creating new languages in a trial-and-error discovery process to see whether we find more benefits. Exceeding C's guarantees, though, has already been shown empirically to be worthwhile, whether the language is a Myrddin, a SPARK, or an ATS.