Yes, the respective sins of both floating- and fixed-point math are significantly reduced when you have more bits (for a demo of this try using both at 16 bits; you will have to code very carefully and be very aware of the faults of each).
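To make the two failure modes concrete, here is a small sketch in plain Python (no 16-bit types in the stdlib, so Q8.8 fixed point is simulated with masking, and float64 near the edge of its precision stands in for a narrow float; the helper names are mine):

```python
def fx_add(a, b):
    """Add two Q8.8 fixed-point values, wrapping on overflow
    the way typical integer hardware does."""
    s = (a + b) & 0xFFFF
    return s - 0x10000 if s >= 0x8000 else s

def to_fx(x):
    return int(round(x * 256))   # Q8.8: 8 fractional bits

def from_fx(f):
    return f / 256

# Fixed point: the failure is abrupt and catastrophic.
x = to_fx(127.0)
x = fx_add(x, to_fx(1.5))        # exceeds +127.996..., wraps negative
print(from_fx(x))                # → -127.5

# Floating point: the failure is gradual and silent.
big = 2.0 ** 53                  # float64 can no longer resolve +1 here
print(big + 1.0 == big)          # → True: the addition is absorbed
```

The fixed-point result is loudly, obviously wrong; the float result is quietly wrong, which is exactly the "graceful degradation" that breeds subtle bugs.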
For many situations though, I find the graceful degradation of floats to cause more subtle bugs, which can be a problem.
Range warnings are going to be calculation-specific. Adding a non-zero fixed-point number "y" N times to a fixed-point number "x" will result in either an overflow or exactly x + N*y.
If integer overflows trap (they usually don't; curse you Intel and C89), then repeated additions (important in many simulations) will either work exactly as expected or crash.
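A checked add that traps instead of wrapping is easy to sketch; this is the behavior the parenthetical wishes the hardware and C gave us by default (the function and bounds here are my own illustration, not any standard API):

```python
INT16_MIN, INT16_MAX = -0x8000, 0x7FFF

def checked_add16(a, b):
    """16-bit signed add that raises on overflow instead of wrapping."""
    s = a + b
    if not (INT16_MIN <= s <= INT16_MAX):
        raise OverflowError(f"{a} + {b} = {s} exceeds int16 range")
    return s

# Repeated addition in a simulation loop: either every step is exact,
# or we crash at the first overflow -- never a silently wrong total.
x, y = 30000, 500
try:
    for _ in range(10):
        x = checked_add16(x, y)
except OverflowError as e:
    print("trapped:", e)   # fires on the 6th addition (32500 + 500)
```

In C, `__builtin_add_overflow` (GCC/Clang) gives the same either-exact-or-detected guarantee without undefined behavior.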
Floating-point operations are (for practical purposes) highly privileged because of extremely mature hardware implementations. Hardware that makes other forms of calculation more tractable is possible (and has existed in the past), and should be considered when evaluating FP's fitness for purpose; otherwise we will be stuck in the IEEE 754 local maximum forever.