
Yes, the respective sins of both floating- and fixed-point math are significantly reduced when you have more bits (for a demo of this, try using both at 16 bits: you will have to code very carefully and be very aware of the faults of each).

In many situations, though, I find that the graceful degradation of floats causes subtler bugs, which can be a problem.
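A toy accumulation shows the difference (plain Python; the scale factor is my own illustration): doubles drift silently, while a scaled integer stays exact until it overflows.

```python
# Floats degrade gracefully: each addition of 0.1 rounds,
# and the drift is silent.
float_total = 0.0
for _ in range(10_000):
    float_total += 0.1
# float_total drifts slightly away from 1000.0

# Fixed point (scale 10, i.e. one decimal digit) stays exact,
# as long as the running sum fits in the integer range.
SCALE = 10
fixed_total = 0
for _ in range(10_000):
    fixed_total += 1          # 0.1 represented as 1/SCALE
# fixed_total / SCALE is exactly 1000.0
```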



I bet it's much easier to put range warnings into floating point than to switch to fixed point, and that the effect on bugs would be similar.
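One sketch of what such a range warning might look like (the threshold and function name are illustrative, not an established API): warn once a result leaves the range where a double can represent every integer exactly.

```python
import warnings

# Beyond 2**53, a double can no longer represent every integer,
# so additions start silently losing ulps.
EXACT_INT_LIMIT = 2.0 ** 53

def checked_float_add(x: float, y: float) -> float:
    """Add two doubles, warning when the result leaves the exact-integer range."""
    result = x + y
    if abs(result) >= EXACT_INT_LIMIT:
        warnings.warn(f"result {result!r} is beyond the exact-integer range")
    return result
```

The check is per-operation and cheap; the hard part, as noted below, is that the meaningful threshold differs per calculation.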


Range warnings are going to be calculation-specific. Adding a non-zero fixed-point number "y" to a fixed-point number "x" N times will result in either an overflow or x + N*y.

If integer overflows trap (curse you Intel, C89), then repeated additions (important in many simulations) will either work as expected or crash.
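A minimal sketch of that behavior, simulating a 16-bit fixed-point add with an explicit software trap (standing in for the hardware trapping the comment wishes for; the names are illustrative):

```python
# 16-bit two's-complement range for the raw fixed-point representation.
I16_MIN, I16_MAX = -2**15, 2**15 - 1

def checked_add(x: int, y: int) -> int:
    """Add two raw 16-bit fixed-point values, trapping on overflow."""
    result = x + y
    if not I16_MIN <= result <= I16_MAX:
        raise OverflowError(f"{x} + {y} overflows 16 bits")
    return result

# Repeated addition either works exactly or fails loudly:
x = 0
try:
    for _ in range(100_000):
        x = checked_add(x, 700)   # 700 raw units per step
except OverflowError:
    pass                          # trapped; x holds the last in-range sum
```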

Floating-point operations are (for practical purposes) highly privileged because of extremely mature hardware implementations. Hardware implementations that make other forms of calculation more tractable are possible (and have existed in the past) and should be considered when evaluating FP's fitness for purpose; otherwise we will be stuck in the IEEE local maximum forever.


Deciding where to fix your point is also calculation-specific if you want to maintain accuracy. I'd expect the effort to be roughly comparable.



