Does anyone know a good site that fully explains and shows how floats are handled in computers? I did a Google search and it turned up a lot of sites talking about it, but none of it seems to add up for me. This is what I have found.
Base2 Floating Points (I am assuming this is what DBP uses)
Single Float
Bits: 0-22 = mantissa, 23-30 = exponent, 31 = sign bit
Double Float
Bits: 0-51 = mantissa, 52-62 = exponent, 63 = sign bit
The mantissa of both is stored using the sign-magnitude method (I would have thought two's complement would be faster, but oh well).
This sounds like it makes a lot of sense, but some of the float values I get don't seem to follow this. Can anyone post a link to a site that actually shows the binary representation of some float numbers?
I have also found that double and single floats take the same number of cycles to calculate on P4s, so the only reason not to use doubles is the doubled memory usage.
I also understand that different processors store and calculate floats differently. This may be what is throwing my calculations off.
Any help would be greatly appreciated.