I managed to make it back to the internet today.
I have this to say: MATH IS INACCURATE. It's not perfect.
Also, if we limit 1/3 to 3 decimal places, we get 0.333. Multiply that by 3 and we get 0.999. Bump it up to 4 decimal places and 1/3 = 0.3333; multiply again and we get 0.9999. Let me put this in different terms. Multiply 0.999 by 1000 and you get 999. Now multiply 1.0 by 1000 and you get 1000. 999 is not 1000, as I'm sure you can notice. When math tells you that 0.333333 * 3 = 1.0, that's really just 0.999999 rounded up.
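Don't take my word for it. Here's a quick C sketch (my own throwaway program, nothing official; I'm assuming the standard trunc() and pow() from math.h) that chops 1/3 off at a given number of decimal places and multiplies it back by 3:

#include <stdio.h>
#include <math.h>

int main(void)
{
    /* Truncate 1/3 to 'places' decimal digits, then multiply by 3. */
    for (int places = 3; places <= 6; places++) {
        double scale = pow(10.0, places);            /* 10^places */
        double third = trunc(scale / 3.0) / scale;   /* 0.333, 0.3333, ... */
        printf("%d places: 1/3 = %.*f, times 3 = %.*f\n",
               places, places, third, places, third * 3.0);
    }
    return 0;
}

Compile with -lm. Every single truncation comes back as 0.999...9, never 1.0.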
I'll put this in other terms. Take this C program, for example:
#include <stdio.h>

int main(void)
{
    printf("%f\n", 0.99999f);  /* a float just below 1.0 */
    printf("%f\n", 1.00000f);  /* exactly 1.0 */
    return 0;
}
What do you think the output is? This:

0.999990
1.000000
If 0.99999~ is equal to 1.0, then why does this C program output two different values and not the same value? Because they're NOT the same. 0.9 is not 1.0. 0.99 is not 1.0. 0.999 is not 1.0. 0.9999 is not 1.0. A dragon is not a dragonfly, no matter how small the difference in the wording. A potato is not a tomato, no matter how small the difference in the wording. So why would a number be another number due to a small difference in their values?

Since a decimal is only a matter of scale, what if we move the scale up and ignore the decimals? Is 99999999999 really just 100000000000? No, it is not. Multiply 2 by 2 and you get 4. Multiply 2 by 2.5 and you get 5.
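One more throwaway sketch of my own (again, just an illustration, assuming ordinary double arithmetic): append 9s one digit at a time and watch the leftover gap to 1.0. With enough digits a double would eventually round to exactly 1.0, but that's the machine cutting corners, not the 9s actually getting there; for every finite truncation shown here the gap is plainly nonzero.

#include <stdio.h>

int main(void)
{
    double x = 0.0;      /* will become 0.9, 0.99, 0.999, ... */
    double digit = 0.9;  /* value of the next appended 9 */
    for (int n = 1; n <= 10; n++) {
        x += digit;      /* append one more 9 */
        digit /= 10.0;
        printf("%2d nines: x = %.12f, 1 - x = %.12f\n", n, x, 1.0 - x);
    }
    return 0;
}

Run it: the gap shrinks by a factor of 10 each time, but it never prints as zero.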
Cheers,
-naota
I'm not a dictator to those that do stuff for me by will. Only those who don't.