> Floating point numbers are not exact, the value of pi is not exact
> either, and I guess that between them they are giving you errors.
Yes. Actually, this particular inexactness is entirely due to the value of pi. The calculation of sin pi is performed using the Double data type, which cannot represent pi exactly. Since Double uses binary fractions, evaluating pi at the prompt
only shows a decimal approximation to the binary approximation. To investigate the representation of pi, subtract from it a number which _can_ be represented easily and exactly as a binary fraction, as follows:
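A minimal sketch of that subtraction (my own reconstruction, not the original session; 3.140625 = 3 + 1/8 + 1/64 is chosen because it has an exact binary representation, so the subtraction loses nothing):

```haskell
main :: IO ()
main = do
  -- 3.140625 = 3 + 1/8 + 1/64 is an exact binary fraction, so
  -- subtracting it exposes the low-order bits of the Double pi.
  -- The operands are close enough that the subtraction itself is exact.
  print (pi - 3.140625 :: Double)
  -- Adding the printed remainder back to 3.140625 reconstructs the
  -- computer's pi to full precision.
```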
This shows that pi is represented using an approximation close to 3.14159265358979311600 (the nearest Double, whose exact value is 884279719003555/2^48).
This value, the computer's pi, differs from true pi by roughly 1.22e-16, which is precisely the magnitude of the "error" reported for sin pi.
As others have pointed out, floating point representations of numbers
are not exact. You don't even have to use fancy functions like sine to
see all kinds of nice algebraic properties break down.
let x = 1e8; y = 1e-8 in (y + x) - x == y + (x - x)
evaluates to False.
So from this you can see that addition is not even associative, and
neither is multiplication. Distributivity also fails in general.
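These failures are easy to exhibit directly; a small sketch (standard IEEE-754 Double behaviour, which both GHC and Hugs use):

```haskell
main :: IO ()
main = do
  let x = 1e8 :: Double
      y = 1e-8 :: Double
  -- Adding y to x must round (y is below x's precision),
  -- so regrouping the same sum changes the answer.
  print ((y + x) - x == y + (x - x))            -- False
  -- Distributivity fails too: the two sides round at different points.
  print (100 * (0.1 + 0.2) == 100 * 0.1 + 100 * (0.2 :: Double))  -- False
```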
Floating point computations are always approximate and have some level
of error associated with them. If you want proper real numbers, things
like equality testing become impossible in general. If you look
around, I think there are a couple of libraries in Haskell which let
you work with arbitrary precision reals though.
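For contrast, exact arithmetic over the rationals is already in the standard libraries via Data.Ratio (this is not one of the arbitrary-precision real libraries mentioned above, and it cannot represent pi, but it shows what exactness buys you):

```haskell
import Data.Ratio ((%))

main :: IO ()
main = do
  let x = 1 % 10 :: Rational
      y = 2 % 10 :: Rational
  -- With exact rationals, the identities that fail for Double hold.
  print (x + y == 3 % 10)                       -- True
  print (100 * (x + y) == 100 * x + 100 * y)    -- True
```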
> The Sine function in the prelude is not behaving as I expected. In the
> following Hugs session I tested sin on 0, 90, 180 & 360 degrees.
> Prelude> sin 0
> 0.0 --correct
> Prelude> sin (pi/2)
> 1.0 --correct
> Prelude> sin pi
> 1.22460635382238e-16 --WRONG!
> Prelude> sin (2*pi)
> -2.44921270764475e-16 --WRONG!
> Is this normal behaviour? Or am I using the trig functions in an unexpected way?
On Sat, Apr 29, 2006 at 04:51:40PM -0400, Cale Gibbard wrote:
> Floating point computations are always approximate and have some level
> of error associated with them. If you want proper real numbers, things
> like equality testing become impossible in general. If you look
> around, I think there are a couple of libraries in Haskell which let
> you work with arbitrary precision reals though.
That's not really true. The exact cases of floating point arithmetic can
be important, and it's really annoying when compilers break them.
For small integers, floating point arithmetic *is* exact, for example, and
also for arithmetic (not division) involving integers divided by powers of
two, provided there's no overflow or underflow. These exact properties
allow the moderately careful programmer to do exact calculations that could
otherwise have been done using clever integer arithmetic, while reusing code that works
with floating point numbers. It can be handy, for example, when computing
the symmetries of a basis set, since you don't need a separate integer
3-vector class (in C++, for example). This isn't a big deal, and it's much
less of a deal in Haskell, where you can profitably use typeclasses to make
the integer 3-vectors relatively easy to work with, but on the other hand,
why bother with an integer class that will behave identically to the
floating-point one whenever it's used? (Yes, the answer is the safety of
*knowing* that you made no approximation, but for such a small piece of
easily audited code, that's not likely to be worth the effort.)
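A sketch of the kind of exactness being described (safe as long as every intermediate value is an integer, or an integer divided by a power of two, well within Double's 53-bit mantissa):

```haskell
main :: IO ()
main = do
  -- Integer-valued Doubles add and multiply exactly
  -- (every integer up to 2^53 is representable).
  print (sum [1..100 :: Double] == 5050)        -- True
  -- Division by a power of two is also exact (dyadic rationals):
  print (7 / 8 + 1 / 8 == (1 :: Double))        -- True
  -- But 1/10 is not a binary fraction, so this fails:
  print (sum (replicate 10 (0.1 :: Double)) == 1)  -- False
```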
Haskell-Cafe mailing list
[hidden email] http://www.haskell.org/mailman/listinfo/haskell-cafe