68 messages
In reply to this post by Richard A. O'Keefe

Richard A. O'Keefe comments:

>   [floating point addition is not associative]
>
> And this is an excellent example of why violating expected laws is BAD.
> The failure of floating point addition to be associative means that there
> are umpteen ways of computing polynomials, for example, and doing it
> different ways will give you different answers.  This is *not* a good
> way to write reliable software.

[Then we see the scalar product whose value *may* depend on the
evaluation order]

I wonder... Would you say that *no* typical floating-point software is
reliable?

Jerzy Karczmarczuk

_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe
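Richard's point is easy to reproduce in GHCi. A minimal sketch with Double; the three literals are mine, not from the original post:

```haskell
-- Floating-point addition in Double is not associative: regrouping the
-- same three literals changes the rounding, and hence the result.
left, right :: Double
left  = (0.1 + 0.2) + 0.3   -- 0.6000000000000001
right = 0.1 + (0.2 + 0.3)   -- 0.6

main :: IO ()
main = print (left == right)   -- prints False
```

This is exactly why summing the same polynomial terms in a different order can give different answers.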
In reply to this post by David Benbennick

Ratio Integer may possibly have the same trouble, or maybe something
related. I was messing around with various operators on Rationals and
found that positive and negative infinity don't compare right. Here's a
small program which shows this; if I'm doing something wrong, I'd most
appreciate it being pointed out to me.

If I fire up ghci, import Data.Ratio and GHC.Real, and then ask about
the type of "infinity", it tells me Rational, which as far as I can
tell is Ratio Integer...? So far I have only found these wrong results
when I compare the two infinities.

Uwe

> module Main where
>
> import Data.Ratio
> import GHC.Real
>
> pinf, ninf, zero :: Rational
> pinf = infinity
> ninf = -infinity
> zero = 0
>
> main :: IO ()
> main = do
>   putStrLn ("pinf = " ++ show pinf)
>   putStrLn ("ninf = " ++ show ninf)
>   putStrLn ("zero = " ++ show zero)
>   putStrLn ("min pinf zero =\t" ++ show (min pinf zero))
>   putStrLn ("min ninf zero =\t" ++ show (min ninf zero))
>   putStrLn ("min ninf pinf =\t" ++ show (min ninf pinf))
>   putStrLn ("min pinf ninf =\t" ++ show (min pinf ninf) ++ "\twrong")
>   putStrLn ("max pinf zero =\t" ++ show (max pinf zero))
>   putStrLn ("max ninf zero =\t" ++ show (max ninf zero))
>   putStrLn ("max ninf pinf =\t" ++ show (max ninf pinf))
>   putStrLn ("max pinf ninf =\t" ++ show (max pinf ninf) ++ "\twrong")
>   putStrLn ("(<) pinf zero =\t" ++ show ((<) pinf zero))
>   putStrLn ("(<) ninf zero =\t" ++ show ((<) ninf zero))
>   putStrLn ("(<) ninf pinf =\t" ++ show ((<) ninf pinf) ++ "\twrong")
>   putStrLn ("(<) pinf ninf =\t" ++ show ((<) pinf ninf))
>   putStrLn ("(>) pinf zero =\t" ++ show ((>) pinf zero))
>   putStrLn ("(>) ninf zero =\t" ++ show ((>) ninf zero))
>   putStrLn ("(>) ninf pinf =\t" ++ show ((>) ninf pinf))
>   putStrLn ("(>) pinf ninf =\t" ++ show ((>) pinf ninf) ++ "\twrong")
>   putStrLn ("(<=) pinf zero =\t" ++ show ((<=) pinf zero))
>   putStrLn ("(<=) ninf zero =\t" ++ show ((<=) ninf zero))
>   putStrLn ("(<=) ninf pinf =\t" ++ show ((<=) ninf pinf))
>   putStrLn ("(<=) pinf ninf =\t" ++ show ((<=) pinf ninf) ++ "\twrong")
>   putStrLn ("(>=) pinf zero =\t" ++ show ((>=) pinf zero))
>   putStrLn ("(>=) ninf zero =\t" ++ show ((>=) ninf zero))
>   putStrLn ("(>=) ninf pinf =\t" ++ show ((>=) ninf pinf))
>   putStrLn ("(>=) pinf ninf =\t" ++ show ((>=) pinf ninf) ++ "\twrong")
On Feb 11, 2008 10:18 PM, Uwe Hollerbach <[hidden email]> wrote:
> If I fire up ghci, import
> Data.Ratio and GHC.Real, and then ask about the type of "infinity", it
> tells me Rational, which as far as I can tell is Ratio Integer...?

Yes, Rational is Ratio Integer.  It might not be a good idea to import
GHC.Real, since it doesn't seem to be documented at
http://www.haskell.org/ghc/docs/latest/html/libraries/.  If you just
import Data.Ratio, and define

> pinf :: Rational
> pinf = 1 % 0

> ninf :: Rational
> ninf = (-1) % 0

then things fail the way you expect (basically, Data.Ratio isn't
written to support infinity).  But it's really odd the way the
infinity from GHC.Real works.  Anyone have an explanation?

--
I'm doing Science and I'm still alive.
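The failure David describes is easy to check: `%` normalizes the fraction it builds (dividing by the gcd) and rejects a zero denominator at construction time. A minimal sketch; the exact exception raised is GHC-specific, so this just catches anything:

```haskell
import Control.Exception (SomeException, evaluate, try)
import Data.Ratio (denominator, numerator, (%))

main :: IO ()
main = do
  -- % normalizes: 2 % 4 is stored as 1 % 2.
  print (numerator (2 % 4 :: Rational), denominator (2 % 4 :: Rational))
  -- A zero denominator is rejected as soon as % tries to reduce it.
  r <- try (evaluate (1 % 0 :: Rational)) :: IO (Either SomeException Rational)
  case r of
    Left _  -> putStrLn "rejected: zero denominator"
    Right v -> putStrLn ("constructed (unexpected): " ++ show v)
```

GHC.Real's `infinity` sidesteps this only because it is built with the raw `:%` constructor, bypassing the reduction in `%`.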
In reply to this post by jerzy.karczmarczuk

Jerzy Karczmarczuk wrote:
> Would you say that *no* typical floating-point software is reliable?

It depends on how you define "reliable". Floating point intentionally
trades accuracy for speed, leaving it to the user to worry about
round-off errors. It is usually not too hard to get the probability of
failure somewhat low in practice, if you don't require a proof.

It used to be true - and may still be - that the engineers who
implement floating point in the hardware of our CPUs would never fly on
commercial airliners. Would you?

Would you entrust your country's nuclear arsenal to an automated system
that depends on floating point arithmetic?

Regards,
Yitz
In reply to this post by Ben Franksen

Ben Franksen wrote:
> ...and the Unimo paper[1] explains how to easily write a 'correct' ListT.
> BTW, Unimo is an extreme case of the monad laws holding only w.r.t.
> the 'right' equality, i.e. in the laws m == m' is to be understood as
>   observe_monad m == observe_monad m'
> (and even this '==' is of course /not/ the Eq class method but a semantic
> equality.)
> [1] http://web.cecs.pdx.edu/~cklin/papers/unimo-143.pdf

Are you sure? Maybe I am missing something, but I don't see any claim
that the Unimo ListT satisfies the laws any more than the old mtl
ListT. It looks to me like Unimo is just an attempt to provide an
easier way to create, use, and understand monads, not a change in
their semantics. ListT-Done-Right could also be defined via the Unimo
framework, and then it would satisfy the monad laws.

Thanks,
Yitz
In reply to this post by Yitzchak Gale

Yitzchak Gale writes:
> Jerzy Karczmarczuk wrote:
>> Would you say that *no* typical floating-point software is reliable?
>
> It depends on how you define "reliable".
>
> Floating point intentionally trades accuracy for speed, ...
> It used to be true - and may still be - that the engineers
> who implement floating point in the hardware of our
> CPUs would never fly on commercial airliners. Would you?
>
> Would you entrust your country's nuclear arsenal to an
> automated system that depends on floating point arithmetic?

1. This is not a "trade-off" one can simply opt out of. In
   scientific/engineering computation there is really no choice, since
   you have to compute logarithms, trigonometric functions, etc., and
   some inaccuracy is unavoidable. Of course, one may use intervals and
   other extremely costly machinery, but if the stability of the
   algorithms is well controlled, and in the normal case it is
   (especially if the basic arithmetic has some extra control bits to do
   the rounding), the issue is far from being mortal.

2. The story about engineers not flying on commercial planes is largely
   anecdotal, and you know that. Repeating it here doesn't change much.

3. A nuclear arsenal is never really "entrusted to an automated
   system", for reasons far beyond floating-point inaccuracies. On the
   other hand, all such software has to deal with probabilities and with
   imprecise experimental data, so even if for God knows what purpose
   everything used exact algebraic numbers, or controlled transcendental
   extensions, the input imprecision would kill all the sense of
   infinitely precise computations thereupon.

4. The non-reliability of engineering software has many more important
   causes, sometimes incredibly stupid ones, such as the confusion
   between metric and English units in the Mars Climate Orbiter crash...
   The Ariane 5 crash was the result not of a floating-point computation
   but of the conversion of a 64-bit double to a signed 16-bit number.

5. Of course, in the original posting's case, the underlying math/logic
   is discrete and has no similar inaccuracies, so the two worlds should
   not be confounded... Here, if some laws get broken, it is the result
   of bad conventions, which usually can be easily avoided.

Jerzy Karczmarczuk
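The failure mode in point 4 (a narrowing conversion that silently overflows) is exactly what a range check avoids. A minimal sketch in Haskell; `safeToInt16` is a hypothetical helper of mine, not anything from the thread:

```haskell
import Data.Int (Int16)

-- Hypothetical helper: narrow a Double to Int16 only when the value
-- fits, returning Nothing instead of silently overflowing (the Ariane 5
-- failure was an unchecked 64-bit-to-16-bit conversion of this kind).
safeToInt16 :: Double -> Maybe Int16
safeToInt16 x
  | x >= fromIntegral (minBound :: Int16)
  , x <= fromIntegral (maxBound :: Int16) = Just (round x)
  | otherwise                             = Nothing

main :: IO ()
main = do
  print (safeToInt16 1000.0)   -- Just 1000
  print (safeToInt16 1.0e9)    -- Nothing: out of Int16 range
```

The point is not the helper itself but that the check is forced into the type: a caller must handle `Nothing` rather than proceed with a wrapped value.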
In reply to this post by David Benbennick

On Feb 12, 2008, at 1:50 AM, David Benbennick wrote:
> On Feb 11, 2008 10:18 PM, Uwe Hollerbach <[hidden email]> wrote:
>> If I fire up ghci, import
>> Data.Ratio and GHC.Real, and then ask about the type of "infinity",
>> it tells me Rational, which as far as I can tell is Ratio Integer...?
>
> Yes, Rational is Ratio Integer.  It might not be a good idea to import
> GHC.Real, since it doesn't seem to be documented at
> http://www.haskell.org/ghc/docs/latest/html/libraries/.  If you just
> import Data.Ratio, and define
>
>> pinf :: Rational
>> pinf = 1 % 0
>
>> ninf :: Rational
>> ninf = (-1) % 0
>
> Then things fail the way you expect (basically, Data.Ratio isn't
> written to support infinity).  But it's really odd the way the
> infinity from GHC.Real works.  Anyone have an explanation?

An educated guess here: the value in GHC.Real is designed to permit
fromRational to yield the appropriate high-precision floating value
for infinity (exploiting IEEE arithmetic in a simple, easily-understood
way).  If I'm right, it probably wasn't intended to be used as a
Rational at all, nor to be exploited by user code.

-Jan-Willem Maessen
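Jan-Willem's guess can be checked directly: in GHC, `fromRational` special-cases a zero denominator and hands back the corresponding IEEE value. A sketch; note GHC.Real is an internal module, so this is GHC-specific behaviour, not portable Haskell:

```haskell
import GHC.Real (infinity, notANumber)

main :: IO ()
main = do
  -- The unreduced 1 :% 0 converts to IEEE positive infinity...
  print (fromRational infinity :: Double)              -- Infinity
  print (isInfinite (fromRational infinity :: Double)) -- True
  -- ...and 0 :% 0 converts to an IEEE NaN.
  print (isNaN (fromRational notANumber :: Double))    -- True
```

So the values exist to feed the Rational-to-floating conversion, which supports the guess that they were never meant to live as ordinary Rationals.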
In reply to this post by Jan-Willem Maessen

On Feb 12, 2008 6:12 AM, Jan-Willem Maessen <[hidden email]> wrote:
> An educated guess here: the value in GHC.Real is designed to permit
> fromRational to yield the appropriate high-precision floating value
> for infinity (exploiting IEEE arithmetic in a simple, easily-understood
> way).  If I'm right, it probably wasn't intended to be used as a
> Rational at all, nor to be exploited by user code.

Well... I dunno. Looking at the source to GHC.Real, I see

```
infinity, notANumber :: Rational
infinity = 1 :% 0
notANumber = 0 :% 0
```

This is actually the reason I imported GHC.Real, because just plain %
normalizes the rational number it creates, and that barfs very quickly
when the denominator is 0. But the values themselves look perfectly
reasonable... no?

Uwe
2008/2/12 Uwe Hollerbach <[hidden email]>:
> Well... I dunno. Looking at the source to GHC.Real, I see
>
>   infinity, notANumber :: Rational
>   infinity = 1 :% 0
>   notANumber = 0 :% 0
>
> This is actually the reason I imported GHC.Real, because just plain %
> normalizes the rational number it creates, and that barfs very quickly
> when the denominator is 0. But the values themselves look perfectly
> reasonable... no?

Ummm... I'm going to have to go with no. In particular we can't have
signed infinity represented like this and maintain reasonable numeric
laws:

    1/0 = 1/(-0) = (-1)/0

Rationals are defined not to have a zero denominator, so I'll bet in
more than one place in Data.Ratio that assumption is made.

Luke
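Luke's bet is easy to confirm for Ord, and it explains Uwe's "wrong" lines: comparison on Ratio cross-multiplies, so with zero denominators both directions reduce to `0 <= 0`, the infinities compare as equal, and `min`/`max` just return whichever argument their default definition tests first. A GHC-specific sketch (the names `pinf`/`ninf` echo Uwe's program):

```haskell
import GHC.Real (infinity)

pinf, ninf :: Rational
pinf = infinity          -- internally 1 :% 0
ninf = negate infinity   -- internally (-1) :% 0; negate does not re-reduce

main :: IO ()
main = do
  -- Ord on Ratio compares a :% b with c :% d as  a*d `compare` c*b.
  -- With b == d == 0 both products are 0, so the infinities tie:
  print (compare pinf ninf)   -- EQ, though mathematically pinf > ninf
  print (pinf <= ninf)        -- True  (0 <= 0)
  print (ninf <= pinf)        -- True  (0 <= 0)
```

This is the "assumption made in more than one place": every arithmetic and ordering operation on Ratio silently relies on the denominator being nonzero.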
Richard A. O'Keefe wrote:
>
> On 12 Feb 2008, at 5:14 pm, [hidden email] wrote:
>> Would you say that *no* typical floating-point software is reliable?
>
> With lots of hedging and clutching of protective amulets around the
> word "reliable", of course not.  What I *am* saying is that
> (a) it's exceptionally HARD to make reliable because although the
>     operations are well defined and arguably reasonable they do NOT
>     obey the laws that school and university mathematics teach us to
>     expect them to obey

Ints do not obey those laws, either. It is not exceptionally hard to
write reliable software using ints. You just have to check for
exceptional conditions. That's also the case for floating point.

That said, I suspect that 90% of programs that use float and double
would be much better off using something else. The only reason to use
floating point is performance.

> This is leaving aside all sorts of machine strangeness, like the
> student whose neural net program started running hundreds of times
> slower than he expected.  I advised him to replace
>
>     s = 0;
>     for (i = 0; i < n; i++) s += x[i]*x[i];
>
> by
>
>     s = 0;
>     for (i = 0; i < n; i++)
>         if (fabs(x[i]) > 1e-19)
>             s += x[i]*x[i];
>
> and the problem went away.  Dear reader: do you know why I expected
> this problem, what it was, and why this is NOT a general solution?

I guess it trapped on creating denormals. But again, presumably the
reason the student used doubles here was because he wanted his program
to be fast. Had he read just a little bit about floating point, he
would have known that it is *not* fast under certain conditions. As it
were, he seems to have applied what he thought was an optimisation
(using floating point) without knowing anything about it. A
professional programmer would get (almost) no sympathy in such a
situation.

Roman
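Roman's point that Ints break the familiar laws too is easy to demonstrate: GHC's fixed-width Int wraps around silently on overflow (the Haskell report leaves overflow behaviour open, so this sketch is GHC-specific):

```haskell
main :: IO ()
main = do
  let top = maxBound :: Int
  -- In GHC, Int overflow wraps around, two's-complement style...
  print (top + 1 == minBound)   -- True: maxBound + 1 wraps to minBound
  -- ...so the "obvious" law  x + 1 > x  fails, just as FP laws fail.
  print (top + 1 > top)         -- False
```

Just as with floating point, reliable code has to check for the exceptional condition (here, proximity to maxBound) rather than trust the algebraic law.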
Trialog:

Roman Leshchinskiy writes:
> Richard A. O'Keefe wrote:
>> [hidden email] wrote:
>>> Would you say that *no* typical floating-point software is reliable?
>>
>> With lots of hedging and clutching of protective amulets around the
>> word "reliable", of course not.  What I *am* saying is that
>> (a) it's exceptionally HARD to make reliable because although the
>>     operations are well defined and arguably reasonable they do NOT
>>     obey the laws that school and university mathematics teach us to
>>     expect them to obey
>
> Ints do not obey those laws, either. It is not exceptionally hard to
> write reliable software using ints. You just have to check for
> exceptional conditions. That's also the case for floating point.
>
> That said, I suspect that 90% of programs that use float and double
> would be much better off using something else. The only reason to use
> floating point is performance.

I have a bit different perspective... First, when I see the advice "use
something else", I always ask "what", and I get an answer very, very
rarely... Well? What do you propose?

Then, the problem is not always pathological, in the sense of
"exceptional conditions". There are touchy points related to the
stability of the algorithms for the solution of differential equations.
There are doubtful random number generators in the Monte-Carlo
business. There are ill-conditioned matrices and screwed-up iterative
definitions. Algorithms work, work, and ultimately explode or produce
rubbish. The "laws" which get broken are "almost" respected for a long
time, and then we have the Bald Man (Sorites) paradox...

RAO'K very wisely says that people should avoid reinventing wheels, and
they should use established packages, written by people who know. The
problem *here* is that we would like to have something fabulous in
Haskell - for example... And there aren't too many experts who would
convert to the Functional Religion just for fun.

What is *much worse*, some potential users who could encourage the
building of such packages in the numerical domain typically don't
believe that FP gives anything interesting. At least, this is the
opinion of physicists I spoke to recently. Never mind. We shall dance
over their cadavers, unless they dance over ours. In both cases we
shall be happy.

Jerzy Karczmarczuk