Hi café,
Why are (&&) and (||) in the Prelude right-associative? This contradicts my expectation and the way these operators work in other languages. That said, I can't think of any harm in it. This came up in a question asked by a student, and I have no idea why the design is this way.

Thanks,
Richard

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.
Could it be so that you can short-circuit in the expected order (left to right)?

Left associative: a && b && c = (a && b) && c, which means going into a && b, which means going into a, and if it is False, then going back up the expression tree.

If it is right associative: a && b && c = a && (b && c), which means going into a, and if it is False, you are done.

If you have a conjunction of n booleans, the complexity of evaluating this expression is linear with respect to the position of the first False (in the case of &&). In the left-associative case, it is linear in the number of (&&)s.

Just a guess. But you got me interested now. Does anyone have the real explanation?

Cheers,
Ivan
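Not from the thread, but a quick check makes Ivan's point about short-circuiting concrete: because (&&) is lazy in its second argument, both groupings stop at the first False, even with undefined further along. The names below are my own.

```haskell
-- Both groupings of (&&) short-circuit once a False is reached;
-- neither evaluation below ever forces an 'undefined'.
leftAssoc, rightAssoc :: Bool
leftAssoc  = (False && undefined) && undefined  -- forces (False && undefined) first, which is False
rightAssoc = False && (undefined && undefined)  -- stops at the very first operand
```

Loaded in GHCi, both evaluate to False without raising an exception, so the difference between the two associations lies in the shape of the evaluation, not in whether short-circuiting happens.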
In reply to this post by Richard Eisenberg-5
I don't know the historical answer, but I think it's because the true fixity can't be expressed in Haskell. As far as I can tell, there's no operator with the same precedence as (&&) or (||) that can be meaningfully combined with it. But if these operators were declared just "infix", then we'd have to write junk like x || (y || z). So instead we picked a direction out of a bag and never had a reason to look back.
In reply to this post by Ivan Perez
Am 12.04.19 um 04:26 schrieb Ivan Perez:
> Could it be so that you can shortcut in the expected order (left to right)?
>
> Left associative:
> a && b && c = (a && b) && c which means going into a && b, which means
> going into a, and if it is False, then going up in the expression tree.

For compile-time evaluation of known-to-be-constant values, this is what would indeed happen, but it wouldn't matter, because such evaluation is done O(1) times. Generated code will simply evaluate the conditions one after the other and abort as soon as it sees False.

> If you have a conjunction of n booleans, the complexity of evaluating
> this expression is linear with respect to the position of the first
> False (in the case of &&). In the left-associative case, it is linear
> in the number of &&s.

This isn't the case.
On 4/12/19 6:42 AM, Joachim Durchholz wrote:
> Am 12.04.19 um 04:26 schrieb Ivan Perez:
>> Could it be so that you can shortcut in the expected order (left to right)?
>>
>> Left associative:
>> a && b && c = (a && b) && c which means going into a && b, which means
>> going into a, and if it is False, then going up in the expression tree.
>
> For compile-time evaluation of known-to-be-constant values, this is what
> would indeed happen, but it wouldn't matter because such evaluation is
> done O(1) times.
> Generated code will simply evaluate the conditions one after the other
> and abort as soon as it sees False.
>
>> If you have a conjunction of n booleans, the complexity of evaluating
>> this expression is linear with respect to the position of the first
>> False (in the case of &&). In the left-associative case, it is linear
>> in the number of &&s.
>
> This isn't the case.

The program below is evidence that it *is* the case: the way the expression is associated has an effect on run time. Adding more (&&) in the condition of the following function doesn't change the run time, but substituting the infixl variant (&.) does result in a measurable growth, linear in the number of (&.). Of course, this is true only without optimizations, but the distinction is there, and many people do not have intuition about what is and isn't optimized by GHC, so this is certainly a worthwhile point of discussion.

Li-yao

    import Control.Monad

    f :: Bool -> IO ()
    f b = if b && True
               -- 9 more
               && True && True && True && True
               && True && True && True && True
            then error "oops"
            else return ()

    (&.) :: Bool -> Bool -> Bool
    (&.) = (&&)
    infixl 1 &.

    main :: IO ()
    main = do
      mapM_ f (replicate 1000000 False)
In reply to this post by David Feuer
David Feuer <[hidden email]> writes:
> I don't know the historical answer, but I think it's because the true
> fixity can't be expressed in Haskell.

No, the historical answer is that with lazy evaluation the short-circuiting happens in the expected order. We did think about that.

--
Jón Fairbairn
[hidden email]
In reply to this post by Richard Eisenberg-5
How does the right associativity of the short-circuiting Boolean operators in any way contradict the way that such operators work in other languages? These operators are associative, so a && (b && c) necessarily has the same value and effects as (a && b) && c.

It has never been the case that all operators in all programming languages were left associative. For addition and subtraction it matters; you don't want a-b+c interpreted as a-(b+c); but not for (||) and not for (&&). My expectation is that these operators should be right associative.
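The associativity claim above is easy to confirm exhaustively for total arguments; here is a small sketch of mine (not part of the original message) that checks all eight combinations:

```haskell
-- (&&) and (||) are associative on Bool: check all 8 argument combinations.
bools :: [Bool]
bools = [False, True]

andAssociative, orAssociative :: Bool
andAssociative = and [ (a && (b && c)) == ((a && b) && c)
                     | a <- bools, b <- bools, c <- bools ]
orAssociative  = and [ (a || (b || c)) == ((a || b) || c)
                     | a <- bools, b <- bools, c <- bools ]
```

Both values come out True, so for defined arguments the grouping cannot change the result; only the shape of evaluation differs.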
Correct me if I'm wrong here. On Fri, 12 Apr 2019 at 05:21, Richard O'Keefe <[hidden email]> wrote:
In pure Haskell, perhaps, but in other languages, I would say no. In a language like C, I would expect that:

- a && b && c be represented in the AST as (a && b) && c
- the compiler optimizes the implementation of && to short circuit, which is, in some way, using laziness.

This is not to say that they are right-associative; it's just a compiler optimization.
I can't find any reference for logic itself and, because /\ is introduced as associative from the start in propositional logic, it does not really matter. However, my training as a kid in math, and the way I was taught to work out (+) from left to right (implicitly associating to the left), would have led me to intuitively parse / parenthesize conjunctions with multiple (&&) the same way unless instructed otherwise.

I think this portion of the Haskell Report is also relevant to this intuition in the case of Haskell programmers: "If no fixity declaration is given for `op` then it defaults to highest precedence and left associativity" (Section 4.4.2).

Cheers,
Ivan
Sorry, this is from Section 3.2. Section 4.4.2 goes on to say: "Any operator lacking a fixity declaration is assumed to be infixl 9".

Ivan
In reply to this post by Richard O'Keefe
If you look at the definitions of other languages (C, Java), you see that the operators are defined to be left-associative. Perhaps those other languages got it wrong, then. :)

In any case, this conversation has been illuminating. Thanks!
Richard
On 2019-04-12 8:47 AM, Richard Eisenberg wrote:
I don't think so. It's just that short-circuit evaluation isn't a direct consequence of associativity in those languages, because they're otherwise strict and this is an exception to their default semantics.
In reply to this post by Richard O'Keefe
Am 12.04.19 um 11:21 schrieb Richard O'Keefe:
> It has never been the case that all operators in all programming
> languages were left associative. For addition and subtraction it
> matters; you don't want a-b+c interpreted as a-(b+c), but not for ||
> and not for &&. My expectation is that these operators should be
> right associative.

What is the basis for this expectation? My expectation would be left associativity, just because I read stuff from left to right, and left associativity is what I get if I mentally parse from left to right. So I'm curious what thought process arrives at the opposite expectation.

Regards,
Jo
> On Apr 12, 2019, at 3:15 PM, Joachim Durchholz <[hidden email]> wrote:
> What is the basis for this expectation?
> My expectation would be left associativity, just because I read stuff
> from left to right, and left associativity is what I get if I mentally
> parse from left to right.
> So I'm curious what thought process arrives at the opposite expectation.

Since (&&) short-circuits on the first failure, and (||) short-circuits on the first success, right associativity is more intuitive:

    a && b && c == a && (b && c)
    a || b || c == a || (b || c)

as each can stop immediately when `a` is respectively False or True. This matches the fact that (&&) is naturally strict in its first argument and lazy in the second, which works well with currying.

Also consider that, for example, in GHCi:

    foldr (&&) True (replicate 100000000000 False)

returns immediately, while:

    foldl (&&) True (replicate 100000000000 False)

hangs, which clearly shows that right folds are the more natural way to combine these boolean operators.

And reading left to right *is*, IMHO, right associative! You see:

    a || ...

and immediately process a, then move on to what follows. When reading an English sentence, you don't have to reach the last word before you can start to make sense of the preceding words; for left associativity, try German... [ Oops! Never mind, perhaps that explains the difference in perspective. :-) :-) ]

--
Viktor.
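The contrast between the two folds can be made stark without waiting on a 10^11-element list; a small sketch of my own, using an infinite list:

```haskell
-- foldr (&&) consumes only as much of the list as needed: the first
-- False short-circuits the whole conjunction, so even an infinite
-- list is fine here.
stopsEarly :: Bool
stopsEarly = foldr (&&) True (False : repeat True)

-- foldl, by contrast, must reach the end of the list before producing
-- anything, so the analogous call would loop forever:
--   foldl (&&) True (False : repeat True)  -- never returns
```

stopsEarly returns False immediately; this is exactly the behavior the Prelude's `and` relies on, which the Haskell Report defines as foldr (&&) True.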
In reply to this post by Jon Fairbairn
>> I don't know the historical answer, but I think it's because the true
>> fixity can't be expressed in Haskell.
>
> No, the historical answer is that with lazy evaluation the
> shortcutting happens in the expected order. We did think about
> that.

I don't understand how laziness enters the picture:

    (False && ⊥) && ⊥  ≡  False
    False && (⊥ && ⊥)  ≡  False

In both cases we get the same result.

Stefan
Er? Without laziness, you're going to try to evaluate the bottoms regardless of where they are. Or are you asserting that the short-circuiting done by many strict languages is their standard evaluation model?

On Fri, Apr 12, 2019 at 7:32 PM Stefan Monnier <[hidden email]> wrote:
> I don't understand how laziness enters the picture: [...]

brandon s allbery kf8nh
> Er? Without laziness, you're going to try to evaluate the bottoms
> regardless of where they are.

Exactly: with laziness, either associativity gives the same result, and without laziness, either associativity also gives the same result. The two seem orthogonal to me.

Stefan
I think Brandon's point is that short-circuiting is in fact an example of lazy evaluation, regardless of the language being otherwise strict.
Exactly. Short-circuiting is emulating laziness in this one case where it turns out to be generally useful. And while (_|_ && _|_) may be evaluatable from a logical standpoint, computer languages tend not to do well with it: regardless of how it evaluates, (&&) is going to try to force at least one of the bottoms.
brandon s allbery kf8nh
In reply to this post by Theodore Lief Gannon
> I think Brandon's point is that short-circuiting is in fact an example
> of lazy evaluation, regardless of the language being otherwise strict.

Sure, call it "short-circuiting"; my comment stays the same: in which way is it related to *associativity*? In terms of semantics, all three (++), (||), and (&&) are associative AFAICT, so the choice of which associativity to use in the syntax is not related to the semantics (instead it's likely related to secondary concerns like efficiency).

Stefan
In reply to this post by Stefan Monnier
> I don't understand how laziness enters the picture:

The first expression builds two thunks before trying the leftmost operand, and the second one only builds one thunk. More generally, a left-associative conjunction of n lazy Bools will build n - 1 thunks at once when forced, but a right-associative one will have only one at a time, though it may have to iterate through all n - 1 before finishing.