Smarter do notation

Smarter do notation

Daniel Peebles
Hi all,

I was wondering what people thought of a smarter do notation. Currently, there's an almost trivial desugaring of do notation into (>>=), (>>), and fail (grr!) which seem to naturally imply Monads (although oddly enough, return is never used in the desugaring). The simplicity of the desugaring is nice, but in many cases people write monadic code that could easily have been Applicative.

For example, if I write in a do block:

x <- action1
y <- action2
z <- action3
return (f x y z)

that doesn't require any of the context-sensitivity that Monads give you, and could be processed a lot more efficiently by a clever Applicative instance (a parser, for instance). Furthermore, if return values are ignored, we could use the (<$), (<*), or (*>) operators which could make the whole thing even more efficient in some instances.
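(For concreteness, assuming a suitable Applicative instance, the kind of thing I'd hope a smarter desugaring could produce for the block above is just

f <$> action1 <*> action2 <*> action3

rather than a chain of (>>=) ending in return.)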

Of course, the fact that the return method is explicitly mentioned in my example suggests that unless we do some real voodoo, Applicative would have to be a superclass of Monad for this to make sense. But with the new default superclass instances people are talking about in GHC, that doesn't seem too unlikely in the near future.

On the implementation side, it seems fairly straightforward to determine whether Applicative is enough for a given do block. Does anyone have any opinions on whether this would be a worthwhile change? The downsides seem to be a more complex desugaring pass (although still something most people could perform in their heads), and some instability with making small changes to the code in a do block. If you make a small change to use a variable before the return, you instantly jump from Applicative to Monad and might break types in your program. I'm not convinced that's necessarily a bad thing, though.

Any thoughts?

Thanks,
Dan

Re: Smarter do notation

Tony Morris-4

Tasty.

--
Tony Morris
http://tmorris.net/

Re: Smarter do notation

Ivan Lazar Miljenovic
In reply to this post by Daniel Peebles
On 4 September 2011 12:34, Daniel Peebles <[hidden email]> wrote:
> Hi all,
> For example, if I write in a do block:
> x <- action1
> y <- action2
> z <- action3
> return (f x y z)
> that doesn't require any of the context-sensitivty that Monads give you, and
> could be processed a lot more efficiently by a clever Applicative instance
> (a parser, for instance).

What advantage is there in using Applicative rather than Monad for
this?  Does it _really_ lead to an efficiency increase?

--
Ivan Lazar Miljenovic
[hidden email]
IvanMiljenovic.wordpress.com

Re: Smarter do notation

Daniel Peebles
With parsers, for example, it amounts to the difference between a context-free and a context-sensitive language. The functions hidden behind a monadic bind are effectively opaque to any sort of analysis, whereas the static structure of an applicative can be analyzed as much as you want. Ed Kmett does this in his trifecta parsing library (I think there are a couple of other libraries that also do this), but you have to use the applicative interface explicitly where possible to take advantage of the additional optimizations.

This would also have benefits for other sorts of EDSLs, for the same reason. An applicative computation might for example be sparked and processed in parallel, whereas it's a lot harder (impossible) to do that if your structure isn't determined beforehand.
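A rough illustration of the difference (a generic sketch with Control.Applicative in scope, not tied to any particular parser library):

    -- static shape: both arguments are visible to the instance up front
    both :: Applicative f => f a -> f b -> f (a, b)
    both p q = (,) <$> p <*> q

    -- opaque shape: everything after p is hidden inside a function of p's
    -- result, so the instance can't inspect it before running p
    both' :: Monad m => m a -> m b -> m (a, b)
    both' p q = p >>= \x -> q >>= \y -> return (x, y)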


Re: Smarter do notation

Stephen Tetley-2
In reply to this post by Daniel Peebles
It seems like a complication for a very slight advantage.

Firstly, so far only UU Parsing and Trifecta appear to have optimized
Applicative instances (does the optimization work for mixed
Monad+Applicative parsers, or only if the whole parser is
Applicative?). Secondly, if you want Applicative then you can write in
the Applicative style, often as succinct as do-notation.

Re: Smarter do notation

Shachaf Ben-Kiki
In reply to this post by Daniel Peebles
On Sat, Sep 3, 2011 at 19:34, Daniel Peebles <[hidden email]> wrote:
...
> Of course, the fact that the return method is explicitly mentioned in my
> example suggests that unless we do some real voodoo, Applicative would have
> to be a superclass of Monad for this to make sense. But with the new default
> superclass instances people are talking about in GHC, that doesn't seem too
> unlikely in the near future.
...

One way to avoid explicitly mentioning return would be to use monad
comprehension syntax, which uses return implicitly, instead of do
notation. This also has the advantage of being "new" in GHC 7.2,
rather than officially being part of Haskell 98/2010, and therefore
being more amenable to various extensions (e.g. there are already
extensions that use MonadPlus/MonadZip/MonadGroup). Applicative would
probably still have to be a superclass of Monad, but the translation
of this syntax is simpler.
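For Dan's example, that would be something like (a sketch, using the MonadComprehensions extension):

    [ f x y z | x <- action1, y <- action2, z <- action3 ]

with the final return supplied by the translation rather than written by the user.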

    Shachaf

Re: Smarter do notation

Daniel Peebles
Good idea! I'd forgotten about monad comprehensions.

Re: Smarter do notation

Sebastian Fischer-2
In reply to this post by Daniel Peebles
On Sun, Sep 4, 2011 at 11:34 AM, Daniel Peebles <[hidden email]> wrote:
> I was wondering what people thought of a smarter do notation.

I'd support it (for both do notation and monad comprehensions) once
Applicative is a superclass of Monad.

To me it looks like a slight complication for an advantage. Parsers
are not the only examples that benefit. Implicitly parallel
computations would be another, because the arguments of <*> can be
evaluated in parallel while those of >>= cannot.
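As a toy illustration (Par is a made-up wrapper here, not an existing library):

    import Control.Applicative
    import Control.Concurrent (forkIO)
    import Control.Concurrent.MVar

    newtype Par a = Par { runPar :: IO a }

    instance Functor Par where
      fmap f (Par io) = Par (fmap f io)

    instance Applicative Par where
      pure = Par . return
      Par ff <*> Par fx = Par $ do
        v <- newEmptyMVar
        _ <- forkIO (fx >>= putMVar v)  -- both sides are known up front, so fork one
        f <- ff
        x <- takeMVar v
        return (f x)

A Monad instance cannot do the same inside (>>=), because the second computation only exists once the first result is available.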

I think it's quite reasonable to try to desugar into the most general
form. Something like

    do x <- something
       return (bla x)

could (and, I think, should) be desugared by using only Functor.
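That is, roughly

    bla <$> something

(or fmap bla something), giving only a Functor constraint.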

Sebastian

Re: Smarter do notation

Thomas Schilling-2
In reply to this post by Daniel Peebles
I don't quite understand how this would work.  For example, would it work for these examples?

  do x <- blah
     let foo = return
     foo (f x)  -- Using an alias of return/pure

  do x <- Just blah   
     Just (f x)  -- another form of aliasing

  do x <- blah
     return (g x x)  -- could perhaps be turned into:
                     -- (\x -> g x x) <$> blah

  do x <- blah
     y <- return x
     return (f y)    -- = f <$> blah ?

  do x1 <- foo1        -- effect order must not be reversed
     x2 <- foo2
     return (f x2 x1)  -- note reversed order

  -- multiple uses of applicative
  do x1 <- foo1
     y <- return (f x1)
     x2 <- foo2
     y2 <- return (g y x2)
     return y2

So I guess it's possible to detect the pattern:

  do x1 <- foo1; ...; xN <- fooN; [res <-] return (f {x1..xN})

where {x1..xN} means "x1..xN in some order", and turn it into:

  do [res <-] (\x1..xN -> f {x1..xN}) <$> foo1 <*> ... <*> fooN

Open issues would be detection of the correct "return"-like thing.  This is why using monad comprehensions would help somewhat, but not fully because it's still possible to put "x <- return y" in the generators part.  The current desugaring of do-notation is very simple because it doesn't even need to know about the monad laws.  They are used implicitly by the optimiser (e.g., "foo >>= \x -> return x" is optimised to just "foo" after inlining), but the desugarer doesn't need to know about them.
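For example, even in a comprehension, something like

  [ f y | x <- foo, y <- return x ]

still has a return in generator position, so a purely syntactic check for a trailing return would not catch it.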


--
Push the envelope. Watch it bend.

Re: Smarter do notation

Dominique Devriese-2
In reply to this post by Daniel Peebles
It's not the same as what you propose, but it's related, so for
discussion, I just want to point out idiom brackets (an analog of
do-notation for Applicative functors) which have been introduced in
some Haskell-related languages. Examples are Idris
(http://www.cs.st-andrews.ac.uk/~eb/Idris/donotation.html) and SHE
(http://personal.cis.strath.ac.uk/~conor/pub/she/idiom.html).

Dominique

Re: Smarter do notation

Dan Doel
In reply to this post by Ivan Lazar Miljenovic
On Sun, Sep 4, 2011 at 12:24 AM, Ivan Lazar Miljenovic
<[hidden email]> wrote:

> What advantage is there in using Applicative rather than Monad for
> this?  Does it _really_ lead to an efficiency increase?

Forget about efficiency. What if I just want nicer syntax for some
applicative stuff? For instance, this is applicative:

  do x <- fx ; y <- fy ; z <- fz ; pure (x*x + y*y + z*z)

But my only option for writing it to require just applicative is something like:

  (\x y z -> x*x + y*y + z*z) <$> fx <*> fy <*> fz

Even if I had idiom brackets, it'd just be:

  (| (\x y z -> x*x + y*y + z*z) fx fy fz |)

Basically the situation boils down to this: applicatives admit a form
of let as sugar:

  let
    x = ex
    y = ey
    z = ez
   in ...

where the definitions are not recursive, and x is not in scope in ey
and so on. This is desugarable to (in lambda calculus):

  (\x y z -> ...) (ex) (ey) (ez)

but we are currently forced to write in the latter style, because
there's no support for the sugared syntax. So if anyone's looking for
motivation, ask yourself if you've ever found let or where useful. And
of course, in this case, we can't just beta reduce the desugared
expression, because of the types involved.

Comprehensions are rather like an expression with a where:

  [ x*x + y*y + z*z | x <- ex, y <- ey, z <- ez ]

-- Dan

Re: Smarter do notation

Daniel Peebles
In reply to this post by Dominique Devriese-2
Yeah, I use SHE and her idiom brackets for several of my projects, but there are many cases in which they're awkward too.

Another consideration about the "monad" comprehensions is that unbound (i.e., with no <-) statements in a monad comprehension are treated as MonadPlus guards, so the applicative <* and *> wouldn't really have a clean place to go.
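For example (sketching the GHC 7.2 behaviour), in

    [ x + y | x <- xs, y <- ys, x /= y ]

the bare x /= y becomes a MonadPlus guard, roughly xs >>= \x -> ys >>= \y -> guard (x /= y) >> return (x + y), so there's no slot left for a result-discarding statement that <* or *> could take over.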

Re: Smarter do notation

Sebastian Fischer-2
In reply to this post by Thomas Schilling-2
These are important questions. I think there is a trade-off between supporting many cases and having a simple desugaring. We should find a sweet-spot where the desugaring is reasonably simple and covers most idiomatic cases.

> So I guess it's possible to detect the pattern:
>
>   do x1 <- foo1; ...; xN <- fooN; [res <-] return (f {x1..xN})
>
> where {x1..xN} means "x1..xN in some order"

Your third example shows that it's beneficial to also support duplicated variables.

> and turn it into:
>
>   do [res <-] (\x1..xN -> f {x1..xN}) <$> foo1 <*> ... <*> fooN

I think this is a reasonably simple rule and it covers most idiomatic cases.

> Open issues would be detection of the correct "return"-like thing.

I'm not sure how much return aliasing is worth supporting. In general it is undecidable but we could add special cases for specialized returns (like 'Just' instead of 'return') depending on how difficult it is to implement.

> The current desugaring of do-notation is very simple because it doesn't even need to know about the monad laws.

Could you point out which monad law your proposed desugaring requires?

> They are used implicitly by the optimiser (e.g., "foo >>= \x -> return x" is optimised to just "foo" after inlining), but the desugarer doesn't need to know about them.

Does the inliner have RULES for monad laws or why would this work?

Sebastian

Re: Smarter do notation

Max Bolingbroke-2
On 5 September 2011 02:38, Sebastian Fischer <[hidden email]> wrote:
> These are important questions. I think there is a trade-off between
> supporting many cases and having a simple desugaring. We should find a
> sweet-spot where the desugaring is reasonably simple and covers most
> idiomatic cases.

I have proposed a desugaring (in executable form) at
https://gist.github.com/1194308.

My desugaring aims for a slightly different design that does not try
to detect "return" and instead treats the use of <*, *> and liftA2
purely as an optimisation - so any computation using "do" still
generates a Monad constraint, but it may be desugared in a more
efficient way than it is currently by using the Applicative
combinators.

(If you do want to support the type checker only generating requests
for an Applicative constraint you could just insist that user code
writes "pure" instead of "return", in which case this would be quite
easy to implement)

There are still some interesting cases in my proposal. For example, if
you have my second example:

x <- computation1
y <- computation2
z <- computation3 y
computation4 x

You might reasonably "reassociate" computation2 and computation3
together and desugar this to:

liftA2 (,) computation1 (computation2 >>= \y -> computation3 y) >>= \(x,
_z) -> computation4 x

But currently I desugar to:

liftA2 (,) computation1 computation2 >>= \(x, y) -> computation3 y *> computation4 x

It wouldn't be too hard (and perhaps a nice exercise) to modify the
desugaring to do this reassociation.

Max

Re: Smarter do notation

Max Bolingbroke-2
On 5 September 2011 08:35, Max Bolingbroke <[hidden email]> wrote:
> (If you do want to support the type checker only generating requests
> for an Applicative constraint you could just insist that user code
> writes "pure" instead of "return", in which case this would be quite
> easy to implement)

I take back this parenthetical remark. Using pure instead of return
only solves the most boring 1/2 of the problem :-)

Using the Applicative methods to optimise "do" desugaring is still
possible, it's just not that easy to have that weaken the generated
constraint from Monad to Applicative since only degenerate programs
like this one won't use a Monad method:

do computation1
   computation2
   computation3

Max

Re: Smarter do notation

Sebastian Fischer-2
Hi Max, 

thanks for your proposal!

> Using the Applicative methods to optimise "do" desugaring is still
> possible, it's just not that easy to have that weaken the generated
> constraint from Monad to Applicative since only degenerate programs
> like this one won't use a Monad method:

Is this still true once Monad is a subclass of Applicative, which defines return?

I'd still somewhat prefer it if return gets merged with the preceding statement, so that sometimes only a Functor constraint is generated, but I think I should adjust your desugaring then...

Sebastian


Re: Smarter do notation

Sebastian Fischer-2
Hi again,

I think the following rules capture what Max's program does if applied after the usual desugaring of do-notation:

a >>= \p -> return b
 -->
(\p -> b) <$> a

a >>= \p -> f <$> b -- 'free p' and 'free b' disjoint
 -->
((\p -> f) <$> a) <*> b

a >>= \p -> f <$> b -- 'free p' and 'free f' disjoint
 -->
f <$> (a >>= \p -> b)

a >>= \p -> b <*> c -- 'free p' and 'free c' disjoint
 -->
(a >>= \p -> b) <*> c

a >>= \p -> b >>= \q -> c -- 'free p' and 'free b' disjoint
 -->
liftA2 (,) a b >>= \(p,q) -> c

a >>= \p -> b >> c -- 'free p' and 'free b' disjoint
 -->
(a << b) >>= \p -> c

The second and third rule overlap and should be applied in this order. 'free' gives all free variables of a pattern 'p' or an expression 'a','b','c', or 'f'.

If return, >>, and << are defined in Applicative, I think the rules also achieve the minimal necessary class constraint for Thomas's examples that do not involve aliasing of return.
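Applied by hand to Dan's original block (a sketch, I haven't checked this mechanically), the rules turn

    action1 >>= \x -> action2 >>= \y -> action3 >>= \z -> return (f x y z)

into

    (\x y z -> f x y z) <$> action1 <*> action2 <*> action3

i.e. essentially f <$> action1 <*> action2 <*> action3, with no Monad methods left over.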

Sebastian

Re: Smarter do notation

Thomas Schilling-2

On 5 September 2011 13:41, Sebastian Fischer <[hidden email]> wrote:
> Hi again,
>
> I think the following rules capture what Max's program does if applied after the usual desugaring of do-notation:
>
> a >>= \p -> return b
>  -->
> (\p -> b) <$> a
>
> a >>= \p -> f <$> b -- 'free p' and 'free b' disjoint
>  -->
> ((\p -> f) <$> a) <*> b

Will there also be an optimisation for some sort of simple patterns?  I.e., where we could rewrite this to: 

  liftA2 (\pa pb -> f ...) a b

I think I remember someone saying that the one-at-a-time application of <*> inhibits certain optimisations.
 

> a >>= \p -> f <$> b -- 'free p' and 'free f' disjoint
>  -->
> f <$> (a >>= \p -> b)
>
> a >>= \p -> b <*> c -- 'free p' and 'free c' disjoint
>  -->
> (a >>= \p -> b) <*> c
>
> a >>= \p -> b >>= \q -> c -- 'free p' and 'free b' disjoint
>  -->
> liftA2 (,) a b >>= \(p,q) -> c
>
> a >>= \p -> b >> c -- 'free p' and 'free b' disjoint
>  -->
> (a << b) >>= \p -> c

I find (a << b) confusing.  The intended semantics seem to be "effect a", then "effect b", return result of "a".  That doesn't seem intuitive to me because it contradicts with the effect ordering of (=<<) which reverses the effect ordering of (>>=).  We already have (<*) and (*>) for left-to-right effect ordering and pointed result selection.  I understand that (>>) = (*>) apart from the Monad constraint, but I would prefer not to have (<<) = (<*). 

 

--
Push the envelope. Watch it bend.

Re: Smarter do notation

Sebastian Fischer-2
On Mon, Sep 5, 2011 at 10:19 PM, Thomas Schilling <[hidden email]> wrote:
>> a >>= \p -> f <$> b -- 'free p' and 'free b' disjoint
>>  -->
>> ((\p -> f) <$> a) <*> b
>
> Will there also be an optimisation for some sort of simple patterns?  I.e., where we could rewrite this to:
>
>   liftA2 (\pa pb -> f ...) a b
>
> I think I remember someone saying that the one-at-a-time application of <*> inhibits certain optimisations.

liftA2 is defined via one-at-a-time application and cannot be redefined because it is not a method of Applicative. Do you remember more details?
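(For reference, liftA2 is essentially

    liftA2 f a b = f <$> a <*> b

in Control.Applicative, so it has the same one-argument-at-a-time shape as writing <*> by hand.)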
 
> I find (a << b) confusing.  The intended semantics seem to be "effect a", then "effect b", return result of "a".

Sorry, I didn't know that << doesn't exist. I meant an operator with the meaning of <* .

Sebastian

Re: Smarter do notation

Alberto G. Corona
In reply to this post by Sebastian Fischer-2
The obstacle to parallel execution of monadic computations that could have been Applicative seems to be the >> operator.

But if Monad is defined as a subclass of Applicative:

http://www.haskell.org/haskellwiki/Functor-Applicative-Monad_Proposal

then ">>" can be defined as   (>>) =   (*>)  and parallelization should be pòssible ?

Alberto
