Implementing Strict Core


Implementing Strict Core

Johan Tibell-2
Hi Simon and Max,

I've been thinking about Strict Core quite a bit lately. I'd like to take a
stab at implementing it in GHC, if feasible. My motivations for
implementing Strict Core are

   - generating faster code in a more robust manner, and
   - implementing strictness annotations at the type level in Haskell
   itself*.

Now, I don't know how much work this will be. What's your best time estimate
(say, if you would do it vs. if I would do it)?

My plan would roughly be as follows:

   1. Create a branch to develop on.
   2. Temporarily delete all the non-essential Core2Core transformations.
   This will let us get to a place where we have a functional compiler using
   Strict Core more quickly.
   3. Change CoreSyn.Expr to be Strict Core.
   4. Plumb the changes through the compiler until it compiles (and passes
   the tests) again.
   5. Once all tests pass, re-add all the Core2Core passes and make them
   use Strict Core.
   6. Rebase the commits so they can be merged nicely into the main branch.

How does that sound?

* I think this could be one of the most important changes in a long time to
help Haskell in the "real world". It gives us a better way to talk about
strictness than we have today, reducing time spent on chasing down space
leaks. Once we have strictness annotations in types, we could experiment with
a Strict language pragma to make a whole module call-by-value.
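
To make the motivation concrete, here is a small, runnable, present-day sketch: the strictness below lives only in the definition, via bang patterns, and is invisible in the type. Under the proposal the same fact could be stated in the signature instead (the `!Int` syntax in the comment is hypothetical, not an existing GHC feature):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Today: strictness is a property of the definition, not the type.
-- The accumulator is forced at each step only because of the bang
-- pattern; dropping it reintroduces a space leak without any change
-- to the type signature.
sumStrict :: [Int] -> Int
sumStrict = go 0
  where
    go !acc []     = acc
    go !acc (x:xs) = go (acc + x) xs

-- Hypothetical, with type-level strictness annotations:
--   go :: !Int -> [Int] -> Int

main :: IO ()
main = print (sumStrict [1 .. 1000])
```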

Cheers,
Johan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://www.haskell.org/pipermail/ghc-devs/attachments/20130502/41a3d9fb/attachment-0001.htm>


Implementing Strict Core

Erik de Castro Lopo-34
Johan Tibell wrote:

> I've been thinking about Strict Core quite a bit lately.

Same here. Ever since you posted a link to that paper on G+.

I've also been asking some questions of my local GHC internals guru Ben
Lippmeier. A couple of open questions Ben brought up are as follows:

* The paper seems to be based on the System Fw + algebraic data types
  (including existentials) based core, while core is now System Fc
  with equality constraints and coercions. How does that affect this
  project?

* Does anyone have any feel for whether the Core2Core transforms will
  be easier or more difficult to write with Strict Core?


Cheers,
Erik
--
----------------------------------------------------------------------
Erik de Castro Lopo
http://www.mega-nerd.com/



Implementing Strict Core

Carter Schonwald
In reply to this post by Johan Tibell-2
I can't speak for anyone else, but this would be interesting and valuable
for my own work too.

One question I have is what sort of assumptions/optimizations will have
to be different in strict vs. lazy code? What changes in the semantics of
doing fusion optimization? Does let lifting etc. change substantially? Do
strict variables not need the whole blackholing/thunk-computation
tracking that we need for lazy values (and would that result in any
improvements to performance)?




On Thu, May 2, 2013 at 12:30 PM, Johan Tibell <johan.tibell at gmail.com> wrote:

> Hi Simon and Max,
>
> I've been thinking about Strict Core quite a bit lately. I'd like to take
> a stab at implementing it in GHC, if feasible. My motivations for
> implementing Strict Core are
>
>    - generating faster code in a more robust manner, and
>    - implementing strictness annotations at the type level in Haskell
>    itself*.
>
> Now, I don't know how much work this will be. What's you best time
> estimate (say, if you would do it vs if I would do it)?
>
> My plan would roughly be as follows:
>
>    1. Create a branch to develop on.
>    2. Temporarily delete all the non-essential Core2Core transformations.
>    This will let us get to a place where we have a functional compiler using
>    Strict Core more quickly.
>    3. Change CoreSyn.Expr to be Strict Core.
>    4. Plumb the changes through the compiler until it compiles (and
>    passes the tests) again.
>    5. Once all tests pass, re-add all the Core2Core passes and make them
>    use Strict Core.
>    6. Rebase the commits so they can be merged nicely into the main
>    branch.
>
> How does that sound?
>
> * I think this could be one of the most important changes in a long time
> to help Haskell in the "real world". It gives us a better way to talk about
> strictness than we have today, reducing time spent on chasing down space
> leaks. One we have strictness annotations in type, we could experiment with
> a Strict language pragma to make a whole module call-by-value.
>
> Cheers,
> Johan


Implementing Strict Core

Johan Tibell-2
In reply to this post by Erik de Castro Lopo-34
On Thu, May 2, 2013 at 12:09 PM, Erik de Castro Lopo
<mle+hs at mega-nerd.com> wrote:

> I've also been asking some questions of my local GHC internals guru Ben
>  Lippmeier. A couple of open questions Ben brought are as follows:
>
> * The paper seems to be based on the System Fw + algebraic data types
>   (including existentials) based core, while core is now System Fc
>   with equality constraints and coercions. How does that affect this
>   project?
>

I'm afraid I don't know and I hope that someone smarter than me (Simon? :))
could comment on this.


> * Does anyone have any feel for whether the Core2Core transforms will
>   be easier or more difficult to write with Strict Core?
>

I don't know if it'll be more verbose or not, but the result should be
better since we have strictly more information available.


Implementing Strict Core

Johan Tibell-2
In reply to this post by Carter Schonwald
All good questions!

On Thu, May 2, 2013 at 12:48 PM, Carter Schonwald <
carter.schonwald at gmail.com> wrote:

> What changes in the semantics  of doing fusion optimization?
>

I think Duncan said that fusion is quite separate from the
strictness/laziness in the language.


Implementing Strict Core

Austin Seipp-4
In reply to this post by Johan Tibell-2
Hi Johan,

I cannot give an exact estimate on times because I'm not a true guru.
But my main worry is that without substantial and directed effort,
such an attempt would die rather quickly. In only the 4 years since
Types are Calling Conventions was published, Core has changed a lot
from my passenger-window-view, and there have been various overhauls
in several parts of the optimizers. It's unclear if those would
actually pose a problem, but it's just another thing to consider.

For another example that springs to mind, consider the Glorious New
Code Generator. Since its initial inception, the design, the
integration, and finally switching it to the default (just 2-4 months
ago!) took 3 or 4 years as well. Yet the number of times the
regular backend and C-- changed in theory/principle in this era was
relatively small, compared to Core. There was little momentum
elsewhere, so 'keeping up' was easy. Despite that, it was a
multi-pronged effort by several people and took years to get ready.

Core is far more volatile than C--, and will see far more radical
churn in the next 3 years as well, I'm sure. I suppose my direct point
is: the actual *momentum* needed to both do this, and keep up with
where GHC will be when you are done I think is tremendous without a
substantial amount of help and dedicated focus.

What I would like to know (and perhaps Max can chime in here) is: is
there any standalone implementation of Strict Core anywhere, worth
playing with and experimenting upon? If I remember correctly, the TACC
paper even leaves several important novel optimizations unspecified,
such as the various arity-raising and deep unboxing optimizations.
Implementing these will be another task, if they are to be a win.

If we had a standalone implementation, I think it would not only help
iron out some of the questions posed here like how the optimizations
port over, it would also be substantially easier to approach and
reason about (at the very least, due to not having to fight GHC so
much initially.) Finally, I think it would be possible to hook this
into GHC independently* to try it out as a prototype with little
effort.

Anyway, sorry for the ramblings. Again, I'm not the biggest guru here
by a million miles, but I have seen a lot of GHC's evolution in the
past, so I don't think I'm very far off the mark when I say it will be
a lot of work.

* In theory we can just do the exact same thing David Himmelstrup and
I did with LHC years ago, but less of a hack. We can simply
convert from GHC's core to a Strict Core we define independently. LHC
did this by parsing external core and converting to GRIN, but now we
have something better: Compiler plugins. I think it would be quite
possible to write a plugin that replaces the entire Core2Core pipeline
in the frontend with a "convert to StrictCore, optimize, convert back
to ordinary core" function.

This is very easy using plugins, and means we can purely see the win
from using Strict Core, and take the time to port optimizations and
see their effect. Ben L. also recently used a similar approach for
Repa4, using the DDC core language instead. Max B. himself also did
something similar in [1] using a core language that is both "simpler
and more expressive than GHC Core," but the details are either left to
the reader to decipher, or Max to explain...

[1] https://github.com/batterseapower/cps-core
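
As a sketch of what such a plugin might look like, here is a skeleton using the GHC plugin API of that era (`GhcPlugins`, `installCoreToDos`, `CoreDoPluginPass` are real; the `toStrictCore`/`optimiseStrictCore`/`fromStrictCore` functions are hypothetical placeholders for the actual conversion and optimisation passes one would have to write):

```haskell
module StrictCorePlugin (plugin) where

import GhcPlugins

plugin :: Plugin
plugin = defaultPlugin { installCoreToDos = install }

-- Replace the entire Core2Core pipeline with a single pass that
-- round-trips each module through (a hypothetical) Strict Core.
install :: [CommandLineOption] -> [CoreToDo] -> CoreM [CoreToDo]
install _opts _todos = return [CoreDoPluginPass "StrictCore" pass]

pass :: ModGuts -> CoreM ModGuts
pass guts = do
  let strict = toStrictCore (mg_binds guts)      -- hypothetical: Core -> Strict Core
      opt    = optimiseStrictCore strict         -- hypothetical: optimise there
  return guts { mg_binds = fromStrictCore opt }  -- hypothetical: and back again
```

Such a plugin would be enabled per-module with `-fplugin=StrictCorePlugin`, leaving the rest of GHC untouched.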

On Thu, May 2, 2013 at 11:30 AM, Johan Tibell <johan.tibell at gmail.com> wrote:

> Hi Simon and Max,
>
> I've been thinking about Strict Core quite a bit lately. I'd like to take a
> stab at implementing it in GHC, if feasible. My motivations for implementing
> Strict Core are
>
> generating faster code in a more robust manner, and
> implementing strictness annotations at the type level in Haskell itself*.
>
> Now, I don't know how much work this will be. What's you best time estimate
> (say, if you would do it vs if I would do it)?
>
> My plan would roughly be as follows:
>
> Create a branch to develop on.
> Temporarily delete all the non-essential Core2Core transformations. This
> will let us get to a place where we have a functional compiler using Strict
> Core more quickly.
> Change CoreSyn.Expr to be Strict Core.
> Plumb the changes through the compiler until it compiles (and passes the
> tests) again.
> Once all tests pass, re-add all the Core2Core passes and make them use
> Strict Core.
> Rebase the commits so they can be merged nicely into the main branch.
>
> How does that sound?
>
> * I think this could be one of the most important changes in a long time to
> help Haskell in the "real world". It gives us a better way to talk about
> strictness than we have today, reducing time spent on chasing down space
> leaks. One we have strictness annotations in type, we could experiment with
> a Strict language pragma to make a whole module call-by-value.
>
> Cheers,
> Johan



--
Regards,
Austin - PGP: 4096R/0x91384671



Implementing Strict Core

Don Stewart
In reply to this post by Johan Tibell-2
Fusion is more useful in a strict setting - it makes strict pipelines
compositional in performance.

On Thursday, 2 May 2013, Johan Tibell wrote:

> All good questions!
>
> On Thu, May 2, 2013 at 12:48 PM, Carter Schonwald <
> carter.schonwald at gmail.com> wrote:
>
>> What changes in the semantics  of doing fusion optimization?
>>
>
> I think Duncan said that fusion is quite separate from the
> strictness/laziness in the language.
>
>


Implementing Strict Core

Simon Peyton Jones
In reply to this post by Johan Tibell-2
Johan

I think this would be a very interesting thing to do.  But it would be a big project.  It is absolutely not routine work.


- Strict Core is not just a little wibble. It is a pretty big change.
  - It embraces multiple values. These aren't the same as unboxed tuples
    because they include types, thus <Int, 7, 'x'>.
  - It has multiple binders, in which early ones scope over later ones.
  - It has both 'let' and 'valrec'.
  - I'm not sure how ANF-ised it should be.
  - I am unsure about the interaction with user-written RULES.
- It needs to be extended with coercions and casts. I don't think this
  would be too hard.
- How do unboxed tuples fit in?
- I'm not certain whether delay/force should be done via encoding (as the
  paper suggests) or expressed directly.
- Programs would get bigger.

So it would force re-examination of many things.  In effect we'd go back three steps in order to position ourselves to move forward four or more.  The benefits would, I think, ultimately be substantial, but it'd take a while for them to manifest themselves.

If you wanted to undertake it I'd be happy to play an advisory role.  It's really hard to guess how long it'd take.  To get to something that stumbles along: weeks not days.  To get to something that routinely out-performs the baseline: months not weeks.

Step 0 should be to write a detailed specification, based on the paper, so that we all understand the language you are implementing. Richard's docs/core-spec/ is a model.

Simon

From: Johan Tibell [mailto:johan.tibell at gmail.com]
Sent: 02 May 2013 17:30
To: Simon Peyton-Jones; Max Bolingbroke
Cc: ghc-devs at haskell.org
Subject: Implementing Strict Core

Hi Simon and Max,

I've been thinking about Strict Core quite a bit lately. I'd like to take a stab at implementing it in GHC, if feasible. My motivations for implementing Strict Core are

  *   generating faster code in a more robust manner, and
  *   implementing strictness annotations at the type level in Haskell itself*.
Now, I don't know how much work this will be. What's you best time estimate (say, if you would do it vs if I would do it)?

My plan would roughly be as follows:

  1.  Create a branch to develop on.
  2.  Temporarily delete all the non-essential Core2Core transformations. This will let us get to a place where we have a functional compiler using Strict Core more quickly.
  3.  Change CoreSyn.Expr to be Strict Core.
  4.  Plumb the changes through the compiler until it compiles (and passes the tests) again.
  5.  Once all tests pass, re-add all the Core2Core passes and make them use Strict Core.
  6.  Rebase the commits so they can be merged nicely into the main branch.
How does that sound?

* I think this could be one of the most important changes in a long time to help Haskell in the "real world". It gives us a better way to talk about strictness than we have today, reducing time spent on chasing down space leaks. One we have strictness annotations in type, we could experiment with a Strict language pragma to make a whole module call-by-value.

Cheers,
Johan



Implementing Strict Core

Ben Lippmeier-2
In reply to this post by Austin Seipp-4

On 03/05/2013, at 6:26 AM, Austin Seipp wrote:

> This is very easy using plugins, and means we can purely see the win
> from using Strict Core, and take the time to port optimizations and
> see their effect. Ben L. also recently used a similar approach for
> Repa4, using the DDC core language instead.

I'd also highly (highly) recommend using the plugin approach instead of simply forking GHC, deleting chunks of code from it, and then trying to get it to compile again.

Just producing a plugin that converts existing GHC core to Strict core and back again *for all programs in the testsuite* will iron out a boatload of problems in both your prototype implementation and the full language design (with coercions, primops, state threading hacks etc). You can slurp in chunks of existing GHC code into the plugin, and then change them there.

I expect that fixing the parser / desugarer / pretty printer / type checker / transforms etc for Strict Core will take much longer than simply doing the conversion between it and GHC core.

Ben.





Implementing Strict Core

Johan Tibell-2
Hi all,

Thanks for the input. Given that it'd probably take me weeks or months to
get a production-ready implementation, I'm probably going to punt on this
and hope MSR gets an intern to do it some day. :)

-- Johan


Implementing Strict Core

Gábor Lehel
In reply to this post by Johan Tibell-2
On Thu, May 2, 2013 at 6:30 PM, Johan Tibell <johan.tibell at gmail.com> wrote:

>
>    - implementing strictness annotations at the type level in Haskell
>    itself*.
>
>
> * I think this could be one of the most important changes in a long time
> to help Haskell in the "real world". It gives us a better way to talk about
> strictness than we have today, reducing time spent on chasing down space
> leaks. One we have strictness annotations in type, we could experiment with
> a Strict language pragma to make a whole module call-by-value.
>
>
FWIW, have you done any thinking about the "strictness types" part of the
whole thing? Has anyone? I'd be quite interested in it if so. I mean
"Haskell has problems keeping track of strictness/laziness, Haskell has a
great type system => let's put strictness in the type system" seems logical
enough, but does it actually work out?

The basic idea -- stick a ! on a type to get a strict version of it --
looks simple, but after trying to think it through further, it doesn't seem
as simple as it seems. The main questions/issues I've encountered in my
undereducated speculation:

- If `!a` and `a` are completely separate types, you would have to put
explicit conversions everywhere, which is unpleasant. If `!a` and `a` are
the same type, the compiler can't prevent them from getting mixed up, which
is useless. So I think you would want `!a` to be a subtype of `a`: `a` is
inhabited by both evaluated and unevaluated values, `!a` only by evaluated
ones. Where `a` is expected, `!a` is also accepted. But GHC's type system
doesn't have subtyping, for (I'm told) good reasons. Could it, for this
specific case?

  - As a consequence of the subtype relationship, you would also have to
track the co/contra/invariance of type parameters, in order to determine
whether `f !a` is a subtype of `f a`, or vice versa.

- If you try to pass an `a` where an `!a` is expected, should it be a type
error, or should the compiler automatically insert code to evaluate it?
What about `f a` and `f !a`? How could the compiler possibly know how to
evaluate the `a`s inside of a structure?
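
One way to experiment with these semantics today is to model `!a` as an abstract wrapper whose only constructor function forces its argument; the compiler-inserted conversion asked about above would then behave like `toStrict` below. This is a runnable sketch: `Strict`, `toStrict`, and `fromStrict` are made-up names for illustration, not part of any proposal.

```haskell
import Control.Exception (SomeException, evaluate, try)

-- A stand-in for the hypothetical type !a: toStrict is the only way
-- in, so any value built with it has been forced to WHNF.
newtype Strict a = Strict a

toStrict :: a -> Strict a
toStrict x = x `seq` Strict x   -- the coercion a -> !a inserts a seq

fromStrict :: Strict a -> a     -- the coercion !a -> a costs nothing
fromStrict (Strict x) = x

main :: IO ()
main = do
  -- Converting a defined value succeeds...
  print (fromStrict (toStrict (1 + 2 :: Int)))
  -- ...while converting bottom fails at the conversion point, not later.
  r <- try (evaluate (toStrict (undefined :: Int)))
         :: IO (Either SomeException (Strict Int))
  putStrLn (either (const "bottom detected at conversion")
                   (const "unexpectedly evaluated") r)
```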

- Strict types in the return type position don't make any sense. This makes
taking a value, evaluating it somehow, and then having that fact show up in
its type difficult.

  - As far as I can tell, "strict types" and "unlifted types" are the same
thing. The former is guaranteed to be in WHNF, while the latter is
guaranteed not to be bottom. I don't see a difference there. Perhaps the
second formulation has seen more research?

Or would it be a different kind of scheme that doesn't rest on `!a` being a
subtype of `a`? (Especially the thing with the return types makes me think
it might not be the best idea.) But how would it look?

--
Your ship was destroyed in a monadic eruption.


Implementing Strict Core

Sturdy, Ian
Further largely uneducated thoughts on type-level strictness annotations:

 - I would imagine that it would work somewhat like uniqueness types in Clean, where `!a` is not a separate type from (or even a subtype of) `a`, but rather `a` itself with the type modifier `!`. I am not certain that all the UNQ semantics carry over, but that is where I would start.

 - I think that `f !a` would be a lazy spine with strict elements, so that a function with an input of type `[!a]` could be passed a lazy list, but with each element evaluated when the spine is evaluated that far rather than when the value itself becomes needed. Consider the rather pointless function

    f :: [a] -> Maybe a
    f (_:e:_) = Just e
    f _       = Nothing

`f [undefined,1]` evaluates just fine, because even though the spine is evaluated past the first element, that element is not needed. But if we change the signature to `f :: [!a] -> Maybe a`, when evaluating the spine past that element the compiler would evaluate the element and throw the undefined. I see no reason why this would be a problem to implement, but that could be my ignorance.

 - While technically speaking a "strict type" does not make sense in the return, an evaluated type does, and it is my feeble understanding that being strict in an argument is equivalent to requiring evaluation of the argument before evaluating the function. So if we redefine `!a` from "strict a" to "evaluated a", it does what we want in argument positions while also making sense in returns, so that we can have `seq' :: a -> !a`. But as I think about it, while that makes sense, it is also pointless: if I have `f :: a -> Int; f = const 0`, `f . seq'` will happily never evaluate its argument, since it will never try to evaluate `seq'`. In a strict functional language, evaluation is only forced when something is passed as a function parameter; I see no need to show evaluation in returns because it will be overridden by laziness in the function application itself.
________________________________
From: ghc-devs-bounces at haskell.org [ghc-devs-bounces at haskell.org] on behalf of Gábor Lehel [illissius at gmail.com]
Sent: Monday, May 06, 2013 7:14 PM
To: Johan Tibell
Cc: Max Bolingbroke; ghc-devs at haskell.org
Subject: Re: Implementing Strict Core



On Thu, May 2, 2013 at 6:30 PM, Johan Tibell <johan.tibell at gmail.com> wrote:

  *   implementing strictness annotations at the type level in Haskell itself*.

* I think this could be one of the most important changes in a long time to help Haskell in the "real world". It gives us a better way to talk about strictness than we have today, reducing time spent on chasing down space leaks. One we have strictness annotations in type, we could experiment with a Strict language pragma to make a whole module call-by-value.


FWIW, have you done any thinking about the "strictness types" part of the whole thing? Has anyone? I'd be quite interested in it if so. I mean "Haskell has problems keeping track of strictness/laziness, Haskell has a great type system => let's put strictness in the type system" seems logical enough, but does it actually work out?

The basic idea -- stick a ! on a type to get a strict version of it -- looks simple, but after trying to think it through further, it doesn't seem as simple as it seems. The main questions/issues I've encountered in my undereducated speculation:

- If `!a` and `a` are completely separate types, you would have to put explicit conversions everywhere, which is unpleasant. If `!a` and `a` are the same type, the compiler can't prevent them from getting mixed up, which is useless. So I think you would want `!a` to be a subtype of `a`: `a` is inhabited by both evaluated and unevaluated values, `!a` only by evaluated ones. Where `a` is expected, `!a` is also accepted. But GHC's type system doesn't have subtyping, for (I'm told) good reasons. Could it, for this specific case?

  - As a consequence of the subtype relationship, you would also have to track the co/contra/invariance of type parameters, in order to determine whether `f !a` is a subtype of `f a`, or vice versa.

- If you try to pass an `a` where an `!a` is expected, should it be a type error, or should the compiler automatically insert code to evaluate it? What about `f a` and `f !a`? How could the compiler possibly know how to evaluate the `a`s inside of a structure?

- Strict types in the return type position don't make any sense. This makes taking a value, evaluating it somehow, and then having that fact show up in its type difficult.

  - As far as I can tell, "strict types" and "unlifted types" are the same thing. The former is guaranteed to be in WHNF, while the latter is guaranteed not to be bottom. I don't see a difference there. Perhaps the second formulation has seen more research?

Or would it be a different kind of scheme that doesn't rest on `!a` being a subtype of `a`? (Especially the thing with the return types makes me think it might not be the best idea.) But how would it look?

--
Your ship was destroyed in a monadic eruption.


Implementing Strict Core

Johan Tibell-2
In reply to this post by Gábor Lehel
Hi,

All very good questions. Here's my current thinking on the matter:

`a` and `!a` are compatible types. The compiler will insert the `seq`s
where needed. I don't think this is useless.

First, it serves as good documentation. Documenting strictness properties,
e.g. as we do in the container library today, is awkward. Strictness
annotations on types would make the documentation precise and succinct.

Second, it's a bit more declarative than having to manually put bang
patterns at the right places in the function definition and having to
remember (and teach beginners) how to do that correctly.

Third, it might help us generate better code, especially if we also
implement Strict Core.

I believe this is analogous to what Scala does for its by-name
parameters, whose types are preceded by a `=>` annotation.

Cheers,
Johan


Implementing Strict Core

Gábor Lehel
In reply to this post by Sturdy, Ian
On Tue, May 7, 2013 at 4:02 PM, Sturdy, Ian <sturdyi12 at mail.wlu.edu> wrote:

>  Further largely uneducated thoughts on type-level strictness annotations:
>
>  - I would imagine that it would work somewhat like uniqueness types in
> Clean, where `!a` is not a separate type from (or even a subtype of) `a`,
> but rather `a` itself with the type modifier `!`. I am not certain that all
> the UNQ semantics carry over, but that is where I would start.
>

Interesting. Are strict types and uniqueness types in Clean related to each
other (or the same thing)? I had the impression that they were separate.
I've also had the thought that perhaps we should look into how Clean does
things, but I haven't been able to find much discussion or documentation
about it.


>
>  - I think that `f !a` would be a lazy spine with strict elements, so that
> a function with an input of type `[!a]` could be passed a lazy list, but
> with each element evaluated when the spine is evaluated that far rather
> than when the value itself becomes needed. Consider the rather pointless
> function
>
>     f :: [a] -> Maybe a
>     f (_:e:_) = Just e
>     f _         = Nothing
>
> `f [undefined,1]` evaluates just fine, because even though the spine is
> evaluated past the first element, that element is not needed. But if we
> change the signature to `f :: [!a] -> Maybe a`, when evaluating the spine
> past that element the compiler would evaluate the element and throw the
> undefined. I see no reason why this would be a problem to implement, but
> that could be my ignorance.
>

That brings up a good point. I had been thinking that if you have a
function [!Int] -> Int, and you apply it to an argument that's [Int], then
all the Ints in the list should be evaluated before calling the function,
to match its expectations. But I see now that that's **clearly** wrong:
Haskell is still a call-by-need language, and the list is still spine-lazy.
As you say, the individual Ints should be forced when the list node
containing them is forced. If I'm thinking right it's as if we had (using
present Haskell)

foo :: [a] -> [a]
foo [] = []
foo (x:xs) = x `seq` x:(foo xs)

and the compiler would automatically insert `foo` wherever a translation
from [a] to [!a] was needed.
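
Both behaviours can be checked today by writing that coercion by hand. In this runnable sketch, `forceElems` plays the role of the hypothetical automatic `[a] -> [!a]` conversion, and `f` is the example function from upthread:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- The hand-written coercion from [a] to [!a]: still spine-lazy, but
-- each element is forced when the cons cell holding it is forced.
forceElems :: [a] -> [a]
forceElems []     = []
forceElems (x:xs) = x `seq` x : forceElems xs

f :: [a] -> Maybe a
f (_:e:_) = Just e
f _       = Nothing

main :: IO ()
main = do
  -- Lazy list: the undefined element is never needed.
  print (f [undefined, 1 :: Int])
  -- "[!Int]" version: forcing the spine past undefined forces it too.
  r <- try (evaluate (f (forceElems [undefined, 1 :: Int])))
         :: IO (Either SomeException (Maybe Int))
  putStrLn (either (const "undefined forced") show r)
```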

But the question I posed still basically applies: How in the world can the
compiler know what `foo` should look like for *any* type? Say, `IO !a`? Or
`forall a. Typeable !a`? etc.


>  - While technically speaking a "strict type" does not make sense in the
> return, an evaluated type does, and it is my feeble understanding that
> being strict in an argument is equivalent to requiring evaluation of the
> argument before evaluating the function. So if we redefine `!a` from
> "strict a" to "evaluated a", it does what we want in argument positions
> while also making sense in returns, so that we can have `seq' :: a -> !a`.
> But as I think about it, while that makes sense, it is also pointless: if I
> have `f :: a -> Int; f = const 0`, `f . seq'` will happily never evaluate
> its argument, since it will never try to evaluate `seq'`. In a strict
> functional language, evaluation is only forced when something is passed as
> a function parameter; I see no need to show evaluation in returns because
> it will be overridden by laziness in the function application itself.
>

Yep, that's what I was referring to. Thanks for the example :)



>  ------------------------------
> *From:* ghc-devs-bounces at haskell.org [ghc-devs-bounces at haskell.org] on
> behalf of Gábor Lehel [illissius at gmail.com]
> *Sent:* Monday, May 06, 2013 7:14 PM
> *To:* Johan Tibell
> *Cc:* Max Bolingbroke; ghc-devs at haskell.org
> *Subject:* Re: Implementing Strict Core
>
>
>
> On Thu, May 2, 2013 at 6:30 PM, Johan Tibell <johan.tibell at gmail.com> wrote:
>
>>
>>    - implementing strictness annotations at the type level in Haskell
>>    itself*.
>>
>>
>>  * I think this could be one of the most important changes in a long
>> time to help Haskell in the "real world". It gives us a better way to talk
>> about strictness than we have today, reducing time spent on chasing down
>> space leaks. One we have strictness annotations in type, we could
>> experiment with a Strict language pragma to make a whole module
>> call-by-value.
>>
>>
>  FWIW, have you done any thinking about the "strictness types" part of
> the whole thing? Has anyone? I'd be quite interested in it if so. I mean
> "Haskell has problems keeping track of strictness/laziness, Haskell has a
> great type system => let's put strictness in the type system" seems logical
> enough, but does it actually work out?
>
>  The basic idea -- stick a ! on a type to get a strict version of it --
> looks simple, but after trying to think it through further, it doesn't seem
> as simple as it seems. The main questions/issues I've encountered in my
> undereducated speculation:
>
>  - If `!a` and `a` are completely separate types, you would have to put
> explicit conversions everywhere, which is unpleasant. If `!a` and `a` are
> the same type, the compiler can't prevent them from getting mixed up, which
> is useless. So I think you would want `!a` to be a subtype of `a`: `a` is
> inhabited by both evaluated and unevaluated values, `!a` only by evaluated
> ones. Where `a` is expected, `!a` is also accepted. But GHC's type system
> doesn't have subtyping, for (I'm told) good reasons. Could it, for this
> specific case?
>
>    - As a consequence of the subtype relationship, you would also have to
> track the co/contra/invariance of type parameters, in order to determine
> whether `f !a` is a subtype of `f a`, or vice versa.
>
>  - If you try to pass an `a` where an `!a` is expected, should it be a
> type error, or should the compiler automatically insert code to evaluate
> it? What about `f a` and `f !a`? How could the compiler possibly know how
> to evaluate the `a`s inside of a structure?
>
>  - Strict types in the return type position don't make any sense. This
> makes taking a value, evaluating it somehow, and then having that fact show
> up in its type difficult.
>
>    - As far as I can tell, "strict types" and "unlifted types" are the
> same thing. The former is guaranteed to be in WHNF, while the latter is
> guaranteed not to be bottom. I don't see a difference there. Perhaps the
> second formulation has seen more research?
>
>  Or would it be a different kind of scheme that doesn't rest on `!a`
> being a subtype of `a`? (Especially the thing with the return types makes
> me think it might not be the best idea.) But how would it look?
>
>  --
> Your ship was destroyed in a monadic eruption.
>
> _______________________________________________
> ghc-devs mailing list
> ghc-devs at haskell.org
> http://www.haskell.org/mailman/listinfo/ghc-devs
>
>




Implementing Strict Core

Gábor Lehel
In reply to this post by Johan Tibell-2
On Tue, May 7, 2013 at 6:38 PM, Johan Tibell <johan.tibell at gmail.com> wrote:

> Hi,
>
> All very good questions. Here's my current thinking on the matter:
>
> `a` and `!a` are compatible types. The compiler will insert the `seq`s
> where needed. I don't think this is useless.
>
> First, it serves as good documentation. Documenting strictness properties,
> e.g. as we do in the container library today, is awkward. Strictness
> annotations on types would make the documentation precise and succinct.
>
> Second, it's a bit more declarative than having to manually put bang
> patterns in the right places in the function definition and having to
> remember (and teach beginners) how to do that correctly.
>
> Third, it might help us generate better code, especially if we also
> implement Strict Core.
>

Right. I completely agree. The idea is attractive to me for the same
reasons. My questions basically relate to: how would you implement it?

Right now GHC's type checker works by solving type equality constraints.
Type equality is symmetric, and implies that the two types are
substitutable for each other in all contexts: there's no concept of "mixing
them up", because there's nothing to get mixed up, they are the same. But
`a` and `!a` aren't like that. You very much want to avoid mixing them up,
so you can insert `seq`s in the right places. But how would that mesh with
the present system?

Would you allow strictness annotations on type arguments in arbitrary
positions ([!Int] and so forth)? That leads to the issues regarding "how
could the compiler know how to strict-ify arbitrary types", and/or
co/contra/invariance of type parameters.


>
> I believe this is analogous to what Scala does for their by-value
> parameters, whose types are preceded by a => annotation.
>

Looks like it's the reverse actually: http://stackoverflow.com/a/6297466


>
> Cheers,
> Johan
>
>
>


--
Your ship was destroyed in a monadic eruption.


Implementing Strict Core

Johan Tibell-2
On Tue, May 7, 2013 at 1:00 PM, Gábor Lehel <illissius at gmail.com> wrote:

> On Tue, May 7, 2013 at 6:38 PM, Johan Tibell <johan.tibell at gmail.com> wrote:
>
>> `a` and `!a` are compatible types. The compiler will insert the `seq`s
>> where needed. I don't think this is useless.
>>
>> First, it serves as good documentation. Documenting strictness
>> properties, e.g. as we do in the container library today, is awkward.
>> Strictness annotations on types would make the documentation precise and
>> succinct.
>>
>> Second, it's a bit more declarative than having to manually put bang
>> patterns in the right places in the function definition and having to
>> remember (and teach beginners) how to do that correctly.
>>
>> Third, it might help us generate better code, especially if we also
>> implement Strict Core.
>>
>
> Right. I completely agree. The idea is attractive to me for the same
> reasons. My questions basically relate to: how would you implement it?
>

A function:

    f :: !a -> ...

would be implemented as

    f !x = ...

i.e. the body of f would start with a `seq` on `x`.

If we had Strict Core we could do better and instead demand that the caller
does the evaluation, removing cases of duplicate re-evaluation and perhaps
exposing more unboxing.
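
As a runnable sketch, with a concrete `Int` in place of the `!a` (the proposed signature appears only in a comment, since today's GHC cannot parse it):

```haskell
{-# LANGUAGE BangPatterns #-}

-- Proposed:  f :: !Int -> Int
-- It would compile to what we write by hand today:
f :: Int -> Int
f !x = x + 1          -- bang pattern: x is forced on entry

-- ...which is itself sugar for an explicit `seq`:
f' :: Int -> Int
f' x = x `seq` (x + 1)

main :: IO ()
main = print (f 1, f' 1)   -- both force x before running the body
```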



> Right now GHC's type checker works by solving type equality constraints.
> Type equality is symmetric, and implies that the two types are
> substitutable for each other in all contexts: there's no concept of "mixing
> them up", because there's nothing to get mixed up, they are the same. But
> `a` and `!a` aren't like that. You very much want to avoid mixing them up,
> so you can insert `seq`s in the right places. But how would that mesh with
> the present system?
>

I honestly don't know. Possibly we could just erase the strictness
annotations after we have annotated `f` as being strict in its first
argument.


>  Would you allow strictness annotations on type arguments in arbitrary
> positions ([!Int] and so forth)? That leads to the issues regarding "how
> could the compiler know how to strict-ify arbitrary types", and/or
> co/contra/invariance of type parameters.
>

My current thinking is that we would only allow `!` at the "top-level".
Even if we defined a meaning for e.g. `f !a`, I don't know if it would be
very useful as we wouldn't be able to say things about specific fields. For
example, given

    data T a = C a a

should `T !a` mean that both fields are strict? What if we only want one to
be strict? What if the type is opaque to the user, how is he/she supposed
to know what `T !a` means without referring to implementation details? I
believe Scala has the same restriction.

The only thing I would regret if `!` is only allowed at the top-level is
that we couldn't reuse e.g. Haskell pairs for both lazy and strict pairs,
which would be neat.
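
For instance, a strict pair today has to be its own declaration; this `StrictPair` is illustrative, in the style of what strict container libraries define by hand:

```haskell
-- With `!` only at the top level of signatures, `(!a, !b)` is not
-- expressible, so a strict pair needs a separate type:
data StrictPair a b = SP !a !b   -- both components forced at construction

toLazy :: StrictPair a b -> (a, b)
toLazy (SP a b) = (a, b)

main :: IO ()
main = print (toLazy (SP (1 + 1 :: Int) (2 + 2 :: Int)))
```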


> Looks like it's the reverse actually:
> http://stackoverflow.com/a/6297466
>

I might have used the word "analogous" incorrectly. I meant: similar to
what Scala does, except that Scala is strict by-default and adds by-name
parameters through annotations.

-- Johan


Implementing Strict Core

Simon Peyton Jones
I don't know any way to make a compositional story about nested ! annotations in types.  As various of you have pointed out, it looks attractive and then runs out of steam.

Putting !'s on data constructor arguments works because it's not really part of the type at all; rather, it just says "give this constructor a wrapper that makes it strict".  I think one could make a similar story for functions in general, so that
          f :: !a -> a -> ![a] -> Int
          f a b c = e
would mean "add a wrapper to f that makes it strict"; thus
          f a b c = a `seq` c `seq` e
But it's not at all clear that this is worth the fuss.
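
Spelled out as runnable code, with `length c` standing in for the body `e` (the `!` signature appears only as a comment):

```haskell
-- Proposed:  f :: !a -> a -> ![a] -> Int
-- The wrapper forces the first and third arguments, then calls the
-- original lazy body; `fBody` is a placeholder for `e`:
fBody :: a -> a -> [a] -> Int
fBody _ _ c = length c

f :: a -> a -> [a] -> Int
f a b c = a `seq` c `seq` fBody a b c

main :: IO ()
main = print (f 'x' undefined "abc")  -- b is never forced, so this is fine
```

Note that `c `seq`` only evaluates the list to weak head normal form; the wrapper story says nothing about the elements, which is part of why nested `!` is hard to give a meaning.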


Strict Core is another matter.  As the paper says, it's an intermediate language that exposes more about what is evaluated and what is not, and how functions are curried.  As I said in an earlier reply, it would certainly be an interesting experiment to make GHC's intermediate language be Strict Core, but it would not be a routine exercise.  Fun to be had!

Simon

From: Johan Tibell [mailto:johan.tibell at gmail.com]
Sent: 07 May 2013 21:33
To: Gábor Lehel
Cc: Simon Peyton-Jones; Max Bolingbroke; ghc-devs at haskell.org
Subject: Re: Implementing Strict Core

On Tue, May 7, 2013 at 1:00 PM, Gábor Lehel <illissius at gmail.com<mailto:illissius at gmail.com>> wrote:
On Tue, May 7, 2013 at 6:38 PM, Johan Tibell <johan.tibell at gmail.com<mailto:johan.tibell at gmail.com>> wrote:
`a` and `!a` are compatible types. The compiler will insert the `seq`s where needed. I don't think this is useless.

First, it serves as good documentation. Documenting strictness properties, e.g. as we do in the container library today, is awkward. Strictness annotations on types would make the documentation precise and succinct.

Second, it's a bit more declarative than having to manually put bang patterns in the right places in the function definition and having to remember (and teach beginners) how to do that correctly.

Third, it might help us generate better code, especially if we also implement Strict Core.

Right. I completely agree. The idea is attractive to me for the same reasons. My questions basically relate to: how would you implement it?

A function:

    f :: !a -> ...

would be implemented as

    f !x = ...

i.e. the body of f would start with a `seq` on `x`.

If we had Strict Core we could do better and instead demand that the caller does the evaluation, removing cases of duplicate re-evaluation and perhaps exposing more unboxing.


Right now GHC's type checker works by solving type equality constraints. Type equality is symmetric, and implies that the two types are substitutable for each other in all contexts: there's no concept of "mixing them up", because there's nothing to get mixed up, they are the same. But `a` and `!a` aren't like that. You very much want to avoid mixing them up, so you can insert `seq`s in the right places. But how would that mesh with the present system?

I honestly don't know. Possibly we could just erase the strictness annotations after we have annotated `f` as being strict in its first argument.

Would you allow strictness annotations on type arguments in arbitrary positions ([!Int] and so forth)? That leads to the issues regarding "how could the compiler know how to strict-ify arbitrary types", and/or co/contra/invariance of type parameters.

My current thinking is that we would only allow `!` at the "top-level". Even if we defined a meaning for e.g. `f !a`, I don't know if it would be very useful as we wouldn't be able to say things about specific fields. For example, given

    data T a = C a a

should `T !a` mean that both fields are strict? What if we only want one to be strict? What if the type is opaque to the user, how is he/she supposed to know what `T !a` means without referring to implementation details? I believe Scala has the same restriction.

The only thing I would regret if `!` is only allowed at the top-level is that we couldn't reuse e.g. Haskell pairs for both lazy and strict pairs, which would be neat.

Looks like it's the reverse actually: http://stackoverflow.com/a/6297466

I might have used the word "analogous" incorrectly. I meant: similar to what Scala does, except that Scala is strict by-default and adds by-name parameters through annotations.

-- Johan



Implementing Strict Core

Johan Tibell-2
On Wed, May 8, 2013 at 1:14 AM, Simon Peyton-Jones
<simonpj at microsoft.com> wrote:
> I don't know any way to make a compositional story about nested !
> annotations in types.  As various of you have pointed out, it looks
> attractive and then runs out of steam.

Agreed!

> Putting !'s on data constructor arguments works because it's not really part
> of the type at all; rather, it just says "give this constructor a wrapper
> that makes it strict".  I think one could make a similar story for functions
> in general, so that
>
>           f :: !a -> a -> ![a] -> Int
>
>           f a b c = e
>
> would mean "add a wrapper to f that makes it strict"; thus
>
>           f a b c = a `seq` c `seq` e
>
> But it's not at all clear that this is worth the fuss.

I don't know either, but it might be worth experimenting with, if the
cost is not too high. This touches on more squishy usability issues
that are hard to get hard data on.

I recently ran a mini-survey (results to be published soon) where
people did indicate they spend more time on performance than
correctness. I'd like to address that issue, as I think it's important
for real world use of Haskell. I don't know the best way to do so. I
think the key is to help programmers deal with laziness in a more
effective manner. Having to think hard about laziness all the time or
risk getting a performance bug doesn't sound like a good idea to me*.

This is just one idea for tackling this problem. Another would be to
add a {-# LANGUAGE Strict #-} pragma that makes a module strict by
default, with laziness opted back in via e.g. ~x patterns.
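
A sketch of how such a module might read; the pragma's semantics here are what the paragraph above proposes, not a feature of the GHC of this thread:

```haskell
{-# LANGUAGE Strict #-}  -- proposed: everything strict by default

-- Both arguments behave as if written with bang patterns:
add :: Int -> Int -> Int
add x y = x + y

-- A ~ pattern opts a parameter back in to laziness:
lazyFst :: a -> b -> a
lazyFst x ~y = x

main :: IO ()
main = print (lazyFst (add 1 2) undefined)  -- y is never forced
```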

* After all, people could argue that we don't need higher level
languages at all and people should just "think harder" when writing C.
If we accept that's a bad argument for correctness I think we should
reject it as a way to deal with performance issues too.

-- Johan