Re: Monad of no `return` Proposal (MRP): Moving `return` out of `Monad`


Re: Breaking Changes and Long Term Support Haskell

Edward Kmett-2

On Thu, Oct 22, 2015 at 9:29 AM, Geoffrey Mainland <[hidden email]> wrote:
Thanks to you and Dan [1], I now have a greater understanding and
appreciation for where the committee has been coming from. My new
understanding is that the changes that were formalized in AMP, FTP, and
MRP were the basis for the committee's creation. It also seems that
there are more changes in the pipeline that have not yet been made into
proposals, e.g., pulling (>>) out of Control.Monad [2]. Part of
"stability" is signaling change as far ahead as possible. The committee
has put a lot of effort into this, which I appreciate! However, as each
of these proposals has come down the pipeline, I never realized that they
were part of a larger master plan.

The "master plan" where (>>) is concerned is that it'd be nice to get Traversable down to a minimal state and to eliminate unnecessary distinctions in the Prelude between things like mapM and traverse. Right now they have different type constraints, but this is entirely a historical artifact. But it causes problems, we have a situation where folks have commonly optimized (>>) but left (*>) unfixed. This yields different performance for mapM_ and traverse_. A consequence of the AMP is that the neither one of those could be defined in terms of the other (*>) has a default definition in terms of (<*>). (>>) has a default definition in terms of (>>=). With two places where optimizations can happen and two different definitions for operations that are logically required to be the same thing we can and do see rather radically different performance between these two things.

This proposal is something that was put out as a sort of addendum to the Monad of No Return proposal for discussion, but unlike MRP it has no particular impact on a sacred cow like return. We have yet to put together a timeline that incorporates the (>>) changes from MRP.

1) What is the master plan, and where is it documented, even if this
document is not up to the standard of a proposal? What is the final
target, and when might we expect it to be reached? What is in the
pipeline after MRP?

Relatedly, guidance on how to write code now so that it will be
compatible with future changes helps mitigate the stability issue.

The current plans more or less stop with finishing the MonadFail proposal, getting Semigroup in as a superclass of Monoid, and incorporating some additional members into Floating. The working document for the timeline going forward is available here:


2) How can I write code that makes use of the Prelude so that it will
work with every new GHC release over the next 3 years? 5 years? For
example, how can I write a Monad instance now, knowing the changes that
are coming, so that the instance will work with every new GHC release
for the next 3 years? 5 years? If the answer is "you can't," then when
might I be able to do such a thing? As of 8.4? 8.6? I'm embarrassed to
say I don't know the answer!

We have a backwards-facing "3 release policy" that says it should always be possible to write code that works backwards for 3 releases. This means that changes like moving fail out of Monad will take 5 years. However, maintaining both that and a _forward facing_ 3 release policy would mean that any change that introduced a superclass would require something like 9 years of intermediate states that make no sense on their own before it could complete. 9 years to move one method.

Now looking forward: you can write code today with 7.10 that will work without warnings until 8.2. That happens to be 3 releases. In 8.4 you'll start to get warnings about the Semigroup and MonadFail changes, but looking at it as 3 releases going forward, in 8.0 you can just write the instances and your code would be warning-free for 3 releases. In 8.6 those changes go into effect, but you will have been able to make the code changes needed to accommodate 8.6 since 8.0.

The current roadmap happens to give you a 3 year sliding window.
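
To make that concrete, here is a minimal sketch of the "write it once now" style of instance; the types M and Trace are hypothetical, and on GHCs before 8.0 Data.Semigroup comes from the semigroups package:

import Control.Monad (ap, liftM)
import Data.Semigroup (Semigroup (..))

newtype M a = M (Maybe a)

-- AMP/MRP-friendly: the real work lives in pure and (>>=); return is
-- only an alias, so nothing breaks if return later leaves the class.
instance Functor M where
  fmap = liftM

instance Applicative M where
  pure  = M . Just
  (<*>) = ap

instance Monad M where
  return = pure
  M Nothing  >>= _ = M Nothing
  M (Just a) >>= f = f a

-- Semigroup-first Monoid: mappend delegates to (<>), which is already
-- the shape the Semigroup-Monoid change will eventually require.
newtype Trace = Trace [String]

instance Semigroup Trace where
  Trace a <> Trace b = Trace (a ++ b)

instance Monoid Trace where
  mempty  = Trace []
  mappend = (<>)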

Finally, if none of these changes broke Prelude backwards compatibility,
far fewer people would be complaining :)

If none of our changes were ever able to break Prelude backwards compatibility, the same people who have been complaining about the utter lack of progress for the previous 17 years, frustration that nearly exploded the community 2 years ago, would still be complaining, and based on polling and discussions that is actually a much larger group. The AMP passed nearly unanimously.
 
Of course, we can't always make
progress without breaking things, but a more deliberative process might
offer an opportunity to make progress while still preserving backwards
compatibility. Take AMP for example. There were at least two [3] [4]
proposals for preserving backwards compatibility. Investigating them
would have taken time and delayed AMP, yes, but why the rush?

We've been talking about various superclass defaulting proposals for the better part of a decade and no progress has been made. The rush was that we'd been letting them block every previous discussion, and that the concrete plan with an actual implementation that was on hand was a very popular proposal even without that mitigation strategy.

3) Can we have a process that allows more deliberation over, and wider
publicity for, changes that break backwards compatibility? The goal of
such a process would not be to prevent change, but to allow more time to
find possible solutions to the issue of backwards compatibility.

My proposal for a low-traffic mailing list where all proposals were
announced was meant to provide wider publicity.

I don't think anybody has an objection to wider visibility of proposals that affect things mentioned in the Haskell Report.
 
Personally, I think these proposals do indeed fix a lot of warts in the
language. As a researcher who actively uses Haskell every day,
these warts have had approximately zero impact on me for the past
(almost) decade, and I would be perfectly content if they were never
fixed. The only pain I can recall enduring is having to occasionally
write an orphan Applicative instance. I have been importing Prelude
hiding mapM for years. I have been importing Control.Applicative for
years. Neither has been painful.

And yet the vast preponderance of public opinion lies in the other camp. The "change nothing" policy had an iron grip on the state of affairs for 17 years and there were serious cracks starting to form from the appearance that nothing could ever be fixed if the Prelude was affected in any way.

The only thing that broke with that was when Ian Lynagh unilaterally removed Eq and Show as superclasses of Num. That was more or less the first glimmer that the world wouldn't end if deliberated changes were made to the Prelude.

Dealing with AMP? I'm working on a
collaborative research project that is stuck on 7.8 because of AMP. I
agree, that seems silly, but whether or not it is silly, it is an impact
I feel.

What changes did you face beyond writing

-- assuming the existing instance Monad Foo defines return and (>>=);
-- liftM and ap come from Control.Monad
instance Functor Foo where
  fmap = liftM

instance Applicative Foo where
  pure = return
  (<*>) = ap
that is AMP related?

Maybe there are a lot of people who answer "yes" to both questions. I
would like to know! But does having return in the Monad class really
cause anyone anything other than existential pain?

The MRP is by far the most marginal proposal on the table. This is why it remains just a proposal and not part of the roadmap. That said, moving return to a top level definition will mean that more code that is compiled will be able to infer an Applicative constraint.

The other proposals that are on the roadmap, on the other hand, are much easier to defend.

The (>>) fragment of MRP fixes rampant performance regressions, however. We went to generalize the implementation of mapM_ to use (*>) internally and found performance regressions within base itself due to instances that are optimized inconsistently. This informed the design here. More code will infer with weaker Applicative constraints, Traversable can eventually be simplified, and folks like Simon Marlow, whose colleagues at Facebook use mapM, will just have their code "work" in Haxl. I can answer "yes" to both of your questions here.

The continued existence of fail in Monad, on the other hand, has caused a great deal of pain in instances for things like `Either a` for years. To supply `fail`, we used to incur a needless Error a constraint. We can be more precise and remove a potential source of partiality from a lot of code. I can answer "yes" to both of your questions here.
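
As a rough sketch of the old shape of things, using a made-up Result copy of Either so the instance does not clash with the one in base, and written against a GHC where fail is still a Monad method:

-- Stand-in for the old Error class from Control.Monad.Error.
class ErrorLike e where
  strMsg :: String -> e

newtype Result e a = Result (Either e a)

instance Functor (Result e) where
  fmap f (Result x) = Result (fmap f x)

instance Applicative (Result e) where
  pure = Result . Right
  Result (Left e)  <*> _ = Result (Left e)
  Result (Right f) <*> r = fmap f r

-- The ErrorLike constraint exists only so that fail has something to
-- return; it buys the Monad instance nothing else.
instance ErrorLike e => Monad (Result e) where
  return = pure
  Result (Left  l) >>= _ = Result (Left l)
  Result (Right r) >>= k = k r
  fail = Result . Left . strMsg

Once fail lives in its own MonadFail class, that constraint disappears, and only code that actually uses failing pattern matches in do-notation picks up a MonadFail obligation.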

The lack of Semigroup as a superclass of Monoid has meant that the Monoid instance for Maybe adds a unit to something that already has a unit. It means that First and Last, etc. all uselessly tack on an extra case that everyone has to handle. It has dozens of knock-on consequences. Much code that currently only needs a semigroup falls back on a monoid because of the lack of a proper class relationship, or gets duplicated. I can answer "yes" to both of your questions here.
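
A sketch of the difference, with a hypothetical Opt wrapper playing the role that Option plays in the semigroups package (NonEmpty and Data.Semigroup likewise come from that package on older GHCs):

import Data.List.NonEmpty (NonEmpty (..))
import Data.Semigroup (Semigroup (..))

newtype Opt a = Opt { getOpt :: Maybe a }

-- Requiring only Semigroup a is enough: Nothing is already the unit.
instance Semigroup a => Semigroup (Opt a) where
  Opt Nothing  <> b            = b
  a            <> Opt Nothing  = a
  Opt (Just x) <> Opt (Just y) = Opt (Just (x <> y))

instance Semigroup a => Monoid (Opt a) where
  mempty  = Opt Nothing
  mappend = (<>)

-- NonEmpty lists have (<>) but no unit of their own, yet they combine
-- under Opt without inventing one. The Prelude's Maybe instance would
-- instead demand a Monoid (NonEmpty Int) that doesn't exist.
combined :: Maybe (NonEmpty Int)
combined = getOpt (Opt (Just (1 :| [2])) <> mempty <> Opt (Just (3 :| [])))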

Without the numerics changes to Floating, Haskell numerics just have awful precision in places. Adding expm1, etc. to Floating means that people will be able to write decent numerical code without having to choose between generality (using exp from Floating that works everywhere) and accuracy. I can answer "yes" to both of your questions here.
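
A tiny runnable illustration of the precision at stake; the truncated Taylor series below is only a stand-in for a real expm1:

main :: IO ()
main = do
  let x = 1.0e-10 :: Double
  -- Going through exp and then subtracting 1 cancels away most of the
  -- significant digits; the true answer is about 1.00000000005e-10.
  print (exp x - 1)
  -- Keeping the computation away from the cancellation (here a crude
  -- truncated Taylor series standing in for expm1) retains them.
  print (x + x*x/2 + x*x*x/6)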

-Edward


Re: Breaking Changes and Long Term Support Haskell

Edward Kmett-2
In reply to this post by Geoffrey Mainland
On Thu, Oct 22, 2015 at 11:36 AM, Geoffrey Mainland <[hidden email]> wrote:
On 10/22/2015 11:02 AM, Matthias Hörmann wrote:
> I would say that the need to import Control.Applicative in virtually
> every module manually
> definitely caused some pain before AMP.

In this particular case, there is a trade off between breaking code on
the one hand and having to write some import statements on the other. I
find writing some extra imports less painful than breaking (other
people's and my) code, but the other position is defensible as well. I
sense that I am in the minority, at least on the libraries list.

> I would also argue that a
> non-negligible amount
> of effort goes into teaching the warts, the reasons for the warts and
> how to work around them.

Which wart(s) in particular? All of them? Does having return (and (>>))
in Monad make teaching more difficult?

Having (>>) means that we have hundreds of monads out there where (>>) has been optimized, but (*>) has not.  

If I were working alone, AMP wouldn't be a huge deal. I could fix the
code for 7.10 compatibility, but then unless everyone switches to 7.10,
changes to the codebase made by someone using 7.8, e.g., defining a new
Monad instance, could break things on 7.10 again. It's easier to stick
with 7.8. Any time spent dealing with compatibility issues is time not
spent writing actual code.

In the open source world many of us just fire off our code to travis-ci and get it to build with a dozen different compiler versions. I maintain a lot of code that supports things back to 7.0 and forward to HEAD this way.
 
I outlined one possible path to avoid this kind of issue: spend more
time thinking about ways to maintain compatibility. We had proposals for
doing this with AMP.

And on the other hand we also had a concrete proposal that didn't require language changes and that was ridiculously popular. People had been talking about Applicative as a superclass of Monad for a decade before we finally acted upon the AMP. People had been talking about superclass defaulting for a decade. When do you cut off discussion and ship the proposal that has overwhelming support? If there is no process that enables this, you can stall the process indefinitely by raising objections of this form. Such a situation is not without costs of its own.

-Edward
 
Cheers,
Geoff

>
> On Thu, Oct 22, 2015 at 3:29 PM, Geoffrey Mainland <[hidden email]> wrote:
>> On 10/22/2015 02:40 AM, Edward Kmett wrote:
>>> On Wed, Oct 21, 2015 at 8:42 PM, Gregory Collins
>>> <[hidden email] <mailto:[hidden email]>> wrote:
>>>
>>>
>>>     On Wed, Oct 21, 2015 at 3:18 PM, Geoffrey Mainland
>>>     <[hidden email] <mailto:[hidden email]>> wrote:
>>>
>>>         My original email stated my underlying concern: we are losing
>>>         valuable
>>>         members of the community not because of the technical
>>>         decisions that are
>>>         being made, but because of the process by which they are being
>>>         made.
>>>
>>>     [If] you're doing research you're on the treadmill, almost by
>>>     definition, and you're delighted that we're finally making some
>>>     rapid progress on fixing up some of the longstanding warts.
>>>
>>>     If you're a practitioner, you are interested in using Haskell for,
>>>     y'know, writing programs. You're probably in one of two camps:
>>>     you're in "green field" mode writing a lot of new code (early
>>>     stage startups, prototype work, etc), or you're
>>>     maintaining/extending programs you've already written that are out
>>>     "in the field" for you doing useful work. Laura Wingerd calls this
>>>     the "annealing temperature" of software, and I think this is a
>>>     nice metaphor to describe it. How tolerant you are of ecosystem
>>>     churn depends on what your temperature is: and I think it should
>>>     be obvious to everyone that Haskell having "success" for
>>>     programming work would mean that lots of useful and correct
>>>     programs get written, so everyone who is in the former camp will
>>>     cool over time to join the latter.
>>>
>>>
>>>     I've made the point before and I don't really want to belabor it:
>>>     our de facto collective posture towards breaking stuff, especially
>>>     in the past few years, has been extremely permissive, and this
>>>     alienates people who are maintaining working programs.
>>>
>>>
>>> Even among people who purported to be teaching Haskell or using
>>> Haskell today in industry the margin of preference for the concrete
>>> FTP proposal was ~79%. This was considerably higher than I expected in
>>> two senses. One: there were a lot more people who claimed to be in one
>>> of those two roles than I expected by far, and two: their appetite for
>>> change was higher than I expected. I initially expected to see a
>>> stronger "academic vs. industry" split in the poll, but the groups
>>> were only distinguishable by a few percentage point delta, so while I
>>> expected roughly the end percentage of the poll, based on the year
>>> prior I'd spent running around the planet to user group meetings and
>>> the like, I expected it mostly because I expected more hobbyists and
>>> less support among industrialists.
>>>
>>>     I'm actually firmly of the belief that the existing committee
>>>     doesn't really have process issues, and in fact, that often it's
>>>     been pretty careful to minimize the impact of the changes it wants
>>>     to make. As others have pointed out, lots of the churn actually
>>>     comes from platform libraries, which are out of the purview of
>>>     this group.
>>>
>>>
>>> Historically we've had a bit of a split personality on this front.
>>> Nothing that touches the Prelude had changed in 17 years. On the other
>>> hand the platform libraries had maintained a pretty heavy rolling wave
>>> of breakage the entire time I've been around in the community. On a
>>> more experimental feature front, I've lost count of the number of
>>> different things we've done to Typeable or template-haskell.
>>>
>>>
>>>     All I'm saying is that if we want to appeal to or cater to working
>>>     software engineers, we have to be a lot less cavalier about
>>>     causing more work for them, and we need to prize stability of the
>>>     core infrastructure more highly. That'd be a broader cultural
>>>     change, and that goes beyond process: it's policy.
>>>
>>>
>>> The way things are shaping up, we've had 17 years of rock solid
>>> stability, 1 release that incorporated changes that were designed to
>>> minimize impact, to the point that the majority of the objections
>>> against them are of the form where people would prefer that we broke
>>> _more_ code, to get a more sensible state. Going forward, it looks
>>> like the next 2 GHC releases will have basically nothing affecting the
>>> Prelude, and there will be another punctuation in the equilibrium
>>> around 8.4 as the next set of changes kicks in over 8.4 and 8.6 That
>>> gives 2 years worth of advance notice of pending changes, and a pretty
>>> strong guarantee from the committee that you should be able to
>>> maintain code with a 3 release window without running afoul of
>>> warnings or needing CPP.
>>>
>>> So, out of curiosity, what additional stability policy is it that you
>>> seek?
>> Thanks to you and Dan [1], I now have a greater understanding and
>> appreciation for where the committee has been coming from. My new
>> understanding is that the changes that were formalized in AMP, FTP, and
>> MRP were the basis for the committee's creation. It also seems that
>> there are more changes in the pipeline that have not yet been made into
>> proposals, e.g., pulling (>>) out of Control.Monad [2]. Part of
>> "stability" is signaling change as far ahead as possible. The committee
>> has put a lot of effort into this, which I appreciate! However, as each
>> of these proposal has come down the pipeline, I never realized that they
>> were part of a larger master plan.
>>
>> 1) What is the master plan, and where is it documented, even if this
>> document is not up to the standard of a proposal? What is the final
>> target, and when might we expect it to be reached? What is in the
>> pipeline after MRP?
>>
>> Relatedly, guidance on how to write code now so that it will be
>> compatible with future changes helps mitigate the stability issue.
>>
>> 2) How can I write code that makes use of the Prelude so that it will
>> work with every new GHC release over the next 3 years? 5 years? For
>> example, how can I write a Monad instance now, knowing the changes that
>> are coming, so that the instance will work with every new GHC release
>> for the next 3 years? 5 years? If the answer is "you can't," then when
>> might I be able to do such a thing? As of 8.4? 8.6? I'm embarrassed to
>> say I don't know the answer!
>>
>> Finally, if none of these changes broke Prelude backwards compatibility,
>> far fewer people would be complaining :) Of course, we can't always make
>> progress without breaking things, but a more deliberative process might
>> offer an opportunity to make progress while still preserving backwards
>> compatibility. Take AMP for example. There were at least two [3] [4]
>> proposals for preserving backwards compatibility. Investigating them
>> would have taken time and delayed AMP, yes, but why the rush?
>>
>> 3) Can we have a process that allows more deliberation over, and wider
>> publicity for, changes that break backwards compatibility? The goal of
>> such a process would not be to prevent change, but to allow more time to
>> find possible solution to the issue of backwards compatibility.
>>
>> My proposal for a low-traffic mailing list where all proposals were
>> announced was meant to provide wider publicity.
>>
>> Personally, I think these proposals do indeed fix a lot of warts in the
>> language. As a researcher who uses actively uses Haskell every day,
>> these warts have had approximately zero impact on me for the past
>> (almost) decade, and I would be perfectly content if they were never
>> fixed. The only pain I can recall enduring is having to occasionally
>> write an orphan Applicative instance. I have been importing Prelude
>> hiding mapM for years. I have been importing Control.Applicative for
>> years. Neither has been painful. Dealing with AMP? I'm working on a
>> collaborative research project that is stuck on 7.8 because of AMP. I
>> agree, that seems silly, but whether or not it is silly, it is an impact
>> I feel.
>>
>> One way to look at these proposals is to ask the question "Wouldn't the
>> language be nicer if all these changes were made?" Another is to ask the
>> question "Does the fact that these changes have not been made make your
>> life as a Haskell programmer more difficult in any significant way?" I
>> answer "yes" to the former and "no" to the latter. Is our stance that
>> answering "yes" to the former question is enough to motivate braking
>> change? Shouldn't a answer "no" to the latter question cause some
>> hesitation?
>>
>> Maybe there are a lot of people who answer "yes" to both questions. I
>> would like to know! But does having return in the Monad class really
>> cause anyone anything other than existential pain?
>>
>> Cheers,
>> Geoff
>>
>> [1] https://mail.haskell.org/pipermail/libraries/2015-October/026390.html
>> [2] https://mail.haskell.org/pipermail/libraries/2015-September/026158.html
>> [3] https://ghc.haskell.org/trac/ghc/wiki/InstanceTemplates
>> [4] https://ghc.haskell.org/trac/ghc/wiki/IntrinsicSuperclasses


Re: Breaking Changes and Long Term Support Haskell

Edward Kmett-2
In reply to this post by Mario Blažević
On Thu, Oct 22, 2015 at 12:20 PM, Mario Blažević <[hidden email]> wrote:
On 15-10-22 09:29 AM, Geoffrey Mainland wrote:
...

1) What is the master plan, and where is it documented, even if this
document is not up to the standard of a proposal? What is the final
target, and when might we expect it to be reached? What is in the
pipeline after MRP?

Relatedly, guidance on how to write code now so that it will be
compatible with future changes helps mitigate the stability issue.

        I have been fully in favour of all the proposals implemented so far, and I think that having an explicit master plan would be a great idea. It would address some of the process-related objections that have been raised, and it would provide a fixed long-term target that would be much easier to make the whole community aware of and contribute to.

        For that purpose, the master plan should be advertised directly on the front page of haskell.org. Once we have it settled and agreed, the purpose of the base-library committee would essentially become to figure out the details like the timeline and code migration path. One thing they wouldn't need to worry about is whether anybody disagrees with their goals.


2) How can I write code that makes use of the Prelude so that it will
work with every new GHC release over the next 3 years? 5 years? For
example, how can I write a Monad instance now, knowing the changes that
are coming, so that the instance will work with every new GHC release
for the next 3 years? 5 years? If the answer is "you can't," then when
might I be able to do such a thing? As of 8.4? 8.6? I'm embarrassed to
say I don't know the answer!

        From the discussions so far it appears that the answer for 3 years (or at least the next 3 GHC releases) would be to write the code that works with the current GHC and base, but this policy has not been codified anywhere yet. Knowing the upcoming changes doesn't help with making your code any more robust, and I think that's a shame. We could have a two-pronged policy:

- code that works and compiles with the latest GHC with no *warnings* will continue to work and compile with no *errors* with the following 2 releases, and
- code that also follows the forward-compatibility recommendations current for that version of GHC will continue to work and compile with no *errors* with the following 4 releases.

We have adopted a "3 release policy" facing backwards, not forwards. However, all proposals currently under discussion actually meet a stronger condition, a 3 release policy that you can slide both forward and backwards to pick the 3 releases you want to be compatible with without using CPP. It also appears that all of the changes that we happen to have in the wings

https://ghc.haskell.org/trac/ghc/wiki/Status/BaseLibrary

comply with both of your goals here. However, I hesitate to say that we can simultaneously meet this goal and the 3 release policy facing backwards _and_ sufficient notification in all situations, even ones we can't foresee today. As a guideline? Sure. If we have two plans that can reach the same end-goal and one complies and the other doesn't, I'd say we should favor the plan that gives more notice and assurance. However, this also needs to be tempered against the number of years folks suffer the pain of being in an inconsistent intermediate state (e.g. having generalized combinators in Data.List today).

        The forward-compatibility recommendations would become a part of the online GHC documentation so nobody complains they didn't know about them. Personally, I'd prefer if the recommendations were built into the compiler itself as a new class of warnings, but then (a) some people would insist on turning them on together with -Werror and then complain when their builds break and (b) this would increase the pressure on GHC implementors.
 
The current discussion is centering around adding a -Wcompat flag that warns about upcoming changes that you may not yet be able to accommodate in a way that stays backwards compatible with a 3 release backwards-facing window, but which will eventually cause issues.
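
If the flag lands under the name being discussed, opting a module in would presumably look something like this; -Wcompat is assumed here rather than something you can use in a released GHC today:

{-# OPTIONS_GHC -Wcompat #-}
-- Opt this module in to the forward-compatibility warnings without
-- forcing them on (or turning them into errors for) everyone else.
module MyModule where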

-Edward


Re: Breaking Changes and Long Term Support Haskell

Geoffrey Mainland
In reply to this post by Edward Kmett-2

>     I outlined one possible path to avoid this kind of issue: spend more
>     time thinking about ways to maintain compatibility. We had
>     proposals for
>     doing this with AMP.
>
>
> And on the other hand we also had a concrete proposal that didn't
> require language changes that was ridiculously popular. People had
> been talking about Applicative as a superclass of Monad for a decade
> before we finally acted upon the AMP. People had been talking about
> superclass defaulting for a decade. When do you cut off discussion and
> ship the proposal that has overwhelming support? If there is no
> process that enables this you can stall the process indefinitely by
> raising objections of this form. Such a situation is not without costs
> all its own.
>

I agree. It was certainly within the power of the committee to start a
clock and say something like "if we don't have a patch to GHC that
provides backwards compatibility for AMP within 1 year, we will push out
AMP as-is." Had I understand the implications of AMP at the time, or
even been aware that AMP was happening (I was actually actively working
on the GHC code base during that period), that certainly would have been
motivation for me to do something about it! *That* would be how one
could cut off discussion and ship a proposal.

I am not against changing the Prelude! But it sure would be nice if
-XHaskell98 gave me a Haskell 98 Prelude and -XHaskell2010 gave me a
Haskell 2010 Prelude, both of which could be used with external packages
that themselves used the more modern Prelude. Maybe that's impossible.
Setting a firm deadline to finding a solution to the compatibility issue
would have been a way to compromise. Ideally, changing the Prelude
wouldn't require breaking code written to use an older version of the
Prelude. Yes, attaining that goal would require more work.

Evolving the Prelude and maintaining compatibility are not necessarily
mutually exclusive options.

Cheers,
Geoff

Re: Breaking Changes and Long Term Support Haskell

Edward Kmett-2
On Thu, Oct 22, 2015 at 12:59 PM, Geoffrey Mainland <[hidden email]> wrote:

>     I outlined one possible path to avoid this kind of issue: spend more
>     time thinking about ways to maintain compatibility. We had
>     proposals for
>     doing this with AMP.
>
>
> And on the other hand we also had a concrete proposal that didn't
> require language changes that was ridiculously popular. People had
> been talking about Applicative as a superclass of Monad for a decade
> before we finally acted upon the AMP. People had been talking about
> superclass defaulting for a decade. When do you cut off discussion and
> ship the proposal that has overwhelming support? If there is no
> process that enables this you can stall the process indefinitely by
> raising objections of this form. Such a situation is not without costs
> all its own.
>

I agree. It was certainly within the power of the committee to start a
clock and say something like "if we don't have a patch to GHC that
provides backwards compatibility for AMP within 1 year, we will push out
AMP as-is." Had I understand the implications of AMP at the time, or
even been aware that AMP was happening (I was actually actively working
on the GHC code base during that period), that certainly would have been
motivation for me to do something about it! *That* would be how one
could cut off discussion and ship a proposal.

I freely admit that there is room for improvement in the process. We're all learning here.

The current Semigroup-Monoid proposal more or less fits the bill you are looking for here. We have a roadmap today that migrates an existing package with several years' worth of back support into base more or less unmodified, and then in 3 releases starts requiring instances. You can think of that 3 release clock as precisely what you are looking for here.

If we get an implementation of superclass defaulting or some other mechanism that can mitigate the extra couple of lines of code that this proposal will tax users with, within that timeline, we'd gladly incorporate it into the proposal.
 
I am not against changing the Prelude! But it sure would be nice if
-XHaskell98 gave me a Haskell 98 Prelude and -XHaskell2010 gave me a
Haskell 2010 Prelude, both of which could be used with external packages
that themselves used the more modern Prelude.

It would definitely be a preferable state of affairs. Unfortunately, at least with the tools available to us today, such a plan is incompatible with any plan that introduces a new superclass. It also cuts off plans that ever factor an existing class into two, such as the MonadFail proposals. We simply do not at this time have the technical capabilities that would support such a system. If they showed up in GHC we can adapt plans to fit.
 
Maybe that's impossible.
Setting a firm deadline to finding a solution to the compatibility issue
would have been a way to compromise. Ideally, changing the Prelude
wouldn't require breaking code written to use an older version of the
Prelude. Yes, attaining that goal would require more work.

We looked around for a year for a roadmap that would get us there. None presented itself. In the end we wound up shedding the core libraries status of the haskell98 and haskell2010 packages as the 3-4 different ways in which one could write a Haskell2010 package all have different trade-offs and can be maintained in user-land.

Examples:

* A hardline version of haskell2010 with a Monad and Num that fully complies with the report, but which doesn't work with Monad and Num instances supplied by other libraries. This needs RebindableSyntax, so it doesn't quite work right. With compiler support for rebinding syntax to a particular library instead of to whatever is in scope, such a thing might be suitable for teaching a Haskell class.

* A pragmatic haskell2010 where the Monad has an Applicative superclass and Num has the current semantic. This works with everything but doesn't faithfully follow the report. 

*  A middle-ground package that tries to use a superclass defaulting mechanism (which we don't have) to supply missing Applicative superclasses might resolve the Applicative-Monad issue in theory, but does nothing for report compliance of our existing Num.

Each one of these solutions has flaws. Two of them require innovations in the compiler that we don't have.
 
Evolving the Prelude and maintaining compatibility are not necessarily
mutually exclusive options.

Agreed, but as you can see above, maintaining compatibility isn't necessarily always a viable option either.

-Edward


Re: Breaking Changes and Long Term Support Haskell

Geoffrey Mainland
On 10/22/2015 01:29 PM, Edward Kmett wrote:

> On Thu, Oct 22, 2015 at 12:59 PM, Geoffrey Mainland
> <[hidden email] <mailto:[hidden email]>> wrote:
>  
>
>     I am not against changing the Prelude! But it sure would be nice if
>     -XHaskell98 gave me a Haskell 98 Prelude and -XHaskell2010 gave me a
>     Haskell 2010 Prelude, both of which could be used with external
>     packages
>     that themselves used the more modern Prelude.
>
>
> It would definitely be a preferable state of affairs. Unfortunately,
> at least with the tools available to us today, such a plan is
> incompatible with any plan that introduces a new superclass. It also
> cuts off plans that ever factors an existing class into two, such as
> the MonadFail proposals. We simply do not at this time have the
> technical capabilities that would support such a system. If they
> showed up in GHC we can adapt plans to fit.

Great!

Could we work to characterize what technical capabilities we would need
to support full backwards Prelude compatibility?

Here is my rough understanding of what we would need:

1) Some method for "default superclasses." This would solve the AMP issue.

2) A method for factoring existing classes into two (or more) parts.
This would solve the MonadFail problem.

3) A method for imposing extra superclass constraints on a class. This
would be needed for full Num compatibility. Seems much less important
than 1 and 2.

The most thought has gone into 1.

Are these three technical capabilities *all* that we would need? Perhaps
we also need a way to tie the current language (-XHaskell98,
-XHaskell2010) to a particular implementation of the Prelude.

Geoff

Re: Breaking Changes and Long Term Support Haskell

Bardur Arantsson-2
On 10/22/2015 07:41 PM, Geoffrey Mainland wrote:

> On 10/22/2015 01:29 PM, Edward Kmett wrote:
>> On Thu, Oct 22, 2015 at 12:59 PM, Geoffrey Mainland
>> <[hidden email] <mailto:[hidden email]>> wrote:
>>  
>>
>>     I am not against changing the Prelude! But it sure would be nice if
>>     -XHaskell98 gave me a Haskell 98 Prelude and -XHaskell2010 gave me a
>>     Haskell 2010 Prelude, both of which could be used with external
>>     packages
>>     that themselves used the more modern Prelude.
>>
>>
>> It would definitely be a preferable state of affairs. Unfortunately,
>> at least with the tools available to us today, such a plan is
>> incompatible with any plan that introduces a new superclass. It also
>> cuts off plans that ever factors an existing class into two, such as
>> the MonadFail proposals. We simply do not at this time have the
>> technical capabilities that would support such a system. If they
>> showed up in GHC we can adapt plans to fit.
>
> Great!
>
> Could we work to characterize what technical capabilities we would need
> to support full backwards Prelude compatibility?
>

It's basically the stuff that never materialized in 10 years until
people got fed up with the situation and voted AMP through even though
it would cause (limited) breakage.

> Here is my rough understanding of what we would need:
>
> 1) Some method for "default superclasses." This would solve the AMP issue.
>
> 2) A method for factoring existing classes into two (or more) parts.
> This would solve the MonadFail problem.
>
> 3) A method for imposing extra superclass constraints on a class. This
> would be needed for full Num compatibility. Seems much less important
> that 1 and 2.
>
> The most thought has gone into 1.
>
> Are these three technical capabilities *all* that we would need? Perhaps
> we also need a way to tie the current language (-XHaskell98,
> -XHaskell2010) to a particular implementation of the Prelude.
>

You say "all" as if a) it's easy (hint: it's all highly non-trivial), b)
anybody's actually going to do the work.

Look, we'd all like unicorns and rainbows, but clearly nobody's done the
required work[1], and a lot of people were getting fed up with the
status quo.

Just wishing that this will happen won't make it so, and frankly,
downplaying the difficulty seems like an attempt to veto any change.

Regards,

[1] Understandable, given the highly non-trivial nature of it.


Re: Breaking Changes and Long Term Support Haskell

Edward Kmett-2
In reply to this post by Geoffrey Mainland

On Thu, Oct 22, 2015 at 1:41 PM, Geoffrey Mainland <[hidden email]> wrote:
On 10/22/2015 01:29 PM, Edward Kmett wrote:
> On Thu, Oct 22, 2015 at 12:59 PM, Geoffrey Mainland
> <[hidden email] <mailto:[hidden email]>> wrote:
>
>
>     I am not against changing the Prelude! But it sure would be nice if
>     -XHaskell98 gave me a Haskell 98 Prelude and -XHaskell2010 gave me a
>     Haskell 2010 Prelude, both of which could be used with external
>     packages
>     that themselves used the more modern Prelude.
>
>
> It would definitely be a preferable state of affairs. Unfortunately,
> at least with the tools available to us today, such a plan is
> incompatible with any plan that introduces a new superclass. It also
> cuts off plans that ever factors an existing class into two, such as
> the MonadFail proposals. We simply do not at this time have the
> technical capabilities that would support such a system. If they
> showed up in GHC we can adapt plans to fit.

Great!

Could we work to characterize what technical capabilities we would need
to support full backwards Prelude compatibility?

Here is my rough understanding of what we would need:

1) Some method for "default superclasses." This would solve the AMP issue.

2) A method for factoring existing classes into two (or more) parts.
This would solve the MonadFail problem.

3) A method for imposing extra superclass constraints on a class. This
would be needed for full Num compatibility. Seems much less important
than 1 and 2.

The most thought has gone into 1.

Are these three technical capabilities *all* that we would need? Perhaps
we also need a way to tie the current language (-XHaskell98,
-XHaskell2010) to a particular implementation of the Prelude.
 
I don't have a concrete plan here. I'm not even sure one can be achieved that works. I'd say that the burden of figuring out such a thing falls on the party that can create a plan, pitch it to the community and potentially implement it.

If I enumerate a set of conditions here I'm basically implying that I'd agree to any plan that incorporated them. I'm just not prepared to make that commitment sight-unseen to something with unknown warts and implications.

I can, however, say that it is plausible that what you have enumerated above could potentially address the outstanding issues, but I don't know how good of a compromise the result would be. 

-Edward


Re: Breaking Changes and Long Term Support Haskell

Edward Kmett-2
In reply to this post by Edward Kmett-2
On Thu, Oct 22, 2015 at 1:37 PM, Gregory Collins <[hidden email]> wrote:

On Wed, Oct 21, 2015 at 11:40 PM, Edward Kmett <[hidden email]> wrote:
All I'm saying is that if we want to appeal to or cater to working software engineers, we have to be a lot less cavalier about causing more work for them, and we need to prize stability of the core infrastructure more highly. That'd be a broader cultural change, and that goes beyond process: it's policy.

The way things are shaping up, we've had 17 years of rock solid stability

I have >95% confidence that all of the C++ programs I wrote 15 years ago would build and work if I dusted them off and typed "make" today. I have Haskell programs I wrote last year that I probably couldn't say that about.

So I don't buy that, at all, at least if we're discussing the topic of the stability of the core infrastructure in general rather than changes being made to the Prelude. It's been possible to write to Haskell 98 without too much breakage, yes, but almost nobody actually does that; they write to Haskell as defined by GHC + the boot libraries + Haskell platform + Hackage, IMO with decreasing expectations of stability for each. The core set breaks a lot.

I definitely agree here. 

We have a lot of libraries in the Haskell Platform that have fairly liberal change policies. On the other hand, we have a policy of "maintainer decides" around issues. This yields a fairly decentralized change management process, with different maintainers who have different views. The Platform gives us a central pool of packages that are generally the "best of breed" in their respective spaces, but gives us few stability guarantees. 

Heck, every release I wind up having to change whatever code I have that uses template-haskell or Typeable.

On the other hand, it isn't clear with a larger "core" platform with harder stability guarantees that we have a volunteer force that can and would sign up for the long slog of maintenance without that level of autonomy.
 
We definitely shouldn't adopt a posture to breaking changes as conservative as the C++ committee's, and literally nobody in the Haskell community is arguing against breaking changes in general, but as I've pointed out, most of these breakages could have been avoided with more careful engineering, and indeed, on many occasions the argument has been made and it's fallen on deaf ears.

I would argue that there are individual maintainers that give the lie to that statement. In many ways Johan himself has served as a counter-example there. The libraries he has maintained have acted as a form of bedrock with long maintenance windows. On the other hand, the burden of maintaining that stability seems to have ultimately burned him out.

They can speak for themselves but I think for Mark and Johan, this is a "straw that broke the camel's back" issue rather than anything to do with the merits of removing return from Monad. I think the blowback just happens to be so much stronger on MRP because the breaking change is so close to the core of the language, and the benefits are so nebulous. Fixing an aesthetic problem has almost zero practical value.

I personally don't care about the return side of the equation.

Herbert's MRP proposal was an attempt by him to finish out the changes started by AMP so that a future Haskell Report can read cleanly. Past reports have been remarkably free of historical baggage.

I'd personally readily sacrifice "progress" there in the interest of harmony. Herbert as haskell-prime chair possibly feels differently.
 
and ">> could be slightly more efficient for some monads" is pretty weak sauce.

The issue right now around (>>) is that it has knock-on effects that run pretty far and wide. "Weak sauce" or not, it means through second-order consequences that we can't move the useless mapM and sequence to the top level from their current status as members of Traversable, and that users have to care about which of two provably equivalent things they are using, at all times.

It means that code that calls mapM will be less efficient, and that mapM_ has rather radically different space and time behavior than mapM today, and not in a consistently good way.

-Edward


Re: Breaking Changes and Long Term Support Haskell

Geoffrey Mainland
In reply to this post by Edward Kmett-2
On 10/22/2015 02:25 PM, Edward Kmett wrote:

>
> On Thu, Oct 22, 2015 at 1:41 PM, Geoffrey Mainland
> <[hidden email] <mailto:[hidden email]>> wrote:
>
>     On 10/22/2015 01:29 PM, Edward Kmett wrote:
>     > On Thu, Oct 22, 2015 at 12:59 PM, Geoffrey Mainland
>     > <[hidden email] <mailto:[hidden email]>
>     <mailto:[hidden email] <mailto:[hidden email]>>> wrote:
>     >
>     >
>     >     I am not against changing the Prelude! But it sure would be
>     nice if
>     >     -XHaskell98 gave me a Haskell 98 Prelude and -XHaskell2010
>     gave me a
>     >     Haskell 2010 Prelude, both of which could be used with external
>     >     packages
>     >     that themselves used the more modern Prelude.
>     >
>     >
>     > It would definitely be a preferable state of affairs. Unfortunately,
>     > at least with the tools available to us today, such a plan is
>     > incompatible with any plan that introduces a new superclass. It also
>     > cuts off plans that ever factors an existing class into two, such as
>     > the MonadFail proposals. We simply do not at this time have the
>     > technical capabilities that would support such a system. If they
>     > showed up in GHC we can adapt plans to fit.
>
>     Great!
>
>     Could we work to characterize what technical capabilities we would
>     need
>     to support full backwards Prelude compatibility?
>
>     Here is my rough understanding of what we would need:
>
>     1) Some method for "default superclasses." This would solve the
>     AMP issue.
>
>     2) A method for factoring existing classes into two (or more) parts.
>     This would solve the MonadFail problem.
>
>     3) A method for imposing extra superclass constraints on a class. This
>     would be needed for full Num compatibility. Seems much less important
>     that 1 and 2.
>
>     The most thought has gone into 1.
>
>
>     Are these three technical capabilities *all* that we would need?
>     Perhaps
>     we also need a way to tie the current language (-XHaskell98,
>     -XHaskell2010) to a particular implementation of the Prelude.
>
>  
> I don't have a concrete plan here. I'm not even sure one can be
> achieved that works. I'd say that the burden of figuring out such a
> thing falls on the party that can create a plan, pitch it to the
> community and potentially implement it.
>
> If I enumerate a set of conditions here I'm basically implying that
> I'd agree to any plan that incorporated them. I'm just not prepared to
> make that commitment sight-unseen to something with unknown warts and
> implications.
>
> I can, however, say that it is plausible that what you have enumerated
> above could potentially address the outstanding issues, but I don't
> know how good of a compromise the result would be.
>
> -Edward

I don't have a concrete plan either, nor am I sure that one is possible.
But I don't see how having a conversation about how one might achieve
backwards compatibility would commit anyone to anything. Any eventual
proposal would have to go through the same approval process as every
other proposal. And even if we did have a hypothetical draft proposal
that you had at some point stated you approved of in some way, you would
always be free to change your mind!

Isn't the libraries list exactly where this sort of conversation should
happen?

Cheers,
Geoff

RE: Breaking Changes and Long Term Support Haskell

Simon Peyton Jones
| >     Are these three technical capabilities *all* that we would need?
| >     Perhaps
| >     we also need a way to tie the current language (-XHaskell98,
| >     -XHaskell2010) to a particular implementation of the Prelude.
| >
| >
| > I don't have a concrete plan here. I'm not even sure one can be
| > achieved that works. I'd say that the burden of figuring out such a
| > thing falls on the party that can create a plan, pitch it to the
| > community and potentially implement it.

In fact there is more than one concrete plan: https://ghc.haskell.org/trac/ghc/wiki/IntrinsicSuperclasses

All are complex, only partially designed, entirely unimplemented (and the implementation will be non-trivial), and lacking an active champion.  The one I link to above is probably the leading contender, but it feels too complicated to me.

Simon
