Proposal: Explicitly require "Data.Bits.bit (-1) == 0" property


Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Gregory Collins-3

On Mon, Feb 24, 2014 at 10:44 PM, Michael Snoyman <[hidden email]> wrote:
But that's only one half of the "package interoperability" issue. I face this first hand on a daily basis with my Stackage maintenance. I spend far more time reporting issues of restrictive upper bounds than I do with broken builds from upstream changes. So I look at this as purely a game of statistics: are you more likely to have code break because version 1.2 of text changes the type of the map function and you didn't have an upper bound, or because two dependencies of yours have *conflicting* version bounds on a package like aeson[2]? In my experience, the latter occurs far more often than the former.

That's because you maintain a lot of packages, and you're considering buildability on short time frames (i.e. you mostly care about "does all the latest stuff build right now?"). The consequences of violating the PVP are that as a piece of code ages, the probability that it still builds goes to zero, even if you go and dig out the old GHC version that you were using at the time. I find this really unacceptable, and believe that people who are choosing not to be compliant with the policy are BREAKING HACKAGE and making life harder for everyone by trading convenience now for guaranteed pain later. In fact, in my opinion the server ought to be machine-checking PVP compliance and refusing to accept packages that don't obey the policy.

Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.

I've long maintained that the solution to this issue should be tooling. The dependency graph that you stipulate in your cabal file should be a *warrant* that "this package is known to be compatible with these versions of these packages". If a new major version of package "foo" comes out, a bumper tool should be able to try relaxing the dependency and seeing if your package still builds, bumping your version number accordingly based on the PVP rules. Someone released a tool to attempt to do this a couple of days ago --- I haven't tried it yet, but surely with a bit of group effort we can improve these tools so that they are really fast and easy to use.
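For concreteness, here is a hedged sketch of what PVP-style bounds look like in a cabal file (the package names and version numbers are purely illustrative):

```cabal
-- Illustrative build-depends stanza. Under the PVP the major version
-- is the first two components ("A.B"), so a dependency tested against
-- text 1.1.x is constrained to the 1.1 major series:
build-depends:
    base  >= 4.6 && < 4.7,
    text  >= 1.1 && < 1.2,
    aeson >= 0.7 && < 0.8
-- If text-1.2 is released and the package is verified to still build,
-- a bumper tool could widen the bound to "text >= 1.1 && < 1.3" and
-- publish a new point release of the package.
```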

Of course, people who want to follow PVP are also going to need tooling to make sure their programs still build in the future because so many people have broken the policy in the past -- that's where proposed kludges like "cabal freeze" are going to come in.

G
--
Gregory Collins <[hidden email]>

_______________________________________________
Libraries mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/libraries

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Daniel Trstenjak-2

On Tue, Feb 25, 2014 at 11:23:44AM -0800, Gregory Collins wrote:
> Someone released a tool to attempt to do this a couple of days ago ---
> I haven't tried it yet but surely with a bit of group effort we can
> improve these tools so that they are really fast and easy to use.

That's an amazing tool ... ;)

> Of course, people who want to follow PVP are also going to need tooling to make
> sure their programs still build in the future because so many people have
> broken the policy in the past -- that's where proposed kludges like "cabal
> freeze" are going to come in.

If I understood it correctly, cabal >1.19 supports the option '--allow-newer'
to ignore upper bounds, which might solve several of the issues here:
upper bounds could still be set, but ignored if desired.
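For reference, usage of that flag would look roughly like this (a sketch based on the cabal-install documentation; 'some-package' is a hypothetical target, and the exact spelling may vary between cabal-install releases):

```shell
# Ignore all upper bounds while solving dependencies:
cabal install --allow-newer some-package

# Or ignore upper bounds only on specific dependencies:
cabal install --allow-newer=base,text some-package
```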


Greetings,
Daniel

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Edward Kmett-2
It alleviates the common case, but it doesn't resolve the scenario where someone put a hard bound in for a reason due to a known change in semantics or known incompatibility.


On Tue, Feb 25, 2014 at 3:17 PM, Daniel Trstenjak <[hidden email]> wrote:

On Tue, Feb 25, 2014 at 11:23:44AM -0800, Gregory Collins wrote:
> Someone released a tool to attempt to do this a couple of days ago ---
> I haven't tried it yet but surely with a bit of group effort we can
> improve these tools so that they are really fast and easy to use.

That's an amazing tool ... ;)

> Of course, people who want to follow PVP are also going to need tooling to make
> sure their programs still build in the future because so many people have
> broken the policy in the past -- that's where proposed kludges like "cabal
> freeze" are going to come in.

If I understood it correctly, cabal >1.19 supports the option '--allow-newer'
to ignore upper bounds, which might solve several of the issues here:
upper bounds could still be set, but ignored if desired.


Greetings,
Daniel

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Michael Snoyman
In reply to this post by Gregory Collins-3



On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins <[hidden email]> wrote:

On Mon, Feb 24, 2014 at 10:44 PM, Michael Snoyman <[hidden email]> wrote:
But that's only one half of the "package interoperability" issue. I face this first hand on a daily basis with my Stackage maintenance. I spend far more time reporting issues of restrictive upper bounds than I do with broken builds from upstream changes. So I look at this as purely a game of statistics: are you more likely to have code break because version 1.2 of text changes the type of the map function and you didn't have an upper bound, or because two dependencies of yours have *conflicting* version bounds on a package like aeson[2]? In my experience, the latter occurs far more often than the former.

That's because you maintain a lot of packages, and you're considering buildability on short time frames (i.e. you mostly care about "does all the latest stuff build right now?"). The consequences of violating the PVP are that as a piece of code ages, the probability that it still builds goes to zero, even if you go and dig out the old GHC version that you were using at the time. I find this really unacceptable, and believe that people who are choosing not to be compliant with the policy are BREAKING HACKAGE and making life harder for everyone by trading convenience now for guaranteed pain later. In fact, in my opinion the server ought to be machine-checking PVP compliance and refusing to accept packages that don't obey the policy.

Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.


I really don't like this appeal to authority. I don't know who the "royal we" is that you are referring to here, and I don't accept the premise that the rest of us must simply adhere to a policy because "it was decided." "My side" as you refer to it is giving concrete negative consequences to the PVP. I'd expect "your side" to respond in kind, not simply assert that we're "breaking Hackage" and other such hyperbole.

Now, I think I understand what you're alluding to. Assuming I understand you correctly, I think you're advocating irresponsible development. I have codebases which I maintain and which use older versions of packages. I know others who do the same. The rule for this is simple: if your development process only works by assuming third parties adhere to some rules you've established, you're in for a world of hurt. You're correct: if everyone rigidly followed the PVP, *and* no one ever made any mistakes, *and* the PVP solved all concerns, then you could get away with the development practices you're talking about.

But that's not the real world. In the real world:

* The PVP itself does *not* guarantee reliable builds in all cases. If a transitive dependency introduces new exports, or provides new typeclass instances, a fully PVP-compliant stack can be broken. (If anyone doubts this claim, let me know, I can spell out the details. This has come up in practice.)
* People make mistakes. I've been bitten by people making breaking changes in point releases by mistake. If the only way your build will succeed is by assuming no one will ever mess up, you're in trouble.
* Just because your code *builds*, doesn't mean your code *works*. Semantics can change: bugs can be introduced, bugs that you depended upon can be resolved, performance characteristics can change in breaking ways, etc.
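To illustrate the first bullet, here is a minimal sketch (the module and names are hypothetical) of how a fully PVP-compliant minor release can still break a downstream build:

```haskell
module MyApp where

-- An open, unqualified import from a hypothetical dependency,
-- pinned with fully PVP-compliant bounds (foo >= 1.2 && < 1.3):
import Data.Foo

-- A local helper:
lookupDefault :: Eq k => v -> k -> [(k, v)] -> v
lookupDefault def k = maybe def id . lookup k

-- foo-1.2.1 is a PVP-compliant minor release (it only adds exports),
-- but if the newly added export is also called lookupDefault, every
-- use of that name here now fails with an "Ambiguous occurrence"
-- error, even though all version bounds in the stack were respected.
```

This is also why the PVP text itself suggests qualified imports or explicit import lists for clients that want stability against minor releases.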

I absolutely believe that, if you want to have code that builds reliably, you have to specify all of your deep dependencies. That's what I do for any production software, and it's what I recommend to anyone who will listen to me. Trying to push this off as a responsibility of every Hackage package author is (1) shifting the burden to the wrong place, and (2) irresponsible, since some maintainer out in the rest of the world has no obligation to make sure your code keeps working. That's your responsibility.
 
I've long maintained that the solution to this issue should be tooling. The dependency graph that you stipulate in your cabal file should be a *warrant* that "this package is known to be compatible with these versions of these packages". If a new major version of package "foo" comes out, a bumper tool should be able to try relaxing the dependency and seeing if your package still builds, bumping your version number accordingly based on the PVP rules. Someone released a tool to attempt to do this a couple of days ago --- I haven't tried it yet, but surely with a bit of group effort we can improve these tools so that they are really fast and easy to use.

Of course, people who want to follow PVP are also going to need tooling to make sure their programs still build in the future because so many people have broken the policy in the past -- that's where proposed kludges like "cabal freeze" are going to come in.


This is where we apparently fundamentally disagree. cabal freeze IMO is not at all a kludge. It's the only sane approach to reliable builds. If I ran my test suite against foo version 1.0.1, performed manual testing on 1.0.1, did my load balancing against 1.0.1, I don't want some hotfix build to automatically get upgraded to version 1.0.2, based on the assumption that foo's author didn't break anything.
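A hedged sketch of the workflow being described (at the time of the thread "cabal freeze" was still a proposal, so the file name and format here are illustrative):

```shell
# Record the exact install plan the code was tested against:
cabal freeze

# This would pin every transitive dependency in a config file, e.g.:
#   constraints: base == 4.6.0.1,
#                text == 1.1.0.0,
#                foo  == 1.0.1
# Subsequent builds reuse exactly these versions, so a hotfix rebuild
# cannot silently pick up foo-1.0.2.
```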

Michael


Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Vincent Hanquez
In reply to this post by Gregory Collins-3
On 2014-02-25 19:23, Gregory Collins wrote:

>
> On Mon, Feb 24, 2014 at 10:44 PM, Michael Snoyman <[hidden email]
> <mailto:[hidden email]>> wrote:
>
>     But that's only one half of the "package interoperability" issue.
>     I face this first hand on a daily basis with my Stackage
>     maintenance. I spend far more time reporting issues of restrictive
>     upper bounds than I do with broken builds from upstream changes.
>     So I look at this as purely a game of statistics: are you more
>     likely to have code break because version 1.2 of text changes the
>     type of the map function and you didn't have an upper bound, or
>     because two dependencies of yours have *conflicting* version
>     bounds on a package like aeson[2]? In my experience, the latter
>     occurs far more often than the former.
>
>
> That's because you maintain a lot of packages, and you're considering
> buildability on short time frames (i.e. you mostly care about "does
> all the latest stuff build right now?"). The consequences of violating
> the PVP are that as a piece of code ages, the probability that it
> still builds goes to *zero*, even if you go and dig out the old GHC
> version that you were using at the time. I find this really
> unacceptable, and believe that people who are choosing not to be
> compliant with the policy are BREAKING HACKAGE and making life harder
> for everyone by trading convenience now for guaranteed pain later. In
> fact, in my opinion the server ought to be machine-checking PVP
> compliance and refusing to accept packages that don't obey the policy.
If you're going to dig out an old ghc version, what's stopping you from
downloading old packages manually from hackage? I'm sure it can even be
automated (more or less).

However, I don't think we should optimise for this use case; I'd rather
use maintained packages that are regularly updated. And even if I wanted
to use an old package, provided it's not tied to something fairly
internal like GHC's API or such, in a language like Haskell, porting to
recent versions of libraries should be easier than in most other languages.

Furthermore, some old libraries should not be used anymore. Consider old
libraries that have security issues, for example. Whilst it's not the
intent, it's probably a good thing that those old libraries don't build
anymore, and people are forced to move to the latest maintained version.

The PvP as it stands seems to be a refuge for fossilised packages.

> Like Ed said, this is pretty cut and dried: we have a policy, you're
> choosing not to follow it, you're not in compliance, you're breaking
> stuff. We can have a discussion about changing the policy (and this
> has definitely been discussed to death before), but I don't think your
> side has the required consensus/votes needed to change the policy. As
> such, I really wish that you would reconsider your stance here.

"we have a policy".

*ouch*, I'm sorry, but I find those bigoted views damaging to a nice,
inclusive Haskell community (as I like to view it).

While we may have different opinions, I think we're all trying our best
to contribute to the haskell ecosystem the way we see fit.

--
Vincent

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Vincent Hanquez
In reply to this post by Carter Schonwald
On 2014-02-25 17:26, Carter Schonwald wrote:
> indeed.
>
> So lets think about how to add module types or some approximation
> thereof to GHC? (seriously, thats the only sane "best solution" i can
> think of, but its not something that can be done casually). Theres
> also the fact that any module system design will have to explicitly
> deal with type class instances in a more explicit fashion than we've
> done thus far.

Yes. I think that's the only way a PvP could actually work. I would
imagine that it could be quite fiddly, which is probably the reason why
it hasn't been done yet.
But, clearly, for this scheme to work, it needs to remove the human part
of the equation as much as possible.

It still wouldn't be perfect, as there would still be stuff that can't
be accounted for (bugs, laziness, performance issues, ...), but clearly
it would work better than a simple flat sequence of numbers that is
supposed to represent many different aspects of a package and the
author's understanding of a policy.

--
Vincent

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

MightyByte
In reply to this post by Vincent Hanquez
On Tue, Feb 25, 2014 at 4:16 PM, Vincent Hanquez <[hidden email]> wrote:
> If you're going to dig an old ghc version, what's stopping you from
> downloading old packages manually from hackage? I'm sure it can even be
> automated (more or less).

It's much more difficult because the scale is much greater.  Also, if
people aren't putting in version bounds, then you have no clue what
versions to try.  Leaving out version bounds is throwing away
information.

> However, I don't think we should optimise for this use case; I'd rather use
> maintained packages that are regularly updated.

When I write code and get it working, I want it to work for all time.
There's absolutely no reason we shouldn't be able to make that happen.
 If we ignore this case, then Haskell will never be suitable for use
in serious production situations.  Large organizations want to know
that if they start using something it will continue to work.  (And
don't respond to this with the "avoid success at all costs" line.
Haskell is now mature enough that I and a growing number of other
people use Haskell on a daily basis for mission-critical
applications.)

> And even if I wanted to use an old package, provided it's not tied to something fairly internal like
> GHC's API or such, in a language like Haskell, porting to recent versions of
> libraries should be easier than in most other languages.

It might be easier, but it can still require a LOT of effort...much
more than is justified in some situations.  And that doesn't mean that
in those situations getting old code working doesn't have significant
value.

> Furthermore, some old libraries should not be used anymore. Consider old
> libraries that have security issues for example. Whilst it's not the intent,
> it's probably a good thing that those old libraries don't build anymore, and
> people are forced to move to the latest maintained version.

This argument does not hold water when getting a legacy piece of code
working has significant intrinsic value.  There are plenty of
situations where code can have great value to a person/organization
even if it doesn't touch the wild internet.

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Vincent Hanquez
In reply to this post by Michael Snoyman
On 2014-02-25 20:38, Michael Snoyman wrote:

>
>
>
> On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins
> <[hidden email] <mailto:[hidden email]>> wrote:
>
> I really don't like this appeal to authority. I don't know who the
> "royal we" is that you are referring to here, and I don't accept the
> premise that the rest of us must simply adhere to a policy because "it
> was decided." "My side" as you refer to it is giving concrete negative
> consequences to the PVP. I'd expect "your side" to respond in kind,
> not simply assert that we're "breaking Hackage" and other such hyperbole.
>
Strongly agreed.

>
>     Of course, people who want to follow PVP are also going to need
>     tooling to make sure their programs still build in the future
>     because so many people have broken the policy in the past --
>     that's where proposed kludges like "cabal freeze" are going to
>     come in.
>
>
> This is where we apparently fundamentally disagree. cabal freeze IMO
> is not at all a kludge. It's the only sane approach to reliable
> builds. If I ran my test suite against foo version 1.0.1, performed
> manual testing on 1.0.1, did my load balancing against 1.0.1, I don't
> want some hotfix build to automatically get upgraded to version 1.0.2,
> based on the assumption that foo's author didn't break anything.
>

This is probably also the only sane approach at the moment for safe
builds. Considering the whole hackage infrastructure is quite insecure
at the moment (http download/upload, no package signing, etc), freezing
your build packages after you have audited them is probably the only
sensible way to ship secure products.

In a production environment (at 2 different work places), I've seen two
approaches for proper builds:

* still using hackage directly, but pinning each package with a
cryptographic hash on your build site.
* a private hackage instance where packages are manually imported;
builds use this exclusively.
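The first approach can be sketched with standard tools (the package tarball here is a stand-in; a real setup would hash the tarballs actually fetched from hackage):

```shell
# At audit time: record a digest for each vetted package tarball.
echo 'vetted tarball contents' > foo-1.0.1.tar.gz
sha256sum foo-1.0.1.tar.gz > pinned.sha256

# At build time: refuse to proceed unless every digest still matches.
# On success this prints "foo-1.0.1.tar.gz: OK", then the message.
sha256sum --check pinned.sha256 && echo 'hashes verified'
```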

Using hackage directly (+ depending on the PvP) is at the moment too much
like playing Russian roulette.

--
Vincent

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Vincent Hanquez
In reply to this post by MightyByte
On 2014-02-25 21:34, MightyByte wrote:
> On Tue, Feb 25, 2014 at 4:16 PM, Vincent Hanquez <[hidden email]> wrote:
>> If you're going to dig an old ghc version, what's stopping you from
>> downloading old packages manually from hackage? I'm sure it can even be
>> automated (more or less).
> It's much more difficult because the scale is much greater.  Also, if
> people aren't putting in version bounds, then you have no clue what
> versions to try.  Leaving out version bounds is throwing away
> information.
I'm not saying this is not painful, but I've done it in the past, and
using bisection and educated guesses (for example, not using libraries
released after a certain date), you converge on a solution pretty quickly.

But the bottom line is that it's not the common use case. I rarely have
to dig up old unused code.

>> However, I don't think we should optimise for this use case; I'd rather use
>> maintained packages that are regularly updated.
> When I write code and get it working, I want it to work for all time.
> There's absolutely no reason we shouldn't be able to make that happen.
>   If we ignore this case, then Haskell will never be suitable for use
> in serious production situations.  Large organizations want to know
> that if they start using something it will continue to work.  (And
> don't respond to this with the "avoid success at all costs" line.
> Haskell is now mature enough that I and a growing number of other
> people use Haskell on a daily basis for mission-critical
> applications.)

This is moot IMHO. A large organisation would *not* rely on cabal, nor
on the PvP, to actually download packages properly:
not only is this insecure but, as Michael mentioned, you would not get
the guarantees you need anyway.

Even if the above wasn't an issue, Haskell doesn't run in a bubble. I
don't expect old ghc and old packages to work with newer operating
systems and newer libraries forever.

>> Furthermore, some old libraries should not be used anymore. Consider old
>> libraries that have security issues for example. Whilst it's not the intent,
>> it's probably a good thing that those old libraries don't build anymore, and
>> people are forced to move to the latest maintained version.
> This argument does not hold water when getting a legacy piece of code
> working has significant intrinsic value.  There are plenty of
> situations where code can have great value to a person/organization
> even if it doesn't touch the wild internet.
Sure, but in that case it doesn't apply to my "security issue" example, does
it?

--
Vincent

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Gregory Collins-3
In reply to this post by Michael Snoyman
On Tue, Feb 25, 2014 at 12:38 PM, Michael Snoyman <[hidden email]> wrote:
On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins <[hidden email]> wrote:
Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.

I really don't like this appeal to authority. I don't know who the "royal we" is that you are referring to here, and I don't accept the premise that the rest of us must simply adhere to a policy because "it was decided." "My side" as you refer to it is giving concrete negative consequences to the PVP. I'd expect "your side" to respond in kind, not simply assert that we're "breaking Hackage" and other such hyperbole.

This is not an appeal to authority, it's an appeal to consensus. The community comes together to work on lots of different projects like Hackage and the platform, and we have established procedures and policies (like the PVP and the Haskell Platform process) to manage this. I think the following facts are uncontroversial:
  • a Hackage package versioning policy exists and has been published in a known location
  • we don't have another one
  • you're violating it
Now you're right to argue that the PVP as currently constituted causes problems, i.e. "I can't upgrade to new-shiny-2.0 quickly enough" and "I manage 200 packages and you're driving me insane". And new major base versions cause a month of churn before everything goes green again. Everyone understands this. But the solution is either to vote to change the policy or to write tooling to make your life less insane, not just to ignore it, because the situation this creates (programs bitrot and become unbuildable over time at 100% probability) is really disappointing.

Now, I think I understand what you're alluding to. Assuming I understand you correctly, I think you're advocating irresponsible development. I have codebases which I maintain and which use older versions of packages. I know others who do the same. The rule for this is simple: if your development process only works by assuming third parties adhere to some rules you've established, you're in for a world of hurt. You're correct: if everyone rigidly followed the PVP, *and* no one ever made any mistakes, *and* the PVP solved all concerns, then you could get away with the development practices you're talking about.

There's a strawman in there -- in an ideal world PVP violations would be rare and would be considered bugs. Also, if it were up to me we'd be machine-checking PVP compliance. I don't know what you're talking about re: "irresponsible development". In the scenario I'm talking about, my program depends on "foo-1.2", "foo-1.2" depends on any version of "bar", and then when "bar-2.0" is released "foo-1.2" stops building and there's no way to fix this besides trial and error because the solver doesn't have enough information to do its work (and it's been lied to!!!). The only practical solutions right now are to:
  • commit to maintaining every program you've ever written on the hackage upgrade treadmill forever, or
  • write down the exact versions of all of the libraries you need in the transitive closure of the dependency graph.
#2 is best practice for repeatable builds anyways and you're right that cabal freeze will help here, but it doesn't help much for all the programs written before "cabal freeze" comes out. 

But that's not the real world. In the real world:

* The PVP itself does *not* guarantee reliable builds in all cases. If a transitive dependency introduces new exports, or provides new typeclass instances, a fully PVP-compliant stack can be broken. (If anyone doubts this claim, let me know, I can spell out the details. This has come up in practice.)

Of course. But compute the probability of this occurring (rare) vs the probability of breakage given no upper bounds (100% as t -> ∞). Think about what you're saying semantically when you say you depend only on "foo > 3": "foo version 4.0 or any later version". You can't own up to this contract.

* Just because your code *builds*, doesn't mean your code *works*. Semantics can change: bugs can be introduced, bugs that you depended upon can be resolved, performance characteristics can change in breaking ways, etc.

I think you're making my point for me -- given that this paragraph you wrote is 100% correct, it makes sense for cabal not to try to build against the new version of a dependency until the package maintainer has checked that things still work and given the solver the go-ahead by bumping the package upper bound.

This is where we apparently fundamentally disagree. cabal freeze IMO is not at all a kludge. It's the only sane approach to reliable builds. If I ran my test suite against foo version 1.0.1, performed manual testing on 1.0.1, did my load balancing against 1.0.1, I don't want some hotfix build to automatically get upgraded to version 1.0.2, based on the assumption that foo's author didn't break anything.

This wouldn't be an assumption, Michael -- the tool should run the build and the test suites. We'd bump version on green tests.

G
--
Gregory Collins <[hidden email]>


Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Gregory Collins-3
In reply to this post by Vincent Hanquez
On Tue, Feb 25, 2014 at 1:16 PM, Vincent Hanquez <[hidden email]> wrote:
On 2014-02-25 19:23, Gregory Collins wrote:

That's because you maintain a lot of packages, and you're considering buildability on short time frames (i.e. you mostly care about "does all the latest stuff build right now?"). The consequences of violating the PVP are that as a piece of code ages, the probability that it still builds goes to *zero*, even if you go and dig out the old GHC version that you were using at the time. I find this really unacceptable, and believe that people who are choosing not to be compliant with the policy are BREAKING HACKAGE and making life harder for everyone by trading convenience now for guaranteed pain later. In fact, in my opinion the server ought to be machine-checking PVP compliance and refusing to accept packages that don't obey the policy.
If you're going to dig an old ghc version, what's stopping you from downloading old packages manually from hackage? I'm sure it can even be automated (more or less).

The solver can't help you here. Like I wrote in my last message to Michael: if I depend on foo-1.2, and foo-1.2 depends on "bar", and "bar-2.0" comes out that breaks "foo-1.2", what can I do? I have to binary search the transitive closure of the dependency space because the solver cannot help.

However, I don't think we should optimise for this use case; I'd rather use maintained packages that are regularly updated. And even if I wanted to use an old package, provided it's not tied to something fairly internal like GHC's API or such, in a language like Haskell, porting to recent versions of libraries should be easier than in most other languages.

Furthermore, some old libraries should not be used anymore. Consider old libraries that have security issues, for example. Whilst it's not the intent, it's probably a good thing that those old libraries don't build anymore, and people are forced to move to the latest maintained version.

The PVP as it stands seems to be a refuge for fossilised packages.

I care much more about programs than about libraries here. Most Haskell programs that were ever written never made it to Hackage. I don't understand the point about old libraries: people will stop using libraries that aren't updated by their maintainers, or someone else will take them over.

Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.

"we have a policy".

*ouch*, I'm sorry, but I find those bigoted views damaging to the nice, inclusive Haskell community (as I like to view it).

I don't see what bigotry or inclusiveness has to do with this. This is a conversation between insiders anyway :)

While we may have different opinions, I think we're all trying our best to contribute to the Haskell ecosystem the way we see fit.

Of course, nobody's saying otherwise. People arguing for the omission of upper bounds often point to breakage caused by the PVP -- I just want to make it clear that people who ignore PVP cause breakage too, and this breakage is worse (because it affects end users instead of Haskell nerds, who know how to fix it). See e.g. https://github.com/snapframework/cufp2011/issues/4 for an instance where one of your packages broke a program of mine for no reason. This program would have continued building fine basically forever if you'd followed the PVP.

G
--
Gregory Collins <[hidden email]>


Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Omari Norman-2
In reply to this post by Gregory Collins-3
On Tue, Feb 25, 2014 at 4:52 PM, Gregory Collins
<[hidden email]> wrote:

> write down the exact versions of all of the libraries you need in the
> transitive closure of the dependency graph.

I cobbled together a rudimentary tool that does just that:

https://hackage.haskell.org/package/sunlight

the idea being that, to my knowledge, there were no tools making it
easy to verify that a package builds with the *minimum* specified
versions.  Typical CI testing will eagerly pull the latest
dependencies.

sunlight builds in a sandbox, runs your tests, and snapshots the
resulting GHC package database.  It can do this for multiple GHC
versions, and will do one build with the minimum versions possible (it
does require that you specify a minimum version for each dependency,
but not a maximum).  At least then you can consult a record showing
the exact package graph that actually worked.
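The constraint sunlight checks corresponds to cabal files written with an explicit lower bound on every dependency; a hypothetical fragment (package names and versions invented):

```cabal
-- Each dependency must state the minimum version sunlight will
-- build and test against; upper bounds are optional.
build-depends: base       >= 4.5
             , bytestring >= 0.9.2
             , text       >= 0.11
```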

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Ganesh Sittampalam
In reply to this post by Michael Snoyman
On 25/02/2014 06:44, Michael Snoyman wrote:

> Next is the issue of PVP. I am someone who has stopped religiously
> following the PVP in the past few years. Your email seems to imply that
> only those following the PVP care about making sure that "packages work
> together." I disagree here; I don't use the PVP specifically because I
> care about package interoperability.
>
> The point of the PVP is to ensure that code builds. It's a purely
> compile-time concept. The PVP solves the problem of an update to a
> dependency causing a downstream package to break. And assuming everyone
> adheres to it[1], it ensures that cabal will never try to perform a
> build which isn't guaranteed to work.
>
> But that's only one half of the "package interoperability" issue. I face
> this first hand on a daily basis with my Stackage maintenance. I spend
> far more time reporting issues of restrictive upper bounds than I do
> with broken builds from upstream changes. So I look at this as purely a
> game of statistics: are you more likely to have code break because
> version 1.2 of text changes the type of the map function and you didn't
> have an upper bound, or because two dependencies of yours have
> *conflicting* versions bounds on a package like aeson[2]? In my
> experience, the latter occurs far more often than the former.

It's worth mentioning that cabal failing to find a solution is far less
costly for me to discover than cabal finding a solution and then having
a build of a large graph of packages fail, because by that point I've
wasted a lot of time waiting for the build and I now have a thoroughly
confused package database to recover from (whether using a sandbox or not).

Ganesh

Re: qualified imports, PVP and so on

Herbert Valerio Riedel
In reply to this post by Michael Snoyman
On 2014-02-25 at 21:38:38 +0100, Michael Snoyman wrote:

[...]

> * The PVP itself does *not* guarantee reliable builds in all cases. If a
> transitive dependency introduces new exports, or provides new typeclass
> instances, a fully PVP-compliant stack can be broken. (If anyone doubts
> this claim, let me know, I can spell out the details. This has come up in
> practice.)

...are you simply referring to the fact that in order to guarantee
PVP-semantics of a package version, one has to take care to restrict the
version bounds of that package's build-deps in such a way, that any API
entities leaking from its (direct) build-deps (e.g. typeclass instances
or other re-exported entities) are not a function of the "internal"
degrees of freedom the build-dep version-ranges provide? Or is there
more to it?

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

MightyByte
In reply to this post by Vincent Hanquez
On Tue, Feb 25, 2014 at 4:51 PM, Vincent Hanquez <[hidden email]> wrote:
>
> I'm not saying this is not painful, but i've done it in the past, and using
> dichotomy and educated guesses (for example not using libraries released
> after a certain date), you converge pretty quickly on a solution.
>
> But the bottom line is that it's not the common use case. I rarely have to
> dig old unused code.

And I have code that I would like to have working today, but it's too
expensive to go through this process.  The code has significant value
to me and other people, but not enough to justify the large cost of
getting it working again.

> This is moot IMHO. A large organisation would *not* rely on cabal, nor the
> PVP, to actually download packages properly:

Sorry, let me rephrase.  s/Large organizations/organizations/  Not
everyone is big enough to devote the kind of resources it would take
to set up their own system.  I've personally worked at two such
companies.  Building tools that can serve the needs of these
organizations will help the Haskell community as a whole.

> Not only this is insecure, and as Michael mentioned, you would not get the
> guarantee you need anyway.

In many cases security doesn't matter because the code doesn't interact
with the outside world.  We're not talking about making guarantees about
builds against later versions.  We're talking about guaranteeing that
the package will work the way it always worked.  It's a kind of
package-level purity/immutability.

> Even if the above wasn't an issue, Haskell doesn't run in a bubble. I don't
> expect old ghc and old packages to work with newer operating systems and
> newer libraries forever.

I don't expect this either.  I expect old packages to work the way
they always worked with the packages they always worked with.

Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Michael Snoyman



On Wed, Feb 26, 2014 at 1:28 AM, MightyByte <[hidden email]> wrote:
On Tue, Feb 25, 2014 at 4:51 PM, Vincent Hanquez <[hidden email]> wrote:
>
> I'm not saying this is not painful, but i've done it in the past, and using
> dichotomy and educated guesses (for example not using libraries released
> after a certain date), you converge pretty quickly on a solution.
>
> But the bottom line is that it's not the common use case. I rarely have to
> dig old unused code.

And I have code that I would like to have working today, but it's too
expensive to go through this process.  The code has significant value
to me and other people, but not enough to justify the large cost of
getting it working again.



I think we need to make these cases more concrete to have a meaningful discussion. Between Doug and Gregory, I'm understanding two different use cases:

1. Existing, legacy code, built against some historical version of Hackage, without information on the exact versions of all deep dependencies.
2. Someone starting a new project who wants to use an older version of a package on Hackage.

If I've missed a use case, please describe it.

For (1), let's start with the time machine game: *if* everyone had been using the PVP, then theoretically this wouldn't have happened. And *if* the developers had followed proper practice and documented their complete build environment, then PVP compliance would be irrelevant. So if we could go back in time and twist people's arms, no problems would exist. Hurray, we've established that 20/20 hindsight is very nice :).

But what can be done today? Actually, I think the solution is a very simple tool, and I'll be happy to write it if people want: cabal-timemachine. It takes a timestamp, and then deletes all cabal files from our 00-index.tar file that represent packages uploaded after that date. Assuming you know the last date of a successful build, it should be trivial to get a build going again. And if you *don't* know the date, you can bisect until you get a working build. (For that matter, the tool could even *include* a bisector in it.) Can anyone picture a scenario where this wouldn't solve the problem even better than PVP compliance?
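The bisection step such a tool could perform can be sketched in a few lines; this is a hypothetical illustration, not an actual tool's API, and "building" is abstracted into a pure predicate so the idea stays self-contained (a real tool would shell out to cabal):

```haskell
-- | Given snapshot dates in ascending order and a predicate that is
-- True for every date up to the last working snapshot and False
-- afterwards, binary-search for that last working date.
lastGoodSnapshot :: (a -> Bool) -> [a] -> Maybe a
lastGoodSnapshot builds = go Nothing
  where
    go best [] = best
    go best ys =
      let (older, newer) = splitAt (length ys `div` 2) ys
      in case newer of
           (mid:rest)
             | builds mid -> go (Just mid) rest   -- works: look later
             | otherwise  -> go best older        -- broken: look earlier
           []             -> best

main :: IO ()
main = do
  let snapshots  = ["2013-06", "2013-09", "2013-12", "2014-01", "2014-02"]
      buildsAt d = d <= "2013-12"   -- pretend later snapshots break
  print (lastGoodSnapshot buildsAt snapshots)  -- prints: Just "2013-12"
```

Like ordinary binary search, this needs only O(log n) trial builds rather than one per snapshot, which matters when each "test" is a full rebuild of the dependency graph.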

I still maintain that new codebases should be creating freeze files (or whatever we want to call them), and we need a community supported tool for it. After speaking with various Haskell-based companies, I'm fairly certain just about everyone's reinvented their own proprietary version of such a tool.

For (2), talking about older versions of a package is not relevant. I actively maintain a number of my older package releases, as I'm sure others do as well. The issue isn't about *age* of a package, but about *maintenance* of a package. And we simply shouldn't be encouraging users to start off with an unmaintained version of a package. This is a completely separate discussion from the legacy code base, where, despite the valid security and bug concerns Vincent raised, it's likely not worth updating to the latest and greatest.

All of that said, I still think the only real solution is getting end users off of Hackage. We need an intermediate, stabilizing layer. That's why I started Stackage, and I believe that it's the only solution that will ultimately make library authors and end-users happy. Everything we're discussing now is window dressing.

My offer of cabal-timemachine was serious: I'll be happy to start that project, and I *do* think it will solve many people's issues. I'd just like it if it was released concurrently with cabal-freeze, so that once you figure out the right set of packages, you can freeze them in place and never run into these issues again.

Michael


Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Michael Snoyman
In reply to this post by Gregory Collins-3



On Tue, Feb 25, 2014 at 11:52 PM, Gregory Collins <[hidden email]> wrote:
On Tue, Feb 25, 2014 at 12:38 PM, Michael Snoyman <[hidden email]> wrote:
On Tue, Feb 25, 2014 at 9:23 PM, Gregory Collins <[hidden email]> wrote:
Like Ed said, this is pretty cut and dried: we have a policy, you're choosing not to follow it, you're not in compliance, you're breaking stuff. We can have a discussion about changing the policy (and this has definitely been discussed to death before), but I don't think your side has the required consensus/votes needed to change the policy. As such, I really wish that you would reconsider your stance here.

I really don't like this appeal to authority. I don't know who the "royal we" is that you are referring to here, and I don't accept the premise that the rest of us must simply adhere to a policy because "it was decided." "My side" as you refer to it is giving concrete negative consequences to the PVP. I'd expect "your side" to respond in kind, not simply assert that we're "breaking Hackage" and other such hyperbole.

This is not an appeal to authority, it's an appeal to consensus. The community comes together to work on lots of different projects like Hackage and the platform and we have established procedures and policies (like the PVP and the Hackage platform process) to manage this. I think the following facts are uncontroversial:
  • a Hackage package versioning policy exists and has been published in a known location
  • we don't have another one
  • you're violating it
Now you're right to argue that the PVP as currently constituted causes problems, i.e. "I can't upgrade to new-shiny-2.0 quickly enough" and "I manage 200 packages and you're driving me insane". And new major base versions cause a month of churn before everything goes green again. Everyone understands this. But the solution is either to vote to change the policy or to write tooling to make your life less insane, not just to ignore it, because the situation this creates (programs bitrot and become unbuildable over time at 100% probability) is really disappointing.


You talk about voting on the policy as if that's the natural thing to do. When did we vote to accept the policy in the first place? I don't remember ever putting my name down as "I agree, this makes sense." Talking about voting, violating, complying, etc, in a completely open system like Hackage, makes no sense, and is why your comments come off as an appeal to authority.

If you want to have more rigid rules on what packages can be included, start a downstream, PVP-only Hackage, and don't allow in violating packages. If it takes off, and users have demonstrated that they care very much about PVP compliance, then us PVP naysayers will have hard evidence that our beliefs were mistaken. Right now, it's just a few people constantly accusing us of violations and insisting we spend a lot more work on a policy we believe to be flawed.
 
Now, I think I understand what you're alluding to. Assuming I understand you correctly, I think you're advocating irresponsible development. I have codebases which I maintain and which use older versions of packages. I know others who do the same. The rule for this is simple: if your development process only works when third parties adhere to rules you've established, you're in for a world of hurt. You're correct: if everyone rigidly followed the PVP, *and* no one ever made any mistakes, *and* the PVP solved all concerns, then you could get away with the development practices you're talking about.

There's a strawman in there -- in an ideal world PVP violations would be rare and would be considered bugs.

Then you're missing my point completely. You're advocating making package management policy based on developer practices of not pinning down deep dependencies. My point is that *bugs happen*. And as I keep saying, it's not just build-time bugs: runtime bugs are possible and far worse. I see no reason that package authors should go through lots of effort to encourage bad practice.
 
Also, if it were up to me we'd be machine-checking PVP compliance. I don't know what you're talking about re: "irresponsible development". In the scenario I'm talking about, my program depends on "foo-1.2", "foo-1.2" depends on any version of "bar", and then when "bar-2.0" is released "foo-1.2" stops building and there's no way to fix this besides trial and error because the solver doesn't have enough information to do its work (and it's been lied to!!!). The only practical solutions right now are to:
  • commit to maintaining every program you've ever written on the hackage upgrade treadmill forever, or
  • write down the exact versions of all of the libraries you need in the transitive closure of the dependency graph.
#2 is best practice for repeatable builds anyways and you're right that cabal freeze will help here, but it doesn't help much for all the programs written before "cabal freeze" comes out. 
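Concretely, option #2 amounts to a freeze file pinning the whole transitive closure; a hypothetical fragment (the package versions here are invented):

```cabal
-- cabal.config, the format later emitted by "cabal freeze":
constraints: aeson      == 0.6.2.1,
             attoparsec == 0.10.4.0,
             bytestring == 0.10.0.2,
             text       == 0.11.3.1
```

With every version pinned, the solver has exactly one plan to consider, so neither missing upper bounds nor new uploads can change what gets built.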


Playing the time machine game is silly. Older programs are broken. End of story. If we all agree to start using the PVP now, it won't fix broken programs. If we release "cabal freeze" now, it won't fix broken programs. But releasing "cabal freeze" *will* prevent this problem from happening in the future.
 
But that's not the real world. In the real world:

* The PVP itself does *not* guarantee reliable builds in all cases. If a transitive dependency introduces new exports, or provides new typeclass instances, a fully PVP-compliant stack can be broken. (If anyone doubts this claim, let me know, I can spell out the details. This has come up in practice.)

Of course. But compute the probability of this occurring (rare) vs the probability of breakage given no upper bounds (100% as t -> ∞). Think about what you're saying semantically when you say you depend only on "foo > 3": "foo version 4.0 or any later version". You can't own up to this contract.
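Spelled out as hypothetical build-depends lines, the two styles promise very different things:

```cabal
build-depends: foo > 3               -- "works with 4.0, 5.0, ... forever"
build-depends: foo >= 3.1 && < 3.2   -- "works with the 3.1 API I tested"
```

(Under the PVP, the major version is the first two components, hence the `< 3.2` upper bound.)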


That's because you're defining the build-depends to mean "I guarantee this to be the case." I could just as easily argue that `foo < 4` is also a lie: how do you know that it *won't* build? This argument has been had many times, please stop trying to make it seem like a clear-cut argument.
 
* Just because your code *builds*, doesn't mean your code *works*. Semantics can change: bugs can be introduced, bugs that you depended upon can be resolved, performance characteristics can change in breaking ways, etc.

I think you're making my point for me -- given that this paragraph you wrote is 100% correct, it makes sense for cabal not to try to build against the new version of a dependency until the package maintainer has checked that things still work and given the solver the go-ahead by bumping the package upper bound.


Again, you're missing it. If there's a point release, PVP-based code will automatically start using that new point release. That's simply not good practice for a production system.
 
This is where we apparently fundamentally disagree. cabal freeze IMO is not at all a kludge. It's the only sane approach to reliable builds. If I ran my test suite against foo version 1.0.1, performed manual testing on 1.0.1, did my load balancing against 1.0.1, I don't want some hotfix build to automatically get upgraded to version 1.0.2, based on the assumption that foo's author didn't break anything.

This wouldn't be an assumption, Michael -- the tool should run the build and the test suites. We'd bump version on green tests.


Maybe you write perfect code every time. But I've seen this process many times in the past:

* Work on version 2 of an application.
* Create a staging build of version 2.
* Run automated tests on version 2.
* QA manually tests version 2.
* Release version 2.
* Three weeks later, discover a bug.
* Write a hotfix, deploy to staging, run automated tests, QA the changed code, and ship.

In these circumstances, it would be terrible if my build system automatically accepted a new point release of a package on Hackage because the PVP says it's OK. Yes, we should all have 100% test coverage, with automated testing that covers all functionality of the product, and every single release would have full test coverage. But we all know that's not the real world. Letting a build system throw variables into an equation is irresponsible.

Michael


Re: qualified imports, PVP and so on

Michael Snoyman
In reply to this post by Herbert Valerio Riedel



On Wed, Feb 26, 2014 at 12:52 AM, Herbert Valerio Riedel <[hidden email]> wrote:
On 2014-02-25 at 21:38:38 +0100, Michael Snoyman wrote:

[...]

> * The PVP itself does *not* guarantee reliable builds in all cases. If a
> transitive dependency introduces new exports, or provides new typeclass
> instances, a fully PVP-compliant stack can be broken. (If anyone doubts
> this claim, let me know, I can spell out the details. This has come up in
> practice.)

...are you simply referring to the fact that in order to guarantee
PVP-semantics of a package version, one has to take care to restrict the
version bounds of that package's build-deps in such a way, that any API
entities leaking from its (direct) build-deps (e.g. typeclass instances
or other re-exported entities) are not a function of the "internal"
degrees of freedom the build-dep version-ranges provide? Or is there
more to it?

That's essentially it. I'll give one of the examples I ran into. (Names omitted on purpose; if the involved party wants to identify himself, please do so. I just didn't feel comfortable doing it without your permission.) Version 0.2 of monad-logger included MonadLogger instances for IO and other base monads. For various reasons, these were removed, and the version bumped to 0.3. This is in full compliance with the PVP.

persistent depends on monad-logger. It can work with either version 0.2 or 0.3 of monad-logger, and the cabal file allows this via `monad-logger >= 0.2 && < 0.4` (or something like that). Again, full PVP compliance.

A user wrote code against persistent when monad-logger version 0.2 was available. He used a function that looked like:

runDatabase :: MonadLogger m => Persistent a -> m a

(highly simplified). In his application, he used this in the IO monad. He depended on persistent with proper lower and upper bounds. Once again, full PVP compliance.

Once I released version 0.3 of monad-logger, his next build automatically upgraded him to monad-logger 0.3, and suddenly his code broke, because there's no MonadLogger instance for IO.
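A self-contained reconstruction of that breakage might look like the following; the class, instance, and function here are invented stand-ins, not the real monad-logger/persistent APIs:

```haskell
-- "monad-logger-0.2" exports the class *and* an IO instance:
class Monad m => MonadLogger m where
  logMsg :: String -> m ()

instance MonadLogger IO where   -- this instance was removed in "0.3"
  logMsg = putStrLn

-- "persistent" needs only the class, so its PVP-correct bounds
-- (>= 0.2 && < 0.4) accept either monad-logger release:
runDatabase :: MonadLogger m => a -> m a
runDatabase x = logMsg "running query" >> return x

-- The user's program instantiates m ~ IO.  Against 0.2 this
-- compiles; once the solver picks 0.3 the IO instance is gone and
-- this line stops typechecking, with every package still fully
-- PVP-compliant.
main :: IO ()
main = runDatabase (42 :: Int) >>= print
-- prints "running query" then "42"
```

The point is that the instance is part of monad-logger's API but never appears in persistent's bounds, so no version constraint anywhere in the chain can express the user's real dependency on it.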

Now *if* the program had been using a system like "cabal freeze" or the like, this could never have happened: cabal wouldn't be trying to automatically upgrade to monad-logger 0.3.

Will this kind of bug happen all the time? No, I doubt it. But if the point of the PVP is to guarantee that builds will work (ignoring runtime concerns), and the PVP clearly fails at that job as well, we really need to reassess putting ourselves through this pain and suffering.

Michael


Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

John Lato-2
In reply to this post by Michael Snoyman
On Tue, Feb 25, 2014 at 9:25 PM, Michael Snoyman <[hidden email]> wrote:


On Wed, Feb 26, 2014 at 1:28 AM, MightyByte <[hidden email]> wrote:
On Tue, Feb 25, 2014 at 4:51 PM, Vincent Hanquez <[hidden email]> wrote:
>
> I'm not saying this is not painful, but i've done it in the past, and using
> dichotomy and educated guesses (for example not using libraries released
> after a certain date), you converge pretty quickly on a solution.
>
> But the bottom line is that it's not the common use case. I rarely have to
> dig old unused code.

And I have code that I would like to have working today, but it's too
expensive to go through this process.  The code has significant value
to me and other people, but not enough to justify the large cost of
getting it working again.



I think we need to make these cases more concrete to have a meaningful discussion. Between Doug and Gregory, I'm understanding two different use cases:

1. Existing, legacy code, built against some historical version of Hackage, without information on the exact versions of all deep dependencies.
2. Someone starting a new project who wants to use an older version of a package on Hackage.

If I've missed a use case, please describe it.

For (1), let's start with the time machine game: *if* everyone had been using the PVP, then theoretically this wouldn't have happened. And *if* the developers had followed proper practice and documented their complete build environment, then PVP compliance would be irrelevant. So if we could go back in time and twist people's arms, no problems would exist. Hurray, we've established that 20/20 hindsight is very nice :).

But what can be done today? Actually, I think the solution is a very simple tool, and I'll be happy to write it if people want: cabal-timemachine. It takes a timestamp, and then deletes all cabal files from our 00-index.tar file that represent packages uploaded after that date. Assuming you know the last date of a successful build, it should be trivial to get a build going again. And if you *don't* know the date, you can bisect until you get a working build. (For that matter, the tool could even *include* a bisector in it.) Can anyone picture a scenario where this wouldn't solve the problem even better than PVP compliance?

This scenario is never better than PVP compliance.  First of all, the user may want some packages that are newer than the timestamp, which this wouldn't support.  As people have already mentioned, it's entirely possible for valid install graphs to exist that cabal will fail to find if it doesn't have upper bound information available, because it finds other *invalid* graphs.

And even aside from that issue, this would push the work of making sure that a library is compatible with its dependencies onto the library *users*, instead of the developer, where it rightfully belongs (and your proposal ends up pushing even more work onto users!).

Why do you think it's acceptable for users to do the testing to make sure that your code works with other packages that your code requires?

For (2), talking about older versions of a package is not relevant. I actively maintain a number of my older package releases, as I'm sure others do as well. The issue isn't about *age* of a package, but about *maintenance* of a package. And we simply shouldn't be encouraging users to start off with an unmaintained version of a package. This is a completely separate discussion from the legacy code base, where, despite the valid security and bug concerns Vincent raised, it's likely not worth updating to the latest and greatest.

Usually the case is not that somebody *wants* to use an older version of package 'foo', it's that they're using some package 'bar' which hasn't yet been updated to be compatible with the latest 'foo'.  There are all sorts of reasons this may happen, including big API shifts (e.g. parsec2/parsec3, openGL), poor timing in a maintenance cycle, and the usual worldly distractions.  But if packages have upper bounds, the user can 'cabal install', get a coherent package graph, and begin working.  At the very worst, cabal will give them a clear lead as to what needs to be updated/who to ping.  This is much better than the situation with no upper bounds, where a 'cabal install' may fail miserably or even put together code that produces garbage.

And again, it's the library *user* who ends up having to deal with these problems.  Upper bounds lead to a better user experience.

All of that said, I still think the only real solution is getting end users off of Hackage. We need an intermediate, stabilizing layer. That's why I started Stackage, and I believe that it's the only solution that will ultimately make library authors and end-users happy. Everything we're discussing now is window dressing.

A curated ecosystem can certainly function, but it seems like a lot more work than just following the PVP and specifying upper bounds.  And upper bounds are likely to work better with packages that, for whatever reason, aren't in that curated ecosystem.


Re: qualified imports, PVP and so on (Was: add new Data.Bits.Bits(bitZero) method)

Bardur Arantsson-2
In reply to this post by Carter Schonwald
On 2014-02-25 18:26, Carter Schonwald wrote:
> indeed.
>
> So let's think about how to add module types or some approximation thereof
> to GHC? (Seriously, that's the only sane "best solution" I can think of, but
> it's not something that can be done casually.) There's also the fact that any
> module system design will have to deal with type class instances
> in a more explicit fashion than we've done thus far.

This may be relevant:

   http://plv.mpi-sws.org/backpack/

Regards,
