Hi,
I'm playing around with associated type synonyms (ATS) [1] and the PHRaC
interpreter, trying to model existing uses of FDs. I really enjoy working
with ATS, but I've come across a situation that I don't quite know how to
handle, or indeed if it can be handled at all.

The scene is Control.Monad.Writer from mtl, where we can find the
following definition (simplified for presentation):

  class (Monoid w, Monad m) => MonadWriter m w | m -> w where
    tell :: w -> m ()

The class MonadWriter has two type parameters, one for the monad itself
and the other for that which is written. The Monoid constraint on the
output is (I guess) there only as documentation of sorts, stating that
the intention is that the outputs of subsequent tells should be
combinable using mappend. A simple custom monad that just does output
could be written as:

  data MyWriterM w a = MyWriterM (a, [w])

  instance Monad (MyWriterM w) where
    return x = MyWriterM (x, [])
    MyWriterM (a, xs) >>= k = let MyWriterM (b, xs') = k a
                              in MyWriterM (b, xs ++ xs')

  instance MonadWriter (MyWriterM w) w where
    tell x = MyWriterM ((), [x])

Now, to model this using ATS we would write, ignoring the Monoid
constraint, the following class declaration*:

  class Monad m => MonadWriter m where
    type Output m
    tell :: Output m -> m ()

  instance MonadWriter (MyWriterM w) where
    type Output (MyWriterM w) = w
    tell x = MyWriterM ((), [x])

This works fine, but the obvious question is then, where to put back
the Monoid restriction? This time we want the type Output m to be
constrained, so my naive first guess was to write:

  class (Monad m, Monoid (Output m)) => MonadWriter m where ...

but phrac says "illegal context for class MonadWriter: only type
variables can be constrained". I can't really think of anywhere else to
put the restriction, other than making up an ad hoc class constraint
syntax for associated types, like

  class Monad m => MonadWriter m where
    type (Monoid) Output m
    ...

My first question is already stated: where do I put the Monoid
constraint?
But since I suspect the answer is "you don't", a number of followup
questions spring to mind:

- Why not? Apart from possibly having to use some ugly syntax for it,
  are there any fundamental reasons why this should not be allowed?
- When looking at the definition of MonadWriter, the Monoid constraint
  is not strictly necessary, and none of the other mtl monads have
  anything similar. Is it the assumption that this kind of constraint is
  never really necessary, and thus no support for it is needed for ATS?

Hope someone can shed some light on this matter :-)

Regards,

/Niklas

[1] http://www.cse.unsw.edu.au/~chak/papers/CKP05.html

* Using Haskell syntax instead of the PHRaC equivalent

_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe
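For readers following along today: the class context Niklas asks for can be written down directly in later versions of GHC. The sketch below is a modern-GHC rendering, not PHRaC; the pragma set (TypeFamilies, FlexibleContexts, UndecidableSuperClasses) is an assumption about what a current compiler needs, and runMyWriterM is a helper added here only for demonstration:

```haskell
{-# LANGUAGE TypeFamilies, FlexibleContexts, UndecidableSuperClasses #-}

-- The ATS-style class, with the Monoid constraint on the associated
-- type placed directly in the class context.
class (Monad m, Monoid (Output m)) => MonadWriter m where
  type Output m
  tell :: Output m -> m ()

newtype MyWriterM w a = MyWriterM (a, [w])

instance Functor (MyWriterM w) where
  fmap f (MyWriterM (a, xs)) = MyWriterM (f a, xs)

instance Applicative (MyWriterM w) where
  pure x = MyWriterM (x, [])
  MyWriterM (f, xs) <*> MyWriterM (a, ys) = MyWriterM (f a, xs ++ ys)

instance Monad (MyWriterM w) where
  MyWriterM (a, xs) >>= k =
    let MyWriterM (b, xs') = k a in MyWriterM (b, xs ++ xs')

-- The Monoid w context discharges the superclass obligation
-- Monoid (Output (MyWriterM w)).
instance Monoid w => MonadWriter (MyWriterM w) where
  type Output (MyWriterM w) = w
  tell x = MyWriterM ((), [x])

runMyWriterM :: MyWriterM w a -> (a, [w])
runMyWriterM (MyWriterM p) = p
```

Used as, e.g., `runMyWriterM (tell "a" >> tell "b")`, which yields `((), ["a","b"])`.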
On Fri, Feb 10, 2006 at 05:20:47PM +0100, Niklas Broberg wrote:
> - when looking at the definition of MonadWriter the Monoid constraint
> is not strictly necessary, and none of the other mtl monads have
> anything similar. Is it the assumption that this kind of constraint is
> never really necessary, and thus no support for it is needed for ATS?

I think that's right -- it's only needed for the Monad instance for
WriterT. But it is a convenience. In any instance of MonadWriter, the w
will be a monoid, as there'll be a WriterT in the stack somewhere, so
the Monoid constraint just saves people writing general functions with
MonadWriter from having to add it.
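Ross's "saves people writing general functions" point can be sketched concretely. The mini-class below is a cut-down stand-in for mtl's class, not mtl itself; the `(,) w` instance assumes base's `Monoid w => Monad ((,) w)` instance, and `tellAll` is a hypothetical helper:

```haskell
{-# LANGUAGE MultiParamTypeClasses, FunctionalDependencies, FlexibleInstances #-}

-- A cut-down MonadWriter in the mtl style of the time.
class (Monoid w, Monad m) => MonadWriter w m | m -> w where
  tell :: w -> m ()

-- base provides: instance Monoid w => Monad ((,) w)
instance Monoid w => MonadWriter w ((,) w) where
  tell w = (w, ())

-- The payoff of the class-level constraint: a generic function can use
-- mconcat on the output type without declaring Monoid itself.
tellAll :: MonadWriter w m => [w] -> m ()
tellAll = tell . mconcat
```

Without `Monoid w` in the class context, every such helper would have to carry its own `Monoid w =>`.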
On 2/10/06, Ross Paterson <[hidden email]> wrote:
> On Fri, Feb 10, 2006 at 05:20:47PM +0100, Niklas Broberg wrote:
> > - when looking at the definition of MonadWriter the Monoid constraint
> > is not strictly necessary, and none of the other mtl monads have
> > anything similar. Is it the assumption that this kind of constraint is
> > never really necessary, and thus no support for it is needed for ATS?
>
> I think that's right -- it's only needed for the Monad instance for
> WriterT. But it is a convenience. In any instance of MonadWriter, the
> w will be a monoid, as there'll be a WriterT in the stack somewhere,
> so the Monoid constraint just saves people writing general functions
> with MonadWriter from having to add it.

Sure it's a convenience, and thinking about it some more it would seem
like that is always the case - it is never crucial to constrain a
parameter. But then again, the same applies to the Monad m constraint;
we could give the same argument there (all actual instances will be
monads, hence...). So my other question still stands: why not allow
constraints on associated types as well, as a convenience?

Irrelevant to the discussion above, but I wonder whether the Monoid
constraint should really be there on MonadWriter. I could imagine lots
of interesting applications of writer monads that don't output a
monoid. An example would be a monad that iteratively computes a result
of better and better precision and "tells" the result of each step
until told to stop. In this case the merging would not be mappend but
would always throw away the old argument (flip const). But then again
you could always keep the results in a list and only use the last
element... :-)

/Niklas
On Fri, Feb 10, 2006 at 05:20:47PM +0100, Niklas Broberg wrote:
> class Monad m => MonadWriter m where
>     type Output m
>     tell :: Output m -> m ()
>
> instance MonadWriter (MyWriterM w) where
>     type Output (MyWriterM w) = w
>     tell x = MyWriterM ((), [x])
>
> This works fine, but the obvious question is then, where to put back
> the Monoid restriction? This time we want the type Output m to be
> constrained, so my naive first guess was to write:

Perhaps

> class Monad m => MonadWriter m where
>     type Output m
>     tell :: Monoid (Output m) => Output m -> m ()

? At least, this would let you do the same things, but it is not quite
equivalent. The main reason I can think of for the class-level
constraint is so that you can use the methods of the constraining class
(here Monoid) in default method definitions; putting the constraint on
each method like this would let you do that too. However, it means a
Monoid dictionary gets passed to each method rather than being included
in the MonadWriter dictionary, so it is not quite equivalent.

The Monoid constraint is actually important, so you can do things like:

> -- double the pleasure, double the fun.
> newtype DWriter m a = DWriter (m a)
>
> instance MonadWriter m => MonadWriter (DWriter m) where
>     tell x = DWriter $ tell (x `mappend` x)

which would not work without the Monoid constraint on 'tell', or again,
some sort of syntax to constrain the associated type on an instance
declaration. So perhaps we need this? I don't think the syntax you
proposed,

> instance (MonadWriter m, Monoid (Output m)) => MonadWriter (DWriter m) where
>     tell x = DWriter $ tell (x `mappend` x)

would be problematic, but I am not sure...

        John

--
John Meacham - ⑆repetae.net⑆john⑈
On Sun, Feb 12, 2006 at 09:32:14PM +0100, Niklas Broberg wrote:
> Irrelevant to the discussion above, but I wonder whether the Monoid
> constraint should really be there on MonadWriter. I could imagine lots
> of interesting applications of writer monads that don't output a
> monoid, an example would be a monad that iteratively computes a result
> of better and better precision and "tells" the result of each step
> until told to stop. In this case the merging would not be mappend but
> to always throw away the old argument (flip const).

You also need a value for return. Then the monad laws will ensure that
you have another monoid.
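Ross's observation can be made concrete: "always throw away the old argument", plus an identity element for return's empty output, is exactly the Last monoid that today's base ships in Data.Monoid (shown here with modern base as an illustration; `refine` is a made-up example):

```haskell
import Data.Monoid (Last(..))

-- Successive approximations: the monoid keeps only the newest value,
-- and Last Nothing (mempty) plays the role of "nothing told yet".
refine :: Last Double
refine = Last (Just 3.1) <> Last (Just 3.14) <> Last (Just 3.141)
-- getLast refine == Just 3.141
```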
Niklas Broberg <[hidden email]> wrote:
> On 2/10/06, Ross Paterson <[hidden email]> wrote:
>> On Fri, Feb 10, 2006 at 05:20:47PM +0100, Niklas Broberg wrote:
>> > - when looking at the definition of MonadWriter the Monoid constraint
>> > is not strictly necessary, and none of the other mtl monads have
>> > anything similar. Is it the assumption that this kind of constraint is
>> > never really necessary, and thus no support for it is needed for ATS?
>>
>> I think that's right -- it's only needed for the Monad instance for
>> WriterT. But it is a convenience. In any instance of MonadWriter, the
>> w will be a monoid, as there'll be a WriterT in the stack somewhere,
>> so the Monoid constraint just saves people writing general functions
>> with MonadWriter from having to add it.
>
> Sure it's a convenience, and thinking about it some more it would seem
> like that is always the case - it is never crucial to constrain a
> parameter. But then again, the same applies to the Monad m constraint,
> we could give the same argument there (all actual instances will be
> monads, hence...). So my other question still stands, why not allow
> constraints on associated types as well, as a convenience?

Manuel (Chakravarty) and I agree that it should be possible to constrain
associated type synonyms in the context of class definitions. Your
example shows that this feature is actually needed. I will integrate it
into phrac within the next few days.

Stefan
Stefan Wehr writes:
> Niklas Broberg <[hidden email]> wrote:
>
> > On 2/10/06, Ross Paterson <[hidden email]> wrote:
> >> On Fri, Feb 10, 2006 at 05:20:47PM +0100, Niklas Broberg wrote:
> >> > - when looking at the definition of MonadWriter the Monoid constraint
> >> > is not strictly necessary, and none of the other mtl monads have
> >> > anything similar. Is it the assumption that this kind of constraint is
> >> > never really necessary, and thus no support for it is needed for ATS?
> >>
> >> I think that's right -- it's only needed for the Monad instance for
> >> WriterT. But it is a convenience. In any instance of MonadWriter, the
> >> w will be a monoid, as there'll be a WriterT in the stack somewhere,
> >> so the Monoid constraint just saves people writing general functions
> >> with MonadWriter from having to add it.
> >
> > Sure it's a convenience, and thinking about it some more it would seem
> > like that is always the case - it is never crucial to constrain a
> > parameter. But then again, the same applies to the Monad m constraint,
> > we could give the same argument there (all actual instances will be
> > monads, hence...). So my other question still stands, why not allow
> > constraints on associated types as well, as a convenience?
>
> Manuel (Chakravarty) and I agree that it should be possible to
> constrain associated type synonyms in the context of class
> definitions. Your example shows that this feature is actually
> needed. I will integrate it into phrac within the next few days.

By possible you mean this extension won't break any of the existing ATS
inference results? You have to be very careful, otherwise you'll lose
decidability.

Something more controversial. Why ATS at all? Why not encode them via
FDs?
Example:

-- ATS
class C a where
    type T a
    m :: a -> T a

instance C Int where
    type T Int = Int
    m _ = 1

-- FD encoding
class T a b | a -> b
instance T Int Int

class C a where
    m :: T a b => a -> b

instance C Int where
    m _ = 1

-- general recipe:
--   encode type functions T a via type relations T a b
--   replace T a by a fresh b under the constraint T a b

The FD program won't type check under GHC, but this doesn't mean that
it's not a legal FD program. It's wrong to derive certain conclusions
about a language feature by observing the behavior of a particular
implementation!

Martin
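For reference, the ATS half of Martin's example is directly expressible in GHC's later type-families syntax (a sketch, assuming only the TypeFamilies extension):

```haskell
{-# LANGUAGE TypeFamilies #-}

-- Martin's ATS example, transcribed into GHC associated-type syntax.
class C a where
  type T a
  m :: a -> T a

instance C Int where
  type T Int = Int
  m _ = 1
```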
Martin Sulzmann wrote,
> By possible you mean this extension won't break any of the existing
> ATS inference results? You have to be very careful otherwise you'll
> lose decidability.

Can someone explain to me why decidability is of any practical interest
at all? What's the (practical) difference between a decision procedure
which might never terminate and one which might take 1,000,000 years to
terminate? Actually, why push it out to 1,000,000 years: in the context
of a compiler for a practical programming language, a decision procedure
which might take an hour to terminate might as well be undecidable ...
surely all we really need is that the decision procedure _typically_
terminates quickly, and that where it doesn't we have the means of
giving it hints which ensure that it will.

There's a very nice paper by George Boolos on this [1]: he gives an
example of an inference which is (and is intuitively) valid in
second-order logic, and which can be first-orderized ... but the proof
of the first-order equivalent is completely unfeasible.

[1] Boolos, G., A Curious Inference, Journal of Philosophical Logic, 16,
    1987. Also in Boolos, Logic, Logic and Logic, 1998. Useful citation
    here: http://www.nottingham.ac.uk/journals/analysis/preprints/KETLAND.pdf

Cheers,

Miles
> Can someone explain to me why decidability is of any practical interest
> at all? What's the (practical) difference between a decision procedure
> which might never terminate and one which might take 1,000,000 years to
> terminate? Actually, why push it out to 1,000,000 years: in the context
> of a compiler for a practical programming language, a decision
> procedure which might take an hour to terminate might as well be
> undecidable ... surely all we really need is that the decision
> procedure _typically_ terminates quickly, and that where it doesn't we
> have the means of giving it hints which ensure that it will.

I'm tempted to agree with you. I don't care if the compiler terminates
on all my programs, or if it sometimes quits and says: "I don't know if
this is correct." However, I do think it is very important that
programmers and compiler implementors know which programs are legal and
which are not.

If a problem is decidable, it has the nice property that the problem
(*not* the algorithm) can be used as a specification. Implementors are
free to implement different algorithms, as long as they all solve the
problem. If the problem is undecidable, how do you make sure that
different compilers accept the same programs? If you don't want to find
a subproblem that is decidable, you'll have to specify an algorithm,
which is usually far more complicated, error-prone, and difficult to
grasp for programmers.

Cheers,

Andres
Andres Loeh wrote,
> If a problem is decidable, it has the nice property that the problem
> (*not* the algorithm) can be used as a specification. Implementors
> are free to implement different algorithms, as long as they all solve
> the problem. If the problem is undecidable, how do you make sure that
> different compilers accept the same programs? If you don't want to
> find a subproblem that is decidable, you'll have to specify an
> algorithm, which is usually far more complicated, error-prone, and
> difficult to grasp for programmers.

Again, I'm not sure that decidability helps practically here: we're not
interested in "compiler A and compiler B accept the same programs";
we're interested in "compiler A and compiler B accept some well defined
subset of possible programs in a reasonable amount of time and space".

Cheers,

Miles
On Thu, Feb 16, 2006 at 12:45:03PM +0000, Miles Sabin wrote:
> Andres Loeh wrote,
> > If a problem is decidable, it has the nice property that the problem
> > (*not* the algorithm) can be used as a specification. Implementors
> > are free to implement different algorithms, as long as they all solve
> > the problem. If the problem is undecidable, how do you make sure that
> > different compilers accept the same programs? If you don't want to
> > find a subproblem that is decidable, you'll have to specify an
> > algorithm, which is usually far more complicated, error-prone, and
> > difficult to grasp for programmers.
>
> Again, I'm not sure that decidability helps practically here: we're not
> interested in "compiler A and compiler B accept the same programs",
> we're interested in "compiler A and compiler B accept some well defined
> subset of possible programs in a reasonable amount of time and space".

But it seems that well defining that subset is equivalent to the problem
of finding a suitable decidable algorithm.

        John

--
John Meacham - ⑆repetae.net⑆john⑈
John Meacham wrote,
> > Again, I'm not sure that decidability helps practically here: we're
> > not interested in "compiler A and compiler B accept the same
> > programs", we're interested in "compiler A and compiler B accept
> > some well defined subset of possible programs in a reasonable
> > amount of time and space".
>
> But it seems that well defining that subset is equivalent to the
> problem of finding a suitable decidable algorithm.

Fair comment. I was (unintentionally) playing fast and loose with
terminology there. What I really meant by "well defined" in the above
was much weaker than the context implied. I meant something more like
"operationally useful and agreed on", in the way that most informal or
semi-formal specs are. That's probably going to seem unsatisfactory to a
lot of the readers of this list, and I guess I could reasonably be
accused of begging Andres' question.

But I stand by my original point. What I should have said in response to
Andres is that it's all very well, but most (or at least a very large
proportion of) interesting problems are undecidable, so for those at
least the benefits of a decidable specification are moot.

Expressive type systems, and programming languages in general, are a
Good Thing, and I'm prepared to trade decidability for expressive power.
Having a well understood and decidable fragment of such systems seems
like a Good Thing too; it's just that I don't see many circumstances
under which I'd need or want to restrict myself to working in that
fragment.

Cheers,

Miles
Martin Sulzmann <[hidden email]> wrote:
> Stefan Wehr writes:
> > [...]
> > Manuel (Chakravarty) and I agree that it should be possible to
> > constrain associated type synonyms in the context of class
> > definitions. Your example shows that this feature is actually
> > needed. I will integrate it into phrac within the next few days.
>
> By possible you mean this extension won't break any
> of the existing ATS inference results?

Yes, although we didn't go through all the proofs.

> You have to be very careful otherwise you'll lose decidability.

Do you have something concrete in mind, or is this more general advice?

Stefan
> Something more controversial.
> Why ATS at all? Why not encode them via FDs?

Funny you should say that, just when I've been thinking about the same
thing. That doesn't mean that ATS aren't a nice way to describe some
special cases of FDs, but my feeling is that if ATS can't be encoded in
FDs, then there is something wrong with _current_ FD versions that
should be fixed. I'd love to hear the experts' opinions about this
claim!-)

The main argument for ATS is that the extra parameter for the
functionally dependent type disappears, but as you say, that should be
codeable in FDs. I say should be, because that does not seem to be the
case at the moment. My approach for trying the encoding was slightly
different from yours, but also ran into trouble with implementations.

First, I think you need a per-class association, so your T a b would be
specific to C. Second, I'd use a superclass context to model the
necessity of providing an associated type, and instance contexts to
model the provision of such a type. No big difference, but it seems
closer to the intention of ATS: associated types translate into type
association constraints.

(A lot like calling an auxiliary function with an empty accumulator, to
hide the extra parameter from the external interface.)

> Example
>
> -- ATS
> class C a where
>     type T a
>     m :: a -> T a
> instance C Int where
>     type T Int = Int
>     m _ = 1

-- alternative FD encoding attempt
class CT a b | a -> b
instance CT Int Int

class CT a b => C a where
    m :: a -> b

instance CT Int b => C Int where
    m _ = 1 :: b

> -- FD encoding
> class T a b | a->b
> instance T Int Int
>
> class C a where
>     m :: T a b => a->b
>
> instance C Int where
>     m _ = 1
>
> -- general recipe:
> --   encode type functions T a via type relations T a b
> --   replace T a by a fresh b under the constraint T a b

Referring to the associated type seems slightly awkward in these
encodings, so the special syntax for ATS would still be useful, but I
agree that an encoding into FDs should be possible.
> The FD program won't type check under ghc but this
> doesn't mean that it's not a legal FD program.

Glad to hear you say that. But is there a consistent version of FDs that
allows these things - and if not, is that for lack of trying or because
it is known not to work?

Cheers,
Claus

> It's wrong to derive certain conclusions
> about a language feature by observing the behavior
> of a particular implementation!
>
> Martin
On Fri, Feb 17, 2006 at 01:26:18PM +0000, Stefan Wehr wrote:
> Martin Sulzmann <[hidden email]> wrote:
> > By possible you mean this extension won't break any
> > of the existing ATS inference results?
>
> Yes, although we didn't go through all the proofs.
>
> > You have to be very careful otherwise you'll lose decidability.

The paper doesn't claim a proof of decidability (or principal types),
but conjectures that it will go through.

Apropos of that, I tried translating the non-terminating FD example from
the FD-CHR paper (ex. 6) to associated type synonyms (simplified a bit):

    data T a = K a

    class C a where
        type S a
        r :: a -> S a

    instance C a => C (T a) where
        type S (T a) = T (S a)
        r (K x) = K (r x)

    f b x = if b then r (K x) else x

Phrac infers

    f :: forall a . (S (T a) = a, C a) => Bool -> a -> T (S a)

The constraint is reducible (ad infinitum), but Phrac defers constraint
reduction until it is forced (as GHC does with ordinary instance
inference). We can try to force it using the MR, by changing the
definition of f to

    f = \ b x -> if b then r (K x) else x

For this to be legal, the constraint must be provable. In the
corresponding FD case, this sends GHC down the infinite chain of
reductions, but Phrac just gives up and complains about deferred
constraints being left over after type inference. I don't think this is
right either, as in other cases the constraint will reduce away to
nothing.
Stefan Wehr <[hidden email]> wrote:
> Niklas Broberg <[hidden email]> wrote:
>
>> On 2/10/06, Ross Paterson <[hidden email]> wrote:
>>> On Fri, Feb 10, 2006 at 05:20:47PM +0100, Niklas Broberg wrote:
>>> > - when looking at the definition of MonadWriter the Monoid constraint
>>> > is not strictly necessary, and none of the other mtl monads have
>>> > anything similar. Is it the assumption that this kind of constraint is
>>> > never really necessary, and thus no support for it is needed for ATS?
>>>
>>> I think that's right -- it's only needed for the Monad instance for
>>> WriterT. But it is a convenience. In any instance of MonadWriter, the
>>> w will be a monoid, as there'll be a WriterT in the stack somewhere,
>>> so the Monoid constraint just saves people writing general functions
>>> with MonadWriter from having to add it.
>>
>> Sure it's a convenience, and thinking about it some more it would seem
>> like that is always the case - it is never crucial to constrain a
>> parameter. But then again, the same applies to the Monad m constraint,
>> we could give the same argument there (all actual instances will be
>> monads, hence...). So my other question still stands, why not allow
>> constraints on associated types as well, as a convenience?
>
> Manuel (Chakravarty) and I agree that it should be possible to
> constrain associated type synonyms in the context of class
> definitions. Your example shows that this feature is actually
> needed. I will integrate it into phrac within the next few days.

I have now implemented this feature in PHRaC. You need to get PHRaC via
darcs:

    http://www.cse.unsw.edu.au/~pls/repos/phrac/

An example is included in PHRaC's source code under
tests/constrained-assoc-types/should_pass/000.phc. I haven't tested the
extension extensively. Just tell me if you have any problems.

Stefan
Stefan Wehr writes:
> Martin Sulzmann <[hidden email]> wrote:
>
> > Stefan Wehr writes:
> > > [...]
> > > Manuel (Chakravarty) and I agree that it should be possible to
> > > constrain associated type synonyms in the context of class
> > > definitions. Your example shows that this feature is actually
> > > needed. I will integrate it into phrac within the next few days.
> >
> > By possible you mean this extension won't break any
> > of the existing ATS inference results?
>
> Yes, although we didn't go through all the proofs.
>
> > You have to be very careful otherwise you'll lose decidability.
>
> Do you have something concrete in mind or is this more general advice?

I'm afraid I think there's a real issue. Here's the AT version of
Example 15 from "Understanding FDs via CHRs":

    class D a
    class F a where
        type T a
    instance F [a] where
        type T [a] = [[a]]
    instance (D (T a), F a) => D [a]
                ^^^^^^^
                type function appears in the instance context

Type inference (i.e. constraint solving) for D [a] will not terminate.
Roughly:

    D [[a]] -->_instance      D (T [a]), F [a]
            -->_type function D [[a]], F [a]

and so on.

Will this also happen if type functions appear in superclasses? Let's
see. Consider:

    class C a
    class F a where
        type T a
    instance F [a] where
        type T [a] = [[[a]]]
    class C (T a) => D a
            ^^^^^
            type function appears in the superclass context
    instance D [a] => C [[a]]
        -- satisfies Ross Paterson's termination conditions

Consider:

    D [a] -->_superclass    C (T [a]), D [a]
          -->_type function C [[[a]]], D [a]
          -->_instance      D [[a]], D [a]

and so on.

My point:

- The type functions are obviously terminating; e.g.
  type T [a] = [[a]] clearly terminates.
- It's the devious interaction between instances/superclasses and type
  functions which causes the type class program not to terminate.

Is there a possible fix? Here's a guess.
For each type definition in the AT case,

    type T t1 = t2

(or improvement rule in the FD case,

    rule T t1 a ==> a = t2

BTW, any sufficient restriction which applies to the FD case can be
lifted to the AT case and vice versa!) we demand that the number of
constructors in t2 is strictly smaller than the number in t1 (plus some
of the other usual conditions).

Then

    type T [a] = [[a]]

although terminating, is not legal anymore. Then there might be some
hope to recover termination (I've briefly sketched a proof, and
termination may indeed hold, but I'm not 100% convinced yet).

Martin
Claus Reinke writes:
> The main argument for ATS is that the extra parameter for the
> functionally dependent type disappears, but as you say, that
> should be codeable in FDs. I say should be, because that does
> not seem to be the case at the moment.
>
> My approach for trying the encoding was slightly different from
> yours, but also ran into trouble with implementations.
>
> First, I think you need a per-class association, so your T a b
> would be specific to C. Second, I'd use a superclass context
> to model the necessity of providing an associated type, and
> instance contexts to model the provision of such a type. No
> big difference, but it seems closer to the intention of ATS:
> associated types translate into type association constraints.
>
> (a lot like calling an auxiliary function with an empty accumulator,
> to hide the extra parameter from the external interface)
>
> > Example
> >
> > -- ATS
> > class C a where
> >     type T a
> >     m :: a -> T a
> > instance C Int where
> >     type T Int = Int
> >     m _ = 1
>
> -- alternative FD encoding attempt
> class CT a b | a -> b
> instance CT Int Int
>
> class CT a b => C a where
>     m :: a -> b
>
> instance CT Int b => C Int where
>     m _ = 1 :: b

Hm, I haven't thought about this. Two comments.

You use scoped type variables in class declarations. Are they available
in GHC? And how do you encode

    -- AT
    instance C a => C [a] where
        type T [a] = [T a]
        m xs = map m xs

Via the following, I guess?

    instance CT a b => CT [a] [b]
    instance C a => C [a] where
        m xs = map m xs

It seems your solution won't suffer from the problem I face. See below.

> > -- FD encoding
> > class T a b | a->b
> > instance T Int Int
> >
> > class C a where
> >     m :: T a b => a->b
> >
> > instance C Int where
> >     m _ = 1
> >
> > -- general recipe:
> > --   encode type functions T a via type relations T a b
> > --   replace T a by a fresh b under the constraint T a b

My AT encoding won't work with GHC/Hugs because the class declaration of
C demands that the output type b is universal.
Though, in the declaration

    instance C Int

we return an Int. Hence, the above cannot be translated to System F.
Things would be different if we'd translate to an untyped back-end.

> referring to the associated type seems slightly awkward
> in these encodings, so the special syntax for ATS would
> still be useful, but I agree that an encoding into FDs should
> be possible.
>
> > The FD program won't type check under ghc but this
> > doesn't mean that it's not a legal FD program.
>
> glad to hear you say that. but is there a consistent version
> of FDs that allows these things - and if not, is that for lack
> of trying or because it is known not to work?

The FD framework in "Understanding FDs via CHRs" clearly subsumes ATs
(based on my brute-force encoding). My previous email showed that type
inference for FDs and ATs can be equally tricky.

Though, why ATs? Well, ATs are still *very useful* because they help to
structure programs (they avoid redundant parameters). On the other hand,
FDs provide the user with the convenience of 'automatic' improvement.

Let's do a little test. Who can translate the following FD program to
ATs?

    zip2 :: [a] -> [b] -> [(a,b)]
    zip2 (a:as) (b:bs) = (a,b) : zip2 as bs
    zip2 _ _ = []

    class Zip a b c | c -> a, c -> b where
        mzip :: [a] -> [b] -> c

    instance Zip a b [(a,b)] where
        mzip = zip2

    instance Zip (a,b) c e => Zip a b ([c]->e) where
        mzip as bs cs = mzip (zip2 as bs) cs

Martin
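For what it's worth, here is one candidate translation, written in modern GHC type-families syntax rather than PHRaC's; the `Fst`/`Snd` helper families and the equality constraints in the second instance are additions of mine, and this is a sketch, not a claim about what the ATS systems of the time accepted:

```haskell
{-# LANGUAGE TypeFamilies, FlexibleContexts, FlexibleInstances,
             UndecidableInstances #-}

zip2 :: [a] -> [b] -> [(a, b)]
zip2 (a:as) (b:bs) = (a, b) : zip2 as bs
zip2 _      _      = []

-- Helper families to project the components of a paired element type.
type family Fst p where Fst (a, b) = a
type family Snd p where Snd (a, b) = b

-- The FD class Zip a b c | c -> a, c -> b becomes a single-parameter
-- class whose element types are associated types of the result c.
class Zip c where
  type ZipA c
  type ZipB c
  mzip :: [ZipA c] -> [ZipB c] -> c

instance Zip [(a, b)] where
  type ZipA [(a, b)] = a
  type ZipB [(a, b)] = b
  mzip = zip2

-- Zip (a,b) c e => Zip a b ([c] -> e): the equality constraints name
-- the components of e's element type so the body typechecks.
instance (Zip e, ZipA e ~ (a, b), ZipB e ~ c) => Zip ([c] -> e) where
  type ZipA ([c] -> e) = Fst (ZipA e)
  type ZipB ([c] -> e) = Snd (ZipA e)
  mzip as bs cs = mzip (zip2 as bs) cs
```

Note how the FD version's "automatic improvement" shows up here as explicit equality constraints in the instance context.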
In reply to this post by Martin Sulzmann
Martin Sulzmann wrote:
> - The type functions are obviously terminating, e.g.
>   type T [a] = [[a]] clearly terminates.
> - It's the devious interaction between instances/superclasses
>   and type functions which causes the type class program
>   not to terminate.
>
> Is there a possible fix? Here's a guess.
> For each type definition in the AT case
>
>   type T t1 = t2
>
> (or improvement rule in the FD case
>
>   rule T1 t1 a ==> a=t2)
>
> we demand that the number of constructors in t2
> is strictly smaller than that in t1
> (plus some of the other usual conditions).

I'm afraid that may still be insufficient, as the following
counter-example shows. It causes GHC 6.4.1 to loop in the
typechecking phase. I haven't checked the latest GHC. The
example corresponds to a type function (realized as a class E
with functional dependencies) in the context of an instance.
The function in question is

  class E m a b | m a -> b
  instance E m (() -> ()) (m ())

We see that the result of the function, "m ()", is smaller (in
the number of constructors) than the function's arguments, "m"
and "() -> ()", together. Plus any type variable free in the
result is also free in at least one of the arguments. And yet
it loops.

{-# OPTIONS -fglasgow-exts #-}
-- Note the absence of the flag -fallow-undecidable-instances

module F where

class Foo m a where
    foo :: m b -> a -> Bool

instance Foo m () where
    foo _ _ = True

instance (E m a b, Foo m b) => Foo m (a -> ()) where
    foo m f = undefined

class E m a b | m a -> b where
    tr :: m c -> a -> b

-- There is only one instance of the class with functional dependencies
instance E m (() -> ()) (m ()) where
    tr x = undefined

-- GHC(i) loops

test = foo (\f -> (f ()) :: ()) (\f -> (f ()) :: ())
This is not a counter-example to my conjecture. Let's consider
the general case (which I didn't describe in my earlier email).
In case we have an n-ary type function T (or an (n+1)-ary type
class constraint T), the condition says: for each

  type T t1 ... tn = t

(or rule T t1 ... tn x ==> x=t)

we demand that rank(ti) > rank(t) for each i=1,..,n.

Your example violates this condition:

> class E m a b | m a -> b
> instance E m (() -> ()) (m ())

The improvement rule says

  rule E m (() -> ()) x ==> x=(m ())

but rank m < rank (m ()).

Your example shows that the condition

  rank(t1) + ... + rank(tn) > rank(t)

is not sufficient (but that's not a surprise). The program text

> test = foo (\f -> (f ()) :: ()) (\f -> (f ()) :: ())

gives rise to

  Foo ((->) (() -> ())) ((() -> ()) -> ())

Via

> instance (E m a b, Foo m b) => Foo m (a -> ()) where

this constraint reduces to

  E ((->) (() -> ())) (() -> ()) x
  Foo ((->) (() -> ())) x

The above improvement yields

  x = ((->) (() -> ())) ()

This leads to

  Foo ((->) (() -> ())) (((->) (() -> ())) ())

and so on (the second component keeps growing). So, I'll stick
to my claim.

I don't think I have time at the moment to work out the details
of my claim/proof sketch. But if somebody is interested, the
following is a good reference on how to attack the problem:

@inproceedings{thom-term,
  author    = "T. Fr{\"u}hwirth",
  title     = "Proving Termination of Constraint Solver Programs",
  booktitle = "Proc.\ of New Trends in Constraints:
               Joint {ERCIM/Compulog} Net Workshop",
  volume    = "1865",
  series    = "LNAI",
  publisher = "Springer-Verlag",
  year      = "2000"
}

Martin

[hidden email] writes:
> Martin Sulzmann wrote:
> > - The type functions are obviously terminating, e.g.
> >   type T [a] = [[a]] clearly terminates.
> [...]
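[Editor's note: Martin's counting argument can be made concrete with a small, hypothetical checker. Everything here — the Ty representation and the names rank, okDef, rejected — is my own illustrative invention, not from the thread or the cited papers. It shows that Oleg's instance is rejected by the per-argument condition even though the *summed* ranks of the arguments exceed the rank of the result.]

```haskell
-- Hypothetical mini-checker for the rank condition.
data Ty = Var String      -- a type variable, rank 0
        | Con String [Ty] -- a constructor applied to arguments
        deriving Show

-- rank t = number of constructors occurring in t
rank :: Ty -> Int
rank (Var _)    = 0
rank (Con _ ts) = 1 + sum (map rank ts)

-- Martin's condition for  type T t1 ... tn = t  (or the
-- corresponding improvement rule): rank ti > rank t must hold
-- for *each* argument ti, not just for their sum.
okDef :: [Ty] -> Ty -> Bool
okDef args rhs = all (\t -> rank t > rank rhs) args

unit, arr :: Ty
unit = Con "()" []
arr  = Con "->" [unit, unit]  -- the type () -> (), rank 3

-- Oleg's instance  E m (() -> ()) (m ()):
-- arguments m and () -> (), result m () (an application, rank 2).
rejected :: Bool
rejected = okDef [Var "m", arr] (Con "App" [Var "m", unit])
-- rank m = 0 is not > 2, so okDef yields False: the per-argument
-- condition correctly rejects the instance, even though the sum
-- of the argument ranks (0 + 3) exceeds the result rank 2.

main :: IO ()
main = print rejected  -- prints False
```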