Hi,
I'm fairly new to Haskell and recently came across some programming tricks for reducing monadic overhead, and am wondering what higher-level concepts they map to. It would be great to get some pointers to related work.

Background: I'm a graduate student whose research interests include methods for implementing domain-specific languages. Recently, I have been trying to get more familiar with Haskell and with implementing DSLs in it. I'm coming from a fair bit of experience in Python, so I know the basics of functional programming. However, I'm much less familiar with Haskell. In particular, I have little to no internal map from existing DSL implementation techniques to the Haskell extensions that would no doubt make DSL implementations easier (and to when they are *not* needed). I also don't have a complete picture of the functional programming research that would inform these techniques. I would greatly appreciate pointers to the appropriate references so I can really get going on this.

Specifically: There are some DSLs that can be largely expressed as monads, that inherently play nicely with expressions on non-monadic values. We'd like to use the functions that already work on the non-monadic values for monadic values without calls to liftM all over the place. The probability monad is a good example.

import Control.Monad
import Data.List

newtype Prob a = Prob { getDist :: [(a, Float)] }
    deriving (Eq, Show)

multiply :: Prob (Prob a) -> Prob a
multiply (Prob xs) = Prob $ concat $ map multAll xs
    where multAll (Prob innerxs, p) = map (\(x, r) -> (x, p * r)) innerxs

instance Functor Prob where
    fmap f (Prob xs) = Prob $ map (\(x, p) -> (f x, p)) xs

instance Monad Prob where
    return x = Prob [(x, 1.0)]
    x >>= f = multiply (fmap f x)

In this monad, >>= hides the multiplying-out of conditional probabilities that happens during the composition of a random variable with a conditional distribution.

coin x = Prob [(1, x), (0, 1 - x)]

test = do
    x <- coin 0.5
    y <- coin 0.5
    return $ x + y

*Main> test
Prob {getDist = [(2,0.25),(1,0.25),(1,0.25),(0,0.25)]}

We can use a 'sum out' function to get more useful results:

sumOut :: (Ord a) => Prob a -> Prob a
sumOut (Prob xs) = Prob $ map (\kvs -> foldr1 sumTwoPoints kvs) eqValues
    where eqValues = groupBy (\x y -> fst x == fst y) $ sortBy compare xs
          sumTwoPoints (v1, p1) (v2, p2) = (v1, p1 + p2)

*Main> sumOut test
Prob {getDist = [(0,0.25),(1,0.5),(2,0.25)]}

I'm interested in shortening the description of 'test', as it is really just a 'formal addition' of random variables. One can use liftM for that:

test = liftM2 (+) (coin 0.5) (coin 0.5)

It seems what I'm leading into here is making functions on ordinary values polymorphic over their monadic versions; I think this is the desire for 'autolifting' or 'monadification' that has been mentioned in work such as HaRe:

http://www.haskell.org/pipermail/haskell/2005-March/015557.html

One alternate way of doing this, however, is instancing the typeclasses of the ordinary values with their monadic versions:

instance (Num a) => Num (Prob a) where
    (+) = liftM2 (+)
    (*) = liftM2 (*)
    abs = liftM abs
    signum = liftM signum
    fromInteger = return . fromInteger

instance (Fractional a) => Fractional (Prob a) where
    fromRational = return . fromRational
    (/) = liftM2 (/)

Note that already, even though each function in the typeclass had to be lifted manually, this eliminates more overhead than lifting every function used, because any function with a general enough type bound can work with both monadic and non-monadic values, not just the functions in the typeclass:

*Main> sumOut $ (coin 0.5) + (coin 0.5) + (coin 0.5)
Prob {getDist = [(0,0.125),(1,0.375),(2,0.375),(3,0.125)]}
*Main> let foo x y z = (x + y) * z
*Main> sumOut $ foo (coin 0.5) (coin 0.5) (coin 0.5)
Prob {getDist = [(0,0.625),(1,0.25),(2,0.125)]}

Because of the implementation of fromInteger as return . fromInteger, we also 'luck out' and have the ability to mix ordinary and monadic values in the same expression:

*Main> 1 + coin 0.5 / 2
Prob {getDist = [(1.5,0.5),(1.0,0.5)]}

My question is: what are the higher-level concepts at play here? The motivation is that it should be possible to do this typeclass instancing automatically, letting us get the benefits of concise monadic expressions without manually instancing the typeclasses. Indeed, I didn't have this idea in Haskell; I'm coming from Python, where one can realize the automatic instances: if we take the view that classes in Python are combined datatypes and instanced typeclasses, we can use the meta-object protocol to look inside one class's representation and output another class with liftM-ed (or return.-ed) methods and a custom constructor. I realize Template Haskell gives you the ability to reify instance/class declarations, but perhaps there's a less heavyweight way (or there should be).

I think a good question as a starting point is whether it's possible to do this 'monadic instance transformation' for any typeclass, and whether or not we were lucky to have been able to instance Num so easily (as Num, Fractional can just be seen as algebras over some base type plus a coercion function, making them unusually easy to lift if most typeclasses actually don't fit this description).

In general, what this seems to be is a transformation on functions that also depends explicitly on type signatures. For example in Num:

class (Eq a, Show a) => Num a where
    (+), (-), (*) :: a -> a -> a
    negate, abs, signum :: a -> a
    fromInteger :: Integer -> a

instance (Num a) => Num (Prob a) where
    (+) = liftM2 (+)          -- Prob a -> Prob a -> Prob a
    (*) = liftM2 (*)          -- Prob a -> Prob a -> Prob a
    abs = liftM abs           -- Prob a -> Prob a
    signum = liftM signum     -- Prob a -> Prob a
    fromInteger = return . fromInteger  -- Integer -> Prob a

Note that if we consider this in a 'monadification' context, where we are making some choice for each lifted function, treating it as entering, exiting, or computing in the monad, instancing the typeclass leads to very few choices for each: the monadic versions of (+), (-), (*) must be obtained with "liftM2", the monadic versions of negate, abs, signum must be obtained with "liftM", and the monadic version of fromInteger must be obtained with "return .".

I think this is what we're doing in general: if we have a typeclass C with a type variable a, with some set of type signatures in which 'a' appears bound, we can write "instance C (M a)" for some monad M if there is some way to realize the resulting set of type signatures where every bound occurrence of 'a' is replaced with 'M a'. The following script illustrates precisely what I mean by this:

data Typ = Gr String      -- Irreducible grounded type, like Int or Bool
         | Con String Typ -- Type constructor applied to a type of kind *, e.g. IO String, Maybe (Prob Int)
         | Arr Typ Typ    -- Function T1 -> T2
         | Tup (Typ, Typ) -- Tuple (T1, T2)
    deriving Eq

instance Show Typ where
    show (Gr a) = a
    show (Con m a) = "(" ++ m ++ " " ++ show a ++ ")"
    show (Arr a b) = "(" ++ show a ++ " -> " ++ show b ++ ")"
    show (Tup (x, y)) = "(" ++ show x ++ ", " ++ show y ++ ")"

-- let a = b in expr, for our type signature calculus
typlet :: Typ -> Typ -> Typ -> Typ
typlet a b expr = case a == expr of
    False -> typletdown a b expr
    True -> b

typletdown a b (Tup (x, y)) = Tup (typlet a b x, typlet a b y)
typletdown a b (Arr x y) = Arr (typlet a b x) (typlet a b y)
typletdown a b (Con str c) = Con str (typlet a b c)
typletdown a b (Gr s) = Gr s

monadify a b expr = typlet a (Con b a) expr

-- class declaration: the free type variable, with a list of signatures
type ClassDecl = (Typ, [Typ])

-- monadify class signatures
monadic_sigs :: ClassDecl -> String -> ClassDecl
monadic_sigs (btyp, sigs) it = (Con it btyp, map (\sig -> monadify btyp it sig) sigs)

-- Num
mkNum = (Gr "a",
        [ (Gr "a") `Arr` ((Gr "a") `Arr` (Gr "a")) -- (+), (-), (*)
        , (Gr "a") `Arr` (Gr "a")                  -- abs, signum, negate
        , (Gr "Integer") `Arr` (Gr "a")            -- fromInteger
        ])

main = do
    let testexpr = (Gr "a") `Arr` ((Gr "a") `Arr` (Gr "a"))
    print "Sample type signature:"
    print testexpr
    print "Monadified:"
    print $ monadify (Gr "a") "Prob" testexpr
    print "Num typeclass signatures:"
    print mkNum
    print "Signatures of the functions needed for instance (Num a) => Num (Prob a):"
    print $ monadic_sigs mkNum "Prob"

I suppose I'm basically suggesting that the 'next step' is to somehow do this calculation of types on real type values, and use an inductive programming tool like Djinn to realize the type signatures. I think the general programming technique this is getting at is an orthogonal version of LISP style where one goes back and forth between types and functions, rather than data and code. I would also appreciate any pointers to works in that area.

Thanks for any leads,

Lingfeng Yang
lyang at cs dot stanford dot edu

_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe
On Nov 15, 2010, at 9:43 AM, Ling Yang wrote:

> Specifically: There are some DSLs that can be largely expressed as monads,

This, to me, is a big hint that applicative functors could be useful. Every monad is an applicative functor. Given a monad instance for F, you can do:

instance Applicative F where
    pure = return
    (<*>) = ap

<$> is an alias of fmap. <*> can be interpreted as a kind of "lifting" product operator (examine the types as you learn it; the notation will become transparent once you "get it"). So you write expressions like:

data F a = F a -- We'll assume F is instantiated as a monad
data Foo = Foo Int Int Int

foo :: F Foo
foo = Foo <$> monad_action_that_returns_an_int_for_your_first_argument
          <*> monad_action_that_returns_an_int_for_your_second_argument
          <*> monad_action_that_etc

Your test

test = liftM2 (+) (coin 0.5) (coin 0.5)

translates to:

test = (+) <$> (coin 0.5) <*> (coin 0.5)

You can't express a test in six or more arguments this way (the liftM family stops at liftM5), but it's easy with <$> and <*>:

test = Five <$> one <*> two <*> three <*> four <*> five
On Mon, Nov 15, 2010 at 12:43 PM, Ling Yang <[hidden email]> wrote:
> Specifically: There are some DSLs that can be largely expressed as monads,
> that inherently play nicely with expressions on non-monadic values.
> We'd like to use the functions that already work on the non-monadic
> values for monadic values without calls to liftM all over the place.

It's worth noting that using liftM is possibly the worst possible way to do this, aesthetically speaking. To start with, liftM is just fmap with a gratuitous Monad constraint added on top. Any instance of Monad can (and should) also be an instance of Functor, and if the instances aren't buggy, then liftM f = (>>= return . f) = fmap f.

Additionally, in many cases readability is improved by using (<$>), an operator synonym for fmap, found in Control.Applicative, I believe.

> The probability monad is a good example.

[snip]

> I'm interested in shortening the description of 'test', as it is
> really just a 'formal addition' of random variables. One can use liftM
> for that:
>
> test = liftM2 (+) (coin 0.5) (coin 0.5)

Also on the subject of Control.Applicative, note that independent probabilities like this don't actually require a monad, merely the ability to lift currying into the underlying functor, which is what Applicative provides. The operator ((<*>) :: f (a -> b) -> f a -> f b) is convenient for writing such expressions, e.g.:

test = (+) <$> coin 0.5 <*> coin 0.5

Monads are only required for lifting control flow into the functor, which in this case amounts to conditional probability. You would not, for example, be able to easily use simple lifted functions to write "roll a 6-sided die, flip a coin as many times as the die shows, then count how many flips were heads".
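[Editor's note: the die-then-coins example above genuinely needs bind, because the number of flips depends on an earlier random result. A runnable sketch using the Prob monad from the original post, with the Applicative instance that modern GHC requires; the names die and headsCount are made up for this illustration:]

```haskell
import Control.Monad (replicateM)

newtype Prob a = Prob { getDist :: [(a, Float)] } deriving Show

instance Functor Prob where
  fmap f (Prob xs) = Prob [ (f x, p) | (x, p) <- xs ]

instance Applicative Prob where
  pure x = Prob [(x, 1.0)]
  Prob fs <*> Prob xs = Prob [ (f x, p * q) | (f, p) <- fs, (x, q) <- xs ]

instance Monad Prob where
  Prob xs >>= f = Prob [ (y, p * q) | (x, p) <- xs, (y, q) <- getDist (f x) ]

coin :: Float -> Prob Int
coin p = Prob [(1, p), (0, 1 - p)]

die :: Prob Int
die = Prob [ (n, 1 / 6) | n <- [1 .. 6] ]

-- Roll a die, flip that many coins, count the heads.
-- The number of flips depends on the value n bound earlier,
-- which is exactly what Applicative alone cannot express.
headsCount :: Prob Int
headsCount = do
  n     <- die
  flips <- replicateM n (coin 0.5)
  return (sum flips)
```

The probabilities in headsCount still sum to 1 (up to Float rounding), since each die branch carries a full coin-flip distribution.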
> I think a good question as a starting point is whether it's possible
> to do this 'monadic instance transformation' for any typeclass, and
> whether or not we were lucky to have been able to instance Num so
> easily (as Num, Fractional can just be seen as algebras over some base
> type plus a coercion function, making them unusually easy to lift if
> most typeclasses actually don't fit this description).

Part of the reason Num was so easy is that all the functions produce values whose type is the class parameter. Your Num instance could almost be completely generic for any ((Applicative f, Num a) => f a), except that Num demands instances of Eq and Show, neither of which can be blindly lifted the way the numeric operations can.

I imagine it should be fairly obvious why you can't write a non-trivial generic instance (Show a) => Show (M a) that would work for any possible monad M--you'd need a function (show :: M a -> String), which is impossible for abstract types like IO, as well as for function types like the State monad. The same applies to (==), of course. Trivial instances are always possible, e.g. show _ = "[not showable]", but then you don't get sensible behavior when a non-trivial instance does exist, such as for Maybe or [].

> Note that if we consider this in a 'monadification' context, where we
> are making some choice for each lifted function, treating it as
> entering, exiting, or computing in the monad, instancing the typeclass
> leads to very few choices for each: the monadic versions of +, -, *
> must be obtained with "liftM2", the monadic versions of negate, abs,
> signum must be obtained with "liftM", and the monadic version of
> fromInteger must be obtained with "return ."

Again, this is pretty much the motivation and purpose of Control.Applicative. Depending on how you want to look at it, the underlying concept is either lifting multi-argument functions into the functor step by step, or lifting tuples into the functor, e.g. (f a, f b) -> f (a, b); the equivalence is recovered using fmap with either (curry id) or (uncurry id).

Note that things do get more complicated if you have to deal with the full monadic structure, but since you're lifting functions that have no knowledge of the functor whatsoever, they pretty much have to be independent of it.

> I suppose I'm basically suggesting that the 'next step' is to somehow
> do this calculation of types on real type values, and use an inductive
> programming tool like Djinn to realize the type signatures. I think
> the general programming technique this is getting at is an orthogonal
> version of LISP style where one goes back and forth between types and
> functions, rather than data and code. I would also appreciate any
> pointers to works in that area.

Well, I don't think there's any good way to do this in Haskell directly, in general. There's a GHC extension that can automatically derive Functor for many types, but nothing to automatically derive Applicative as far as I know (other than in trivial cases with newtype deriving)--I suspect due to Applicative instances being far less often uniquely determined than for Functor. And while a fully generic instance can be written and used for any Applicative and Num, the impossibility of sensible instances for Show and Eq, combined with the context-blind nature of Haskell's instance resolution, means that it can't be written directly in full generality. It would, however, be fairly trivial to manufacture instance declarations for specific types using some sort of preprocessor, assuming Show/Eq instances have been written manually or by creating trivial ones.
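[Editor's note: the curry/uncurry equivalence mentioned above can be spelled out directly; pairA, liftA2', and apA are names made up for this sketch:]

```haskell
import Control.Applicative (liftA2)

-- Tupling: lift a pair of effectful values into an effectful pair.
-- Note that (,) is curry id.
pairA :: Applicative f => f a -> f b -> f (a, b)
pairA = liftA2 (,)

-- Recover step-by-step lifting from tupling via fmap + uncurry.
liftA2' :: Applicative f => (a -> b -> c) -> f a -> f b -> f c
liftA2' f x y = uncurry f <$> pairA x y

-- Recover (<*>) itself from tupling: uncurry id is function application.
apA :: Applicative f => f (a -> b) -> f a -> f b
apA mf mx = uncurry id <$> pairA mf mx
```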
Anyway, you may want to read the paper that introduced Applicative, since that seems to describe the subset of generic lifted functions you're after:

http://www.soi.city.ac.uk/~ross/papers/Applicative.html

If for some reason you'd rather continue listening to me talk about it, I wrote an extended ode to Applicative on Stack Overflow some time back that was apparently well-received:

http://stackoverflow.com/questions/3242361/haskell-how-is-pronounced/3242853#3242853

- C.
See my reply to Alex's post for my perspective on how this relates to applicative functors, reproduced here:

>> This, to me, is a big hint that applicative functors could be useful.
>
> Indeed, the ideas here also apply to applicative functors; it is just
> the lifting primitives that will be different; instead of having
> liftM<N>, we can use <$> and <*> to lift the functions. We could have
> done this for Num and Maybe (suppose Maybe is an instance of
> Applicative):
>
> instance (Num a) => Num (Maybe a) where
>     (+) = \x y -> (+) <$> x <*> y
>     (-) = \x y -> (-) <$> x <*> y
>     (*) = \x y -> (*) <$> x <*> y
>     abs = (abs <$>)
>     signum = (signum <$>)
>     fromInteger = pure . fromInteger
>
> The larger goal remains the same: autolifting in a principled manner.
>
> However, you actually bring up a very good point: what if it is really
> only the applicative functors that this method works on in general, and
> there is no 'use case' for considering this autolifting for monads in
> particular? I think the answer lies in the fact that monads can be
> 'flattened'; that is, realizations of the type m (m a) -> m a are
> mechanical (in the form of 'join') given that >>= is defined. This is
> important when we have a typeclass that also has monadic signatures. To
> be more concrete, consider how this function could be used in a
> 'monadic DSL':
>
> enter x = case x of
>     0 -> Nothing
>     _ -> Just "hi"
>
> The type of 'enter' is one case of the general form 'a -> M b'. If we
> were instancing a typeclass that had an 'a -> M b' function, we'd need
> a function of type 'M a -> M b'. This would be accomplished by
>
> enter' = join . liftM enter
>
> So the set of lifting primitives must include at least some way to get
> 'M a -> M b' from 'a -> M b', which requires that M is a monad, not
> just an applicative functor.
>
> Thanks for the mention of applicative functors; I should have included
> them in the original post.
>
> Lingfeng Yang
> lyang at cs dot stanford dot edu

I should have included a mention of Applicative in my original post.
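[Editor's note: the join-based lifting step described above can be written once and for all; liftKleisli is a made-up name for it, and it is exactly (=<<). Shown with the enter example from the message:]

```haskell
import Control.Monad (join)

-- Lift an effectful function (a -> m b) to accept effectful input.
-- join . fmap f is the same as (>>= f), i.e. (=<<) f, and needs
-- Monad, not merely Applicative.
liftKleisli :: Monad m => (a -> m b) -> m a -> m b
liftKleisli f = join . fmap f

enter :: Int -> Maybe String
enter 0 = Nothing
enter _ = Just "hi"

enter' :: Maybe Int -> Maybe String
enter' = liftKleisli enter
```

For instance, enter' (Just 1) succeeds while both enter' (Just 0) and enter' Nothing fail, showing how the two failure paths collapse into one.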
> Part of the reason Num was so easy is that all the functions produce
> values whose type is the class parameter. Your Num instance could
> almost be completely generic for any ((Applicative f, Num a) => f a),
> except that Num demands instances of Eq and Show, neither of which can
> be blindly lifted the way the numeric operations can.
>
> I imagine it should be fairly obvious why you can't write a
> non-trivial generic instance (Show a) => Show (M a) that would work
> for any possible monad M--you'd need a function (show :: M a ->
> String) which is impossible for abstract types like IO, as well as
> function types like the State monad. The same applies to (==), of
> course. Trivial instances are always possible, e.g. show _ = "[not
> showable]", but then you don't get sensible behavior when a
> non-trivial instance does exist, such as for Maybe or [].

Good point. This is where we can start defining restrictions for when this automatic lifting can or cannot take place. I reference the concept of 'runnable monads' here, from [Erwig and Ren 2004], "Monadification of Functional Programs". A 'runnable monad' is a monad with an exit function:

class (Monad m) => Runnable m where
    exit :: m a -> a

And yes, for monads like IO, no one would really have a need for 'exit' outside of the cases where they need unsafePerformIO. However, for Maybe and Prob, 'exit' is extremely useful. In fact, in the probability monad, if you could not exit the monad, you could not get anything done, as the real use revolves around sampling and computing probabilities, which are of non-monadic types.

Provided M is a runnable monad:

instance (Show a) => Show (M a) where
    show = show . exit

I'm aware of the limitations of this approach; I just want to come up with a set of primitives that characterize the cases where autolifting/monadic instancing is useful.

On Mon, Nov 15, 2010 at 11:19 AM, C. McCann <[hidden email]> wrote:
> [... full quote of C. McCann's message above snipped ...]
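[Editor's note: a minimal sketch of how the Runnable idea could combine with Show, assuming we sidestep the instance-resolution problems mentioned above by lifting show as a plain function rather than an instance. Identity is the one monad where exit is obviously total; showVia is a made-up name:]

```haskell
-- A 'runnable monad' in the sense of Erwig and Ren: one we can exit.
class Monad m => Runnable m where
  exit :: m a -> a

-- Identity: the trivial monad, and the clearest Runnable instance.
newtype Identity a = Identity { runIdentity :: a }

instance Functor Identity where
  fmap f = Identity . f . runIdentity

instance Applicative Identity where
  pure = Identity
  f <*> x = Identity (runIdentity f (runIdentity x))

instance Monad Identity where
  x >>= f = f (runIdentity x)

instance Runnable Identity where
  exit = runIdentity

-- A lifted show, written as a function rather than a Show (m a)
-- instance, avoiding overlapping/undecidable instance extensions.
showVia :: (Runnable m, Show a) => m a -> String
showVia = show . exit
```

For a partial monad like Maybe, exit would have to be partial too (exit Nothing has no value), which is exactly Oleg's later objection.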
On Mon, Nov 15, 2010 at 6:43 PM, Ling Yang <[hidden email]> wrote:
> ...
> One alternate way of doing this, however, is instancing the
> typeclasses of the ordinary values with their monadic versions:
>
> instance (Num a) => Num (Prob a) where
>     (+) = liftM2 (+)
>     (*) = liftM2 (*)
>     abs = liftM abs
>     signum = liftM signum
>     fromInteger = return . fromInteger
>
> instance (Fractional a) => Fractional (Prob a) where
>     fromRational = return . fromRational
>     (/) = liftM2 (/)

You may also like to look at Conal Elliott's applicative-numbers package:

http://hackage.haskell.org/package/applicative-numbers

Bas
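[Editor's note: a sketch of the kind of lifting applicative-numbers automates. In today's GHC, Num no longer has Eq and Show superclasses, so a single generic instance behind a newtype covers every Applicative at once; the wrapper name ApNum is made up here and is not the package's actual API:]

```haskell
import Control.Applicative (liftA2)

-- A wrapper turning any Applicative over a Num into a Num.
newtype ApNum f a = ApNum { unApNum :: f a }

instance (Applicative f, Num a) => Num (ApNum f a) where
  ApNum x + ApNum y = ApNum (liftA2 (+) x y)
  ApNum x - ApNum y = ApNum (liftA2 (-) x y)
  ApNum x * ApNum y = ApNum (liftA2 (*) x y)
  abs    (ApNum x)  = ApNum (abs <$> x)
  signum (ApNum x)  = ApNum (signum <$> x)
  fromInteger       = ApNum . pure . fromInteger
```

With this one instance, numeric literals and operators work over Maybe, lists, IO, or the thread's Prob, e.g. unApNum (ApNum (Just 2) + ApNum (Just 3)) yields Just 5.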
In reply to this post by Ling Yang
I like your autolifting stuff, and the runnable concept.
Ling Yang wrote:

> I think a good question as a starting point is whether it's possible
> to do this 'monadic instance transformation' for any typeclass, and
> whether or not we were lucky to have been able to instance Num so
> easily (as Num, Fractional can just be seen as algebras over some base
> type plus a coercion function, making them unusually easy to lift if
> most typeclasses actually don't fit this description).
>
> In general, what this seems to be is a transformation on functions
> that also depends explicitly on type signatures. For example in Num:

Things start to break down when we get to the higher order. In the first order, it is indeed easy to see that the monadification of the term

    Int -> Int -> Int

should/could be

    m Int -> m Int -> m Int

Indeed, liftM2 realizes this transformation. But what about

    (Int -> Int) -> Int

? Should it be

    (m Int -> m Int) -> m Int

? A moment of thought shows that there is no total function of the type

    Monad m => ((Int -> Int) -> Int) -> ((m Int -> m Int) -> m Int)

because there is no way, in general, to get from (m Int -> m Int) to the pure function Int -> Int. That is, we can't write

    Monad m => (m Int -> m Int) -> m (Int -> Int)

One may try tabulation (for finite domains):

tf f = do
    vt <- f (return True)
    vf <- f (return False)
    return $ \x -> if x then vt else vf

but that doesn't quite work: what if the resulting function is never invoked on the True argument? We have needlessly computed that value, vt. Worse, we have incurred the effect of computing vt; that effect could be failure. We have needlessly failed.

One can say: we need Runnable

> class (Monad m) => Runnable m where
>     exit :: m a -> a

but are there many monads that are members of that typeclass? For example, Maybe cannot be Runnable; otherwise, what is (exit Nothing)? Any Error or MonadPlus monad can't be Runnable.
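[Editor's note: the "needlessly failed" behaviour is easy to observe in Maybe; failOnTrue and demo below are illustrations made up around the tf from the message:]

```haskell
-- Tabulation over the finite domain Bool, as in the message above.
tf :: Monad m => (m Bool -> m Bool) -> m (Bool -> Bool)
tf f = do
  vt <- f (return True)
  vf <- f (return False)
  return $ \x -> if x then vt else vf

-- A lifted function whose True branch fails.
failOnTrue :: Maybe Bool -> Maybe Bool
failOnTrue mb = mb >>= \b -> if b then Nothing else Just False

-- Even a caller that only ever needs the False entry gets Nothing:
-- tf forces BOTH table entries, so the failure of the True entry
-- poisons the whole computation.
demo :: Maybe Bool
demo = do
  g <- tf failOnTrue
  return (g False)
```

Here demo is Nothing even though failOnTrue (Just False) succeeds, which is precisely the needless failure being described.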
Thanks Oleg for the feedback. If I understand right, there is a hard (as in computability) limit on automatic instancing due to the general requirement of deriving functions from the behavior of other functions.

At this point, I wonder: how good are we at doing this? There are languages that reside in the more expressive corners of the lambda cube, such as Epigram. Some of the concepts have been translated to Haskell, such as Djinn. Are only 'trivial' results possible, or are the incomputability problems just moved into type space?

In any case, it's a good reason to limit the scope of autolifting.

On Tue, Nov 16, 2010 at 2:49 AM, <[hidden email]> wrote:
> [... full quote of the message above snipped ...]
On Nov 16, 2010, at 12:36 PM, Ling Yang wrote:

> Are only 'trivial' results possible, or are the incomputability
> problems just moved into type space?

That's typically the case, under Rice's theorem. A construct is derivable if it works for all cases (i.e., it's a "free theorem"), or if it works for none. If it works for some, you need to encode the differences between the cases yourself.
Let me point out another concern with autolifting: it makes it easy to overlook sharing. In the pure world, sharing is unobservable; not so when effects are involved. Let me take the example using the code posted in your first message:

> t1 = let x = 1 + 2 in x + x

The term t1 is polymorphic and can be evaluated as an Int or as a distribution over Int:

> t1r = t1 :: Int       -- 6
> t1p = t1 :: Prob Int  -- Prob {getDist = [(6,1.0)]}

That looks wonderful. In fact, too wonderful. Suppose later on we modify the code to add a non-trivial choice:

> t2 = let x = coin 0.5 + 1 in x + x
> -- Prob {getDist = [(4,0.25),(3,0.25),(3,0.25),(2,0.25)]}

The result probably isn't what one expected. Here, x is a shared computation rather than a shared value. Therefore, in (x + x) the two occurrences of 'x' correspond to two _independent_ coin flips. Errors like that are insidious and very difficult to find. There are no overt problems, no exceptions are thrown, and the results might just look right.

Thus the original code had to be translated into monadic style more carefully: 'let' should not have been translated as it was. We should have replaced let with bind, using either of the following patterns:

> t2b1 = do x <- coin 0.5 + 1; return $ x + x
> -- Prob {getDist = [(4,0.5),(2,0.5)]}
> t2b2 = coin 0.5 + 1 >>= \xv -> let x = return xv in x + x
> -- Prob {getDist = [(4,0.5),(2,0.5)]}

After all, let is bind (with a different order of arguments): see Moggi's original computational lambda-calculus.

Our example points out that monadifying an Int -> Int function as m Int -> m Int can be quite dangerous. For example, suppose we have a pure function fi:

> fi :: Int -> Int
> fi x = x + x

and we simple-mindedly monadified it:

> fp :: Monad m => m Int -> m Int
> fp x = liftM2 (+) x x

We can use it as follows; after all, the function accepts arguments of the type m Int:

> tfp = fp (coin 0.5)
> -- Prob {getDist = [(2,0.25),(1,0.25),(1,0.25),(0,0.25)]}

The result shows two independent coin flips.
Most readers of the program will argue that in an expression (x + x), the two occurrences of x should be _correlated_. After all, that's why we use the same name, 'x'. But they are not correlated in our translation.

Furthermore, translating let as bind is suboptimal. Consider

> t3b2 = coin 0.5 + 1 >>= \xv -> let x = return xv in (5 :: Prob Int)
> -- Prob {getDist = [(5,0.5),(5,0.5)]}

Although we don't use the result of the coin flip, we performed it nevertheless, doubling the search space. We know that such doubling is disastrous even for toy probabilistic problems.

These issues are discussed at great length in the paper with Sebastian Fischer and Chung-chieh Shan, ICFP 2009:
http://okmij.org/ftp/Computation/monads.html#lazy-sharing-nondet
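For readers who want to reproduce the independence pitfall, here is a self-contained restatement of the thread's Prob monad (with the Functor/Applicative instances modern GHC requires; >>= is written directly, but it computes the same distribution as the multiply/fmap version) contrasting the shared-computation and shared-value translations:

```haskell
import Control.Monad (ap, liftM2)

-- The thread's Prob monad, restated so the example is self-contained.
newtype Prob a = Prob { getDist :: [(a, Float)] } deriving (Eq, Show)

instance Functor Prob where
  fmap f (Prob xs) = Prob [ (f x, p) | (x, p) <- xs ]

instance Applicative Prob where
  pure x = Prob [(x, 1.0)]
  (<*>)  = ap

instance Monad Prob where
  Prob xs >>= f = Prob [ (y, p * q) | (x, p) <- xs, (y, q) <- getDist (f x) ]

-- The autolifting Num instance from earlier in the thread.
instance Num a => Num (Prob a) where
  (+)         = liftM2 (+)
  (*)         = liftM2 (*)
  abs         = fmap abs
  signum      = fmap signum
  negate      = fmap negate
  fromInteger = pure . fromInteger

coin :: Float -> Prob Int
coin x = Prob [(1, x), (0, 1 - x)]

-- Shared *computation*: the two x's are independent flips.
t2 :: Prob Int
t2 = let x = coin 0.5 + 1 in x + x
-- Prob {getDist = [(4,0.25),(3,0.25),(3,0.25),(2,0.25)]}

-- Shared *value*: bind once, then reuse the drawn value.
t2b1 :: Prob Int
t2b1 = do x <- coin 0.5 + 1; return (x + x)
-- Prob {getDist = [(4,0.5),(2,0.5)]}
```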
Thank you for highlighting these problems; I should really test my own code more thoroughly. After reading these most recent examples, the translation to existing monads is definitely too neat, and a lot of the monad's semantics are
'defaulted' on. In particular for the probability monad examples I see I had the mistaken impression that it would preserve the random-world semantics of the do-notation whereas autolifting actually imposes a random-evaluation
semantics, which would not be how I envision an autolifted probabilistic DSL. Overall, I think I pretty much got caught up in how cool it was going to be to use <$>, <*>, join/enter/exit as primitives to make any monad-based DSL work
'concisely' in an environment of existing typeclasses. That is kind of the thing I want to do at a high level: implement DSLs like probabilistic programming languages as primitives that play transparently with existing
expressions. But now it seems clear to me that this autolifting approach will not be useful with any monad where it is important to control sharing and effects, which is critical in the probability monad (and all others I can think of); in fact it
seems necessary to incur the do-notation 'overhead' (which I now see is not overhead at all!) to work with them meaningfully at all, no matter how 'pure' they look in other settings. Because of this we see that the Prob monad as it
is defined here is mostly unusable with autolifting. Again, thanks for the examples; I think I now have a much better intuition for do/bind and why they are required. At this point, however, I would still want to see if it is possible to do the
autolifting in a more useful manner, so that the user still has some control over effects. Things like the share combinator in the paper you linked will probably be very useful. I will definitely go over it in detail.
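(As a first approximation of sharing-as-a-combinator: the share in that paper is lazier and cleverer than this, so the following is only a naive eager sketch of the idea, not the paper's combinator.)

```haskell
-- A naive *eager* analogue of explicit sharing, for any monad: run the
-- effect once, then hand back a pure computation of the drawn value.
share :: Monad m => m a -> m (m a)
share m = do x <- m
             return (return x)

-- In the list (nondeterminism) monad, without sharing the two
-- occurrences of x are independent draws:
indep :: [Int]
indep = let x = [0, 1] in (+) <$> x <*> x   -- [0,1,1,2]

-- With share, both occurrences refer to the one drawn value:
shared :: [Int]
shared = do x <- share [0, 1]
            (+) <$> x <*> x                 -- [0,2]
```

The eager version still has the t3b2 problem from above (the effect runs even if the result goes unused), which is exactly what the paper's lazy formulation fixes.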
From my previous experience, however, this might also be accomplished by inserting something between the autolifting and the target monad. I think it would be more helpful now to talk more about where I'm coming from.
Indeed, the probability monad examples feature heavily here because I'm coming off of implementing a probabilistic programming language in Python that worked through autolifting, so expressions in it looked like host language
expressions. It preserved the random-world semantics because it was using a "quote"-like applicative functor to turn a function composition in the language into an expression-tree representation of the same. I am not sure yet how to express it in
Haskell (as I need to get more comfortable with GADTs), but pure would take a term to an abstract version of it, and fmap would take a function and abstract term to an abstract term representing the answer. One would then have the call
graph available after doing this on lifted functions. I think of this as an automatic way of performing the 'polymorphic embedding' referenced in [Hofer et al 2008] Polymorphic Embedding of DSLs.
By keeping IDs on different abstract terms, expressions like X + X (where X = coin 0.5) would take the proper distributions under random-world semantics.
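A sketch of that idea in plain Haskell (all names here are hypothetical, and a plain ADT stands in for the GADT): choices carry IDs, and the evaluator flips each ID exactly once, so X + X with a shared ID stays correlated.

```haskell
import Data.List (nubBy)

-- An expression tree where each random choice carries an ID.
data Exp = Lit Int
         | Coin String Float   -- a coin flip labelled with an ID
         | Add Exp Exp

-- Collect the distinct labelled choices in an expression.
choices :: Exp -> [(String, Float)]
choices (Lit _)    = []
choices (Coin n p) = [(n, p)]
choices (Add a b)  = nubBy (\u v -> fst u == fst v) (choices a ++ choices b)

-- Enumerate worlds: one outcome per ID, drawn exactly once.
worlds :: [(String, Float)] -> [([(String, Int)], Float)]
worlds []            = [([], 1.0)]
worlds ((n, p) : cs) =
  [ ((n, v) : w, q * pr)
  | (v, q)  <- [(1, p), (0, 1 - p)]
  , (w, pr) <- worlds cs ]

-- Evaluate an expression in one world.
evalW :: [(String, Int)] -> Exp -> Int
evalW _ (Lit k)    = k
evalW w (Coin n _) = maybe 0 id (lookup n w)
evalW w (Add a b)  = evalW w a + evalW w b

-- The distribution under random-world semantics.
dist :: Exp -> [(Int, Float)]
dist e = [ (evalW w e, pr) | (w, pr) <- worlds (choices e) ]

-- With a shared ID, X + X is perfectly correlated:
--   dist (let x = Coin "x" 0.5 in Add x x)   ==> [(2,0.5),(0,0.5)]
-- Distinct IDs recover the independent flips:
--   dist (Add (Coin "x" 0.5) (Coin "y" 0.5)) ==> four equal worlds
```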
In general, for monads where control of sharing is important, it can be seen as limiting the re-running of effects to one per name: each occurrence of a name uses the same unwrapped value. I had the initial impression, now corrected, that I could just come up with an autolifting scheme in Haskell, use it with the usual probability monad, and somehow get this random-world semantics for free. No; control of sharing and
effects is in fact critical, but could be done through using the autolifting as a way to turn expressions into a form where control of them is possible. For now, though, it looks like I have a lot of things to read through.
Again, thanks Oleg and everyone else for all the constructive feedback. This definitely sets a personal record for misconceptions corrected / ideas clarified per day. On Wed, Nov 17, 2010 at 12:08 AM, <[hidden email]> wrote: