The great "that's why" is as follows: when you have an abstraction,
then it is sufficient to hold the abstraction in mind instead of the whole concrete implementation. That's the whole purpose of abstraction, after all, be it maths or programming. Let me illustrate this. Suppose you are developing a library that, for instance, has a notion of "settings" and is able to combine two "settings" with several strategies for resolving conflicts between settings with duplicate keys: take the first setting, the second, none, or make a setting with multiple values. For example, an alternative GetOpt. Suppose you don't know what a monoid is and don't even know that such an abstraction exists. Now, when you're reasoning about the library, you have to think "If I combine 3 settings, should the order of combination matter? Hmm, would be nice if it didn't. Also, what should I return if someone queries for a non-existent key in the settings? Should I return an empty list, or a Nothing, or throw an error, or what? Do empty settings make sense?" etc. If you're smart and lucky, you will most probably get most things right and inconsciously create a settings monoid. Now, if you know what a monoid is, you immediately recognize that your settings should be a monoid by nature, and now you have absolutely no doubt that you should make the combining operation associative and provide a unit; and you use this monoid abstraction all the time you are designing this library. Now, you don't think about whether you should throw an error or return a Nothing for an empty key; but instead you think about which result would behave like a unit for the monoid, being motivated by mathematical principles rather than pure intuition. You end up designing a mathematically sound library whose principles make sence and has no design flaws, at least in the mathematically sound part, even if you never actually use the word "monoid" in the documentation. 
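To make this concrete, here is a minimal sketch of such a settings monoid; the names (`Settings`, `combine`, `query`) and the multiple-values strategy are hypothetical, not taken from any actual library:

```haskell
import qualified Data.Map as M

-- Hypothetical settings type: each key carries a list of values,
-- which realizes the "setting with multiple values" strategy.
newtype Settings = Settings (M.Map String [String])
  deriving (Eq, Show)

-- The unit of the monoid: no settings at all.
empty :: Settings
empty = Settings M.empty

-- The combining operation: associative, because unionWith with an
-- associative operation ((++)) is itself associative.
combine :: Settings -> Settings -> Settings
combine (Settings a) (Settings b) = Settings (M.unionWith (++) a b)

-- Querying a missing key returns [], the answer that behaves like a
-- unit, rather than throwing an error or forcing a Maybe on callers.
query :: String -> Settings -> [String]
query k (Settings m) = M.findWithDefault [] k m
```

With this design the questions from the paragraph above answer themselves: `combine` is associative, `empty` is its unit, and a missing key yields the empty list.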
Also, read this breathtaking post by sigfpe, which motivated me to learn abstract algebra in depth (I am still only at the beginning, however): http://sigfpe.blogspot.com/2008/11/approach-to-algorithm-parallelisation.html - it is where I started to appreciate the power of mathematical abstractions even more.

2009/1/17 david48 <[hidden email]>:
> All of this is still lacking the great why: why/how can an abstraction so
> generic be useful?
> I'm starting to believe that the only reason to make a datatype an
> instance of Monoid is... why not! Since it's not hard to find an
> associative operation and a neutral element.

--
Eugene Kirpichov
_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe
In reply to this post by Andrew Coppin
2009/1/17 Andrew Coppin <[hidden email]>:
> Cory Knapp wrote:
>>
>> Actually, that was part of my point: When I mention Haskell to people, and
>> when I start describing it, they're generally frightened enough by the focus
>> on pure code and lazy evaluation -- add to this the inherently abstract
>> nature, and we can name typeclasses "cuddlyKitten", and the language is
>> still going to scare J. R. Programmer. By "inherently mathematical nature",
>> I didn't mean names like "monoid" and "functor", I meant *concepts* like
>> monoid and functor. Not that either of them are actually terribly difficult;
>> the problem is that they are terribly abstract. That draws a lot of people
>> (especially mathematicians), but most people who aren't drawn by that are
>> hugely put off -- whatever the name is. So, I guess my point is that the name
>> is irrelevant: the language is going to intimidate a lot of people who are
>> intimidated by the vocabulary.
>
> Oh, I don't know. I have no idea what the mathematical definition of
> "functor" is, but as far as I can tell, the Haskell typeclass merely allows
> you to apply a function simultaneously to all elements of a collection.
> That's pretty concrete - and trivial. If it weren't for the seemingly
> cryptic name, nobody would think twice about it. (Not sure exactly what
> you'd call it, though...)

No, a functor is a broader notion than that; it has nothing to do with collections. An explanation closer to the truth would be: "A structure is a functor if it provides a way to convert a structure over X to a structure over Y, given a function X -> Y, while preserving the underlying 'structure'", where preserving structure means being compatible with composition and identity. Collections are one particular case. Another case is just functions with a fixed domain A: given a "structure" of type [A->]X and a function of type X -> Y, you may build an [A->]Y.
Yet another case is monads (actually, the example above is the Reader monad): given a monadic computation of type 'm a' and a function a -> b, you may get a computation of type m b:

  instance (Monad m) => Functor m where
    fmap f ma = do a <- ma; return (f a)

There are very many other examples of functors; they are as ubiquitous as monoids and monads :)

--
Eugene Kirpichov
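The two cases just described can be sketched as runnable code; the names `mapReader` and `liftMonad` are made up here to avoid clashing with the standard `fmap`:

```haskell
-- Functions with a fixed domain form a functor: given g :: a -> x
-- and f :: x -> y, plain composition yields an a -> y.
mapReader :: (x -> y) -> (a -> x) -> (a -> y)
mapReader f g = f . g

-- Any monad gives a functor in the same way as the instance above:
-- run the computation, then apply the function to its result.
liftMonad :: Monad m => (a -> b) -> m a -> m b
liftMonad f ma = do a <- ma; return (f a)
```

For example, `liftMonad (+1) (Just 41)` yields `Just 42`, and `mapReader show length` turns a length-computing function into one producing strings.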
Eugene Kirpichov wrote:
> No, a functor is a broader notion than that; it has nothing to do
> with collections.
> An explanation closer to the truth would be "A structure is a functor
> if it provides a way to convert a structure over X to a structure over
> Y, given a function X -> Y, while preserving the underlying
> 'structure'", where preserving structure means being compatible with
> composition and identity.

As far as I'm aware, constraints like "while preserving the underlying structure" are not expressible in Haskell.

> instance (Monad m) => Functor m where
> fmap f ma = do a <- ma; return (f a)

While that's quite interesting from a mathematical point of view, how is this "useful" for programming purposes?
2009/1/17 Andrew Coppin <[hidden email]>:
> Eugene Kirpichov wrote:
>>
>> No, a functor is a broader notion than that; it has nothing to do
>> with collections.
>> An explanation closer to the truth would be "A structure is a functor
>> if it provides a way to convert a structure over X to a structure over
>> Y, given a function X -> Y, while preserving the underlying
>> 'structure'", where preserving structure means being compatible with
>> composition and identity.
>
> As far as I'm aware, constraints like "while preserving the underlying
> structure" are not expressible in Haskell.

Yes, but they are expressible in your mind, so that you can recognize a functor and design your program so that it does satisfy this constraint, thus removing a large faulty piece of the design space. Also, you can write a QuickCheck test for fmap (f . g) = fmap f . fmap g and fmap id = id.

>> instance (Monad m) => Functor m where
>> fmap f ma = do a <- ma; return (f a)
>
> While that's quite interesting from a mathematical point of view, how is
> this "useful" for programming purposes?

In the same sense as monoids are; see my previous message. If you mean the usefulness of the Functor typeclass in Haskell, it lies in the fact that wherever you'd like to convert a structure over X to a structure over Y (for example, the result of a monadic computation), you simply write 'fmap f structure' and it works the right way, provided the structure has a Functor instance (many structures do). I know I'm being a bit abstract, but that's the way I perceive it.

  do filename <- toLowerCase `fmap` readLine
     ...
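Such a test is easy to state. Here is a sketch of the two law checks as plain Boolean properties over lists; with the QuickCheck package available, they could be passed directly to `quickCheck` to be tested on generated inputs:

```haskell
-- Functor identity law, specialized to lists of Ints.
prop_fmapId :: [Int] -> Bool
prop_fmapId xs = fmap id xs == xs

-- Functor composition law, checked for two sample functions.
prop_fmapCompose :: [Int] -> Bool
prop_fmapCompose xs =
  fmap ((* 2) . (+ 1)) xs == (fmap (* 2) . fmap (+ 1)) xs
```

With QuickCheck in scope this would be run as, e.g., `quickCheck prop_fmapId`; as is pointed out later in the thread, passing tests raise confidence in the laws but do not prove them.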
In reply to this post by Andrew Coppin
On Sat, Jan 17, 2009 at 5:04 AM, Andrew Coppin <[hidden email]> wrote:
Eugene Kirpichov wrote:

Well, they're expressible *about* Haskell. I.e., for functors we require:

  fmap id = id
  fmap (f . g) = fmap f . fmap g

The first property is how we write "preserving underlying structure", but this has a precise, well-defined meaning that we can say a given functor obeys or does not (and if it does not, we say that it's a bad instance). But you are correct that Haskell does not allow us to require proofs of such properties. And indeed, some people break those properties in various ways, which some consider okay if the breakage is not observable from outside a given abstraction barrier. I'm on the fence about that...
Hello Luke,
Saturday, January 17, 2009, 3:16:06 PM, you wrote:

> fmap id = id
> fmap (f . g) = fmap f . fmap g
>
> The first property is how we write "preserving underlying
> structure", but this has a precise, well-defined meaning that we can
> say a given functor obeys or it does not (and if it does not, we say
> that it's a bad instance). But you are correct that Haskell does
> not allow us to require proofs of such properties.

not haskell itself, but QuickCheck allows. we may even consider lifting these properties to the language level

--
Best regards,
Bulat                            mailto:[hidden email]
In reply to this post by David Virebayre
On Sat, Jan 17, 2009 at 1:41 AM, david48 <[hidden email]> wrote:
> On Fri, Jan 16, 2009 at 3:10 PM, Bulat Ziganshin [...]

So you're saying it should be better documented in Haskell what a Monoid is. Did you say you searched for "C++ class"? Why not "Haskell Monoid" then? The first correct Google hit that didn't think I meant monads takes you straight to the GHC documentation for Data.Monoid.
In reply to this post by Andrew Coppin
Thinking that Functor allows you to apply a function to all elements
in a collection is a good intuitive understanding. But fmap also allows applying a function to the "elements" of things that can't really be called collections, e.g., the continuation monad.

-- Lennart

On Sat, Jan 17, 2009 at 11:17 AM, Andrew Coppin <[hidden email]> wrote:
> Cory Knapp wrote:
>>
>> Actually, that was part of my point: When I mention Haskell to people, and
>> when I start describing it, they're generally frightened enough by the focus
>> on pure code and lazy evaluation -- add to this the inherently abstract
>> nature, and we can name typeclasses "cuddlyKitten", and the language is
>> still going to scare J. R. Programmer. By "inherently mathematical nature",
>> I didn't mean names like "monoid" and "functor", I meant *concepts* like
>> monoid and functor. Not that either of them are actually terribly difficult;
>> the problem is that they are terribly abstract. That draws a lot of people
>> (especially mathematicians), but most people who aren't drawn by that are
>> hugely put off -- whatever the name is. So, I guess my point is that the name
>> is irrelevant: the language is going to intimidate a lot of people who are
>> intimidated by the vocabulary.
>
> Oh, I don't know. I have no idea what the mathematical definition of
> "functor" is, but as far as I can tell, the Haskell typeclass merely allows
> you to apply a function simultaneously to all elements of a collection.
> That's pretty concrete - and trivial. If it weren't for the seemingly
> cryptic name, nobody would think twice about it. (Not sure exactly what
> you'd call it, though...)
>
> A monoid is a rather more vague concept. (And I'm still not really sure why
> it's useful on its own. Maybe I just haven't had need of it yet?)
>
> I think, as somebody suggested about "monad", the name does tend to inspire
> a feeling of "hey, this must be really complicated", so that even after
> you've understood it, you end up wondering whether there's still something
> more to it than that.
> But yes, some people are definitely put off by the whole "abstraction of
> abstractions of abstraction" thing. I think we probably just need some more
> concrete examples to weight it down and make it seem like something
> applicable to the real world.
>
> (Thus far, I have convinced exactly *one* person to start learning Haskell.
> This person being something of a maths nerd, their main complaint was not
> about naming or abstraction, but about the "implicitness" of the language,
> and the extreme difficulty of visually parsing it. Perhaps not surprising
> coming from a professional C++ programmer...)
>
>> At the same time, I think everyone is arguing *for* better documentation.
>> And you're probably right: better documentation will bring the abstract
>> nonsense down to earth somewhat.
>
> Amen!
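Lennart's continuation example can be sketched as follows: a minimal continuation type (the real one lives in Control.Monad.Cont) is a Functor even though it holds no elements at all; fmap simply post-composes the function onto the eventual result.

```haskell
-- Minimal continuation type: a computation that, given a
-- continuation (a -> r), produces the final answer r.
newtype Cont r a = Cont { runCont :: (a -> r) -> r }

-- fmap transforms the value handed to the continuation; there is
-- no collection being traversed anywhere.
instance Functor (Cont r) where
  fmap f (Cont c) = Cont (\k -> c (k . f))

-- A computation that "returns" 41.
fortyOne :: Cont r Int
fortyOne = Cont (\k -> k 41)
```

For example, `runCont (fmap (+1) fortyOne) id` evaluates to 42.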
On Sat, Jan 17, 2009 at 7:33 AM, Lennart Augustsson <[hidden email]> wrote:
> Thinking that Functor allows you to apply a function to all elements [...]

I hadn't even thought about fmap for continuations... interesting! It falls out of the logic, though, doesn't it?
I'm not one to throw out all the cool mathematical and logical thinking in favour of "simpler terms", or to stop short of covering the full usefulness of certain abstractions. I know Haskell allows for lazy evaluation (as an implementation of non-strictness), but Haskell programmers are NOT allowed to be lazy :-)
Try learning the terms that are there... and ask for help if you need help... most of us are pretty helpful! Improving documentation can pretty much *always* be done on any project, and it looks like that's coming out of this long thread that won't die, so kudos to the ones being the gadflies in this instance. It really looked at first like a long troll, but I think something very useful is going to come out of this!
Dave
In reply to this post by David Leimbach
On Sat, Jan 17, 2009 at 4:08 PM, David Leimbach <[hidden email]> wrote:
> So you're saying it should be better documented in Haskell what a Monoid is.
> Did you say you searched for "C++ class"? Why not "Haskell Monoid" then?
> The first correct google hit that didn't think I meant Monads takes you
> straight to the GHC documentation for Data.Monoid.

Read my first post in the thread; that's exactly what I did (and then complained that the doc for Data.Monoid was close to useless).
On Sat, Jan 17, 2009 at 9:16 AM, david48 <[hidden email]> wrote:
> On Sat, Jan 17, 2009 at 4:08 PM, David Leimbach <[hidden email]> wrote:
In reply to this post by David Virebayre
On Sat, Jan 17, 2009 at 1:47 AM, david48 <[hidden email]> wrote:
> why would I
> need to write a running count this way instead of, for example, a non
> monadic fold, which would probably result in clearer and faster code?

Maybe my post here will answer some questions like that:
http://sigfpe.blogspot.com/2009/01/haskell-monoids-and-their-uses.html

--
Dan
In reply to this post by Gracjan Polak
G'day all.
Quoting Gracjan Polak <[hidden email]>:

> I remember my early CS algebra courses. I met cool animals there: Group,
> Ring, Vector Space. Those beasts were very strong, but also very calm at
> the same time. Although I was a bit shy at first, after some work we
> became friends.

I don't know about you, but the word "module" threw me. That is probably the one name from algebra that clashes with computer science too much.

Cheers,
Andrew Bromage
In reply to this post by Bulat Ziganshin-2
On Saturday 17 January 2009 8:28:05 am Bulat Ziganshin wrote:
> Hello Luke,
>
> Saturday, January 17, 2009, 3:16:06 PM, you wrote:
> > fmap id = id
> > fmap (f . g) = fmap f . fmap g
> >
> > The first property is how we write "preserving underlying
> > structure", but this has a precise, well-defined meaning that we can
> > say a given functor obeys or it does not (and if it does not, we say
> > that it's a bad instance). But you are correct that Haskell does
> > not allow us to require proofs of such properties.
>
> not haskell itself, but QuickCheck allows. we may even consider
> lifting these properties to the language level

QuickCheck doesn't allow you to prove that the properties hold, though. It can only prove that they don't hold, and consequently give you confidence that they do hold when the tests fail to prove that they don't. To prove that they hold, you need something more like ESC/Haskell, Catch, or a fancier type system than the one Haskell (or even GHC) has.

--
Dan
In reply to this post by John Goerzen-3
G'day all.
Quoting John Goerzen <[hidden email]>:

> If I see Appendable I can guess what it might be. If I see "monoid", I
> have no clue whatsoever, because I've never heard of a monoid before.

Any sufficiently unfamiliar programming language looks like line noise. That's why every new language needs to use curly braces.

> If you're learning Haskell, which communicates the idea more clearly:
>
> * Appendable
>
> or
>
> * Monoid
>
> I can immediately figure out what the first one means.

No you can't. It is in no way clear, for example, that Integers with addition are "Appendable". I'm not saying that "Monoid" is the most pragmatically desirable term, merely that "Appendable" is misleading.

And FWIW, I agree with everyone who has commented that the documentation is inadequate. It'd be nice if there were some way to contribute better documentation without needing check-in access to the libraries.

Cheers,
Andrew Bromage
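The point about integers is easy to see in code: Data.Monoid wraps the same numbers in two different monoids, and neither operation looks much like "appending":

```haskell
import Data.Monoid (Sum(..), Product(..))

-- The same list of integers, viewed through two different monoids.
asSum :: Int
asSum = getSum (mconcat (map Sum [1, 2, 3, 4]))             -- addition

asProduct :: Int
asProduct = getProduct (mconcat (map Product [1, 2, 3, 4])) -- multiplication
```

Here `asSum` is 10 and `asProduct` is 24; "Appendable" would describe neither instance well.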
In reply to this post by haskell-2
G'day all.
Dan Weston wrote:

> Richard Feynman once said: "if someone says he understands quantum
> mechanics, he doesn't understand quantum mechanics".
>
> But what did he know...

Presumably not quantum mechanics.

Cheers,
Andrew Bromage
In reply to this post by Andrew Coppin
On Sat, 2009-01-17 at 12:04 +0000, Andrew Coppin wrote:
> Eugene Kirpichov wrote:
>> No, a functor is a broader notion than that; it has nothing to do
>> with collections.
>> An explanation closer to the truth would be "A structure is a functor
>> if it provides a way to convert a structure over X to a structure over
>> Y, given a function X -> Y, while preserving the underlying
>> 'structure'", where preserving structure means being compatible with
>> composition and identity.
>
> As far as I'm aware, constraints like "while preserving the underlying
> structure" are not expressible in Haskell.
>
>> instance (Monad m) => Functor m where
>> fmap f ma = do a <- ma; return (f a)
>
> While that's quite interesting from a mathematical point of view, how is
> this "useful" for programming purposes?

Good Lord. fmap (as above) is *at least* useful enough to be in the standard library (Control.Monad.liftM)! Contrary to what some people seem to think, every single function in Haskell's standard library was included because enough people found it actually *useful* enough to add.

Here's the first example I found, searching the source code for my macro language interpreter:[1] Macro-call interpolation has the syntax §(expression), parsed by

  do char '§'
     lexParens $ Interpolation <$> parseExpr

  = do char '§'
       lexParens $ fmap Interpolation $ parseExpr

  = do char '§'                                  -- [2]
       fmap Interpolation $ lexParens $ parseExpr

Useful enough?

jcc

[1] The only simplifications applied were (a) ignoring an interpolation syntax substantially more complicated, and (b) re-naming the relevant constructor.

[2] Valid because lexParens is a natural transformation.
In reply to this post by Andrew Coppin
On Sat, 2009-01-17 at 11:07 +0000, Andrew Coppin wrote:
> Anton van Straaten wrote:
>> Niklas Broberg wrote:
>>>> I still think existential quantification is a step too far though. :-P
>>>
>>> Seriously, existential quantification is a REALLY simple concept, that
>>> you would learn week two (or maybe three) in any introductory course
>>> on logic. In fact, I would argue that far more people probably know
>>> what existential quantification is than know what a monoid is. :-)
>>
>> Andrew's core objection here seems reasonable to me. It was this:
>>
>>> {-# LANGUAGE ExistentialQuantification #-} is an absurd name and
>>> should be changed to something that, at a minimum, tells you it's
>>> something to do with the type system.
>>
>> But I suspect I part company from Andrew in thinking that something
>> like ExistentiallyQuantifiedTypes would be a perfectly fine alternative.
>
> I would suggest that ExistentiallyQuantifiedTypeVariables would be an
> improvement on just ExistentialQuantification - but I'd still prefer the
> less cryptic HiddenTypeVariables. (Since, after all, that's all this
> actually does.)

Consider the expression (I hate this expression)

  case error "Urk!" of x -> error "Yak!"

When you translate this into System F, you have to come up with a fresh type variable for the type of x, even though that variable is unused in the type of the entire expression. Which is what HiddenTypeVariables brings to my mind every time you use it.

jcc
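For readers following along, here is a small sketch of what the extension under discussion actually enables; it is the standard "show box" example, not code from this thread:

```haskell
{-# LANGUAGE ExistentialQuantification #-}

-- A value whose concrete type is hidden; all we retain is the Show
-- constraint. The type variable 'a' does not appear in the result
-- type, so it cannot escape.
data Showable = forall a. Show a => MkShowable a

instance Show Showable where
  show (MkShowable x) = show x

-- A heterogeneous list, possible only because each element's type
-- is existentially hidden.
heterogeneous :: [Showable]
heterogeneous = [MkShowable (1 :: Int), MkShowable "two", MkShowable True]
```

Mapping `show` over `heterogeneous` produces one string per element despite the differing underlying types.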
In reply to this post by David Virebayre
On Sat, 2009-01-17 at 10:47 +0100, david48 wrote:
> On Fri, Jan 16, 2009 at 4:04 PM, Jonathan Cast
> <[hidden email]> wrote:
>> On Fri, 2009-01-16 at 14:16 +0100, david48 wrote:
>>> Part of the problem is that something like a monoid is so general that
>>> I can't wrap my head around why going so far in the abstraction.
>>> For example, the writer monad works with a monoid; using the writer
>>> monad with strings makes sense because the mappend operation for lists
>>> is (++), now why should I care that I can use the writer monad with
>>> numbers, which it will sum?
>>
>> To accumulate a running count, maybe? A fairly common pattern for
>> counting in imperative languages is
>>
>>   int i = 0;
>>   while (<get a value>) i += <count of something in value>
>>
>> Using the writer monad, this turns into
>>
>>   execWriter $ mapM_ (tell . countFunction) $ getValues
>
> well thank you for the example. If I may ask something: why would I
> need to write a running count this way instead of, for example, a
> non-monadic fold, which would probably result in clearer and faster
> code (IMHO)?

I agree with you, for this special case. (Did I remember to post the simpler solution,

  sum $ map countFunction $ getValues

somewhere in this thread?) But, just like the (utterly useless) C++ example translated to Haskell in another thread, the monadic form provides a framework you can fill out with larger code fragments. So if the while loop above were replaced with a larger control structure, maybe recursion over a custom tree type, then standard recursion operators, such as folds, may be inapplicable. In that case, moving to a Writer monad can get you some of the advantage back, so you don't end up passing your accumulator around everywhere by hand.

jcc
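This exchange can be sketched as runnable code; the names `countW` and `countF` are hypothetical, and the sketch assumes the mtl package's Control.Monad.Writer is available:

```haskell
import Control.Monad.Writer (execWriter, tell)
import Data.Monoid (Sum(..))

-- Running count via the Writer monad over the Sum monoid: each item
-- "logs" its length, and the monoid adds the logs together.
countW :: [String] -> Int
countW vs = getSum (execWriter (mapM_ (tell . Sum . length) vs))

-- The equivalent plain fold, which is indeed simpler for this
-- special case.
countF :: [String] -> Int
countF vs = sum (map length vs)
```

Both functions agree on every input; the Writer version only pays off once the traversal is embedded in a larger monadic control structure.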
In reply to this post by Andrew Coppin
Andrew Coppin <[hidden email]> wrote:
> I would suggest that ExistentiallyQuantifiedTypeVariables would be an
> improvement [...]

That must be a joke. Typing the long extension names in LANGUAGE pragmas over and over again is tiring and annoying enough already. We really don't need even longer names, and your "improvement" fills up almost half the width of an 80-character terminal.

I can't wait for the next Haskell standard, where at last all those extensions are built in. For the remaining extensions, I strongly suggest abbreviations and a more convenient way to specify them. For example, an import-like statement:

  uses MPTC
  uses FlexInst

instead of:

  {-# LANGUAGE MultiParamTypeClasses, FlexibleInstances #-}

Greets,
Ertugrul.

--
nightmare = unsafePerformIO (getWrongWife >>= sex)
http://blog.ertes.de/