Yesterday I read a rather interesting paper:
http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf

It's fascinating stuff, and I *think* I understand the gist of what it's saying. However, the paper is utterly festooned with formulas that look so absurdly over-the-top that they might almost be a spoof of a mathematical formula rather than the real thing. A tiny fraction of the notation is explained in the paper, but the rest is simply "taken to be obvious". The paper also uses several ordinary English words in a way that suggests that they are supposed to have a more specific technical meaning - but I have no idea what.

Does anybody have any idea which particular dialect of pure math this paper is speaking? (And where I can go read about it...)

_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe
Andrew Coppin <[hidden email]> writes:
> Does anybody have any idea which particular dialect of pure math this
> paper is speaking? (And where I can go read about it...)

It's pretty garden-variety programming language/type theory. I can recommend Benjamin Pierce's "Types and Programming Languages" textbook for an introduction to the material:

http://www.cis.upenn.edu/~bcpierce/tapl/

G
--
Gregory Collins <[hidden email]>
I think you would enjoy reading (and working) through TAPL[1] and/or
Software Foundations[2] if this interests you.

Cheers,
Thomas

[1] http://www.amazon.com/Types-Programming-Languages-Benjamin-Pierce/dp/0262162091
[2] http://www.cis.upenn.edu/~bcpierce/sf/

On Fri, Oct 15, 2010 at 1:36 PM, Andrew Coppin <[hidden email]> wrote:
> Yesterday I read a rather interesting paper:
>
> http://www.cl.cam.ac.uk/~mb566/papers/tacc-hs09.pdf
>
> It's fascinating stuff, and I *think* I understand the gist of what it's
> saying. However, the paper is utterly festooned with formulas that look so
> absurdly over-the-top that they might almost be a spoof of a mathematical
> formula rather than the real thing. A tiny fraction of the notation is
> explained in the paper, but the rest is simply "taken to be obvious". The
> paper also uses several ordinary English words in a way that suggests that
> they are supposed to have a more specific technical meaning - but I have no
> idea what.
>
> Does anybody have any idea which particular dialect of pure math this paper
> is speaking? (And where I can go read about it...)
On Oct 15, 2010, at 1:36 PM, Andrew Coppin wrote:
> Does anybody have any idea which particular dialect of pure math
> this paper is speaking? (And where I can go read about it...)

It's some kind of typed logic with lambda abstraction and some notion of witnessing, using Gentzen-style derivation notation. Those "A |- B" things mean A "derives" B. The "|-" is also called a tee, or turnstile.

If your mail client can show the layout, a formula with "A, B" written above a horizontal line and "A" below it means:

A, B |- A

That's where the tee gets its name: the horizontal line drawn under "A, B" forms a T. The two-dimensional notation is better for some uses, since it "meshes" with the method of "truth trees".
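For illustration (the specific rule below is the standard application rule of the simply typed lambda calculus, chosen as a textbook example rather than taken from the paper itself), a derivation rule in this style puts its premises above the horizontal line and its conclusion below it:

```latex
% Function application: if, under assumptions Gamma, e1 has the
% function type tau1 -> tau2 and e2 has type tau1, then the
% application (e1 e2) has type tau2.
\[
\frac{\Gamma \vdash e_1 : \tau_1 \to \tau_2
      \qquad
      \Gamma \vdash e_2 : \tau_1}
     {\Gamma \vdash e_1 \, e_2 : \tau_2}
\]
```

Read each "Γ ⊢ e : τ" as "from the assumptions in Γ it is derivable that e has type τ".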
On 10/15/10 16:36, Andrew Coppin wrote:
> Does anybody have any idea which particular dialect of pure math this paper
> is speaking? (And where I can go read about it...)

Type theory. It makes my head spin, too, since essentially my only exposure to it so far is Haskell itself.

--
brandon s. allbery [linux,solaris,freebsd,perl] [hidden email]
system administrator [openafs,heimdal,too many hats] [hidden email]
electrical and computer engineering, carnegie mellon university KF8NH
On 15/10/2010 09:42 PM, Gregory Collins wrote:
> Andrew Coppin <[hidden email]> writes:
>
>> Does anybody have any idea which particular dialect of pure math this
>> paper is speaking? (And where I can go read about it...)
>
> It's pretty garden-variety programming language/type theory.

Hypothesis: The fact that the average Haskeller thinks that this kind of dense cryptic material is "pretty garden-variety" notation possibly explains why normal people think Haskell is scary.

> I can recommend Benjamin Pierce's "Types and Programming Languages" textbook
> for an introduction to the material:
> http://www.cis.upenn.edu/~bcpierce/tapl/

If I were to somehow obtain this book, would it actually make any sense whatsoever? I've read too many maths books which assume you already know truckloads of stuff, and utterly fail to make sense until you do. (Also, being a somewhat famous book, it's presumably extremely expensive...)

Type theory doesn't actually interest me; I just wondered what the hell all the notation means.
On 25 October 2010 22:10, Andrew Coppin <[hidden email]> wrote:
> If I were to somehow obtain this book, would it actually make any sense
> whatsoever? I've read too many maths books which assume you already know
> truckloads of stuff, and utterly fail to make sense until you do. (Also,
> being a somewhat famous book, it's presumably extremely expensive...)
>
> Type theory doesn't actually interest me, I just wondered what the hell all
> the notation means.

It's a clearly written book, though you would have to read it fairly diligently, and possibly work through some of the exercises, to get it to make sense. If you've no interest in type systems then you don't have a need for it.

Such notations are generally typeset with LaTeX's 'semantic' package - you could look at the guide to this package by Peter Møller Neergaard and Arne John Glenstrup. It might not help much (but it will be cheaper).
Andrew Coppin <[hidden email]> writes:
> On 15/10/2010 09:42 PM, Gregory Collins wrote:
>
>> It's pretty garden-variety programming language/type theory.
>
> Hypothesis: The fact that the average Haskeller thinks that this kind of dense
> cryptic material is "pretty garden-variety" notation possibly explains why
> normal people think Haskell is scary.

That's ridiculous. You're comparing apples to oranges: using Haskell and understanding the underlying theory are two completely different things. Put it this way: a mechanic can strip apart and rebuild an engine without knowing any of the organic chemistry which explains how and why the catalytic converter works. If you work for Ford, on the other hand...

P.S. I did my computer science graduate work in type theory, so I may not be an "average Haskeller" in those terms. By "garden-variety" I meant that the concepts, notation, and vocabulary are pretty standard for the field, and I had no trouble reading the paper despite not having seriously read a type theory paper in close to a decade.

>> I can recommend Benjamin Pierce's "Types and Programming Languages"
>> textbook for an introduction to the material:
>> http://www.cis.upenn.edu/~bcpierce/tapl/
>
> If I were to somehow obtain this book, would it actually make any
> sense whatsoever? I've read too many maths books which assume you
> already know truckloads of stuff, and utterly fail to make sense until
> you do. (Also, being a somewhat famous book, it's presumably extremely
> expensive...)

Introductory type theory is usually taught in computer science curricula at a 3rd- or 4th-year undergraduate level. If you understand some propositional logic and discrete mathematics, then "probably yes"; otherwise "probably not."

G
--
Gregory Collins <[hidden email]>
On Mon, Oct 25, 2010 at 10:10:56PM +0100, Andrew Coppin wrote:
> Type theory doesn't actually interest me, I just wondered what the
> hell all the notation means.

That sounds like an oxymoron. How could you possibly learn what the notation "means" without learning about the subject that the notation is about? That's like saying "I'm not actually interested in calculus, I'd just like to know what the hell all these funny S-like symbols mean".

For what it's worth, I was once in a similar situation (modulo the interest), and sent a similar query:

http://groups.google.com/group/comp.lang.functional/browse_frm/thread/bb82dcec463fd776

Following that, Pierce sent me a draft of his (then upcoming) book, and I found it extremely accessible at my level (at least compared to the other book I studied, Mitchell's "Foundations", which, though full of good information, was a bit hard to digest). So I will add my voice to those recommending TAPL.

Cheers,
Lauri
On Oct 25, 2010, at 2:10 PM, Andrew Coppin wrote:
> Hypothesis: The fact that the average Haskeller thinks that this
> kind of dense cryptic material is "pretty garden-variety" notation
> possibly explains why normal people think Haskell is scary.

Maybe, but the notation is still clearer than most programming notations for expressing the same ideas. Most "normal" people don't realize that mathematicians discovered the constructs for expressing structures and computations 50 to hundreds of years before the invention of the semiconductor, or that the mathematician's goal of "elegant" expression is pragmatic: elegant expressions of a problem and its solutions are the easiest to work with.
On 10-10-25 05:10 PM, Andrew Coppin wrote:
> Hypothesis: The fact that the average Haskeller thinks that this kind of
> dense cryptic material is "pretty garden-variety" notation possibly
> explains why normal people think Haskell is scary.

How many normal people actively stalk highly specialized academic papers like you do, and mistake them for Haskell tutorials to begin with? (Pun: a strawman is made of stalks.)

However, average programmers using languages with interesting type systems should learn type rule notation sooner or later. It takes only Θ(1) effort to learn, and its saving in writing and reading is Θ(n) if you will use it n times in the rest of your life (which you will, because you are using languages with interesting type systems).
On Oct 25, 2010, at 2:10 PM, Andrew Coppin wrote:
> Type theory doesn't actually interest me, I just wondered what the
> hell all the notation means.

Sorry for the double email. I recommend "Language, Proof and Logic" by Barwise and Etchemendy. It doesn't go into type theory (directly). It is a book about logic for technical people (it was in use at the Reed College philosophy department for many years - maybe it still is). It is very approachable.

The last part of the book is about "advanced" logic, including model theory, some aspects of forcing (there's a forcing proof of Löwenheim-Skolem), Gödel's theorems (they called Gödel the world's first hacker, with good reason), and some sections on higher-order logics.

It doesn't include all of the notation you asked about, but understanding the basics will be very helpful, especially if you understand the underlying concepts. For example, set theory is a first-order logic that can be recast as a second-order logic. This is more or less the origin of type theory (types were originally witnesses to the "level" at which a term is defined - this will make sense in context). The paper you asked about has a richer "ontology" than ZF - it promotes more things to "named" "kinds" of terms than plain old ZF. But the promotion is straightforward, and the same logical rules apply.

You can get it cheap ($1.70 for the cheapest copy) through alibris.com (I am not too happy with them at the moment, but their prices are low, especially if you can wait a few weeks for the book).
2010/10/25 Gregory Collins <[hidden email]>
> Andrew Coppin <[hidden email]> writes:
>
>> Hypothesis: The fact that the average Haskeller thinks that this kind of dense
>> cryptic material is "pretty garden-variety" notation possibly explains why
>> normal people think Haskell is scary.
>
> That's ridiculous.

It's not so ridiculous in the sense that some people might (wrongly) think they won't understand Haskell until they understand at least some of that cryptic material. Many long discussions about Haskell on reddit seem to feature a beginner thinking he must "understand monads" before going on. Yes, the famous monads, which aren't that complicated at all - but they are still part of this dense cryptic material when you're a newbie who used to think he's smart because he knows C, Pascal, BASIC, and PHP, and learned PostScript's basics in a few days. (Then you start looking at this curiosity called Haskell, and you stumble upon haskell-cafe, and then you are humbled.)

(I might be talking about the me of 3 years ago, here :) )

> You're comparing apples to oranges: using Haskell and understanding the
> underlying theory are two completely different things.

Agree 100%, but it's not automatic for a newcomer to see it that way.

David.
On 25/10/2010 10:36 PM, Gregory Collins wrote:
> Andrew Coppin <[hidden email]> writes:
>
>> Hypothesis: The fact that the average Haskeller thinks that this kind of dense
>> cryptic material is "pretty garden-variety" notation possibly explains why
>> normal people think Haskell is scary.
>
> That's ridiculous. You're comparing apples to oranges: using Haskell and
> understanding the underlying theory are two completely different
> things. Put it to you this way: a mechanic can strip apart and rebuild
> an engine without knowing any of the organic chemistry which explains
> how and why the catalytic converter works. If you work for Ford, on the
> other hand...

I didn't say "people think Haskell is scary because type theory looks crazy". I said "people think Haskell is scary because the typical Haskeller thinks that type theory looks *completely normal*". As in, Haskellers seem to think that every random stranger will know all about this kind of thing, and be completely unfazed by it.

Go look at how many Haskell blog posts mention type theory, or predicate calculus, or category theory, or abstract algebra. Now go look at how many Java or C++ blog posts mention these things. Heck, even SQL-related blogs tend to barely mention the relational algebra. (Which, arguably, is because actual product implementations tend to deviate rather radically from it...)

> P.S. I did my computer science graduate work in type theory, so I may
> not be an "average Haskeller" in those terms. By "garden-variety" I
> meant to say that the concepts, notation, and vocabulary are pretty
> standard for the field, and I had no trouble reading it despite not
> having seriously read a type theory paper in close to a decade.

OK, fair enough.

>> If I were to somehow obtain this book, would it actually make any
>> sense whatsoever? I've read too many maths books which assume you
>> already know truckloads of stuff, and utterly fail to make sense until
>> you do. (Also, being a somewhat famous book, it's presumably extremely
>> expensive...)
>
> Introductory type theory is usually taught in computer science curricula
> at a 3rd or 4th year undergraduate level. If you understand some
> propositional logic and discrete mathematics, then "probably yes",
> otherwise "probably not."

I don't even know the difference between a proposition and a predicate. I also don't know exactly what "discrete mathematics" actually covers. Then again, I have no formal mathematical training of any kind. (You'd think holding an honours degree in "computer science" would cover this. But alas, my degree is *really* just Information Technology, despite what it says on the certificate...)
On 25/10/2010 11:01 PM, Lauri Alanko wrote:
> On Mon, Oct 25, 2010 at 10:10:56PM +0100, Andrew Coppin wrote:
>> Type theory doesn't actually interest me, I just wondered what the
>> hell all the notation means.
>
> That sounds like an oxymoron. How could you possibly learn what the
> notation "means" without learning about the subject that the notation
> is about? That's like saying "I'm not actually interested in calculus,
> I'd just like to know what the hell all these funny S-like symbols
> mean".

You can explain the integral notation in a few short sentences without having to undergo an entire semester of integral calculus training. Hell, the other night I was lying in bed with my girlfriend (who hates mathematics) and I managed to make her understand what a partial derivative is.

Now of course, if you needed to *use* integral calculus for something, that's another matter entirely. But just getting the gist of what it's about and what it's for is much simpler.

> So I will add my voice to those recommending TAPL.

OK, well maybe I'll see if somebody will buy it for me for Christmas or something...
On 26 October 2010 19:29, Andrew Coppin <[hidden email]> wrote:
> I don't even know the difference between a proposition and a predicate.

A proposition is an abstraction from sentences, the idea being that e.g. "Snow is white", "Schnee ist weiß" and "La neige est blanche" are all sentences expressing the same proposition.

Propositional logic is quite a simple logic, where the building blocks are atomic formulae and the usual logical connectives. An example of a well-formed formula might be "P → Q". It tends to be the first system taught to undergraduates, while the second is usually the first-order predicate calculus, which introduces predicates and quantifiers. Predicates are usually interpreted as properties; we might write "P(x)" or "Px" to indicate that object x has the property P.

> I also don't know exactly what "discrete mathematics" actually covers.

Discrete mathematics is concerned with mathematical structures which are discrete, rather than continuous. Real analysis, for example, is concerned with real numbers ("the continuum") and thus would not be covered. Graph theory, on the other hand, concerns objects (nodes and edges) which have sharp cutoffs: if an edge directly connects two nodes, there are no intermediate nodes, whereas if you consider an interval between any two real numbers, no matter how close, there are more real numbers between them. Computers being the kind of things they are, discrete mathematics has a certain obvious utility.

Benedict.
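[Editorial note: to make the distinction above concrete in the thread's own language, here is a small illustrative sketch - my own, not taken from any message or book cited here. It models propositional formulas as a Haskell data type, with an evaluator that computes a formula's truth value under a valuation assigning truth values to the atoms.]

```haskell
-- Propositional formulas: atoms combined with the usual connectives.
data Prop = Atom String        -- an atomic formula such as P or Q
          | Not Prop           -- negation
          | Prop :/\: Prop     -- conjunction ("and")
          | Prop :\/: Prop     -- disjunction (inclusive "or")
          | Prop :->: Prop     -- the conditional ("if ... then")

-- Evaluate a formula under a valuation v mapping atom names to truth values.
eval :: (String -> Bool) -> Prop -> Bool
eval v (Atom p)   = v p
eval v (Not f)    = not (eval v f)
eval v (f :/\: g) = eval v f && eval v g
eval v (f :\/: g) = eval v f || eval v g
eval v (f :->: g) = not (eval v f) || eval v g  -- false only when f is true and g is false
```

For example, under a valuation making P true and Q false, `eval v (Atom "P" :->: Atom "Q")` is False, matching the truth table for the conditional.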
On 26/10/2010 07:54 PM, Benedict Eastaugh wrote:
> On 26 October 2010 19:29, Andrew Coppin <[hidden email]> wrote:
>> I don't even know the difference between a proposition and a predicate.
>
> A proposition is an abstraction from sentences, the idea being that
> e.g. "Snow is white", "Schnee ist weiß" and "La neige est blanche" are
> all sentences expressing the same proposition.

Uh, OK.

> Propositional logic is quite a simple logic, where the building blocks
> are atomic formulae and the usual logical connectives. An example of a
> well-formed formula might be "P → Q". It tends to be the first system
> taught to undergraduates, while the second is usually the first-order
> predicate calculus, which introduces predicates and quantifiers.

Already I'm feeling slightly lost. (What does the arrow denote? What are "the usual logical connectives"?)

> Predicates are usually interpreted as properties; we might write
> "P(x)" or "Px" to indicate that object x has the property P.

Right. So a proposition is a statement which may or may not be true, while a predicate is some property that an object may or may not possess?

>> I also don't know exactly what "discrete mathematics" actually covers.
>
> Discrete mathematics is concerned with mathematical structures which
> are discrete, rather than continuous.

Right... so its domain is simply *everything* that is discrete? From graph theory to cellular automata to finite fields to difference equations to number theory? That would seem to cover approximately 50% of all of mathematics. (The other 50% being continuous mathematics, presumably...)
On Oct 26, 2010, at 12:43 PM, Andrew Coppin wrote:
>> Propositional logic is quite a simple logic, where the building blocks
>> are atomic formulae and the usual logical connectives. An example of a
>> well-formed formula might be "P → Q". It tends to be the first system
>> taught to undergraduates, while the second is usually the first-order
>> predicate calculus, which introduces predicates and quantifiers.
>
> Already I'm feeling slightly lost. (What does the arrow denote?
> What are "the usual logical connectives"?)

The arrow is notation for "If P, then Q". The other "usual" logical connectives are "not" (denoted by ~, !, the funky little sideways L, and probably others); "or" (denoted by \/ or a small letter v, pronounced "or" or "vee", and also called "join"; also written | or ||, and probably others); and "and" (denoted by /\ or a smaller upside-down v, pronounced "wedge" or "and", and also called "meet"; also written & or &&, and probably others).

>> Predicates are usually interpreted as properties; we might write
>> "P(x)" or "Px" to indicate that object x has the property P.
>
> Right. So a proposition is a statement which may or may not be true,
> while a predicate is some property that an object may or may not
> possess?

Yes. For any given object a (which is not a "variable" - we usually reserve x, y, z to denote variables, and objects are denoted by a, b, c), P(a) is a proposition "about" a. Something like "forall x P(x)" means that P(x) is true for every object in the domain you are considering.

>>> I also don't know exactly what "discrete mathematics" actually covers.
>>
>> Discrete mathematics is concerned with mathematical structures which
>> are discrete, rather than continuous.
>
> Right... so its domain is simply *everything* that is discrete? From
> graph theory to cellular automata to finite fields to difference
> equations to number theory? That would seem to cover approximately
> 50% of all of mathematics. (The other 50% being continuous
> mathematics, presumably...)

Basically, yes. There are some nuances, in that continuous structures might be studied in terms of discrete structures, and vice versa.
On 26 October 2010 20:43, Andrew Coppin <[hidden email]> wrote:
>> Propositional logic is quite a simple logic, where the building blocks
>> are atomic formulae and the usual logical connectives. An example of a
>> well-formed formula might be "P → Q". It tends to be the first system
>> taught to undergraduates, while the second is usually the first-order
>> predicate calculus, which introduces predicates and quantifiers.
>
> Already I'm feeling slightly lost. (What does the arrow denote? What are
> "the usual logical connectives"?)

The arrow is just standard logical notation for the conditional: "if ... then" in English. If you were to read "P → Q" aloud, it would sound like "If P then Q". It's one of the usual logical connectives I mentioned. The others are "∧" (conjunction; "and", in English); "∨" (disjunction; "or" - note that it's inclusive: if both operands are true then the whole expression is true. This is different to how the word "or" functions in everyday English, where it's exclusive: you can have cheesecake or apple pie, but not both); "¬" (negation; "not" - the only unary operator in the usual group of connectives); and "↔" (biconditional; "if and only if").

They are the usual logical connectives purely in virtue of the fact that they're the ones we tend to use most of the time. Historically speaking, this is because they seem to map well onto use cases in natural language. However, there are many other possible logical connectives; I have only mentioned a few unary and binary ones. There are 4 unary operators, 16 binary operators, 256 ternary operators, and in general 2 ^ 2 ^ n n-ary logical connectives for n < ω (i.e. the first infinite ordinal: we only consider operators with a finite number of operands).
We could write the four unary operators quite easily in Haskell, assuming that we take them as functions from Bool to Bool (note that these definitions shadow the Prelude's, so a real module would begin with "import Prelude hiding (Bool(..), not, id)"):

> data Bool = True | False
>
> not :: Bool -> Bool
> not True = False
> not False = True
>
> id :: Bool -> Bool
> id True = True
> id False = False
>
> true :: Bool -> Bool
> true _ = True
>
> false :: Bool -> Bool
> false _ = False

The `true` and `false` functions are constant functions: they always produce the same output regardless of their inputs. For this reason they're not very interesting. The `id` function's output is always identical to its input, so again it's not very interesting. The `not` function is the only one which looks like it holds any interest, producing the negation of its input. Given this, it shouldn't surprise us that it's the one which actually gets used in formal languages.

>> Predicates are usually interpreted as properties; we might write
>> "P(x)" or "Px" to indicate that object x has the property P.
>
> Right. So a proposition is a statement which may or may not be true, while a
> predicate is some property that an object may or may not possess?

Exactly. The sentence "There is a black beetle" could be taken to express the proposition that there is a black beetle. It includes the predicates "is black" and "is a beetle". We might write this in first-order logic, to make the predicates (and the quantification) more explicit, as "∃x [Black(x) ∧ Beetle(x)]". I.e., "There exists something that is black and is a beetle".

I hedged, saying that we usually interpret predicates as properties, because the meaning of an expression in a formal language (or, indeed, any language) depends on the interpretation of that language, which assigns meanings to the symbols and truth-values to its sentences.

Benedict.
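[Editorial note: the 2 ^ 2 ^ n count mentioned above is easy to check mechanically. The sketch below is my own illustration, not part of the original message: it computes the closed form and also enumerates the distinct n-ary truth tables directly.]

```haskell
-- Closed form: an n-ary truth table has 2^n rows, and each row can
-- independently be True or False, giving 2^(2^n) distinct connectives.
connectives :: Int -> Integer
connectives n = 2 ^ (2 ^ n)

-- Direct enumeration: all possible truth tables with 2^n rows,
-- via the list monad (each row ranges over False and True).
truthTables :: Int -> [[Bool]]
truthTables n = sequence (replicate (2 ^ n) [False, True])
```

Here `length (truthTables 1)` is 4 and `length (truthTables 2)` is 16, agreeing with `connectives 1` and `connectives 2` - the four unary operators written out above are exactly the n = 1 case.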
On 27/10/2010, at 7:29 AM, Andrew Coppin wrote:
> I didn't say "people think Haskell is scary because type theory looks
> crazy". I said "people think Haskell is scary because the typical
> Haskeller thinks that type theory looks *completely normal*". As in,
> Haskellers seem to think that every random stranger will know all about
> this kind of thing, and be completely unfazed by it.

I came to Haskell from ML. The ML community isn't into category theory (much; Rod Burstall at Edinburgh was very keen on it), but they are very definitely into type theory. The experience of ML was that getting the theory right was the key to getting implementations without major loopholes.

The way type systems are presented owes a great deal to one approach to specifying logics; type *inference* is basically a kind of deduction, and you want type inference to be sound, so you have to define what the valid deduction steps are. I came to ML from logic, so it all made perfect sense.