> In Haskell, most of these assumptions are invalid:
I see. Thanks for such a clear explanation.
> > * something may be curried or a member of a strange typeclass (like
> > printf). No assumptions about the number of arguments can be made
> > * It may be possible that we do not yet know the type of a because
> > we can't infer its type without knowing the type of x
> > * show obj is definitely a String
> > * 2 is of type Num a => a. What if there are two somethings, one
> > with a parameter of type Int and one with a Float?
> >
> > You see, it's not so easy.

_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe
In reply to this post by MigMit
On Tue, Dec 20, 2011 at 9:16 PM, MigMit <[hidden email]> wrote:
It is not "limiting" to make distinctions that capture real differences. An overly broad generalization limits what can be proved. Can we prove that every vehicle with wheels has a motor? Of course not -- bicycles exist. Can we prove every car has a motor? Yes we can. Throwing bottoms into the collection of values is like throwing bicycles into the collection of cars. We can say /less/ about the collection than we could before, /because/ the collection is more general.
> And denotational semantics is not just nice. It is useful. It's the best way to understand why the program we just wrote doesn't terminate.

Denotational semantics is unrealistic. It is a Platonic model of constructive computation. Alan Turing introduced the notion of an "oracle" to deal with what we are calling bottom. An oracle is a "thing" that (magically) "knows" what a bottom value denotes, without having to wait for an infinite number of steps. Does Haskell offer oracles? If not, we should abandon the use of distinct bottoms. The /defining/ feature of a bottom is that it doesn't have an interpretation.
Note that I am not suggesting abandoning the notion of /a/ bottom. They should all be treated alike, and be treated differently from every other Haskell value. Every other Haskell value /does/ have an interpretation. Bottom is different from every "other value". We should exclude it from the collection of values. Treating things that are not alike as if they are introduces a loss of information. We can prove useful things about the collection of "real" values that we cannot prove about bottom, and so cannot prove about the collection of real values plus bottoms.
I happen to only write Haskell programs that terminate. It is not that hard. We must merely restrict ourselves to the total fragment of the language, and there are straightforward methods to do so. In particular, I suggest the paper "Fast and Loose Reasoning is Morally Correct":
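One such straightforward method (a small sketch of my own, not taken from the paper) is to stick to structural recursion, where every recursive call peels off part of the input:

```haskell
-- Staying in the total fragment: structural recursion only.
-- Each call recurses on a strict sub-part of its argument, so
-- termination on finite input follows by induction on the list.
len :: [a] -> Int
len []     = 0
len (_:xs) = 1 + len xs

-- General recursion (e.g. 'f x = f x') is what reintroduces bottom;
-- avoiding it keeps every definition like this one terminating.
main :: IO ()
main = print (len "total")   -- prints 5
```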
In reply to this post by Gregory Crosswhite
On Tue, Dec 20, 2011 at 10:30 PM, Gregory Crosswhite <[hidden email]> wrote:
A proof:

fst :: (a,a) -> a
fst (a,b) = a
This expression is basically nonsense. Should we accept straightforwardly ill-typed expressions like:

data Strict a = Strict !a

fst (Strict [1..])
just because the argument is "strictly" a bottom? Bottom inhabits every type, but only vacuously. To be generous in my interpretation, I assume you mean something like:
fst (_|_, 10) = _|_

That is still proved by

fst (x,y) = x

Things like seq, unsafeCoerce, and the like, are defined as (functions into) bottom in GHC.Prim, and the real semantic-changing magic they perform is done behind the scenes. It cannot be expressed as Haskell in the same Haskell context it is used. So assuming you mean something like:
fst (seq [1..] (1,2))

I must respond that you are using one of these magical keywords which change Haskell's semantics. They should be avoided.
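The distinction under discussion can be checked directly in a sketch (GHC's operational behaviour assumed; this is an illustration, not either poster's exact example):

```haskell
-- fst never demands the second component, so a bottom there is harmless:
ok :: Int
ok = fst (1, undefined)             -- evaluates to 1

-- seq forces its first argument before returning the second, so this
-- whole expression is _|_ and throws when demanded:
boom :: Int
boom = fst (seq undefined (1, 2))

main :: IO ()
main = print ok   -- prints 1; printing 'boom' instead would raise an error
```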
I am collapsing the semantics for "distinct" bottoms into a single bottom and noting that it has no interpretation as a Haskell value. Notice that this brings Haskell's type theory in line with ZF and typed set theories. In those theories, bottom merely exists as a syntactic expression with no interpretation. It is the proto-value which is not equal to itself. We can say that it inhabits every type. But that is only vacuously. Bottom does not exist.
We can stay in the total fragment of Haskell very easily, essentially by using my quotient construction for bottom: It requires a shift in point of view, from denotational semantics and "computational effects" (the things we're trying to avoid by going functional and lazy!) to the language of logic, proof, and interpretation. It is possible, consistent, and much simpler both conceptually and in use.
I would make the 'type' symbol a single character, à la Agda. For example:

a : Int

If your users are writing a lot of types, make it easy!

On Dec 22, 2011 10:42 AM, "Alexander Solla" <[hidden email]> wrote:
In reply to this post by Alexander Solla
On Dec 22, 2011, at 12:25 PM, Alexander Solla wrote:
Sure, throwing bottom into the set of values means that we can no longer prove as many nice properties about them. However, since bottom *does* exist in this set (because functions cannot be guaranteed to terminate), the properties that we do prove will have more relevance.

Cheers,
Greg
In reply to this post by Alexander Solla
On Dec 22, 2011, at 12:40 PM, Alexander Solla wrote:
This is only "nonsense" because you refuse to accept that there are valid formalisms other than your own that contain _|_ as a perfectly valid entity. :-)
No, each type has its own value for _|_, and your example demonstrates why this makes more sense than making all _|_'s be equivalent.
So... now you want to throw out seq so that we no longer have a way to force the evaluation of values, and the motivation for this is because when we throw out _|_ then we no longer have a formal way to describe the semantics of seq?
I agree that if you collapse all of the distinct bottoms then you get a mess, but since whenever we are talking about semantics in the Haskell community we give each type its own _|_, it is an incredibly moot point; it's like saying that the problem with cars is that if you remove all of their wheels then they have a lot of trouble getting anywhere at all. :-)

Cheers,
Greg
In reply to this post by Alexander Solla
On 22 Dec 2011, at 06:25, Alexander Solla wrote:

> Denotational semantics is unrealistic.

And so are imaginary numbers. But they are damn useful for electrical circuit calculations, so who cares?

> The /defining/ feature of a bottom is that it doesn't have an interpretation.

What do you mean by "interpretation"?

> They should all be treated alike, and be treated differently from every other Haskell value.

But they ARE very similar to other values. They can be members of otherwise meaningful structures, and you can do calculations with these structures. "fst (1, _|_)" is a good and meaningful calculation.

> Every other Haskell value /does/ have an interpretation.

So, (_|_) is bad, but (1, _|_) is good? You know, my scientific advisor used to say "math is about calling different things by the same name; philosophy is about calling the same thing by different names". It seems to me that philosophy is something you're doing now, whereas programming is all about math.

> I happen to only write Haskell programs that terminate.

Sure, but if you've ever used recursion, then you do have bottoms in your program.
In reply to this post by Alexander Solla
Alexander Solla wrote:
>> And denotational semantics is not just nice. It is useful. It's the best
>> way to understand why the program we just wrote doesn't terminate.
>
> Denotational semantics is unrealistic. It is a Platonic model of
> constructive computation. Alan Turing introduced the notion of an "oracle"
> to deal with what we are calling bottom. An oracle is a "thing" that
> (magically) "knows" what a bottom value denotes, without having to wait for
> an infinite number of steps. Does Haskell offer oracles? If not, we
> should abandon the use of distinct bottoms. The /defining/ feature of a
> bottom is that it doesn't have an interpretation.

Huh? I don't see the problem.

Introducing bottom as a value is a very practical way to assign a well-defined mathematical object to each expression that you can write down in Haskell. See

http://en.wikibooks.org/wiki/Haskell/Denotational_semantics

It's irrelevant whether _|_ is "unrealistic"; it's just a mathematical model anyway, and a very useful one at that. For instance, we can use it to reason about strictness, which gives us information about lazy evaluation and operational semantics.

Best regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com
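The strictness reasoning mentioned above can be sketched concretely: a function f is strict iff f _|_ = _|_, and we can probe this with undefined (a hedged illustration of the standard definition, with GHC's behaviour assumed):

```haskell
-- const is non-strict in its second argument: const 1 _|_ = 1.
lazyCase :: Int
lazyCase = const 1 undefined      -- evaluates to 1

-- (+) on Int is strict in both arguments: 1 + _|_ = _|_.
strictCase :: Int
strictCase = 1 + undefined        -- throws when demanded

main :: IO ()
main = print lazyCase   -- prints 1; printing 'strictCase' would raise an error
```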
In reply to this post by Robert Clausecker
On Mon, Dec 19, 2011 at 8:20 PM, Robert Clausecker <[hidden email]> wrote:
> Image you would create your own language with a paradigm similar to
> Haskell or have to chance to change Haskell without the need to keep any
> compatibility. What stuff would you add to your language, what stuff
> would you remove and what problems would you solve completely different?
>
> Thanks in advance for all answers, yours
>
> Robert Clausecker

A whole lot (a surprisingly very large amount) of things don't require breaking compatibility. Also a lot of things are really amazing but need smarter people than myself to invent them (the Constraint kind is a good example). And many things are trivial and superficial.

I agree with everyone who mentioned it about giving things user-friendly names and leaving the mathematical connections to the documentation. I'm partial to Mappable/Sequenceable/Runnable for Functor/Applicative/Monad, but doubtless better ones are possible. I would define Monad in terms of join :p, and use (=<<) as the default bind.

I also agree with name: Type instead of name :: Type. I would make : bind tighter. I would rename the * kind to Type, because (Type -> Constraint) looks less weird than (* -> Constraint).

I would change some things to be just a little bit more C/Unix-like: != for inequality, allow (not require!) opening and closing braces on the same line, * instead of _ as the wildcard.

Many things are in the realm of "this could definitely be done better, but I'm not sure how, either": tuples, records, and modules, in ascending order. Records would be lens-based because composability is nice, but that's about as far as I know. The operative principle with modules would be that after 'import Module' you should be good to go: manual intervention to avoid name clashes is a very nice feature, but you should only have to use it rarely. (In other words, much more control would be given to module authors over how things are exported.)
Modules would be parametrizable on types - for example, for FRP libraries where every signature includes the Time type. (If I knew more about ML-style modules I might be advocating them.)

I would make the whitespace around infix operators (and other punctuation like list literals) mandatory and significant. It's how you write it in most cases anyway, and how you should have in most of the rest. This frees up a lot of "syntax space" which could be used for various things: special built-in syntax, prefix/postfix operators, and you could have normal-looking array[and] record.access like every other language. (To be clear, list literals would still look [like, this]; it's just the presence or absence of whitespace between them and the preceding identifier which would be significant in this case.)

Strictness types could be added as a language extension, but either way I would add them. I would put greater emphasis on unboxed polymorphism by multi-instantiation over polymorphism by boxing and dictionary passing (it's not nice that abstract code is slower than monotyped code), but that's an implementation issue.

I would add language support for mutating values without explicitly using an IORef, provided you're doing it in the right monad and the effects don't "leak", like what Disciple has but with better syntax. I would distinguish things which read from memory in an impure way from things which write to memory from things which Do Things In The Outside World like write a file. (Maybe by lifting Disciple's effect typing wholesale, but I'm attached to monads.)
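The "Monad in terms of join" idea above can be sketched as a hypothetical class (the names here are assumptions for illustration, not the real base-library hierarchy):

```haskell
-- A hedged sketch of a join-based Monad class.
class Functor m => Monad' m where
  unit  :: a -> m a
  join' :: m (m a) -> m a

  -- bind derived from join, in the (=<<) argument order suggested above
  bind' :: (a -> m b) -> m a -> m b
  bind' f = join' . fmap f

-- Example instance: joining nested Maybes.
instance Monad' Maybe where
  unit = Just
  join' (Just (Just x)) = Just x
  join' _               = Nothing

main :: IO ()
main = print (bind' (\x -> Just (x + 1)) (Just 1))   -- prints Just 2
```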
2011/12/22 Gábor Lehel <[hidden email]>
I would have compose (probably not called '.') read the same way we read this sentence (and Unix pipes), i.e. left to right. I would not overload . to mean both compose and module access. Probably GADT syntax could be the norm and classic 'data' could be removed.
In reply to this post by Alexander Solla
Alexander Solla wrote:
> I happen to only write Haskell programs that terminate. It is not that
> hard. We must merely restrict ourselves to the total fragment of the
> language, and there are straight-forward methods to do so.

Do (web/XML-RPC/whatever) server-type programs terminate?
In reply to this post by Heinrich Apfelmus
On Thu, Dec 22, 2011 at 4:19 AM, Heinrich Apfelmus
<[hidden email]> wrote:

> Alexander Solla wrote:
>>> And denotational semantics is not just nice. It is useful. It's the best
>>> way to understand why the program we just wrote doesn't terminate.
>>
>> Denotational semantics is unrealistic. It is a Platonic model of
>> constructive computation. Alan Turing introduced the notion of an "oracle"
>> to deal with what we are calling bottom. An oracle is a "thing" that
>> (magically) "knows" what a bottom value denotes, without having to wait for
>> an infinite number of steps. Does Haskell offer oracles? If not, we
>> should abandon the use of distinct bottoms. The /defining/ feature of a
>> bottom is that it doesn't have an interpretation.
>
> Huh? I don't see the problem.
>
> Introducing bottom as a value is a very practical way to assign a
> well-defined mathematical object to each expression that you can write down
> in Haskell. See
>
> http://en.wikibooks.org/wiki/Haskell/Denotational_semantics
>
> It's irrelevant whether _|_ is "unrealistic", it's just a mathematical model
> anyway, and a very useful one at that. For instance, we can use it to reason
> about strictness, which gives us information about lazy evaluation and
> operational semantics.

As another example: not that long ago, Bob Harper was complaining on his blog about how you can't use induction to reason about Haskell functions. But that's incorrect. What you can't use is induction based on a model where all data types are the expected inductively defined sets, and non-termination is modeled as an effect. This isn't surprising, of course, because Haskell's models (i.e. denotational semantics) aren't like that. But once you know what Haskell's models are---they model types as domains, and data types are inductively defined _domains_, not sets---then you in fact get induction principles based on those models (see, for instance, "Fibrational Induction Rules for Initial Algebras"*).
You need to prove two or three additional cases, but it works roughly the same as the plain ol' induction you seem to lose for having non-strict evaluation. And then you have one less excuse for not using a language with lazy evaluation. :)

-- Dan

* http://personal.cis.strath.ac.uk/~patricia/csl2010.pdf
In reply to this post by Bardur Arantsson-2
On 22 Dec 2011, at 17:49, Bardur Arantsson wrote:

> Alexander Solla wrote:
>
>> I happen to only write Haskell programs that terminate. It is not that
>> hard. We must merely restrict ourselves to the total fragment of the
>> language, and there are straight-forward methods to do so.
>
> Do (web/XML-RPC/whatever) server type programs terminate?

No, but "total" and "terminating" are not the same. Those *co*programs will, if they're any good, be total by virtue of their productivity rather than their termination.

What's slightly controversial is the claim that we "must merely restrict ourselves to the total fragment of the language". It would be more controversial to claim that some new Haskell-like language should restrict us to total programs. I'd be glad if "pure" meant "total", but partiality were an effect supported by the run-time system. Then we could choose to restrict ourselves, but we wouldn't be restricted by the language. This is not the first time the issue has surfaced, nor will it be the last.

It's customary at this point to object that one wouldn't want to program with the monadic notation, just for the effect of partiality. I'd counter that I don't want to program with the monadic notation, for any effects: I'd like to program with an applicative notion, but in monadic types. That's what I'd do different, and for me, the subject is not a hypothetical question.

All the best
Conor
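The "total by productivity" point can be sketched in Haskell: a server-like loop need not terminate, so long as each step productively emits output before recursing (a minimal illustration, with the handler and types assumed):

```haskell
-- A total-by-productivity coprogram. 'serve' may run forever on an
-- infinite request stream, but every recursive step first produces one
-- response, so any finite prefix of its behaviour is well-defined.
serve :: (req -> resp) -> [req] -> [resp]
serve handle (r:rs) = handle r : serve handle rs
serve _      []     = []

-- Consumers can take any finite prefix of the infinite session:
main :: IO ()
main = print (take 3 (serve (+1) [1..]))   -- prints [2,3,4]
```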
Sent from iPad

22.12.2011, at 23:56, Conor McBride <[hidden email]> wrote:

> I'd be glad if "pure" meant "total", but
> partiality were an effect supported by the run-time system. Then we
> could choose to restrict ourselves, but we wouldn't be restricted by the
> language.

I second that. Having a special "partiality" monad would be nice. However, I'm not certain as to how it would interact with recursion: if f is a total function, fix f could be (and almost certainly would be) a possibly undefined value. So, fix should have type "(a -> a) -> Partial a"; that's OK, but implicit uses of fix (I mean let statements) would be quite different.

> I'd like to program with an applicative notion, but
> in monadic types. That's what I'd do different, and for me, the subject
> is not a hypothetical question.

So... you are developing a programming language with all calculations being automatically lifted to a monad? What if we want to do calculations with monadic values themselves, like, for example, store a few monadic calculations in a list (without joining all their effects as the sequence function does)?
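The "partiality" monad being discussed can be sketched as the well-known delay monad (an illustration; the names and the fuel-based runner are my assumptions, not necessarily what either poster has in mind):

```haskell
-- A 'Partial a' either has a value now, or postpones for one more step;
-- bottoms become infinite chains of 'Later'.
data Partial a = Now a | Later (Partial a)

instance Functor Partial where
  fmap f (Now a)   = Now (f a)
  fmap f (Later p) = Later (fmap f p)

instance Applicative Partial where
  pure = Now
  Now f   <*> p = fmap f p
  Later q <*> p = Later (q <*> p)

instance Monad Partial where
  Now a   >>= f = f a
  Later p >>= f = Later (p >>= f)

-- Running with fuel recovers a Maybe, so non-termination is observable
-- only as running out of steps:
runFor :: Int -> Partial a -> Maybe a
runFor _ (Now a)           = Just a
runFor n (Later p) | n > 0 = runFor (n - 1) p
runFor _ _                 = Nothing

main :: IO ()
main = print (runFor 10 (Later (Now (42 :: Int))))   -- prints Just 42
```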
In reply to this post by Rustom Mody
> I would have compose (probably not called '.') read the same way we read this sentence (and unix pipes) ie left to right.

You can use >>> from Control.Arrow for that if you want.

Erik
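For instance, with the standard Control.Arrow combinator, composition already reads left to right (a small sketch):

```haskell
import Control.Arrow ((>>>))

-- The pipeline reads in the order it runs: split into words,
-- measure each, then sum the lengths.
pipeline :: String -> Int
pipeline = words >>> map length >>> sum

main :: IO ()
main = print (pipeline "left to right")   -- prints 11
```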
In reply to this post by MigMit
On 22 Dec 2011, at 21:29, MigMit wrote:

> Sent from iPad
>
> 22.12.2011, at 23:56, Conor McBride <[hidden email]> wrote:
>
>> I'd be glad if "pure" meant "total", but
>> partiality were an effect supported by the run-time system. Then we
>> could choose to restrict ourselves, but we wouldn't be restricted by the
>> language.
>
> I second that. Having a special "partiality" monad would be nice.
> However, I'm not certain as to how it would interact with recursion —
> if f is a total function, fix f could be (and almost certainly would
> be) a possibly undefined value. So, fix should have type
> "(a -> a) -> Partial a"; that's OK, but implicit uses of fix (I mean
> let statements) would be quite different.

Indeed they would, at least to the extent of making clear in the type on what basis recursive calls should be checked.

>> I'd like to program with an applicative notion, but
>> in monadic types. That's what I'd do different, and for me, the
>> subject is not a hypothetical question.
>
> So... you are developing a programming language with all
> calculations being automatically lifted to a monad? What if we want
> to do calculations with monadic values themselves, like, for
> example, store a few monadic calculations in a list (without joining
> all their effects as the sequence function does)?

The plan is to make a clearer distinction between "being" and "doing" by splitting types clearly into an effect part and a value part, in a sort of a Levy-style call-by-push-value way. The notation

[<list of effects>]<value type>

is a computation type whose inhabitants might *do* some of the effects in order to produce a value which *is* of the given value type. But it is always possible to make a value type from a computation type by suspending it (with braces), so that

{[<list of effects>]<value type>}

is a value type for suspended computations, which *are* things which one could potentially *do* at another time.
But we can manipulate suspended computations purely. Haskell doesn't draw a clear line in types between the effect part and the value part, or support easy fluidity of shifting roles between the two. Rather we have two modes: (1) the implicit partiality mode, where the value part is the whole of the type and the notation is applicative; (2) the explicit side-effect mode, where the type is an effect operator applied to the value type and the notation is imperative. It's an awkward split, and it makes it tricky to make clean local renegotiations of effectful capabilities by hiding or handling them. Also, I don't see why partiality deserves an applicative notation where other, perhaps more benign effects (like *handled* exceptions) are forced into an imperative (or clunky Applicative) mode.

Maybe this strays too far to classify as "Haskell-like", but it's an attempt to re-rationalise techniques that emerged from Haskell programming.

Cheers
Conor
In reply to this post by MigMit
On Fri, 2011-12-23 at 01:29 +0400, MigMit wrote:
> Sent from iPad
>
> 22.12.2011, at 23:56, Conor McBride <[hidden email]> wrote:
>
>> I'd be glad if "pure" meant "total", but
>> partiality were an effect supported by the run-time system. Then we
>> could choose to restrict ourselves, but we wouldn't be restricted by the
>> language.
>
> I second that. Having a special "partiality" monad would be nice.
> However, I'm not certain as to how it would interact with recursion —
> if f is a total function, fix f could be (and almost certainly would
> be) a possibly undefined value. So, fix should have type "(a -> a) ->
> Partial a"; that's OK, but implicit uses of fix (I mean let
> statements) would be quite different.

rec. All implicit fix can be changed into explicit, so I imagine that:

let rec f x = x -- a -> Partial a
let g x = x     -- a -> a

Regards
In reply to this post by Chris Wong
On Tue, Dec 20, 2011 at 3:10 PM, Chris Wong
<[hidden email]> wrote:

> On Wed, Dec 21, 2011 at 10:53 AM, Matthew Farkas-Dyck
> <[hidden email]> wrote:
>> With GHC 7.0.3:
>>
>> $ cat test.hs
>> class ℝ a where {
>> test :: a;
>> };
>>
>> (∈) :: Eq a => a -> [a] -> Bool;
>> x ∈ (y:ys) = x == y || x ∈ ys;
>>
>> main = putStrLn "Two of three ain't bad (^_~)";
>> $ runhaskell test.hs
>> Two of three ain't bad (^_~)
>> $
>
> Why not expand it even further?

My experience with Agda makes me think that extending it further can make it painful to program in. Initially I thought that using unicode symbols would just make input a bit slower and that editor support could address that. You know, like writing about math using latex.

My actual experience with Agda was different than that. I was using Emacs, and I found that I needed to make my font size very large to see all the detail of the unicode symbols clearly enough to distinguish between them fully. The alternative was using the support in emacs for displaying the codepoint, as a number, for any glyph I wanted to distinguish. Perhaps it's still "just an issue of editor support", but it left a sour taste in my mouth.

Jason
In reply to this post by Conor McBride-2
On 23 Dec 2011, at 02:11, Conor McBride wrote:

>> So... you are developing a programming language with all calculations being automatically lifted to a monad? What if we want to do calculations with monadic values themselves, like, for example, store a few monadic calculations in a list (without joining all their effects as the sequence function does)?
>
> The plan is to make a clearer distinction between "being" and "doing" by
> splitting types clearly into an effect part and a value part, in a sort
> of a Levy-style call-by-push-value way. The notation
>
> [<list of effects>]<value type>
>
> is a computation type whose inhabitants might *do* some of the effects in
> order to produce a value which *is* of the given value type.

Oh, so it's not an arbitrary monad, but a single one. That's a bit disappointing.