Hi Folks,
I've come across this a few times - "In Haskell, one can prove the correctness of the code" - Is this true? I know that static typing and strong typing of Haskell eliminate a whole class of problems - is that related to proving correctness?
Is it about QuickCheck - if so, how is it different from having test suites in projects using mainstream languages?

Regards,
Kashyap
_______________________________________________ Haskell-Cafe mailing list [hidden email] http://www.haskell.org/mailman/listinfo/haskell-cafe |
On 11 February 2011 22:06, C K Kashyap <[hidden email]> wrote:
> Hi Folks,
> I've come across this a few times - "In Haskell, one can prove the
> correctness of the code" - Is this true?

I'm not quite sure where you got that...

But since Haskell is pure, we can also do equational reasoning, etc. to help prove correctness. Admittedly, I don't know how many people actually do so...

> I know that static typing and strong typing of Haskell eliminate a whole
> class of problems - is that related to the proving correctness?
> Is it about Quickcheck - if so, how is it different from having test suites
> in projects using mainstream languages?

QuickCheck doesn't prove correctness: I had a bug that survived several releases tested regularly during development with a QC-based testsuite before it arose (as it required a specific condition to be satisfied for the bug to happen).

As far as I know, a testsuite - no matter what language or what tools/methodologies are used - for a non-trivial piece of work just provides a reasonable degree of assurance of correctness; after all, there could be a bug/problem you hadn't thought of!

--
Ivan Lazar Miljenovic
[hidden email]
IvanMiljenovic.wordpress.com
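Ivan's point - that a property can survive extensive random testing while still hiding a bug - can be sketched with a plain Boolean property. This is a hypothetical example using only the Prelude; with the QuickCheck library installed you would hand the property to quickCheck:

```haskell
-- Hypothetical example: a property that random testing will almost
-- certainly pass, yet is false for exactly one input.
absInt :: Int -> Int
absInt n = if n < 0 then negate n else n

-- "absInt never returns a negative number" -- looks obviously true...
prop_absNonNegative :: Int -> Bool
prop_absNonNegative n = absInt n >= 0

-- ...but negate (minBound :: Int) overflows back to minBound, so the
-- property fails for one input out of 2^64 -- an input a random
-- generator is very unlikely to produce.
```

Running `quickCheck prop_absNonNegative` would, in all likelihood, report the property as passing - which is exactly why passing tests are not a proof.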
I think one thing it means is that, with the type system, you just can't do random things wherever you want. In the pure world, if you want to transform values from one type to another, you always need to have implementations of functions available to do that. (You can't just take a pure value, get its memory location and reinterpret it as something else, without being explicit about it.) So when lining up your code (composing functions), you can be sure that, at least as far as types are concerned, everything is correct -- that such a program, as you wrote it, can actually exist, i.e. that all the appropriate functions exist.
And it is correct only that far -- the value-level coding is still up to you, so no mind-reading...

--
Markus Läll

On Fri, Feb 11, 2011 at 1:16 PM, Ivan Lazar Miljenovic <[hidden email]> wrote:
In reply to this post by C K Kashyap
On 02/11/2011 12:06 PM, C K Kashyap wrote:
> [...]
> I know that static typing and strong typing of Haskell eliminate a
> whole class of problems - is that related to the proving correctness?
> [...]

You might have read about "free theorems" arising from types. They are a method to derive certain properties that a value must satisfy by looking only at its type. For example, a value

> x :: a

can't be anything else than bottom, a function

> f :: [a] -> [a]

must commute with 'map', etc. For more information you may be interested in "Theorems for free!" [1] by Philip Wadler.

[1] http://ttic.uchicago.edu/~dreyer/course/papers/wadler.pdf
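The free theorem Steffen mentions can be spelled out concretely: for any f :: [a] -> [a] and any g, map g . f = f . map g, because f can only rearrange, drop, or duplicate elements - it cannot inspect them. A sketch (the particular choice of f below is arbitrary; any inhabitant of the type works):

```haskell
-- An arbitrary inhabitant of [a] -> [a]; the theorem holds for any such f.
f :: [a] -> [a]
f = reverse . take 3

-- The free theorem, stated as a testable property for one choice of g:
prop_freeTheorem :: [Int] -> Bool
prop_freeTheorem xs = map (* 2) (f xs) == f (map (* 2) xs)
```

The remarkable part is that no property of f's definition was used - only its type.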
In reply to this post by C K Kashyap
On Friday 11 February 2011 12:06:58, C K Kashyap wrote:
> Hi Folks,
>
> I've come across this a few times - "In Haskell, one can prove the
> correctness of the code" - Is this true?

One can also prove the correctness of the code in other languages. What makes these proofs much easier in Haskell than in many other languages is purity and immutability. Also the use of higher order combinators (you need to prove foldr, map, ... correct only once, not every time you use them).

Thus, proving correctness of the code is feasible for more complicated programmes in Haskell than in many other languages. Nevertheless, for sufficiently complicated programmes, proving correctness is unfeasible in Haskell too.

> I know that static typing and strong typing of Haskell eliminate a whole
> class of problems - is that related to the proving correctness?

Yes, strong static typing (and the free theorems mentioned by Steffen) give you a stronger foundation upon which to build the proof.

> Is it about Quickcheck - if so, how is it different from having test
> suites in projects using mainstream languages?

Testing can only prove code incorrect, it can never prove code correct (except for extremely simple cases where testing all possible inputs can be done; but guaranteeing that QuickCheck generates all possible inputs is generally harder than a proof without testing in those cases).
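Daniel's parenthetical - that exhaustive testing over a finite domain really is a proof - is easy to illustrate (a minimal sketch):

```haskell
-- De Morgan's law over Bool: the domain has only four input pairs,
-- so checking all of them constitutes a proof by exhaustion.
deMorgan :: Bool -> Bool -> Bool
deMorgan p q = not (p && q) == (not p || not q)

provedByExhaustion :: Bool
provedByExhaustion =
  and [deMorgan p q | p <- [False, True], q <- [False, True]]
```

For Int or [a] this approach collapses, of course: the input space is astronomically large or infinite, which is exactly when testing stops being a proof.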
In reply to this post by C K Kashyap
One way to prove the correctness of a program is to "calculate" it from its specification. If the specification is also a Haskell program, equational reasoning can be used to transform a (often inefficient) specification into an equivalent (but usually faster) implementation. Richard Bird describes many examples of this approach, one in his functional pearl on a program to solve Sudoku [1]. Jeremy Gibbons gives an introduction to calculating functional programs in his lecture notes of the Summer School on Algebraic and Coalgebraic Methods in the Mathematics of Program Construction [2].
Sebastian
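The calculation style Sebastian describes can be seen in miniature with list reversal: start from a quadratic specification and derive a linear implementation by equational reasoning. This is a standard textbook derivation, sketched here with the proof steps as comments:

```haskell
-- Specification: clear, but quadratic because of repeated (++).
revSpec :: [a] -> [a]
revSpec []       = []
revSpec (x : xs) = revSpec xs ++ [x]

-- Calculate an accumulator version from the specification
-- revAcc xs ys = revSpec xs ++ ys, by equational reasoning:
--   revAcc []       ys = revSpec [] ++ ys          = ys
--   revAcc (x : xs) ys = (revSpec xs ++ [x]) ++ ys
--                      = revSpec xs ++ (x : ys)    -- associativity of (++)
--                      = revAcc xs (x : ys)
revFast :: [a] -> [a]
revFast xs = revAcc xs []
  where
    revAcc []        ys = ys
    revAcc (x : xs') ys = revAcc xs' (x : ys)
```

The derived revFast is linear, and the calculation itself is the correctness proof: each step is an equality justified by a definition or a known law.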
In reply to this post by Daniel Fischer
On 11.02.2011 13:48, Daniel Fischer wrote:
[...]

+1

> Testing can only prove code incorrect, it can never prove code correct
> (except for extremely simple cases where testing all possible inputs can be
> done; but guaranteeing that QuickCheck generates all possible inputs is
> generally harder than a proof without testing in those cases).

Maybe SmallCheck is better than QuickCheck for such (finite and simple) cases.

http://hackage.haskell.org/package/smallcheck

C.
In reply to this post by C K Kashyap
Kashyap,
2011/2/11 C K Kashyap <[hidden email]>:
> I've come across this a few times - "In Haskell, one can prove the
> correctness of the code" - Is this true?
> I know that static typing and strong typing of Haskell eliminate a whole
> class of problems - is that related to the proving correctness?
> Is it about Quickcheck - if so, how is it different from having test suites
> in projects using mainstream languages?

You may be confusing Haskell with dependently typed programming languages such as Coq or Agda, where formal proofs of correctness properties of programs can be verified by the type checker.

Dominique
In reply to this post by C K Kashyap
On Fri, Feb 11, 2011 at 3:06 AM, C K Kashyap <[hidden email]> wrote:
> Hi Folks,

Let's interpret a type as a "partial specification" of a value. If we can express this "partial specification" completely enough that one (and only one, at least ignoring values like bottom) value is a member of the type, then any expression of that value must be formally correct.
There are a couple of issues: the type system is strong, but it still has limitations. There are types we might like to express, but can't. We might be able to express "supersets" of the type we really want, and the type inference engine will ensure that a value in the type meets at least this partial specification, but it cannot check to see if it is the "right" value in the type. That is up to us. Some of Haskell's properties, like referential transparency, equational reasoning, etc. make this easier than in other languages.
A related difficulty is that encoding specifications is a programming task in itself. Even if you correctly compile requirements, and are able to completely encode them in the type system, you might still introduce a logic mistake in this encoding. A similar issue is that logic mistakes can creep into the requirements compiling phase of a project. In either of these cases, your values would dutifully and correctly compute the wrong things.
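The "partial specification" idea above can be made concrete: a sufficiently polymorphic type can pin down its value completely, while a looser type cannot. A minimal sketch (the function names are illustrative, not from any library):

```haskell
-- The type (a -> a) has exactly one total inhabitant: the identity.
-- The type alone is a complete specification.
identity :: a -> a
identity x = x

-- A "superset" type admits many implementations, so the checker can
-- enforce the partial specification but cannot pick the right value:
addOne :: Int -> Int
addOne x = x + 1   -- x - 1 or x * 7 would type check just as well
```

This is the gap the rest of the message describes: the type system verifies membership in the type, and the value-level choice within that type is up to us.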
In reply to this post by C K Kashyap
On Fri, Feb 11, 2011 at 4:06 AM, C K Kashyap <[hidden email]> wrote:
> Hi Folks,
> I've come across this a few times - "In Haskell, one can prove the
> correctness of the code" - Is this true?

You can prove the correctness of code for any language that has precise semantics. Which is basically none of them (I believe a dialect of ML has one). But many languages come very close, and Haskell is one of them. In particular, Haskell's laziness allows very liberal use of equational reasoning, which is much more approachable as a technique for correctness proofs than operational semantics.

The compiler is not able to verify your proofs, as Coq and Agda can, except in very simple cases. On the other hand, real-world programmers have the advantage of not being forced to prove the correctness of their code, because proofs are hard (technically Coq and Agda only require you to prove termination, but many times termination proofs require knowing most properties of your program, so you have to essentially prove correctness anyway). I would like to see a language that allowed optional verification, but that is a hard balance to strike because of the interaction of non-termination and the evaluation that needs to happen when verifying a proof.

I have proved the correctness of Haskell code before. Mostly I prove that monads I define satisfy the monad laws when I am not sure whether they will. It is usually a pretty detailed process, and I only do it when I am feeling adventurous. I am not at home and I don't believe I've published any of my proofs, so you'll have to take my word for it :-P

There is recent research on automatically proving (not just checking like QuickCheck, but formal proofs) stated properties in Haskell programs. It's very cool. http://www.doc.ic.ac.uk/~ws506/tryzeno/

I would not characterize "provable code" as an essential defining property of Haskell, though it is easier than in imperative and in strict functional languages. Again, Agda and Coq are really the ones that stand out in the provable code arena.
And certainly I have an easier time informally verifying the correctness of Haskell code in my head than in other languages, because referential transparency removes many of the subtleties encountered in the process. I am often 100% sure of the correctness of my refactors.

Luke
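A proof of the kind Luke mentions can be quite small. For instance, the identity laws for the Maybe monad follow directly from the defining equations of bind. A sketch, using a standalone definition (rather than the Prelude's Monad instance) so the equations being reasoned about are visible:

```haskell
-- Maybe's bind, written out so we can reason equationally about it.
bindMaybe :: Maybe a -> (a -> Maybe b) -> Maybe b
bindMaybe Nothing  _ = Nothing
bindMaybe (Just x) k = k x

-- Left identity:  bindMaybe (Just a) k = k a
--   holds immediately by the second defining equation.
-- Right identity: bindMaybe m Just = m, by cases on m:
--   bindMaybe Nothing  Just = Nothing   (first equation)
--   bindMaybe (Just x) Just = Just x    (second equation)
-- Every case is covered, so the law holds for all m.
```

Each step replaces equals by equals using a defining equation - the equational reasoning that purity makes valid everywhere in a Haskell program.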
In reply to this post by Ivan Lazar Miljenovic
Ivan Lazar Miljenovic wrote:
> C K Kashyap wrote:
>
>> I've come across this a few times - "In Haskell, one can prove the
>> correctness of the code" - Is this true?
>
> I'm not quite sure where you got that...
>
> But since Haskell is pure, we can also do equational reasoning, etc.
> to help prove correctness. Admittedly, I don't know how many people
> actually do so...

I did, I did!

http://projects.haskell.org/operational/Documentation.html#proof-of-the-monad-laws-sketch

Regards,
Heinrich Apfelmus

--
http://apfelmus.nfshost.com
In reply to this post by Luke Palmer-2
> many of the subtleties encountered in the process. I am often 100%

While I have an intuitive understanding of what you mean about the correctness of refactoring ... I personally feel much more comfortable refactoring Haskell code ... as in - as long as I appease the compiler, things work correctly!!!
This is certainly not true in the case of imperative languages ... In all my work experience, I've always found other folks and myself very, very uncomfortable making changes to existing code ... which in my opinion contributes to software bloat!
Anyway, how can one go about explaining to an imperative programmer with no FP exposure what aspect of Haskell makes it easy to refactor?

Regards,
Kashyap
On Sat, Feb 12, 2011 at 6:08 AM, C K Kashyap <[hidden email]> wrote:
> Anyway, how can one go about explaining to an imperative programmer with no
> FP exposure - what aspect of Haskell makes it easy to refactor?

I think you just said it: typechecking, typechecking, typechecking. In Haskell, you can change one line of code and be confident that the compiler will force you to change every other line of code that's rendered nonsensical by your change. You just can't do that in C. It really liberates your mind and makes you less committed to your own design mistakes, since refactoring doesn't come with the gut-wrenching worry that you'll introduce a silent error as a result.

That said, I find that explaining Haskell's or ML's type system to someone used to a language with a much weaker type system is difficult. Many such people believe that type errors are trivial errors by definition and that the compiler doesn't give them any help finding significant errors, which is true for the languages they've used (they may even believe that typecheckers get in their way by forcing them to correct errors). What's important is not just that Haskell has static typing, but that algebraic data types are a rich enough language to let you express your intent in data and not just in code. That helps you help the compiler help you.

Cheers,
Tim

--
Tim Chevalier * http://cs.pdx.edu/~tjc/ * Often in error, never in doubt
"an intelligent person fights for lost causes, realizing that others are merely effects" -- E.E. Cummings
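Tim's point can be sketched with a closed sum type (a hypothetical example): extend it by one constructor, and GHC - with -Wall or -Wincomplete-patterns - flags every case analysis that the change renders incomplete, which is exactly the "compiler forces you to fix every affected line" experience:

```haskell
-- A closed enumeration of the cases the program handles.
data Color = Red | Green | Blue   -- later refactor: add "| Yellow" here...

describe :: Color -> String
describe Red   = "warm"
describe Green = "natural"
describe Blue  = "cool"
-- ...and GHC warns right here that the match no longer covers Yellow,
-- pointing at every definition the refactor invalidated.
```

In C, adding a value to an enum produces no such complaint at the switch statements that silently fall through.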
In reply to this post by C K Kashyap
On 2/11/11 06:06, C K Kashyap wrote:
> I've come across this a few times - "In Haskell, once can prove the
> correctness of the code" - Is this true?

Only up to a point. While most of the responses so far focus on the question from one direction, the other is epitomized by a Knuth quote: "Beware of bugs in the above code; I have only proved it correct, not tried it."

--
brandon s. allbery [linux,solaris,freebsd,perl] [hidden email]
system administrator [openafs,heimdal,too many hats] kf8nh
In reply to this post by Tim Chevalier
On 2/12/11 11:41 AM, Tim Chevalier wrote:
> What's important is not just that
> Haskell has static typing, but that algebraic data types are a rich
> enough language to let you express your intent in data and not just in
> code. That helps you help the compiler help you.

ADTs are an amazing thing to have. They directly express the types we usually think in, which liberates us to think about those types. Conversely, in C we have to use structs, untagged unions, and pointers, which makes us spend far too much time worrying about low-level representation issues instead of being able to think about the types we really care about (the lists, the nodes, graphs, trees, ...) and worrying about their high-level representation issues (does a list really capture the shape of my data, or would it be better to use a tuple, a priority queue, ...?)

Similarly, in most OO languages there's no way to define a class which has a fixed non-zero number of subclasses, so it's hard to match the clarity of ADTs' "no junk, no confusion". Instead, we waste time defensively programming against the subclasses our evil users will come up with. This is especially problematic when designing data structures that have to maintain invariants. And this often causes folks to move program logic from the type realm into the executable realm; how often have you seen methods which simulate dynamic dispatch by case analysis on a state or flag field?

Static typing, type inference, and lambdas are all excellent things, but I think the importance of ADTs is vastly underrated when comparing functional languages (of the ML or Haskell tradition) to procedural languages. Not only do they make life easier, but they also help with proving correctness.

--
Live well,
~wren
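wren's "no junk, no confusion" point in miniature (a small illustrative sketch): a sum type is closed, so a function over it needs no defensive branch for unknown subclasses, and the compiler can check the case analysis for completeness.

```haskell
-- A closed sum type: a tree is a Leaf or a Node, and nothing else.
-- No user of this module can add a third shape behind our back.
data Tree a = Leaf a | Node (Tree a) (Tree a)

-- Exhaustive by construction: no "unknown subclass" branch needed,
-- and GHC can verify that both shapes are handled.
size :: Tree a -> Int
size (Leaf _)   = 1
size (Node l r) = size l + size r
```

Compare the OO encoding: an abstract Tree class with Leaf and Node subclasses cannot stop a user from deriving a third subclass, so every traversal either trusts an open-ended dispatch or defends against cases that "cannot happen".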
In reply to this post by C K Kashyap
On Sat, 12 Feb 2011 19:38:31 +0530
C K Kashyap <[hidden email]> wrote:
> Anyway, how can one go about explaining to an imperative programmer
> with no FP exposure - what aspect of Haskell makes it easy to
> refactor?

Like other people have said, the static type system is a major factor. It's true that Haskell's type system is more advanced than most imperative languages', but it's also often not mentioned that static typing gives a stronger check in a functional language than in an imperative one, even in simple cases. This is because all input and output data flow is type checked in a function application, whereas imperative side effects might escape checking.

For example, the type signature for a variable swapping procedure in C:

void swap(int *a, int *b)

This will still type check even if it modifies only one of the argument references. However, if written functionally, it must return a pair:

swap :: (Int,Int) -> (Int,Int)

Now the type checker will reject any implementation that fails to return a pair of results in every case. Of course for a trivial example like swap this is easy to ensure in any imperative language, but in more complex programs it is actually quite common to forget to update some component of the state.

BTW, I found this and other interesting reflections on the advantages of FP vs. imperative in Martin Odersky's book "Programming in Scala".

Best regards,

Pedro Vasconcelos
Pedro Vasconcelos wrote:
> This is because all input and output data flow is type checked in a
> function application, whereas imperative side effects might escape
> checking.
>
> For example, the type signature for a variable swapping procedure in C:
>
> void swap(int *a, int *b)
>
> This will still type check even if it modified only one of the argument
> references. However, if written functionally, it must return a pair:
>
> swap :: (Int,Int) -> (Int,Int)

This benefit of explicit input and output values can interact nicely with parametric polymorphism:

swap :: (a, b) -> (b, a)

This more general type signature makes sure we are not just returning the input values unswapped.

Tillmann
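Tillmann's observation is a free-theorem argument in disguise: at the polymorphic type there is only one total implementation. A minimal sketch (Data.Tuple in the base library already provides this function, so defining it here is purely for illustration):

```haskell
-- Parametricity pins this down: a total function of this type cannot
-- inspect values of a or b, nor fabricate new ones, so the only thing
-- it can do with the components is swap them.
swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

-- At the monomorphic type (Int, Int) -> (Int, Int), "id" would also
-- type check; at the polymorphic type it is rejected, since (a, b)
-- and (b, a) are different types.
```

So the more general signature does not merely document intent - it rules out the "forgot to swap" bug at compile time.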
On Mon, 14 Feb 2011 12:54:55 +0100
Tillmann Rendel <[hidden email]> wrote:
> This benefit of explicit input and output values can interact nicely
> with parametric polymorphism:
>
> swap :: (a, b) -> (b, a)
>
> This more general type signature makes sure we are not just returning
> the input values unswapped.

Good point. This is a good reason why it's good practice to look for the possibility of writing more general functions and types, even when they end up being used in a single instance.

Pedro
In reply to this post by Pedro Vasconcelos
On Mon, Feb 14, 2011 at 12:06 PM, Pedro Vasconcelos <[hidden email]> wrote:
> On Sat, 12 Feb 2011 19:38:31 +0530
> C K Kashyap <[hidden email]> wrote:
>
>> Anyway, how can one go about explaining to an imperative programmer
>> with no FP exposure - what aspect of Haskell makes it easy to
>> refactor?
>
> Like other people have said, the static type system is a major factor.
> [...]
> BTW, I found this and other interesting reflections on the advantages of
> FP vs. imperative in Martin Odersky's book "Programming in Scala".

I'm not completely sure, but I suspect another part of it (or maybe I'm just rephrasing what you said?) has to do with the fact that in Haskell, basically everything is an expression. In imperative languages, the code is segregated into statements (which each contain expressions) which are evaluated individually for their side effects. Type checking occurs mainly within statements (in expressions), while between them it is minimal to nonexistent.
In Haskell, functions are one big expression -- even imperative code is represented by monadic expressions. If you have a complicated function that transforms lists in some way, and change something, the change has to satisfy the types of not just its immediate environment, or that of the complicated function you're writing, but also everything else in between, with the consequences of the change (and the consequences of the consequences...) being propagated automatically by the type inferencer. (The 'boundaries', so to speak, between expressions are defined by where you put explicit type signatures -- calling a function without an explicit signature is similar, from the typechecker's point of view, to having its contents inlined in the place where it was called. (Modulo things like the monomorphism restriction, it should be equivalent?))

Does this sound roughly correct?

--
Work is punishment for failing to procrastinate effectively.
On Mon, 14 Feb 2011 15:07:01 +0100
Gábor Lehel <[hidden email]> wrote:
> I'm not completely sure, but I suspect another part of it (or maybe
> I'm just rephrasing what you said?) has to do with the fact that in
> Haskell, basically everything is an expression.

Yes, the fact that control statements (e.g. if-then-else) are expressions makes type checking much more effective. However, I think this is somewhat lost when programming imperative code in Haskell using a state or I/O monad (because a monadic type such as "IO t" does not discriminate what effects might take place, only the result type t). Of course one can use a more specialized monad (such as ST for mutable references, etc.). I don't think that my imperative programs are automatically made more correct by writing them as monadic code in Haskell --- only that in Haskell I can opt for the functional style most of the time.

BTW (slightly off topic), I found it particularly annoying when teaching Python to be forced to use an imperative style (e.g. if-then-else are always statements). Scala is much better in this regard (although it is not a purely functional language); a statement is simply an expression whose return is unit.

Pedro
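The point about if-then-else being an expression can be shown in two lines (a minimal sketch):

```haskell
-- The conditional is an expression: both branches are checked and must
-- have the same type, and the whole thing produces a value, so nothing
-- slips past the type checker between "statements".
classify :: Int -> String
classify n = if n < 0 then "negative" else "non-negative"
-- "if n < 0 then 0 else \"x\"" would be rejected at compile time,
-- because the branches disagree in type.
```

In Python, by contrast, the statement form `if n < 0: r = 0` followed by `else: r = "x"` raises no complaint until the mismatched value is used at run time.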