Learning about Programming Languages (specifically Haskell)

Learning about Programming Languages (specifically Haskell)

Samuel Williams
Dear Friends,

I'm looking for some help from the Haskell community. I hope this is the right place to ask for information.

I'm putting together a website aimed at high school students and teachers, and would like to add a few paragraphs to the following page:

        http://programming.dojo.net.nz/languages/haskell/index

... explaining why one would want to learn Haskell. I don't know the answer to this since I don't use Haskell extensively. What are the main application areas and how is it useful?

Any other suggestions or ideas for the Haskell page would be fantastic, and any suggestions for the other pages in general are also very welcome.

        http://programming.dojo.net.nz/

Kind regards,
Samuel

Re: Learning about Programming Languages (specifically Haskell)

Samuel Williams
Also, one more thing - if someone could write some comments to go along with the source code that explain what it is doing, that would be really helpful. I can see the general structure, but I don't know the ins and outs of Haskell. If someone could augment the example with comments explaining what the functions do, that would be great!

data Door = Open | Closed deriving Show

toggle Open   = Closed
toggle Closed = Open

pass k = zipWith ($) (cycle $ replicate k id ++ [toggle])

run n = foldl (flip pass) (replicate n Closed) [0..n]

Do I need to add run 100 to the end of the example for it to actually do something?

Re: Re: Learning about Programming Languages (specifically Haskell)

Rafael Gustavo da Cunha Pereira Pinto-2
If you are running it from GHCi, just type run 100 at the prompt.

If you intend to compile it, you have to add

main = print $ run 100

The compiled program starts by running main :: IO (), which is the main entry point of your code.

We need to add print, as run has type

run::Int->[Door]

so run 100 has type [Door].

print is

print::(Show a) => a -> IO ()

The IO () stands for an IO action that returns no useful value; IO is the "black magic" of Haskell, intended to separate pure code from I/O side-effects...
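
Putting that together, a minimal complete module (a sketch assembled from the code already in this thread, with type signatures added) would look like this:

data Door = Open | Closed deriving Show

toggle :: Door -> Door
toggle Open   = Closed
toggle Closed = Open

pass :: Int -> [Door] -> [Door]
pass k = zipWith ($) (cycle $ replicate k id ++ [toggle])

run :: Int -> [Door]
run n = foldl (flip pass) (replicate n Closed) [0..n]

main :: IO ()
main = print (run 100)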




--
Rafael Gustavo da Cunha Pereira Pinto


Re: Re: Learning about Programming Languages (specifically Haskell)

Kyle Murphy-2
Reasons to learn Haskell include:
- Lazy evaluation can make some kinds of algorithms possible to implement that aren't possible to implement in other languages (without modification to the algorithm); a tiny illustration follows this list.
- Strict type system allows for a maximum number of programming errors to be caught at compile time.
- Functional design makes designing for concurrency and parallelism simpler than in procedural or OO designs.
- Clear differentiation of impure code helps to isolate points of failure and to make the programmer more aware of possible side effects.
- Knowledge of higher-order functions can provide the programmer with unique ways of solving certain problems, even in non-functional languages.
- An excellent Foreign Function Interface makes integration with existing libraries relatively painless (compared with many other languages).

There are probably other reasons I'm not even aware of as I'm still a beginner myself, but to me those are some of the most important.
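
As a tiny illustration of the laziness point (a sketch, not part of the original message): an infinite list can be defined once, and each consumer forces only the prefix it actually needs.

fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)   -- infinite list, defined lazily

firstTen :: [Integer]
firstTen = take 10 fibs                       -- [0,1,1,2,3,5,8,13,21,34]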

The program you provided seems like a poor example to demonstrate the language, as it uses some very hard-to-follow logic for someone not familiar with the language. Furthermore, the results cannot be easily explained or reasoned about simply (mostly due to the repeated application of id and toggle to the same list as foldl runs). Nevertheless, I've attempted to provide what comments I can, although I'm sure someone can do better than me and I might have made a mistake somewhere.

--- Begin Code ---

{- This declares a new type, Door, with two constructors, Open and Closed,
   and tells the compiler to make Door an instance of the Show class, which
   provides the function show for converting a value into a String,
   i.e. show :: Door -> String -}
data Door = Open | Closed deriving Show

-- Flip a door to the opposite state: an Open Door becomes Closed and vice versa.
toggle :: Door -> Door
toggle Open   = Closed
toggle Closed = Open

{- This line takes two lists and combines them using the ($) operator. The
   first list is provided by converter; the second list is implicit, as can
   be seen from the type signature, and is the list of Doors. In other words,
   it takes each function in the converter list and applies it to the
   corresponding Door from the final argument to produce a new list of Doors.
   The result can be thought of as looking something like the following when
   pass is called with 3, for example:

   [(id $ door), (id $ door), (id $ door), (toggle $ door), (id $ door), ...]
-}
pass :: Int -> [Door] -> [Door]
pass k = zipWith ($) converter
    where
        converter :: [Door -> Door]
        {- This produces a list of functions from Door to Door: k copies of
           id, then one toggle, repeated forever (id just returns whatever
           it is given). -}
        converter = cycle $ replicate k id ++ [toggle]

{- This creates two lists: one of n Closed doors, and one of the numbers 0
   to n. flip pass reverses the order of arguments to pass, so that instead
   of taking a number and a list of Doors it takes a list of Doors and a
   number. foldl then feeds pass one number at a time, together with the
   list of Closed doors on the first step or the result of the previous
   step afterwards. -}
run :: Int -> [Door]
run n = foldl (flip pass) (replicate n Closed) [0..n]

main :: IO ()
main = print $ run 100

--- End Code ---
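
As a quick sanity check (assuming the code above is loaded into GHCi), running it on a small n shows the classic "100 doors" pattern: the doors left Open are those whose position plus one is a perfect square.

ghci> run 9
[Open,Closed,Closed,Open,Closed,Closed,Closed,Closed,Open]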

-R. Kyle Murphy
--
Curiosity was framed, Ignorance killed the cat.


Re: Re: Learning about Programming Languages (specifically Haskell)

Luke Palmer-2
On Mon, May 3, 2010 at 9:17 AM, Kyle Murphy <[hidden email]> wrote:
> Reasons to learn Haskell include:
> Lazy evaluation can make some kinds of algorithms possible to implement that
> aren't possible to implement in other languages (without modification to the
> algorithm).

One could say the reverse as well.

I would say that laziness allows many more *compositions* of
algorithms.  Many more objects can be described from simple building
blocks (combinators).

Strict type system allows for a maximum number of programming errors to be caught at compile time.

caseyh
In reply to this post by Kyle Murphy-2
>Strict type system allows for a maximum number of programming errors to be caught at compile time.

I keep hearing this statement but others would argue that programming
errors caught at compile time only form a minor subset of all errors
caught.

So, in functional programming languages with a strict type system,
e.g. Haskell, do typing errors form a larger subset of all programming
errors?

I don't mean "tpynig errros".

:)
--
Regards,
Casey

Re: Re: Learning about Programming Languages (specifically Haskell)

Samuel Williams
In reply to this post by Kyle Murphy-2
Dear Kyle,

Improving the example to use more idiomatic Haskell is a good idea. I'm happy for you to propose another approach entirely. I'm simply not skilled enough in Haskell to do anything better.

However, thanks for writing the comments for the code; they will certainly help.

Kind regards,
Samuel


Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Edward Kmett-2
In reply to this post by caseyh
On Mon, May 3, 2010 at 11:34 AM, Casey Hawthorne <[hidden email]> wrote:
> Strict type system allows for a maximum number of programming errors to be caught at compile time.
>
> I keep hearing this statement but others would argue that programming
> errors caught at compile time only form a minor subset of all errors
> caught.
>
> So, in functional programming languages with a strict type system,
> e.g. Haskell, do typing errors form a larger subset of all programming
> errors?

Yes. You can usually write much the same program with and without static types. The difference is that in the dynamically typed/untyped setting you'll get the errors at runtime, and in the statically typed setting you'll get (many of) the errors at compile time. You can express a lot of invariants with types once you become familiar with their usage. For instance, you can make a 2-3 tree in Haskell where the types enforce the fact that the tree is balanced. You can implement the same code in Scheme or another dynamically typed language, but the result will lack that extra guarantee. Nothing says that your insert operation drops the node in the right place, etc., but at least you know you are safe from one class of bugs.
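
A rough sketch of the balanced-tree idea (not Edward's code; it assumes a modern GHC with the GADTs, DataKinds and KindSignatures extensions): indexing the tree type by its height means only perfectly balanced trees can be constructed.

{-# LANGUAGE GADTs, DataKinds, KindSignatures #-}

-- Type-level naturals used to record the height of a tree.
data Nat = Z | S Nat

-- A 2-3 tree whose type carries its height: all leaves sit at the same
-- depth, so the balance invariant is checked by the compiler.
data Tree (h :: Nat) a where
  Leaf  :: Tree 'Z a
  Node2 :: Tree h a -> a -> Tree h a -> Tree ('S h) a
  Node3 :: Tree h a -> a -> Tree h a -> a -> Tree h a -> Tree ('S h) a

example :: Tree ('S 'Z) Char
example = Node2 Leaf 'a' Leaf
-- Node2 Leaf 'a' (Node2 Leaf 'b' Leaf) would be rejected: the subtrees
-- have different heights, so the tree would be unbalanced.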

Types effectively prove lots of little boring theorems about your code as you program. Inconsistent code will often cause these sorts of little proofs to fail.

One thing that thinking about types helps you to do is to figure out whether, given the types, the choice of implementation is more-or-less unique. For instance, given the type for fmap and the extra law(s) you need to satisfy, you really only have one correct implementation option. ;)

So not only do you get benefit from the compiler giving you errors rather than finding out later when your runtime system blows up in production, but the types help inform you as to what the shape of a correct implementation should be.

-Edward Kmett


Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Samuel Williams
In reply to this post by caseyh
Are you really sure about that... it might cause a typing error if you misspell something.

Proposal: "The double typing error"

Kind regards,
Samuel

On 4/05/2010, at 3:34 AM, Casey Hawthorne wrote:

> I don't mean "tpynig errros".


Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Luke Palmer-2
In reply to this post by caseyh
On Mon, May 3, 2010 at 9:34 AM, Casey Hawthorne <[hidden email]> wrote:
>>Strict type system allows for a maximum number of programming errors to be caught at compile time.
>
> I keep hearing this statement but others would argue that programming
> errors caught at compile time only form a minor subset of all errors
> caught.
>
> So, in functional programming languages with a strict type system,
> e.g. Haskell, do typing errors form a larger subset of all programming
> errors?

Absolutely!  Haskell developers trade debugging time for time arguing
with the compiler about the correctness of their code.

I'll give this meaningless anecdotal statistic:

Compiler says my code is right => My code is actually right   -- 60%
Compiler says my code is wrong => My code is actually wrong -- 95%

Haskell has a particular reputation for the former.

Luke

Re: Learning about Programming Languages (specifically Haskell)

Stephen Tetley-2
In reply to this post by Samuel Williams
Hi Samuel

I'm not sure Haskell is an ideal language for school-age teaching;
DrScheme seems a more obvious choice.

Paul Hudak made a good case for Haskell as a learning language with
his School of Expression book, but if you weren't directly following
that book, I think it would be hard to make the teaching fun (which
I'd expect to be the major battle for teaching kids). Obvious problems:
cryptic error messages, distance from graphics / multimedia, lack of a
textbook (if not using School of Expression)...

That said, there were some nice slides from John Peterson about
teaching summer school mathematics where Haskell was used 'under the
hood' to create music or draw pictures. The students certainly weren't
exposed to full Haskell - just a very small core that was largely
familiar from maths, plus specific functions to compose pictures /
music.

Best wishes

Stephen

Re: Learning about Programming Languages (specifically Haskell)

Samuel Williams
Dear Stephen,

The goal of the site is not to give an introduction to programming for the beginner.

It's a site designed to expose students and teachers to the multitude of programming languages out there.

DrScheme looks like a good approach.

Kind regards,
Samuel


Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Kyle Murphy-2
In reply to this post by Luke Palmer-2
The problem with dynamic typing is that it has a much higher chance of having a subtle error creep into your code that can go undetected for a long period of time. A strong type system forces the code to fail early where it's easier to track down and fix the problem, rather than trying to perform debugging on the fly in a production system. This has an added advantage for compiled languages in that for many non-trivial applications the time to build and deploy a new instance of the program, even in the development environment is often substantial, and the more trivial errors are discovered at compile time, the less time is wasted on builds.

For small code bases the time spent tracking down an error at runtime might be less than the time spent making your code conform to strict type requirements, but for larger code bases the amount of time necessary to track down such errors greatly outweighs the amount of time needed to make your code well typed.

To look at the flip side of your statistics:
Compiler says my code is right => My code is actually wrong -- 40%
Compiler says my code is wrong => My code is actually right -- 5%

I'd argue this is provably wrong, as correct code by definition would compile. The fact that it doesn't is proof enough that there's a problem with it, even if that problem is simply that the types you're using aren't exactly correct. Further, I'd argue that in the first case, with a non-strict type system, the incidence of wrong code that compiles would be higher. The only argument to support non-strict typing would be if you could show that it takes less time to track down runtime bugs than it does to fix compile-time type errors, and any such claim I'd be highly skeptical of.

-R. Kyle Murphy
--
Curiosity was framed, Ignorance killed the cat.



Re: Re: Learning about Programming Languages (specifically Haskell)

Samuel Williams
In reply to this post by Kyle Murphy-2
Dear Kyle,

I've received the following program. You did a fantastic job of explaining the other one, but as you said it wasn't a great approach. If you have a moment, could you explain this one?

-- Is door number `door` open at the end? Doors and passes are numbered from 0.
doorOpen :: Int -> Bool
doorOpen door = doh door door

-- doh door p gives the state of `door` after passes 0..p: pass 0 opens every
-- door, and pass p toggles a door whose number leaves remainder p when
-- divided by (p+1).
doh :: Int -> Int -> Bool
doh door 0 = True
doh door pass =
        if (door `rem` (pass+1)) == pass
        then not (doh door (pass-1))
        else doh door (pass-1)

-- The (infinite) list of final door states, one per door.
doors :: [Bool]
doors = [doorOpen n | n <- [0..]]

-- Print one door's number and final state.
printDoor :: (Int,Bool) -> IO ()
printDoor (door,open) =
        putStrLn ("Door #" ++ (show door) ++ " is " ++
                if open then "open." else "closed.")

-- Print the first n doors.
printUpTo :: Int -> IO ()
printUpTo n =
        mapM_ printDoor (zip [0..(n-1)] doors)

-- A bare expression is not valid at the top level of a module, so the final
-- call is wrapped in main.
main :: IO ()
main = printUpTo 100

Kind regards,
Samuel
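
Compiled and run, the program above should start printing something like the following (again, the open doors are those whose number plus one is a perfect square):

Door #0 is open.
Door #1 is closed.
Door #2 is closed.
Door #3 is open.
...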


Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Roel van Dijk-3
In reply to this post by Kyle Murphy-2
In my opinion code is 'right' when it conforms to the specification.
Haskell's type system allows the programmer to express a part of the
specification in the types, which then get checked by the
compiler/type-checker. This is where I see the biggest benefit of a
very expressive statically checked type system.
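
A micro-example of "specification in the types" (a sketch, not from Roel's message): a list type that cannot be empty makes taking the head total, so the "input must be non-empty" part of the specification is checked by the compiler rather than at runtime.

-- A list with at least one element.
data NonEmpty a = a :| [a]

-- Unlike head :: [a] -> a, this cannot fail: emptiness is ruled out by the type.
neHead :: NonEmpty a -> a
neHead (x :| _) = x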

Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Richard A. O'Keefe
In reply to this post by Kyle Murphy-2

On May 4, 2010, at 5:07 AM, Kyle Murphy wrote:
>
> To look at the flip side of your statistics:
> Compiler says my code is right => My code is actually wrong -- 40%
> Compiler says my code is wrong => My code is actually right -- 5%
>
> I'd argue this is provably wrong, as correct code by definition  
> would compile.

Only for some definitions of "correct".
It was clearly explained in the original paper introducing
Hindley-Milner type checking that some *correct* programs
(in the sense that they would not "go wrong" at run time
and would deliver the intended results) would be rejected by
such a type checker.  To give a Haskell example,

        main = print $ (head [1,'x'] + 1)

cannot possibly go wrong, yet Haskell will reject it.

But that's not the argument for dynamic types.
The argument for dynamic types is adaptability.
This is why Brad Cox designed Objective C the way he did,
KNOWING that there was a price in efficiency and a price
in static checking:  to write programs that could be extended
more easily.
This is why Joe Armstrong designed Erlang the way he did,
KNOWING that there was a price in efficiency and a price
in static checking:  so that systems could dynamically load
new modules and even replace existing ones, so that systems
could be upgraded without being shut down.
(The Smalltalk term for this is "changing the engine while
driving down the highway at 60mph", but sometimes shutdowns
really are a very bad thing.)

It's interesting that after Brad Cox's original design,
Objective C was extended in a way that permitted more
type checking.  It's interesting that Erlang, after several
experiments, has finally got a type system that seems to
have stuck (possibly because it serves three purposes:
documentation in ErlDoc, performance in the native code
compiler HiPE, and static checking in the Dialyzer), but
it remains optional.

For about a year I kept a log of errors in the Smalltalk code
I was writing, and type errors were in fact quite rare.
Errors due to partial functions being applied to arguments
outside their domain were more common.

When people have this argument, they are usually thinking in
terms of type systems like (say) Java.  A type system that
only stops you doing bad things is useful, but it's hard to
love.  The great thing about the Haskell type system is that
it does far more than that.  It's at least plausible that
the Haskell type system reduces errors by reducing the
amount of code you have to write.

To give just one example:  the original QuickCheck for Haskell
was admittedly a brilliant and innovative idea, but the actual
*code* needed to make it happen wasn't that bulky, or even that
hard to understand given the core idea: that random data
generation could be driven by the types.  In contrast, the
QuickCheck that has been written for Erlang is a commercial
product in which we are advised in strong terms that the
"generators" for random data are NOT types, and writing new
ones is, while not *too* complicated, something the programmer
always has to do instead of sometimes letting it be automatic.

Various kinds of generic programming for Haskell take this idea
of letting the types drive automatic programming still further.

ML types => theorems for free;
Haskell types => code for free!
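
As a small taste of the "data generation driven by the types" point (a sketch assuming the QuickCheck package is available): the generator for the property's argument comes from its type, so none has to be written by hand.

import Test.QuickCheck

-- QuickCheck derives a random generator for [Int] from the argument type alone.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

main :: IO ()
main = quickCheck prop_reverseTwice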


Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Luke Palmer-2
In reply to this post by Kyle Murphy-2
On Mon, May 3, 2010 at 11:07 AM, Kyle Murphy <[hidden email]> wrote:

> The problem with dynamic typing is that it has a much higher chance of
> having a subtle error creep into your code that can go undetected for a long
> period of time. A strong type system forces the code to fail early where
> it's easier to track down and fix the problem, rather than trying to perform
> debugging on the fly in a production system. This has an added advantage for
> compiled languages in that for many non-trivial applications the time to
> build and deploy a new instance of the program, even in the development
> environment is often substantial, and the more trivial errors are discovered
> at compile time, the less time is wasted on builds.
>
> For small code bases the time spent tracking down a error at runtime might
> be less than the time spent making your code conform to strict type
> requirements, but for larger code bases the amount of time necessary to
> track down such errors greatly out weighs the amount of time needed to make
> your code well typed.
>
> To look at the flip side of your statistics:
> Compiler says my code is right => My code is actually wrong -- 40%
> Compiler says my code is wrong => My code is actually right -- 5%
>
> I'd argue this is provably wrong, as correct code by definition would compile.

Here is a contrived example of what I am referring to:

prefac f 0 = 1
prefac f n = n * f (n-1)

fac = (\x -> x x) (\x -> prefac (x x))

If this code were allowed to compile (say by inserting unsafeCoerce
anywhere you like), it would correctly implement a factorial function.

It is precisely these cases that lie behind the advocacy of dynamically
typed languages: my code is right, but I can't (or it is too much work to)
convince the compiler of that fact.  It is a pretty bold statement to
say that these do not occur.

> The fact that it doesn't is proof enough that there's a problem
> with it even if that problem is simply that the types you're using aren't
> exactly correct. Further, I'd argue that in the first instance with a
> non-strict type system, the instance of wrong code that compiles would be
> higher. The only argument to support non-strict typing would be if you could
> show that it takes less time to track down runtime bugs than it does to fix
> compile time type errors, and any such claim I'd be highly skeptical of.

Clearly.  But many people believe in this methodology, and use test
suites and code coverage instead of types.  Indeed, such practices are
essentially "empirical type checking", and they afford the advantage
that their verification is much more expressive (however less
reliable) than our static type system, because they may use arbitrary
code to express their predicates.

What I seem to be getting at is this plane of type systems:

    Constrained ------------------------- Expressive
    Unreliable
        |   (C)
        |                            (test suites)
        |           (C++)                    .
        |                                    .
        |           (Java/C#)    (Scala)     .
        |                                    .
        |                                    .
        |           (Haskell)                .
        |
        |                                 (Agda)
    Reliable


Where by Constrained/Expressive I mean the ability for the system to
express properties *about the code* (so C++'s type system being Turing
complete is irrelevant).  By Unreliable/Reliable I mean, given popular
engineering practice in that language, the chance that if it passes
the checks then it works as intended.  For all the languages, I mean
their compilers.  Test suites extend down the right-hand side,
depending on how rigorous you are about testing, but they never get as
far down as Agda.  :-)

Luke

Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Don Stewart-2
lrpalmer:

> What I seem to be getting at is this plane of type systems:
>
>     Constrained ------------------------- Expressive
>     Unreliable
>         |   (C)
>         |                            (test suites)
>         |           (C++)                    .
>         |                                    .
>         |           (Java/C#)    (Scala)     .
>         |                                    .
>         |                                    .
>         |           (Haskell)                .
>         |
>         |                                 (Agda)
>     Reliable
>

Where have I seen this before.... oh!!

    http://i.imgur.com/srLvr.jpg

The Big Lebowski Alignment Chart, mirrored!!


        +------------------------------------------------
        | Lawful Good                         Chaotic Good
        |  (Agda)                               (C++)              
        |
        |
        |                True Neutral
        |                 (Haskell)
        |
        |
        |
        | Lawful Evil                       Chaotic Evil
        | (testsuites)                          (C)
        +------------------------------------------------
         
         

Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Ivan Lazar Miljenovic
In reply to this post by Luke Palmer-2
On 4 May 2010 13:30, Luke Palmer <[hidden email]> wrote:
> Here is a contrived example of what I am referring to:
>
> prefac f 0 = 1
> prefac f n = n * f (n-1)
>
> fac = (\x -> x x) (\x -> prefac (x x))

I can't work out how this works (or should work rather); is it meant
to be using Church numerals or something (assuming that they have been
made an instance of Num so that - and * work)?

--
Ivan Lazar Miljenovic
[hidden email]
IvanMiljenovic.wordpress.com

Re: Strict type system allows for a maximum number of programming errors to be caught at compile time.

Daniel Peebles
prefac is just a normal factorial function with recursion factored out. fix prefac 5 gives 120, for example. 
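
To make that concrete (a sketch, not from the original message): Data.Function.fix gives the well-typed way to tie the knot, and a newtype shows how the self-application in Luke's example can be made to satisfy the type checker.

import Data.Function (fix)

prefac :: (Integer -> Integer) -> Integer -> Integer
prefac f 0 = 1
prefac f n = n * f (n - 1)

-- fix f = f (fix f), so fix prefac is the factorial function: fac 5 == 120.
fac :: Integer -> Integer
fac = fix prefac

-- Wrapping "a function from this very type" in a newtype lets the
-- self-application type-check.
newtype Rec a = Rec { runRec :: Rec a -> a }

fac' :: Integer -> Integer
fac' = (\x -> runRec x x) (Rec (\x -> prefac (runRec x x)))   -- fac' 5 == 120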
