Tutorial: Haskell for the Evil Genius


Tutorial: Haskell for the Evil Genius

Andrew Pennebaker
I've gotten mixed feedback from Reddit for my tutorial. It provides an overview of how functional and declarative programming in Haskell empower baddies, increasing the accuracy and efficiency of their atomic superweapons. What do you guys think of my tutorial, Haskell for the Evil Genius?

Cheers,

Andrew Pennebaker

_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe

Re: Tutorial: Haskell for the Evil Genius

Alex Stangl
On Fri, Sep 14, 2012 at 12:13:15PM -0400, Andrew Pennebaker wrote:
> I've gotten mixed feedback from Reddit for my tutorial. It provides an
> overview of how functional and declarative programming in Haskell empower
> baddies, increasing the accuracy and efficiency of their atomic
> superweapons. What do you guys think of my tutorial, Haskell for the Evil
> Genius <http://www.yellosoft.us/evilgenius/>?

Hopefully you'll get more feedback here, although my recent post here
soliciting feedback netted me nothing.

FWIW, my feedback is below.

Alex


Under Declarative, you aren't creating a "named expression, 2 + 2", really.
You are redefining (+).

Under Lazy, your example of binding fib 30 is not a good example of
memoization. With memoization you typically call the underlying computation
the first time, and memoize it for repeated retrieval later, not hardwire in
values at compile time. Here you never ever call the real fib at all. On top
of everything else, it'd be too easy to introduce a typo into one of your
hardwired constant values.

Under Infinite, you should use "sieve (n:ns)" pattern matching instead of
calling head.
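Concretely, Alex's suggestion looks something like this sketch (the sieve itself is reconstructed from the tutorial's description, so the surrounding names are illustrative):

```haskell
-- Pattern matching on (n:ns) replaces explicit calls to head/tail.
primes :: [Int]
primes = sieve [2 ..]

sieve :: [Int] -> [Int]
sieve []     = []
sieve (n:ns) = n : sieve [m | m <- ns, m `mod` n /= 0]

main :: IO ()
main = print (take 10 primes)
```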

Under Type-Safe
Subtle distinction, but returning () is not the same as returning nothing
at all.
s/ommitted/omitted/
You've got an unusual indentation scheme. Consider adopting a more standard
one for your tutorial.
In general, monotonically decreasing is not sufficient to prove you will hit
a base case. For example, decreasing by 5 would still be monotonically
decreasing, and could jump right over your base cases.
(Not arguing that the code is incorrect, but rather that your explanation is
a little misleading/incomplete.)
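The point can be made concrete with a small hypothetical (this code is not from the tutorial):

```haskell
-- Decreasing by 5 is monotonically decreasing, yet starting from 7 the
-- argument sequence is 7, 2, -3, -8, ... and never hits base cases 0 or 1.
steps :: Int -> [Int]
steps n = takeWhile (> -10) (iterate (subtract 5) n)

main :: IO ()
main = print (steps 7)  -- [7,2,-3,-8]
```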
Again, further confusion about what memoization really is.


Under Records

"Functions are already defined by their data structures; they are already
semantically bundled..." doesn't seem to make sense.

"... acts on the specific constructor, blasting fools, murdering crowds..."
makes it sound like fireOn actually has side effects.


Re: Tutorial: Haskell for the Evil Genius

Conal Elliott
In reply to this post by Andrew Pennebaker
Hi Andrew,

To save others the search, here's the/a reddit URL: http://www.reddit.com/r/programming/related/nhnyd/nfa_in_a_single_line_of_haskell/ .

The terribly misleading/mistaken remarks on fib & memoization are still in your post. As hammar commented on reddit, you're not memoizing in that example; you're just shadowing one fib definition with another (very partial) one. (BTW, I highly recommend compiling with -Wall in all cases and then addressing all reported issues. Doing so would have issued a shadowing warning.)

Another comment:

As a declarative language, Haskell manipulates expressions, eventually reducing expressions to values.

Huh? In what sense do declarative languages manipulate expressions? Sounds like a classic syntax/semantics confusion, especially when interpreters and/or lazy evaluation (implementation issues, not language properties) are in the mix.

Regards, - Conal



Re: Tutorial: Haskell for the Evil Genius

Andrew Pennebaker
In reply to this post by Alex Stangl
A summary of the changes I've included so far:
 
Under Declarative, you aren't creating a "named expression, 2 + 2", really.
You are redefining (+).


Noted and reflected in the new version.
 
Under Lazy, your example of binding fib 30 is not a good example of
memoization. With memoization you typically call the underlying computation
the first time, and memoize it for repeated retrieval later, not hardwire in
values at compile time. Here you never ever call the real fib at all. On top
of everything else, it'd be too easy to introduce a typo into one of your
hardwired constant values.


Noted and reflected in the new version. After several comments to this effect, I do not want to misrepresent memoization in the tutorial. Sometimes it is useful to be slightly inaccurate in a tutorial in order to help bridge the gap between someone with no experience in something and the wealth of knowledge and complexity in the thing itself. This is not one of those times, and fortunately, fixing the misrepresentation in my tutorial is as easy as removing the hardcoded call.

One thing I want to double check is that Haskell does, in fact, automatically memoize all pure function calls. Is this true?

I would still like to provide a performance comparison of the Fibonacci code with and without memoization, for readers who enjoy numerical evidence, using the Unix "time" command, but I don't know how to turn memoization off. I guess I would have to reimplement the algorithm in a way that can't make use of memoization. Any suggestions?

Under Infinite, you should use "sieve (n:ns)" pattern matching instead of
calling head.

Thank you! I try to make use of Haskell pattern matching wherever I can. Since I needed to refer to the whole list, as well as the head and tail, I originally used "head" instead of pattern matching. Noted and reflected in the new version.
 
Under Type-Safe
Subtle distinction, but returning () is not the same as returning nothing
at all.

True. Noted and reflected in the new version. What's the Haskell name for () again? I fear explaining the exact type information of IO () may be too much for a brief introduction to Haskell to cover.
 
s/ommitted/omitted/

Noted and reflected in the new version.
 
You've got an unusual indentation scheme. Consider adopting a more standard
one for your tutorial.

I'm in the camp of hard tabs rather than soft tabs, and that preference is probably responsible for much of the difference in indentation scheme. Unfortunately, HTML is terrible at representing hard tabs in <PRE> code with a custom width preference; they all come out looking like some idiot pressed space bar eight times. I could use JavaScript to remedy this, but for now, I like to directly copy and paste my working code from .HS files into <PRE> tags just in case.

If tabs are not the issue, then maybe I'm not indenting far enough to the right for some tastes? Or maybe it's my tendency to put "where" on its own line, something a Perl obfuscator would detest. I dunno. If someone would suggest a more idiomatic indentation scheme for my code so that I know exactly what is different, I can take a look.
 
In general, monotonically decreasing is not sufficient to prove you will hit
a base case. For example, decreasing by 5 would still be monotonically
decreasing, and could jump right over your base cases.
(Not arguing that the code is incorrect, but rather that your explanation is
a little misleading/incomplete.)

Understood. Noted and reflected in the new version.

Incidentally, when will Nat be available in Haskell? The Fibonacci algorithm is designed to work only over natural numbers, and the best way to express this in the type system is with Nat. But this type doesn't seem available out-of-the-box for Haskell users. Unless my Haskell Platform (GHC 7.0.3) is slightly outdated. Eh?
 
Again, further confusion about what memoization really is.


Under Records

"Functions are already defined by their data structures; they are already
semantically bundled..." doesn't seem to make sense.
 
Noted and reflected... I'm trying to convey to an audience largely composed of Java and C++ fanatics how Haskell records are much better than OOP, how GADTs are more intuitive, robust, ... OOP doesn't even compare! That's what I'm trying to get across in that bit. And it's hard to do this in just a few sentences, especially when the reader isn't even familiar with GADTs in the first place.

"... acts on the specific constructor, blasting fools, murdering crowds..."
makes it sound like fireOn actually has side effects.

A truly fair point. Would you like to contribute a monad for blowing up the moon?

Another comment:

"As a declarative language, Haskell manipulates expressions, eventually reducing expressions to values."

Huh? In what sense do declarative languages manipulate expressions? Sounds like a classic syntax/semantics confusion, especially when interpreters and/or lazy evaluation (implementation issues, not language properties) are in the mix.

Noted and reflected in the new version.

I'm trying to introduce the concept of declarative programming as opposed to imperative programming. Declarative programming according to Wikipedia:

is a programming paradigm that expresses the logic of a computation without describing its control flow.

I believe this is done in Haskell and other declarative languages by treating code as manipulable, reducible expressions, where imperative languages would use machine instructions.

---

Now, besides the code and the description of the code, I know we're not English majors, but what do you think of the overall tone, pacing, etc.? Is it entertaining? Is it humorous? Is it boring? Is it offensive? Is it too brief or too laborious?

Thanks all for your suggestions.


Re: Tutorial: Haskell for the Evil Genius

Albert Y. C. Lai
On 12-09-14 05:18 PM, Andrew Pennebaker wrote:
> One thing I want to double check is that Haskell does, in fact,
> automatically memoize all pure function calls. Is this true?

A simple back-of-envelope calculation that immediately raises doubts:

2 seconds on a 2 GHz computer is 4x10^9 clock cycles, or something
between 10^9 and 4x10^9 steps. This sounds more like 2^30 recursive calls
to fib, not 30. Even with interpreter inefficiency.

A simple experiment to nail the coffin:

Experiment #1:

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = mapM_ (print . fib) (replicate 10 30)

Null hypothesis: fibs are memoized, therefore the first printing takes 2
seconds, the remaining 9 printings are free.
Alternative hypothesis: fibs are not memoized, therefore every printing
takes 2 seconds.
Observation: ______________________
Which hypothesis to reject: _______

A few advanced experiments to mess with your mind:

Experiment #2:

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = let y = fib 30 in mapM_ print (replicate 10 y)

Question: Is anything memoized here? Which thing?

Experiment #3:

fib :: Int -> Int
fib 0 = 0
fib 1 = 1
fib n = fib (n - 1) + fib (n - 2)

main :: IO ()
main = mapM_ print (replicate 10 (fib 30))

Question: is this like #1? or is this like #2?

Experiment #4:

fib :: Int -> Int
fib n = mem !! n

mem :: [Int]
mem = map aux [0..]
-- remark: even [Int] is not a very efficient data structure for this

aux 0 = 0
aux 1 = 1
aux n = mem!!(n-1) + mem!!(n-2)

main :: IO ()
main = mapM_ (print . fib) (replicate 10 30)

Question: How fast is this compared to #1? Why?

Final remark: Evil geniuses should use the scientific method, not the
opinionative method.


Re: Tutorial: Haskell for the Evil Genius

Alex Stangl
In reply to this post by Andrew Pennebaker
On Fri, Sep 14, 2012 at 05:18:24PM -0400, Andrew Pennebaker wrote:
> A summary of the changes I've included so far:
> > Under Declarative, you aren't creating a "named expression, 2 + 2", really.
> > You are redefining (+).
> Noted and reflected in the new version.

It may not be obvious to readers when you are putting (+) or fib in the
let clause, that you are shadowing the function in the outer scope, so
for example, a reader might be surprised at the results of trying
let 2 + 2 = 5 in 1 + 1
(or similar experiments with your "memoized" fib)

It's not clear whether you believe you are only overriding behavior for
the specific case of 2 + 2, or if you realize that you've completely
redefined (+).


> Noted and reflected in the new version. After several comments to this
> effect, I do not want to misrepresent memoization in the tutorial.
> Sometimes it is useful to be slightly inaccurate in a tutorial in order to
> help bridge the gap between someone with no experience in a something vs
> the wealth of knowledge and complexity in the thing itself. This is not one
> of those times, and fortunately, fixing the misrepresentation in my
> tutorial is as easy as removing the hardcoded call.
>
> One thing I want to double check is that Haskell does, in fact,
> automatically memoize all pure function calls. Is this true?

Generally you store the memoized results in some structure like a list
or array defined at the top level. You don't get "automatic"
memoization, nor would it be desirable, since it would quickly fill
memory with the memoized results of every function call, unless it could
intelligently gauge which ones to keep around. Check out the memoization
section of the Haskell wiki for more info.


> I would still like to provide a performance comparison of the Fibonacci
> code with and without memoization, for readers who enjoy numerical
> evidence, using the Unix "time" command, but I don't know how to turn
> memoization off. I guess I would have to reimplement the algorithm in a way
> that can't make use of memoization. Any suggestions?

See above. Off is default -- you need to add an array to get the
memoization. The array can be initialized with the lazy list of fibs,
which in turn rely upon the array itself to retrieve memoized values.
Downside to this approach is you need to define your array bounds
upfront. Alternately, you could memoize more simply (and w/o bounds)
by putting the values into a top-level list, however the values get
more expensive to access, the deeper they are in the list.
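Alex's array approach can be sketched like this, assuming a fixed upper bound (the bound and names here are illustrative, not from the tutorial):

```haskell
import Data.Array

-- The array's elements are computed lazily from the array itself,
-- so each fib value is evaluated at most once. Bounds are fixed upfront.
bound :: Int
bound = 90

fibs :: Array Int Integer
fibs = listArray (0, bound) [f i | i <- [0 .. bound]]
  where
    f 0 = 0
    f 1 = 1
    f n = fibs ! (n - 1) + fibs ! (n - 2)

fib :: Int -> Integer
fib n = fibs ! n

main :: IO ()
main = print (fib 30)
```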


> Under Infinite, you should use "sieve (n:ns)" pattern matching instead of
> > calling head.
> Thank you! I try to make use of Haskell pattern matching wherever I can.
> Since I needed to refer to the whole list, as well as the head and tail, I
> originally used "head" instead of pattern matching. Noted and reflected in
> the new version.

Are you aware you can use an "as pattern" (e.g., nl@(n:ns)) so you can
refer to the whole list as nl when you like, but still have destructured
as well?
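For example (an illustrative function, not code from the tutorial):

```haskell
-- The as-pattern nl@(n:ns) binds the whole list to nl while
-- simultaneously destructuring it into head n and tail ns.
summarize :: [Int] -> (Int, [Int], [Int])
summarize nl@(n:ns) = (n, ns, nl)
summarize []        = (0, [], [])  -- arbitrary default to keep it total

main :: IO ()
main = print (summarize [1, 2, 3])
```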


> > Under Type-Safe
> > Subtle distinction, but returning () is not the same as returning nothing
> > at all.
> True. Noted and reflected in the new version. What's the Haskell name for
> () again? I fear explaining the exact type information of IO () may be too
> much for a brief introduction to Haskell to cover.

() is called unit.


> > You've got an unusual indentation scheme. Consider adopting a more standard
> > one for your tutorial.
> I'm in the camp of hard tabs rather than soft tabs, and that preference is
> probably responsible for much of the difference in indentation scheme.
> Unfortunately, HTML is terrible at representing hard tabs in <PRE> code
> with a custom width preference; they all come out looking like some idiot
> pressed space bar eight times. I could use JavaScript to remedy this, but
> for now, I like to directly copy and paste my working code from .HS files
> into <PRE> tags just in case.
>
> If tabs are *not* the issue, then maybe I'm not indenting far enough to the
> right for some tastes? Or maybe it's my tendency to put "where" on its own
> line, something a Perl obfuscater would detest. I dunno. If someone would
> suggest a more idiomatic indentation scheme for my code so that I know
> exactly what is different, I can take a look.

Unless the lines get too long, you'd usually see something like:
     do args <- getArgs
        fcontents <- readFile (head args)
        ...
        putStrLn results


By the way, your fib.hs (extended) is still in poor form, showing up to
fib 30 hardwired. Generally you would just hardwire your first one or two
values to "seed" your computation, like the first two Fibonacci numbers,
or first prime, etc., and then any memoization is accomplished as
described above, using some array or other structure to hold previously
computed values.


> Incidentally, when will Nat be available in Haskell?

Don't know, sorry.


> The Fibonacci
> algorithm is designed to work only over natural numbers, and the best way
> to express this in the type system is with Nat. But this type doesn't seem
> available out-of-the-box for Haskell users.

It's not unusual for a function to be partially defined this way. You
could equally well express the Fibonacci (or any) sequence as an
infinite list. You can only use non-negative values for your list
lookup, so if you view this as a deficiency, then the ubiquitous list
shares the same deficiency.
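The infinite-list formulation is a standard idiom (not code from the tutorial):

```haskell
-- The Fibonacci sequence as a lazy infinite list; indexing with (!!)
-- or take only ever touches non-negative positions.
fibs :: [Integer]
fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

main :: IO ()
main = print (take 10 fibs)
```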


> Noted and reflected... I'm trying to convey to an audience largely composed
> of Java and C++ fanatics how Haskell records are much better than OOP, how
> GADTs are more intuitive, robust, ... OOP doesn't even compare! That's what
> I'm trying to get across in that bit. And it's hard to do this in just a
> few sentences, especially when the reader isn't even familiar with GADTs in
> the first place.

I meant to also mention in my previous email, your section on Records
doesn't actually deal with what the Haskell community considers
records (e.g., algebraic datatype with labelled fields.) You're just
using a simple algebraic datatype, so it'd be better not to call it a
record, as that's bound to lead to confusion.
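For contrast, a record in the Haskell sense is an algebraic datatype with labelled fields; a minimal sketch (the type and field names are made up for illustration):

```haskell
-- Labelled fields generate accessor functions (name, doomsdayDevices)
-- and enable record-update syntax.
data Villain = Villain { name :: String, doomsdayDevices :: Int }
  deriving (Show, Eq)

promote :: Villain -> Villain
promote v = v { doomsdayDevices = doomsdayDevices v + 1 }

main :: IO ()
main = print (promote (Villain "Dr. X" 1))
```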


> Now, besides the code and the description of the code, I know we're not
> English majors, but what do you think of the overall tone, pacing, etc.? Is
> it entertaining? Is it humorous? Is it boring? Is it offensive? Is it too
> brief or too laborious?

Very subjective, but the "evil genius" thing doesn't work for me. I like
the "launch missiles" analogy when discussing side-effects, but I don't
care for it as the entire basis to wind a tutorial around. It is
humorous, not boring. I find the pacing OK, but it's hard for me to put
myself into the mindset of somebody encountering the material for the
first time. Putting an appropriate quote under headings is a nice
touch.

Alex


Re: Tutorial: Haskell for the Evil Genius

Andrew Pennebaker
In reply to this post by Albert Y. C. Lai
Experiment #4:

fib :: Int -> Int
fib n = mem !! n

mem :: [Int]
mem = map aux [0..]
-- remark: even [Int] is not a very efficient data structure for this

aux 0 = 0
aux 1 = 1
aux n = mem!!(n-1) + mem!!(n-2)

main :: IO ()
main = mapM_ (print . fib) (replicate 10 30)

Question: How fast is this compared to #1? Why?
 
Final remark: Evil geniuses should use the scientific method, not the opinionative method.
 
Noted and reflected in the new version.

Thank you for taking the time to write out a detailed, practical analysis of the question. Yes, I should assume less and test more. I tried these out as requested, and came up with results demonstrating that Haskell does not, as I mistakenly assumed, automatically memoize all function calls. Which makes sense; otherwise memory would quickly run out.

I found some useful reference code on the Haskell Wiki and constructed my own memoized Fibonacci function using the MemoTrie library, which, fortunately, is bundled with the Haskell Platform and therefore does not require the tutorial reader to install additional code.

The new version of the tutorial includes an ordinary recursive Fibonacci function (fib1.hs), and the same code renamed to fib', memoized using the memo function from the MemoTrie library, and exposed as fib (fib2.hs). Unix time output provides real numbers for comparison: the memoized version is clearly much faster.

I rewrote the section, deleting the part that stated memoization was automatic, and added text describing how Haskell makes memoization easy.


Re: Tutorial: Haskell for the Evil Genius

Corentin Dupont
In reply to this post by Andrew Pennebaker
Well, to make it short, I liked it!
As suggestions, little things like first class functions and partial application can be easily introduced.
For example the line:
map (+1) [1..10]
contains these concepts and is very short and expressive.

On the story side, why not introduce a character? This would help personalize the story. It would also be nice if a real story were told alongside the Haskell one, like a character building something or making an (evil) journey while learning Haskell.

Cheers,
Corentin



Re: Tutorial: Haskell for the Evil Genius

Alexander Solla
In reply to this post by Andrew Pennebaker


On Fri, Sep 14, 2012 at 2:18 PM, Andrew Pennebaker <[hidden email]> wrote:
A summary of the changes I've included so far:

Noted and reflected... I'm trying to convey to an audience largely composed of Java and C++ fanatics how Haskell records are much better than OOP, how GADTs are more intuitive, robust, ... OOP doesn't even compare! That's what I'm trying to get across in that bit. And it's hard to do this in just a few sentences, especially when the reader isn't even familiar with GADTs in the first place.

But, Haskell records aren't better than OOP.

I am not trying to be controversial here -- Haskell records would "naturally" implement prototype-based OOP, like JavaScript uses, if they (Haskell records) weren't so useless. This is basically why "lenses" were designed (i.e., to make a consistent syntax for getters and setters).


Re: Tutorial: Haskell for the Evil Genius

Andrew Pennebaker
But, Haskell records aren't better than OOP.

I am not trying to be controversial here -- Haskell records would "naturally" implement prototype-based OOP, like JavaScript uses, if they (Haskell records) weren't so useless.  This is basically why "lenses" were designed (i.e., to make consistent syntax for getters and setters)

Hey, I don't want the tutorial to be controversial either, especially since my choice of words like "better" is highly subjective. I find it extraordinarily empowering that Haskell's type system allows programmers to use a wide variety of programming paradigms: functional, declarative, imperative, lazy, eager, parallel, quantum, matrix, CUDA, multithreaded, distributed, logical, object oriented...

When I describe Haskell's type system as better than OOP, what I mean is this: You can code OOP in Haskell, because the type system can adapt to that. But it's much harder to go the other way, trying to code GADTs in Java/C++. In order to get the effect, you'll have to code something as complex as Scala, at which point you might as well just use Haskell (unless you really really really need the JVM for compatibility). It's the same with Lisp or JavaScript or Smalltalk or Ruby: Allowing the programmer to roll his own paradigm, such as OOP, is more powerful than offering only that paradigm. More to the point, the monad system enables all of this, and I'm not sure how to reword this tutorial to reflect that; monads themselves are generally treated as an advanced lesson, and this one tries to hit the ground running.

Does anyone know of a brief introductory Haskell tutorial that engages monads? LYAH covers monads, but it does so after a few chapters of simpler, pure function Haskell coding. I know of some brief tutorials for monads that explain them in a variety of creative ways, but they all assume the reader is at least somewhat familiar with Haskell.


Re: Tutorial: Haskell for the Evil Genius

Kristopher Micinski
On Fri, Sep 14, 2012 at 8:23 PM, Andrew Pennebaker
<[hidden email]> wrote:
> [snip..]
> Does anyone know of a brief introductory Haskell tutorial that engages
> monads? LYAH covers monads, but it does so after a few chapters of simpler,
> pure function Haskell coding. I know of some brief tutorials for monads that
> explain them in a variety of creative ways, but they all assume the reader
> is at least somewhat familiar with Haskell.

My opinion: you are walking down the potentially disastrous road of
trying to introduce monads in as small a tutorial as yours.  If the
person reading the tutorial is *indeed* an evil genius, I'd at least
start by showing an example of a monad that wasn't a disingenuous
presentation of their full capabilities (I'm taking a long cold stare
at  the people who write tutorials that say "hey, Maybe is a monad,
and Maybe is simple, so monads are pretty simple too!"), like the Cont
monad.  But, you say, "hell, Cont has quite a complex set of things
going on!"  And I say, yes, it's sufficiently complicated to force
people to work out the underlying details.

(As a side note, I've had an itching feeling to introduce monads to
students via having them write an interpreter and presenting it by
proxy of state based semantics in something of the spirit of Imp.)

Monads are a complex subject, though not overly complex. I would say "they
let you chain things together and hide behavior under the rug," but
even this isn't helpful to someone who hasn't seen them before.  So my
argument is that if you introduce monads, you do so with a great
example that demonstrates a nontrivial monad (State or Cont, *not*
Maybe or IO).  This is a case where forcing people to work out the
math is imperative (hehe, get it, it's a pun..); programmers' natural
tendency when they see IO examples is to say "oh hey, I'll just copy
that pattern, and that's how you do IO," when in reality this doesn't
showcase everything monads really do.
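For what it's worth, State is small enough to hand-roll, so a tutorial could show the machinery rather than hide it. A sketch of the idea (this is not the mtl implementation, just a self-contained version of it):

```haskell
-- A stateful computation is just a function from a state to a result
-- paired with the new state.
newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State f <*> State g = State $ \s ->
    let (h, s')  = f s
        (a, s'') = g s'
    in (h a, s'')

instance Monad (State s) where
  -- bind threads the state from one step into the next
  State g >>= f = State $ \s ->
    let (a, s') = g s in runState (f a) s'

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

-- A counter: return the current value, then increment the state.
tick :: State Int Int
tick = do
  n <- get
  put (n + 1)
  return n

main :: IO ()
main = print (runState (tick >> tick >> tick) 0)  -- (2,3)
```

All the plumbing that an imperative language does implicitly is visible in the bind, which is exactly the part the "just copy the IO pattern" approach never forces anyone to look at.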

If you want to *link* to some good tutorials, I think that reading the
"All about monads" tutorial is a great way to learn Haskell, period.

Everyone in the Haskell cafe probably has a secret dream to give the
best "five minute monad talk."  Challenge: get someone to have a
competition at one of the conferences where students all give their
best "five minute monad talk" and try to find the most comprehensible
one!

Perhaps that was a bit of a rant, apologies if so,

kris


Re: Tutorial: Haskell for the Evil Genius

Andrew Pennebaker
Everyone in the Haskell cafe probably has a secret dream to give the
best "five minute monad talk."  Challenge: get someone to have a
competition at one of the conferences where students all give their
best "five minute monad talk" and try to find the most comprehensible
one!

Haha, maybe that's why I'm writing.

Agree on all points, not just this quotation.

Yeah, IO and Maybe are the first monads most new Haskell programmers encounter.  Perhaps a tour of RVars or the accelerate library would give a better impression. I bet a lot of students get the concept of pure functional programming, and if you shock them with: "So how would you implement a PRNG?", they would understand the role monads play.
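That shock question can be made concrete. A pure PRNG has to thread its seed by hand, which is exactly the plumbing a state monad would package up (the constants below are the classic LCG parameters; this is a sketch, not a usable generator):

```haskell
-- A linear congruential generator: the next value is computed purely
-- from the seed, so the same seed always gives the same output.
lcg :: Int -> Int
lcg seed = (1103515245 * seed + 12345) `mod` 2147483648

-- Without a monad, every call site must pass the new seed along explicitly.
threeRolls :: Int -> (Int, Int, Int)
threeRolls s0 =
  let s1 = lcg s0
      s2 = lcg s1
      s3 = lcg s2
  in (s1, s2, s3)

main :: IO ()
main = print (threeRolls 42)
```

Once students have written `s1`, `s2`, `s3` by hand a few times, "a monad hides the seed-threading for you" lands with no mystery attached.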

Given that Maybe and Either don't modify state, nor do they communicate with outside interfaces, nor do they specify computation ordering, I don't understand why they're implemented as monads. Why not a primitive typeclass or even datatype declaration?


Re: Tutorial: Haskell for the Evil Genius

Brandon Allbery
On Fri, Sep 14, 2012 at 9:08 PM, Andrew Pennebaker <[hidden email]> wrote:
Given that Maybe and Either don't modify state, nor do they communicate with outside interfaces, nor do they specify computation ordering, I don't understand why they're implemented as monads. Why not a primitive typeclass or even datatype declaration?

They're not used in their monadic guise as often as they should be, IMO.  Either String has for a while been an "error monad" (more commonly, EitherT String as ErrorT), albeit with annoying shortcomings; still, they're an obvious mechanism for reporting and tracking / short-circuiting failure in computations (carrying a failure reason in the case of Either).
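A small sketch of that short-circuiting in action (function names invented for the example), with the Left value carrying the failure reason:

```haskell
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = Left "division by zero"
safeDiv a b = Right (a `div` b)

-- The monadic bind stops at the first Left and carries its message along;
-- no explicit error checks between the steps.
calc :: Int -> Either String Int
calc n = do
  x <- safeDiv 100 n
  y <- safeDiv x 2
  return (x + y)

main :: IO ()
main = do
  print (calc 5)  -- Right 30
  print (calc 0)  -- Left "division by zero"
```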

--
brandon s allbery                                      [hidden email]
wandering unix systems administrator (available)     (412) 475-9364 vm/sms



Re: Tutorial: Haskell for the Evil Genius

Joel Burget
In reply to this post by Andrew Pennebaker
I find the Monad instance for Maybe and Either very useful. You can do things like the following (which technically only uses the Applicative instance):

Prelude Control.Applicative> (*3) <$> (+2) <$> Just 1
Just 9
Prelude Control.Applicative> (*3) <$> (+2) <$> Nothing
Nothing
Prelude Control.Applicative> (*3) <$> (+2) <$> Left "error" :: Either String Int
Left "error"
Prelude Control.Applicative> (*3) <$> (+2) <$> Right 1 :: Either String Int
Right 9

Also, Maybe and Either are not "implemented as monads". They are defined using `data` like you suggest:

data Maybe a = Nothing | Just a
data Either a b = Left a | Right b

You can even look up their definitions in ghci using `:i Either` or `:i Maybe`. Monad comes in because they're both instances of the Monad type class.

Take a look at the list monad for another one that doesn't modify state or communicate with an outside interface.
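A quick sketch of the list monad doing exactly that: bind here means "for each", and the nondeterminism lives in the data structure rather than in any outside state.

```haskell
-- All pairings of two lists, via the Monad instance for [].
pairs :: [(Int, Char)]
pairs = do
  x <- [1, 2]
  c <- "ab"
  return (x, c)

main :: IO ()
main = print pairs  -- [(1,'a'),(1,'b'),(2,'a'),(2,'b')]
```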

- Joel

On Friday, September 14, 2012 at 6:08 PM, Andrew Pennebaker wrote:

Everyone in the Haskell cafe probably has a secret dream to give the
best "five minute monad talk."  Challenge: get someone to have a
competition at one of the conferences where students all give their
best "five minute monad talk" and try to find the most comprehensible
one!

Haha, maybe that's why I'm writing.

Agree on all points, not just this quotation.

Yeah, IO and Maybe are the first monads most new Haskell programmers encounter.  Perhaps a tour of RVars or the accelerate library would give a better impression. I bet a lot of students get the concept of pure functional programming, and if you shock them with: "So how would you implement a PRNG?", they would understand the role monads play.

Given that Maybe and Either don't modify state, nor do they communicate with outside interfaces, nor do they specify computation ordering, I don't understand why they're implemented as monads. Why not a primitive typeclass or even datatype declaration?

Re: Tutorial: Haskell for the Evil Genius

Andrew Pennebaker
In reply to this post by Kristopher Micinski
Challenge: get someone to have a competition at one of the conferences where students all give their
best "five minute monad talk" and try to find the most comprehensible one!

Challenge accepted.


Re: Tutorial: Haskell for the Evil Genius

Taylor Hedberg
In reply to this post by Joel Burget
Joel Burget, Fri 2012-09-14 @ 19:08:29-0700:

> I find the Monad instance for Maybe and Either very useful. You can do
> things like the following (which technically only uses the Applicative
> instance):
>
> Prelude Control.Applicative> (*3) <$> (+2) <$> Just 1
> Just 9
> Prelude Control.Applicative> (*3) <$> (+2) <$> Nothing
> Nothing
> Prelude Control.Applicative> (*3) <$> (+2) <$> Left "error" :: Either String Int
> Left "error"
> Prelude Control.Applicative> (*3) <$> (+2) <$> Right 1 :: Either String Int
> Right 9
Nitpick: You are using the Functor instances of Maybe and Either here,
not Applicative. (<$>) == fmap; it just happens to be defined in the
Control.Applicative module.
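The distinction is easy to see side by side: `fmap`/`(<$>)` lifts a one-argument function over a single wrapped value, while Applicative's `(<*>)` is what lets a multi-argument function consume several wrapped values.

```haskell
import Control.Applicative  -- (<$>), (<*>); re-exported by Prelude in recent GHCs

-- Functor only: one wrapped argument.
functorExample :: Maybe Int
functorExample = (* 3) <$> Just 2          -- Just 6

-- Applicative: a two-argument function applied across two Maybes.
applicativeExample :: Maybe Int
applicativeExample = (+) <$> Just 1 <*> Just 2  -- Just 3

main :: IO ()
main = do
  print functorExample
  print applicativeExample
```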


Re: Tutorial: Haskell for the Evil Genius

Kristopher Micinski
In reply to this post by Joel Burget
On Fri, Sep 14, 2012 at 10:08 PM, Joel Burget <[hidden email]> wrote:
> [snip]
>
> Also, Maybe and Either are not "implemented as monads". They are defined
> using `data` like you suggest:
>
> data Maybe a = Nothing | Just a
> data Either a b = Left a | Right b
>

That's not my point, or my objection.  My objection is to people who
present monads showing examples that begin with Maybe or Either, or
these 'trivial monads,' types onto which you can graft monadic
behavior fairly simply.  I'm not saying they're bad as monads, or
useless, but I think the step from Maybe as a datatype to using it as
a monad is great enough that explaining monads by way of introducing
them with Maybe as an example is sort of confusing because it
trivializes what's actually going on.

I'm honestly not sure what you mean by Maybe or Either being
"implemented as monads," versus others.  Monad is just a type class,
there's always an underlying type.  Perhaps you mean that people
actually *care* about things in Maybe outside of it being used as a
monad, versus other things where you don't touch the underlying type.

This isn't intended to start an argument, however, and I'd prefer not
to argue over methodology, I just wanted to throw out there that if
you say "monads, think about Maybe, and add some stuff, and that's all it
is, what's all the fuss about!?"  I think the hard part for people
understanding monads isn't the definition of monads, but rather that
you are forced to really tackle higher order behavior in a very direct
way.  (For example, say you're new to Haskell, you probably don't know
about CPS, and you read about Cont in a monad tutorial.  Is it the
monad that makes it hard?  No, it's probably the concept of CPS.)
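To make that concrete, here is CPS with no monad in sight (toy function names invented for the example); the difficulty being pointed at is already present in the plain functions:

```haskell
-- Continuation-passing style: instead of returning a value,
-- each function hands its result to a continuation k.
addCPS :: Int -> Int -> (Int -> r) -> r
addCPS x y k = k (x + y)

squareCPS :: Int -> (Int -> r) -> r
squareCPS x k = k (x * x)

-- Chaining CPS computations by nesting continuations: (2 + 3) ^ 2.
chained :: (Int -> r) -> r
chained k = addCPS 2 3 (\s -> squareCPS s k)

main :: IO ()
main = chained print  -- 25
```

The Cont monad only abstracts the nesting in `chained`; if the nesting itself is confusing, the monad was never the obstacle.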

kris


Re: Tutorial: Haskell for the Evil Genius

Kristopher Micinski
In reply to this post by Andrew Pennebaker
On Fri, Sep 14, 2012 at 10:13 PM, Andrew Pennebaker
<[hidden email]> wrote:
>> Challenge: get someone to have a competition at one of the conferences
>> where students all give their
>> best "five minute monad talk" and try to find the most comprehensible one!
>
>
> Challenge accepted.
>

Great!  Maybe I'll try to write up a post on this challenge and put it
on my blog, and try to solicit responses from others too, perhaps
putting up a wiki page somewhere where people can post their attempts!

(By the way, I consider this hard, I'm not touting myself as having a
good solution for this, I'm genuinely interested in others'
responses!)

kris


Re: Tutorial: Haskell for the Evil Genius

Joel Burget
In reply to this post by Kristopher Micinski
Kris,

Sorry for the confusion, I wasn't directly addressing your post. I was trying to correct what I perceived as a misconception in the last paragraph of Andrew's message, beginning with "Given that…".

- Joel

On Saturday, September 15, 2012 at 10:24 AM, Kristopher Micinski wrote:

On Fri, Sep 14, 2012 at 10:08 PM, Joel Burget <[hidden email]> wrote:
[snip]

Also, Maybe and Either are not "implemented as monads". They are defined
using `data` like you suggest:

data Maybe a = Nothing | Just a
data Either a b = Left a | Right b

That's not my point, or my objection. My objection is to people who
present monads showing examples that begin with Maybe or Either, or
these 'trivial monads,' types onto which you can graft monadic
behavior fairly simply. I'm not saying they're bad as monads, or
useless, but I think the step from Maybe as a datatype to using it as
a monad is great enough that explaining monads by way of introducing
them with Maybe as an example is sort of confusing because it
trivializes what's actually going on.

I'm honestly not sure what you mean by Maybe or Either being
"implemented as monads," versus others. Monad is just a type class,
there's always an underlying type. Perhaps you mean that people
actually *care* about things in Maybe outside of it being used as a
monad, versus other things where you don't touch the underlying type.

This isn't intended to start an argument, however, and I'd prefer not
to argue over methodology, I just wanted to throw out there that if
you say "monads, think about Maybe, and add some stuff, and that's all it
is, what's all the fuss about!?" I think the hard part for people
understanding monads isn't the definition of monads, but rather that
you are forced to really tackle higher order behavior in a very direct
way. (For example, say you're new to Haskell, you probably don't know
about CPS, and you read about Cont in a monad tutorial. Is it the
monad that makes it hard? No, it's probably the concept of CPS.)

kris



Re: Tutorial: Haskell for the Evil Genius

Conal Elliott
In reply to this post by Andrew Pennebaker
On Fri, Sep 14, 2012 at 2:18 PM, Andrew Pennebaker wrote:
A summary of the changes I've included so far:
[...]

Another comment:

"As a declarative language, Haskell manipulates expressions, eventually reducing expressions to values."

Huh? In what sense do declarative languages manipulate expressions? Sounds like a classic syntax/semantics confusion, especially when interpreters and/or lazy evaluation (implementation issues, not language properties) are in the mix.

Noted and reflected in the new version.

I'm trying to introduce the concept of declarative programming as opposed to imperative programming. Declarative programming according to Wikipedia:

is a programming paradigm that expresses the logic of a computation without describing its control flow.

I believe this is done in Haskell and other declarative languages by treating code as manipulable, reducible expressions, where imperative languages would use machine instructions.

I'm struggling to find anything in this belief/opinion that I can relate to. How did you come by it? What experiments can you perform to check whether it's true or false? I second Albert Lai's recommendation to use the scientific method.

Along these lines, I still see "Haskell manipulates reducible expressions, eventually reducing them to values" in your tutorial, which again I suspect comes from confusion between syntax & semantics and/or between meaning and possible execution strategy.

Regards, - Conal

