What is your favourite Haskell "aha" moment?

Re: Investing in languages (Was: What is your favourite Haskell "aha" moment?)

Joachim Durchholz
On 15.07.2018 at 08:44, Alexey Raga wrote:
> If you do bit manipulation you want Words, not Ints! Java doesn't have
> unsigned numbers, so bit manipulation is insanely hard in Java, since
> you _always_ need to account for the sign bit. That is the _real_ problem.

You don't use bit manipulation in (idiomatic) Java; you use EnumSets
(use case 1) or one of the various arbitrary-precision arithmetic libraries
(use case 2).
I have yet to see a third use case, and there are already libraries for
the first two, so I do not see a problem.
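
For comparison, Alexey's side of this is easy to make concrete in Haskell: bit twiddling on an unsigned Word type needs no sign-bit gymnastics. A minimal sketch (the function name is made up for illustration):

import Data.Bits (shiftR, (.&.), popCount)
import Data.Word (Word8)

-- On an unsigned Word8 a right shift never drags a sign bit in,
-- so extracting the high nibble is a plain shift-and-mask.
highNibble :: Word8 -> Word8
highNibble w = (w `shiftR` 4) .&. 0x0F

main :: IO ()
main = do
  print (highNibble 0xAB)           -- 10
  print (popCount (0xAB :: Word8))  -- 5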

All of which, of course, just elaborates the point you were making
around this paragraph: Each language lives in its own niche, and when
coming from a different niche one usually sees all kinds of problems but
has yet to encounter the solutions.

Regards,
Jo
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: Investing in languages (Was: What is your favourite Haskell "aha" moment?)

Paul
In reply to this post by Alexey Raga
  • Eta does. Through a very nice FFI. But so does Haskell. We have nice FFI to use C libs. I maintain a couple of libs that use it extensively, works quite well.

 

I asked because I have never tried Eta. So, if you are right, it seems there is no reason to develop Eta...

 

  • Can I have a definition and laws of "monad++"? Otherwise, I don't understand what you are talking about. If it obeys monadic laws it is a monad. But I'll wait for the definition. 

 

There is no better definition than the original: https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/computation-expressions  As you can see, they are different.
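
For reference, these are the laws Alexey is appealing to, written as (informal) Haskell equations; the helper names below are mine, purely for illustration:

-- Any lawful Monad instance must satisfy these three equations.
-- F# computation expressions layer extra syntax (let!, use!, custom
-- keywords) on top of a builder, but a builder that models a monad is
-- still expected to behave this way.
leftIdentity :: (Monad m, Eq (m b)) => a -> (a -> m b) -> Bool
leftIdentity x f = (return x >>= f) == f x

rightIdentity :: (Monad m, Eq (m a)) => m a -> Bool
rightIdentity m = (m >>= return) == m

associativity :: (Monad m, Eq (m c)) => m a -> (a -> m b) -> (b -> m c) -> Bool
associativity m f g = ((m >>= f) >>= g) == (m >>= (\x -> f x >>= g))

main :: IO ()
main = print ( leftIdentity (3 :: Int) (\x -> Just (x + 1))
             , rightIdentity (Just (3 :: Int))
             , associativity (Just (3 :: Int)) (\x -> Just (x + 1)) (\y -> Just (y * 2)) )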

 

  • But it is not lazy - one. Remember, laziness is our requirement here. Whatever you propose _must _ work in a context of laziness.

 

Does that mean that because Haskell is lazy (and Clean is not), linear types are impossible in Haskell? And if they are possible, why do we need monads?

 

  • Second, the inability to track side effects in F# is not "simplification" and is not a benefit, but rather a limitation and a disadvantage of the language and its type system.

Why?

 

Haskell obviously "tracks" effects. But I already showed an example with the State monad. As far as I can tell, nobody understood that the State monad does not solve the problem of spaghetti-style manipulation of global state. Even worse, it masks the problem. In OOP this was solved by making all state changes happen in one place under FSM control (with explicit rules for denied transitions: instead of a change you send a request/message, which can fail if the transition is denied). Haskell HAS mutable structures and side effects, and it allows spaghetti code. But the magic word "monad" lets people forget about the problem and the real solution, and pretend there is no such problem at all (as if it were automatically solved by the magical safety of Haskell). Sure, you can apply the same discipline in Haskell too, but Haskell does not force you to, whereas Smalltalk, for example, does.
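
To make the FSM idea concrete, here is a rough Haskell sketch (the types and names are hypothetical, not from any library): every state change goes through one gatekeeper function that can deny the transition.

data OrderState = New | Paid | Shipped | Cancelled deriving (Eq, Show)
data Event      = Pay | Ship | Cancel              deriving (Eq, Show)

-- The single place where transitions are allowed or denied.
step :: OrderState -> Event -> Either String OrderState
step New  Pay    = Right Paid
step Paid Ship   = Right Shipped
step New  Cancel = Right Cancelled
step Paid Cancel = Right Cancelled
step s    e      = Left ("transition " ++ show e ++ " denied in state " ++ show s)

-- Callers request a change and may be refused; they never mutate the state directly.
main :: IO ()
main = do
  print (step New Pay)        -- Right Paid
  print (step Shipped Cancel) -- Left "transition Cancel denied in state Shipped"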

 

We keep repeating these words: "side effects", "tracks", "safe". But what do they actually mean? Can I have side effects in Haskell? Yes. Can I mix side effects? Yes. But in a more difficult way than in ML or F#, for example. So what is the benefit? Actually I see no benefit, which a simple experiment makes easy to understand: if I have a big D program and remove all the "pure" keywords, does it automatically become buggy? No. If I stop using "pure" entirely, does it become buggy? No. If I add a "print" for debugging in some subroutines, do they become buggy? No. If I mix read and write effects in a subroutine, does that make it buggy? No.

 

IMHO there is some substitution of concepts here. The roots of monads are not safety; they are a workaround to have effects in pure lambdas. Only after monads were introduced did the thesis appear that "monads make code safer". The motivation for monads was not to track effects (because that is allegedly safer), but to inject/allow/introduce effects into a language like Haskell. The State monad is a good example again: it exists to support manipulation of state (by threading an argument through a chain of lambdas), but that is a completely different thing from the idea of isolating state manipulation in one place under the control of some FSM with explicit rules for allowed and denied transitions. If I switch from the Reader monad to the RWST monad, does my code become more buggy? Again, no. Monads do not automatically reduce bugs or global-state-manipulation problems; they are a workaround for a specific Haskell problem. Other languages don't need monads. I CAN have monads in Python or JavaScript, but I don't need them. My point is: monads are not valuable by themselves. OCaml gets by without monads everywhere and everything is fine 😊
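
For readers who have not looked under the hood, "argument replacement in a chain of lambdas" is literally what a minimal State looks like; a sketch (the real one lives in the transformers package as StateT):

newtype State s a = State { runState :: s -> (a, s) }

instance Functor (State s) where
  fmap f (State g) = State $ \s -> let (a, s') = g s in (f a, s')

instance Applicative (State s) where
  pure a = State $ \s -> (a, s)
  State sf <*> State sa = State $ \s ->
    let (f, s')  = sf s
        (a, s'') = sa s'
    in (f a, s'')

instance Monad (State s) where
  State g >>= f = State $ \s ->
    let (a, s') = g s      -- run the first step on the incoming state
    in runState (f a) s'   -- hand the new state to the next lambda

get :: State s s
get = State $ \s -> (s, s)

put :: s -> State s ()
put s = State $ \_ -> ((), s)

main :: IO ()
main = print (runState (put 1 >> get) 0)   -- (1,1)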

 

But this is really a very philosophical question. I think monads are actually over-hyped; I have stopped seeing the value of monads by themselves.

 

  • Third, AFAIK CLR restrictions do not allow implementing things like Functor, Monad, etc. in F# directly because the CLR can't support HKT. So they work around the problem.

 

https://fsprojects.github.io/FSharpPlus/abstractions.html (btw, you can see here that a monad is a monoid 😉)

 

  • But again, F# cannot be expressive enough: no HKT, no ability to express constraints, no ability to track effects...

 

If F# has monads (you call computation expressions "monads"), then it CAN...

About HKT: yes, that's true. But maybe it's not such a big problem? Maybe you can write effective, good and expressive code without them? Otherwise, we would have to agree that all languages without HKT are not expressive...
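
For context, the HKT point is only this: the Haskell classes abstract over a type constructor (something of kind Type -> Type), while .NET generics can abstract over a Type but not over a Type -> Type, which is, as far as I understand, why FSharpPlus has to emulate the classes with static-member tricks instead of expressing them directly. A small sketch (MyFunctor is a made-up name standing in for the real Functor):

{-# LANGUAGE KindSignatures #-}
import Data.Kind (Type)

-- The class parameter f is a type constructor, not a concrete type.
class MyFunctor (f :: Type -> Type) where
  myFmap :: (a -> b) -> f a -> f b

instance MyFunctor Maybe where
  myFmap _ Nothing  = Nothing
  myFmap g (Just x) = Just (g x)

instance MyFunctor [] where
  myFmap = map

main :: IO ()
main = print (myFmap (+ 1) (Just 2), myFmap (* 2) [1, 2, 3])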

 

  • Really?  You keep mentioning F#, and I struggle with it right now _because_ of such limitations. There are no meaningful ways to abstract over generics, it is impossible to reason about functions' behaviours from their type signatures (because side effects can happen anywhere), it has Option, but you can still get Null, you can't have constraints, etc., etc. It is sooooo muuuuch mooore limited.

 

IMHO the fear that "side effects can happen anywhere" has become a traditional thesis. What exactly is the problem if I add a "print" in some function?! 😊 Again, this is a substitution of concepts, of the original motivation for monads. The Haskell compiler cannot transform code with side effects efficiently, so I must isolate all side effects and mark such functions - but that is a problem of the Haskell compiler, not mine. Why should the programmer help the compiler?! Look at the MLton compiler, or the OCaml one: IMHO they are good and work fine with side effects. A program WITH side effects under RWST or State and the same program without those monads in ML are equivalent: if I port 101 ML functions with side effects to Haskell, I will add a monad and end up with 101 functions with the same side effects, only now under a monad. I can propagate my monad anywhere 😊 All my functions can live under RWST or State. Again, this problem should be solved another way, not by wrapping the same actions in some wrapper type (a monad). A funny consequence is that my program now becomes deeply nested lambdas. The compiler should try to "flatten" some of them, but I am not competent to say how good Haskell is at that. In any case, performance will be worse than in the ML version. The focus here is on marking code with a monadic type, not on avoiding side effects - and that marking is needed for the compiler, not for me. Maybe I'm not being clear, maybe it is hard to accept for people hypnotized by the magic incantations about the power of monads, maybe I don't understand something 😊
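
For the record, what "reasoning from the signature" means in practice is just this kind of distinction; a trivial sketch (my own example):

-- The first function can only compute with its argument; the second is
-- allowed to do anything observable (print, read files, ...) on the way,
-- and the IO in its type tells every caller so.
double :: Int -> Int
double x = x * 2

doubleAndLog :: Int -> IO Int
doubleAndLog x = do
  putStrLn ("doubling " ++ show x)
  pure (x * 2)

main :: IO ()
main = doubleAndLog 21 >>= print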

 

  • No, you can't. 

 

Something like this: user?.Phone?.Company?.Name ?? "missing" ? In Haskell that would be:

 

(do u  <- mbUser
    ph <- phoneOf u
    ...) <|> pure "missing"
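
Spelled out as a complete example with Maybe (the record types and field names below are invented purely for illustration), the chain becomes:

import Control.Applicative ((<|>))

data Company = Company { companyName :: String }
data Phone   = Phone   { phoneCompany :: Maybe Company }
data User    = User    { userPhone :: Maybe Phone }

-- The analogue of  user?.Phone?.Company?.Name ?? "missing":
-- each <- short-circuits on Nothing, and <|> supplies the default.
carrierName :: Maybe User -> Maybe String
carrierName mbUser =
  (do u  <- mbUser
      ph <- userPhone u
      co <- phoneCompany ph
      pure (companyName co))
  <|> pure "missing"

main :: IO ()
main = do
  print (carrierName Nothing)                                                -- Just "missing"
  print (carrierName (Just (User (Just (Phone Nothing)))))                   -- Just "missing"
  print (carrierName (Just (User (Just (Phone (Just (Company "ACME")))))))   -- Just "ACME"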

 

  • OCaml has existed for 22 years now, is doing well and solves the problems it was designed for very well. So _already_ more than twice as long as your prediction.

 

It's OCaml. It sticks to the golden middle and avoids the dangerous corners 😉

 

  • fields are not starting with “_” prefix, so I need to create lenses explicitly
  • No you don't. You don't have to have a "_" prefix to generate a lens. You have total control here.

 

Hmm, maybe I'm wrong. I'm using microlens and call `makeLensesFor`...
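
For reference, `makeLensesFor` (available in both lens and microlens-th) takes explicit field-to-lens name pairs, so the "_" prefix really is not required; a small sketch, assuming the microlens and microlens-th packages:

{-# LANGUAGE TemplateHaskell #-}
import Lens.Micro    ((^.))
import Lens.Micro.TH (makeLensesFor)

-- A legacy record whose fields have no "_" prefix.
data Config = Config { host :: String, port :: Int }

-- Map each existing field name to whatever lens name you want.
makeLensesFor [("host", "hostL"), ("port", "portL")] ''Config

main :: IO ()
main = do
  let c = Config "localhost" 8080
  putStrLn (c ^. hostL)
  print    (c ^. portL)

It is `makeLenses` that relies on the "_" naming convention; `makeLensesFor` lets you name everything explicitly.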

 

  • For Haskell programmers, Java solves non-existing problems all the time :) Every single time you see on twitter or here something like "I replaced hundreds of lines of Java code with a single 'traverse'" you get the proof. And it happens so often.

 

That calls for a long conversation 😉 Business value - let me illustrate it; imagine this code:

 

.... lift ....                  -- NO business value in lift! It's infrastructure code

m <- loadModel "mymodel.bin"    -- there is business value here

checkModel m rules              -- there is business value too

 

So I could mark infrastructure code in red and business code in green, compute the ratio, and then talk about the "usefulness/effectiveness" of a language: how much infrastructure noise does it have (Java, btw, IMHO would have a bad ratio too). I have tons of types and JSON to/from instances - I repeat models that are already defined in external third-party services. The F# team thinks like me: doing things this way is not the enterprise way, so they introduced type providers - that's just one small example. In Haskell I wrote a lot of infrastructure code, various instances, etc., etc. Other languages are more focused on business value, on the domain, on business logic. I thought about DSLs, but DSLs can be an antipattern and lead to other problems...

 

  • Haskell code needs help from IDE, types hints, etc. 
  • Types are written and read by programmers. Java is impossible without IDE.  What is the point here?

 

Usually this is difficult for programmers to understand. Most say: Perl looks like sh*t, just look at all those %, $, etc. And they don't get a simple thesis: a language is about humans, about linguistics, not about computation. A language should not be oriented towards the compiler or its computational model - how would you like working only with bytes and words in C++? So, we have "a" and "the" in English, and we have "%" and "$" in Perl: I may not know an object's exact "type", but I can see its nature - scalar, vector, etc. In Haskell I can omit most signatures, and such code is not readable; I need Intero's help to check some types. That is a very bad situation. It's not quite the same in Java, because in Java you will have the signatures - you cannot omit them, right? And if you add operator noise on top (when I have no idea what a given piece of ASCII art does), the code becomes IDE-centric.

 

  • Better for whom? Definitely NOT better for me and my team using Haskell commercially. Again, to effectively meet requirements, functional and non-functional, we don't want just a mediocre compromise thing. I gave you an example with parsers already: different parsers have different tradeoffs. It is often a GOOD thing that there are many different libraries doing the same thing differently. 

 

Hm, if I end up with several libraries doing similar things (purely because of dependencies), then I get: 1) a big Haskell installation (1 GB?), 2) slow compilation, 3) big binaries, etc. I understand, you have freedom of choice. But look at the rest of IT: C++ converged on one standard library (absorbing some Boost solutions, etc.), the same with R7RS, D with its Phobos, OCaml has Batteries and Jane Street's Core, Python 😊 IMHO it's a general trend. Imagine a project pulling in 2-3 parser libraries, plus conduit and pipes, etc., purely via dependencies. So IMHO my point is not so strange or weird 😉 I'm not talking about dropping those libraries (parsers, etc.), but about creating one solid library whose components depend only on it. The other alternatives would live in the repos, and whoever wants them can use them without any problem. Something like Qt, Boost, Gtk, etc.

 

Let me be more precise: I'm comfortable with Haskell on the whole, but 1) I have discussed Haskell with other people, 2) I read the opinions of other people in industry, 3) I've been a programmer since '97 and I have a critical kind of mind, and all of this lets me also look at it from another POV. And I have seen how similar this is to the fate of other languages that had good, elegant ideas and stuck to a single concept, a single abstraction. That is the way to become marginalized, which is what happened to a lot of them. Actually, Haskell is already marginal: check how many programmers use it in the world - in most statistics it doesn't even show up. OK, I'm a geek, in real life too, but the IT industry is not a geek 😊

 

/Best regards

 

On Sun, Jul 15, 2018 at 1:28 AM Paul <[hidden email]> wrote:

Hello Alex!

 

> A small disclaimer: none of the members of our team has an academic background. We all have different backgrounds: C#, Java, Ruby, Python, C, even Perl if I am not mistaken. Yet we ended up with FP first, and then with Haskell.

> We have switched to Haskell from Scala, which _is_ a multi-paradigm language borrowing bits and pieces from other languages/paradigms and mixing them together. It is an enormously hard work to do it and for that, I very much respect

 

Oh, my first question would be: did you try Eta or Frege? Maybe I'm wrong, but Eta should support Haskell libraries as well as Java ones? They let you use libraries from both worlds...

 

> As a result, the language becomes overly complicated and less useful.

 

Yes, that is the other side of it. You know, everything has several sides: good and bad...

 

> Your joke about how Haskell has been made misses one point: it was initially designed as a lazy language (at least as far as I know). Many features that Haskell has now are there because of laziness: if you want to be lazy, then you have to be pure, you have to sequence your effects, etc.

 

True. Laziness makes Haskell unique. I think Haskell is what made laziness so popular in modern languages, although it was known long ago (as "infinite streams" of data, etc.). I think Miranda was lazy, so Haskell is lazy too 😊 And IMHO there was some lazy dialect of ML (maybe I'm wrong).

 

> "Let's defer lambda, name it IO and let's call it Monad" -  this bit isn't even funny. Monad isn't IO. IO happens to be a monad (as many things do, List as an example), but monad isn't IO and has nothing to do with IO. A horse is classified as Mammal, but Mammal doesn't mean horse _at all_.

 

Sure. I mean that the need for side effects (and first of all I/O) is what led to monads.

 

> In a context of a lazy language, you need to sequence your effects (including side effects), that's the first point. The second is that instead of disappearing from Haskell, monads (and other concepts) are making their way to other languages. Scala has them, F# has them, even C# has them (however indirectly). Try to take away List Monad from C# developers and they'll kill you ;)

 

IMHO it is better to have less infrastructure code, and better to hide all the "machinery" in the compiler.

 

My point was that monads are a workaround for a Haskell problem; that was historically the reason for their appearance. And if I don't have that limitation in my language, I don't need any monads. What are the benefits of monads in ML, for example? They are used in F#, but 1) computation expressions are not monads but a step forward, "monads++", and 2) they play a different role in F#: simplifying the code. And you can avoid them in every language except Haskell. For example, Prolog can be "pure" and do I/O without monads; so can Clean, and so can F#. Monads have pros, sure, but they are not composable, and that workaround leads to another workaround - transformers. I'm not alone in this opinion: https://www.youtube.com/watch?v=rvRD_LRaiRs All of this looks like overengineering caused by the limitation I mentioned. There is no such limitation in ML or F#. D has the keyword "pure" and didn't introduce monads. Performance is a very important feature of a language, and that limitation is reason #1 why Haskell has poor and unpredictable performance. A "do" block is not the same as a "flat" block of C# statements, and its performance is not the same. I can achieve the Maybe effect with nullables + exceptions or the ?-family of operators, the List effect with permutations/LINQ, guard with if + break/continue, and do it without sacrificing performance. ListT/conduits are just generators/enumerators. The benefit of monads is IMHO small; they are a workaround for a Haskell problem and are not needed in other languages. Sure, there are monads in OCaml, JavaScript, Python (as experimental libraries), but the reason is hype. Nobody will remember them in 5-10 years...
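
For comparison, this is the kind of code the List monad plus guard stands in for (a small sketch, my own example), which in C# would be LINQ or nested loops with continue:

import Control.Monad (guard)

-- All Pythagorean triples with sides up to n.
triples :: Int -> [(Int, Int, Int)]
triples n = do
  a <- [1 .. n]
  b <- [a .. n]
  c <- [b .. n]
  guard (a * a + b * b == c * c)   -- plays the role of "if ... continue"
  pure (a, b, c)

main :: IO ()
main = print (triples 20)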

 

Actually this is very-very subjective IMHHHHO 😊

 

> Lenses and generic lenses help, so be it. But I don't think that anything prevents Haskell from having it, and I don't think that Haskell as a language needs a dramatic change as you depict to make it happen. Just a feature.

 

When I work with legacy code, there are a lot of types whose fields do not start with the "_" prefix, so I need to create lenses explicitly... "Infrastructure" code. What is the business value of such code? Nothing. To a non-Haskell programmer it looks like you are solving a non-existent problem 😊  (A very, very provocative point: all Haskell solutions look overengineered. The reason is: lambda-abstraction-only. When you try to build something big out of tiny pieces, the process will feel overengineered. Imagine the pyramids built of small bricks.)

 

> I don't agree that operators are noise. You certainly can write Haskell almost without operators if you wish.

 

Here I agree with D. Knuth's ideas on literate programming: if code cannot be easily read and understood on a hard copy, then the language is not fine. Haskell code needs help from the IDE, type hints, etc. And I often meet cases where somebody does not understand which monad a given "do" block is in. Also, there are a lot of operators in different libraries, and no way to know what a given operator means (different libraries, even different versions, have their own sets of operators).

 

> As for extensions, I think that many more should be just switched on by default.

 

+1

 

> You mean that conversion should happen implicitly? Thank you, but no, thank you. This is a source of problems in many languages, and it is such a great thing that Haskell doesn't coerce types implicitly. 

 

No... Actually, I have no idea what is better. Currently there are a lot of conversions: some library functions expect String, others Text, or ByteString, lazy or strict, and the same with numbers (Word/Int/Integer). So conversions happen all the time.
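
For anyone who has not hit this yet, these are the conversions in question; a sketch, assuming the text and bytestring packages:

import qualified Data.Text            as T
import qualified Data.Text.Encoding   as TE
import qualified Data.ByteString.Lazy as BL

main :: IO ()
main = do
  let s  = "hello, world"     :: String
      t  = T.pack s           -- String -> strict Text
      bs = TE.encodeUtf8 t    -- strict Text -> strict ByteString
      bl = BL.fromStrict bs   -- strict ByteString -> lazy ByteString
      t' = TE.decodeUtf8 (BL.toStrict bl)   -- ... and back to Text
  putStrLn (T.unpack t')      -- Text -> String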

 

> I don't understand this "no business value" statement. Value for which business? What does it mean "check types, no business value"? 

There are libraries that do nothing at run time - only type plays, only abstractions over types. And somebody says: oh man, look how many libraries Haskell has. But compare the libraries of Haskell with those of Java, C#, JavaScript, Perl, Python 😊 The libraries of Java, Python, etc. have business value - real-world functionality, not abstract play with types. A more important point is the case of Agda getting installed 😊 or alternative libraries doing the same or similar things. The reason matters: Haskell pushes a lot of functionality out into libraries, which IMHO is not good design. This is the root of the problem. Better to have one good, solid library bundled with GHC itself ("batteries included"), so that only genuinely specific things live in external libraries and frameworks. Monads and monad transformers are a central thing in Haskell, yet they live in libraries. There is a standard parser-combinator library shipped with GHC, but you will still have another one (or more than one!) in the same project. Etc., etc...

 

Also, the installed GHC... why is it so big?! IMHO it's time to clean up the ecosystem, to reduce it to "batteries" 😊

 

> And then it falls into a famous joke: "The problem with Open Source Software is YOU because YOU are not contributing" :) Meaning that if we want more good libs then we should write more good libs :)

 

Absolutely true 😊

 

On Sat, Jul 14, 2018 at 5:05 PM Paul <[hidden email]> wrote:

I understand that my points are disputable - sure, for example, the multi-paradigm Oz is dead 😊 Any rule has exceptions. But my point was that people don't like elegant, single-abstraction languages. It's my observation. For me, Smalltalk was a good language (mostly dead now, except Pharo, which looks cool). Forth - a high-level "stack-around-assembler" - is mostly dead (Factor looks abandoned; only 8th looks super cool, but it's not free). What else? Lisp? OK, there are SBCL, Clojure, Racket... but you usually don't even find Clojure in language trend charts. APL, J - super cool! Seems dead (I don't know what is happening with K). ML, SML? By the way, Haskell's role was to kill the SML community - sure, it is sad to acknowledge, but it's 100% true...

 

Haskell tries to be minimalistic, and IMHO this can lead to death. Joachim, I'm not saying "it's good/it's bad", "multi-paradigm is good" or anything like that... I don't know what is right. These are only my observations. It looks like it can happen.

 

If we look at Haskell's history, we see a strange curve. I'll try to describe it with humour, so please don't take it seriously 😊

·       Let’s be pure lambda fanatics!

·       Is it possible to create a big application?

·       Is it possible to compile and optimize it?!

·       Let’s try...

·       Wow, it's possible!!! (sure it's possible - Lisp did it long, long ago).

·       Looks like a puzzle, can be used to write a lot of articles (there were articles about combinators, Joy/Cat/Scheme, etc.; now there are a lot of Haskell articles - big interest in academia. But IMHO academic interest in a language can also kill it: Clean, Strongtalk, etc.)

·       Stop! How to do I/O? Real programming?!!

·       Ohh, if we wrap it in a lambda and defer it to the top level (main :: IO ()), it will have an I/O type (the wrapper is hidden in the type)

·       Let’s call it... Monad!!

·       Wow, cool! It works! Everybody should use monads! Your language doesn't have monads? Then we fly to you! (Everybody forgets that monads are a workaround for a Haskell limitation and are not needed in other languages. Also, they lead to low-performance code.)

·       But how to compose them???!?!

·       We will wrap/unwrap, wrap/unwrap... Let's call it... transformers!!! "Monad transformers" - sounds super cool. Your language doesn't have a "lift" operation, right? Ugh...

·       How to access record fields... how... That is the question. '.' - no! '#' - no! Eureka! We will add several language extensions and voila!

·       To be continued... 😊

 

I love Haskell, but I think such a curve is absolutely impossible for a commercial language, with IT managers 😊 Solving a problem in a way where the solution leads to another problem, which needs yet another solution, all just to keep lambda-abstraction-only (OK, Vanessa, Backpack too 😉). Can you imagine all cars being red? Or all food being sweet? It's not a technical question, but a psychological and linguistic one. Why are natural languages not so limited? They even borrow words and forms from one another 😊

 

Haskell's core team knows better than me, and I respect a lot of Haskell users - most of them have helped me A LOT (!!!). This is not even an opinion, because I don't know what the right way is. Let's call it an observation and a feeling about the future.

 

I feel Haskell has 3 possible paths: 1) to die, 2) to change itself, 3) to fork into another language.

This is how I see a commercially successful Haskell-like language:

·       No monads, no transformers

·       There are dependent types, linear types

·       There are other evaluation models/abstractions (not only lambda)

·       Special syntax for records fields, etc

·       Less operator noise and fewer language extensions (but this is very disputable)

·       Solve problems with numerous from/to conversions (strings, etc)

·       Solve problems with libraries

 

The last point needs explanation:

·       There are a lot of libraries written only to explore some type-level concept, with no business value. There are also a lot of libraries written by students while learning Haskell: mostly without any business value, and abandoned

·       There are situations where you end up with alternative libraries in one project due to dependencies (where there should be only one, not both!)

·       Strange dependencies: I even have Agda installed! Why???!

 

IMHO the problems with libraries and the lambda-only abstraction lead to super slow compilation and a big, complex compiler.

So, currently I see (again, this is only my observation) 2 big "camps":

1.       Academia, which has its own interests, for example keeping Haskell minimalistic (one abstraction only). The only trade-off was adding language extensions, but those fragment the language

2.       Practical programmers, whose interests are different from those of the 1st "camp"

 

Another observation of mine: a lot of people tried Haskell and then switched to other languages (C#, F#, etc.) because they could not use it for big enterprise projects (Haskell became a hobby for small experiments, or was dropped).

 

Joachim, I absolutely agree that a big company could solve a lot of these problems. But some of them already have their own languages (compare units of measure in Haskell and in F# and see which looks better...).

 

When I talked about killer apps, I meant: devs like Ruby not because of its syntax but because of RoR. The same with Python: sure, Python's syntax is very good, but without Zope, Django, TurboGears, SQLAlchemy, Twisted, Tornado, Cheetah, Jinja, etc., nobody would use Python. Sure, there are exceptions: Delphi and C++Builder, for example. But that is Borland's bad karma 😊 They had a lot of compilers (Pascal, Prolog, C/C++, etc.), but... On the other hand, after the reincarnation we got C# 😊  Actually, all of these are only observations: nobody knows the future.

 

 

/Best regards, Paul

 

From: [hidden email]
Sent: 13 July 2018 21:49
To: [hidden email]
Subject: Re: [Haskell-cafe] Investing in languages (Was: What is your favourite Haskell "aha" moment?)

 

On 13.07.2018 at 09:38, PY wrote:
> 1. Haskell limits itself to lambda-only. Example, instead to add other
> abstractions and to become modern MULTI-paradigm languages,

"modern"?
That's not an interesting property.
"maintainable", "expressive" - THESE are interesting. Multi-paradigm can help, but if overdone it can hinder them - the earliest multi-paradigm language I'm aware of was PL/I, and that was a royal mess, I hear.

> So, point #1 is limitation in abstraction: monads, transformers,
> anything - is function. It's not good.

Actually, limiting yourself to a single abstraction tool can be good.
It simplifies the semantics and makes it easier to build stuff on top of it.
Not that I'm saying this is necessarily the best thing.

> There were such languages already: Forth, Joy/Cat, APL/J/K... Most of
> them look dead.

Which proves nothing, because many multi-paradigm languages look dead, too.

> When you try to be elegant, your product (language) died.

Proven by Lisp... er, disproven.

> This is not my opinion, this is only my observation. People like
> diversity and variety: in food, in programming languages, in relations,
> anywhere :)

Not in programming languages.
Actually, multi-paradigm is usually a bad idea. It needs to be done in an excellent fashion to create something even remotely usable, while a single-paradigm language is much easier to do well.
And in practice, bad language design has much nastier consequences than leaving out some desirable feature.

> 2. When language has killer app and killer framework, IMHO it has more
> chances. But if it has _killer ideas_ only... So, those ideas will be
> re-implemented in other languages and frameworks but with more simple
> and typical syntax :)

"Typical" is in the eye of the beholder, so that's another non-argument.

> It's difficult to compete with product, framework, big library, but
> it's easy to compete with ideas. It's an observation too :-)

Sure, but Haskell has the product, the framework, the big library.
What's missing is commitment by a big company, that's all. Imagine Google adopting Haskell, committing to building libraries and looking for Haskell programmers in the streets - all of a sudden, Haskell would be the talk of the day. (Replace "Google" with whatever big-name company with deep pockets: Facebook, MS, IBM, you name it.)

> language itself is not argument for me.

You are arguing an awful lot about missing language features ("multi-paradigm") to credibly make that statement.

> Argument for me (I am usual developer) are killer
> apps/frameworks/libraries/ecosystem/etc. Currently Haskell has stack
> only - it's very good, but most languages has similar tools (not all
> have LTS analogue, but big frameworks are the same).

Yeah, a good library ecosystem is very important, and from the reports I see on this list it's not really good enough.
The other issue is that Haskell's extensions make it more difficult to have library code interoperate. Though that's a trade-off: the freedom to add language features vs. full interoperability. Java opted for the opposite: 100% code interoperability at the cost of a really annoying language evolution process, and that gave it a huge library ecosystem.

But... I'm not going to make the Haskell developers' decisions. If they don't feel comfortable with reversing the whole culture and making interoperability trump everything else, then I'm not going to blame them. I'm not even going to predict anything about Haskell's future, because my glass orb is out for repairs and I cannot currently predict the future.

Regards,
Jo


Re: Investing in languages (Was: What is your favourite Haskell "aha" moment?)

Andrey Sverdlichenko
If you take a large D program and remove all "pure" annotations from it, it will not become buggy _immediately_. But its chances of becoming buggy after a few changes will increase dramatically.

Constraints and enforced purity in Haskell are tools that let you design safe programs, letting the compiler catch your (or a junior intern's) hand before a bug is introduced. And I agree with Alexey: the inability to express "this branch of code must never change anything outside" is a big problem in the majority of type systems.
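
To make that concrete, a small sketch (my own example, hypothetical function names): a stray debug print simply does not type-check inside a pure function, because putStrLn returns IO () and nothing of that type fits into an Int -> Int signature.

applyDiscount :: Int -> Int
applyDiscount price = (price * 90) `div` 100
  -- trying to call putStrLn in here is rejected by the compiler

-- To log, the effect has to appear in the type, and every caller sees it:
applyDiscountLogged :: Int -> IO Int
applyDiscountLogged price = do
  putStrLn ("price = " ++ show price)
  pure ((price * 90) `div` 100)

main :: IO ()
main = applyDiscountLogged 200 >>= print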

On Sun, Jul 15, 2018, 12:07 Paul <[hidden email]> wrote:
  • Eta does. Through a very nice FFI. But so does Haskell. We have nice FFI to use C libs. I maintain a couple of libs that use it extensively, works quite well.

 

I asked because never tried Eta. So, if you are right, seems no reasons to develop Eta...

 

  • Can I have a definition and laws of "monad++"? Otherwise, I don't understand what you are talking about. If it obeys monadic laws it is a monad. But I'll wait for the definition. 

 

No better definition then original: https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/computation-expressions  You see, they are different.

 

  • But it is not lazy - one. Remember, laziness is our requirement here. Whatever you propose _must _ work in a context of laziness.

 

Does it mean because Haskell is lazy (Clean – not) then linear types are impossible in Haskell? If they are possible why we need monads?

 

  • Second, the inability to track side effects in F# is not "simplification" and is not a benefit, but rather a limitation and a disadvantage of the language and its type system.

Why?

 

Haskell “tracks” effects obviously. But I shown example with State monad already. As I saw, nobody understand that State monad does not solve problem of spaghetti-code style manipulation with global state. Even more, it masks problem. But it was solved in OOP when all changes of state happen in one place under FSM control (with explicit rules of denied transitions: instead of change you have a request to change/a message, which can fail if transition is denied). Haskell HAS mutable structures, side-effects and allows spaghetti-code.  But magical word “monad” allows to forget about problem and the real solution and to lie that no such problem at whole (it automatically solved due to magical safety of Haskell). Sure, you can do it in Haskell too, but Haskell does not force you, but Smalltalk, for example, forces you.

 

We often repeat this: “side-effects”, “tracks”, “safe”. But what does it actually mean? Can I have side-effects in Haskell? Yes. Can I mix side-effects? Yes. But in more difficult way than in ML or F#, for example. What is the benefit? Actually no any benefit, it’s easy understandable with simple experiment: if I have a big D program and I remove all “pure” keywords, will it become automatically buggy? No. If I stop to use “pure” totally, will it become buggy? No. If I add “print” for debug purpose in some subroutines, will they become buggy? No. If I mix read/write effects in my subroutine, will it make it buggy? No.

 

IMHO there is some substitution of concepts. Monads roots are not safety, but workaround to have effects in pure lambdas. And after monads introduction, a thesis was appeared: “monads make code more safe”. Motivation of monads was not to track effect (because it’s allegedly more safe), but to inject/allow/introduce effects in language like Haskell. Good example is State monad, again. State monad is needed to support manipulation of state (through argument replacement in chain of lambdas) but it’s totally other thing in comparison with idea to separate state manipulation in one isolated place under control of some FSM with explicit rules of allowed and denied transitions. If I am switching from read monad to RWST monad, will my code be more buggy? Again, no. Monads don’t decrease buggy or global-state-manipulation problems automatically, they are workaround for specific Haskell problem. Other languages don’t need monads. I CAN have monads in Python, Javascript but I don’t need them. My point is: monads are not valuable byself: Ocaml has not any monads and all is fine 😊

 

But it’s really very philosophical question, I think that monads are over-hyped actually. I stopped seeing the value of monads by themselves.

 

  • Third, AFAIK CLR restrictions do not allow implementing things like Functor, Monad, etc. in F# directly because they can't support HKT. So they workaround the problem.

 

https://fsprojects.github.io/FSharpPlus/abstractions.html (btw, you can see that monad is monoid here 😉)

 

  • But again, F# cannot be expressive enough: no HKT, no ability to express constraints, no ability to track effects...

 

If F# has monads (you call “monads” to computational expressions), then it CAN...

About HKT: yes, that’s true. But may be, it’s not so big problem? May be you can write effective, good and expressive code without them? Otherwise, we should agree that all languages without HKT are not expressive...

 

  • Really?  You keep mentioning F#, and I struggle with it right now _because_ of such limitations. There are no meaningful ways abstract over generics, it is impossible to reason about functions' behaviours from their type signatures (because side effects can happen anywhere), it has Option, but you still can get Null, you can't have constraints, etc., etc. It is sooooo muuuuch mooore limited.

 

IMHO fear of “side effects can happen anywhere” becomes traditional thesis. And what is the problem if I add “print” in some function?! 😊 Again, substitution of concepts, of monad’s motivations. Haskell compiler can not transform code with side-effects in effective way, and I must isolate all side-effects, mark such functions, but this is the problem of Haskell compiler, not mine. Why programmer should help compiler?! You can look at MLTon compiler, or OCaml one. IMHO they are good and work fine with side-effects; programs WITH side-effects under RWST or State or without those monads in ML are equal: if I port 101 ML functions with side-effects to Haskell, then I will add monad and will have 101 functions with the same side-effects, but now under monad. I can propagate my monad anywhere 😊 All my functions can be under RWST or State. Again, this problem should be solved in other way, not by wrapping the same actions in some wrapper-type (monad). A funny consequence of this, now my program become deep nested lambdas. Compiler should try to “flat” some of them, but I am not competent how Haskell good in it. Anyway, performance will be worse than in ML version. But focus here is a mark-with-monad-type, not the avoid side-effects. And it’s needed for compiler, not for me. May be I’m not clean, may be it is heavy to accept to the people hypnotized by magic invocations about monads power, may be I don’t understand something 😊

 

  • No, you can't. 

 

Something like this:   user?.Phone?.Company?.Name??"missing";     ?

 

(do

  u <- mbUser

   ph <- phoneOf u

   ...) <|> pure “missing”

 

  • OCaml exists for 22 years now, doing well and solves problems it has been designed for very well. So _already_ more than twice compare to your prediction.

 

It’s Ocaml. It follows to the golden middle, to avoid danger corner 😉

 

  • fields are not starting with “_” prefix, so I need to create lenses explicitly
  • No you don't. You don't have to have "_" prefix to generate a lense. You have total control here. 

 

Hmm, may be I’m not right. I’m using microlenses and call `makeLensesFor`...

 

  • For Haskell programmers, Java solves non-existing problems all the time :) Every single time you see on twitter or here something like "I replaced hundreds of lines of Java code with a single 'traverse'" you get the proof. And it happens so often.

 

It involves a long talk 😉 Business value - I’ll illustrate it, imagine a code:

 

.... lift ....   – NO business value in lift! It’s infrastructure code

m <- loadModel “mymodel.bin” – there is business value

checkModel m rules – there is business value too

 

So, I can mark infrastructure code with red color, business code with green and to calculate ratio. And to talk about “usefulness/effectivity” of language. How many infrastructure noise have the language (Java, btw, IMHO will have bad ratio too). I have a tons of types, JSON to/from instances, - I repeat models which are coded in external 3rd part services. But F# team thinks like me: it’s not enterprise way to do things in such manner and they introduced types providers – it’s only small example. In Haskell I wrote a lot of infrastructure code, different instances, etc, etc. But other languages are more concentrated on business value, on domain, on business logic. I though about DSLs, but DSLs can be antipattern and to lead to other problems...

 

  • Haskell code needs help from IDE, types hints, etc. 
  • Types are written and read by programmers. Java is impossible without IDE.  What is the point here?

 

Usually it’s difficult to understand for programmers. Most say: Perl looks like sh*t. Just look at these %, $, etc. And they don’t understand simple thesis: language is about humans, about linguistic, not about computation. Language should not be oriented to compiler or to its computational model, how will you like to work with bytes and words only in C++? So, we have “a”, “the” in English, we have “%”, “$” in Perl. And I don’t know exact object “type” but I can imagine its nature, it’s scalar, vector, etc. In Haskell I can skip most signatures and such code is not readable, I need Intero help to check some types. It’s very bad situation. It’s not 100% true for Java because you will have signatures in Java, you can not skip them, right? And if I add operators noise also (when I have not idea what does this ASCII-art do), the code becomes IDE-centric.

 

  • Better for whom? Definitely NOT better for me and my team using Haskell commercially. Again, to effectively meet requirements, functional and non-functional, we don't want just a mediocre compromise thing. I gave you an example with parsers already: different parsers have different tradeoffs. It is often a GOOD thing that there are many different libraries doing the same thing differently. 

 

Hm, if I have several libraries which are  doing similar things (only due to dependencies), then I have: 1) big Haskell installation (1Gb?) 2) slow compilation 3) big binaries, etc. I understand, you have freedom of choice. But let’s look to IT: C++ turned to one library (imported some Boost solutions, etc, etc), the same R7RS, D with its Phobos, Ocaml has batteries from Jane str., Python 😊 IMHO its general trend. Let’s imagine: project with 2, 3 parsers libraries, conduit and pipes, etc, due to dependencies. So, IMHO my point is not so strange or weird 😉 I’m talking about drop off those libraries (parsers, etc), but about creating of one solid library which components will depends only on it. Other alternatives will be somewhere in repos, who want, can use them without any problems. Something like Qt, Boost, Gtk, etc.

 

Let me be more precise, I’m comfort with Haskell at whole, but 1) I discussed Haskell with other people 2) I read opinion of other people in industry 2) I’m programmer since 97 and I have critical kind of the mind, so all of these allows me also to look from another POV. And I have been see how it’s similar to the fate of other languages which had good elegant ideas, they followed to one concept, abstraction only. This is the way to be marginalized, what happens with a lot of them. Actually, Haskell is already marginal: you can check how many programmers use it in the world, in most statistics it will not exist even. OK, I’m geek, in real life too, but IT industry is not a geek 😊

 

/Best regards

 

On Sun, Jul 15, 2018 at 1:28 AM Paul <[hidden email]> wrote:

Hello Alex!

 

> A small disclaimer: none of the members of our team has an academic background. We all have different backgrounds: C#, Java, Ruby, Python, C, even Perl if I am not mistaken. Yet we ended up with FP first, and then with Haskell.

> We have switched to Haskell from Scala, which _is_ a multi-paradigm language borrowing bits and pieces from other languages/paradigms and mixing them together. It is an enormously hard work to do it and for that, I very much respect

 

Oh, my 1st question will be: did you try Eta, Frege? May be I’m wrong but Eta should support Haskell libraries as well as Java ones? They allow you to use libraries from the both world...

 

> As a result, the language becomes overly complicated and less useful.

 

Yes, this is another side. You know, anything has several sides: good and bad...

 

> Your joke about how Haskell has been made misses one point: it was initially designed as a lazy language (at least as far as I know). Many features that Haskell has now are there because of laziness: if you want to be lazy, then you have to be pure, you have to sequence your effects, etc.

 

True. Laziness makes Haskell unique. I think Haskell makes laziness so popular in modern languages although it was known long ago (as data in “infinite streams”, etc). I think, Miranda was lazy, so Haskell is lazy too 😊 And IMHO there was some lazy dialect of ML (may be, I’m not right).

 

> "Let's defer lambda, name it IO and let's call it Monad" -  this bit isn't even funny. Monad isn't IO. IO happens to be a monad (as many things do, List as an example), but monad isn't IO and has nothing to do with IO. A horse is classified as Mammal, but Mammal doesn't mean horse _at all_.

 

Sure. I mean, the need of side-effects (and firstly I/O) led to the monads.

 

> In a context of a lazy language, you need to sequence your effects (including side effects), that's the first point. The second is that instead of disappearing from Haskell, monads (and other concepts) are making their way to other languages. Scala has them, F# has them, even C# has them (however indirectly). Try to take away List Monad from C# developers and they'll kill you ;)

 

Better IMHO to have less infrastructure code. Better is to hide all “machinery” in compiler.

 

My point was that monads are workaround of Haskell problem, this was historically reason of their appearance. And if I have not such limitation in my language I don’t need any monads. What are the monad benefits in ML, for example? They are using in F#, but 1) comp. expressions are not monads but step forward, “monads++” and 2) they play different role in F#: simplifying of the code. And you can avoid them in all languages except Haskell. For example, Prolog can be “pure” and to do I/O without monads, also Clean can as well as F#. Monads have pros, sure, but they are not composable and workaround leads to another workaround – transformers. I’m not unique in my opinion: https://www.youtube.com/watch?v=rvRD_LRaiRs All of this looks like overengineering due to mentioned limitation. No such one in ML, F#. D has keyword “pure”, and didn’t introduce monads. Performance is very important feature of the language, that limitation is the reason #1 why Haskell has bad and unpredictable performance. “do”-block is not the same as “flat” block of C# statements and its performance is not the same. I can achieve Maybe effect with nullable+exceptions or ?-family operators, List with permutations/LINQ, guard with if+break/continue and to do it without sacrificing performance.. ListT/conduits – are just generators/enumerators. Benefit of monads IMHO is small, they are workaround of Haskell problem and are not needed in other languages. Sure, there are monads in Ocaml, Javascript, Python (as experimental libraries), but the reason is hype. Nobody will remember them after 5-10 years...

 

Actually this is very-very subjective IMHHHHO 😊

 

> Lenses and generic lenses help, so be it. But I don't think that anything prevents Haskell from having it, and I don't think that Haskell as a language needs a dramatic change as you depict to make it happen. Just a feature.

 

When I have legacy code, there are a lot of types which fields are not starting with “_” prefix, so I need to create lenses explicitly... “Infrastructure” code. What is the business value of such code: nothing. For non-Haskell programmer it looks like you try to solve non-existing problem 😊  (very-very provocative point: all Haskell solutions looks very overengineering. The reason is: lambda-abstraction-only. When you try to build something big from little pieces then the process will be very overengineering. Imagine that the pyramids are built of small bricks).

 

> I don't agree that operators are noise. You certainly can write Haskell almost without operators if you wish.

 

Here I’m agree with D. Knuth ideas of literature programming: if code can not be easy read and understand on the hard-copy then used language is not fine. Haskell code needs help from IDE, types hints, etc. And I often meet a case when somebody does not understand what monads are in “do” blocks. Also there are a lot of operators in different libraries and no way to know what some operator means (different libraries, even different versions have own set of operators).

 

> As for extensions, I think that many more should be just switched on by default.

 

+1

 

> You mean that conversion should happen implicitly? Thank you, but no, thank you. This is a source of problems in many languages, and it is such a great thing that Haskell doesn't coerce types implicitly. 

 

No... Actually, I have not idea what is better. Currently there are a lot of conversions. Some libraries functions expect String, another - Text, also ByteString, lazy/strict, the same with the numbers (word/int/integer). So, conversions happen often.

 

> I don't understand this "no business value" statement. Value for which business? What does it mean "check types, no business value"? 

There are libraries which nothing do in run-time. Only types playing. Only abstractions over types. And somebody says: oh man, see how many libraries has Haskell. But you can compare libraries of Haskell, Java, C#, Javascript, Perl, Python 😊 All libraries of Java, Python... have business value. Real-world functionality. Not abstract play with types. But more important point is a case with installed Agda 😊 or alternative libraries which does the same/similar things. The reason is important: Haskell moves a lot of functionality to libraries which is not good design IMHO. This is the root of the problem. Better is to have one good solid library bundled with GHC itself (“batteries included”) and only specific things will live in libraries and frameworks. Monads and monads transformers are central thing in Haskell. They a located in libraries. There is standard parser combinators in GHC itself, but you will have in the same project another one (or more than 1!). Etc, etc...

 

Also installed GHC... Why is it so big!? IMHO it’s time to clear ecosystem, to reduce it to “batteries” 😊

 

> And then it falls into a famous joke: "The problem with Open Source Software is YOU because YOU are not contributing" :) Meaning that if we want more good libs then we should write more good libs :)

 

Absolutely true 😊

 

On Sat, Jul 14, 2018 at 5:05 PM Paul <[hidden email]> wrote:

I understand that my points are disputable, sure, example, multi-pardigm Oz – dead 😊 Any rule has exceptions. But my point was that people don’t like elegant and one-abstraction languages. It’s my observation. For me, Smalltalk was good language (mostly dead, except Pharo, which looks cool). Forth – high-level “stack-around-assembler”, mostly dead (Factor looks abandoned, only 8th looks super cool, but it’s not free). What else? Lisp? OK, there are SBCL, Clojure, Racket... But you don’t find even Clojure in languages trends usually. APL, J – super cool! Seems dead (I don’t know what happens with K). ML, SML? By the way, Haskell role was to kill SML community, sure it is sad to acknowledge it, but it’s 100% true...

 

Haskell try to be minimalistic and IMHO this can lead to death. Joachim, I’m not talking “it’s good/it’s bad”, “multiparadigm is good” or else... I don’t know what is right. It’s my observations only. Looks like it can happen.

 

If we will look to Haskell history then we see strange curve. I’ll try to describe it with humour, so, please, don;t take it seriously 😊

·       Let’s be pure lambda fanatics!

·       Is it possible to create a big application?

·       Is it possible to compile and optimize it?!

·       Let’s try...

·       Wow, it’s possible!!! (sure, it’s possible, Lisp did it long-long ago).

·       Looks like puzzle, can be used to write a lot of articles (there were articles about combinators, Jay/Cat/Scheme, etc, now there are a lot of Haskell articles – big interesting in academia. But IMHO academia interest to language can kill it too: Clean, Strongtalk, etc)

·       Stop! How to do I/O? Real programming?!!

·       Ohh, if we will wrap it in lambda and defer it to top level (main::IO ()), it will have I/O type (wrapper is hidden in type)

·       Let’s call it... Monad!!

·       Wow, cool! Works! Anybody should use monads! Does not your language have monads? Then we fly to you! (everybody forgot that monads are workaround of Haskell limitation and are not needed in another languages. Also they lead to low-performance code)

·       But how to compose them???!?!

·       We will wrap/unwrap, wrap/unwrap.. Let’s call it... transformers!!! “Monad transformers” – sounds super cool. Your language does not have “lift” operation, right? Ugh...

·       How to access records fields... How... That’s a question. ‘.’ - no! ‘#’ - no! Eureka! We will add several language extensions and voila!

·       To be continued... 😊

 

I love Haskell but I think such curve is absolutely impossible in commercial language. With IT managers 😊 To solve problem in a way when solution leads to another problem which needs new solution again and reason is only to keep lambda-abstraction-only (OK, Vanessa, backpacks also 😉) Can you imagine that all cars will have red color? Or any food will be sweet? It’s not technical question, but psychological and linguistic. Why native languages are not so limited? They even borrow words and forms from another one 😊

 

Haskell’s core team knows how better then me, and I respect a lot of Haskell users, most of them helped me A LOT (!!!). It’s not opinion even, because I don’t know what is a right way. Let’s call it observation and feeling of the future.

 

I feel: Haskell has 3 cases: 1) to die 2) to change itself 3) to fork to another language

How I see commercial successful Haskell-like language:

·       No monads, no transformers

·       There are dependent types, linear types

·       There are other evaluation models/abstractions (not only lambda)

·       Special syntax for records fields, etc

·       Less operators noise, language extensions (but it’s very disputable)

·       Solve problems with numerous from/to conversions (strings, etc)

·       Solve problems with libraries

 

Last point needs explanation:

·       There is a lot of libraries written to check some type concepts only, no any business value. Also there are a lot of libraries written by students while they are learning Haskell: mostly without any business value/abandoned

·       There is situation when you have alternative libraries in one project due to dependencies (but should be one only, not both!)

·       Strange dependencies: I have installed Agda even! Why???!

 

IMHO problems with libraries and lambda-only-abstraction lead to super slow compilation, big and complex compiler.

So, currently I see (again, it’s my observation only) 2 big “camps”:

1.       Academia, which has own interests, for example, to keep Haskell minimalistic (one-only-abstraction). Trade-off only was to add language extensions but they fragmentizes the language

2.       Practical programmers, which interests are different from 1st “camp”

 

Another my observation is: a lot of peoples tried Haskell and switched to another languages (C#, F#, etc) because they cannot use it for big enterprise projects (Haskell becomes hobby for small experiments or is dropped off).

 

Joachim, I’m absolutely agreed that a big company can solve a lot of these problems. But some of them have already own languages (you can compare measure units in Haskell and in F#, what looks better...).

 

When I said about killer app, I mean: devs like Ruby not due to syntax but RoR. The same Python: sure, Python syntax is very good, but without Zope, Django, TurboGears, SQLAlchemy, Twisted, Tornado, Cheetah, Jinja, etc – nobody will use Python. Sure, there are exceptions: Delphi, CBuilder, for example. But this is bad karma of Borland 😊 They had a lot of compilers (pascal, prolog, c/c++, etc), but... On the other hand after reincarnation we have C# 😊  Actually all these are only observations: nobody knows the future.

 

 

/Best regards, Paul

 

From: [hidden email]
Sent: 13 июля 2018 г. 21:49
To: [hidden email]
Subject: Re: [Haskell-cafe] Investing in languages (Was: What is yourfavourite Haskell "aha" moment?)

 

Am 13.07.2018 um 09:38 schrieb PY:

> 1. Haskell limits itself to lambda-only. Example, instead to add other

> abstractions and to become modern MULTI-paradigm languages,

 

"modern"?

That's not an interesting property.

"maintainable", "expressive" - THESE are interesting. Multi-paradigm can

help, but if overdone can hinder it - the earliest multi-paradigm

language I'm aware of was PL/I, and that was a royal mess I hear.

 

> So, point #1 is limitation in

> abstraction: monads, transformers, anything - is function. It's not

> good.

 

Actually limiting yourself to a single abstraciton tool can be good.

This simplifies semantics and makes it easier to build stuff on top of it.

 

Not that I'm saying that this is necessarily the best thing.

 

> There were such languages already: Forth, Joy/Cat, APL/J/K... Most of

> them look dead.

Which proves nothing, because many multi-paradigm languages look dead, too.

 

> When you try to be elegant, your product (language) died.

Proven by Lisp... er, disproven.

 

> This is not my opinion, this is only my observation. People like

> diversity and variety: in food, in programming languages, in relations,

> anywhere :)

 

Not in programming languages.

Actually multi-paradigm is usually a bad idea. It needs to be done in an

excellent fashion to create something even remotely usable, while a

single-paradigm language is much easier to do well.

And in practice, bad language design has much nastier consequences than

leaving out some desirable feature.

 

> 2. When language has killer app and killer framework, IMHO it has more

> chances. But if it has _killer ideas_ only... So, those ideas will be

> re-implemented in other languages and frameworks but with more simple

> and typical syntax :)

 

"Typical" is in the eye of the beholder, so that's another non-argument.

 

> It's difficult to compete with product,

> framework, big library, but it's easy to compete with ideas. It's an

> observation too :-)

 

Sure, but Haskell has product, framework, big library.

What's missing is commitment by a big company, that's all. Imagine

Google adopting Haskell, committing to building libraries and looking

for Haskell programmers in the streets - all of a sudden, Haskell is

going to be the talk of the day. (Replace "Google" with whatever

big-name company with deep pockets: Facebook, MS, IBM, you name it.)

 

> language itself is not argument for me.

 

You are arguing an awful lot about missing language features

("multi-paradigm") to credibly make that statement.

 

> Argument for me (I

> am usual developer) are killer apps/frameworks/libraries/ecosystem/etc.

> Currently Haskell has stack only - it's very good, but most languages

> has similar tools (not all have LTS analogue, but big frameworks are the

> same).

 

Yeah, a good library ecosystem is very important, and from the reports I

see on this list it's not really good enough.

The other issue is that Haskell's extensions make it more difficult to

have library code interoperate. Though that's a trade-off: The freedom

to add language features vs. full interoperability. Java opted for the

other opposite: 100% code interoperability at the cost of a really

annoying language evolution process, and that gave it a huge library

ecosystem.

 

But... I'm not going to make the Haskell developers' decisions. If they

don't feel comfortable with reversing the whole culture and make

interoperability trump everything else, then I'm not going to blame

them. I'm not even going to predict anything about Haskell's future,

because my glass orb is out for repairs and I cannot currently predict

the future.

 

Regards,

Jo

_______________________________________________

Haskell-Cafe mailing list

To (un)subscribe, modify options or view archives go to:

http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe

Only members subscribed via the mailman list are allowed to post.

 

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.


Re: Investing in languages (Was: What isyourfavouriteHaskell "aha" moment?)

Joachim Durchholz
In reply to this post by Paul
Am 15.07.2018 um 18:06 schrieb Paul:
>   * But it is not lazy - one. Remember, laziness is our requirement
>     here. Whatever you propose _must _ work in a context of laziness.
>
> Does it mean because Haskell is lazy (Clean – not) then linear types are
> impossible in Haskell?

Laziness and linear types are orthogonal.

 > If they are possible why we need monads?

"Monadic" as a term is at the same level as "associative".
A very simple concept that is visible everywhere, and if you can arrange
your computations in a monadic manner you'll get a certain level of
sanity. And lo and behold, you can even write useful libraries just
based on the assumption that you're dealing with monadic structures,
that's monad transformers (so they're more interesting than associativity).

So monads are interesting and useful (read: important) regardless of
whether you have laziness or linear types.
Again: orthogonal.
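
To make "useful libraries just from the monadic assumption" concrete, here is a
tiny illustration (nothing beyond base is assumed): replicateM is written once
against the Monad interface and nothing else, yet it does something sensible at
every monad you hand it.

import Control.Monad (replicateM)

pairsOfDice :: [[Int]]
pairsOfDice = replicateM 2 [1 .. 6]   -- list monad: all 36 ordered pairs

twoLines :: IO [String]
twoLines = replicateM 2 getLine       -- IO monad: read two lines from stdin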

> Haskell “tracks” effects obviously. But I shown example with State monad
> already. As I saw, nobody understand that State monad does not solve
> problem of spaghetti-code style manipulation with global state.

Actually that's pretty well-known.
Not just for State but for anything that hides state out of plain sight,
i.e. somewhere else than in function parameters. I.e. either some struct
type, or by returning a partially evaluated function that has that data.
People get bitten by those things, and they learn to avoid these
patterns except where it's safe - just as with spaghetti code, which
people stopped writing a few years ago (nowadays it's more spaghetti
data but at least that's analyzable).

So I think if you don't see anybody explicitly mentioning spaghetti
issues with State, that's because for some people it's just hiding in plain
sight and they either aren't consciously aware of it, or find that area so
self-explanatory that they don't think they really need to explain it.

Or you simply misunderstood what people are saying.

> But it was solved in OOP when all changes of state happen in *one
> place* under FSM control
Sorry, but that's not what OO is about.
Also, I do not think that you're using general FSMs, else you'd be
having transition spaghetti.

> (with explicit rules of denied transitions: instead of change you
> have *a request to change*/a message, which can fail if transition is
> denied).
Which does nothing about keeping transitions under control.
Let me repeat: What you call a "message" is just a standard synchronous
function call. The one difference is that the caller allows the target
type to influence what function gets actually called, and while that's
powerful it's quite far from what people assume if you throw that
"message" terminology around.
This conflation of terminology has been misleading people since the
invention of Smalltalk. I wish people would finally stop using that
terminology, and highlight those things where Smalltalk really deviates
from other OO languages (#doesNotUnderstand, clean
everything-is-an-object concepts, Metaclasses Done Right). This message
send terminology is just a distraction.

> Haskell HAS mutable structures, side-effects and allows
> spaghetti-code.
Nope.
Its functions can model these, even to the point that the Haskell code is
still spaghetti.
But that's not the point. The point is that Haskell makes it easy to
write non-spaghetti.

BTW you have similar claims about FSMs. Ordinarily they are spaghetti
incarnate, but you say they work quite beautifully if done right.
(I'm staying sceptical because your arguments in that direction didn't
make sense to me, but that might be because I'm lacking background
information, and filling in these gaps is really too far off-topic to be
of interest.)

 >  But magical word
> “monad” allows to forget about problem and the real solution and to lie
> that no such problem at whole (it automatically solved due to magical
> safety of Haskell). Sure, you can do it in Haskell too, but Haskell does
> not force you, but Smalltalk, for example, forces you.

WTF? You can do spaghetti in Smalltalk. Easily actually, there are
plenty of antipatterns for that language.

> We often repeat this: “side-effects”, “tracks”, “safe”. But what does it
> actually mean? Can I have side-effects in Haskell? Yes. Can I mix
> side-effects? Yes. But in more difficult way than in ML or F#, for
> example. What is the benefit?

That it is difficult to accidentally introduce side effects.
Or, rather, the problems of side effects. Formally, no Haskell program
can have a side effect (unless using UnsafeIO or FFI, but that's not
what we're talking about here).

> Actually no any benefit,

Really. You *should* listen more. If the overwhelming majority of
Haskell programmers who're using it in practice tell you that there are
benefits, you should question your analysis, not their experience. You
should ask rather than make bold statements that run contrary to
practical experience.
That way, everybody is going to learn: You about your misjudgements, and
(maybe) Haskell programmers about the limits of the approach.

The way you're approaching this is just going to give you an antibody
reaction: Everybody is homing in on you, with the sole intent of
neutralizing you. (Been there, done that, on both sides of the fence.)

> it’s easy
> understandable with simple experiment: if I have a big D program and I
> remove all “pure” keywords, will it become automatically buggy? No. If I
> stop to use “pure” totally, will it become buggy? No.

Sure. It will still be pure.

> If I add “print”
> for debug purpose in some subroutines, will they become buggy? No.

Yes they will. Some tests will fail if they expect specific output. If
the program has a text-based user interface, it will become unusable.

> If I
> mix read/write effects in my subroutine, will it make it buggy? No.

Yes they will become buggy. You'll get aliasing issues. And these are
the nastiest thing to debug because they will hit you if and only if the
program is so large that you don't know all the data flows anymore, and
your assumptions about what might be an alias start to fall down. Or not
you but maybe the new coworker who doesn't yet know all the parts of the
program.
That's exactly why data flow is being pushed to being explicit.

> But it’s really very philosophical question, I think that monads are
> over-hyped actually. I stopped seeing the value of monads by themselves.

Yeah, a lot of people think that monads are somehow state.
It's just that state usually is pretty monadic. Or, rather, the
functions that are built for computing a "next state" are by nature
monadic, so that was the first big application area of monads.
But monads are really much more general than for handling state. It's
like assuming that associativity is for arithmetic, but there's a whole
lot of other associative operators in the world, some of them even
useful (such as string concatenation).

>   * Third, AFAIK CLR restrictions do not allow implementing things like
>     Functor, Monad, etc. in F# directly because they can't support HKT.
>     So they workaround the problem.
>
> https://fsprojects.github.io/FSharpPlus/abstractions.html(btw, you can
> see that monad is monoid here 😉)

Nope, monoid is a special case of monad (the case where all input and
output types are the same).
(BTW monoid is associativity + neutral element. Not 100% sure whether
monad's "return" qualifies as a neutral element, and my
monoid-equals-monotyped-monad claim above may fall down if it is not.
Also, different definitions of monad may add different operators so the
subconcept relationship may not be straightforward.)
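
For what it's worth, here is one concrete reading of that relationship as a
minimal Haskell sketch: for any monad m and any fixed type a, the Kleisli
arrows a -> m a form a monoid under >=>, with return as the neutral element
(the validator names below are made up for illustration).

import Control.Monad ((>=>))

newtype KleisliEndo m a = KleisliEndo { runKE :: a -> m a }

instance Monad m => Semigroup (KleisliEndo m a) where
  KleisliEndo f <> KleisliEndo g = KleisliEndo (f >=> g)

instance Monad m => Monoid (KleisliEndo m a) where
  mempty = KleisliEndo return           -- return is the unit for >=>

-- Example: chaining validations in Maybe.
positive, small :: Int -> Maybe Int
positive n = if n > 0   then Just n else Nothing
small    n = if n < 100 then Just n else Nothing

check :: Int -> Maybe Int
check = runKE (KleisliEndo positive <> KleisliEndo small)
-- check 5 == Just 5;  check 500 == Nothing;  runKE mempty 5 == Just 5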

(I'm running out of time and interest so I'll leave the remaining points
uncommented.)

Regards,
Jo
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: Investing in languages (Was: What is your favourite Haskell "aha" moment?)

Tom Ellis-5
In reply to this post by Joachim Durchholz
On Fri, Jul 13, 2018 at 08:49:02PM +0200, Joachim Durchholz wrote:
> The other issue is that Haskell's extensions make it more difficult to have
> library code interoperate.

Do they?  Can you give any examples?
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: What is your favourite Haskell "aha" moment?

Tom Ellis-5
In reply to this post by Paul
On Sat, Jul 14, 2018 at 10:17:43AM +0300, Paul wrote:
> > Once the FSM holds more than a dozen states, these advantages evaporate.
>
> This is point only where I can not agree.  I used FSM with hundreds
> states/transitions.  It was automatically generated, I only check them.
> Also I know that in car automatics FSM are widely used (BMW, Mercedes,
> Audi).  Also it’s using in software for space industry widely.  My IMHO
> is: FSM is most reliable way to do soft without bugs.  Also it’s easy to
> verify them (for example, with transitions’ assertions)

It's interesting to see all this chat about FSMs, when FSMs are essentially
"just" a tail recursive function on a sum type.
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: Investing in languages (Was: What isyourfavouriteHaskell "aha" moment?)

Paul
In reply to this post by Joachim Durchholz

> So I think if you don't see anybody explicitly mentioning spaghetti
> issues with State that's for some people it's just hiding in plain
> sight and they either aren't consciously aware of it, or find that
> area so self-explaining that they do not think they really need to
> explain that.
>
IMHO the State monad solution is orthogonal to my point. It does not force
you to isolate state changes in one place with explicit control, it only
marks the places where they happen. That info is needed by the compiler, not
by me. For me - no benefits. The benefit for me would be to isolate the
changing, but with State I can (and all of us do it!) smear change points
throughout the code. So, my question is: what exact problem does the State
monad solve? Whose problem? Mine or the compiler's? Haskell's pure
lambda-only-abstraction limitation? OK, if we imagine another Haskell,
similar to F#, would I still need the State monad? IMHO - no. My point is:
the State monad is super in Haskell, and absolutely useless in other
languages. I would isolate mutability in another manner: more safe, robust
and controllable. Recap:
1. The State monad allows you to mark changes of THIS state, so you can
easily find where THIS state is changing (tracking changes)
2. A singleton with an FSM allows you to *control* change and to isolate all
change logic in one place

The 1st allows spaghetti, the 2nd does not. The 2nd forces you into another
model: not changes, but change requests, which can return "not possible". In
the Haskell way the "possible/not possible" check happens wherever you
change the state in the State monad: anywhere. So, my initial point is: the
State monad is about Haskell's abstraction problems, not about developer
problems.

> Sorry, but that's not what OO is about.
> Also, I do not think that you're using general FSMs, else you'd be
> having transition spaghetti.
To be precise, then yes, you are right. But such a model forces me more than
the monadic model. When you create a singleton "PlayerBehavior", have all
setters/getters in this singleton and already check changes (in one place!),
the next step is to switch from checks to an explicit FSM - in the same
place. Haskell offers nothing for this. You *can* do it, but monads don't
force you, and they are about Haskell's problems, not mine. The motivation
of the State monad is not to solve a problem but to introduce state
mutability into Haskell, that is my point. OK, the State monad has a helpful
side-effect: it allows you to track changes of this concrete state, but I
can do that with my editor; it's more valuable to Haskell itself than to me,
because there is no problem with mutating state: Haskell allows it, and
Haskell also does not prevent you from mutating this state anywhere in the
application.

I agree with you 100%. My point is only about emphasis; my thesis is: monads
have value, but it's small. It's justified in Haskell with its limitation to
one abstraction, but I don't need monads in other languages; their value in
other languages is super-small (if it exists at all). So, the motivation for
introducing monads (for me, sure, I'm very subjective) is to work around the
Haskell model, not to make code safer. I'm absolutely sure monads have
nothing to do with safety. It's like using aspirin for a serious medical
problem :)

> Let me repeat: What you call a "message" is just a standard
> synchronous function call. The one difference is that the caller
> allows the target type to influence what function gets actually
> called, and while that's powerful it's quite far from what people
> assume if you throw that "message" terminology around.
I mentioned Erlang earlier: it's the same - you send a message to an FSM
which is a lightweight process. The idea of agents and messages is the same
in Smalltalk, in QNX, in Erlang, etc., etc... So, "message" does not always
mean "synchronous call". For example, QNX "optimizes" local messages, so
they are more lightweight compared with remote messages (which are naturally
asynchronous). But the "message" abstraction is the same and is more
high-level than the synchronous/asynchronous dichotomy. It allows you to
isolate logic - this is the point. Haskell does nothing about this: you
smear logic everywhere. But now you mark it explicitly. And you have the
illusion that your code is safer.

> But that's not the point. The point is that Haskell makes it easy to
> write non-spaghetti.

How? In Haskell I propagate data to a lot of functions (as an argument or as
a hidden argument - in some monad), but with singleton+FSM you cannot do
that - the data is hidden from you, you can only *call logic*, not *access
data*. Logic in Haskell is forced to be smeared across a lot of functions.
You *CAN* avoid it, but Haskell does not force you.

> BTW you have similar claims about FSMs. Ordinarily they are spaghetti
> incarnate, but you say they work quite beautifully if done right.
> (I'm staying sceptical because your arguments in that direction didn't
> make sense to me, but that might be because I'm lacking background
> information, and filling in these gaps is really too far off-topic to
> be of interest.)

I respect your position. Everybody has different experience, and this is
basically very good!

> We often repeat this: “side-effects”, “tracks”, “safe”. But what does
> it actually mean? Can I have side-effects in Haskell? Yes. Can I mix
> side-effects? Yes. But in more difficult way than in ML or F#, for
> example. What is the benefit?
>
> That it is difficult to accidentally introduce side effects.
> Or, rather, the problems of side effects. Formally, no Haskell program
> can have a side effect (unless using UnsafeIO or FFI, but that's not
> what we're talking about here).

Actually, if we look at this from a high level, as at a "black box", we see
that this is true. Haskell allows you to have them and to mix them, just in
a different manner.

> Yes they will. Some tests will fail if they expect specific output. If
> the program has a text-based user interface, it will become unusable.

And vice versa: if I remove "print" from such tests and add "pure", they can
fail too. IMHO purity/impurity in your example is about expected behavior
and its violation, not about "more pure - fewer bugs". A pure function can
violate its contract just as an impure one can.

> Yes they will become buggy. You'll get aliasing issues. And these are
> the nastiest thing to debug because they will hit you if and only if
> the program is so large that you don't know all the data flows
> anymore, and your assumptions about what might be an alias start to
> fall down. Or not you but maybe the new coworker who doesn't yet know
> all the parts of the program.
> That's exactly why data flow is being pushed to being explicit.

So, to avoid this I should not mix read/write monads and should avoid RWST.
In that case they should be removed from the language. And monad
transformers too. My point is: there is some misunderstanding - I often hear
"side-effects are related to errors", "we should avoid them", "they lead to
errors", etc., etc., but IMHO pure/impure is needed by the FP language
compiler, not by me. This is the real motto. Adding side-effects does not
automatically lead to bugs. Mostly it does not. It is more correct to say:
distinguishing pure from impure code makes it easier to analyze the code, to
manipulate it, to transform it (as a programmer I can transform F# code
*easily because there are no monads*; in Haskell the *compiler* can
transform code easily *because of monads*). A more important argument for me
is the example with free monads. They allow you to simulate behavior, to
check logic without involving real external actions (side-effects). Yes, OK,
this is an argument. It's not explicitly related to buggy code, but it's
useful. It reminds me of homoiconic Lisp code, where code can be processed
as data, as an AST.
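
For example, here is a minimal free-monad sketch (a toy console DSL; all
names are made up for illustration): the same program can be interpreted
purely for tests or against real IO.

{-# LANGUAGE DeriveFunctor #-}

data Free f a = Pure a | Roll (f (Free f a))

instance Functor f => Functor (Free f) where
  fmap g (Pure a) = Pure (g a)
  fmap g (Roll x) = Roll (fmap (fmap g) x)

instance Functor f => Applicative (Free f) where
  pure = Pure
  Pure g  <*> x = fmap g x
  Roll gs <*> x = Roll (fmap (<*> x) gs)

instance Functor f => Monad (Free f) where
  Pure a >>= k = k a
  Roll x >>= k = Roll (fmap (>>= k) x)

-- A toy console DSL:
data ConsoleF next
  = WriteLine String next
  | ReadLine (String -> next)
  deriving Functor

writeLine :: String -> Free ConsoleF ()
writeLine s = Roll (WriteLine s (Pure ()))

readLine :: Free ConsoleF String
readLine = Roll (ReadLine Pure)

greet :: Free ConsoleF ()
greet = do
  name <- readLine
  writeLine ("Hello, " ++ name)

-- Interpreter with real side-effects:
runIO :: Free ConsoleF a -> IO a
runIO (Pure a)                  = pure a
runIO (Roll (WriteLine s next)) = putStrLn s >> runIO next
runIO (Roll (ReadLine k))       = getLine >>= runIO . k

-- Pure interpreter: canned input in, collected output out - ideal for tests:
runPure :: [String] -> Free ConsoleF a -> (a, [String])
runPure _        (Pure a)                  = (a, [])
runPure ins      (Roll (WriteLine s next)) = let (a, out) = runPure ins next
                                             in  (a, s : out)
runPure (i : is) (Roll (ReadLine k))       = runPure is (k i)
runPure []       (Roll (ReadLine k))       = runPure [] (k "")

-- runPure ["world"] greet == ((), ["Hello, world"])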

Actually, I had a big, interesting discussion in my company with people who
do not like FP (the root of why I started asking myself such questions). And
I heard their arguments. I tried to find a solid base for mine. But
currently I see that I like Haskell's solutions in themselves, and I cannot
show concrete examples where they are needed in the real world, outside of
Haskell-specific limitations. I know that those limitations lead to slow
compilation and a big, complex compiler; I cannot prove that side-effects
"lead to errors", or (more interesting) that it's bad not to separate
side-effects from each other. F#, ML and Lisps have "do"-blocks and no
problem with them. They don't need transformers to mix 2 different effects
in one do-block. If you can prove that this decision leads to bugs and the
Haskell solution does not, it will be a bomb :) I think there will be a lot
of people in CS who will never agree with you.

---
Best regards, Paul

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: What is your favourite Haskell "aha" moment?

Paul
In reply to this post by Tom Ellis-5
16.07.2018 09:44, Tom Ellis wrote:

> On Sat, Jul 14, 2018 at 10:17:43AM +0300, Paul wrote:
>>> Once the FSM holds more than a dozen states, these advantages evaporate.
>> This is point only where I can not agree.  I used FSM with hundreds
>> states/transitions.  It was automatically generated, I only check them.
>> Also I know that in car automatics FSM are widely used (BMW, Mercedes,
>> Audi).  Also it’s using in software for space industry widely.  My IMHO
>> is: FSM is most reliable way to do soft without bugs.  Also it’s easy to
>> verify them (for example, with transitions’ assertions)
> It's interesting to see all this chat about FSMs, when FSMs are essentially
> "just" a tail recursive function on a sum type.
Yes :) But it is even better to represent an FSM as a table or a diagram -
then you can easily find right/wrong transitions/states. Any information can
be represented in different forms, but only some of them are good for humans ;)

Btw, there are a lot of visual tools for working with FSMs, for developing
them and their tests, as well as for translating them into some language.
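
For example, a tiny sketch of an FSM kept as a plain table (a toy turnstile,
all names made up):

data S = Locked | Unlocked deriving (Eq, Show)
data I = Coin | Push       deriving (Eq, Show)

-- The whole machine as data: easy to print, diff, generate and check.
table :: [((S, I), S)]
table =
  [ ((Locked,   Coin), Unlocked)
  , ((Locked,   Push), Locked)
  , ((Unlocked, Push), Locked)
  , ((Unlocked, Coin), Unlocked)
  ]

-- A missing transition shows up as Nothing instead of silent behaviour:
step :: S -> I -> Maybe S
step s i = lookup (s, i) table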


> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: Investing in languages (Was: What isyourfavouriteHaskell "aha" moment?)

Sergiu Ivanov-2
In reply to this post by Paul
Hi Paul,

Thus quoth  PY  on Mon Jul 16 2018 at 09:44 (+0200):
> So, motivation of monads introduction (for me, sure, I'm very
> subjective) is to workaround Haskell model,

Sometimes (e.g., when you want to be able to prove correctness) you
actually want to express everything in a small basis of
concepts/operations/tools.  To me, monads are cool precisely because
they allow _explicit_ sequencing of actions in a non-sequential model.

By the way, the lift function you mentioned in a previous E-mail does
have a value for the programmer: it shows at which level of the monad
transformer stack the action takes place (for example).
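
A two-layer stack as a minimal illustration (using the transformers package;
the tick example is made up):

import Control.Monad.Trans.Class (lift)
import Control.Monad.Trans.State (StateT, modify)

tick :: StateT Int IO ()
tick = do
  modify (+ 1)               -- this step lives in the State layer
  lift (putStrLn "ticked")   -- lift says: this action lives one layer down, in IO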


Thus quoth  PY  on Mon Jul 16 2018 at 09:44 (+0200):
> 1. State monad allows you to mark change of THIS state, so you can easy
> find where THIS state is changing (tracking changes)
> 2. Singleton with FSM allows you to *control* change and to isolate all
> change logic in one place
>
> 1st allows spaghetti, 2nd - does not. 2nd force you to another model:

You seem to like it when your paradigm forces you to do something (just
like I do).  Now, monads force you to program in a certain way.
Furthermore, you may say that FSMs are a workaround for the way in which
conventional imperative languages manipulate state.

My point is: whether monads are a workaround or a solution depends on
the angle at which you look at the situation.  (I think you say
something similar too.)

-
Sergiu



_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.


Re: Investing in languages (Was: What isyourfavouriteHaskell "aha" moment?)

Alexey Raga
In reply to this post by Paul
I actually lost interest, because you are kind of trying to tell me that because of your lack of _familiarity_ with monads, _my_ benefits do not count. I cannot agree with this statement and with this approach.
 
But since there were questions, I'll answer and do some small clarification.

I asked because never tried Eta. So, if you are right, seems no reasons to develop Eta...
I am not sure why you are bringing Eta into this discussion as an example, only to point out later that you have no experience with it. 
The point of Eta is to run Haskell on the JVM. Haskell as in GHC, not some hypothetical hybrid language (that would be Scala). If you want a decent language and you must run on the JVM, then you use Eta. If you don't need to run on the JVM - you don't use Eta. 

No better definition then original: https://docs.microsoft.com/en-us/dotnet/fsharp/language-reference/computation-expressions  You see, they are different.
Now it is your turn to read this link. The second sentence on that link says: 
"They can be used to provide a convenient syntax for monads, a functional programming feature that can be used to manage data, control, and side effects in functional programs."
The emphasis on _monads_ isn't mine, it is original. The computation expressions are _monadic_ (when they obey laws, but that link doesn't say anything about laws, unfortunately).

Something like this:   user?.Phone?.Company?.Name??"missing";     ?
Still no. 

It does not force you to isolate state change in one place with explicit control, it only 
marks place where it happens.

processAttack :: (MonadState s m, HasBpsCounters s) => Attack -> m Result

No benefits for you, tons of benefits for me: I guarantee that this function can only be called if there is access to some BPS counters. I guarantee that this function, and whatever it uses inside itself, can never touch or even look at anything else in my state. I can guarantee that it doesn't cause any other effects: it doesn't call something which calls something which prints to the console or writes to a DB. And if, during code evolution/refactoring, someone does something like that, then it won't compile.
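
To make the shape of that guarantee concrete, here is a minimal sketch
(HasBpsCounters, Attack and Result are made up for illustration; mtl is
assumed):

import Control.Monad.State (MonadState, gets, modify)

-- Hypothetical domain types and class, purely for illustration:
data Attack = Attack { attackBps :: Int }
data Result = Accepted | RateLimited deriving Show

class HasBpsCounters s where
  getBps :: s -> Int
  setBps :: Int -> s -> s

-- Because s stays abstract, the body can only reach the state through
-- HasBpsCounters, and the only effect available is MonadState.
processAttack :: (MonadState s m, HasBpsCounters s) => Attack -> m Result
processAttack a = do
  bps <- gets getBps
  let bps' = bps + attackBps a
  modify (setBps bps')
  pure (if bps' > 10000 then RateLimited else Accepted)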

If I add “print” for debug purpose in some subroutines, will they become buggy? No.
Then you add this code inside a transaction and voila - yes, you do have a bug.
In fact, my colleague who used to work at Nasdaq had a story about exactly this: once upon a time there was beautiful code that lived in a transaction. Then someone accidentally introduced a side effect into one of the functions. That function was called from something within the transaction. The bug was noticed only after some time, when it had already done some damage.
In Haskell, you can't do IO in STM, so that wouldn't be possible.
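
A minimal sketch of that last point (using the stm package; the transfer
example is made up): the types alone reject IO inside a transaction.

import Control.Concurrent.STM (STM, TVar, readTVar, writeTVar)

transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to n = do
  a <- readTVar from
  writeTVar from (a - n)
  b <- readTVar to
  writeTVar to (b + n)
  -- putStrLn "transferred"   -- would not compile: IO () is not STM ()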

Haskell also does not guard you to mutate this state anywhere in the application.
I think my example above proves otherwise: in Haskell, I can granularly control who can update which part of the state, which makes your statement invalid.

monads have value, but it's small ... their value in other languages is super-small
Again, a VERY bold statement I have to disagree with, once again.
F# workflows are monadic, C# LINQ is precisely modelled by E. Meijer as a list monad. They add great value. 

With this, I rest my case, thanks for the discussion.

Regards,
Alexey.
 

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: What is your favourite Haskell "aha" moment?

amindfv
In reply to this post by Haskell - Haskell-Cafe mailing list
After a chat with a genomics Ph.D student, a couple of small ideas (and I'd also like to +1 Takenobu's suggestions):

  - Purity and laziness allowing the separation of producer and consumer for an arbitrary data model. For example, if you've got a function "legalNextMoves :: ChessBoard -> [ChessBoard]", you can easily construct an efficient tree of all possible chess games, "allGames :: Tree ChessBoard", and write different consumer functions to traverse the tree separately -- and walking the infinite tree is as simple as pattern-matching. (A small sketch of this follows after the second point below.)

  - Large-scale refactoring with types: this is a huge selling point for Haskell in general, including at my job. The ability to have a codebase which defines

data Shape
   = Circle Double
   | Square Double

area :: Shape -> Double
area = \case
   Circle r -> pi * r ^ 2
   Square w -> w ^ 2

...and simply change the type to...

data Shape
   = Circle Double
   | Square Double
   | Rectangle Double Double

...and have GHC tell us with certainty every place where we've forgotten about the Rectangle case is a fantastic, fantastic benefit of Haskell in large codebases.
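
And the game-tree sketch promised under the first point (ChessBoard and
legalNextMoves are assumed to exist elsewhere, so a toy move function stands
in to keep it runnable; Data.Tree comes from containers):

import Data.Tree (Tree, levels, unfoldTree)

-- The producer, written once for any game that can list its legal moves:
gameTree :: (board -> [board]) -> board -> Tree board
gameTree legalNextMoves = unfoldTree (\b -> (b, legalNextMoves b))

-- One consumer among many: how many positions are reachable in exactly n plies?
positionsAtDepth :: (board -> [board]) -> Int -> board -> Int
positionsAtDepth moves n = length . (!! n) . levels . gameTree moves

-- A toy stand-in for chess: a counter that may step up or down.
toyMoves :: Int -> [Int]
toyMoves b = [b + 1, b - 1]

-- positionsAtDepth toyMoves 3 0 == 8, even though the tree is infinite:
-- laziness means only the first four levels are ever built.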

Tom


El 11 jul 2018, a las 08:10, Simon Peyton Jones via Haskell-Cafe <[hidden email]> escribió:

Friends

In a few weeks I’m giving a talk to a bunch of genomics folk at the Sanger Institute about Haskell.   They do lots of programming, but they aren’t computer scientists.

I can tell them plenty about Haskell, but I’m ill-equipped to answer the main question in their minds: why should I even care about Haskell?  I’m too much of a biased witness.

So I thought I’d ask you for help.  War stories perhaps – how using Haskell worked (or didn’t) for you.  But rather than talk generalities, I’d love to illustrate with copious examples of beautiful code.

  • Can you identify a few lines of Haskell that best characterise what you think makes Haskell distinctively worth caring about?   Something that gave you an “aha” moment, or that feeling of joy when you truly make sense of something for the first time.

The challenge is, of course, that this audience will know no Haskell, so muttering about Cartesian Closed Categories isn’t going to do it for them.  I need examples that I can present in 5 minutes, without needing a long setup.

To take a very basic example, consider Quicksort using list comprehensions, compared with its equivalent in C.  It’s so short, so obviously right, whereas doing the right thing with in-place update in C notoriously prone to fencepost errors etc.  But it also makes much less good use of memory, and is likely to run slower.  I think I can do that in 5 minutes.

Another thing that I think comes over easily is the ability to abstract: generalising sum and product to fold by abstracting out a functional argument; generalising at the type level by polymorphism, including polymorphism over higher-kinded type constructors.   Maybe 8 minutes.

But you will have more and better ideas, and (crucially) ideas that are more credibly grounded in the day to day reality of writing programs that get work done.

Pointers to your favourite blog posts would be another avenue.  (I love the Haskell Weekly News.)

Finally, I know that some of you use Haskell specifically for genomics work, and maybe some of your insights would be particularly relevant for the Sanger audience.

Thank you!  Perhaps your responses on this thread (if any) may be helpful to more than just me.

Simon

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: What is your favourite Haskell "aha" moment?

Johannes Waldmann-2
In reply to this post by Haskell - Haskell-Cafe mailing list
>  - Large-scale refactoring with types:
> this is a huge selling point for Haskell in general

I fully agree with the general idea, but the specific example

> data Shape
>   = Circle Double
>   | Square Double
>   | Rectangle Double Double   -- ^ added

is not convincing (well, to programmers
who know about static typing). Think of a Java program:

interface Shape { .. }
class Circle implements Shape { ... }

When you add   class Rectangle {  } ,
then  Shape foo = new Rectangle ()  would be an error,

until you put   class Rectangle implements Shape {  }
then the compiler tells you what methods are missing.


I think the extra value of types in Haskell
(for everything, including refactoring)
is that they tend to express more of the program's properties
(w.r.t. statically typed imperative programs).

Examples:

the distinction between  a  and  IO a,
between  IO a  and   STM a.

In  Data.Set, function  "fromList"  has an  Ord  constraint,
but  "toList"  does not. Does  "singleton"  need the constraint?
I used this as an exam question just last week.
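
For reference, the two contrasted signatures from containers, in a tiny
runnable form (singleton is left as the exercise it was):

import qualified Data.Set as Set

-- fromList must compare elements (Ord a => [a] -> Set a);
-- toList is just an in-order walk (Set a -> [a]), no constraint needed.
roundTrip :: Ord a => [a] -> [a]
roundTrip = Set.toList . Set.fromList
-- roundTrip [3,1,2,3] == [1,2,3]; the Ord lives entirely on the fromList side.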


- J.W.
_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: Monad laws (Was: Investing in languages (Was: What is your favourite Haskell "aha" moment?))

Haskell - Haskell-Cafe mailing list
In reply to this post by Alexis King
Hi,

And if you draw the diagrams corresponding to the monad laws for join and return, they have exactly the same shape as the monoid laws (left and right unitality and associativity), just with the Cartesian product exchanged for functor composition. It's a nice exercise on its own to observe this fact, so I will leave it here.
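
A concrete instance makes the shape easy to see: specialised to the list
monad (where return x = [x] and join = concat), the join/return laws become
ordinary equalities one can check directly, e.g. as QuickCheck properties —
a small illustration, not a proof:

import Control.Monad (join)

law1, law2 :: [Int] -> Bool
law1 xs  = join [xs]            == xs                 -- join . return      = id
law2 xs  = join (fmap (:[]) xs) == xs                 -- join . fmap return = id

law3 :: [[[Int]]] -> Bool
law3 xss = join (join xss) == join (fmap join xss)    -- join . join = join . fmap join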

Regards,
Marcin


Sent from ProtonMail mobile



-------- Original Message --------
On 12 Jul 2018, 09:06, Alexis King < [hidden email]> wrote:

> On Jul 12, 2018, at 01:42, Tom Ellis <[hidden email]> wrote:
>
>> the monad laws are too hard to read.
>
> FWIW the monad laws are not hard to *read* if written in this form
>
> return >=> f = f
> f >=> return = f
> (f >=> g) >=> h = f >=> (g >=> h)
>
> (Whether they're easy to *understand* in that form is another matter.)

Here is another formulation of the monad laws that is less frequently discussed than either of the ones using bind or Kleisli composition:

(1) join . return = id
(2) join . fmap return = id
(3) join . join = join . fmap join

These laws map less obviously to the common ones, but I think they are easier to understand (not least because `join` is closer to the “essence” of what a monad is than >>=). (1) and (2) describe the intuitive notion that adding a layer and squashing it should be the identity function, whether you add the new layer on the outside or on the inside. Likewise, (3) states that if you have three layers and squash them all together, it doesn’t matter whether you squash the inner two or outer two together first.

(Credit goes to HTNW on Stack Overflow for explaining this to me. https://stackoverflow.com/a/45829556/465378)

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.

Re: What is your favourite Haskell "aha" moment?

Vaibhav Sagar
In reply to this post by Johannes Waldmann-2
I'm late to this discussion, but one aspect of Haskell that really
impressed me was the idea of *typed holes*, which I think revolutionise
the act of programming. I tried to communicate my love for them to
people who mostly don't know Haskell here:
https://www.youtube.com/watch?v=0oo8wIi2qBE


On 17/07/18 03:28, Johannes Waldmann wrote:

>>   - Large-scale refactoring with types:
>> this is a huge selling point for Haskell in general
> I fully agree with the general idea, but the specific example
>
>> data Shape
>>    = Circle Double
>>    | Square Double
>>    | Rectangle Double Double   -- ^ added
> is not convincing (well, to programmers
> that know about static typing) Think of a Java program
>
> interface Shape { .. }
> class Circle implement Shape { ... }
>
> When you add   class Rectangle {  } ,
> then  Shape foo = new Rectangle ()  would be an error,
>
> until you put   class Rectangle implements Shape {  }
> then the compiler tells you what methods are missing.
>
>
> I think the extra value of types in Haskell
> (for everything, including refactoring)
> is that they tend to express more of the program's properties
> (w.r.t. statically typed imperative programs).
>
> Examples:
>
> the distinction between  a  and  IO a,
> between  IO a  and   STM a.
>
> In  Data.Set, function  "fromList"  has an  Ord  constraint,
> but  "toList"  does not. Does  "singleton"  need the constraint?
> I used this as an exam question just last week.
>
>
> - J.W.
> _______________________________________________
> Haskell-Cafe mailing list
> To (un)subscribe, modify options or view archives go to:
> http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
> Only members subscribed via the mailman list are allowed to post.

_______________________________________________
Haskell-Cafe mailing list
To (un)subscribe, modify options or view archives go to:
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe
Only members subscribed via the mailman list are allowed to post.