# Preventing sharing

17 messages

## Preventing sharing

The old article

    Preventing memoization in (AI) search problems
    http://okmij.org/ftp/Haskell/index.html#memo-off

deals with the problem, explaining the trick to deliberately confuse GHC
so that it won't perform memoization (sharing). Yes, I know how bad this
confusing of GHC sounds: which is part of my argument that lazy evaluation
by default was a mistake.

_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://mail.haskell.org/cgi-bin/mailman/listinfo/haskell-cafe

## Re: Preventing sharing

On Mon, Dec 21, 2015 at 08:17:10PM +0900, Oleg wrote:

> The old article
>
>         Preventing memoization in (AI) search problems
>         http://okmij.org/ftp/Haskell/index.html#memo-off
>
> deals with the problem, explaining the trick to deliberately confuse
> GHC so that it won't perform memoization (sharing). Yes, I know how
> bad this confusing of GHC sounds: which is part of my argument that
> lazy evaluation by default was a mistake.

Hi Oleg,

As I explained here

    https://mail.haskell.org/pipermail/haskell-cafe/2013-February/106673.html

-fno-full-laziness fixes the space leak issue in your iterative deepening
example.

This isn't a problem with laziness.  It's a problem with performing a time
optimization which is a space pessimization.  In the absence of the
"optimization" there is no problem.

Tom
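For readers unfamiliar with the transformation under discussion, here is a
sketch of what full laziness does (the function names are illustrative, not
from Oleg's article or GHC's output):

```haskell
-- Before floating: the k-independent list [1 .. n] is rebuilt on each
-- call, so it can be garbage collected between calls.
search :: Int -> Int -> Int
search n k = sum (map (+ k) [1 .. n])

-- After full laziness (conceptually): the subexpression that does not
-- depend on k is floated out of the lambda, so every application of
-- 'searchFloated n' shares one copy of the list -- a time win, but the
-- whole list is retained for as long as the partial application lives.
searchFloated :: Int -> Int -> Int
searchFloated n = \k -> sum (map (+ k) xs)
  where xs = [1 .. n]
```

It is this retained shared structure that shows up as the space leak in
the iterative deepening example.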

## Re: Preventing sharing

In reply to this post by Tom Ellis

> -fno-full-laziness fixes the space leak issue in your iterative deepening
> example.

Yes, and I think it has been mentioned that the flag is a blunt weapon as
it affects the whole module...

> This isn't a problem with laziness.  It's a problem with performing a time
> optimization which is a space pessimization.  In the absence of the
> "optimization" there is no problem.

How come it isn't a problem with laziness?! Recall that the pure
call-by-name calculus is observationally indistinguishable from the
call-by-need (i.e., lazy) calculus. The only reason to have laziness is to
avoid recomputation of argument computations should an argument be used
more than once -- at the cost of taking memory to store the result of the
first evaluation. Thus "performing a time optimization which is a space
pessimization" is exactly what laziness is all about -- as the article
mentioned earlier argued. Laziness isn't an absolute good -- it is a
time-space trade-off, which is not always beneficial.
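The call-by-need trade-off described here can be observed directly; a small
sketch, using Debug.Trace only to make evaluation visible:

```haskell
import Debug.Trace (trace)

-- Under call-by-need, the argument thunk is forced at most once and its
-- result memoized: "expensive!" is printed a single time even though 'x'
-- is used twice. Under call-by-name the argument would be recomputed
-- (and re-printed) at each use.
twice :: Int -> Int
twice x = x + x

main :: IO ()
main = print (twice (trace "expensive!" (6 * 7)))
```

The memoized result is exactly the "memory for time" trade: after the
first forcing, the thunk's storage holds the computed value instead of the
suspended computation.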

## Re: Preventing sharing

On Mon, Dec 21, 2015 at 08:55:05PM +0900, Oleg wrote:

> > -fno-full-laziness fixes the space leak issue in your iterative deepening
> > example.
>
> Yes, and I think it has been mentioned that the flag is a blunt weapon
> as it affects the whole module...

Agreed.

> How come it isn't a problem with laziness?! Recall that the pure
> call-by-name calculus is observationally indistinguishable from the
> call-by-need (i.e., lazy) calculus. The only reason to have laziness
> is to avoid recomputation of argument computations should an argument
> be used more than once -- at the cost of taking memory to store the
> result of the first evaluation. Thus "performing a time optimization
> which is a space pessimization" is exactly what laziness is all about
> -- as the article mentioned earlier argued. Laziness isn't an absolute
> good -- it is a time-space trade-off, which is not always beneficial.

I don't agree at all.  To my mind you are assigning blame to the wrong
thing.  The operational semantics of

    f () = let x = <big> in ...

are perfectly clear.  x is reallocated each time, and is free to be
released between calls to f.  It's only when an "optimization" rewrites
this to

    x = <big>

    f () = ...

that there is a space leak.  Exactly the same applies if the language is
strict.

Tom
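Tom's schematic, made concrete (the definitions are illustrative; the large
list stands in for any big intermediate structure):

```haskell
-- As written: 'xs' is allocated inside 'f', so it can be reclaimed by
-- the garbage collector between calls.
f :: () -> Int
f () = let xs = [1 .. 1000000 :: Int] in sum xs

-- After the let-floating "optimization": 'xs' becomes a top-level CAF
-- (constant applicative form), computed once and then retained for the
-- lifetime of the program -- the space leak.
xsTop :: [Int]
xsTop = [1 .. 1000000]

fFloated :: () -> Int
fFloated () = sum xsTop
```

The same rewrite would leak in a strict language too, which is Tom's point:
the blame lies with the transformation, not with the evaluation order.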

## Re: Preventing sharing

On Mon, 21 Dec 2015 12:55:05 +0100, Oleg <[hidden email]> wrote:

> The only reason to have laziness is to avoid recomputation of argument
> computations should an argument be used more than once -- at the cost of
> taking memory to store the result of the first evaluation. Thus
> "performing a time optimization which is a space pessimization" is
> exactly what laziness is all about -- as the article mentioned earlier
> argued. Laziness isn't an absolute good -- it is a time-space trade-off,
> which is not always beneficial.

In the paper "Why Functional Programming Matters"[0], John Hughes shows how
lazy functional programming can be used for better modularity. A more
precise title for the paper would be "Why Lazy Functional Programming
Matters".

Regards,
Henk-Jan van Tuyl

[0] http://www.cse.chalmers.se/~rjmh/Papers/whyfp.pdf

--
http://Van.Tuyl.eu/
http://members.chello.nl/hjgtuyl/tourdemonad.html
Haskell programming

## Re: Preventing sharing

On Mon, Dec 21, 2015 at 10:50 PM, Henk-Jan van Tuyl wrote:

> In the paper "Why Functional Programming Matters"[0], John Hughes shows
> how lazy functional programming can be used for better modularity. A more
> precise title for the paper would be "Why Lazy Functional Programming
> Matters".

This is Oleg. He's perfectly aware of the paper.

The point he's making is not that laziness is bad, but that it shouldn't be
the default. And if you note the recent work on -XStrict, there are good
arguments about bolting laziness on top of strictness and not doing a nasty
-- and thus necessarily heroic -- shoehorn in the reverse direction. See:

    https://www.reddit.com/r/programming/comments/3sux1d/strict_haskell_xstrict_has_landed/

However, the record remains that Oleg has offered little by way of elegant
bolting. His lazy programs based on a strict language tend to be cluttered
with lazy and force functions that uglify previously elegant code.

His arguments would persuade many more folks if, for instance, he could
offer lazy-over-strict translations of Doug McIlroy's power serious
one-liners with no loss in elegance:

    http://www.cs.dartmouth.edu/~doug/powser.html

-- Kim-Ee
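To convey the flavor of what would be lost, here is a paraphrased sketch in
the spirit of McIlroy's power-series one-liners (not his exact code; the
names and formulation here are my own): power series as lazy coefficient
streams, with each operation producing its output incrementally on demand.

```haskell
-- Coefficients of a power series, lowest degree first (infinite list).
type PS = [Rational]

-- Addition and scaling are pointwise.
addPS :: PS -> PS -> PS
addPS (f:fs) (g:gs) = f + g : addPS fs gs

scalePS :: Rational -> PS -> PS
scalePS c = map (c *)

-- Product, from (f + x*F')(g + x*G') = f*g + x*(f*G' + F'*(g + x*G')).
mulPS :: PS -> PS -> PS
mulPS (f:fs) ggs@(g:gs) = f * g : addPS (scalePS f gs) (mulPS fs ggs)

-- Laziness lets a series be defined by its own integral:
-- exp = 1 + integral exp, which here collapses to a one-liner.
expS :: PS
expS = 1 : zipWith (/) expS [1 ..]
```

The self-referential `expS` is the sort of definition that becomes awkward
once every suspension must be written out by hand.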

## Re: Preventing sharing

> His lazy programs based on a strict language tend to be cluttered with
> lazy and force functions that uglify previously elegant code.

* I think the pair of functions are called "delay" and "force", I forget
the precise names.

-- Kim-Ee
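For context, a minimal encoding of that delay/force style (illustrative
only; in a genuinely strict language, delay is typically a built-in special
form, as in Scheme, rather than a library function):

```haskell
-- Explicit suspensions: what lazy code looks like when laziness is not
-- the default. 'delay' wraps a computation; 'force' runs it.
newtype Thunk a = Thunk (() -> a)

delay :: (() -> a) -> Thunk a
delay = Thunk

force :: Thunk a -> a
force (Thunk f) = f ()

-- The "clutter": any value that must be produced on demand has to be
-- threaded through as an explicit Thunk.
chooseLazily :: Bool -> Thunk a -> Thunk a -> a
chooseLazily c t e = if c then force t else force e
```

Note this version does not memoize: forcing twice recomputes, which is
call-by-name rather than call-by-need.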

## Re: Preventing sharing

In reply to this post by Kim-Ee Yeoh

I have a cunning plan. Default to call-by-name.

## Re: Preventing sharing

In reply to this post by Tom Ellis

> > Thus "performing a time optimization which is a space pessimization" is
> > exactly what laziness is all about -- as the article mentioned earlier
> > argued. Laziness isn't an absolute good -- it is a time-space
> > trade-off, which is not always beneficial.
>
> I don't agree at all.  To my mind you are assigning blame to the wrong
> thing.  The operational semantics of
>
>     f () = let x = <big> in ...
>
> are perfectly clear.  x is reallocated each time, and is free to be
> released between calls to f.  It's only when an "optimization" rewrites
> this to
>
>     x = <big>
>
>     f () = ...
>
> that there is a space leak.  Exactly the same applies if the language is
> strict.

Let us take a step back. The article on my web page noted the great
difficulty of writing an AI search program in Haskell because the search
tree is evaluated lazily: whenever a node is evaluated, its result is
memoized for further use. That is precisely the wrong thing to do for such
problems. Again, this problem is precisely one of lazy evaluation (as
defined in the book below). The obvious way to avoid memoization was to
introduce thunks -- which didn't work. The article then developed a more
involved solution. Yes, -fno-full-laziness would have worked just as well.
However, the solution in the article works on a case-by-case basis whereas
-fno-full-laziness affects the whole compilation unit. It is for that
reason that I pointed out the article in this discussion.

Let us turn to the problem of the "optimization" that we are discussing.
Does it have anything to do with laziness? Yes, it does. Please see Chap 15
of the excellent book

    http://research.microsoft.com/en-us/um/people/simonpj/papers/slpj-book-1987/

which explains full laziness. Once I mention the book I must point out Chap
23 of the book (near the end). It should be required reading. The
introduction to the chapter contains the following emphasized statement
(emphasized by the author, Simon Peyton Jones):

    A major weakness of functional languages is the difficulty
    of reasoning about their space and time behavior.

The last paragraph of the introduction says "No good solutions are yet
known to most of these problems." This is true even today. Sec 23.3 "Space
behavior" is an excellent collection of the problems with lazy evaluation
and full laziness. The book was published in 1987. The problems are still
with us.
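The "introduce thunks" idea mentioned above, as a sketch (the types here
are illustrative, not taken from the article; the comment notes why the
obvious approach fails):

```haskell
-- Represent a search node's children as a function, hoping that each
-- traversal rebuilds them instead of memoizing the whole subtree:
data Tree = Node Int (() -> [Tree])

children :: Tree -> [Tree]
children (Node _ k) = k ()
-- In principle this re-runs the producer on every traversal. In
-- practice, if the body of the (() -> [Tree]) function does not mention
-- its argument, full laziness may float it out as a constant -- quietly
-- restoring exactly the sharing the thunk was meant to prevent.
```

This is why the article needed a more involved, per-expression trick
rather than relying on unit-thunks or on the module-wide flag.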

## Re: Preventing sharing

Henk-Jan van Tuyl wrote:
> In the paper "Why Functional Programming Matters"[0], John Hughes shows
> how lazy functional programming can be used for better modularity. A more
> precise title for the paper would be "Why Lazy Functional Programming
> Matters".

The first paragraph of our paper (published 3 years ago)

    http://okmij.org/ftp/continuations/PPYield/index.html#introduction

is as follows:

    "Lazy evaluation is regarded as one of the main reasons why functional
    programming matters \cite{hughes:matters-cj}.  Lazy evaluation lets us
    write \emph{producers} and \emph{consumers} separately, whereas the
    two are inextricably intertwined in a call-by-value language.  This
    separation allows a modular style of programming, in which a variety
    of producers, consumers, and transformers can readily be plugged
    together.  Lazy evaluation is also an elegant implementation of a
    form of coroutines, suspending and resuming computations based on the
    demand for values, giving us memory-efficient, incremental computation
    `for free'
    \cite{McIlroy:1999:PSP:968592.968597,Bird:1984:UCP,AG-embed}."

But do read the next paragraph and the rest of the paper, and other
articles on the web site

    http://okmij.org/ftp/continuations/PPYield/index.html

Our conclusion is that the modularity benefit of lazy evaluation can be had
without lazy evaluation, gaining predictability of the space and time
behavior.
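The producer/consumer separation quoted above, in miniature: with lazy
lists, the producer need not know how much the consumer will demand.

```haskell
-- Producer: all naturals (conceptually infinite).
nats :: [Integer]
nats = [0 ..]

-- Transformer and consumer are plugged together modularly; only as much
-- of 'nats' is ever constructed as 'take 5' demands.
main :: IO ()
main = print (take 5 (filter even nats))  -- prints [0,2,4,6,8]
```

The question in this thread is whether this modularity can be recovered,
without laziness, via explicit generators.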

## Re: Preventing sharing

On 22.12.2015 13:31, Oleg wrote:
> But do read the next paragraph and the rest of the paper, and other
> articles on the web site
>          http://okmij.org/ftp/continuations/PPYield/index.html

I have to say I almost stopped reading at "worst in lazy evaluation: its
incompatibility with effects". Incompatibility with side effects is a good
thing, so that's not "worst", and there are frameworks for doing effects
(most notably IO), so there goes "incompatibility".

> Our conclusion is that the modularity benefit of lazy evaluation can
> be had without lazy evaluation, gaining predictability of the space
> and time behavior.

I just skimmed the paper, so correct me if I'm wrong. It seems to show how
one can transform a specific class of lazy functions into generators. This
seems to miss the point of laziness. Having lazy evaluation means that when
writing a function, you don't know (and don't care) how much of the
returned data structure is ever explored; that's the caller's decision.
This means you do not ever transform your code into a generator, because
you don't need to.

As I said, I didn't do more than a quick scan, and I might be in error
thinking it's a transformation just for a specific class of lazy functions.
More importantly, I might have missed some essential point about how this
isn't really a transformation, or that one does not need to transform the
code to get a transformer out. So... please correct the omissions :-)

Regards,
Jo

## Re: Preventing sharing

On 23.12.2015 04:00, wren romano wrote:
> To play the celestial advocate: the ability to not care about
> strictness/laziness when writing a function is precisely what causes
> it to be hard to reason about the space/time costs of that function.

Sure. I'm not disputing well-known facts, I'm just wondering that the paper
highlights only the problems and does not put them into proportion with the
advantages.