Functional programming for processing of large raster images

Functional programming for processing of large raster images

oleg-7

Recently Vo Minh Thu wondered if Haskell (or, I generalize, functional
programming) can be of much use for computer graphics programming.

I'd like to point out a project that has experimentally shown that
functional programming is of great help for processing large raster
images (24-bit PPM files). The paper describing the system has been
accepted for `Science of Computer Programming' and is in press:

  Combining Partial Evaluation and Staged Interpretation in the
  Implementation of Domain-Specific Languages
  Christoph A. Herrmann, Tobias Langhammer

The code for the system is available online
 http://infosun.fmi.uni-passau.de/cl/metaprog/imagefilter2-2005-10-30.tgz

The previous [slightly obsolete] version of the paper,
presented at the MetaOCaml'04 workshop, is available online
        http://www.fmi.uni-passau.de/forschung/mip-berichte/MIP-0410.html

The paper includes a few pictures and a table of benchmarks, so we
can observe the benefits. Now, with offshoring and native MetaOCaml
fully implemented, we can gain even bigger benefits.

In this project, functional programming does what it is probably best
suited for: developing a `compiler' -- a compiler from a filter
specification to quite efficient code. The code is specialized to all
available static information. To be precise, the authors implement an
interpreter -- which is then specialized to the source program, so the
interpretive overhead is removed and the code is optimized. Staging
an interpreter automatically gives a compiler. Futamura projections
are not only fascinating, but useful, too.
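
To see the idea in miniature (a Haskell sketch of the principle, not the
paper's MetaOCaml, with a made-up Filter type), stage a tiny filter
interpreter so that all inspection of the filter specification happens
once, before any pixel is touched:

  data Filter = Id | Scale Double | Compose Filter Filter

  -- Naive interpreter: re-examines the Filter tree at every pixel.
  interp :: Filter -> Double -> Double
  interp Id            p = p
  interp (Scale k)     p = k * p
  interp (Compose f g) p = interp f (interp g p)

  -- Staged interpreter: examines the tree once and returns a residual
  -- function with no interpretive overhead left in it.
  compile :: Filter -> (Double -> Double)
  compile Id            = id
  compile (Scale k)     = (k *)
  compile (Compose f g) = compile f . compile g

Specializing `interp' to a statically known Filter yields exactly
`compile' -- the first Futamura projection in the small.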


Re: Functional programming for processing of large raster images

Vo Minh Thu
Thanks for pointing this out, although I knew that kind of answer via
papers about Pan.
It means I'll have to improve my compiler-writing knowledge :)
mt

2006/6/21, [hidden email] <[hidden email]>:

> [snip]

Re: Re: Functional programming for processing of large raster images

Joel Reymont
I think the issue wasn't using functional programming for large image  
processing, it was using Haskell. OCaml is notoriously fast and  
strict. Haskell/GHC is... lazy.

Everyone knows that laziness is supposed to be a virtue. In practice,  
though, I'm one of the people who either can't wrap their heads  
around it or just find themselves having to fight it from the start.

Should we all switch to OCaml? I wish I had a criterion to determine
when to use Haskell and when to use OCaml.

On Jun 21, 2006, at 8:15 AM, minh thu wrote:

> [snip]

--
http://wagerlabs.com/

Re[2]: Re: Functional programming for processing of large raster images

Bulat Ziganshin-2
Hello Joel,

Wednesday, June 21, 2006, 1:24:05 PM, you wrote:

> I think the issue wasn't using functional programming for large image
> processing, it was using Haskell. OCaml is notoriously fast and  
> strict. Haskell/GHC is... lazy.

+1 :)

> Everyone knows that laziness is supposed to be a virtue. In practice,
> though, I'm one of the people who either can't wrap their heads  
> around it or just find themselves having to fight it from the start.

i think that i just don't notice the cases where i use lazy evaluation in
Haskell; it just silently works. just for example - i have in my program a
file list which can contain, say, 100,000 files. it is split into sublists
by file extension, and each sublist is split again into 100-item chunks.
each sub-sub-list is then processed in an inner cycle. that is done via
lazy list evaluation, and i don't even think about how this actually
works - for me it's just a split+mapM_ combination
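
(schematically, something like this - a sketch with invented names, not
Bulat's actual code:)

  import Data.List (groupBy, sortBy)
  import Data.Ord (comparing)
  import System.FilePath (takeExtension)

  -- Split into fixed-size chunks; laziness means the chunked structure
  -- is produced on demand, never materialized all at once.
  chunksOf :: Int -> [a] -> [[a]]
  chunksOf _ [] = []
  chunksOf n xs = take n xs : chunksOf n (drop n xs)

  processAll :: (FilePath -> IO ()) -> [FilePath] -> IO ()
  processAll act files =
    mapM_ (mapM_ (mapM_ act) . chunksOf 100)      -- inner cycles
          (groupBy sameExt (sortBy (comparing takeExtension) files))
    where sameExt a b = takeExtension a == takeExtension b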

> Should we all switch to OCaml? I wish I had a criterion to determine
> when to use Haskell and when to use OCaml.

i never tried to work in it, but i've read books. for me it's just too
irregular and uncomfortable a language. from the theoretical point of
view, Haskell has some stronger facilities (because it was created
much later), such as polymorphic recursion and type classes. although
OCaml is much later than the original ML and contains numerous extensions,
they are more pragmatic and not really integrated into the language (imho).
Haskell, on the other side, grows with new high-level concepts that
are moved from the academic world right into real work (say, STM
and GADTs)

the language-of-my-dreams is a strict Haskell with the ability to
explicitly specify lazy evaluation where needed. it was discussed on this
list several months ago (search for "strict Haskell"). one possibly
interesting variant of this "strict Haskell" can be the Clean language -
it's also lazy by default, afaik, but contains better features to
specify strictness, a compiler that generates fast code (at the level of
OCaml/jhc), plus an IDE with GUI libs. comparing to Haskell, it seems
somewhat like "Visual Basic" comparing to plain Basic. again, i don't
tried it (and it's not free, afair), so look himself. here is a list of
files i downloaded from the site but have not yet tried. the first is the
compiler, the others are docs (and they are HUGE docs):

HsCleanAll2.0.2.zip [http://aszt.inf.elte.hu/~fun_ver/2003/software/HsCleanAll2.0.2.zip]
CleanBookI.pdf 1-up [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/CleanBookI.pdf]
CleanBookI2up.pdf 2-up [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/CleanBookI2up.pdf]
III.1.ProgramDevelopment.pdf pdf [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/III.1.ProgramDevelopment.pdf]
III.2.ProgStylesParadigms.pdf pdf [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/III.2.ProgStylesParadigms.pdf]
III.3.Efficiency.pdf pdf [ftp://ftp.cs.kun.nl/pub/Clean/papers/cleanbook/III.3.Efficiency.pdf]
object-io.pdf [ftp://ftp.cs.kun.nl/pub/Clean/supported/ObjectIO.1.2/doc/tutorial.pdf]



--
Best regards,
 Bulat                            mailto:[hidden email]


... processing of large raster images : CLEAN

jerzy.karczmarczuk
Bulat Ziganshin wrote:

>  one possibly
> interesting variant of these "strict Haskell" can be Clean language -
> it's also lazy by default, afaik, but contains better features to
> specify strictness, compiler that generates fast code (at the level of
> OCaml/jhc), plus IDE with GUI libs. comparing to Haskell, it seems
> somewhat like "Visual Basic" comparing to plain Basic. again, i don't
> tried it (and it's not free, afair), so look himself.

Yourself...

... You will find that Clean IS FREE under LGPL, although commercial
versions exist as well.

Don't spread dubious "truths".

Jerzy Karczmarczuk

Re: ... processing of large raster images : CLEAN

Bulat Ziganshin-2
Hello Jerzy,

Wednesday, June 21, 2006, 4:32:40 PM, you wrote:
>>  one possibly
>> interesting variant of these "strict Haskell" can be Clean language -
>> it's also lazy by default, afaik, but contains better features to
>> specify strictness, compiler that generates fast code (at the level of
>> OCaml/jhc), plus IDE with GUI libs. comparing to Haskell, it seems
>> somewhat like "Visual Basic" comparing to plain Basic. again, i don't
>> tried it (and it's not free, afair), so look yourself.

> ... You will find that Clean IS FREE under LGPL, although commercial
> versions exist as well.

can you write more about Clean, about your experience with it? i don't
know the exact LGPL rules - can it be used to develop commercial software,
and which parts (if any) are omitted in the LGPL version?

it will be very interesting to hear about the Clean programming
environment compared with well-known RADs - .NET, Eclipse, Delphi.
can you enlighten me?


--
Best regards,
 Bulat                            mailto:[hidden email]


Re: ... processing of large raster images : CLEAN

Vo Minh Thu
hi Bulat,

briefly:
with the GPL you have to make your code GPL if it uses GPLed code.
with the LGPL you don't have to make your code LGPL/GPL if it only links
against an LGPLed library.

regards,
mt

2006/6/21, Bulat Ziganshin <[hidden email]>:

> [snip]

Re: Re: Functional programming for processing of large raster images

Brian Hulley
In reply to this post by Joel Reymont
Joel Reymont wrote:
> I think the issue wasn't using functional programming for large image
> processing, it was using Haskell. OCaml is notoriously fast and
> strict. Haskell/GHC is... lazy.
>
> Everyone knows that laziness is supposed to be a virtue. In practice,
> though, I'm one of the people who either can't wrap their heads
> around it or just find themselves having to fight it from the start.

Perhaps laziness is more "foundational", in that you can write

      if2 c x y = if c then x else y

However:

1) What's the advantage of being able to define if2?

2) Many control constructs, like >>=, foldl', map etc don't require laziness
because they already take a function as argument rather than a plain
expression. In fact, I can't think of any useful user defined control
constructs that actually use laziness in the same way as if2, and the
presence of laziness in foldr, foldl etc just seems to be a real pain
leading to excessive heap consumption. (Counterexample?)

3) "Lazy lists as glue" can easily be replaced by force/delay lists + an
extension to pattern matching where pattern matching against [a] forces the
argument and the syntax [h|t] is used as in Prolog, instead of h:t (This
would also free : to be used for "with type" or "with partial type" instead
of ::)

4) Other examples of the utility of laziness can turn out to be impractical
chimeras. For example, the famous repmin replaces the traversal of a tree
twice with the dubious "advantage" of traversing it "only once" and
building up a cluster of expensive thunks instead, and since the thunks
effectively encode the structure of the tree, evaluating them effectively
constitutes the second traversal. So nothing's gained except difficulty of
understanding the all-too-clever code (very bad software engineering
practice imho), more heap consumption, and more time consumption.
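
(For reference, the circular repmin under discussion looks like this - the
standard textbook sketch, not code from this thread:)

  data Tree = Leaf Int | Fork Tree Tree

  -- Bird's repmin: one traversal, with the global minimum 'm' fed back
  -- into the leaves lazily, before it is actually known.
  repmin :: Tree -> Tree
  repmin t = t'
    where
      (m, t') = walk t
      walk (Leaf n)   = (n, Leaf m)
      walk (Fork l r) = (min ml mr, Fork l' r')
        where (ml, l') = walk l
              (mr, r') = walk r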

>
> Should we all switch to OCaml? I wish I had a criteria to determine
> when to use Haskell and when to use OCaml.

Everything else about Haskell is so great and well thought out (eg type
classes, no side effects, higher rank polymorphism, existentials) it seems a
pity to throw all this away just because of one unfortunate feature. An
alternative would be to write a converter that reads a file of Haskell
source which is to be interpreted strictly and outputs a file of lazy
Haskell source with strictness annotations everywhere, with desugaring of [a]
into force/delay representation (*). (Also in general, !x could mean "force
x" ie x(), and ~x could mean \()->x (in value syntax) or ()->x (in type
syntax), and !x could be allowed in patterns also (when the type at that
position is ~x))

    foo : ~Int -> Int  -- also taking the opportunity to replace :: by :
    foo !x = x

desugared to

    foo :: (() -> Int) -> Int
    foo cx = case cx () of
                      x -> x

The main challenge I think would be in converting bindings in let and where
to appropriate case bindings to ensure that as far as possible, redundant
bindings for a given path of control flow are not made, and analysis of
mutually recursive bindings to ensure that everything is bound before being
used.

Some issues would need to be resolved about what the user can expect. For
example whole program optimization could allow some expressions to be
optimized out (eg by splitting a function returning a pair into two
functions, one for each element) that would otherwise be non-terminating,
whereas with lazy Haskell, there is an easy rule that (effectively)
expressions are only evaluated when needed, regardless of optimization.

Then an interesting investigation would be to see how easy it is to port
lazy Haskell to the new, totally strict, language, and if there are any
situations where existing code has used laziness (for algorithmic reasons or
just for efficiency) without being aware of it.

(*) So that eventually a new compiler could be written to take full
advantage of the side-effect-free nature of the language + the strictness
which means that values can be accessed without having to go through the "if
unevaluated then evaluate, store, and return else return" for each
unoptimized access. Because OCaml and SML have side effects, which limit
optimizations due to possible aliasing etc, it might be possible to compile
such a language to code which is certainly no slower, and possibly *even
faster* than OCaml!!! :-)

Regards, Brian.

--
Logic empowers us and Love gives us purpose.
Yet still phantoms restless for eras long past,
congealed in the present in unthought forms,
strive mightily unseen to destroy us.

http://www.metamilk.com 


Re: Re: Functional programming for processing of large raster images

Neil Mitchell
Hi,

I happen to like laziness, because it means that when I'm not thinking
about performance, I don't have to think about evaluation order _at
all_. And since my computer is a 750MHz Athlon with Hugs, I never find
any need to worry about performance :) If it ever becomes an issue I
can move to GHC or buy a faster computer without too much hassle.

> 1) What's the advantage of being able to define if2?
What about (&&) and (||)? Should they be "built in"? What about and, which
is just (&&) a lot of times - should that be lazy? At what point do you
say no? Should I be able to define implies correctly?

> 3) "Lazy lists as glue" can easily be replaced by force/delay lists + an
> extension to pattern matching where pattern matching against [a] forces the
> argument and the syntax [h|t] is used as in Prolog, instead of h:t (This
> would also free : to be used for "with type" or "with partial type" instead
> of ::)
That seems like more "thought" when writing the program; maybe it's
worth it, maybe it's not, but it doesn't seem as "neat" as what we
already have.

> 4) Other examples of the utility of laziness can turn out to be impractical
> chimeras. For example, the famous repmin replaces the traversal of a tree
> twice with the dubious "advantage" of traversing it "only once" and
> building up a cluster of expensive thunks instead, and since the thunks
> effectively encode the structure of the tree, evaluating them effectively
> constitutes the second traversal. So nothing's gained except difficulty of
> understanding the all-too-clever code (very bad software engineering
> practice imho), more heap consumption, and more time consumption.
Laziness doesn't have to be exploited in complex ways - minimum = head
. sort is a nice example. isSubstr x y = any (isPrefixOf x) (inits y) is
another one. Often, by just stating a definition, laziness gives you
the performance for free. Of course, if you wanted to think harder
(and I never do), you could write better-performing and strict-safe
versions of these, but again it's more effort.

The other thing you loose when moving to strictness is the ability to
inline functions arbitrarily - consider:

if2 c t f = if c then t else f

Consider the expression:
if2 True 1 undefined

Now let's inline it and expand it, and in Haskell we get 1, which
matches the evaluation. In strict Haskell the inlining is now invalid,
and that's quite a useful optimisation to make. While it seems that
compilers can get round this, my concern is for the poor programmer -
this nice property of viewing functions as just "replace this with
that" has disappeared.

I suspect that in years to come, lazy languages will also have the
upper hand when it comes to theorem proving and formal reasoning, but
I guess that's a matter for future consideration.

While laziness may not be all good, it's certainly not all bad :)

Thanks

Neil

Re: Re: Functional programming for processing of large raster images

Ralf Hinze
In reply to this post by Brian Hulley
On Wednesday, 21 June 2006 at 21:30, Brian Hulley wrote:
> Perhaps laziness is more "foundational", in that you can write
>
>       if2 c x y = if c then x else y
>
> However:
>
> 1) What's the advantage of being able to define if2?

Well, you can easily define your own control structures (when,
unless etc). This feature is widely used in combinator libraries.
Also, in a non-strict language recursive definitions are not
limited to function types. Users of parser combinators heavily rely
on this feature. Just try to define/use parsing combinators
in a strict language. The problem with eager evaluation is
that it is too eager (expressions are sometimes evaluated too
early), just as the problem with lazy evaluation is that it
is too lazy (evaluations sometimes happen too late).
Cheers, Ralf

Re: Re: Functional programming for processing of large raster images

Rob Dockins
In reply to this post by Brian Hulley

On Jun 21, 2006, at 3:30 PM, Brian Hulley wrote:

> Joel Reymont wrote:
>> I think the issue wasn't using functional programming for large image
>> processing, it was using Haskell. OCaml is notoriously fast and
>> strict. Haskell/GHC is... lazy.
>>
>> Everyone knows that laziness is supposed to be a virtue. In practice,
>> though, I'm one of the people who either can't wrap their heads
>> around it or just find themselves having to fight it from the start.
>
> Perhaps laziness is more "foundational", in that you can write
>
>      if2 c x y = if c then x else y

[snip some conversation...]


For those who haven't seen this already, here is a presentation by
Simon PJ in which he discusses his views on laziness (among other
things).

http://research.microsoft.com/~simonpj/papers/haskell-retrospective/HaskellRetrospective.pdf


Takeaway point about laziness: "Laziness keeps you honest" by not  
allowing you to slip in side effects.

Bonus takeaway: read Wadler's papers :-)



Rob Dockins

Speak softly and drive a Sherman tank.
Laugh hard; it's a long way to the bank.
           -- TMBG


Re: Functional _meta_programming for processing of large raster images

oleg-7
In reply to this post by Joel Reymont

I'm afraid the _meta_ programming aspect of the image processing
project may be overlooked.

Joel Reymont wrote:
> I think the issue wasn't using functional programming for large image  
> processing, it was using Haskell. OCaml is notoriously fast and  
> strict. Haskell/GHC is... lazy.

Well, in the raster image processing project, a dialect of OCaml was
used to _generate_ code. If the authors used offshoring (which I think
they did later), the generated code was in C. That C code can be used
stand-alone or be linked with some other C (Java, etc.) code. Our FFT
paper did exactly that: our MetaOCaml program produced C code, which
we plugged as it was into the FFTW testing framework (written in pure C)
for benchmarking. By 'plugged' above I mean moving the C code file
from one directory to another. We used both the GCC and Intel C compilers
for benchmarking.  Offshoring can also produce Fortran code. It's
quite feasible to generate Verilog, so we can program an FPGA and get
image processing even faster.

The generator doesn't have to be a speed demon; it is not that
relevant whether the generator is lazy or strict. What really matters is
whether the generator can easily be judged correct. It matters immensely
that the generated code has correctness properties: at the least, it
should be well-formed and well-typed (and preferably have some other
properties, like space bounds). It is disheartening to see errors when
compiling the generated code (as happens with some other generators),
because it is very difficult to trace these errors back to the generator.
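
(To give the flavour in miniature - a Haskell sketch of a generator, with
a made-up Expr type, not the paper's MetaOCaml:)

  -- A tiny filter-expression DSL and a C emitter. The generator itself
  -- may be slow and lazy; only the emitted C has to be fast.
  data Expr
    = Lit Double
    | Pix Int              -- pixel at a fixed offset from position i
    | Add Expr Expr
    | Mul Expr Expr

  emit :: Expr -> String
  emit (Lit d)   = show d
  emit (Pix k)   = "src[i + (" ++ show k ++ ")]"
  emit (Add a b) = "(" ++ emit a ++ " + " ++ emit b ++ ")"
  emit (Mul a b) = "(" ++ emit a ++ " * " ++ emit b ++ ")"

  -- A 3-point blur, specialized to its static weights:
  blur :: Expr
  blur = (Lit 0.25 `Mul` Pix (-1)) `Add` (Lit 0.5 `Mul` Pix 0)
                                   `Add` (Lit 0.25 `Mul` Pix 1)

  kernel :: String
  kernel = "dst[i] = " ++ emit blur ++ ";"

Because Expr can only describe well-formed expressions, the emitted C
cannot fail to parse - a small-scale version of the well-formedness
guarantee mentioned above.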

Here's a reference to another, quite large and very real project

        http://www.spiral.net/

which generates the fastest ever FFT, DCT, etc. codes for a variety of
architectures. The source language is a DSL for linear algebra. The
point is that highest-performance computing nowadays is all about code
generation/meta-programming. And in this area, functional programming
and Haskell have a definite advantage.

Functional progr., images, laziness and all the rest

jerzy.karczmarczuk
Usually once a year somebody starts a discussion on the merits of the
functional/lazy paradigms, etc., in an applicative context, and it is
quite good. People compare Haskell and OCaml; from time to time
somebody says that - apparently - Clean has better handling of strictness
issues [saying at the same time that he/she doesn't use Clean...]; people
divide into different philosophical branches; some complain that it
would be nice to have a strict Haskell; others say that they don't care,
and that what is important is the provable/enforced correctness, and
laziness is helpful. People say that laziness permits one to consider
conditionals as functions; others ask what for; and the discussion is
usually quite interesting.

And here, apparently, I am one of the rare people - I am not proud of it,
rather quite sad - who defend laziness as an *algorithmisation tool*,
which makes it easy and elegant to construct co-recursive codes. Circular
programs, run-away infinite streams, hidden backtracking etc.

In the domain of images this can be helpful for making filters, especially
infinite-response, recursive filters. For two-dimensional data this might
be clumsy, but for 1D - for example for sound generation/processing -
you may transform a recurrence equation yielding Y out of X:
Y[n+1] = a*X[N+1] + b*Y[n]
usually (imperatively) implemented as a loop, into a stream definition:

filtr a b x@(x0:xq) = y where
 y  = (x0:yq)
 yq = a*xq + b*y

with (*) and (+) conveniently overloaded (or replaced by specific
obvious ops).

In such a way you can program in 2 - 6 lines some quite exquisite musical
instruments (for example the Karplus-Strong "guitar", or a flute), construct
the reverberation filters, make ever-rising Shepard/Risset paradoxical
sounds, etc. etc. With laziness it is a sheer pleasure and fun, without -
a pain. If you wish, find my PADL talk on it...
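
(To make that concrete - a sketch of the Karplus-Strong pluck; the burst
source and the decay constant are invented for the example:)

 -- Karplus-Strong "guitar": a noise burst followed by a lazily defined
 -- feedback loop that averages (low-pass filters) the delayed signal.
 pluck :: [Double] -> [Double]
 pluck burst = y
   where y = burst ++ zipWith (\u v -> 0.499 * (u + v)) y (tail y)

 -- e.g. take 44100 (pluck noiseBurst) is one second of sound at 44.1 kHz,
 -- for some finite noiseBurst whose length sets the pitch.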

In this context, I found Clean more helpful than Haskell, for ONE reason.
Clean has a primitive datatype: unboxed, spine-lazy but head-strict lists.
The co-recursion works, as the construction of the tail is postponed, but
there is no pollution of the space by thunks - unevaluated list *elements*.

This I really do miss in Haskell... But perhaps I simply don't know how to
obtain a similar behaviour?

For image processing (rather: constructing, but with incremental algorithms),
Clean's usage of unique arrays was for me more natural than the monadic stuff
in Haskell, but this is probably just a question of style. I agree that here
the laziness is secondary...

Jerzy Karczmarczuk


Re: Functional progr., images, laziness and all the rest

Brian Hulley
[hidden email] wrote:
[snip]
> you may transform a recurrence equation yielding Y out of X:
> Y[n+1] = a*X[N+1] + b*Y[n]
> usually (imperatively) implemented as a loop, into a stream
> definition:
> filtr a b x@(x0:xq) = y where
> y  = (x0:yq)
> yq = a*xq + b*y

Can you explain how this transformation was accomplished?
I don't see how
     yq = a * xq + b * y
relates to
    Y[n+1] = a*X[n+1] + b*Y[n]  -- (assuming the X[N+1] was a typo)

since y is a longer list than yq but Y[n] is an earlier element than Y[n+1],
so it seems that the function is multiplying b by a later factor than it
should.

So:
1) Someone reading the code needs to do a lot of work to try to recover the
original equation
2) Wouldn't an imperative loop, using the original equation directly, have
made everything much simpler?
3) Therefore laziness has lead to obfuscated code.

>
> with (*) and (+) conveniently overloaded (or replaced by specific
> obvious ops).
>
> In such a way you can program in 2 - 6 lines some quite exquisite
> musical instruments (for example the Karplus-Strong "guitar", or a
> flute), construct the reverberation filters, make ever-rising
> Shepard/Risset paradoxical sounds, etc. etc. With laziness it is a
> sheer pleasure and fun, without - a pain. If you wish, find my PADL talk
> on it...
>
> In this context, I found Clean more helpful than Haskell, for ONE
> reason. Clean has a primitive datatype: unboxed, spine-lazy but
> head-strict lists. The co-recursion works, as the construction of the
> tail is postponed, but there is no pollution of the space by thunks -
> unevaluated list *elements*.
> This I really do miss in Haskell... But perhaps I simply don't know
> how to obtain a similar behaviour?

If you only needed the head-strict aspect, something like

     data HSList a = Empty | Cons !a (HSList a)

(GHC also has unboxed types so perhaps something like data HSList = Empty |
Cons Double# HSList but see the restrictions on their use at
http://www.haskell.org/ghc/docs/latest/html/users_guide/primitives.html )

Regards, Brian.

--
Logic empowers us and Love gives us purpose.
Yet still phantoms restless for eras long past,
congealed in the present in unthought forms,
strive mightily unseen to destroy us.

http://www.metamilk.com 


Re: Functional progr., images, laziness and all the rest

Piotr Kalinowski
On 22/06/06, Brian Hulley <[hidden email]> wrote:
> 1) Someone reading the code needs to do a lot of work to try to recover the
> original equation

That's what comments are for, I suppose.

> 2) Wouldn't an imperative loop, using the original equation directly, have
> made everything much simpler?
> 3) Therefore laziness has lead to obfuscated code.

In fact, according to your point 2, it is not the laziness that
brought us here, but being functional.

Just my 2 cents.

Regards,
Piotr Kalinowski

--
Intelligence is like a river: the deeper it is, the less noise it makes

Re[2]: Re: Functional programming for processing of large raster images

Bulat Ziganshin-2
In reply to this post by Ralf Hinze
Hello Ralf,

Thursday, June 22, 2006, 12:02:43 AM, you wrote:

> limited to function types. Users of parser combinators heavily rely
> on this feature. Just try to define/use parsing combinators
> ins a strict language.

C++ Boost contains a parsing-combinator library; part of it is, of
course, a lazy evaluation sub-library :)


--
Best regards,
 Bulat                            mailto:[hidden email]


RE: Re: Functional programming for processing of large raster images

Simon Peyton Jones
In reply to this post by Brian Hulley
| Everything else about Haskell is so great and well thought out (eg type
| classes, no side effects, higher rank polymorphism, existentials) it
| seems a pity to throw all this away just because of one unfortunate
| feature

I thought it might be worth mentioning that GHC (well, the HEAD, which
will become 6.6) supports "bang patterns".  See
http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/BangPatterns

Bang patterns make it much more convenient to write a strict function.
E.g.
        f (x, !y) = ...
is strict both in the pair (of course) and also in the second component
of the pair, y.

You can also use them in lets
        let !x = <rhs> in <body>
which will evaluate <rhs> before <body>.
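
(A small usage sketch, not from Simon's message - the classic strict
accumulator:)

        {-# LANGUAGE BangPatterns #-}

        sumStrict :: [Integer] -> Integer
        sumStrict = go 0
          where
            go !acc []     = acc              -- !acc forces the running total,
            go !acc (x:xs) = go (acc + x) xs  -- so no chain of (+) thunks builds up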

It's an experimental feature, and I'm interested to know how useful, or
otherwise, it turns out to be.

Simon

Re: Functional progr., images, laziness and all the rest

jerzy.karczmarczuk
In reply to this post by Brian Hulley
Brian Hulley wrote:
> [hidden email] wrote:

>> you may transform a recurrence equation yielding Y out of X:
>> Y[n+1] = a*X[n+1] + b*Y[n]
>> usually (imperatively) implemented as a loop, into a stream
>> definition:
...
> Can you explain how this transformation was accomplished?
> I don't see how
>     yq = a * xq + b * y
> relates to
>    Y[n+1] = a*X[n+1] + b*Y[n]  -- (assuming the X[N+1] was a typo)
>
> since y is a longer list than yq but Y[n] is an earlier element than
> Y[n+1], so it seems that the function is multiplying b by a later factor
> than it should.

Sure, y[n] is earlier, so defining the tail by the stream itself refers
each element to its predecessor. No problemo, Baby, as Terminator 2 used
to say - whose relevance to our problem is obvious, since he also
couldn't terminate him(it)self...

Let's write the stream construction once more. Starting with x=(x0:xq)
and parameters a and b, assuming that for n<0 you take zero:

y  = (a*x0:yq)       -- Here I was in error, "a" missing
yq = a*xq + b*y

with, of course, a*xq meaning map (a*) xq; x+y meaning zipWith (+) x y ...


                y0 = a*x0
Look yourself:  y1 = a*x1 + b*y0
                y2 = a*x2 + b*y1, etc.

So, first, this is correct, element by element.

Don't tell me you have never seen the assembly of all non-negative
integers as an infinite list thereof:

integs = 0 : (integs + ones)   where ones = 1:ones

it is quite similar (conceptually).
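
(Spelled out as directly runnable Haskell, with the lifting made explicit
instead of overloading (+) and (*) - a sketch:)

filtr :: Num a => a -> a -> [a] -> [a]
filtr a b (x0:xq) = y
  where
    y  = a * x0 : yq
    yq = zipWith (+) (map (a *) xq) (map (b *) y)

-- the impulse response decays geometrically, e.g.
--   take 5 (filtr 0.5 0.5 (1 : repeat 0))
--     ==> [0.5, 0.25, 0.125, 6.25e-2, 3.125e-2]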

y IS NOT a longer list than yq, since co-recursive equations without
limiting cases apply only to *infinite* streams. Obviously, the
consumer of such a stream will generate a finite segment only, but that
is his/her/its problem, not that of the producer.


> So:
> 1) Someone reading the code needs to do a lot of work to try to recover
> the original equation
> 2) Wouldn't an imperative loop, using the original equation directly,
> have made everything much simpler?
> 3) Therefore laziness has lead to obfuscated code.

1. Such codes are not meant to be thrown at an unprepared audience. Either
    the reader knows something about stream processing, or some introduction
    is necessary.

2. Are we speaking about assembly-style, imperative programming, or about
    functional style? Please write your loop in Haskell or any other
    functional language, and share with us its elegance.
    Please note that for stream-oriented applications, such as the sound
    processing I mentioned, this "n" index has no meaning whatsoever; it
    just denotes a position within the stream. Index-less style is more
    compact, with fewer redundant entities.

3. Full disagreement. There is NOTHING obfuscated here; on the contrary, the
    full semantics is in front of your eyes, it requires only some reasoning
    in terms of infinite lists. See point (1).

Thanks.

Jerzy Karczmarczuk

Re: Re: Functional programming for processing of large raster images

Matthias Fischmann
In reply to this post by Simon Peyton Jones

On Thu, Jun 22, 2006 at 09:22:34AM +0100, Simon Peyton-Jones wrote:

> To: Brian Hulley <[hidden email]>, Joel Reymont <[hidden email]>
> Cc: [hidden email]
> From: Simon Peyton-Jones <[hidden email]>
> Date: Thu, 22 Jun 2006 09:22:34 +0100
> Subject: RE: [Haskell-cafe] Re: Functional programming for processing
>   of large raster images
>
> http://haskell.galois.com/cgi-bin/haskell-prime/trac.cgi/wiki/BangPatterns
>
> Bang patterns make it much more convenient to write a strict function.
> E.g
> f (x, !y) = ...
> is strict both in the pair (of course) but also in the second component
> of the pair, y.
i am ecstatic to hear that :).

if it really means that 'y' will be fully evaluated (not top-level
normal form, but whatsthenameforthis - the way ocaml evaluates
expressions), it's something i have been missing so much that i was
thinking of switching back to a strict language again.

will upgrade as soon as i can, thanks!


m.


Re: Functional progr., images, laziness and all the rest

Vo Minh Thu
In reply to this post by Brian Hulley
hi !

2006/6/22, Brian Hulley <[hidden email]>:
[snip]
> So:
> 1) Someone reading the code needs to do a lot of work to try to recover the
> original equation
> 2) Wouldn't an imperative loop, using the original equation directly, have
> made everything much simpler?
> 3) Therefore laziness has lead to obfuscated code.

the way i see it:
1/ but why do you call it the 'original equation'? the original thing is
expressed informally in your head, then turned into a formula or an
algorithm. here i find the first one-liner really readable (although i
think it misses Y[0] = X[0]). but the loop is really readable too, for an
imperative-enabled mind.
2/ for me, a list or a loop is quite the same thing (in this case)
(although a loop is more general, since you can use the base index in
weird ways).
3/ see 1/ and 2/

Jerzy:
do you know a way to express such computations, but with elements
depending on time (in about the same way as in languages like Esterel)
(i.e. depending on IO)?
Paul Hudak uses a Channel in his book Haskell SOE... but is there another way?

thanks,
vo minh thu