[Fwd: Re: Computer Language Shootout]


[Fwd: Re: Computer Language Shootout]

Simon Marlow-5
Forwarding on behalf of Andrzej Jaworski <[hidden email]>:

-------- Original Message --------
From: Andrzej Jaworski <[hidden email]>

Dear fellows,

It is ironic that just after SPJ disclosed Brent Fulgham's comments on
Haskell and the shootout, the situation has radically changed for the worse.
Not knowing that, I committed a blunder by referring to the damn benchmark
http://shootout.alioth.debian.org/gp4/benchmark.php?test=all&lang=all
together with multicore-support arguments when trying to convert a prominent
OCaml programmer to Haskell. Now they know more about us :-)

What kind of language jumps 30% up and down on a benchmark while other
languages gracefully stay in line? Can any of you explain the reason for
this disgraceful degradation in the Computer Language Shootout?

Clean has also declined in these benchmarks, but not as much as Haskell.
According to John van Groningen, Clean's binary-trees program in the previous
shootout version used a lazy data structure, which resulted in lower memory
usage and much faster execution. That program was removed by the maintainer of
the shootout and replaced by a much slower one using a strict data structure.

I fear that if laziness accounted for GHC's previously good scores, then
algorithms where laziness is downplayed must be responsible for the
anecdotal opinion that Haskell can be extremely slow. For example, on the
Royal Road Problem in genetic algorithms
http://www.dcs.ed.ac.uk/home/stg/sfpw/book/Garmendia-Doval/cameraready.ps
Haskell was found to be, on average, over 500 times slower than the SML
implementation!

If such extreme variations in performance are inherent to Haskell, then
multicore support, rather than boosting its relative performance (against,
e.g., one-core-bound OCaml), may merely amplify Haskell's unevenness to the
point where programming becomes more of an art than a science, as it is in
Prolog.

Perhaps making a collective effort to benchmark Haskell programs and
analyze the results in some methodical way could prove helpful?

Regards,

Andrzej Jaworski



Re: [Fwd: Re: Computer Language Shootout]

Joel Reymont
Where are SPJ's disclosed comments from Brent Fulgham?

On Jan 25, 2007, at 8:55 AM, Simon Marlow wrote:

> Forwarding on behalf of Andrzej Jaworski <[hidden email]>:
>
> -------- Original Message --------
> From: Andrzej Jaworski <[hidden email]>
>
> Dear fellows,
>
> It is ironic that just after SPJ disclosed Comments from Brent  
> Fulgham on
> Haskell and the shootout the situation has radically changed for  
> the worse.

--
http://wagerlabs.com/






RE: [Fwd: Re: Computer Language Shootout]

Simon Peyton Jones
http://www.mail-archive.com/haskell@.../msg18863.html

(It was Simon Marlow actually, but we are joined at the hip so it hardly matters)

| -----Original Message-----
| From: [hidden email] [mailto:[hidden email]] On Behalf Of Joel Reymont
| Sent: 25 January 2007 09:01
| To: Simon Marlow
| Cc: [hidden email]
| Subject: Re: [Haskell] [Fwd: Re: Computer Language Shootout]
|
| Where are SPJs disclosed comments from Brent Fulgham?
|
| On Jan 25, 2007, at 8:55 AM, Simon Marlow wrote:
|
| > Forwarding on behalf of Andrzej Jaworski <[hidden email]>:
| >
| > -------- Original Message --------
| > From: Andrzej Jaworski <[hidden email]>
| >
| > Dear fellows,
| >
| > It is ironic that just after SPJ disclosed Comments from Brent
| > Fulgham on
| > Haskell and the shootout the situation has radically changed for
| > the worse.

Re: [Fwd: Re: Computer Language Shootout]

Kenneth Hoste
In reply to this post by Simon Marlow-5
(Simon and Andy, if you guys got this twice, sorry about the double mail)

On 25 Jan 2007, at 09:55, Simon Marlow wrote:

> Forwarding on behalf of Andrzej Jaworski <[hidden email]>:
>
> -------- Original Message --------
> From: Andrzej Jaworski <[hidden email]>
>
> <snip>

> Perhaps making a collective effort towards benchmarking Haskell  
> programs and
> analyzing the results in some methodic way could prove helpful?

In which way? Using hardware performance counter metrics on actual  
hardware? Or determining which bottlenecks occur with Haskell  
programs in binary form (in terms of data locality and the lot)? Is  
there a 'Haskell-program benchmark suite' out there?

greetings,

Kenneth

--

Statistics are like a bikini. What they reveal is suggestive, but  
what they conceal is vital (Aaron Levenstein)

Kenneth Hoste
ELIS - Ghent University
[hidden email]
http://www.elis.ugent.be/~kehoste



Re: [Fwd: Re: Computer Language Shootout]

John Meacham
In reply to this post by Simon Marlow-5
On Thu, Jan 25, 2007 at 08:55:37AM +0000, Simon Marlow wrote:
> Clean has also declined in these benchmarks but not that much as Haskell.
> According to John van Groningen Clean's binary-trees program in the previous
> shootout version used lazy data structure which resulted in lower memory
> usage and much faster execution. That was removed by the maintainer of the
> shootout and replaced by a much slower one using strict data structure.

Why was this done?

I notice that a lot of people espouse the 'strictness is good, lazy is
bad' methodology of optimization. We really should be careful about
that: naive additions of seqs (or even worse, deepSeqs!) can kill the
algorithmic performance of code and create bad space leaks. Lazy is
good; it is much better to never compute something at all than to compute
it just a bit faster. Haskell is lazy by default, and users need to start
thinking in terms of laziness by default to take full advantage of
Haskell's goodness.

Not that careful manipulation of strictness isn't key to good
performance, but the 'shotgun' approach of just adding deepSeqs and
seqs everywhere (without benchmarking to see whether it actually helps)
should be avoided, and certainly not advocated to new users.

Although it is especially hard to generalize optimization rules for
Haskell, I think the closest one can get to a rule of thumb is: make
operations on integral or basic types as strict as possible, and make
everything else as lazy as possible. At least, that is as far as one
should go without benchmarks or some good reasoning to manipulate
strictness further.
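
A minimal sketch of what that rule of thumb looks like in practice (illustrative only; the names sumLazy/sumStrict are not from this thread): be strict in the Int accumulator, stay lazy everywhere else.

  import Data.List (foldl')

  -- The lazy left fold builds a long chain of (+) thunks before anything
  -- is evaluated, which is exactly the kind of space leak described above.
  sumLazy :: [Int] -> Int
  sumLazy = foldl (+) 0

  -- foldl' forces the Int accumulator at every step, so it runs in
  -- constant space; the input list itself is still consumed lazily.
  sumStrict :: [Int] -> Int
  sumStrict = foldl' (+) 0

  -- Staying lazy elsewhere still pays off: only the needed prefix of an
  -- infinite list is ever computed.
  sumSmallPrefix :: [Int] -> Int
  sumSmallPrefix = sumStrict . takeWhile (< 1000)

  main :: IO ()
  main = do
    print (sumStrict [1 .. 1000000])
    print (sumSmallPrefix [1 ..])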

        John

--
John Meacham - ⑆repetae.net⑆john⑈

Re: [Fwd: Re: Computer Language Shootout]

Tim Chevalier
On 1/25/07, John Meacham <[hidden email]> wrote:
> On Thu, Jan 25, 2007 at 08:55:37AM +0000, Simon Marlow wrote:
> > Clean has also declined in these benchmarks but not that much as Haskell.
> > According to John van Groningen Clean's binary-trees program in the previous
> > shootout version used lazy data structure which resulted in lower memory
> > usage and much faster execution. That was removed by the maintainer of the
> > shootout and replaced by a much slower one using strict data structure.
>
> Why was this done?
>

Careful about your attributions! The passage you quoted was from
Andrzej Jaworski's message that Simon forwarded, and was not written
by Simon.

(I don't know the answer to your actual question, but maybe answers to
it should go to haskell-cafe?)

Cheers,
Kirsten

--
Kirsten Chevalier* [hidden email] *Often in error, never in doubt
"Getting an education was a bit like a communicable sexual disease. It made
you unsuitable for a lot of jobs, and then you had the urge to pass it on."
-- Terry Pratchett

Re: [Fwd: Re: Computer Language Shootout]

Brent Fulgham
In reply to this post by Simon Marlow-5
Hi,

I recently re-ran most of the benchmarks. A couple of things I changed recently on the shootout are:

1.  I re-ran the various tests using higher 'N' values to force more work to be done.
2.  I upgraded to the most recent GHC available through Debian.
3.  I don't recall which test was changed to its less-efficient counterpart, but as I recall this was due to some sniping that the lazy analysis was avoiding doing some of the work.  After comparing the tests, we ended up agreeing, and the Haskell implementation was modified.

Let me know if you have any updates or revisions you would like us to apply.

Thanks,

-Brent

----- Original Message ----
From: Simon Peyton-Jones <[hidden email]>
To: Joel Reymont <[hidden email]>
Cc: "[hidden email]" <[hidden email]>
Sent: Thursday, January 25, 2007 1:04:41 AM
Subject: RE: [Haskell] [Fwd: Re: Computer Language Shootout]

http://www.mail-archive.com/haskell@.../msg18863.html

(It was Simon Marlow actually, but we are joined at the hip so it hardly matters)

| -----Original Message-----
| From: [hidden email] [mailto:[hidden email]] On Behalf Of Joel Reymont
| Sent: 25 January 2007 09:01
| To: Simon Marlow
| Cc: [hidden email]
| Subject: Re: [Haskell] [Fwd: Re: Computer Language Shootout]
|
| Where are SPJs disclosed comments from Brent Fulgham?
|
| On Jan 25, 2007, at 8:55 AM, Simon Marlow wrote:
|
| > Forwarding on behalf of Andrzej Jaworski <[hidden email]>:
| >
| > -------- Original Message --------
| > From: Andrzej Jaworski <[hidden email]>
| >
| > Dear fellows,
| >
| > It is ironic that just after SPJ disclosed Comments from Brent
| > Fulgham on
| > Haskell and the shootout the situation has radically changed for
| > the worse.

Re: [Fwd: Re: Computer Language Shootout]

Cale Gibbard
On 25/01/07, Brent Fulgham <[hidden email]> wrote:
> 3.  I don't recall which test was changed to its less-efficient counterpart, but as I recall this was due to some sniping that the lazy analysis was avoiding doing some of the work.  After comparing the tests, we ended up agreeing and the Haskell implementation was modified.

How can it be that some of the work is avoided if the final result can
be printed and is correct? If there are requirements to produce
specific intermediate values, it may be a good idea to add those to
the required output.

 - Cale

Re: [Fwd: Re: Computer Language Shootout]

Florian Weimer
In reply to this post by John Meacham
* John Meacham:

>> Clean has also declined in these benchmarks but not that much as Haskell.
>> According to John van Groningen Clean's binary-trees program in the previous
>> shootout version used lazy data structure which resulted in lower memory
>> usage and much faster execution. That was removed by the maintainer of the
>> shootout and replaced by a much slower one using strict data structure.
>
> Why was this done?

I suppose the intent of the binary-trees benchmark is to measure
allocation performance in the presence of a fairly large (well, not in
today's terms) data structure.  Using laziness to prevent that data
structure from being built (or to introduce additional sharing) kind of
defeats the purpose of the benchmark.

Note that these are microbenchmarks, not real applications.  Imposing
such rules makes sense.
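
For illustration only (a hedged sketch, not the actual shootout submission): the difference hinges on whether the tree's constructor fields are strict. With lazy fields, only the parts of the tree the consumer actually inspects ever get allocated, which is what defeats an allocation benchmark.

  -- Sketch only: not the benchmark program itself.
  data LazyTree   = LNil | LNode LazyTree Int LazyTree         -- lazy fields
  data StrictTree = SNil | SNode !StrictTree !Int !StrictTree  -- strict fields

  -- Build complete trees of a given depth.
  buildLazy :: Int -> LazyTree
  buildLazy 0 = LNil
  buildLazy d = LNode (buildLazy (d - 1)) d (buildLazy (d - 1))

  buildStrict :: Int -> StrictTree
  buildStrict 0 = SNil
  buildStrict d = SNode (buildStrict (d - 1)) d (buildStrict (d - 1))

  -- Walking only the left spine forces O(depth) nodes of the lazy tree,
  -- whereas the strict tree is fully allocated as soon as its root is forced.
  leftSpineSum :: LazyTree -> Int
  leftSpineSum LNil          = 0
  leftSpineSum (LNode l x _) = x + leftSpineSum l

  main :: IO ()
  main = print (leftSpineSum (buildLazy 30))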

Re: [Fwd: Re: Computer Language Shootout]

Donald Bruce Stewart
fw:

> * John Meacham:
>
> >> Clean has also declined in these benchmarks but not that much as Haskell.
> >> According to John van Groningen Clean's binary-trees program in the previous
> >> shootout version used lazy data structure which resulted in lower memory
> >> usage and much faster execution. That was removed by the maintainer of the
> >> shootout and replaced by a much slower one using strict data structure.
> >
> > Why was this done?
>
> I suppose the itent of the binary-trees benchmark is to measure
> allocation performance in the presence of a fairly large (well, not in
> today's terms) data structure.  Using laziness to prevent that data
> structure from being built (or use additional sharing) kind of defeats
> the purpose of the benchmark.
>
> Note that these are microbenchmarks, not real applications.  Imposing
> such rules makes sense.

Agreed. I've submitted a strict variant that should allocate similarly
to OCaml. I'd suggest stating this requirement for strict allocation in
the spec.

Regards,
  Don

Re: [Fwd: Re: Computer Language Shootout]

Andrzej Jaworski
In reply to this post by Florian Weimer
It sounds reasonable. However, knowledge of how a program performs in
micro-steps does not add up to the whole picture, so the benchmarks may whet
an appetite for a lunch that never comes. I have pointed to such an example:
an astonishing and unexplained underperformance of Haskell, with all the
profiling information at hand.

I guess Haskell compilers are not particularly good at detecting specific
properties of a program and hence at optimizing for them. This, however, only
shows up with size, so Donald's benchmarks cannot catch it.

For this reason, undiagnosed and untreated, Haskell has been abandoned, for
example, in Algebraic Dynamic Programming, in spite of its unparalleled
expressive power and a lot of hope. In ILP/IFP and GP it has failed too.

Cheers,
--Andrzej


Re: [Fwd: Re: Computer Language Shootout]

Stefan O'Rear
On Mon, Feb 26, 2007 at 03:43:08AM +0100, Andrzej Jaworski wrote:

> It sounds reasonable. However knowledge of how program performs in
> micro-steps does not add up, so the benchmarks may wet up appetite for lunch
> that does not come. I have pointed into such example - an astonishing and
> unexplained underperformance of Haskell with all the profiling information
> at hand.
>
> I guess Haskell compilers are not particularly good at detecting specific
> properties of a program and hence with optimizing it. This however shows up
> with size so Donald's benchmarks cannot catch that out.
>
> For this reason, undiagnosed and untreated, Haskell has been abandoned for
> example in Algebraic Dynamic Programming, in spite of its unparallel
> expressive power and a lot of hope. In ILP/IFP and GP it failed too.

C, thirty years ago: (disclaimer, I'm 16)
 * Very much slower than assembly
 * Very much easier to use than assembly
 * Very easy to interface with assembly

So everyone used C with assembler inner loops, no big deal.

Haskell, now:
 * Very much slower than C
 * Very much easier to use than C
 * Very easy to interface with C

So I think we should do the same.  It even shows in the Shootout - the
programs that are simultaneously fastest and clearest are not pure
Haskell, but delegate their innermost loops to tuned C libraries (FPS
and GMP).
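
As a hedged illustration of that last point (not from Stefan's post), the Haskell/C boundary really is a one-liner per function; here the "tuned C" is just cos from libm:

  {-# LANGUAGE ForeignFunctionInterface #-}
  -- Sketch of how little ceremony the FFI needs: one declaration per C function.
  import Foreign.C.Types (CDouble)

  foreign import ccall unsafe "math.h cos"
    c_cos :: CDouble -> CDouble   -- pure, so usable like any Haskell function

  main :: IO ()
  main = print (map c_cos [0, 0.5, 1.0])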

Stefan

Re: [Fwd: Re: Computer Language Shootout]

Daniel Franke-6

On Sun, Feb 25, 2007 at 06:57:29PM -0800, Stefan O'Rear wrote:
> Haskell, now:
>  * Very easy to interface with C

I might buy "as easy as interfacing a GCed language with a non-GCed
language can reasonably be".  When the GC abstraction leaks, it leaks
angry wasps.

Re: [Fwd: Re: Computer Language Shootout]

Fawzi Mohamed-2
In reply to this post by Andrzej Jaworski
I am new to Haskell, but I find your assertions surprising, given that in
my experience the really performance-critical code is small, and the rest
can even be interpreted.
As far as I know, C/C++ and similar languages are not really that advanced
with respect to whole-program optimization (not much more than inlining).

I had the impression that Haskell, until the shootout push, was not good at
optimizing / had no optimized libraries for some common computational
kernels, but is now in much better shape (for GHC), and with what Don is
doing, hopefully it will stay so.

Can you corroborate your points a little more?

cheers
Fawzi
On Feb 26, 2007, at 3:43 AM, Andrzej Jaworski wrote:

> It sounds reasonable. However knowledge of how program performs in
> micro-steps does not add up, so the benchmarks may wet up appetite  
> for lunch
> that does not come. I have pointed into such example - an  
> astonishing and
> unexplained underperformance of Haskell with all the profiling  
> information
> at hand.
>
> I guess Haskell compilers are not particularly good at detecting  
> specific
> properties of a program and hence with optimizing it. This however  
> shows up
> with size so Donald's benchmarks cannot catch that out.
>
> For this reason, undiagnosed and untreated, Haskell has been  
> abandoned for
> example in Algebraic Dynamic Programming, in spite of its unparallel
> expressive power and a lot of hope. In ILP/IFP and GP it failed too.
>
> Cheers,
> --Andrzej
>

Re[2]: [Fwd: Re: Computer Language Shootout]

Bulat Ziganshin-2
Hello Fawzi,

Monday, February 26, 2007, 3:44:09 PM, you wrote:

> I am new to haskell, but I find your assertions surprising, given  
> that from my experience the really performance critical code is  
> little, and the reset can be even interpreted.
> As far as I know C/C++ or similar are not really that advanced with  
> respect to whole program optimization (not much more than inlining).

Yes, Haskell provides better opportunities for higher-level program
optimization, and I guess that GHC is better in this respect than
traditional C compilers, but C programmers just make that sort of
optimization manually. On the side of lower-level optimization, I guess
that only jhc can currently compete with C compilers - and only for
low-level-optimized Haskell code (which is harder to write than
low-level-optimized C code), and only because jhc generates C itself as
intermediate code.

> I had the impression that haskell, until the shootout push, was not  
> good at optimizing/had not optimized libraries for some common  
> computational kernels, but now is in a much better shape (for ghc),  
> and with Don is doing, hopefully it will stay so.

Naive youth :)  First, most shootout entries depend on library speed
(multithreading, regexps), not on the speed of the code generated by the
compiler. Second, GHC was not changed much, but the programs were rewritten
in a non-idiomatic way. You can find the ideas behind such rewriting in
http://www.cse.unsw.edu.au/~dons/papers/fusion.pdf
http://www.cse.unsw.edu.au/~chak/papers/afp-arrays.ps.gz
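
To give a feel for what "rewritten in a non-idiomatic way" means, here is a hedged sketch (not an actual shootout entry): the idiomatic one-liner is replaced by a hand-written tail-recursive worker with a strict accumulator, roughly the kind of low-level loop the fusion work aims to produce automatically.

  {-# LANGUAGE BangPatterns #-}
  -- Sketch only: idiomatic code vs. the hand-tuned loop style of shootout entries.

  -- Idiomatic: clear, but allocates a list and relies on the optimiser.
  sumIdiomatic :: Int -> Int
  sumIdiomatic n = sum [1 .. n]

  -- "Shootout style": explicit worker loop, strict accumulator, no intermediate list.
  sumLoop :: Int -> Int
  sumLoop n = go 0 1
    where
      go !acc i
        | i > n     = acc
        | otherwise = go (acc + i) (i + 1)

  main :: IO ()
  main = print (sumLoop 10000000)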


--
Best regards,
 Bulat                            mailto:[hidden email]


Re: [Fwd: Re: Computer Language Shootout]

Andrzej Jaworski
In reply to this post by Andrzej Jaworski
The examples I pointed to seem to share a strong and relatively consistent
program logic. In the case of large GA (e.g. the Royal Road Problem) and IFP
(e.g. ADATE), SML implementations were exhaustively shown to handle this
logic much better.

In the case of Algebraic Dynamic Programming, the C compiler seems to address
specific properties of a program more adequately, which leads to substantial
improvements in optimization. It is important to stress that in ADP we deal
with the strong logic of the application domain. Handcrafting C code for
regular applications does not show that kind of advantage, so it wouldn't
leave Haskell in the dust. In general, a declarative nature has the edge over
C craftsmanship (see:
http://www.clip.dia.fi.upm.es/papers/carro06:stream-interpreter-TR.pdf).


Re: [Fwd: Re: Computer Language Shootout]

Fawzi Mohamed-2
Thanks for the answers, Bulat and Andrzej,

so it seems I was a little too naive. I think I have understood what Andrzej
wanted to say, but I still don't buy it all. With Google I could find only
something on Algebraic Dynamic Programming (links to the others?); there they
went from an embedded Haskell implementation to a direct compiler
implementation.

From what I understood, I believe the mis-performance in that case came from
the embedded nature of the DSL:
1) no specialized parsing step (making the usage more difficult)
2) no abstract representation of the program that could be optimized with
   transformations specific to the DSL
3) not fully optimized kernel methods

Writing a real compiler for that language made sense, and so did the choice
of C as the language for it, but I think it would have been possible to write
it in Haskell without a big performance hit. I haven't seen Haskell compilers
significantly worse than others except on really low-level computational
code, which (with more effort than in C) can be optimized so that it is not
really so terribly much worse.

Sincerely
Fawzi


Re: [Fwd: Re: Computer Language Shootout]

Andrzej Jaworski
> writing a real compiler for that language made sense, and also the
> choice of c as language for it, but I think that it would have been
> possible to write it in haskell without a big performance hit.

ADP was conceived in Haskell, and the research is done by very brainy people,
so I suggest buying this stuff :-)

My intention was to signal facts and hope for the experts' attention. I would
rather not bore anybody with my own explanations.

Regards,
--Andrzej


Re: [Fwd: Re: Computer Language Shootout]

Duncan Coutts
In reply to this post by Stefan O'Rear
On Sun, 2007-02-25 at 18:57 -0800, Stefan O'Rear wrote:

> Haskell, now:
>  * Very much slower than C
>  * Very much easier to use than C
>  * Very easy to interface with C
>
> So I think we should do the same.  It even shows in the Shootout - the
> programs that are simultaneously fastest and clearest are not pure
> Haskell, but delegate their innermost loops to tuned C libraries (FPS
> and GMP).

I should note that FPS is almost completely Haskell code, not C. We use
C for things like memcmp, memcpy, memset, reverse_copy, intersperse,
maximum, minimum and count.

Certainly some of the innards are low-level-style Haskell, though not the
kind that could be replicated in C, because we use high-level
transformations to fuse loop bodies together and wrap them in
high-performance, low-level loop code.

This is not a style where we just wrap well-tuned C code; it is a style
where we generate high-performance low-level code from a high-level spec.
This relies on GHC's excellent and programmable optimiser.

It's wrong to say that the shootout improvements were only down to
improved libraries. The performance of ByteString code improved very
significantly between GHC 6.4 and 6.6, and a large part of that was down
to optimiser improvements (not just the ForeignPtr representation change).
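
As a rough illustration of that style (a hedged sketch, not code from the ByteString library itself): the programmer writes a pipeline of high-level combinators and relies on GHC's optimiser and the library's rewrite rules to fuse them into a single pass over the buffer.

  -- Sketch only: high-level ByteString pipeline the optimiser can fuse.
  import qualified Data.ByteString.Char8 as B

  -- Count the spaces in the input: written as two combinators, intended to
  -- compile to one loop over the byte buffer rather than two passes.
  countSpaces :: B.ByteString -> Int
  countSpaces = B.length . B.filter (== ' ')

  main :: IO ()
  main = B.getContents >>= print . countSpaces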

Duncan


Re: [Fwd: Re: Computer Language Shootout]

Sven Panne
In reply to this post by Andrzej Jaworski
On Tuesday 27 February 2007 02:13, Andrzej Jaworski wrote:
> > writing a real compiler for that language made sense, and also the
> > choice of c as language for it, but I think that it would have been
> > possible to write it in haskell without a big performance hit.
>
> ADP was conceived in Haskell and the research is done by very brainy
> people, so I suggest to buy this stuff:-) [...]

A classic invalid technique of proof:

http://www.maths.uwa.edu.au/~berwin/humour/invalid.proofs.html#1.10Proofbyeminentauthority

;-)

Cheers,
   S.