I only just subscribed to this mailing list and I am a complete Haskell
newbie, so forgive me if this is too OT.

I noticed a recent thread about writing a Mathematica implementation in
Haskell. I think this is an excellent idea and would be a great project
for a Haskell newbie. I wrote a toy Mathematica implementation in OCaml
while I waited to be viva'd for my PhD. It garnered so much interest
that Wolfram Research bought it from me for £4,500 and gave me several
free copies of Mathematica.

Regarding the specific points made:

1. Numerical libraries: you should be able to reuse existing libraries
like GMP, BLAS, LAPACK, FFTW and so on. These are often much faster than
Mathematica's. For example, FFTW was about 4x faster than Mathematica's
FFT the last time I benchmarked it. However, they do not support
interval arithmetic.

2. GUI: I would take our existing vector graphics software:

  http://www.ffconsultancy.com/products/smoke_vector_graphics/
  http://www.ffconsultancy.com/products/fsharp_for_visualization/

and rewrite it in Haskell as the foundation. This would far exceed
anything that Mathematica has to offer, in part because Mathematica's
graphics are still evaluated via the completely generic rewrite engine,
which is extremely slow. Our code already implements high-performance
hardware-accelerated vector graphics and it is probably one of the first
things I would consider porting to Haskell (if there is any commercial
interest in such a library).

3. The language: the hardest part of reimplementing Mathematica is
inferring what it means (there are no formal evaluation semantics). Once
you've done that, it is just a case of implementing an extensible term
rewriter and putting in about 20 core rules. The pattern matcher is well
under 100 LOC and you can do various things to make it more efficient.
There are two tricks that vastly improve performance of the rewriter:
return physically identical results whenever possible, and perform
substitution and evaluation at the same time rather than as two separate
passes.

4. Libraries: you should have no trouble exceeding the capabilities of
Mathematica by pulling in existing libraries. For example, Mathematica
provides no wavelet transforms, no time-frequency transforms, no
function minimization over an arbitrary number of variables, etc.

Having worked in numerical computing for many years, I can say that
Mathematica is an excellent tool for prototyping and for doing
disposable analyses, but many people reach for conventional languages
when Mathematica's capabilities run dry. You should easily be able to
implement a rewriter for the language that is ten times faster and
doesn't leak.

Incidentally, my implementation of Mathematica in OCaml took four days,
and it was one of my first OCaml programs.

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?e

_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe
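For a flavour of the pattern matcher described in point 3, here is a
minimal Haskell sketch. All names and datatypes are my own invention,
not the OCaml implementation discussed above: it handles only symbols,
numbers, applications, and named blanks (Mathematica's x_), with
repeated blanks required to bind equal subterms.

```haskell
import qualified Data.Map as M
import Control.Monad (foldM)

data Term = Sym String | Num Double | App String [Term]
  deriving (Eq, Show)

-- Patterns: literal symbols/numbers, an application with a fixed head,
-- or a named blank such as Mathematica's x_.
data Pat = PSym String | PNum Double | PApp String [Pat] | PBlank String

type Bindings = M.Map String Term

-- Extend the bindings if the pattern matches the term; a repeated
-- blank must bind the same subterm each time it occurs.
match :: Pat -> Term -> Bindings -> Maybe Bindings
match (PBlank v) t b =
  case M.lookup v b of
    Nothing           -> Just (M.insert v t b)
    Just t' | t' == t -> Just b
    _                 -> Nothing
match (PSym s) (Sym s') b | s == s' = Just b
match (PNum n) (Num n') b | n == n' = Just b
match (PApp f ps) (App g ts) b
  | f == g && length ps == length ts =
      foldM (\acc (p, t) -> match p t acc) b (zip ps ts)
match _ _ _ = Nothing
```

With this, Plus[x_, x_] matches Plus[1, 1] binding x to 1 and fails on
Plus[1, 2]. A real rewriter then adds sequence blanks (x__), attributes
such as Flat and Orderless, and conditional patterns, which is where
most of the remaining complexity lives.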
Jon Harrop wrote:
> I noticed a recent thread about writing a Mathematica implementation
> in Haskell.

Yeah, that was me.

> I think this is an excellent idea and would be a great project for a
> Haskell newbie.

Uh... I think it's actually a tad harder than it looks. [Understatement!]

> I wrote a toy Mathematica implementation in OCaml while I waited to be
> viva'd for my PhD. It garnered so much interest that Wolfram Research
> bought it from me for £4,500 and gave me several free copies of
> Mathematica.

Are you serious?! o_O

> Regarding the specific points made:
>
> 1. Numerical libraries: you should be able to reuse existing libraries
> like GMP, BLAS, LAPACK, FFTW and so on. These are often much faster
> than Mathematica's. For example, FFTW was about 4x faster than
> Mathematica's FFT the last time I benchmarked it. However, they do not
> support interval arithmetic.

Now this is interesting. The claim is that Mathematica is the fastest,
most powerful software on planet earth, second to none. Actually it
turns out that at least for factoring moderately big integers, pari/gp
seems to be a fair bit faster (50% or so). I have no idea about the
rest.

Note that (as I understand it) GHC implements Haskell's Integer type
using the GMP. And for some reason or other, they want to remove this
feature...

> 2. GUI: I would take our existing vector graphics software:
>
>   http://www.ffconsultancy.com/products/smoke_vector_graphics/
>   http://www.ffconsultancy.com/products/fsharp_for_visualization/
>
> and rewrite it in Haskell as the foundation. This would far exceed
> anything that Mathematica has to offer, in part because Mathematica's
> graphics are still evaluated via the completely generic rewrite engine
> which is extremely slow. Our code already implements high-performance
> hardware-accelerated vector graphics and it is probably one of the
> first things I would consider porting to Haskell (if there is any
> commercial interest in such a library).

Erm... have you seen Mathematica 6? That's OpenGL accelerated too. I've
just been playing with it in fact - it's pretty fast as far as I can
tell.

> 3. The language: the hardest part of reimplementing Mathematica is
> inferring what it means (there are no formal evaluation semantics).
> Once you've done that it is just a case of implementing an extensible
> term rewriter and putting in about 20 core rules. The pattern matcher
> is well under 100 LOC and you can do various things to make it more
> efficient. There are two tricks that vastly improve performance of the
> rewriter: return physically identical results whenever possible, and
> perform substitution and evaluation at the same time rather than as
> two separate passes.

Haskell has pattern matching, but what Mathematica does is much more
sophisticated. I have tried to implement it several times, and failed.
(But that was Pascal, this is Haskell...)

> 4. Libraries: You should have no trouble exceeding the capabilities of
> Mathematica by pulling in existing libraries. For example, Mathematica
> provides no wavelet transforms, no time-frequency transforms, no
> function minimization over an arbitrary number of variables etc.

What...the...hell...?

Mathematica contains the largest, most comprehensive set of
implementations of special functions anywhere in the world. It has a
*vast* collection of identities and transformation rules constituting
man-centuries of R&D work. It has cutting edge symbolic integration
capabilities. It has multiple numerical solver algorithms. It has...

Yeah, should only take 5 minutes or so to exceed those capabilities.
Easy really...

> You should easily be able to implement a rewriter for the language
> that is ten times faster and doesn't leak.
>
> Incidentally, my implementation of Mathematica in OCaml took four
> days, and it was one of my first OCaml programs.

OK, so you're saying that in 4 days you wrote something that
out-performs Mathematica, a program that has existed for decades and has
a vast, highly-funded R&D effort behind it featuring some of the
brightest minds in the field?

I'm in a state of disbelief here.
Hallo,

On 5/30/07, Andrew Coppin <[hidden email]> wrote:
> OK, so you're saying that in 4 days you wrote something that
> out-performs Mathematica, a program that has existed for decades and
> has a vast, highly-funded R&D effort behind it featuring some of the
> brightest minds in the field?
>
> I'm in a state of disbelief here.

If you want some amusement, just search for "Jon Harrop" in
comp.lang.lisp.

--
-alex
http://www.ventonegro.org/
In reply to this post by Andrew Coppin
Why do you seem so in awe of Mathematica? It's just another language
with a good set of libraries. Claims that it is the best, fastest, etc.
come from Wolfram advertising, no doubt. :)

    -- Lennart

On Wed, 30 May 2007, Andrew Coppin wrote:
> Date: Wed, 30 May 2007 22:15:55 +0100
> From: Andrew Coppin <[hidden email]>
> To: [hidden email]
> Subject: Re: [Haskell-cafe] Implementing Mathematica
>
> [...]
In reply to this post by Andrew Coppin
On Wednesday 30 May 2007 22:15:55 Andrew Coppin wrote:
> Jon Harrop wrote:
> > I wrote a toy Mathematica implementation in OCaml while I waited to
> > be viva'd for my PhD. It garnered so much interest that Wolfram
> > Research bought it from me for £4,500 and gave me several free
> > copies of Mathematica.
>
> Are you serious?! o_O

Yes.

> > 1. Numerical libraries: you should be able to reuse existing
> > libraries like GMP, BLAS, LAPACK, FFTW and so on. These are often
> > much faster than Mathematica's. For example, FFTW was about 4x
> > faster than Mathematica's FFT the last time I benchmarked it.
> > However, they do not support interval arithmetic.
>
> Now this is interesting. The claim is that Mathematica is the fastest,
> most powerful software on planet earth, second to none.

If you write a simple, numerically-intensive program that runs in the
Mathematica rewriter then its performance is about 100-1,000x slower
than that of a native-code compiled language like Haskell. Mathematica
is often 30x slower than interpreted OCaml bytecode.
Take this ray tracer, for example:

  scene = {Sphere[{0., 0., 4.}, 1.], Sphere[{-1., 1., 4.}, 1.],
           Sphere[{-1., -1., 4.}, 1.], Sphere[{1., 1., 4.}, 1.],
           Sphere[{1., -1., 4.}, 1.]};

  \[Delta] = Sqrt[$MachineEpsilon];

  Unitise[p_] := p/Sqrt[p.p]

  RaySphere[o_, d_, c_, r_] :=
    Block[{v = c - o, b = v.d, disc = b^2 - v.v + r^2},
      If[disc <= 0., \[Infinity],
        disc = Sqrt[disc];
        Block[{t2 = b + disc},
          If[t2 <= 0., \[Infinity],
            Block[{t1 = b - disc}, If[t1 > 0., t1, t2]]]]]]

  Intersect[o_, d_][{lambda_, n_}, Sphere[c_, r_]] :=
    Block[{lambda2 = RaySphere[o, d, c, r]},
      If[lambda2 >= lambda, {lambda, n},
        {lambda2, Unitise[o + lambda2 d - c]}]]

  Intersect[o_, d_][hit_, list_List] := Fold[Intersect[o, d], hit, list]

  nohit = {\[Infinity], {0., 0., 0.}};

  RayTrace[o_, d_, scene_] :=
    Block[{lambda, n, g, p},
      {lambda, n} = Intersect[o, d][nohit, scene];
      If[lambda === \[Infinity], 0.,
        g = n.neglight;
        If[g <= 0., 0.,
          p = o + lambda d + \[Delta] n;
          {lambda, n} = Intersect[p, neglight][nohit, scene];
          If[lambda < \[Infinity], 0., g]]]]

  Timing[image =
    Table[Table[RayTrace[{0., 0., -2.5}, Unitise[{x, y, 128.}], scene],
      {y, -64, 64}], {x, -64, 64}];]

This program takes 4.8s to run here. I bet if we translate it into
Haskell it will run much faster than that. As a guide, this Haskell ray
tracer is much more advanced and it can render a bigger (100x100) image
in only 0.2s:

  http://www.nobugs.org/developer/htrace/

Incidentally, when I try to recompile with optimizations turned on, GHC
refuses to work:

  $ ghc htrace.hs -o htrace
  $ ghc -O2 htrace.hs -o htrace
  compilation IS NOT required

I must delete the target or edit the source to get it to recompile. I
assume this is a known bug?

> Actually it turns out that at least for factoring moderately big
> integers, pari/gp seems to be a fair bit faster (50% or so). I have no
> idea about the rest.

Right, but that is just calling an internal function that is written
in C.
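As a rough indication of how directly the RaySphere routine above
carries over, here is a hedged Haskell sketch of just that function.
The tuple-based vector helpers are my own; this is not the htrace code
mentioned above.

```haskell
type Vec = (Double, Double, Double)

sub :: Vec -> Vec -> Vec
sub (ax, ay, az) (bx, by, bz) = (ax - bx, ay - by, az - bz)

dot :: Vec -> Vec -> Double
dot (ax, ay, az) (bx, by, bz) = ax*bx + ay*by + az*bz

infinity :: Double
infinity = 1 / 0

-- Smallest positive parameter t at which the ray o + t*d hits the
-- sphere with centre c and radius r, or infinity on a miss.
raySphere :: Vec -> Vec -> Vec -> Double -> Double
raySphere o d c r
  | disc <= 0 = infinity
  | t2 <= 0   = infinity
  | t1 > 0    = t1
  | otherwise = t2
  where
    v    = c `sub` o
    b    = v `dot` d
    disc = b*b - v `dot` v + r*r
    sd   = sqrt disc
    t2   = b + sd
    t1   = b - sd
```

The guard structure mirrors the nested Ifs in the Mathematica version,
but here it compiles to a few floating-point instructions instead of a
trip through the rewriter.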
Provided you only ever call a few library functions, performance will be
excellent in Mathematica. But when you cannot phrase your program in
terms of the built-in library functions, performance is terrible, and
this is when everyone reaches for a more efficient tool.

To me, performance is way down on the list of things that make
Mathematica great. Integrated graphics is probably the main benefit. I
mostly use MMA as a glorified graph plotter.

> Note that (as I understand it) GHC implements Haskell's Integer type
> using the GMP. And for some reason or other, they want to remove this
> feature...

Arbitrary precision integers are quite a performance burden and they are
rarely used. I would not expect a language that is trying to be
efficient to impose arbitrary precision integers (or floats).

> Erm... have you seen Mathematica 6?

Yes.

> That's OpenGL accelerated too.

Yes. Similarly, making graphics fast takes a lot more than just "using
OpenGL".

> I've just been playing with it in fact - it's pretty fast as far as I
> can tell.

It still invokes a completely generic term rewriter for everything it
evaluates. You can really feel this when you play with some of their
interactive demos. Even the simple ones are notably sluggish. Try
translating the Tiger demo from our site into Mathematica, for example.
Many of their demos will be trivial to write in Haskell but performance
would be a lot better.

I'd like to write some graphics demos in Haskell using OpenGL...

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?e
On Wed, May 30, 2007 at 11:56:30PM +0100, Jon Harrop wrote:
> On Wednesday 30 May 2007 22:15:55 Andrew Coppin wrote:
> > Jon Harrop wrote:
> > > I wrote a toy Mathematica implementation in OCaml while I waited
> > > to be viva'd for my PhD. It garnered so much interest that Wolfram
> > > Research bought it from me for £4,500 and gave me several free
> > > copies of Mathematica.
> >
> > Are you serious?! o_O
>
> Yes.

You said that constructing a specification is the hardest part of
implementing Mathematica, and you also say you managed to clone it. Can
you reveal your specification, or did WR give you an NDA?

Stefan
In reply to this post by Jon Harrop
On 5/30/07, Jon Harrop <[hidden email]> wrote:
> Incidentally, when I try to recompile with optimizations turned on,
> GHC refuses to work:
>
>   $ ghc htrace.hs -o htrace
>   $ ghc -O2 htrace.hs -o htrace
>   compilation IS NOT required
>
> I must delete the target or edit the source to get it to recompile. I
> assume this is a known bug?

If the sources haven't changed and you're only using a different
combination of command-line options, GHC's recompilation checker will
determine that no recompilation is necessary. You can turn off the
recompilation checker and force recompilation unconditionally by adding
the -no-recomp flag. (There's already a feature request to make the
recompilation checker consider changes to command-line options as well
as code; it just hasn't been implemented.)

Cheers,
Tim

--
Tim Chevalier * [hidden email] * Often in error, never in doubt
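Concretely, the workaround goes on the command line like any other flag
(flag name as given above; later GHC releases renamed it -fforce-recomp):

```shell
# Same sources, new flags: force GHC to rebuild anyway.
ghc -no-recomp -O2 htrace.hs -o htrace
```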
In reply to this post by Jon Harrop
On Wednesday 30 May 2007 07:04:31 Jon Harrop wrote:
> 3. The language: the hardest part of reimplementing Mathematica is
> inferring what it means (there are no formal evaluation semantics).
> Once you've done that it is just a case of implementing an extensible
> term rewriter and putting in about 20 core rules. The pattern matcher
> is well under 100 LOC and you can do various things to make it more
> efficient. There are two tricks that vastly improve performance of the
> rewriter: return physically identical results whenever possible, and
> perform substitution and evaluation at the same time rather than as
> two separate passes.

Sorry for replying to myself. :-)

It occurs to me that laziness will eliminate the intermediate data
structure between substitution and evaluation anyway, so that isn't such
a concern in Haskell.

However, I can't think how you might return physically identical results
when possible in Haskell. Essentially, you need a higher-order map
function:

  val id_map : ('a -> 'a) -> 'a t -> 'a t

that returns its input when "f x = x" for every x. How might this be
done?

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?e
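One way to approximate this in Haskell, sketched under my own naming
(nothing here is from GHC or any library): make the rewriting step
signal "unchanged" with Maybe, so the caller can keep the original
structure, and hence its sharing, whenever nothing changed.

```haskell
-- A step returns Nothing when it leaves its argument alone.
-- idMap lifts the step over a list: a Nothing result means "no element
-- changed, reuse the original list", so unchanged spines are shared
-- rather than copied.
idMap :: (a -> Maybe a) -> [a] -> Maybe [a]
idMap _ []       = Nothing
idMap f (x : xs) =
  case (f x, idMap f xs) of
    (Nothing, Nothing) -> Nothing                        -- share the input
    (mx, mxs)          -> Just (pick x mx : pick xs mxs)
  where
    pick old = maybe old id

-- Convenience wrapper that always yields a list.
rewrite :: (a -> Maybe a) -> [a] -> [a]
rewrite f xs = maybe xs id (idMap f xs)
```

This only preserves sharing structurally; for a genuine pointer-equality
test GHC does offer reallyUnsafePtrEquality#, but the Maybe discipline
above keeps the optimisation in pure, portable code.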
Jon,
> However, I can't think how you might return physically identical
> results when possible in Haskell. Essentially, you need a higher-order
> map function:
>
>   val id_map : ('a -> 'a) -> 'a t -> 'a t
>
> that returns its input when "f x = x" for every x. How might this be
> done?

fmap :: (Functor f) => (a -> b) -> f a -> f b

Cheers,
Stefan
In reply to this post by Jon Harrop
| Incidentally, when I try to recompile with optimizations turned on,
| GHC refuses to work:
|
| $ ghc htrace.hs -o htrace
| $ ghc -O2 htrace.hs -o htrace
| compilation IS NOT required

Yes, I think it's a bug. GHC should really compare the flags used last
time with the flags used this time, and recompile if they have changed.
If enough people yell about it, we'll probably fix it! Meanwhile opening
a Trac bug report (or perhaps feature request) would be a good start.

Simon
http://hackage.haskell.org/trac/ghc/ticket/106

It got changed to Won't Fix. Consider this a yell!

On 31/05/07, Simon Peyton-Jones <[hidden email]> wrote:
| Incidentally, when I try to recompile with optimizations turned on, GHC
In reply to this post by Simon Peyton Jones
On Thu, 2007-05-31 at 08:46 +0100, Simon Peyton-Jones wrote:
> | $ ghc htrace.hs -o htrace
> | $ ghc -O2 htrace.hs -o htrace
> | compilation IS NOT required
>
> Yes, I think it's a bug. GHC should really compare the flags used
> last time with the flags used this time [...]

As an (easier) alternative, I would find it in line with my expectations
if GHC always recompiled in the absence of --make, and recompiled based
on time stamps in the presence of --make.

-k
In reply to this post by Jon Harrop
Jon Harrop wrote:
> However, I can't think how you might return physically identical
> results when possible in Haskell.

Perhaps you might be interested then in the following function that
non-destructively updates a subterm in a large term, preserving sharing.
The function can be used to do a substitution in a term. The function is
described in

  http://okmij.org/ftp/Haskell/Zipper2.lhs

beginning with the phrase:

``We start therefore with an improved term enumerator that maximally
preserves sharing. If no updates were done, the result of the traversal
is the original term itself -- rather than its copy. Furthermore, this
property holds for each subterm. The new traversal function lets us
operate on subterms in pre-order, in-order, or post-order. More
importantly, it lets us effectively span a `canonical' spanning tree on
a term, so each node can be unambiguously identified. We do not require
equality on (sub)terms.''

That was the second article in a series; please see

  http://okmij.org/ftp/Computation/Continuations.html#zipper

for the full series.
In reply to this post by Stefan O'Rear
On Thursday 31 May 2007 00:10:27 Stefan O'Rear wrote:
> You said that constructing a specification is the hardest part of
> implementing Mathematica, and you also say you managed to clone it.
> Can you reveal your specification, or did WR give you an NDA?

NDA, although I did most of the reverse engineering independently
beforehand. They use lots of nifty tricks (basically any trick that you
can do easily in C) but there were plenty of other tricks they didn't
try because they are far from obvious in C.

I found an interesting alternative was to keep a lazily evaluated set of
the symbols found in each subexpression. This could be used to cull
searches and avoid unnecessary rewriting but also had no significant
performance overhead.

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?e
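The symbol-set trick described above can be sketched in Haskell like
this (all names are my own; the per-node set is an ordinary lazy field,
so it is only ever computed for subtrees the rewriter actually asks
about):

```haskell
import qualified Data.Set as Set

-- Each application node caches the set of symbols occurring below it.
data Expr = Sym String
          | Num Double
          | App (Set.Set String) String [Expr]

-- Smart constructor: build the cached set from the children's sets.
-- Laziness means the union is not computed until someone forces it.
app :: String -> [Expr] -> Expr
app f args = App (Set.insert f (Set.unions (map syms args))) f args

syms :: Expr -> Set.Set String
syms (Sym s)     = Set.singleton s
syms (Num _)     = Set.empty
syms (App s _ _) = s

-- A rule rewriting occurrences of symbol x can skip whole subtrees
-- whose cached set does not mention x.
mentions :: String -> Expr -> Bool
mentions x e = Set.member x (syms e)
```

A rewriter would consult mentions before descending into a subterm,
which is exactly the culling described in the message above.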
In reply to this post by Lennart Augustsson
This will be a long sermon. Sorry.

Lennart Augustsson writes:
> Why do you seem so in awe of Mathematica? It's just another language
> with a good set of libraries. Claims that it is the best, fastest,
> etc. come from Wolfram advertising, no doubt. :)

All this discussion began to degenerate a bit (I don't know why, but it
happens almost always when people begin to speak about Mathematica in a
context far from it... There is, it seems, some Original Sin in this
business, but most of you are too young to remember the well known
Wolfram Controversy when SMP transmuted into Mathematica...)

Anyway... Mathematica made its career not as a *language*, and not
immediately as a set of libraries, but as an *integrated package*
capable of doing SYMBOLIC mathematics, with a very decent user
interfacing and graphics a bit better than its competitors. The
conditions of its career were far from obvious. The World had many
symbolic math packages: Reduce, Macsyma, Schoonschip (beloved by
high-energy physicists), Maple, Scratchpad2/Axiom, later MuSIMP/MuMATH
for small platforms, etc. The group of Wolfram knew all that; they knew
that in order to implement something reasonable, one has to fulfil
several conditions.

* The algebraic expressions must have a sound design; there must be a
sufficiently universal, yet efficient representation of math formulae.
For the polynomial arithmetic this is trivial - it is one of my standard
Haskell exercises at the undergraduate level. The symbolic
differentiation as well. But already the polynomial factorization may be
a mess, and requires a good deal of algorithmic knowledge. I am
reluctant to believe that anybody implemented this in 4 days... Anybody
tried Zassenhaus? Not *too* complicated, implemented in Pari and
elsewhere, but quite elaborate.

For general functors the *simplification* issue is not decidable. You
can't assess a given representation as "the best" formula with a given
semantics. Again, the simplifier/evaluator is a complicated part of the
package, not something you can do in a few days. Please have a look at
the internal structure of DoCon by Sergei Mechveliani; he did a lot of
work in Haskell, and the story is still incomplete. (Let's omit the real
mess, for example the Risch symbolic integration algorithms, efficient
Gröbner bases, etc.)

* First symbolic packages treated *first* the symbolic expressions as
something to be evaluated/simplified. One sees that Maple has been built
on this principle. Mathematica changed the perspective a bit, along -
perhaps - the same lines as Schoonschip, where the fundamental stuff was
*rewriting/transformations*. So, Mathematica since the beginning was
equipped with a very powerful pattern-matcher/substitution contraption.
For the sake of efficiency it was less powerful than a general unifier,
but it was really nice (and it existed already in SMP, before the birth
of Mathematica). Now, again, somebody would do that in 4 days?? The
semantic pattern-matcher within an algebraic package is worlds apart
from the syntactic/structural pattern-matcher of Haskell. This helped a
lot to popularize Mathematica, and has been shamelessly abused in the
advertising, where our friends used to say "we DO MATHEMATICS with
computers". Non-sense, of course...

* All the numerical, standard stuff, the interface between the symbolic
and the numerical functions, with plots 2D/3D, etc. Too often people
speak about that, comparing, say, Matlab with Mathematica (while Matlab
has no symbolics, although, being a decent object-oriented language, it
has tools which permitted the construction of symbolic toolboxes, the
linking of the Maple library, etc.). Here the Mathematica team did a
serious, thorough job, trying to adapt the richness of this layer to
many branches of science and engineering. It was mainly a compilation
process, they hardly added anything new, but they made a coherent,
useful library. Won't repeat it in 4 days, or even in 4 months.

=====================================

Is there any sense in "making Mathematica with Haskell"? As a package,
most certainly no; too much to implement, and for what? In order to have
another tool of the kind which already exists?

Sergei did a feasibility study, and worked hard on the interplay between
mathematical structures and the Haskell type system. This, surely, *IS*
a useful work. And it should continue. We (in the generic meaning of the
True Believers in the Functional Church) can implement other things, for
example some formal mathematics dealing with logic, or with the category
theory, or with computational geometry, or with (my dream) the *true*
implementation of quantum calculi.

Knock, knock! Wake up, the sermon is over.

Jerzy Karczmarczuk
In reply to this post by Jon Harrop
Jon Harrop wrote:
> On Wednesday 30 May 2007 22:15:55 Andrew Coppin wrote:
>> Note that (as I understand it) GHC implements Haskell's Integer type
>> using the GMP. And for some reason or other, they want to remove this
>> feature...
>
> Arbitrary precision integers are quite a performance burden and they
> are rarely used. I would not expect a language that is trying to be
> efficient to impose arbitrary precision integers (or floats).

In Haskell, Int gives you the standard signed, fixed size integer for
your machine, and Integer gives arbitrary precision integers. Int8,
Int16, ... provide signed ints of a known size, and Word8, Word16 give
the unsigned. They are all instances of Num, so integer literals will be
whatever type is needed.
Jon Harrop wrote:
> Arbitrary precision integers are quite a performance burden and they
> are rarely used. I would not expect a language that is trying to be
> efficient to impose arbitrary precision integers (or floats).

Apparently you have never looked inside a computer *algebra* system, one
that does *exact* computations (i.e. on polynomials with exact
coefficients, not floats/doubles). Arbitrary precision integers are not
a 'burden', they are an absolute necessity. Algorithms on polynomials
will almost inevitably produce 'coefficient growth'. Even things as
simple as sub-resultant computations (for computing extended GCDs) have
this problem. And this is not a fluke or a problem with the state of the
art; there are known cases where this is inevitable.

Like Jerzy, I wonder why I get suckered in to these conversations. I
guess we both have this silly need to set the record straight.

Jacques
In reply to this post by jerzy.karczmarczuk
On Thursday 31 May 2007 11:39:14 [hidden email] wrote:
> ...
> Mathematica changed a bit the perspective, along - perhaps - the same
> lines as Schoonschip, where the fundamental stuff was *rewriting/
> transformations*. So, Mathematica since the beginning was equipped with
> a very powerful pattern-matcher/substitution contraption. For the sake
> of efficiency it was less powerful than a general unifier, but it was
> really nice (and it existed already in SMP, before the birth of
> Mathematica). Now, again, somebody would do that in 4 days??
> The semantic pattern-matcher within an algebraic package is worlds
> apart from the syntactic/structural pattern-matcher of Haskell.

Can you elaborate on this? I would imagine that the pattern matcher in a
term-level Haskell interpreter would be quite similar to one from a toy
Mathematica-like rewriter.

Also, what aspects of this do you think would take more than 4 days?

--
Dr Jon D Harrop, Flying Frog Consultancy Ltd.
OCaml for Scientists
http://www.ffconsultancy.com/products/ocaml_for_scientists/?e
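For what it's worth, a purely structural first-order matcher of the kind
being discussed really is tiny. The sketch below is hypothetical code, and
it deliberately leaves out everything that makes Mathematica's matcher
"semantic": Flat/Orderless attributes, sequence patterns such as x__,
defaults and conditional patterns -- which is presumably where the extra
days (or weeks) go.

```haskell
import Control.Monad (foldM)
import qualified Data.Map as M

-- A tiny first-order term language: a head applied to arguments,
-- or a pattern variable (the analogue of Mathematica's x_).
data Term
  = Sym String [Term]   -- f[a, b, ...]; a leaf is Sym "a" []
  | Var String          -- pattern variable
  deriving (Eq, Show)

type Subst = M.Map String Term

-- Purely syntactic matching: the subject must have exactly the
-- shape of the pattern, with each Var bound consistently.
match :: Term -> Term -> Maybe Subst
match pat subj = go pat subj M.empty
  where
    go (Var v) t s = case M.lookup v s of
      Nothing -> Just (M.insert v t s)
      Just t' -> if t' == t then Just s else Nothing
    go (Sym f ps) (Sym g ts) s
      | f == g && length ps == length ts =
          foldM (\s' (p, t) -> go p t s') s (zip ps ts)
    go _ _ _ = Nothing

-- Instantiate a pattern under a substitution (the rewrite step).
subst :: Subst -> Term -> Term
subst s (Var v)    = M.findWithDefault (Var v) v s
subst s (Sym f ts) = Sym f (map (subst s) ts)
```

A rule like Plus[x_, x_] -> Times[2, x] is then: match the left-hand side,
`subst` into the right-hand side. Matching Plus[a, a] succeeds with x bound
to a; Plus[a, b] fails, since x_ must bind consistently -- and unlike
Mathematica, nothing here knows that Plus is Orderless.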
In reply to this post by Lennart Augustsson
Lennart Augustsson wrote:
> Why do you seem so in awe of Mathematica?

Oh, well, I guess it is only the most powerful maths software ever
written... no biggie.

> It's just another language with a good set of libraries. Claims that
> it is the best, fastest, etc. come from Wolfram advertising, no doubt.

:)

The claim that it is the fastest clearly doesn't hold (much to my
surprise). The claim that it is the most powerful, well... I have yet to
see anything that comes close to the symbolic power of Mathematica. Let's
face it, the thing costs nearly ten grand for a reason...
In reply to this post by Jon Harrop
Jon Harrop wrote:
> If you write a simple, numerically-intensive program that runs in the
> Mathematica rewriter then its performance is about 100-1,000x slower than
> that of a native-code compiled language like Haskell. Mathematica is often
> 30x slower than interpreted OCaml bytecode.

Is this before or after compiling the Mathematica code?

> Incidentally, when I try to recompile with optimizations turned on, GHC
> refuses to work:
>
>   $ ghc htrace.hs -o htrace
>   $ ghc -O2 htrace.hs -o htrace
>   compilation IS NOT required
>
> I must delete the target or edit the source to get it to recompile. I
> assume this is a known bug?

This one surprised me... I'm pretty sure I tried recompiling with -O2 and
it recompiled everything. Maybe I imagined it?

> Right, but that is just calling an internal function that is written in C.
> Provided you only ever call a few library functions, performance will be
> excellent in Mathematica. But when you cannot phrase your program in terms
> of the built-in library functions, performance is terrible, and this is
> when everyone reaches for a more efficient tool.

I don't know; I thought one of the big advantages of Mathematica was
supposed to be that it can transform problems into the most efficiently
solvable form, select between multiple algorithms, etc.

> To me, performance is way down on the list of things that make Mathematica
> great. Integrated graphics is probably the main benefit. I mostly use MMA
> as a glorified graph plotter.

And I mostly use it to do insane things like give me parametric solutions
to giant polynomials and so forth... ;-)