Re: matrix computations based on the GSL


Re: matrix computations based on the GSL

Frederik Eaton
Hi Alberto,

I'm sorry if this has been discussed before...

I'm reading your paper, at one point it says (re. the PCA example):
"Octave achieves the same result, slightly faster. (In this experiment
we have not used optimized BLAS libraries which can improve efficiency
of the GSL)"

That seems to imply that there is a way to use optimized BLAS
libraries? How can I do that?

Also, in my experiments (with matrix inversion) it seems,
subjectively, that Octave is about 5 or so times faster for operations
on large matrices. Presumably you've tested this as well, do you have
any comparison results?

Thanks,

Frederik

--
http://ofb.net/~frederik/
_______________________________________________
Haskell-Cafe mailing list
[hidden email]
http://www.haskell.org/mailman/listinfo/haskell-cafe

Re: matrix computations based on the GSL

Alberto Ruiz
Hi Frederik,

I refer to the ATLAS library:

http://math-atlas.sourceforge.net/

Some versions of Octave use it. I have not yet linked the GSL library with it;
you must compile ATLAS yourself to take advantage of the optimizations for your
architecture, but I think it should be easy. It is on my TODO list... :)

Curiously, I am just now working on a new (and hopefully improved)
version of the GSL wrappers. A first, preliminary version of the
documentation can be found at:

http://dis.um.es/~alberto/GSLHaskell/doc/

The code will be available in two or three days in a darcs repository.

I have simplified the wrapper infrastructure and the user interface. We can now
work freely with both real and complex vectors and matrices, still with
static type checking. There are also some bug fixes (the eigensystem
wrapper destroyed its argument!).

I made some run-time comparisons, precisely on the PCA example with the big
matrix. I don't remember the exact difference, but I think that 5 times
faster is too much... I will check it as soon as possible. Of course our goal
is to have something like a functional "Octave" with the same performance
(and a much nicer language :)

Many thanks for your message!

Alberto


Re: matrix computations based on the GSL

Alberto Ruiz
On Thursday 16 March 2006 18:13, Frederik Eaton wrote:

> Also, in my experiments (with matrix inversion) it seems,
> subjectively, that Octave is about 5 or so times faster for operations
> on large matrices. Presumably you've tested this as well, do you have
> any comparison results?
>

Frederik, you are right: on my machine Octave is at least 3x faster than GSL
(v1.5). Too much, especially in a simple matrix multiplication, and I don't
know why. See below the times measured on the C side for the PCA example.

I will look into this...

Alberto

gsl 1.5
GHC> main
Loading package haskell98-1.0 ... linking ... done.
GSL Wrapper submatrix: 0 s
GSL Wrapper constant: 0 s
GSL Wrapper multiplyR (gsl_blas_dgemm): 0 s
GSL Wrapper vector_scale: 0 s
GSL Wrapper vector_scale: 1 s
GSL Wrapper gsl_vector_add: 0 s
GSL Wrapper trans: 0 s
GSL Wrapper multiplyR (gsl_blas_dgemm): 36 s   <---- (784x5000) x (5000 x 784)
GSL Wrapper vector_scale: 0 s
GSL Wrapper trans: 0 s
GSL Wrapper vector_scale: 0 s
GSL Wrapper gsl_vector_add: 0 s
GSL Wrapper toScalar: 0 s
GSL Wrapper eigensystem: 11 s    <---------- (784x784)
[337829.65625394206,253504.7867502207, etc.
(97.95 secs, 13096315340 bytes) (includes file loading, to be optimized)

GNU Octave, version 2.1.57 (i686-pc-linux-gnu).
octave:6> testmnist
x - repmat(mean(x),rows(x),1)
0.46144
(xc'*xc)/rows(x)
11.623            <--------------
eig
4.3873            <--------------
   3.3776e+05
   2.5345e+05
   2.0975e+05
   etc.

----------octave------------------
load mnist.txt      

x = mnist(:,1:784);          
d = mnist(:,785);


t0=time();
xc = x - repmat(mean(x),rows(x),1);
disp("x - repmat(mean(x),rows(x),1)");
disp(time()-t0)

t0=time();
mc = (xc'*xc)/rows(x);
disp("(xc'*xc)/rows(x)");
disp(time()-t0)

t0=time();
[v,l]=eig(mc);
disp("eig");
disp(time()-t0)

disp(flipud(diag(l))(1:10));

------------- GSL + Haskell ---------

module Main where

import GSL
import GSL.Util

mean x = sumCols x ./. fromIntegral (rows x)

center x = x |-| fromRows (replicate (rows x) (mean x))

cov x = (trans xc <> xc) ./. n
    where n  = fromIntegral (rows x - 1)
          xc = center x

m ./. r = m <> (1/r :: Double)

test :: M -> [Double]
test x = take 10 (toList lambdas) where
    mc = cov x
    (lambdas, _)  = eig mc
   
main = do
    m <- fromFile "mnist.txt"  
    let x = subMatrix 0 (rows m - 1) 0 (cols m - 2) m
    print (test x)

Re: matrix computations based on the GSL

Alberto Ruiz
The ATLAS library (Linux_P4SSE2) seems to be 9x faster (!) than gslcblas at
matrix multiplication:

ghc-6.4.1 --make examples/pca.hs GSL/debuggslaux.o \
 -L$(LIBATLAS) -lcblas  /usr/lib/libgsl.a -latlas

$ time ./a.out
GSL Wrapper gsl_matrix_fscanf: 2 s
GSL Wrapper submatrix: 0 s
GSL Wrapper constant: 0 s
GSL Wrapper constant: 0 s
GSL Wrapper multiplyR (gsl_blas_dgemm): 1 s
GSL Wrapper vector_scale: 0 s
GSL Wrapper multiplyR (gsl_blas_dgemm): 0 s
GSL Wrapper vector_scale: 0 s
GSL Wrapper gsl_vector_add: 0 s
GSL Wrapper trans: 0 s
GSL Wrapper multiplyR (gsl_blas_dgemm): 4 s  <---(the atlas version)
GSL Wrapper vector_scale: 0 s
GSL Wrapper trans: 0 s
GSL Wrapper vector_scale: 0 s
GSL Wrapper gsl_vector_add: 0 s
GSL Wrapper toScalar: 0 s
GSL Wrapper eigensystem: 11 s
[337829.6562539419,253504.78675022084,209790.38118221355, etc....
real    0m17.630s
user    0m17.255s
sys     0m0.179s

So GSL + ATLAS + Haskell is not that bad... Also, the 5000x785 matrix can now be
loaded in just 2 seconds, using a wrapper for gsl_matrix_fscanf. We could
also try to include some LAPACK or ATLAS-LAPACK routines.

Precompiled ATLAS libraries for different processors can be downloaded from

https://sourceforge.net/project/showfiles.php?group_id=23725

However, I have not yet been able to link them in interactive mode.

The latest version of the library (extremely provisional; I am currently
working actively on it) can be obtained from:

darcs get http://dis.um.es/~alberto/GSLHaskell

Of course, any contribution or suggestion will be greatly appreciated,
including code samples that "should work" with this kind of library, to be
used as examples or tests.

Best regards,

Alberto


RE: recursive matrix computations

Andrew U. Frank
It appears possible to define matrix operations like LU decomposition, SVD,
inverse, etc. in a recursive fashion (see transpose in the Prelude).

Is this possible? Does anybody have Haskell code which does this? (Not asking
for high performance here, more for instructive value.)

thank you!

andrew


Re: recursive matrix computations

Wilhelm B. Kloke
Andrew U. Frank <[hidden email]> wrote:
>  
> it appears possible to define matrix operations like LU-decomposition, SVD,
> inverse etc. in a recursive fashion (see transpose in the prelude).
>
> is this possible? has anybody code in haskell which does this (not asking
> for high performance here, more instructive value).

I would like to see this, too. I myself did some experiments trying
to implement the transformation to upper triangular form by rational Givens
transforms. The following code fragment does this recursively:

rationalGivens :: (Fractional a, Eq a) => a -> [a] -> [[a]] -> [[a]]
-- marginal cases first
rationalGivens qq [x] ((pp:[]):[]) = ( pp + qq * x * x : [] ) : []
rationalGivens _ [] [[]] = [[]]
rationalGivens qq xy pmat | qq == 0 = pmat
rationalGivens qq (x:y) (ppr:pmat) | x == 0 = ppr : rationalGivens qq y pmat
rationalGivens qq (x:y) [[]] = ( qq * x * x : (1/x) .* y ) : []
-- main case
rationalGivens qq (x:y) ((pp:r):pmat) = let
        pp' = pp + qq * x * x
        qq' = pp * qq / pp'
        s   = x * qq / pp'
        y'  = y .-. x .* r
        r'  = r .+. s .* y'
        in ( pp' : r' ) : rationalGivens qq' y' pmat

-- elementwise helpers, needed to make the fragment self-contained
infixl 7 .*
(.*) :: Num a => a -> [a] -> [a]
s .* v = map (s *) v

infixl 6 .+., .-.
(.+.), (.-.) :: Num a => [a] -> [a] -> [a]
(.+.) = zipWith (+)
(.-.) = zipWith (-)

-- rationalGivens 1 [2,1] [[1,0],[1]] == [[5.0,0.4],[1.2]]

Matrices are represented as lists of lists in this code; qq is a scale factor
(starting with 1) and xy is a row vector to be added to the upper triangular
matrix pmat.

The diagonal elements of pmat contain the square of the scale factor of
the row, i.e. the matrix [[5.0,0.4],[1.2]] is to be interpreted as the product

        (sqrt(5)      0   ) ( 1  0.4 )
        (   0    sqrt(1.2)) ( 0   1  )
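This interpretation can be sanity-checked numerically. Assuming the usual
square-root-free Givens invariant (that pmat encodes an R whose R^T R
accumulates the weighted outer products of the rows fed in), starting from the
identity and adding the row [2,1] with weight 1 should give
R^T R == I + [2,1]^T [2,1] == [[5,2],[2,2]]. A small sketch (matMul, close,
and r are hypothetical names introduced here, not part of the posted code):

```haskell
import Data.List (transpose)

-- R encoded by pmat [[5.0,0.4],[1.2]]: diag(sqrt 5, sqrt 1.2) * [[1,0.4],[0,1]]
r :: [[Double]]
r = [ [sqrt 5, sqrt 5 * 0.4]
    , [0,      sqrt 1.2    ] ]

matMul :: Num a => [[a]] -> [[a]] -> [[a]]
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

-- identity plus the outer product of [2,1] with itself
expected :: [[Double]]
expected = [[5,2],[2,2]]

-- approximate matrix equality, to absorb floating-point rounding
close :: [[Double]] -> [[Double]] -> Bool
close x y = and (zipWith (\u v -> abs (u - v) < 1e-9) (concat x) (concat y))

main :: IO ()
main = print (close (matMul (transpose r) r) expected)  -- prints True
```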

--
Dipl.-Math. Wilhelm Bernhard Kloke
Institut fuer Arbeitsphysiologie an der Universitaet Dortmund
Ardeystrasse 67, D-44139 Dortmund, Tel. 0231-1084-257


RE: recursive matrix computations

Henning Thielemann

On Wed, 19 Apr 2006, Andrew U. Frank wrote:

> it appears possible to define matrix operations like LU-decomposition, SVD,
> inverse etc. in a recursive fashion (see transpose in the prelude).
>
> is this possible? has anybody code in haskell which does this (not asking
> for high performance here, more instructive value).

There is some code by Jan Skibinski called 'indexless linear algebra':
  http://www.haskell.org/haskellwiki/Libraries_and_tools/Mathematics
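For a minimal taste of that index-free, recursive style (a sketch of my own,
not taken from Skibinski's library; transpose' and matMul are names introduced
here), the Prelude-style transpose and a naive matrix product over lists of
lists look like this:

```haskell
-- Recursive, Prelude-style transpose over lists of lists
transpose' :: [[a]] -> [[a]]
transpose' [] = []
transpose' ([] : xss) = transpose' xss
transpose' ((x:xs) : xss) =
    (x : [h | (h:_) <- xss]) : transpose' (xs : [t | (_:t) <- xss])

-- Naive matrix product written in the same recursive, list-based spirit
matMul :: Num a => [[a]] -> [[a]] -> [[a]]
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose' b ] | row <- a ]
```

For example, transpose' [[1,2],[3,4]] gives [[1,3],[2,4]], and
matMul [[1,2],[3,4]] [[5,6],[7,8]] gives [[19,22],[43,50]].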

Re: recursive matrix computations

Brian Boutel

On 19/04/2006, at 10:32 PM, Andrew U. Frank wrote:

> it appears possible to define matrix operations like LU-decomposition, SVD,
> inverse etc. in a recursive fashion (see transpose in the prelude).
>
> is this possible? has anybody code in haskell which does this (not asking
> for high performance here, more instructive value).
>
> thank you!


I recall Paul Hudak coming up with a version of LU decomposition in
Haskell using Doolittle's method.
This was in response to a challenge by (I think) Arvind, at a meeting
in Mystic CT, in 1989.
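For the curious, a Doolittle-style LU factorisation can indeed be written
recursively over lists of lists. This is a sketch of the general idea, not
Hudak's code (the names lu and matMul are introduced here); it does no
pivoting, so it fails whenever a leading pivot is zero:

```haskell
import Data.List (transpose)

-- Recursive Doolittle LU: A = L*U with L unit lower triangular and U upper
-- triangular, peeling off the first row and column and recursing on the
-- rank-1-updated trailing block (the Schur complement).
lu :: Fractional a => [[a]] -> ([[a]], [[a]])
lu [] = ([], [])
lu ((p:us) : rest) = (lRows, uRows)
  where
    ms       = [ v / p | (v:_) <- rest ]             -- multipliers below pivot
    schur    = [ zipWith (\u x -> x - m * u) us vs   -- trailing block update
               | (m, _:vs) <- zip ms rest ]
    (l', u') = lu schur
    lRows    = (1 : map (const 0) us) : zipWith (:) ms l'
    uRows    = (p : us) : map (0 :) u'

matMul :: Num a => [[a]] -> [[a]] -> [[a]]
matMul a b = [ [ sum (zipWith (*) row col) | col <- transpose b ] | row <- a ]

main :: IO ()
main = do
    let a      = [[4,3],[6,3]] :: [[Double]]
        (l, u) = lu a
    print (l, u)             -- ([[1.0,0.0],[1.5,1.0]],[[4.0,3.0],[0.0,-1.5]])
    print (matMul l u == a)  -- True
```

SVD is harder to express this way (it is intrinsically iterative), but matrix
inverse follows the same pattern via forward and back substitution on L and U.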

--brian


Brian Boutel

Wellington
New Zealand
