slaSvd


void slaSvd ( int m, int n, int mp, int np,
              double *a, double *w, double *v, double *work,
              int *jstat )



  Singular value decomposition.

  (double precision)

  This routine expresses a given matrix a as the product of
  three matrices u, w and vt:

     a = u x w x vt

  where:


     a   is any m (rows) x n (columns) matrix, where m >= n
     u   is an m x n column-orthogonal matrix
     w   is an n x n diagonal matrix with w(i,i) >= 0
     vt  is the transpose of an n x n orthogonal matrix

     Note that m and n, above, are the logical dimensions of the
     matrices and vectors concerned, which can be located in
     arrays of larger physical dimensions, given by mp and np.


  Given:
     m,n    int            numbers of rows and columns in matrix a
     mp,np  int            physical dimensions of the array containing a
     a      double[mp][np] array containing m x n matrix a


  Returned:
     a      double[mp][np] array containing m x n column-orthogonal matrix u
     w      double[n]      n x n diagonal matrix w (diagonal elements only)
     v      double[np][np] array containing n x n orthogonal matrix v
     work   double[n]      workspace
     *jstat int             0 = OK
                           -1 = the a array is the wrong shape
                           >0 = 1 + index of w for which convergence failed

     (n.b. v contains matrix v, not the transpose of matrix v)


     The algorithm is an adaptation of the routine SVD in the EISPACK
     library (Garbow et al 1977, Eispack guide extension, Springer
     Verlag), which is a Fortran 66 implementation of the Algol
     routine SVD of Wilkinson & Reinsch 1971 (Handbook for Automatic
     Computation, vol 2, Ed Bauer et al, Springer Verlag).  For the
     non-specialist, probably the clearest general account of the use
     of SVD in least squares problems is given in Numerical Recipes
     (Press et al 1986, Cambridge University Press).

  From slamac.h:  TRUE, FALSE

  P.T.Wallace   Starlink   22 December 1993