Tags: python, c++, lapack, svd, lapacke

Singular Value Decomposition with LAPACK: problems with big matrices


I am using the C interface of LAPACK to compute the Singular Value Decomposition (SVD) of a matrix, via the routine dgesvd_.

I have written a simple C++ program that creates a random matrix (with M rows and N columns) and computes its SVD. The code is the following:

#include <iostream>
#include <random>
#include <cstdlib>   // malloc, free, exit
#include "lapacke.h"

#define M 150
#define N 3
#define MEAN 0
#define STD 0.25


int main(int argc, char *argv[])
{

    std::default_random_engine generator;
    std::normal_distribution<double> distribution(MEAN, STD);

    int m = M, n = N, lda = M, ldu = M, ldvt = N, info, lwork;
    double wkopt;
    double* work;
    // Output buffers: singular values, left/right singular vectors and the
    // input matrix, declared as variable-length arrays on the stack.
    double s[n], u[ldu * m], vt[ldvt * n], a[lda * n];

    // Fill the input matrix with Gaussian noise.
    for(unsigned int k = 0; k < (lda * n); k++){
        a[k] = distribution(generator);
    }

    // Workspace query: with lwork = -1, dgesvd_ only reports the optimal
    // workspace size in wkopt instead of computing the SVD.
    lwork = -1;
    dgesvd_("All", "All", &m, &n, a, &lda, s, u, &ldu, vt, &ldvt, &wkopt, &lwork, &info);
    lwork = (int)wkopt;
    work = (double*) malloc(lwork * sizeof(double));
    // Actual computation of the SVD.
    dgesvd_("All", "All", &m, &n, a, &lda, s, u, &ldu, vt, &ldvt, work, &lwork, &info);

    if(info > 0){
        std::cerr << "The algorithm computing SVD failed to converge\n";
        exit(1);
    }

    free(work);

    std::cout<<"Singular values:\n";
    for(unsigned int k = 0; k < n; k++){
        std::cout<<s[k]<<" ";
    }
    std::cout<<"\n";
    return 0;
}

For a matrix with 150 rows and 3 columns, the SVD is computed correctly. However, when the matrix has more rows (for example, 1500 rows and 3 columns), the compiled program crashes with Segmentation fault (core dumped).

I tried to do something similar in a Python script:

import numpy as np

M = 1500
N = 3
MEAN = np.zeros(N)
STD = 0.25

points = np.random.normal(MEAN, STD, (M, N))
result = np.linalg.svd(points)

In Python, the singular values seem to be computed correctly without any error, even though np.linalg.svd is backed by the LAPACK routine dgesdd_ (I also tried dgesdd_ directly in C, and the same error is raised).

Does anyone know why this error is happening? Any help to solve the issue would be appreciated. Thanks.

PS: I am using version 3.6.0 of LAPACKE, and Ubuntu 16.04 LTS.


Solution

  • The arrays s, u, vt and a are variable-length arrays that live on the stack. With 1500 rows, u alone holds 1500 × 1500 doubles (about 18 MB), which exceeds the default stack size on Linux (typically 8 MB; check it with ulimit -s), so the program dies of a stack overflow that is reported as a segmentation fault. Allocating the matrices on the heap instead solves the problem:

    double* s = new double[n];
    double* u = new double[ldu * m];
    double* vt = new double[ldvt * n];
    double* a = new double[lda * n];
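
For completeness, here is a minimal sketch of the corrected program. It uses std::vector in place of raw new[] (my own substitution, not part of the original answer), so the buffers are heap-allocated and released automatically when they go out of scope:

#include <iostream>
#include <random>
#include <vector>
#include "lapacke.h"

int main()
{
    int m = 1500, n = 3;              // the sizes that crashed the stack-based version
    int lda = m, ldu = m, ldvt = n;

    std::default_random_engine generator;
    std::normal_distribution<double> distribution(0.0, 0.25);

    // std::vector stores its elements on the heap, so large matrices
    // no longer overflow the stack.
    std::vector<double> a(lda * n);   // input matrix (column-major)
    std::vector<double> s(n);         // singular values
    std::vector<double> u(ldu * m);   // left singular vectors (m x m)
    std::vector<double> vt(ldvt * n); // right singular vectors (n x n)

    for (double& x : a) x = distribution(generator);

    // Workspace query: lwork = -1 makes dgesvd_ report the optimal size in wkopt.
    int info, lwork = -1;
    double wkopt;
    dgesvd_("All", "All", &m, &n, a.data(), &lda, s.data(), u.data(), &ldu,
            vt.data(), &ldvt, &wkopt, &lwork, &info);

    lwork = (int)wkopt;
    std::vector<double> work(lwork);

    // Actual computation of the SVD.
    dgesvd_("All", "All", &m, &n, a.data(), &lda, s.data(), u.data(), &ldu,
            vt.data(), &ldvt, work.data(), &lwork, &info);

    if (info > 0) {
        std::cerr << "The algorithm computing SVD failed to converge\n";
        return 1;
    }

    std::cout << "Singular values:\n";
    for (int k = 0; k < n; k++) std::cout << s[k] << " ";
    std::cout << "\n";
    return 0;
}

If the plain new[] version from the answer is kept instead, remember to release the buffers with delete[]. Alternatively, the high-level wrapper LAPACKE_dgesvd from lapacke.h allocates its workspace internally, which removes the manual lwork = -1 query step.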