I need to invert large matrices, and I would like to modify my current LAPACKE routine in order to exploit the power of an NVIDIA GPU.
Indeed, my LAPACKE routine works well for relatively small matrices but not for large ones.
Below is the implementation of this LAPACKE routine:
#include <mkl.h>
#include <vector>

using std::vector;

// Passing matrices by reference
void matrix_inverse_lapack(vector<vector<double>> const &F_matrix, vector<vector<double>> &F_output) {
    // Size of F_matrix
    int N = F_matrix.size();
    int *IPIV = new int[N];
    // Flattened (row-major) copy of F_matrix
    double *arr = new double[N * N];
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            arr[i * N + j] = F_matrix[i][j];
        }
    }
    // LAPACKE routines: LU factorization, then inverse from the LU factors
    int info1 = LAPACKE_dgetrf(LAPACK_ROW_MAJOR, N, N, arr, N, IPIV);
    int info2 = LAPACKE_dgetri(LAPACK_ROW_MAJOR, N, arr, N, IPIV);
    // Copy the inverse back into F_output
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            F_output[i][j] = arr[i * N + j];
        }
    }
    delete[] IPIV;
    delete[] arr;
}
which is called like this to invert the CO_CL matrix:
matrix_inverse_lapack(CO_CL, CO_CL);
with CO_CL defined by:
vector<vector<double>> CO_CL(lsize*(2*Dim_x+Dim_y), vector<double>(lsize*(2*Dim_x+Dim_y), 0));
How can I use MAGMA on an NVIDIA GPU to invert the matrix in my case, instead of using LAPACKE?
UPDATE 1: I have downloaded magma-2.6.1, and as a first step I have to modify the original Makefile:
CXX = icpc -std=c++11 -O3 -xHost
CXXFLAGS = -Wall -c -I${MKLROOT}/include -I/opt/intel/oneapi/compiler/latest/linux/compiler/include -qopenmp -qmkl=parallel
LDFLAGS = -L${MKLROOT}/lib -Wl,-rpath,${MKLROOT}/lib -Wl,-rpath,${MKLROOT}/../compiler/lib -qopenmp -qmkl
SOURCES = main_intel.cpp XSAF_C_intel.cpp
EXECUTABLE = main_intel.exe
I didn't see the MKL headers in magma-2.6.1: are nvcc and MKL compatible?
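As a sketch of the additions I would expect to need for compiling and linking against MAGMA (MAGMADIR and CUDADIR below are placeholder paths for the local installs, and the exact list of CUDA libraries depends on how MAGMA was built):
MAGMADIR  = /usr/local/magma    # assumed MAGMA install prefix
CUDADIR   = /usr/local/cuda     # assumed CUDA install prefix
CXXFLAGS += -I$(MAGMADIR)/include -I$(CUDADIR)/include
LDFLAGS  += -L$(MAGMADIR)/lib -lmagma -L$(CUDADIR)/lib64 -lcudart -lcublas -lcusparse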
Try using magma_sgetri_gpu - inverse matrix in single precision, GPU interface.
This function computes in single precision the inverse A^−1 of an m × m matrix A.
magma_ssetmatrix(m, m, a, m, d_a, m, queue);              // copy a -> d_a
magmablas_slacpy(MagmaFull, m, m, d_a, m, d_r, m, queue); // d_a -> d_r
// find the inverse matrix: d_a*X = I using the LU factorization
// with partial pivoting and row interchanges computed by
// magma_sgetrf_gpu; row i is interchanged with row piv(i);
// d_a - m x m matrix; d_a is overwritten by the inverse
gpu_time = magma_sync_wtime(NULL);
magma_sgetrf_gpu(m, m, d_a, m, piv, &info);
magma_sgetri_gpu(m, d_a, m, piv, dwork, ldwork, &info);
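For the double-precision case in your routine, a minimal sketch of how matrix_inverse_lapack might be ported to MAGMA's GPU interface (magma_dgetrf_gpu / magma_dgetri_gpu) could look like this. The function name matrix_inverse_magma is hypothetical, error checking is omitted, and it assumes magma_init() has been called once at program start and magma_finalize() before exit:
#include <magma_v2.h>
#include <vector>

using std::vector;

// Hypothetical MAGMA-based replacement for matrix_inverse_lapack (double precision).
void matrix_inverse_magma(vector<vector<double>> const &F_matrix, vector<vector<double>> &F_output) {
    magma_int_t N = (magma_int_t) F_matrix.size();
    magma_int_t info;

    // Flatten the input. MAGMA expects column-major storage, but for a plain
    // inversion this is harmless: uploading the row-major buffer as column-major
    // means the GPU inverts A^T, and (A^T)^-1 = (A^-1)^T, so reading the result
    // back with the same convention yields A^-1 in row-major order.
    double *h_a = new double[(size_t)N * N];
    for (magma_int_t i = 0; i < N; i++)
        for (magma_int_t j = 0; j < N; j++)
            h_a[i * N + j] = F_matrix[i][j];

    magma_int_t *ipiv = new magma_int_t[N];

    magma_queue_t queue = nullptr;
    magma_queue_create(0, &queue);                   // queue on device 0 (assumption)

    // Device buffers: the matrix itself and the getri workspace.
    double *d_a = nullptr, *d_work = nullptr;
    magma_int_t lwork = N * magma_get_dgetri_nb(N);
    magma_dmalloc(&d_a, (size_t)N * N);
    magma_dmalloc(&d_work, lwork);

    magma_dsetmatrix(N, N, h_a, N, d_a, N, queue);   // host -> device

    magma_dgetrf_gpu(N, N, d_a, N, ipiv, &info);     // LU factorization on the GPU
    magma_dgetri_gpu(N, d_a, N, ipiv, d_work, lwork, &info); // inverse from the LU factors

    magma_dgetmatrix(N, N, d_a, N, h_a, N, queue);   // device -> host

    for (magma_int_t i = 0; i < N; i++)
        for (magma_int_t j = 0; j < N; j++)
            F_output[i][j] = h_a[i * N + j];

    magma_free(d_a);
    magma_free(d_work);
    magma_queue_destroy(queue);
    delete[] ipiv;
    delete[] h_a;
}
It can then be called the same way as the LAPACKE version, e.g. matrix_inverse_magma(CO_CL, CO_CL);.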
The official NVIDIA documentation also contains quite a lot of examples, so you may take a look at them as well: