Sample QR decomposition code (Python):
import numpy as np
# Define the 2D array
data = np.array([
[12, -51, 4],
[6, 167, -68],
[-4, 24, -41],
[-1, 1, 0],
[2, 0, 3]
], dtype=float) # specify data type as 'float' to match C++ double
# np.linalg.qr defaults to mode='reduced': q is 5x3, r is 3x3
q, r = np.linalg.qr(data)
print(q)
print(r)
Sample QR decomposition code (C++, Eigen):
#include <iostream>
#include <Eigen/Dense>

// Classical Gram-Schmidt: A (m x n) = Q (m x n, orthonormal columns) * R (n x n, upper triangular)
void qrDecomposition(const Eigen::MatrixXd& A, Eigen::MatrixXd& Q, Eigen::MatrixXd& R) {
    int nRows = A.rows();
    int nCols = A.cols();
    Q = Eigen::MatrixXd::Zero(nRows, nCols);
    R = Eigen::MatrixXd::Zero(nCols, nCols);
    Eigen::MatrixXd v(nRows, nCols);
    for (int j = 0; j < nCols; j++) {
        v.col(j) = A.col(j);
        // Subtract the projections onto the columns already orthonormalized
        for (int i = 0; i < j; i++) {
            R(i, j) = Q.col(i).dot(A.col(j));
            v.col(j) -= R(i, j) * Q.col(i);
        }
        // Note: the diagonal of R is a vector norm, hence never negative
        R(j, j) = v.col(j).norm();
        Q.col(j) = v.col(j) / R(j, j);
    }
}
int main() {
    int nRows = 5; // Number of rows
    int nCols = 3; // Number of columns
    // Sample data (replace with your own)
    Eigen::MatrixXd A_data(nRows, nCols);
    A_data <<
        12, -51,   4,
         6, 167, -68,
        -4,  24, -41,
        -1,   1,   0,
         2,   0,   3;
    Eigen::MatrixXd Q, R;
    qrDecomposition(A_data, Q, R);
    std::cout << "Matrix Q:\n" << Q << "\n";
    std::cout << "Matrix R:\n" << R << "\n";
    return 0;
}
RESULTS:
Python results:
q
array([[-0.84641474, 0.39129081, -0.34312406],
[-0.42320737, -0.90408727, 0.02927016],
[ 0.28213825, -0.17042055, -0.93285599],
[ 0.07053456, -0.01404065, 0.00109937],
[-0.14106912, 0.01665551, 0.10577161]])
r
array([[ -14.17744688, -20.66662654, 13.4015667 ],
[ 0. , -175.04253925, 70.08030664],
[ 0. , 0. , 35.20154302]])
C++ results:
Matrix Q:
0.846415 -0.391291 -0.343124
0.423207 0.904087 0.0292702
-0.282138 0.170421 -0.932856
-0.0705346 0.0140407 0.00109937
0.141069 -0.0166555 0.105772
Matrix R:
14.1774 20.6666 -13.4016
0 175.043 -70.0803
0 0 35.2015
PROBLEM: As the results above show, the signs of the values differ between the two implementations (and not simply as one uniform flip). Why is that?
You're assuming that numpy uses Gram-Schmidt. The documentation suggests that is not always the case: it mentions Householder, Gram-Schmidt, and least squares. The source code is available here if you want to see what they actually do. Keep in mind that a QR factorization is only unique up to signs anyway: for any diagonal matrix D with +/-1 entries, (Q*D)*(D*R) = Q*R, and Q*D still has orthonormal columns. Gram-Schmidt always produces a non-negative diagonal in R, while Householder reflections need not, which is exactly the per-column sign difference you see (a concrete check follows the output below). You can use eigen to solve the problem directly with its Householder method, and you will see that eigen and numpy return the same values:
#include <iostream>
#include <Eigen/Dense>

int main() {
    Eigen::Matrix<double, 5, 3> data;
    data << 12, -51, 4, 6, 167, -68, -4, 24, -41, -1, 1, 0, 2, 0, 3;
    auto QR = data.householderQr(); // auto may not be efficient here...
    Eigen::MatrixXd Q1 = QR.householderQ(); // full 5x5 Q
    Eigen::MatrixXd R1 = QR.matrixQR().triangularView<Eigen::Upper>(); // 5x3 upper-triangular R
    std::cout << Q1 << "\n\n" << R1 << "\n\n" << Q1 * R1 << std::endl;
    return 0;
}
The output is:
-0.846415 0.391291 -0.343124 0.0661374 -0.0914621
-0.423207 -0.904087 0.0292702 0.0173785 -0.0486104
0.282138 -0.170421 -0.932856 -0.021942 0.143712
0.0705346 -0.0140407 0.00109937 0.997401 0.00429488
-0.141069 0.0166555 0.105772 0.00585613 0.984175

-14.1774 -20.6666 13.4016
0 -175.043 70.0803
0 0 35.2015
0 0 0
0 0 0

12 -51 4
6 167 -68
-4 24 -41
-1 1 4.3715e-16
2 0 3
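Concretely, the sign ambiguity is just a diagonal matrix of +/-1. A minimal sketch of that check (reusing Q1 and R1 from the snippet above; the sign-fixing convention here is only for illustration):

// Build D = diag(+/-1) so that D*R1 has a non-negative diagonal.
// Since D*D = I, (Q1*D)*(D*R1) == Q1*R1, and Q1*D is still orthogonal,
// so both sign choices are equally valid QR factorizations.
Eigen::VectorXd d = Eigen::VectorXd::Ones(5);
for (int j = 0; j < 3; ++j)
    if (R1(j, j) < 0) d(j) = -1.0;
Eigen::MatrixXd Q1s = Q1 * d.asDiagonal();  // flips the matching columns of Q
Eigen::MatrixXd R1s = d.asDiagonal() * R1;  // flips the matching rows of R
std::cout << (Q1s * R1s - Q1 * R1).norm() << std::endl; // ~0: same product

With the diagonal of R forced non-negative, the first three columns of Q1s and the top rows of R1s should match the Gram-Schmidt results from the question up to rounding.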
You'll of course notice that eigen returns the full 5x5 Q matrix, which can be thinned by multiplying with a 5x3 identity matrix (i.e. keeping only the first three columns). If you implement Householder yourself, I think you'll find that it will match numpy in this case.
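For the thinning, the identity-multiplication trick alluded to above works directly on the Householder sequence; a short sketch, again reusing QR and R1 from the snippet above:

// Thin factors: apply the Householder sequence to a 5x3 identity to keep
// only the first three columns of Q, and keep the first three rows of R.
// The shapes then match numpy's default mode='reduced' output.
Eigen::MatrixXd thinQ = QR.householderQ() * Eigen::MatrixXd::Identity(5, 3);
Eigen::MatrixXd thinR = R1.topRows(3);
std::cout << thinQ << "\n\n" << thinR << "\n\n" << thinQ * thinR << std::endl;

thinQ * thinR still reproduces the original 5x3 matrix, because the dropped columns of Q only ever multiply the zero rows of R.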