I have D dense, large row vectors v_1, ..., v_D, and I need a block-diagonal matrix X whose i-th block is v_i, in order to perform matrix products like
X.transpose() * H.selfadjointView<Lower>() * X
That is, X should be sparse, with its i-th row holding v_i in the corresponding columns:
RowVectorXd v1(3), v2(2), v3(4);
v1 << 1, 2, 3;
v2 << 4, 5;
v3 << 6, 7, 8, 9;
SparseMatrix<double> X(3, 9);
// I need X to be
X = 1 2 3 . . . . . .
    . . . 4 5 . . . .
    . . . . . 6 7 8 9
// where . means 0
EDIT: My question is: is it possible to compute this product without actually forming X or copying the v_i's, but simply using references to them? My concern is performance and memory footprint.
I think the solution has something to do with Eigen::Map<...Stride...> but I can't get it.
Many thanks.
I ended up finding an answer to my question. I post it here in case anyone else needs it. For this particular structure of X, we can form
B = X^T * H * X
directly, where X is as in the question and H is symmetric, without explicitly forming X. Since X^T H X is symmetric, we only need to fill its lower triangular part, and then use
B.selfadjointView<Lower>()
wherever B is needed.
Note that X^T H X is equal to H ** W, where W is the lower triangular matrix whose (i,j) block, for i >= j, is v_i^T * v_j:

W = v_1^T * v_1
    v_2^T * v_1   v_2^T * v_2
    v_3^T * v_1   v_3^T * v_2   v_3^T * v_3
    ...
    v_D^T * v_1   v_D^T * v_2   v_D^T * v_3   ...   v_D^T * v_D
and ** denotes a (notational) component-wise product that scales block (i,j) of W by the scalar H(i,j); that is, block (i,j) of B is H(i,j) * v_i^T * v_j. So
B = H ** (v^T * v)