I am calculating the solution of a constrained linear least-squares problem as follows:
lb = zeros(7,1);
ub = ones(7,1);
for i = 1:size(b,2)
x(:,i) = lsqlin(C,b(:,i),[],[],[],[],lb,ub);
end
where C is m x 7 and b is m x n. n is quite large, leading to a slow computation time. Is there any way to speed up this procedure and get rid of the slow for loop? I am using lsqlin instead of pinv or \ because I need to constrain my solution to the bounds 0–1 (lb and ub).
The for loop is not necessarily the reason for any slowness – you're not pre-allocating, and lsqlin is probably printing out a lot of output on each iteration. However, you may be able to speed this up by turning your C matrix into a sparse block-diagonal matrix, C2, with n identical blocks (see here). This solves all n problems in one go. If the new C2 is not sparse, you may use a lot more memory and the computation may take much longer than with the for loop.
n = size(b,2);
C2 = kron(speye(n),C);
b2 = b(:);
lb2 = repmat(lb,n,1); % or zeros(7*n,1);
ub2 = repmat(ub,n,1); % or ones(7*n,1);
opts = optimoptions(@lsqlin,'Algorithm','interior-point','Display','off');
x = lsqlin(C2,b2,[],[],[],[],lb2,ub2,[],opts);
x = reshape(x,7,n); % reshape the stacked 7*n-by-1 solution back to 7-by-n
Using optimoptions, I've specified the algorithm and set 'Display' to 'off' to make sure any outputs and warnings don't slow down the calculations.
On my machine this is 6–10 times faster than using a for loop (with proper pre-allocation and options set). This approach assumes that the sparse C2 matrix, with m*n*7 nonzero elements, can fit in memory. If not, a for-loop-based approach will be the only option (other than writing your own specialized version of lsqlin or taking advantage of any other sparsity in the problem).
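For reference, the improved loop variant mentioned above might look like this – a sketch, reusing the variable names from the question, with pre-allocation and solver output suppressed:

C = ...; b = ...; % as in the question: C is m x 7, b is m x n
lb = zeros(7,1);
ub = ones(7,1);
opts = optimoptions(@lsqlin,'Algorithm','interior-point','Display','off');
n = size(b,2);
x = zeros(7,n); % pre-allocate the solution matrix
for i = 1:n
    x(:,i) = lsqlin(C,b(:,i),[],[],[],[],lb,ub,[],opts);
end

This avoids the growing-array penalty and the per-iteration printing, while keeping memory usage at the size of a single m x 7 problem.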