Let's consider this example code:
rng('default')
% creating fake data
data = randi([-1000 +1000],30,500);
yt = randi([-1000 1000],30,1);
% creating fake missing values
row = randi([1 15],1,500);
col = rand(1,500) < .5;
% inserting the fake missing values
for i = 1:500
    if col(i) == 1
        data(1:row(i),i) = nan;
    end
end
%% here starts my problem
wgts = ones(1,500); % optimal weights need to be binary (only zero or one)
% this would be easy with matrix formulas but I have missing values at the
% beginning of the series
xt = zeros(30,1); % preallocate
for j = 1:30
    xt(j,1) = sum(data(j,:) .* wgts, 2, 'omitnan');
end
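% Note: the loop above is not strictly necessary; sum with 'omitnan'
% accepts the whole matrix at once, so the leading NaNs are no obstacle
% to the matrix form:
% xt = sum(data .* wgts, 2, 'omitnan');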
X = [xt(3:end) xt(2:end-1) xt(1:end-2)];
y = yt(3:end);
% from here I basically need to:
% maximize the adjusted R squared of the regression fitlm(X,y)
% by changing wgts
% subject to each element of wgts being 0 or 1
% and optionally impose sum(wgts,'all') == some number;
% basically I need to select the columns of data with the highest
% explanatory power, omitting missing data
This is relatively easy to implement with Excel Solver, but it can handle only 200 decision variables and it takes a lot of time. Thank you in advance.
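For the binary selection itself, a genetic algorithm is one option: `ga` from the Global Optimization Toolbox accepts integer constraints, and bounding each integer variable in [0,1] makes it binary. Below is a minimal sketch under the assumption that the toolbox is available; the objective wraps `fitlm` and negates the adjusted R squared because `ga` minimizes. The helper `adjR2` is a hypothetical name, not a toolbox function, and with 500 binary variables plus one regression per candidate this will be slow.

% A minimal sketch, assuming the Global Optimization Toolbox is on the path.
nvars  = 500;
objfun = @(w) -adjR2(w, data, yt);  % ga minimizes, so negate adjusted R^2
lb = zeros(1,nvars);
ub = ones(1,nvars);
intcon = 1:nvars;                   % integer + [0,1] bounds -> binary weights
% Optional: force sum(wgts) == k via two inequalities, since ga rejects
% equality constraints when integer variables are present:
% k = 50; A = [ones(1,nvars); -ones(1,nvars)]; b = [k; -k];
wgtsBest = ga(objfun, nvars, [], [], [], [], lb, ub, [], intcon);

% (in a script, place this local function at the end of the file)
function r2 = adjR2(w, data, yt)
    xt  = sum(data .* w, 2, 'omitnan'); % weighted sum per row, NaNs ignored
    X   = [xt(3:end) xt(2:end-1) xt(1:end-2)];
    mdl = fitlm(X, yt(3:end));
    r2  = mdl.Rsquared.Adjusted;
end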
`lasso` seems to give interesting results:
% creating fake data (but having an actual relationship between `yt` and the predictors)
rng('default')
data = randi([-1000 +1000],30,500);
alphas = rand(1,500);
yt = sum(alphas.*data,2) + 10*randn(30,1);
plot(yt)
% Use the lasso algorithm with no intercept term and
% keep the column of coefficients that minimizes the MSE.
% By design, lasso's L1 penalty drives many coefficients to exactly zero,
% so it tends to select a small subset of predictors.
[B,FitInfo] = lasso(data,yt,'Intercept',false);
[~, idxMinMSE] = min(FitInfo.MSE); % lambda with the smallest MSE
coef = B(:,idxMinMSE);
y_verif = data*coef;
hold on; plot(y_verif)
sum(coef~=0)
ans =
    29
The output is reproduced by only 29 columns, even though all the values in `alphas`
were nonzero.
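To map this back to the original question, the nonzero pattern of `coef` can be used directly as the binary `wgts`; below is a minimal sketch reusing the lag construction from the question. If picking lambda by in-sample minimum MSE feels too optimistic, `lasso` also accepts a `'CV'` name-value option, after which `FitInfo.IndexMinMSE` and `FitInfo.Index1SE` give cross-validated choices.

% Turn the selected lasso coefficients into binary weights:
wgts = double(coef ~= 0)'; % 1 x 500, one where lasso kept the column
xt = sum(data .* wgts, 2, 'omitnan');
X = [xt(3:end) xt(2:end-1) xt(1:end-2)];
mdl = fitlm(X, yt(3:end));
mdl.Rsquared.Adjusted % quality of the selected subset

% Cross-validated alternative for picking lambda:
[Bcv, FitInfoCv] = lasso(data, yt, 'Intercept', false, 'CV', 10);
coefCv = Bcv(:, FitInfoCv.IndexMinMSE); % or FitInfoCv.Index1SE for a sparser fit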