I have a 192x256x192 cell array where every element is a 13-entry vector. I am tasked with creating 192 heat maps with 192x256 squares in each map. The value of each square is supposed to be the b-value of an exponential fit to the 13-entry vector.
I.e. Cell(:,:,1) is an image, and Cell{1,1,1} holds the 13 changes in intensity a single pixel experiences over time. I want to make 192 heat maps where each spot on the heat map is the b-value of an exponential fit to that pixel's changes in intensity over time.
I have some code for this (see below), but I am not a great programmer and it runs incredibly slowly. Does anyone have advice on a faster way to do this, or on this problem in general?
Thanks.
ExpHeatMap = zeros(192,256,192);
t = 1:13;
ft = fittype('exp1');                       % y = a*exp(b*x)
for i = 1:192
    for j = 1:256
        for k = 1:192
            testArray = HeatMapValues{k,j,i};
            numZeros = nnz(testArray == 0); % count zero entries
            if numZeros > 10
                % too many zeros to fit reliably; leave as 0
                ExpHeatMap(k,j,i) = 0;
            else
                cf = fit(t', testArray', ft);
                ExpHeatMap(k,j,i) = cf.b;
            end
        end
    end
end
Here is a no-for-loop approach based on cellfun:
% b-value of an exp1 fit for every pixel at once
fv = cellfun(@(x) getfield(fit(t(:), x', ft), 'b'), HeatMapValues);
% zero out pixels with more than 10 zero entries, as in the loop version
ExpHeatMap = fv.*reshape(~(sum(vertcat(HeatMapValues{:}) == 0, 2) > 10), ...
    size(HeatMapValues));
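A quick way to sanity-check the two lines above is on a small synthetic cell array (the 2x2x2 size and the a = 5, b = -0.3 decay below are made up for illustration):

t  = 1:13;
ft = fittype('exp1');
% synthetic stand-in for HeatMapValues: every pixel gets the same clean decay
HeatMapValues = cell(2,2,2);
for n = 1:numel(HeatMapValues)
    HeatMapValues{n} = 5 * exp(-0.3 * t);   % true a = 5, b = -0.3
end
fv = cellfun(@(x) getfield(fit(t(:), x', ft), 'b'), HeatMapValues);
% every entry of fv should come out close to -0.3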
I think the majority of the speed-up would still depend on how you vectorize the fit calculations. cellfun is one way to avoid for-loops, but it doesn't necessarily speed up calculations by a huge margin; I don't think it would be slower than the for-loop approach, though.
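One way to actually vectorize the fit, if approximate b-values are acceptable: exp1 fits y = a*exp(b*x), so taking logs gives log(y) = log(a) + b*t, and b becomes an ordinary least-squares slope that can be computed for every pixel in a single matrix operation. A minimal sketch, assuming all intensities are strictly positive (log-domain least squares weights noise differently from the nonlinear fit, so results will differ slightly):

Y  = vertcat(HeatMapValues{:});    % N-by-13 matrix, one row per pixel
t  = 1:13;
tc = t - mean(t);                  % centered time vector
L  = log(Y);                       % log-intensities (requires Y > 0)
Lc = L - mean(L, 2);               % row-centering (implicit expansion, R2016b+)
b  = (Lc * tc') / (tc * tc');      % least-squares slope = b estimate per pixel
ExpHeatMap = reshape(b, size(HeatMapValues));

This replaces roughly 9.4 million fit calls with a couple of matrix products, which should be far faster; the more-than-10-zeros mask from the loop version can still be applied to ExpHeatMap afterwards.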