Tags: matlab, batch-processing, parfor

Running a MATLAB batch job on an HPC cluster


I am trying to get MATLAB to execute a number of scripts as individual batch jobs. Each script loads some data from Excel sheets and trains a neural network, which uses parfor loops internally for parameter tuning.

When I run the batch job on my local machine it works fine. My MATLAB code looks like this:

job1 = batch('Historical1Step',...           % script to run on the cluster
    'Profile', 'local',...                   % scheduler profile
    'Matlabpool', 3,...                      % pool of 3 workers for the parfor loops
    'CaptureDiary', true,...
    'CurrentDirectory', '.');

try
    job1Results = fetchOutputs(job1);
catch err
    delete(job1);
    rethrow(err);
end
delete(job1);

and the diary output I get is

--- Start Diary ---
Analysing data for stock BAX

num_its =

 2

100%[============================
100%[===================================================]

--- End Diary ---

However, when I change from the 'local' profile to my server profile, I get

--- Start Diary ---
--- End Diary ---
Error using parallel.Job/fetchOutputs (line 869)
An error occurred during execution of Task with ID 1.

Error in SOExample (line 14)
    job1Results = fetchOutputs(job1);

Caused by:
    Index exceeds matrix dimensions.

I assume the problem has something to do with the visibility of my functions/data on the workers, but I have tried every combination of the 'FileDependencies' and 'PathDependencies' options to the batch function that I can think of, to no avail.
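For reference, one of the dependency combinations I tried looked roughly like the sketch below (the profile name and share path are placeholders, not my real configuration):

```matlab
% Sketch of one attempted variant: copy the script file to the workers and
% put the shared code directory on their path. 'myServerProfile' and
% '\\server\share\Individual\SOFNN' are placeholders.
job1 = batch('Historical1Step',...
    'Profile', 'myServerProfile',...                 % server cluster profile
    'Matlabpool', 3,...
    'CaptureDiary', true,...
    'FileDependencies', {'Historical1Step.m'},...    % files copied to each worker
    'PathDependencies', {'\\server\share\Individual\SOFNN'}); % paths added on workers
```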

Any help would be much appreciated, and apologies in advance if I have done something monumentally stupid without realising it!

EDIT:

The error stack is as follows:

Index exceeds matrix dimensions.

Error in Historical1Step (line 13)


Error in parallel.internal.cluster/executeScript (line 22)
eval(['iClearAndSetCallerWorkspace(workspaceIn);' scriptName]);

Error in parallel.internal.evaluator/evaluateWithNoErrors (line 14)
        [out{1:nOut}] = feval(fcn, args{:});

Error in parallel.internal.evaluator/CJSStreamingEvaluator/evaluate (line 31)
            [out, errOut] = parallel.internal.evaluator.evaluateWithNoErrors( fcn, nOut, args );

Error in dctEvaluateTask>iEvaluateTask/nEvaluateTask (line 281)
        [output, errOutput, cellTextOutput{end+1}] = evaluator.evaluate(fcn, nOut, args);

Error in dctEvaluateTask>iEvaluateTask (line 141)
    nEvaluateTask();

Error in dctEvaluateTask (line 57)
    [resultsFcn, taskPostFcn, taskEvaluatedOK] = iEvaluateTask(job, task, runprop);

Error in distcomp_evaluate_filetask_core>iDoTask (line 149)
dctEvaluateTask(postFcns, finishFcn);

Error in distcomp_evaluate_filetask_core (line 48)
iDoTask(handlers, postFcns);


Error using parallel.Job/fetchOutputs (line 869)
An error occurred during execution of Task with ID 1.

Error in SOExample (line 14)
    job1Results = fetchOutputs(job1);

Caused by:
    Index exceeds matrix dimensions.

The file 'Historical1Step' is the script I am trying to run. Its first lines (up to the point where the code falls over) are:

wrkDir = 'V:\Individual\SOFNN'; % this is where the files are on cluster headnode
wrkFldr = [wrkDir '\ExcelSheets\1-stepAhead\']; % location of excel sheets

%%
folder = dir(wrkFldr);
isub = [folder(:).isdir]; % data is stored in sub-directory based on stock symbol
stockNames = {folder(isub).name}'; % extract stock names from names of sub-dirs
stockNames(ismember(stockNames,{'.','..'})) = []; % remove names '.' and '..'

for i = 1:1 % this loop should read in data for stock i from correct sub-dir
    close all;
    clc;
    sym = stockNames{i};
    disp(['Analysing data for stock ' sym]);
    fldrName = strcat(wrkFldr,'\', sym, '\');
end % added for completion

Solution

  • In your code, you're using a mapped-drive letter on the workers. Typically, workers cannot see mapped-drive letters because of the way their processes are launched. When the worker cannot resolve the path, dir returns an empty struct, so stockNames ends up empty and stockNames{i} at line 13 throws "Index exceeds matrix dimensions", which would explain the error you are seeing. Try using a UNC path instead. There is a little more info in the documentation here: http://www.mathworks.com/help/distcomp/troubleshooting-and-debugging.html
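Concretely, that means replacing the drive letter with the full UNC path of the share. A minimal sketch, assuming the V: drive is mapped to '\\fileserver\share' (a placeholder; substitute whatever your mapping actually points to):

```matlab
% Use the UNC path so worker processes (which may not inherit your drive
% mappings) can resolve it. '\\fileserver\share' is a placeholder for
% whatever V: is mapped to on the head node.
wrkDir  = '\\fileserver\share\Individual\SOFNN';

% fullfile inserts the correct separators, avoiding the stray '\' that
% manual concatenation with strcat can produce.
wrkFldr = fullfile(wrkDir, 'ExcelSheets', '1-stepAhead');
```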