So I have used the same autoencoder model with a batch size of 10 without a generator (by loading the elements into memory), and the model runs without any issues at all.
I have defined a Python generator so that I can take in more data, in the following way:
from sklearn.utils import shuffle
import numpy as np

def nifti_gen(samples, batch_size=5):
    num_samples = len(samples)
    while True:
        for bat in range(0, num_samples, batch_size):
            temp_batch = samples[bat:bat + batch_size]
            batch_data = None
            for i, element in enumerate(temp_batch):
                temp = get_input(element)  # load one NIfTI file as a NumPy array
                if i == 0:
                    batch_data = temp
                else:
                    batch_data = np.concatenate((batch_data, temp))
            yield batch_data, batch_data  # input and target are the same for an autoencoder
from sklearn.model_selection import train_test_split
train_samples, validation_samples = train_test_split(IO_paths[:400], test_size=0.1)
train_generator = nifti_gen(train_samples, batch_size=5)
validation_generator = nifti_gen(validation_samples, batch_size=5)
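A minimal way to inspect what the generator actually yields (assuming get_input returns one NumPy array per file path, as above) is to pull a single batch and print its shape:

sample_x, sample_y = next(train_generator)  # draw one batch from the generator
print(sample_x.shape, sample_x.dtype)       # the array shape that will reach model.fit()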
However, when I try to train the model, I get the following error before even one epoch has completed:
autoencoder_train = MRA_autoencoder.fit(train_generator, steps_per_epoch= 36 , callbacks= [es,mc] , epochs= 300)
Epoch 1/300
---------------------------------------------------------------------------
ResourceExhaustedError Traceback (most recent call last)
<ipython-input-32-65838b7c908e> in <module>()
----> 1 autoencoder_train = MRA_autoencoder.fit(train_generator, steps_per_epoch= 36 , callbacks= [es,mc] , epochs= 300)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
58 ctx.ensure_initialized()
59 tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
---> 60 inputs, attrs, num_outputs)
61 except core._NotOkStatusException as e:
62 if name is not None:
ResourceExhaustedError: OOM when allocating tensor with shape[500,84,400,400] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[node functional_5/functional_1/conv2d/Conv2D (defined at <ipython-input-32-65838b7c908e>:1) ]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
[Op:__inference_train_function_4760]
Function call stack:
train_function
I have no clue why this happens, as I know for sure that I have enough memory for a batch size of at least 10. Any help would be appreciated! Thanks
Looks like the data is huge. A tensor of shape [500, 84, 400, 400] is very large to process, and it has to be processed at every layer; the best options are to reduce the batch size further or move to multi-GPU, cloud-based training.
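If multiple GPUs are available, a minimal sketch of the multi-GPU route using tf.distribute.MirroredStrategy looks like the following (build_autoencoder is a hypothetical stand-in for however MRA_autoencoder is actually constructed, and the optimizer and loss are shown only for illustration):

import tensorflow as tf

# Mirror the model across all visible GPUs; gradients are aggregated automatically.
strategy = tf.distribute.MirroredStrategy()
print('Number of replicas:', strategy.num_replicas_in_sync)

with strategy.scope():
    # The model and its optimizer must be created inside strategy.scope().
    MRA_autoencoder = build_autoencoder()  # hypothetical model-building function
    MRA_autoencoder.compile(optimizer='adam', loss='mse')

# Each global batch is split across the replicas, so every GPU
# only has to hold its own slice of the batch in memory.
autoencoder_train = MRA_autoencoder.fit(train_generator, steps_per_epoch=36,
                                        callbacks=[es, mc], epochs=300)

Note that each replica still has to fit its slice of the batch plus all the layer activations in memory, so shrinking the batch the generator yields remains the first thing to try.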