
Floating-point overflow using `train_epochs` in PyTorch


I am getting the following error (see the stack trace) when I run my code on a different GPU (Tesla K-20, CUDA 7.5 installed, 6 GB memory). The code works fine if I run it on a GeForce 1080 or Titan X GPU.

Stacktrace:

File "code/source/main.py", line 68, in <module>
    train.train_epochs(train_batches, dev_batches, args.epochs)
  File "/gpfs/home/g/e/geniiexe/BigRed2/code/source/train.py", line 34, in train_epochs
    losses = self.train(train_batches, dev_batches, (epoch + 1))
  File "/gpfs/home/g/e/geniiexe/BigRed2/code/source/train.py", line 76, in train
    self.optimizer.step()
  File "/gpfs/home/g/e/geniiexe/BigRed2/anaconda3/lib/python3.5/site-packages/torch/optim/adam.py", line 70, in step
    bias_correction1 = 1 - beta1 ** state['step']
OverflowError: (34, 'Numerical result out of range')

So, what could be the reason for getting such an error on a different GPU (Tesla K-20) when the code works fine on a GeForce or Titan X GPU? Moreover, what does the error mean? Is it related to a memory overflow? I don't think it is.


Solution
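
The error message itself comes from Python's float power operator, not from CUDA or GPU memory: `**` raises OverflowError with errno 34 (ERANGE, "Numerical result out of range") when the result of the power falls outside the range of a 64-bit float. The exception therefore indicates that `beta1 ** state['step']` produced a result outside that range on this machine; why the values differ there is not shown in the question. A minimal reproduction of the same message, unrelated to the actual `beta1`/`state['step']` values:

    >>> 2.0 ** 100000
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    OverflowError: (34, 'Numerical result out of range')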

  • One workaround suggested on discuss.pytorch.org is as follows.

    Replace the following lines in adam.py:

    bias_correction1 = 1 - beta1 ** state['step']
    bias_correction2 = 1 - beta2 ** state['step']
    

    with:

    bias_correction1 = 1 - beta1 ** min(state['step'], 1022)
    bias_correction2 = 1 - beta2 ** min(state['step'], 1022)
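
    Note that this is only a workaround, and the cap is not entirely free: it keeps the exponent small enough that the power stays comfortably inside double range, but once state['step'] actually exceeds 1022 the capped bias-correction terms no longer match the uncapped ones exactly (noticeably so for the default beta2 = 0.999). A quick check, assuming the default Adam betas and an illustrative step count:

    beta1, beta2, step = 0.9, 0.999, 5000    # default Adam betas; step value is illustrative

    print(1 - beta1 ** min(step, 1022))      # ~1.0, indistinguishable from the uncapped value
    print(1 - beta2 ** step)                 # ~0.993 (uncapped)
    print(1 - beta2 ** min(step, 1022))      # ~0.640 (capped), so this term does change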