
cuDNN issue while using OpenNMT-tf 2.10 with TensorFlow 2.2 in Anaconda virtual env


I am trying to train an OpenNMT-tf Transformer model on a GeForce RTX 2060 GPU with 8 GB of memory. You can see the steps here.

I created an Anaconda virtual environment and installed tensorflow-gpu using the following command.

conda install tensorflow-gpu==2.2.0

After running the above command, conda handles all the dependencies and installs CUDA 10.1 and cuDNN 7.6.5 in the environment. Then I installed OpenNMT-tf 2.10, which is compatible with TF 2.2 GPU, using the following command.

~/anaconda3/envs/nmt/bin/pip install openNMT-tf==2.10

The above command installs OpenNMT-tf within the conda environment.

When I ran the commands from the 'Quickstart' page of the OpenNMT-tf documentation, the GPU was recognized while building the vocabulary. But when I started training the Transformer model, it failed with the following cuDNN error.

tensorflow.python.framework.errors_impl.InternalError: 2 root error(s) found.
  (0) Internal:  cuDNN launch failure : input shape ([1,504,512,1])
     [[node transformer_base/self_attention_decoder/self_attention_decoder_layer/transformer_layer_wrapper_12/layer_norm_14/FusedBatchNormV3 (defined at /site-packages/opennmt/layers/common.py:128) ]]
     [[Func/gradients/global_norm/write_summary/summary_cond/then/_302/input/_893/_52]]
  (1) Internal:  cuDNN launch failure : input shape ([1,504,512,1])
     [[node transformer_base/self_attention_decoder/self_attention_decoder_layer/transformer_layer_wrapper_12/layer_norm_14/FusedBatchNormV3 (defined at /site-packages/opennmt/layers/common.py:128) ]]
0 successful operations.
0 derived errors ignored. [Op:__inference__accumulate_next_33440]

Function call stack:
_accumulate_next -> _accumulate_next

2021-03-01 13:01:01.138811: I tensorflow/stream_executor/stream.cc:1990] [stream=0x560490f17b10,impl=0x560490f172c0] did not wait for [stream=0x5604906de830,impl=0x560490f17250]
2021-03-01 13:01:01.138856: I tensorflow/stream_executor/stream.cc:4938] [stream=0x560490f17b10,impl=0x560490f172c0] did not memcpy host-to-device; source: 0x7ff4467f8780
2021-03-01 13:01:01.138957: F tensorflow/core/common_runtime/gpu/gpu_util.cc:340] CPU->GPU Memcpy failed
Aborted (core dumped)

It would be great if someone could guide me here.

PS: I don't think it is a version issue, as I verified that OpenNMT-tf 2.10 requires TensorFlow 2.2, and when installing tensorflow-gpu 2.2, Anaconda installed CUDA 10.1 and cuDNN 7.6.5 by itself (it handles the GPU dependencies by default).


Solution

  • It was a memory issue. Several answers on StackOverflow about cuDNN launch failures suggest setting the environment variable 'TF_FORCE_GPU_ALLOW_GROWTH' to true before running the training command.

    import os

    # Enable incremental GPU memory allocation; must be set before
    # TensorFlow initializes the GPU.
    os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = "true"
    os.system('onmt-main --model_type Transformer --config data.yml train --with_eval')
    

    I finally started training using the above script, and it solved my issue.
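For reuse, the same idea can be sketched with subprocess instead of os.system. This is only a sketch: run_with_allow_growth is a hypothetical helper name, and the commented-out onmt-main invocation assumes the same data.yml config as above.

```python
import os
import subprocess

def run_with_allow_growth(cmd):
    # Copy the current environment and add the flag, so the child
    # process (and its CUDA runtime) sees TF_FORCE_GPU_ALLOW_GROWTH.
    env = dict(os.environ, TF_FORCE_GPU_ALLOW_GROWTH="true")
    return subprocess.run(cmd, env=env)

# Example with the training command from above (assumes data.yml exists):
# run_with_allow_growth(["onmt-main", "--model_type", "Transformer",
#                        "--config", "data.yml", "train", "--with_eval"])
```

The advantage over os.system is that the variable is scoped to the launched process rather than mutating the parent environment.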