I am building a neural network for NLP, starting with an Embedding layer (using pre-trained embeddings). But when I declare the Embedding layer in Keras (TensorFlow backend), I get a ResourceExhaustedError:
ResourceExhaustedError: OOM when allocating tensor with shape[137043,300] and type float on /job:localhost/replica:0/task:0/device:GPU:0 by allocator GPU_0_bfc
[[{{node embedding_4/random_uniform/RandomUniform}} = RandomUniform[T=DT_INT32, dtype=DT_FLOAT, seed=87654321, seed2=9524682, _device="/job:localhost/replica:0/task:0/device:GPU:0"](embedding_4/random_uniform/shape)]]
Hint: If you want to see a list of allocated tensors when OOM happens, add report_tensor_allocations_upon_oom to RunOptions for current allocation info.
I already checked Google: most ResourceExhaustedError cases happen at training time, because the GPU does not have enough memory, and they are usually fixed by reducing the batch size.
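For reference, that usual fix is nothing more than passing a smaller batch_size to fit (the model and data names below are placeholders):

model.fit(X_train, y_train, epochs=10, batch_size=32)  # e.g. drop from 256 to 32 when training OOMs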
But in my case, I haven't even started training! This line is the problem:
q1 = Embedding(nb_words + 1,
param['embed_dim'].value,
weights=[word_embedding_matrix],
input_length=param['sentence_max_len'].value)(question1)
Here, word_embedding_matrix is a matrix of size (137043, 300) holding the pre-trained embeddings.
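For context, such a matrix is typically built along these lines (the GloVe file and the word_index dict below are placeholders, not my exact code):

import numpy as np

# Sketch: load pre-trained vectors, then fill one row per vocabulary word
embeddings_index = {}
with open('glove.840B.300d.txt', encoding='utf-8') as f:  # placeholder embeddings file
    for line in f:
        values = line.rstrip().split(' ')
        embeddings_index[values[0]] = np.asarray(values[1:], dtype='float32')

word_embedding_matrix = np.zeros((nb_words + 1, 300), dtype='float32')
for word, i in word_index.items():  # word_index: tokenizer vocabulary (placeholder)
    if i <= nb_words:
        vector = embeddings_index.get(word)
        if vector is not None:
            word_embedding_matrix[i] = vector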
As far as I know, this should not take a gigantic amount of memory (unlike here):
137043 * 300 * 4 bytes ≈ 157 MiB
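Quick check of that number in plain Python, counting only the raw float32 data:

print(137043 * 300 * 4 / 1024 ** 2)  # -> 156.8..., i.e. roughly 157 MiB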
And here is the nvidia-smi output for the GPUs on the machine:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 396.26 Driver Version: 396.26 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce GTX 108... Off | 00000000:02:00.0 Off | N/A |
| 23% 32C P8 16W / 250W | 6956MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 1 GeForce GTX 108... Off | 00000000:03:00.0 Off | N/A |
| 23% 30C P8 16W / 250W | 530MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 2 GeForce GTX 108... Off | 00000000:82:00.0 Off | N/A |
| 23% 34C P8 16W / 250W | 333MiB / 11178MiB | 0% Default |
+-------------------------------+----------------------+----------------------+
| 3 GeForce GTX 108... Off | 00000000:83:00.0 Off | N/A |
| 24% 46C P2 58W / 250W | 4090MiB / 11178MiB | 23% Default |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1087 C uwsgi 1331MiB |
| 0 1088 C uwsgi 1331MiB |
| 0 1089 C uwsgi 1331MiB |
| 0 1090 C uwsgi 1331MiB |
| 0 1091 C uwsgi 1331MiB |
| 0 4176 C /usr/bin/python3 289MiB |
| 1 2631 C ...e92/venvs/wordintent_venv/bin/python3.6 207MiB |
| 1 4176 C /usr/bin/python3 313MiB |
| 2 4176 C /usr/bin/python3 323MiB |
| 3 4176 C /usr/bin/python3 347MiB |
| 3 10113 C python 1695MiB |
| 3 13614 C python3 1347MiB |
| 3 14116 C python 689MiB |
+-----------------------------------------------------------------------------+
Does anyone know why I am getting this exception?
From this link, configuring TensorFlow not to allocate all of the GPU memory up front seems to fix the problem. Running the following before declaring the model's layers fixed it:
import tensorflow as tf
from keras import backend as K
config = tf.ConfigProto()
# Allocate GPU memory on demand instead of reserving it all at start-up
config.gpu_options.allow_growth = True
# Cap this process at roughly 30% of each GPU's memory
config.gpu_options.per_process_gpu_memory_fraction = 0.3
session = tf.Session(config=config)
K.set_session(session)
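For what it's worth, on TensorFlow 2.x (where ConfigProto and Session no longer exist) the equivalent setting, as far as I can tell, is:

import tensorflow as tf

# Grow GPU memory on demand instead of reserving it all at start-up
for gpu in tf.config.experimental.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)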