Tags: tensorflow, gpu, jupyter, jupyterhub

JupyterHub config to limit TensorFlow GPU memory


I am building a TensorFlow environment with JupyterHub (DockerSpawner) for my students in class, but I have run into a problem.

By default, TensorFlow maps nearly all of the GPU memory of all GPUs (subject to CUDA_VISIBLE_DEVICES) visible to the process. This is done to more efficiently use the relatively precious GPU memory resources on the devices by reducing memory fragmentation. (from https://www.tensorflow.org/tutorials/using_gpu)

If anyone in the class runs a Python program on the GPU, the GPU memory is nearly exhausted. Because of this, I need to add some limiting code manually, like:

import tensorflow as tf

config = tf.ConfigProto()
config.gpu_options.allow_growth = True  # allocate GPU memory on demand instead of all at once
session = tf.Session(config=config)

But this is not a great solution: I would have to add this code to every new notebook.

Can JupyterHub add some configuration to avoid this situation, or is there another good solution? Please let me know, thanks!
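One option worth considering (a sketch, not from the original post): TensorFlow (1.14 and later) honors the `TF_FORCE_GPU_ALLOW_GROWTH` environment variable at startup, so DockerSpawner could inject it into every student container and no per-notebook code would be needed. Assuming DockerSpawner is the spawner in use, the relevant fragment of `jupyterhub_config.py` might look like:

```python
# jupyterhub_config.py -- sketch, assuming DockerSpawner
# (the `c` config object is provided by JupyterHub when loading this file)
c.JupyterHub.spawner_class = 'dockerspawner.DockerSpawner'

# TensorFlow reads this variable on startup and enables allocate-on-demand
# instead of grabbing nearly all GPU memory up front.
c.DockerSpawner.environment = {
    'TF_FORCE_GPU_ALLOW_GROWTH': 'true',
}
```

Note this only makes allocation incremental; it does not enforce a hard per-user cap, so one heavy job can still starve the others.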


Solution

  • import tensorflow as tf

    # cap this process at 20% of each visible GPU's memory
    gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.2)
    sess = tf.Session(config=tf.ConfigProto(gpu_options=gpu_options))

    This works well.
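For newer TensorFlow versions (2.1+), where `tf.Session` and `tf.GPUOptions` no longer exist, a rough equivalent (a sketch to run at the top of a notebook or in an IPython startup file, before any GPU work) is:

```python
import tensorflow as tf

# Must run before the GPUs are initialized (i.e. before any op executes).
for gpu in tf.config.list_physical_devices('GPU'):
    # Equivalent of allow_growth=True in TF 1.x: allocate memory on demand.
    tf.config.experimental.set_memory_growth(gpu, True)
```

This snippet assumes a machine with at least one visible GPU; on a CPU-only machine the loop simply does nothing.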