Tags: tensorflow, gpu, tensorboard

Tensorflow not using GPU (according to TensorBoard)


Edit: GTX 1070, Ubuntu 16.04, git hash 3b75eb34ea2c4982fb80843be089f02d430faade

I am retraining the Inception model on my own data. Everything works fine until the final command:

bazel-bin/inception/flowers_train \
  --config=cuda \
  --train_dir="${TRAIN_DIR}" \
  --data_dir="${OUTPUT_DIRECTORY}" \
  --pretrained_model_checkpoint_path="${MODEL_PATH}" \
  --fine_tune=True \
  --initial_learning_rate=0.001 \
  --input_queue_memory_factor=1

According to the logs, TensorFlow seems to be using the GPU:

I tensorflow/core/common_runtime/gpu/gpu_device.cc:951] Found device 0 with properties:
name: GeForce GTX 1070
major: 6 minor: 1 memoryClockRate (GHz) 1.7715
pciBusID 0000:03:00.0
Total memory: 7.92GiB
Free memory: 7.77GiB
I tensorflow/core/common_runtime/gpu/gpu_device.cc:972] DMA: 0
I tensorflow/core/common_runtime/gpu/gpu_device.cc:982] 0:   Y
I tensorflow/core/common_runtime/gpu/gpu_device.cc:1041] Creating TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 1070, pci bus id: 0000:03:00.0)
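
For reference, this banner only shows that TensorFlow found and registered the GPU; it does not say where individual ops are actually placed. A more direct check is per-op placement logging via log_device_placement. A minimal sketch (the toy graph is purely illustrative):

    import tensorflow as tf

    # A tiny graph; with log_device_placement=True, the session prints
    # the device chosen for every op when it runs.
    a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name='a')
    b = tf.constant([[1.0, 1.0], [0.0, 1.0]], name='b')
    c = tf.matmul(a, b, name='matmul')

    with tf.Session(config=tf.ConfigProto(log_device_placement=True)) as sess:
        print(sess.run(c))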

But when I check the training in TensorBoard, the network appears to run mainly on the CPU (blue = /device:CPU:0, green = /device:GPU:0):

[Image: TensorBoard graph]
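
One way to rule out silent CPU fallback is to pin ops to the GPU explicitly and disable soft placement, so TensorFlow raises an error instead of quietly moving an op to the CPU when no GPU kernel is available. A minimal sketch:

    import tensorflow as tf

    # Pin the computation to the GPU explicitly.
    with tf.device('/gpu:0'):
        a = tf.random_normal([1024, 1024])
        b = tf.matmul(a, a)

    # allow_soft_placement=False: fail loudly instead of falling
    # back to the CPU if an op cannot run on /gpu:0.
    config = tf.ConfigProto(allow_soft_placement=False)
    with tf.Session(config=config) as sess:
        sess.run(b)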

I have tried these two TensorFlow setups:

  1. Installed from source with the nvidia-367 driver, CUDA 8.0, cuDNN v5, built from master (16/10/06, r11?), and compiled for GPU use:

    bazel build -c opt --config=cuda //tensorflow/cc:tutorials_example_trainer
    bazel-bin/tensorflow/cc/tutorials_example_trainer --use_gpu    
    bazel build -c opt --config=cuda //tensorflow/tools/pip_package:build_pip_package
    
  2. The GPU Docker image of TensorFlow on a PC with a GTX 1070 (8 GB):

    nvidia-docker run -it -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu /bin/bash
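
With either setup, a quick sanity check is to list the devices TensorFlow itself can see. The snippet below assumes your build ships tensorflow.python.client.device_lib (present in recent versions); if it does not, the log_device_placement check above works as well:

    from tensorflow.python.client import device_lib

    # Prints one entry per device TensorFlow can use; a working GPU
    # setup should list a /gpu:0 device alongside /cpu:0.
    print(device_lib.list_local_devices())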
    

Any help?


Solution

  • According to this issue, the Inception 'tower' is where the bulk of the work is performed, so the placement shown in TensorBoard seems mostly fine.

    Except there is still something odd. Running watch nvidia-smi gives:

    Mon Oct 10 10:31:04 2016

    +-----------------------------------------------------------------------------+
    | NVIDIA-SMI 367.48                 Driver Version: 367.48                    |
    |-------------------------------+----------------------+----------------------+
    | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
    | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
    |===============================+======================+======================|
    |   0  GeForce GTX 1070    Off  | 0000:03:00.0      On |                  N/A |
    | 29%   57C    P2    41W / 230W |   7806MiB /  8113MiB |      0%      Default |
    +-------------------------------+----------------------+----------------------+
    
    +-----------------------------------------------------------------------------+
    | Processes:                                                       GPU Memory |
    |  GPU       PID  Type  Process name                               Usage      |
    |=============================================================================|
    |    0      1082    G   /usr/lib/xorg/Xorg                              69MiB |
    |    0      3082    C   /usr/bin/python                               7729MiB |
    +-----------------------------------------------------------------------------+
    

    While top gives:

        PID   USER  PR  NI  VIRT     RES     SHR     S  %CPU   %MEM  TIME+    COMMAND
        3082  root  20  0   26,739g  3,469g  1,657g  S  101,3  59,7  7254:50  python

    The GPU seems to be ignored: 0% utilization, while top shows a single CPU core at ~100%.
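
    Note that the 7806 MiB shown by nvidia-smi is not evidence of GPU work either: by default TensorFlow maps almost all free GPU memory at startup, so the memory figure only proves allocation, not computation. If the large static allocation is unwanted, allow_growth makes TensorFlow allocate on demand (a minimal sketch):

        import tensorflow as tf

        # By default TensorFlow grabs nearly all free GPU memory at
        # startup; allow_growth makes the allocation grow on demand.
        config = tf.ConfigProto()
        config.gpu_options.allow_growth = True
        sess = tf.Session(config=config)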