python, docker, neural-network, deep-learning, caffe

Caffe - Check failed: mdb_status == 0 (2 vs. 0) No such file or directory


I am trying to train a custom Caffe model for face recognition.

(See the GitHub project here: https://github.com/danduncan/HappyNet)

For that, I am using a Caffe Docker image (which works), mounted on a sharedfolder on my machine:

Docker sharedfolder structure:

execute_3_train_custom_model
datasets/
        mean_training_image.binaryproto
        training_set_lmdb/
                         data.mdb
                         lock.mdb
models/
      custom_Model/
                   deploy.prototxt
                   EmotiW_VGG_S.caffemodel
                   loss_history.txt
                   solver.prototxt
                   train.prototxt

For training, I run a script from the command line:

execute_3_train_custom_model:

time ~/caffe/build/tools/caffe train -solver models/Custom_Model/solver.prototxt -weights models/Custom_Model/EmotiW_VGG_S.caffemodel | tee caffe_loss_history.txt

like so:

root@3f3220158436:~/sharedfolder/caffe/docker/image/happyNet# sudo ./execute_3_train_custom_model

Other relevant files:

train.prototxt

name: "CaffeNet"
layers {
  name: "training_train"
  type: DATA
  data_param {
    source: "datasets/training_set_lmdb"
    backend: LMDB
    batch_size: 400
  }
  transform_param{
    mean_file: "datasets/mean_training_image.binaryproto"
  }
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
}
layers {
  name: "training_test"
  type: DATA
  data_param {
    source: "datasets/validation_set_lmdb"
    backend: LMDB
    batch_size: 14
  }
  transform_param{
    mean_file: "datasets/mean_training_image.binaryproto"
  }
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  (...)

solver.prototxt

net: "models/Custom_Model/train.prototxt"
# test_iter specifies how many forward passes the test should carry out
test_iter: 1
# Carry out testing every X training iterations
test_interval: 20
# Learning rate and momentum parameters for Adam
base_lr: 0.001
momentum: 0.9
momentum2: 0.999
# Adam takes care of changing the learning rate
lr_policy: "fixed"
# Display every X iterations
display: 10
# The maximum number of iterations
max_iter: 3000
# snapshot intermediate results
snapshot: 100
snapshot_prefix: "snapshot"
# solver mode: CPU or GPU
type: "Adam"
solver_mode: CPU

But when I run the script, I get the following error and stack trace:

I0415 01:49:52.260529   148 layer_factory.hpp:77] Creating layer training_test
I0415 01:49:52.260713   148 net.cpp:91] Creating Layer training_test
I0415 01:49:52.260766   148 net.cpp:399] training_test -> data
I0415 01:49:52.260816   148 net.cpp:399] training_test -> label
I0415 01:49:52.260861   148 data_transformer.cpp:25] Loading mean file from: datasets/mean_training_image.binaryproto
F0415 01:49:52.268076   153 db_lmdb.hpp:15] Check failed: mdb_status == 0 (2 vs. 0) No such file or directory
*** Check failure stack trace: ***
    @     0x7f9e61bf7daa  (unknown)
    @     0x7f9e61bf7ce4  (unknown)
    @     0x7f9e61bf76e6  (unknown)
    @     0x7f9e61bfa687  (unknown)
    @     0x7f9e6229d0b1  caffe::db::LMDB::Open()
    @     0x7f9e6224d754  caffe::DataReader::Body::InternalThreadEntry()
    @     0x7f9e6072ca4a  (unknown)
    @     0x7f9e6050b184  start_thread
    @     0x7f9e60a3137d  (unknown)
    @              (nil)  (unknown)

real    0m1.741s
user    0m0.580s
sys 0m1.230s

I wonder if I'm messing up my paths somehow, since I'm using Docker, and I don't know how to debug this beyond the basic check below.
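
Since the LMDB and mean-file paths in train.prototxt are relative, they are presumably resolved against whatever directory the caffe binary is launched from. A minimal sanity check of those paths (a sketch using only the Python standard library, with the paths taken from train.prototxt above) would be:

import os

# Paths referenced in train.prototxt, relative to the current working directory
paths = [
    "datasets/mean_training_image.binaryproto",
    "datasets/training_set_lmdb",
    "datasets/validation_set_lmdb",
]

for p in paths:
    print(p, "->", "exists" if os.path.exists(p) else "MISSING")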


Solution

  • It seems like you should have two LMDBs:

    datasets/training_set_lmdb  # which you seem to have
    datasets/validation_set_lmdb  # where is this one?
    

    When Caffe constructs the layer "training_test" for phase: TEST, it fails with this error because it cannot find datasets/validation_set_lmdb. Make sure that LMDB actually exists at that path.
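
    If you want to confirm that an LMDB is really there and readable before pointing Caffe at it, a small sketch using the py-lmdb package (pip install lmdb; this package is an assumption on my side, it is not part of Caffe) could look like this:

    import lmdb  # pip install lmdb (assumed available; not bundled with Caffe)

    def check_lmdb(path):
        # Try to open the LMDB directory read-only and report its entry count
        try:
            env = lmdb.open(path, readonly=True, lock=False)
            with env.begin() as txn:
                print(path, "->", txn.stat()["entries"], "entries")
            env.close()
        except lmdb.Error as e:
            print(path, "-> cannot open:", e)

    check_lmdb("datasets/training_set_lmdb")
    check_lmdb("datasets/validation_set_lmdb")  # the one the TEST phase needs

    If the validation LMDB simply has not been created yet, build it from your validation images the same way you built training_set_lmdb (for example with Caffe's convert_imageset tool), or remove the TEST-phase layer and the test_* settings from the solver until you have one.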