opencv, classification, deep-learning, caffe, nvidia-digits

Why are the results of DIGITS and OpenCV 3.1 different?


I use DIGITS to classify images (I tested GoogLeNet with Adaptive Gradient, Stochastic Gradient Descent, and Nesterov's Accelerated Gradient). The images are color, 256*256. After training I use the "Test a single image" option and test one image. The result shows a perfect match and the image is classified correctly. Then I use the downloaded model in OpenCV 3.1 (Windows 64-bit, Visual Studio 2013, NVIDIA GPU) following "http://docs.opencv.org/trunk/d5/de7/tutorial_dnn_googlenet.html". However, I always get a different class and the wrong answer.
Edit:
I tried cvtColor(img, img, COLOR_BGR2RGB) and the problem is not solved; I still get the wrong result. I also tried the different data transformations (none, image, and pixel) as well as different solver types.


Solution

  • I would be surprised if OpenCV 3 vs 2 is causing this issue. Instead, I expect that the discrepancy is due to a difference in data pre-processing.

    Here's an example of how to do data pre-processing for a Caffe model that was trained in DIGITS: https://github.com/NVIDIA/DIGITS/blob/v4.0.0/examples/classification/example.py#L40-L85 (a rough OpenCV-side equivalent is sketched below).

    Also make sure you read these "gotchas": https://github.com/NVIDIA/DIGITS/blob/v4.0.0/examples/classification/README.md#limitations
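
    For reference, here is a minimal sketch of what matching the DIGITS/Caffe pre-processing can look like on the OpenCV side. It assumes the newer dnn API (readNetFromCaffe / blobFromImage, OpenCV 3.3+) rather than the importer-based API from the 3.1 tutorial, and the file names, input size, and mean values are placeholders: the real mean should come from the mean.binaryproto that DIGITS exports with the model, and the input size must match the deploy.prototxt.

        #include <opencv2/dnn.hpp>
        #include <opencv2/imgcodecs.hpp>
        #include <opencv2/core.hpp>
        #include <iostream>

        int main()
        {
            // Placeholder file names -- use the deploy prototxt and caffemodel
            // that DIGITS exports with the trained model.
            cv::dnn::Net net = cv::dnn::readNetFromCaffe("deploy.prototxt", "snapshot.caffemodel");

            cv::Mat img = cv::imread("test_image.jpg");   // imread loads BGR, which Caffe expects
            if (img.empty()) { std::cerr << "could not read image\n"; return 1; }

            // Assumed per-channel mean values (B, G, R); in practice, use the values
            // from the mean.binaryproto produced by DIGITS for this dataset.
            cv::Scalar mean(104.0, 117.0, 123.0);

            // Resize to the network input size from deploy.prototxt (224x224 for the
            // standard GoogLeNet), subtract the mean, and keep BGR order (swapRB = false).
            cv::Mat blob = cv::dnn::blobFromImage(img, 1.0, cv::Size(224, 224), mean, /*swapRB=*/false);

            net.setInput(blob);
            cv::Mat prob = net.forward("prob");           // 1 x num_classes probabilities

            cv::Point classId;
            double confidence = 0.0;
            cv::minMaxLoc(prob.reshape(1, 1), nullptr, &confidence, nullptr, &classId);
            std::cout << "class " << classId.x << ", probability " << confidence << std::endl;
            return 0;
        }

    If the result still differs, the usual culprits are the mean (mean image vs. per-channel pixel mean), the pixel value range (0-255 vs. 0-1), and the channel order, which is exactly what the DIGITS example script and the "gotchas" page above walk through.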