Tags: tensorflow, tensorflow-lite, darkflow

Porting a Darkflow TensorFlow model into the TensorFlow Android Camera Detection Demo


I have a custom-built YOLO model in the form of cfg and weights files. I converted this model into a .pb and .meta file using darkflow (https://github.com/thtrieu/darkflow) as follows:

sudo ./flow --model cfg/license.cfg --load bin/yololp1_420000.weights --savepb --verbalise

Analysis of the resultant .pb (license.pb) is:

>>> import tensorflow as tf
>>> gf = tf.GraphDef()
>>> gf.ParseFromString(open('/darkflow/built_graph/license.pb','rb').read())
202339124
>>> [n.name + '=>' +  n.op for n in gf.node if n.op in ( 'Softmax','Placeholder')]
[u'input=>Placeholder']
>>> [n.name + '=>' +  n.op for n in gf.node if n.op in ( 'Softmax','Mul')]
[u'mul=>Mul', u'mul_1=>Mul', u'mul_2=>Mul', u'mul_3=>Mul', u'mul_4=>Mul', u'mul_5=>Mul', u'mul_6=>Mul', ...]
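
A more general way to locate the output, shown here only as a sketch, is to list the nodes whose results no other node consumes; those are the usual candidates for the graph's output tensor (darkflow commonly names the final op 'output', but that is an assumption, not something the filters above confirm):

import tensorflow as tf

gf = tf.GraphDef()
with open('/darkflow/built_graph/license.pb', 'rb') as f:
    gf.ParseFromString(f.read())

# Collect every node name that appears as an input somewhere,
# stripping control-dependency '^' prefixes and ':n' output indices.
consumed = set()
for node in gf.node:
    for inp in node.input:
        consumed.add(inp.lstrip('^').split(':')[0])

# Nodes nobody consumes are the candidate outputs.
print([n.name + '=>' + n.op for n in gf.node if n.name not in consumed])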

The graph has an 'input' placeholder but no node named 'output'. I tried to port the model into the TensorFlow Android camera detection demo (https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android), but the camera preview stops after about a second. The Android exception is below:

04-27 15:06:32.727 21721 21737 D gralloc : gralloc_lock_ycbcr success. format : 11, usage: 3, ycbcr.y: 0xc07cf000, .cb: 0xc081a001, .cr: 0xc081a000, .ystride: 640 , .cstride: 640, .chroma_step: 2
04-27 15:06:32.735 21721 21736 E TensorFlowInferenceInterface: Failed to run TensorFlow inference with inputs:[input], outputs:[output]
04-27 15:06:32.736 21721 21736 E AndroidRuntime: FATAL EXCEPTION: inference
04-27 15:06:32.736 21721 21736 E AndroidRuntime: Process: org.tensorflow.demo, PID: 21721
04-27 15:06:32.736 21721 21736 E AndroidRuntime: java.lang.IllegalArgumentException: No OpKernel was registered to support Op 'ExtractImagePatches' with these attrs.  Registered devices: [CPU], Registered kernels:
04-27 15:06:32.736 21721 21736 E AndroidRuntime:   <no registered kernels>
04-27 15:06:32.736 21721 21736 E AndroidRuntime:     [[Node: ExtractImagePatches = ExtractImagePatches[T=DT_FLOAT, ksizes=[1, 2, 2, 1], padding="VALID", rates=[1, 1, 1, 1], strides=[1, 2, 2, 1]](47-leaky)]]

How can I fix this? I also tried "optimize_for_inference.py" to convert the .pb into a mobile-optimized .pb, but it did not help. Given this, how do I get the input and output tensors/layers defined properly in the converted .pb file, or how do I port the resultant .pb correctly into the TF camera detection demo?
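
For reference, a typical way to run that optimization step through the Python API looks roughly like the sketch below. The 'input' and 'output' node names here are assumptions that must match whatever the graph inspection actually reports, and this step does not remove the unsupported ExtractImagePatches op, so it cannot fix the crash on its own.

import tensorflow as tf
from tensorflow.python.tools import optimize_for_inference_lib

graph_def = tf.GraphDef()
with open('/darkflow/built_graph/license.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

optimized = optimize_for_inference_lib.optimize_for_inference(
    graph_def,
    ['input'],                      # input node name(s)
    ['output'],                     # output node name(s), assumed
    tf.float32.as_datatype_enum)    # placeholder dtype

with open('/darkflow/built_graph/license_optimized.pb', 'wb') as f:
    f.write(optimized.SerializeToString())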


Solution

  • "No OpKernel" means there is no implementation of that op registered in the TensorFlow runtime executing this .pb (the Android library ships only a subset of kernels). To resolve this, look at the class reorg in ./net/ops/convolution.py of darkflow. It has two methods, _forward and forward. The current default uses forward, which calls extract_image_patches, a built-in TensorFlow op.

    Swap the names of the two methods and you will be using my manual implementation instead, which does not rely on the missing OpKernel (a sketch of such a manual reorg is shown below the reference link).

    Ref: https://github.com/thtrieu/darkflow/issues/56
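
For illustration only, a manual reorg can be built from plain strided slicing and tf.concat, ops that the trimmed Android TensorFlow runtime does include. The sketch below assumes a stride-2 reorg on an NHWC tensor; it shows the idea rather than darkflow's actual _forward code, whose channel ordering may differ.

import tensorflow as tf

def reorg_manual(x, stride=2):
    # Rearrange each stride x stride spatial block into the channel axis,
    # reproducing the effect of tf.extract_image_patches with
    # ksizes = strides = [1, stride, stride, 1] and VALID padding,
    # using only slicing and concat.
    slices = []
    for i in range(stride):
        for j in range(stride):
            # Take every stride-th row/column starting at offset (i, j).
            slices.append(x[:, i::stride, j::stride, :])
    return tf.concat(slices, axis=3)

Whichever implementation is used, the graph has to be rebuilt and exported again with --savepb so that the new .pb no longer contains an ExtractImagePatches node.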