android · tensorflow · qualcomm · snpe

Error when using SNPE to convert a TensorFlow dense layer


When converting a custom TensorFlow graph from .pb to DLC format, I am seeing errors relating to the conversion of a dense layer:

2017-11-02 13:43:35,260 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/Tensordot/transpose) not consumed by converter: Transpose.
2017-11-02 13:43:35,261 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/Tensordot/transpose_1) not consumed by converter: Transpose.
2017-11-02 13:43:35,261 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/Tensordot/MatMul) not consumed by converter: MatMul.
2017-11-02 13:43:35,261 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (dense/BiasAdd) not consumed by converter: BiasAdd.
2017-11-02 13:43:35,261 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (logit/Tensordot/transpose) not consumed by converter: Transpose.
2017-11-02 13:43:35,262 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (logit/Tensordot/transpose_1) not consumed by converter: Transpose.
2017-11-02 13:43:35,262 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (logit/Tensordot/MatMul) not consumed by converter: MatMul.
2017-11-02 13:43:35,262 - 305 - WARNING - WARNING_TF_SCOPE_OP_NOT_CONSUMED: Operation (logit/BiasAdd) not consumed by converter: BiasAdd.
2017-11-02 13:43:35,263 - 123 - ERROR - Conversion failed: Some operations in the Tensorflow graph were not resolved to a layer!

I am a bit confused by this because the layer is simply a dense layer following a 2D convolutional layer, which I am sure is supported by SNPE. What is the cause of the error?

The topology of the graph is as follows:

0 input_layer Placeholder
1 conv2d/kernel Const
2 conv2d/kernel/read Identity
└─── Input0 ─ conv2d/kernel
3 conv2d/bias Const
4 conv2d/bias/read Identity
└─── Input0 ─ conv2d/bias
5 conv2d/convolution Conv2D
└─── Input0 ─ input_layer
└─── Input1 ─ conv2d/kernel/read
6 conv2d/BiasAdd BiasAdd
└─── Input0 ─ conv2d/convolution
└─── Input1 ─ conv2d/bias/read
7 conv2d/Relu Relu
└─── Input0 ─ conv2d/BiasAdd
8 max_pooling2d/MaxPool MaxPool
└─── Input0 ─ conv2d/Relu
9 conv2d_1/kernel Const
10 conv2d_1/kernel/read Identity
└─── Input0 ─ conv2d_1/kernel
11 conv2d_1/bias Const
12 conv2d_1/bias/read Identity
└─── Input0 ─ conv2d_1/bias
13 conv2d_2/convolution Conv2D
└─── Input0 ─ max_pooling2d/MaxPool
└─── Input1 ─ conv2d_1/kernel/read
14 conv2d_2/BiasAdd BiasAdd
└─── Input0 ─ conv2d_2/convolution
└─── Input1 ─ conv2d_1/bias/read
15 conv2d_2/Relu Relu
└─── Input0 ─ conv2d_2/BiasAdd
16 max_pooling2d_2/MaxPool MaxPool
└─── Input0 ─ conv2d_2/Relu
17 conv2d_2/kernel Const
18 conv2d_2/kernel/read Identity
└─── Input0 ─ conv2d_2/kernel
19 conv2d_2/bias Const
20 conv2d_2/bias/read Identity
└─── Input0 ─ conv2d_2/bias
21 conv2d_3/convolution Conv2D
└─── Input0 ─ max_pooling2d_2/MaxPool
└─── Input1 ─ conv2d_2/kernel/read
22 conv2d_3/BiasAdd BiasAdd
└─── Input0 ─ conv2d_3/convolution
└─── Input1 ─ conv2d_2/bias/read
23 conv2d_3/Relu Relu
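
For reference, here is a minimal sketch of how the dense layers are attached (the layer names match the log above; the shapes and unit counts are hypothetical):

import tensorflow as tf

x = tf.placeholder(tf.float32, [1, 28, 28, 1], name="input_layer")
conv = tf.layers.conv2d(x, filters=32, kernel_size=3, activation=tf.nn.relu)
pool = tf.layers.max_pooling2d(conv, pool_size=2, strides=2)

# The dense layers are applied directly to the rank-4 pooling output;
# tf.layers.dense then emits the dense/Tensordot and logit/Tensordot
# ops (Transpose + MatMul) that appear in the warnings.
dense = tf.layers.dense(pool, units=128, activation=tf.nn.relu, name="dense")
logits = tf.layers.dense(dense, units=10, name="logit")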

Note: I have also posted this question to the Qualcomm Developer Network, but it doesn't seem to have shown up, possibly because of a moderation queue.


Solution

  • I faced the same issue while using a dense layer (the tf.layers.dense API). The cause is a reshape operation that tf.layers.dense applies to the weights internally. The converter misinterprets this reshape as part of the model execution and tries to convert it to a layer, which it can't, since no input layers feed it.

    You can use a reshape (the tf.reshape API) between the convolution and the fully connected layer to flatten the tensor, and the conversion will work fine (see the sketch below).
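
    A minimal sketch of the workaround, reusing the hypothetical graph from the question: flattening the pooled tensor to rank 2 before the dense layer makes tf.layers.dense emit a plain MatMul + BiasAdd instead of Tensordot, which the converter can resolve:

    shape = pool.get_shape().as_list()
    flat = tf.reshape(pool, [-1, shape[1] * shape[2] * shape[3]])  # flatten to rank 2

    # On a rank-2 input, tf.layers.dense produces MatMul + BiasAdd rather
    # than Tensordot, so the SNPE converter can map it to a layer.
    dense = tf.layers.dense(flat, units=128, activation=tf.nn.relu, name="dense")
    logits = tf.layers.dense(dense, units=10, name="logit")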