I use torch-caffe-binding to convert a Caffe model to Torch. I want to delete the loss layer at the end and add other Torch layers instead. Can I just delete the loss layer in the .prototxt file, "train" the model to get the .caffemodel file, and then import it in Torch?
Also, the model reads its data from an lmdb data layer. When I call net:forward(input) to train the model, it only uses the data defined in the data layer and ignores the input tensor I pass in. So how can I train the model using the lmdb data?
The Caffe model has some custom layers, so I can't use loadcaffe to load the model in Torch.
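Roughly what I do now (file names are placeholders):

```lua
require 'caffe'

local net = caffe.Net('net.prototxt', 'net.caffemodel', 'test')

local input = torch.FloatTensor(1, 3, 224, 224)
-- the output does not change with `input`; the net still reads
-- from its own data layer
local output = net:forward(input)
```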
You have 3 issues here -
To have net:forward(input) consume your own tensor (e.g. batches you read from the lmdb yourself) rather than the data layer, connect your input to the first conv layer (assuming your first non-input layer is a conv layer). For example, say you have
layer {
  name: "input-data"
  type: "DummyData"
  top: "data"
  top: "im_info"
  dummy_data_param {
    shape { dim: 1 dim: 3 dim: 224 dim: 224 }
  }
}
and also
input: "data"
input_shape: {
  dim: 1
  dim: 3
  dim: 224
  dim: 224
}
and then
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"   # put "data" here instead of "input-data"
  top: "conv1"
  convolution_param {
    num_output: 96
    kernel_size: 3
    pad: 1
    stride: 1
  }
}
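With the data layer replaced by the input definition, net:forward(input) will actually consume your tensor, so you can read batches from the lmdb yourself on the Torch side. The net loaded by torch-caffe-binding behaves like an nn module, so you can also drop the loss layer from the prototxt and append your extra Torch layers in a container. A minimal sketch (file names and layer sizes are placeholders, adjust to your network):

```lua
require 'caffe'
require 'nn'

-- load the converted network (loss layer already removed from the prototxt)
local net = caffe.Net('deploy.prototxt', 'net.caffemodel', 'test')

-- append Torch layers after the Caffe part
local model = nn.Sequential()
model:add(net)
model:add(nn.Linear(4096, 10))  -- example sizes only
model:add(nn.LogSoftMax())

-- feed a batch you prepared yourself, e.g. read from the lmdb
local input = torch.FloatTensor(1, 3, 224, 224)
local output = model:forward(input)
```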