I'm writing a multi-threaded face recognition program using Keras as the high-level model library with TensorFlow as the backend. The code is as below:
class FaceRecognizerTrainThread(QThread):
    def run(self):
        print("[INFO] Loading images...")
        images, org_labels, face_classes = FaceRecognizer.load_train_file(self.train_file)

        print("[INFO] Compiling Model...")
        opt = SGD(lr=0.01)
        face_recognizer = LeNet.build(width=Const.FACE_SIZE[0], height=Const.FACE_SIZE[1],
                                      depth=Const.FACE_IMAGE_DEPTH,
                                      classes=face_classes, weightsPath=None)
        face_recognizer.compile(loss="categorical_crossentropy", optimizer=opt, metrics=["accuracy"])
        images = np.array(images)[:, np.newaxis, :, :] / 255.0
        labels = np_utils.to_categorical(org_labels, face_classes)

        print("[INFO] Training model...")
        try:
            face_recognizer.fit(images, labels, epochs=50, verbose=2, batch_size=10)
        except Exception as e:
            print(e)
        print("[INFO] Training model done...")

        save_name = "data/CNN_" + time.strftime("%Y%m%d%H%M%S", time.localtime()) + ".hdf5"
        if save_name:
            face_recognizer.save(save_name)
        self.signal_train_end.emit(save_name)
Everything works when I run it normally, but when I run it in a QThread, as soon as it reaches

    face_recognizer.fit(images, labels, epochs=50, verbose=2, batch_size=10)

it gives me the error:

    Cannot interpret feed_dict key as Tensor: Tensor Tensor("conv2d_1_input:0", shape=(?, 1, 30, 30), dtype=float32) is not an element of this graph.

How can I fix it? Any suggestions are welcome, thank you very much!
TensorFlow allows you to define a tf.Graph(), create a tf.Session() for that graph, and then run the operations defined in the graph. When you do it this way, each QThread tries to create its own TF graph, which is why you get the "not an element of this graph" error.
I don't see your feed_dict code, so I would assume it runs in the main thread where your other threads cannot see it. Including your feed_dict in each thread might make it work, but it's hard to say for sure without seeing your full code.
The post Replicating models in Keras and Tensorflow for a multi-threaded setting might also help you.
To solve your problem, you should use something similar to the approach in this post. Code reproduced from that post:
import threading
import tensorflow as tf

# Thread body: loop until the coordinator indicates a stop was requested.
# If some condition becomes true, ask the coordinator to stop.
def MyLoop(coord):
    while not coord.should_stop():
        ...do something...
        if ...some condition...:
            coord.request_stop()

# Main thread: create a coordinator.
coord = tf.train.Coordinator()

# Create 10 threads that run 'MyLoop()'.
threads = [threading.Thread(target=MyLoop, args=(coord,)) for i in range(10)]

# Start the threads and wait for all of them to stop.
for t in threads:
    t.start()
coord.join(threads)
It is also worth reading about inter_op and intra_op parallelism here.
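Both of those knobs are set through the session config in TF 1.x; the thread counts below are just example values, not a recommendation:

```python
import tensorflow as tf

# inter_op: thread pool for running independent ops in parallel;
# intra_op: thread pool used inside a single op (e.g. a large matmul).
config = tf.ConfigProto(inter_op_parallelism_threads=2,
                        intra_op_parallelism_threads=4)
sess = tf.Session(config=config)
```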