I created a dataset and converted it into a TFRecords file. This is part of the code I used to write the file:
example = tf.train.Example(features=tf.train.Features(feature={
    'height': _int64_feature(rows),
    'width': _int64_feature(cols),
    'depth': _int64_feature(depth),
    'label': _int64_feature(int(labels[index])),
    'name': _bytes_feature(imagePaths[index].encode(encoding='utf-8')),
    'image_raw': _bytes_feature(imageRaw.tostring())}))
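For completeness, _int64_feature and _bytes_feature are the usual wrappers from the TensorFlow examples, and the examples are written out with a TFRecordWriter. A rough sketch of that part of my writer script (the actual loop over the images is omitted):

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

writer = tf.python_io.TFRecordWriter('./train.tfrecords')
# ... build `example` as above for each image ...
writer.write(example.SerializeToString())
writer.close()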
The data in the records reads back just fine when I use TensorFlow's python_io module, and everything is identical to the original images and labels. But when I try to read the file inside a graph, I get the following error message:
OutOfRangeError (see above for traceback): RandomShuffleQueue '_0_shuffle_batch/random_shuffle_queue' is closed and has insufficient elements (requested 10, current size 0)
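For reference, this is roughly how I verify the records with python_io (a sketch; the comparison against the original images is left out here):

import numpy as np
import tensorflow as tf

for record in tf.python_io.tf_record_iterator('./train.tfrecords'):
    example = tf.train.Example()
    example.ParseFromString(record)
    feats = example.features.feature
    label = feats['label'].int64_list.value[0]
    raw = feats['image_raw'].bytes_list.value[0]
    image = np.frombuffer(raw, dtype=np.uint8).reshape(64, 64, 3)
    print(label, image.shape)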
Other threads suggested that the queue shuts down when a faulty element is added, so I scrapped all but the essential reshape and cast. The error persists. Here is my test code:
testInput.py
import tensorflow as tf

def inputs(dataDir):
    feature = {'image_raw': tf.FixedLenFeature([], tf.string),
               'label': tf.FixedLenFeature([], tf.int64)}
    # Create a list of filenames and pass it to a queue
    filename_queue = tf.train.string_input_producer([dataDir], num_epochs=1)
    # Define a reader and read the next record
    reader = tf.TFRecordReader()
    _, serialized_example = reader.read(filename_queue)
    # Decode the record read by the reader
    features = tf.parse_single_example(serialized_example, features=feature)
    # Convert the image data from string back to numbers
    image = tf.decode_raw(features['image_raw'], tf.uint8)
    # Cast label data into int32
    label = tf.cast(features['label'], tf.int32)
    # Reshape image data into the original shape
    image = tf.reshape(image, [64, 64, 3])
    # Any preprocessing here ...
    # Create batches by randomly shuffling tensors
    images, labels = tf.train.shuffle_batch([image, label],
                                            batch_size=10,
                                            capacity=30,
                                            num_threads=1,
                                            min_after_dequeue=10)
    return images, labels
test.py
import tensorflow as tf
from PIL import Image
import skimage.io as io
import testInput as data
import numpy as np
images, labels = data.inputs('./train.tfrecords')
with tf.Session() as sess:
    tf.local_variables_initializer()
    tf.global_variables_initializer()
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    img, lab = sess.run([images, labels])
    print(img[0, :, :, :].shape)
    io.imshow(img[0, :, :, :])
    io.show()
    input('Press key...')
    coord.join(threads)
    sess.close()
I made sure that the shape of the image really is [64, 64, 3] and even tried leaving it in its one-dimensional shape, but I still get the error. I am out of ideas, so I am asking for your help. Thanks in advance.
Forgot to run the initialization ops. Never mind -.-
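For anyone who hits the same error: the OutOfRangeError goes away once the initializers are actually run before starting the queue runners, roughly like this (num_epochs in string_input_producer creates a local variable, so the local initializer is needed as well):

with tf.Session() as sess:
    # Actually run the init ops instead of only creating them
    sess.run([tf.global_variables_initializer(),
              tf.local_variables_initializer()])
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    img, lab = sess.run([images, labels])
    coord.request_stop()
    coord.join(threads)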