Tags: tensorflow, machine-learning, serialization, protocol-buffers, tfrecord

TensorFlow parses the record incorrectly


I am trying to create TFRecords for my semantic segmentation dataset (rgb_image_in -> binary_raycast_out).

Below is my code to write the list of images to a TFRecord file named train.

    import sys
    import tensorflow as tf

    def _process_image_files(image_names, raycast_names):

        writer = tf.python_io.TFRecordWriter('train')

        # My implementation of JPEG/PNG decoding
        coder = ImageCoder()

        for i in range(len(image_names)):
            print('{}\n{}\n\n'.format(image_names[i], raycast_names[i]))

            image_buffer, im_height, im_width, im_channels = _process_image(image_names[i], coder)

            raycast_buffer, rc_height, rc_width, rc_channels = _process_image(raycast_names[i], coder)

            example = _convert_to_example(image_names[i], raycast_names[i], image_buffer, raycast_buffer, \
                                          im_height, im_width, im_channels)

            writer.write(example.SerializeToString())
        writer.close()
        sys.stdout.flush() 

    def _process_image(filename, coder):
        with tf.gfile.FastGFile(filename, 'rb') as f:
            image_data = f.read()

        # Convert any PNGs to JPEG for consistency.
        if _is_png(filename):
            print('Converting PNG to JPEG for %s' % filename)
            image_data = coder.png_to_jpeg(image_data)

        # Decode the RGB JPEG.
        image = coder.decode_jpeg(image_data)

        # Check that the image converted to RGB.
        assert len(image.shape) == 3
        height = image.shape[0]
        width = image.shape[1]
        channels = image.shape[2]
        assert channels == 3

        return image_data, height, width, channels


    def _convert_to_example(image_name, raycast_name, image_buffer, raycast_buffer,
                            sample_height, sample_width, sample_channels):

        example = tf.train.Example(features=tf.train.Features(feature={
            'height': _int64_feature(sample_height),
            'width': _int64_feature(sample_width),
            'channels': _int64_feature(sample_channels),
            'image/filename': _bytes_feature(tf.compat.as_bytes(image_name)),
            'image/encoded': _bytes_feature(tf.compat.as_bytes(image_buffer)),
            'raycast/filename': _bytes_feature(tf.compat.as_bytes(raycast_name)),
            'raycast/encoded': _bytes_feature(tf.compat.as_bytes(raycast_buffer))}))

        return example
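
For reference, the helpers used above (_int64_feature, _bytes_feature, _is_png) and the ImageCoder class are not shown in the question. They presumably follow the pattern of TensorFlow's Inception build_image_data.py example; a rough sketch under that assumption:

    # Assumed helpers, modeled on TensorFlow's Inception example
    # (build_image_data.py); not taken from the question itself.
    def _int64_feature(value):
        # Wrap an int (or list of ints) as an Int64List feature.
        if not isinstance(value, list):
            value = [value]
        return tf.train.Feature(int64_list=tf.train.Int64List(value=value))

    def _bytes_feature(value):
        # Wrap a bytes string as a BytesList feature.
        return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

    def _is_png(filename):
        # Assumed: decide by file extension.
        return filename.endswith('.png')

    class ImageCoder(object):
        # Small helper graph/session for PNG->JPEG conversion and JPEG decoding.
        def __init__(self):
            self._sess = tf.Session()
            self._png_data = tf.placeholder(dtype=tf.string)
            image = tf.image.decode_png(self._png_data, channels=3)
            self._png_to_jpeg = tf.image.encode_jpeg(image, format='rgb', quality=100)
            self._decode_jpeg_data = tf.placeholder(dtype=tf.string)
            self._decode_jpeg = tf.image.decode_jpeg(self._decode_jpeg_data, channels=3)

        def png_to_jpeg(self, image_data):
            return self._sess.run(self._png_to_jpeg,
                                  feed_dict={self._png_data: image_data})

        def decode_jpeg(self, image_data):
            return self._sess.run(self._decode_jpeg,
                                  feed_dict={self._decode_jpeg_data: image_data})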

The above code works fine for creating the TFRecord file. I put some print statements inside the _convert_to_example method to make sure the corresponding filenames (image_file & raycast_file) are written into the same example.

However, when I read the examples back from the TFRecord and print the filenames, the image_file & raycast_file names do not correspond. The pair of images read by the TFRecordReader() is wrong.

Below is my code to read the record:

    def parse_example_proto(example_serialized):

        feature_map = {
            'image/encoded': tf.FixedLenFeature([], dtype=tf.string, default_value=''),
            'raycast/encoded': tf.FixedLenFeature([], dtype=tf.string, default_value=''),
            'height': tf.FixedLenFeature([1], dtype=tf.int64, default_value=-1),
            'width': tf.FixedLenFeature([1], dtype=tf.int64, default_value=-1),
            'channels': tf.FixedLenFeature([1], dtype=tf.int64, default_value=-1),
            'image/filename': tf.FixedLenFeature([], dtype=tf.string, default_value=''),
            'raycast/filename': tf.FixedLenFeature([], dtype=tf.string, default_value='')
        }

        features = tf.parse_single_example(example_serialized, feature_map)

        return features['image/encoded'], features['raycast/encoded'], \
               features['height'], features['width'], features['channels'], \
               features['image/filename'], features['raycast/filename']



    def retrieve_samples():

        with tf.name_scope('batch_processing'):
            data_files = ['train']

            filename_queue = tf.train.string_input_producer(data_files, shuffle=False)

            reader = tf.TFRecordReader()

            _, example_serialized = reader.read(filename_queue)

            image_buffer, raycast_buffer, height, width, channels, \
                image_name, raycast_name = parse_example_proto(example_serialized)

            orig_image = tf.image.resize_images(
                tf.image.decode_jpeg(image_buffer, channels=3), [480, 856])
            orig_raycast = tf.image.resize_images(
                tf.image.decode_jpeg(raycast_buffer, channels=3), [480, 856])

            return image_name, raycast_name

Below is my code to print a pair of filenames:

    image_name, raycast_name = retrieve_samples()
    with tf.Session() as sess:
        for i in range(1):
            coord = tf.train.Coordinator()
            threads = tf.train.start_queue_runners(sess=sess, coord=coord)
            print(sess.run(image_name))
            print(sess.run(raycast_name))
            coord.request_stop()
            coord.join(threads)

I have spent a few days on this and cannot identify why I am not retrieving the correct pair. An example being retrieved should have the same data as the example that was created, right? Why am I seeing different name pairs when I read than when I wrote?

Any help would be appreciated.


Solution

  • A smaller example would have made this easier to answer, but the core problem is in how you print the names.

    Each session.run call evaluates the requested tensors by running the graph, and every run advances the reader to the next record in the queue. That means if you evaluate image_name and raycast_name in two separate sess.run calls, they come from two different records, so they won't be a pair.

    You could get the pair by evaluating both at the same time, e.g.:

        current_image_name, current_raycast_name = session.run([
            image_name, raycast_name
        ])
    

    I would also recommend using the newer tf.data Dataset API instead of the queue-based pipeline; a sketch follows.
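
    As an illustration, here is a minimal sketch of the same read path with tf.data (this assumes TF 1.4+ and reuses the 'train' file name and feature keys from the question; it is not the only way to structure it):

        # Minimal tf.data sketch (assumed setup, reusing the question's
        # 'train' file and feature keys). A single sess.run on the iterator
        # output fetches both names from the *same* example.
        def parse_fn(example_serialized):
            feature_map = {
                'image/filename': tf.FixedLenFeature([], dtype=tf.string, default_value=''),
                'raycast/filename': tf.FixedLenFeature([], dtype=tf.string, default_value='')
            }
            features = tf.parse_single_example(example_serialized, feature_map)
            return features['image/filename'], features['raycast/filename']

        dataset = tf.data.TFRecordDataset(['train']).map(parse_fn)
        iterator = dataset.make_one_shot_iterator()
        image_name, raycast_name = iterator.get_next()

        with tf.Session() as sess:
            print(sess.run([image_name, raycast_name]))  # a matched pair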