Tags: python, python-3.x, tensorflow, deep-learning, tfrecord

Converting TFRecords back into JPEG Images


I have a directory that contains hundreds of TFRecord files. Each TFRecord file contains 1,024 records, and each record is an Example proto with the following fields:

image/height: integer, image height in pixels
image/width: integer, image width in pixels
image/colorspace: string, specifying the colorspace, always 'RGB'
image/channels: integer, specifying the number of channels, always 3
image/class/label: integer, specifying the index in a normalized classification layer
image/class/raw: integer, specifying the index in the raw (original) classification layer
image/class/source: integer, specifying the index of the source (creator of the image)
image/class/text: string, specifying the human-readable version of the normalized label
image/format: string, specifying the format, always 'JPEG'
image/filename: string containing the basename of the image file
image/id: integer, specifying the unique id for the image
image/encoded: string, containing JPEG encoded image in RGB colorspace
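
In other words, each record is a standard tf.train.Example proto with those keys. For context, here is a minimal sketch of how one such record would be assembled (placeholder values, and only some of the fields shown; the rest follow the same int64/bytes pattern):

    import tensorflow as tf

    # Placeholder example; the real records come from the MARCO dataset.
    with open('some_image.jpg', 'rb') as f:
        jpeg_bytes = f.read()

    example = tf.train.Example(features=tf.train.Features(feature={
        'image/height': tf.train.Feature(int64_list=tf.train.Int64List(value=[600])),
        'image/width': tf.train.Feature(int64_list=tf.train.Int64List(value=[800])),
        'image/colorspace': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'RGB'])),
        'image/format': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'JPEG'])),
        'image/filename': tf.train.Feature(bytes_list=tf.train.BytesList(value=[b'some_image.jpg'])),
        'image/encoded': tf.train.Feature(bytes_list=tf.train.BytesList(value=[jpeg_bytes])),
    }))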

I have these TFRecord files stored in the directory /Data/train. Is there a straightforward way in Python to convert the images inside these TFRecords back to JPEG format and save them to another directory, /data/image? I've looked at the TensorFlow docs, which seem painful, and also at this script, which converts a TFRecord to an array, but I was running into issues. Any help, corrections, or feedback would be very appreciated! Thank you.

The data I'm working with is the MARCO image data:

https://marco.ccr.buffalo.edu/download


Solution

  • I got this to work for viewing a single TFRecord file. I'm still working on a loop over multiple TFRecord files (a rough sketch of one way to do that, and to write the JPEGs back out, follows the code below):

    import tensorflow as tf  # uses the TF 1.x queue-based input pipeline

    # Read and print data:
    sess = tf.InteractiveSession()

    # Read TFRecord file
    reader = tf.TFRecordReader()
    filename_queue = tf.train.string_input_producer(
        ['marcoTrainData00001.tfrecord'])
    _, serialized_example = reader.read(filename_queue)
    
    # Define features
    read_features = {
        'image/height': tf.FixedLenFeature([], dtype=tf.int64),
        'image/width': tf.FixedLenFeature([], dtype=tf.int64),
        'image/colorspace': tf.FixedLenFeature([], dtype=tf.string),
        'image/class/label': tf.FixedLenFeature([], dtype=tf.int64),
        'image/class/raw': tf.FixedLenFeature([], dtype=tf.int64),
        'image/class/source': tf.FixedLenFeature([], dtype=tf.int64),
        'image/class/text': tf.FixedLenFeature([], dtype=tf.string),
        'image/format': tf.FixedLenFeature([], dtype=tf.string),
        'image/filename': tf.FixedLenFeature([], dtype=tf.string),
        'image/id': tf.FixedLenFeature([], dtype=tf.int64),
        'image/encoded': tf.FixedLenFeature([], dtype=tf.string)
    }
    
    # Extract features from serialized data
    read_data = tf.parse_single_example(serialized=serialized_example,
                                        features=read_features)
    
    # Many tf.train functions use tf.train.QueueRunner,
    # so we need to start it before we read
    tf.train.start_queue_runners(sess)
    
    # Print features
    for name, tensor in read_data.items():
        print('{}: {}'.format(name, tensor.eval()))
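
Since image/encoded already holds the JPEG-compressed bytes, converting back to .jpg files is really just a matter of writing those bytes out. Below is a rough sketch of a loop over multiple TFRecord files that does this, assuming TF 1.x and that the files match the pattern *.tfrecord; tf.python_io.tf_record_iterator reads the serialized Example protos directly, with no graph or session needed:

    import os
    import glob
    import tensorflow as tf

    input_dir = '/Data/train'    # directory holding the .tfrecord files
    output_dir = '/data/image'   # where the .jpg files should go
    os.makedirs(output_dir, exist_ok=True)

    for tfrecord_path in glob.glob(os.path.join(input_dir, '*.tfrecord')):
        # Iterate over the serialized Example protos in this file.
        for serialized in tf.python_io.tf_record_iterator(tfrecord_path):
            example = tf.train.Example()
            example.ParseFromString(serialized)
            features = example.features.feature

            # image/encoded already contains JPEG bytes, so no decoding or
            # re-encoding is needed; write them straight to disk.
            jpeg_bytes = features['image/encoded'].bytes_list.value[0]
            filename = features['image/filename'].bytes_list.value[0].decode('utf-8')

            with open(os.path.join(output_dir, filename), 'wb') as out_file:
                out_file.write(jpeg_bytes)

If the basenames in image/filename are not unique across files, prefixing each output name with the record's image/id (an int64 feature) would avoid overwrites.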