Tags: neural-network, tensorflow, imagenet

Google Inception tensorflow.python.framework.errors.ResourceExhaustedError


When I run Google's Inception model in a loop over a list of images, I get the error below after about 100 images. It seems to be running out of memory. I'm running on a CPU. Has anyone else encountered this issue?

Traceback (most recent call last):
  File "clean_dataset.py", line 33, in <module>
    description, score = inception.run_inference_on_image(f.read())
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 178, in run_inference_on_image
    node_lookup = NodeLookup()
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 83, in __init__
    self.node_lookup = self.load(label_lookup_path, uid_lookup_path)
  File "/Volumes/EXPANSION/research/dcgan-transfer/data/classify_image.py", line 112, in load
    proto_as_ascii = tf.gfile.GFile(label_lookup_path).readlines()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 110, in readlines
    self._prereadline_check()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/lib/io/file_io.py", line 72, in _prereadline_check
    compat.as_bytes(self.__name), 1024 * 512, status)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/contextlib.py", line 24, in __exit__
    self.gen.next()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/tensorflow/python/framework/errors.py", line 463, in raise_exception_on_not_ok_status
    pywrap_tensorflow.TF_GetCode(status))
tensorflow.python.framework.errors.ResourceExhaustedError: /tmp/imagenet/imagenet_2012_challenge_label_map_proto.pbtxt


real    6m32.403s
user    7m8.210s
sys     1m36.114s

https://github.com/tensorflow/models/tree/master/inception


Solution

  • The issue is that you cannot simply import the original classify_image.py (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/models/image/imagenet/classify_image.py) in your own code, especially when you call it inside a huge loop to classify thousands of images in 'batch mode'.

    Look at the original code here:

    with tf.Session() as sess:
      # Some useful tensors:
      # 'softmax:0': A tensor containing the normalized prediction across
      #   1000 labels.
      # 'pool_3:0': A tensor containing the next-to-last layer containing 2048
      #   float description of the image.
      # 'DecodeJpeg/contents:0': A tensor containing a string providing JPEG
      #   encoding of the image.
      # Runs the softmax tensor by feeding the image_data as input to the graph.
      softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
      predictions = sess.run(softmax_tensor,
                             {'DecodeJpeg/contents:0': image_data})
      predictions = np.squeeze(predictions)

      # Creates node ID --> English string lookup.
      node_lookup = NodeLookup()

      top_k = predictions.argsort()[-FLAGS.num_top_predictions:][::-1]
      for node_id in top_k:
        human_string = node_lookup.id_to_string(node_id)
        score = predictions[node_id]
        print('%s (score = %.5f)' % (human_string, score))
    

    From the above you can see that each classification task creates a new instance of the NodeLookup class, which loads the following files:

    • label_lookup="imagenet_2012_challenge_label_map_proto.pbtxt"
    • uid_lookup_path="imagenet_synset_to_human_label_map.txt"

    So each instance is really large, and your code's loop then creates hundreds of instances of this class, which eventually results in 'tensorflow.python.framework.errors.ResourceExhaustedError'.
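
    In effect, the asker's clean_dataset.py loop looks something like the sketch below (a minimal reconstruction based on the traceback; image_paths is a hypothetical list of JPEG files, and run_inference_on_image is called with raw bytes exactly as the traceback shows):

    # Anti-pattern: every call to run_inference_on_image() constructs a fresh
    # NodeLookup internally, re-reading both multi-megabyte lookup files.
    import classify_image as inception  # the original script, assumed importable

    for path in image_paths:  # hypothetical list of JPEG file paths
        with open(path, 'rb') as f:
            description, score = inception.run_inference_on_image(f.read())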

    To get rid of this, I suggest writing a new script that adapts the classes and functions from 'classify_image.py', and avoiding instantiating the NodeLookup class on every iteration: instantiate it once, outside the loop, and reuse it inside. Something like this:

    with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
        print 'Making classifications:'

        # Creates node ID --> English string lookup (built once, outside the loop).
        node_lookup = NodeLookup(label_lookup_path=self.Model_Save_Path + self.label_lookup,
                                 uid_lookup_path=self.Model_Save_Path + self.uid_lookup_path)

        current_counter = 1
        for (tensor_image, image) in self.tensor_files:
            print 'On ' + str(current_counter)

            predictions = sess.run(softmax_tensor, {'DecodeJpeg/contents:0': tensor_image})
            predictions = np.squeeze(predictions)

            top_k = predictions.argsort()[-int(self.filter_level):][::-1]

            for node_id in top_k:
                human_string = node_lookup.id_to_string(node_id)
                score = predictions[node_id]
                print '%s (score = %.5f)' % (human_string, score)

            current_counter += 1
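
    The snippet above depends on the answerer's own class attributes (self.Model_Save_Path, self.tensor_files, self.filter_level, and so on). As a self-contained sketch of the same fix, assuming NodeLookup and create_graph can be imported from the original classify_image.py and that image_paths is a placeholder list of JPEG files:

    import numpy as np
    import tensorflow as tf
    from classify_image import NodeLookup, create_graph  # from the original script

    create_graph()  # load the frozen Inception graph once

    with tf.Session() as sess:
        softmax_tensor = sess.graph.get_tensor_by_name('softmax:0')
        node_lookup = NodeLookup()  # built once, reused for every image

        for path in image_paths:  # placeholder list of JPEG file paths
            with open(path, 'rb') as f:
                image_data = f.read()
            predictions = np.squeeze(
                sess.run(softmax_tensor, {'DecodeJpeg/contents:0': image_data}))
            for node_id in predictions.argsort()[-5:][::-1]:  # top-5 labels
                print('%s (score = %.5f)' % (node_lookup.id_to_string(node_id),
                                             predictions[node_id]))

    The key point is that both the graph import and the NodeLookup construction happen exactly once; only sess.run() executes inside the loop.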