
Tensorflow Dataset API memory error when trying to input large dataframe


I have a pandas dataframe of 350K rows and 200 columns. When trying to construct an input pipeline using the Dataset API, I get a memory error. When I input only, say, 10K rows everything works fine, but with all rows it does not. Also, when using tf.estimator.inputs.pandas_input_fn everything is OK.

Here's the code:

x_train, x_test, y_train, y_test = train_test_split(train, labels, test_size=0.25)

feature_columns = [tf.feature_column.numeric_column(c) for c in train.columns
                if train[c].dtype != 'object']

def train_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices((dict(x_train), y_train))
    dataset = dataset.shuffle(1000)
    dataset = dataset.batch(100)
    iterator = dataset.make_one_shot_iterator()
    return iterator.get_next()

model = tf.estimator.DNNClassifier(feature_columns=feature_columns, hidden_units=[20, 2])
model.train(input_fn=train_input_fn, steps=1000)

And the error message:

INFO:tensorflow:Calling model_fn.
INFO:tensorflow:Done calling model_fn.
INFO:tensorflow:Create CheckpointSaverHook.
INFO:tensorflow:Graph was finalized.
2018-09-28 14:42:03.736495: I T:\src\github\tensorflow\tensorflow\core\platform\cpu_feature_guard.cc:140] Your CPU supports instructions that this TensorFlow binary was not compiled to use: AVX2
2018-09-28 14:42:04.070692: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1356] Found device 0 with properties: 
name: GeForce GTX 1070 major: 6 minor: 1 memoryClockRate(GHz): 1.797
pciBusID: 0000:01:00.0
totalMemory: 8.00GiB freeMemory: 6.63GiB
2018-09-28 14:42:04.072060: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1435] Adding visible gpu devices: 0
2018-09-28 14:42:05.139979: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:923] Device interconnect StreamExecutor with strength 1 edge matrix:
2018-09-28 14:42:05.140271: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:929]      0 
2018-09-28 14:42:05.140461: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:942] 0:   N 
2018-09-28 14:42:05.141143: I T:\src\github\tensorflow\tensorflow\core\common_runtime\gpu\gpu_device.cc:1053] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 6401 MB memory) -> physical GPU (device: 0, name: GeForce GTX 1070, pci bus id: 0000:01:00.0, compute capability: 6.1)
INFO:tensorflow:Running local_init_op.
INFO:tensorflow:Done running local_init_op.
Traceback (most recent call last):
  File "C:/Users/.../test.py", line 150, in <module>
    nn.train(input_fn=train_input_fn, steps=10)
  ...
  File "C:\Users\...\google\protobuf\text_format.py", line 118, in getvalue
    return self._writer.getvalue()
MemoryError

I tried different batch sizes and network architectures, but the error persists.


Solution

  • Please see my answer here https://stackoverflow.com/a/56213870/31045 for code to create a generator when you have large input data.
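    The likely cause: Dataset.from_tensor_slices embeds the entire dataframe in the graph as constants, and serializing a graph that large overflows memory (graphs are also subject to the 2 GB protobuf limit). A generator feeds the data at run time instead. Below is a minimal sketch of the generator idea along those lines; `x_train`/`y_train` and the batch size are taken from the question, and the `tf.data.Dataset.from_generator` wiring shown in the comments is an assumption about how you would hook it up, not tested code from the linked answer.

    ```python
    import numpy as np

    def batch_generator(features, labels, batch_size=100):
        """Yield (features, labels) batches; only one batch is materialized at a time."""
        n = len(labels)
        for start in range(0, n, batch_size):
            end = min(start + batch_size, n)
            yield features[start:end], labels[start:end]

    # Hypothetical wiring into the Dataset API (assumes x_train is numeric):
    #   dataset = tf.data.Dataset.from_generator(
    #       lambda: batch_generator(x_train.values, y_train.values),
    #       output_types=(tf.float32, tf.int64))
    # The data is then pulled from Python at run time rather than being
    # baked into the serialized graph.

    # Small self-contained demo with NumPy arrays standing in for the dataframe:
    x = np.arange(20).reshape(10, 2).astype(np.float32)
    y = np.arange(10)
    batches = list(batch_generator(x, y, batch_size=4))
    print(len(batches))          # 3 batches: sizes 4, 4, 2
    print(batches[-1][0].shape)  # (2, 2)
    ```

    Note that with this pattern the generator already emits batches, so you would drop the later `dataset.batch(100)` call (or have the generator yield single rows and keep batching in the pipeline).
    
    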