I am getting an IndexError when running the following code:
import tflearn
from tflearn.layers.conv import conv_2d, max_pool_2d
from tflearn.layers.core import input_data, dropout, fully_connected
from tflearn.layers.estimator import regression
import tflearn.datasets.mnist as mnist
X, Y, test_x, test_y = mnist.load_data(one_hot=True)
X = X.reshape([-1, 28, 28, 1])
test_x = X.reshape([-1, 28, 28, 1])
convnet = input_data(shape=[None, 28, 28, 1], name='input')
convnet = conv_2d(convnet, 32, 2, activation='relu')
convnet = max_pool_2d(convnet, 2)
convnet = conv_2d(convnet, 64, 2, activation='relu')
convnet = max_pool_2d(convnet, 2)
convnet = fully_connected(convnet, 1024, activation='relu')
convnet = dropout(convnet, 0.8)
convnet = fully_connected(convnet, 10, activation='softmax')
convnet = regression(convnet, optimizer='adam', learning_rate=0.01,
                     loss='categorical_crossentropy', name='targets')
model = tflearn.DNN(convnet)
model.fit({'input': X}, {'targets': Y}, n_epoch=10,
          validation_set=({'input': test_x}, {'targets': test_y}),
          snapshot_step=500, show_metric=True, run_id='mnist')
model.save('tflearncnn.model')
I cannot figure out where an index larger than 9999 (the size-10000 axis) is coming from, as I am not sure where the error is occurring.
Here is the error in my terminal:
---------------------------------
Run id: mnist
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 55000
Validation samples: 55000
--
Exception in thread Thread-5:oss: 0.13790 | time: 29.813s
Traceback (most recent call last):0 - acc: 0.9592 -- iter: 31936/55000
File "/usr/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/usr/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/usr/.local/lib/python3.6/site-packages/tflearn/data_flow.py", line 187, in fill_feed_dict_queue
data = self.retrieve_data(batch_ids)
File "/home/usr/.local/lib/python3.6/site-packages/tflearn/data_flow.py", line 222, in retrieve_data
utils.slice_array(self.feed_dict[key], batch_ids)
File "/home/usr/.local/lib/python3.6/site-packages/tflearn/utils.py", line 187, in slice_array
return X[start]
IndexError: index 10000 is out of bounds for axis 0 with size 10000
This happens when I reach the point where a new epoch is supposed to start, as shown when training step 499 is reached:
---------------------------------
Run id: mnist
Log directory: /tmp/tflearn_logs/
---------------------------------
Training samples: 55000
Validation samples: 55000
--
Training Step: 499 | total loss: 0.12698 | time: 27.880s
| Adam | epoch: 001 | loss: 0.12698 - acc: 0.9616 -- iter: 31936/55000
I have tried the following:
- Changing the size of snapshot_step
- Changing the number of units (n_units) in fully_connected()
- Changing nb_filter in conv_2d()
This is just a typo in your code: you reshape X a second time instead of reshaping test_x, so your validation 'input' ends up holding the 55000 reshaped training images while test_y still has only 10000 labels, which is why slicing the validation targets fails at index 10000.

test_x = X.reshape([-1, 28, 28, 1])

should be

test_x = test_x.reshape([-1, 28, 28, 1])
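If it helps, here is a minimal sketch of the corrected loading/reshaping step with sanity checks added (the assert lines are just for illustration, not part of your original code). After this change the validation set should report 10000 samples and the IndexError at the epoch boundary should go away:

import tflearn.datasets.mnist as mnist

X, Y, test_x, test_y = mnist.load_data(one_hot=True)

# Reshape the flat 784-pixel vectors into 28x28x1 images
X = X.reshape([-1, 28, 28, 1])
test_x = test_x.reshape([-1, 28, 28, 1])   # reshape test_x, not X again

# Sanity check: inputs and targets must have the same number of samples
assert X.shape[0] == Y.shape[0]            # 55000 training samples
assert test_x.shape[0] == test_y.shape[0]  # 10000 validation samples

With matching shapes, tflearn's utils.slice_array can slice the validation inputs and targets with the same batch indices, which is exactly what was failing in your traceback.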