I'm trying to model the function f(x) = x^2 with a neural network built in tflearn. But even when I use multiple layers, when I plot some points from the model it always comes out as a straight line.
import numpy as np
from matplotlib import pyplot
import tflearn
# Training data: x in [0, 100), y = x^2
x = list()
y = list()
for i in range(100):
    x.append(float(i))
    y.append(float(i**2))
features = np.array(x).reshape(len(x),1)
labels = np.array(y).reshape(len(y), 1)
g = tflearn.input_data(shape=[None, 1])
g = tflearn.fully_connected(g, 128)
g = tflearn.fully_connected(g, 64)
g = tflearn.fully_connected(g, 1)
g = tflearn.regression(g, optimizer='sgd', learning_rate=0.01,
                       loss='mean_square')
# Model training
m = tflearn.DNN(g)
m.fit(features, labels, n_epoch=100, snapshot_epoch=False)
# Collect model predictions over the same range
x = list()
y = list()
for i in range(100):
    x.append(i)
    y.append(m.predict([[i]])[0][0])
pyplot.plot(x, y)
pyplot.plot(features, labels)
pyplot.show()
The green line is an x^2 graph and the blue line is the model.
By default, tflearn.fully_connected uses activation='linear', and a composition of linear layers is itself linear, so no matter how many layers you stack, the network can only represent a linear function.
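To see why, write out two stacked linear layers with no activation in between (the weights W1, W2 and biases b1, b2 here are just illustrative symbols):

W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2)

which collapses to a single affine map, so extra depth buys no extra expressive power.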
Try a nonlinear activation in the hidden layers, for example tflearn.fully_connected(g, 128, activation='tanh'), and leave the output layer with activation='linear' so the prediction isn't squashed into tanh's [-1, 1] range.
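Here is a minimal sketch of the adjusted model. The input/target scaling to [0, 1], the higher learning rate, and the longer training run are assumptions added here (not part of your original setup) to keep the tanh units out of saturation, since unscaled targets up to 99^2 make training unstable:

import numpy as np
import tflearn

# Same data as before, scaled to [0, 1] (an added assumption, see above)
x = np.arange(100, dtype=np.float32).reshape(-1, 1)
y = x ** 2
x_scaled = x / x.max()
y_scaled = y / y.max()

g = tflearn.input_data(shape=[None, 1])
g = tflearn.fully_connected(g, 128, activation='tanh')  # nonlinear hidden layer
g = tflearn.fully_connected(g, 64, activation='tanh')   # nonlinear hidden layer
g = tflearn.fully_connected(g, 1, activation='linear')  # linear output, no squashing
g = tflearn.regression(g, optimizer='sgd', learning_rate=0.1,
                       loss='mean_square')

m = tflearn.DNN(g)
m.fit(x_scaled, y_scaled, n_epoch=1000, snapshot_epoch=False)

# Predict the whole grid in one call and undo the target scaling
pred = np.array(m.predict(x_scaled)) * y.max()

With the nonlinearity in place the network can actually bend, and the predicted curve should track x^2 instead of a straight line.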