I'm building a model to perform a linear regression of the equation y = mx + c. I generated a CSV file of 1999 samples and a model where I can change the normalization (on/off), the number of layers, the number of nodes, and the number of epochs. I expected to be able to use the training and evaluation loss/accuracy to guide the selection of these variables in situations where I do not know the answer in advance, but I am confused by my results so far, which are summarized below (a sketch of the data generation follows the table):
| Normalization | Layers | Nodes | Epochs | Start Loss | End Loss | Accuracy |
|---------------|--------|-------|--------|------------|----------|----------|
| TRUE          | 1      | 200   | 5      | 0.6022     | 0.4348   | 0        |
| TRUE          | 1      | 200   | 50     | 0.5963     | 0.4347   | 0        |
| TRUE          | 10     | 200   | 5      | 0.5249     | 0.4525   | 0        |
| TRUE          | 10     | 200   | 50     | 0.5157     | 0.4418   | 0        |
| TRUE          | 10     | 500   | 5      | 0.5816     | 0.4825   | 0        |
| TRUE          | 10     | 500   | 50     | 0.5591     | 0.4422   | 0        |
| FALSE         | 1      | 200   | 5      | 996.2897   | 1.8313   | 0        |
| FALSE         | 1      | 200   | 50     | 1063.1994  | 1.7264   | 0        |
| FALSE         | 10     | 200   | 5      | 421.1371   | 40.6160  | 0        |
| FALSE         | 10     | 200   | 50     | 293.6943   | 46.2854  | 0        |
| FALSE         | 10     | 500   | 5      | 382.2659   | 297.2881 | 0        |
| FALSE         | 10     | 500   | 50     | 412.2182   | 79.7649  | 0        |
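For context, the data generation looks roughly like this (a minimal sketch; the slope, intercept, noise level, and filename below are illustrative placeholders, not necessarily the actual values used):

```python
# Illustrative sketch of generating a y = m*x + c dataset as a CSV.
# The slope, intercept, noise level, and filename are placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
m, c = 3.0, 7.0                          # placeholder slope and intercept
x = rng.uniform(-10.0, 10.0, size=1999)  # 1999 samples, as described above
y = m * x + c + rng.normal(0.0, 0.5, size=x.shape)  # optional small Gaussian noise

pd.DataFrame({"x": x, "y": y}).to_csv("linear_data.csv", index=False)
```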
The compile parameters I am using are:
model.compile(optimizer='adam', loss='mean_absolute_error', metrics=['accuracy'], loss_weights=[1.0])
An example model summary is:
Model: "LRmodel"
_________________________________________________________________
Layer (type) Output Shape Param #
LR-input (InputLayer) [(None, 1)] 0
_________________________________________________________________
dense (Dense) (None, 200) 400
_________________________________________________________________
ML-LinearRegression (Dense) (None, 1) 201
Total params: 601
Trainable params: 601
Non-trainable params: 0
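A model matching this summary can be built roughly as follows. This is only a sketch using the Keras functional API: the layer names mirror the summary, the hidden-layer activation is not visible in the summary so none is assumed, and the comment about a Normalization layer is my assumption about how the normalization on/off switch might be implemented.

```python
import tensorflow as tf

# Sketch of a model matching the summary above: Input(1) -> Dense(200) -> Dense(1).
# Parameter counts check out: 1*200 + 200 = 400 and 200*1 + 1 = 201, total 601.
inputs = tf.keras.Input(shape=(1,), name="LR-input")
hidden = tf.keras.layers.Dense(200, name="dense")(inputs)
# For the normalization-on runs, a tf.keras.layers.Normalization layer adapted to
# the training data could be inserted after the input (assumption, not shown above).
outputs = tf.keras.layers.Dense(1, name="ML-LinearRegression")(hidden)
model = tf.keras.Model(inputs=inputs, outputs=outputs, name="LRmodel")
model.summary()
```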
An example fitting result is:
Epoch 1/5
1600/1600 - 1s - loss: 1063.1994 - accuracy: 0.0000e+00 - val_loss: 90.2848 - val_accuracy: 0.0000e+00
Epoch 2/5
1600/1600 - 0s - loss: 137.8654 - accuracy: 0.0000e+00 - val_loss: 2.1525 - val_accuracy: 0.0000e+00
Epoch 3/5
1600/1600 - 0s - loss: 4.4340 - accuracy: 0.0000e+00 - val_loss: 3.4557 - val_accuracy: 0.0000e+00
Epoch 4/5
1600/1600 - 0s - loss: 1.7573 - accuracy: 0.0000e+00 - val_loss: 3.1190 - val_accuracy: 0.0000e+00
Epoch 5/5
1600/1600 - 0s - loss: 1.7264 - accuracy: 0.0000e+00 - val_loss: 3.2794 - val_accuracy: 0.0000e+00
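Continuing the sketches above, a fit call of roughly this shape produces per-epoch log lines like those shown; the batch size and validation split are placeholder assumptions, not necessarily the values used for the table.

```python
# Sketch of a training call that yields one summary log line per epoch.
import pandas as pd

df = pd.read_csv("linear_data.csv")  # placeholder filename from the earlier sketch
history = model.fit(
    df["x"].values,
    df["y"].values,
    epochs=5,
    validation_split=0.2,  # hold out part of the 1999 samples for val_loss
    verbose=2,             # one summary line per epoch, as in the log above
)
```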
Additionally, there are two issues I do not understand.
You do not show your model. However, if you are doing linear regression you should not use accuracy as a metric. Accuracy is used when you are doing classification, such as trying to classify whether an image is a dog or a cat. In model.compile you should use a loss function that is appropriate for linear regression, such as tf.keras.losses.MeanSquaredError. Documentation for regression losses and regression metrics is in the TensorFlow API reference under tf.keras.losses and tf.keras.metrics.
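As a concrete sketch of that suggestion, a compile call along these lines uses a regression-appropriate loss and metrics; the specific metric choices are one reasonable option among several:

```python
import tensorflow as tf

# Regression-appropriate compile settings: drop 'accuracy', which only applies
# to classification, and track error-based metrics instead.
model.compile(
    optimizer="adam",
    loss=tf.keras.losses.MeanSquaredError(),
    metrics=[
        tf.keras.metrics.MeanAbsoluteError(),
        tf.keras.metrics.RootMeanSquaredError(),
    ],
)
```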