I've been working with a basic LSTM neural network to explore its capacity to forecast stock market behavior, and the accuracy function the code is built around outputs a score I don't fully understand how to interpret. It returns a value between 0 and 1, but I couldn't explain to someone whether that value is good or bad, or why.
Any help would be greatly appreciated!
Here's the function that calculates the accuracy score:
import numpy as np
from sklearn.metrics import accuracy_score

def accuracy(model, data):
    y_test = data["y_test"]
    X_test = data["X_test"]
    y_pred = model.predict(X_test)
    # Undo the column scaling so both arrays are back in price units
    y_test = np.squeeze(data["column_scaler"]["Close"].inverse_transform(np.expand_dims(y_test, axis=0)))
    y_pred = np.squeeze(data["column_scaler"]["Close"].inverse_transform(y_pred))
    # Convert prices into binary direction labels: 1 if the price LOOKUP_STEP
    # steps ahead is higher than the current one, else 0
    y_pred = list(map(lambda current, future: int(float(future) > float(current)),
                      y_test[:-LOOKUP_STEP], y_pred[LOOKUP_STEP:]))
    y_test = list(map(lambda current, future: int(float(future) > float(current)),
                      y_test[:-LOOKUP_STEP], y_test[LOOKUP_STEP:]))
    # Fraction of steps where predicted direction matches actual direction
    return accuracy_score(y_test, y_pred)
print(str(LOOKUP_STEP) + ":", "Accuracy Score:", accuracy(model, data))
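To make that concrete, here's a minimal sketch of what the two map() lines are doing, with made-up prices and LOOKUP_STEP = 1 (both are assumptions, not your actual data): each series is reduced to 1/0 "price went up / didn't go up" labels, and the score is the fraction of steps where the predicted direction agrees with the actual one.

    import numpy as np
    from sklearn.metrics import accuracy_score

    LOOKUP_STEP = 1                               # assumed for illustration
    y_test = np.array([10.0, 11.0, 10.5, 12.0])  # made-up actual closing prices
    y_pred = np.array([10.2, 10.8, 11.0, 11.5])  # made-up predicted prices

    # Direction labels: did the price rise LOOKUP_STEP steps later?
    pred_dir = [int(f > c) for c, f in zip(y_test[:-LOOKUP_STEP], y_pred[LOOKUP_STEP:])]
    true_dir = [int(f > c) for c, f in zip(y_test[:-LOOKUP_STEP], y_test[LOOKUP_STEP:])]

    print(pred_dir)                          # [1, 0, 1]
    print(true_dir)                          # [1, 0, 1]
    print(accuracy_score(true_dir, pred_dir))  # 1.0 -> every direction call was right

So the score is not about how close the predicted prices are; it only measures how often the model gets the up/down direction right.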
According to scikit-learn's documentation, the accuracy score computed here is "either the fraction (default) or the count (normalize=False) of correct predictions".
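A quick illustration of that fraction/count distinction, using toy labels rather than your data:

    from sklearn.metrics import accuracy_score

    y_true = [1, 0, 1, 1]
    y_pred = [1, 0, 0, 1]

    print(accuracy_score(y_true, y_pred))                   # 0.75 -> fraction of correct predictions
    print(accuracy_score(y_true, y_pred, normalize=False))  # 3    -> count of correct predictions

Since your labels are binary up/down moves, a useful sanity check is that a coin flip would score around 0.5, so the score only starts to mean something once it is reliably above that baseline.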