I'm trying to create neural networks that add and subtract integers. My code is below:
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

model_add = Sequential(
    [
        Dense(10, activation='relu'),
        Dense(1, activation='relu')
    ]
)
model_subtract = Sequential(
    [
        Dense(10, activation='relu'),
        Dense(1, activation='relu')
    ]
)
model_add.compile(loss='mse', optimizer='adam')
model_subtract.compile(loss='mse', optimizer='adam')
batch_size = 32
epochs = 100
model_add.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=0)
model_subtract.fit(X_train, y_train, batch_size=batch_size, epochs=epochs, verbose=0)
print("Enter first num:")
x = input()
print("Enter operation:")
op = input()
print("Enter second num:")
y = input()
X = int(x)
Y = int(y)
if op == "+":
    predicted_sum = model_add.predict([X, Y])
    print(predicted_sum)
elif op == "-":
    predicted_sum = model_subtract.predict([X, Y])
    print(predicted_sum)
On the input 1, +, 2, this raises an error:
raise ValueError(f"Unrecognized data type: x={x} (of type {type(x)})")
ValueError: Unrecognized data type: x=[1, 2] (of type <class 'list'>)
Can someone explain why this happens and how to fix it?
The problem was solved by replacing [X, Y] with np.array([[X, Y]]) (with import numpy as np). Keras's predict expects a NumPy array (or tensor) whose first axis is the batch dimension; a plain Python list of two scalars is not a data type it recognizes, hence the ValueError. np.array([[X, Y]]) has shape (1, 2), i.e. one sample with two features, matching the two-feature layout the model was presumably trained on.
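Putting the fix together, here is a minimal runnable sketch for the addition model. The synthetic X_train/y_train (random integer pairs and their sums), the random seed, and the linear output layer (dropping the final relu so the same pattern would also work for subtraction, which can be negative) are my assumptions, not part of the original question:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Synthetic training data: 1000 integer pairs and their sums (assumed here).
rng = np.random.default_rng(0)
X_train = rng.integers(0, 100, size=(1000, 2)).astype("float32")
y_train = X_train.sum(axis=1)

model_add = Sequential([Dense(10, activation='relu'), Dense(1)])
model_add.compile(loss='mse', optimizer='adam')
model_add.fit(X_train, y_train, batch_size=32, epochs=100, verbose=0)

# predict() needs a batched NumPy array: shape (1, 2) = one sample, two features.
X, Y = 1, 2
pred = model_add.predict(np.array([[X, Y]]), verbose=0)
print(pred.shape)  # one prediction per sample: (1, 1)
```

The double brackets are the key detail: np.array([X, Y]) would have shape (2,), which Keras would read as two one-feature samples, while np.array([[X, Y]]) is one two-feature sample.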