Tags: python, ibm-watson, watson-studio

Watson Studio's Neural Network does not give the proper output format


I followed this tutorial on handwritten-digit recognition: https://www.youtube.com/watch?v=Gcn7l37Qhng. However, it does not cover deployment, so I deployed a web service myself, based on other IBM tutorials and examples.

I filled out the credentials properly and tried to convert my self-made 28x28 pictures into proper payloads with the following code snippet:

import urllib3, requests, json
from PIL import Image
import numpy as np

img = Image.open(filepath)  # filepath points to my 28x28 PNG

# Flatten the RGB pixel data and keep a single channel
img = np.array(img.getdata())
img = img[:, 1]

# Invert and normalize to [0, 1], shaped (28, 28, 1)
img_to_predict = 1.0 - (img.reshape(28, 28, 1) / 255)

img_to_predict = img_to_predict.astype("float32").tolist()
scoring_payload = {"values": [img_to_predict]}
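The snippet above turns a 28x28 RGB PNG into a single-channel, inverted, normalized array. A quick way to sanity-check the resulting payload shape, using a synthetic pixel array in place of a real image (the pixel values here are illustrative, not from my actual drawings):

```python
import numpy as np

# Synthetic stand-in for np.array(img.getdata()) on a 28x28 RGB PNG:
# white background (255) with a dark stroke somewhere.
pixels = np.full((28 * 28, 3), 255, dtype=np.uint8)
pixels[300:340, :] = 0  # a dark stroke

img = pixels[:, 1]                              # keep one channel
img_to_predict = 1.0 - (img.reshape(28, 28, 1) / 255)

# The scoring payload wraps one image, so the values have shape (1, 28, 28, 1)
print(np.shape([img_to_predict.tolist()]))      # (1, 28, 28, 1)
```

All values should land in [0, 1], with the stroke mapped near 1 after inversion.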

The request is then sent with the following code snippet:

payload_scoring = scoring_payload

# header holds the Content-Type and the WML authorization token
response_scoring = requests.post('https://us-south.ml.cloud.ibm.com/v3/wml_instances/****/deployments/****/online', json=payload_scoring, headers=header)
print("Scoring response")
print(json.loads(response_scoring.text))
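The `header` variable above is not shown; it is assumed to carry the bearer token for the WML instance. A minimal sketch of how it might be built (the token-retrieval step depends on your service credentials, so `mltoken` is a placeholder here, not a real call):

```python
# Placeholder: obtain an access token for your WML instance first;
# how you get it depends on your service credentials / IAM setup.
mltoken = "****"  # hypothetical token string

header = {
    "Content-Type": "application/json",
    "Authorization": "Bearer " + mltoken,
}
```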

I expected to receive probabilities, with the highest values at the indices of the correct classes. I drew a 0 and a 1 in Paint and sent the images to the web service. Instead of high probabilities at indices 0 and 1, I get these JSON responses, with little to no difference between them (I tried other numbers too, with the same results):

Scoring response
{'fields': ['prediction'], 'values': [[0.024692464619874954, 0.251592218875885, 0.1783675253391266, 0.07483808696269989, 0.10192563384771347, 0.09394937008619308, 0.06621485948562622, 0.13631191849708557, 0.033091891556978226, 0.03901601955294609]]}
Scoring response
{'fields': ['prediction'], 'values': [[0.024196961894631386, 0.2504081130027771, 0.18672968447208405, 0.078950896859169, 0.09495671093463898, 0.09053520858287811, 0.06100791320204735, 0.1424102932214737, 0.03167588636279106, 0.039128340780735016]]}

I tried deploying another model from the Machine Learning flow's example flow, but I got the same nonsensical results. The responses from the service do not correspond to the classes of the images I sent.

I tried the premade Neural Network Model with the given input from https://github.com/pmservice/wml-sample-models/blob/master/scikit-learn/hand-written-digits-recognition/test-data/mnist-scikit-learn-test-payload.json, but that is the only working data I could lay my hands on.

I also tried with the MNIST dataset's test data, but got the same results (Deep Learning - How can I test the MNIST tutorial model on WML?).

I have no idea where I messed up, any help would be much appreciated. Thanks in advance!


Solution

  • Okay, it turned out that I had messed up the input format: the data needs neither normalization nor inversion. All I had to do was create the inverted PNG files myself and change the

    img_to_predict = 1.0 - (img.reshape(28, 28, 1)/255)
    

    part to

    img_to_predict = img.reshape(28,28,1)
    

    Now I can send both my own pictures and sample digits from the MNIST test dataset to my deployed service.
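    For reference, a minimal sketch of the corrected preprocessing under those assumptions (a synthetic array stands in for the already-inverted PNG, so the pixel data here is illustrative):

    ```python
    import numpy as np

    # Synthetic stand-in for np.array(img.getdata()) on an inverted 28x28
    # RGB PNG: black background with a white digit stroke.
    pixels = np.zeros((28 * 28, 3), dtype=np.uint8)
    pixels[300:340, :] = 255  # a bright stroke

    img = pixels[:, 1]                       # keep one channel
    img_to_predict = img.reshape(28, 28, 1)  # raw 0-255 values, no scaling

    scoring_payload = {"values": [img_to_predict.tolist()]}
    ```

    The key difference from the original snippet is that the raw 0-255 pixel values are sent as-is: no `1.0 - ...` inversion and no division by 255.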