I am trying to use Google Cloud Vision label detection for AEM assets. I am converting the assets into an input stream in the following way:
AssetManager assetMgr = resolver.adaptTo(AssetManager.class);
Asset myAsset = assetMgr.getAsset(payload);
Rendition myRen = myAsset.getRendition(payload + Constants.originalRendition);
InputStream is = myRen.getStream();
Once I get the stream, I send it to the label detection service, but I get the following response:
{
  "code" : 400,
  "errors" : [ {
    "domain" : "global",
    "message" : "Request must specify image and features.",
    "reason" : "badRequest"
  } ],
  "message" : "Request must specify image and features.",
  "status" : "INVALID_ARGUMENT"
}
Creating the InputStream the same way for Google face detection works fine.
It seems you are converting the image to a byte array using the IOUtils.toByteArray() method. Since the label detection code is very similar to the face detection code, except for the part where you convert the image to bytes, and you say face detection works fine, I'd try converting the image using the ByteString class instead. That class is used in the code example in the documentation for detecting labels here.
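A minimal sketch of that approach with the Cloud Vision Java client, assuming the google-cloud-vision library is on the classpath and that credentials are configured; the helper names are illustrative, not part of your code:

```java
import com.google.cloud.vision.v1.AnnotateImageRequest;
import com.google.cloud.vision.v1.AnnotateImageResponse;
import com.google.cloud.vision.v1.EntityAnnotation;
import com.google.cloud.vision.v1.Feature;
import com.google.cloud.vision.v1.Image;
import com.google.cloud.vision.v1.ImageAnnotatorClient;
import com.google.protobuf.ByteString;
import java.io.IOException;
import java.io.InputStream;
import java.util.Collections;

public class LabelDetect {

    // Build a label-detection request from the rendition's InputStream,
    // using ByteString (as in the documented example) instead of IOUtils.toByteArray().
    static AnnotateImageRequest buildLabelRequest(InputStream is) throws IOException {
        ByteString imgBytes = ByteString.readFrom(is);
        Image img = Image.newBuilder().setContent(imgBytes).build();
        Feature feat = Feature.newBuilder()
                .setType(Feature.Type.LABEL_DETECTION)
                .build();
        // Both the image and the features must be set on the request,
        // otherwise the API returns "Request must specify image and features."
        return AnnotateImageRequest.newBuilder()
                .addFeatures(feat)
                .setImage(img)
                .build();
    }

    static void detectLabels(InputStream is) throws IOException {
        AnnotateImageRequest request = buildLabelRequest(is);
        try (ImageAnnotatorClient client = ImageAnnotatorClient.create()) {
            for (AnnotateImageResponse res :
                    client.batchAnnotateImages(Collections.singletonList(request))
                          .getResponsesList()) {
                for (EntityAnnotation annotation : res.getLabelAnnotationsList()) {
                    System.out.println(annotation.getDescription() + " : " + annotation.getScore());
                }
            }
        }
    }
}
```

You would pass the stream from myRen.getStream() to detectLabels(); the request builder makes it explicit that both the image content and the LABEL_DETECTION feature are set before the call.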