I'm using Google Cloud Vision to detect text on an image. This works about 80% of the time. The other 20%, I get this error:
Error: 3 INVALID_ARGUMENT: Request must specify image and features.
at Object.callErrorFromStatus (C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\call.js:31:26)
at Object.onReceiveStatus (C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\client.js:180:52)
at Object.onReceiveStatus (C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\client-interceptors.js:336:141)
at Object.onReceiveStatus (C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\client-interceptors.js:299:181)
at C:\Users\emily\workspace\bot\node_modules\@grpc\grpc-js\build\src\call-stream.js:160:78
at processTicksAndRejections (node:internal/process/task_queues:78:11) {
code: 3,
details: 'Request must specify image and features.',
metadata: Metadata { internalRepr: Map(0) {}, options: {} },
note: 'Exception occurred in retry method that was not classified as transient'
}
When I googled this issue, it seems I need to send specific request parameters to resolve it, as described here: https://cloud.google.com/vision/docs/ocr#specify_the_language_optional
However, I have no idea how to send these request parameters from the Node.js code I'm using, and I can't find any examples anywhere. Can someone please help me figure out how to do this? My current code is this:
// Performs text detection on the image file using GCV
(async () => {
  await Jimp.read(attachment.url).then(image => {
    return image
      .invert()
      .contrast(0.5)
      .brightness(-0.25)
      .write('temp.png');
  });
  const [result] = await googleapis.textDetection('temp.png');
  const fullImageResults = result.textAnnotations;
  // ...
})();
Thanks!
If you are using Node.js with the Vision API, you can refer to this quickstart sample for using the Node.js client library with TEXT_DETECTION.
For the error you are facing, you can refer to the code below to add the request parameters:
index.js :
async function quickstart() {
  const vision = require('@google-cloud/vision');
  const client = new vision.ImageAnnotatorClient();

  const request = {
    "requests": [
      {
        "image": {
          "source": {
            "imageUri": "gs://bucket1/download.png"
          }
        },
        "features": [
          {
            "type": "TEXT_DETECTION"
          }
        ],
        "imageContext": {
          "languageHints": ["en"]
        }
      }
    ]
  };

  const [result] = await client.batchAnnotateImages(request);
  const detections = result.responses[0].fullTextAnnotation;
  console.log(detections.text);
}

quickstart().catch(console.error);
In the above code I have stored the image in GCS and used the path of that image (gs://bucket1/download.png) in the request.
Image : (the sample image, omitted here; it contains the text shown in the output below)
Output :
It was the best of
times, it was the worst
of times, it was the age
of wisdom, it was the
age of foolishness...
If you want to use an image file stored on your local system, you can refer to the code below.
Since your file is on the local system, you first need to convert it to a base64-encoded string and pass that string in the request parameters in your code.
index.js :
async function quickstart() {
  const vision = require('@google-cloud/vision');
  const client = new vision.ImageAnnotatorClient();

  const request = {
    "requests": [
      {
        "image": {
          "content": "/9j/7QBEUGhvdG9...image contents...eYxxxzj/Coa6Bax//Z"
        },
        "features": [
          {
            "type": "TEXT_DETECTION"
          }
        ],
        "imageContext": {
          "languageHints": ["en"]
        }
      }
    ]
  };

  const [result] = await client.batchAnnotateImages(request);
  const detections = result.responses[0].fullTextAnnotation;
  console.log(detections.text);
}

quickstart().catch(console.error);