I implemented this Google Cloud Functions sample to moderate images uploaded to Firebase Storage: LINK. But I have a problem: the function flags almost every image I upload as inappropriate, even when they clearly aren't. This is the code of the function:
exports.blurOffensiveImages = functions.storage.object().onFinalize(async (object) => {
  if (object.name.startsWith(`${BLURRED_FOLDER}/`)) {
    console.log(`Ignoring upload "${object.name}" because it was already blurred.`);
    return null;
  }

  const visionClient = new vision.ImageAnnotatorClient();
  const data = await visionClient.safeSearchDetection(
    `gs://${object.bucket}/${object.name}`
  );
  const safeSearch = data[0].safeSearchAnnotation;
  console.log('SafeSearch results on image', safeSearch);

  if (
    safeSearch.adult !== VERY_UNLIKELY ||
    safeSearch.spoof !== VERY_UNLIKELY ||
    safeSearch.medical !== VERY_UNLIKELY ||
    safeSearch.violence !== VERY_UNLIKELY ||
    safeSearch.racy !== VERY_UNLIKELY
  ) {
    console.log('Offensive image found. Blurring.');
    // DO SOME STUFF
  }

  return null;
});
I've experienced the same behavior.
The interface is a bit misleading: in the TypeScript definitions I noticed that properties such as google.cloud.vision.v1.ISafeSearchAnnotation.adult have the type vision.protos.google.cloud.vision.v1.Likelihood | "UNKNOWN" | "VERY_UNLIKELY" | "UNLIKELY" | "POSSIBLE" | "LIKELY" | "VERY_LIKELY" | null | undefined.
So each property can hold either the Likelihood enum or the string name of the value. When I changed my conditions to compare against the string "VERY_UNLIKELY" instead of the enum, I got the expected results.
I'll most likely check for both the enum and the string, to guard against future changes in case the enum is ever returned.
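As a minimal sketch of that combined check (the isFlagged helper name is mine, and it assumes the Likelihood enum the @google-cloud/vision client exposes under vision.protos.google.cloud.vision.v1):

const vision = require('@google-cloud/vision');
const {Likelihood} = vision.protos.google.cloud.vision.v1;

// Returns true unless the likelihood is VERY_UNLIKELY, whether the client
// hands back the numeric enum value or its string name.
function isFlagged(likelihood) {
  return (
    likelihood !== Likelihood.VERY_UNLIKELY && // enum value (number)
    likelihood !== 'VERY_UNLIKELY'             // string name
  );
}

// Inside onFinalize, the condition then becomes something like:
// if (isFlagged(safeSearch.adult) || isFlagged(safeSearch.racy) || ...) { ... }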