I can recognize images and get predictions such as nsfw, sfw, etc. with the following sample code in Objective-C.
// Initialize the Clarifai app with your app's ID and secret.
ClarifaiApp *app = [[ClarifaiApp alloc] initWithAppID:@""
                                            appSecret:@""];

// Fetch Clarifai's general model.
[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
    // Create a Clarifai image from a UIImage.
    ClarifaiImage *clarifaiImage = [[ClarifaiImage alloc] initWithImage:image];

    // Use Clarifai's general model to predict tags for the given image.
    [model predictOnImages:@[clarifaiImage] completion:^(NSArray<ClarifaiOutput *> *outputs, NSError *error) {
        if (!error) {
            ClarifaiOutput *output = outputs[0];

            // Loop through predicted concepts (tags), and display them on the screen.
            NSMutableArray *tags = [NSMutableArray array];
            for (ClarifaiConcept *concept in output.concepts) {
                [tags addObject:concept.conceptName];
            }
            dispatch_async(dispatch_get_main_queue(), ^{
                self.textView.text = [NSString stringWithFormat:@"Tags:\n%@", [tags componentsJoinedByString:@", "]];
            });
        }
        dispatch_async(dispatch_get_main_queue(), ^{
            self.button.enabled = YES;
        });
    }];
}];
With this I can fetch a model and use it to run predictions on images.
Question:
How can I crop an image before prediction? I cannot find a way to reach the crop functionality available in Clarifai.
Any ideas?
You can create a crop using the ClarifaiCrop class and use it to initialize a ClarifaiImage.
ClarifaiCrop *crop = [[ClarifaiCrop alloc] initWithTop:0.1
                                                  left:0.1
                                                bottom:0.1
                                                 right:0.1];
Here top, left, bottom, and right are fractions (between 0 and 1) of the image's dimensions, measured from each border to the region of interest. In the example above the image would be cropped 10% from each margin; for a 1000×800 image, that keeps the central 800×640 region.
ClarifaiImage *clarifaiImage = [[ClarifaiImage alloc] initWithImage:image
                                                            andCrop:crop];
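Putting the two pieces together, here is a sketch of running a prediction on just the cropped region. It assumes the same `app`, `image`, and model name from the code in the question; only the ClarifaiImage initialization changes.

```objective-c
// Crop 10% from each margin, then run the general model on the cropped region.
ClarifaiCrop *crop = [[ClarifaiCrop alloc] initWithTop:0.1
                                                  left:0.1
                                                bottom:0.1
                                                 right:0.1];
ClarifaiImage *croppedImage = [[ClarifaiImage alloc] initWithImage:image
                                                           andCrop:crop];

[app getModelByName:@"general-v1.3" completion:^(ClarifaiModel *model, NSError *error) {
    // The prediction is now made against the cropped region of interest only.
    [model predictOnImages:@[croppedImage] completion:^(NSArray<ClarifaiOutput *> *outputs, NSError *error) {
        if (!error) {
            NSMutableArray *tags = [NSMutableArray array];
            for (ClarifaiConcept *concept in outputs[0].concepts) {
                [tags addObject:concept.conceptName];
            }
            NSLog(@"Tags for cropped image: %@", [tags componentsJoinedByString:@", "]);
        }
    }];
}];
```

The rest of the prediction flow is unchanged, so you can keep the same completion-block handling (updating the text view, re-enabling the button) as in your original code.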