I am trying to crop faces out of the original captured image, using the rect values to identify the target areas and creating bitmaps of just the detected face areas. This works :)
The issue is:
When an image contains more than one face, the for loop in the onSuccess method, which shows an alert dialog asking for user input for each cropped face's filename, seems to keep looping before the alert dialog's onClick() has completed. The code that saves each face is only fired once the alert dialog's onClick (OK) method is called.
The code currently saves only one of the cropped faces: the different user inputs are handled correctly in the individual alert dialogs, but only the last face in the list is saved.
I think the for loop continues looping after each alert dialog is triggered, but before the user has completed the input and the save has taken place for that face. Therefore, by the time the save method is called it only ever saves the last object in the faces list.
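To illustrate what I think is happening, here is a cut-down sketch of the loop below (cropFace() is just a placeholder for the Bitmap.createBitmap(...) call in my real code):
for (i = 0; i < firebaseVisionFaces.size(); i++) {
    faceCrop = cropFace(firebaseVisionFaces.get(i)); // faceCrop is a single field, overwritten on every pass
    showAddItemDialog(Camera.CurrentContext);        // returns immediately; onClick runs much later
}
// by the time any dialog's OK is pressed, faceCrop already holds the last face's crop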
Any suggestions on how I can improve this code?
@Override
public void onImage(CameraKitImage cameraKitImage) {
    capturedImage = cameraKitImage.getBitmap();
    capturedImage = Bitmap.createScaledBitmap(capturedImage, cameraView.getWidth(), cameraView.getHeight(), false);
    cameraView.stop();
    processFaceDetection(capturedImage);
}
public void processFaceDetection(final Bitmap bitmap) {
    FirebaseVisionImage visionImage = FirebaseVisionImage.fromBitmap(bitmap);
    FirebaseVisionFaceDetectorOptions detectorOptions = new FirebaseVisionFaceDetectorOptions.Builder()
            .setPerformanceMode(FirebaseVisionFaceDetectorOptions.ACCURATE)
            .setLandmarkMode(FirebaseVisionFaceDetectorOptions.NO_LANDMARKS)
            .setClassificationMode(FirebaseVisionFaceDetectorOptions.NO_CLASSIFICATIONS)
            .setMinFaceSize(0.15f)
            .enableTracking()
            .build();
    FirebaseVisionFaceDetector detector = FirebaseVision.getInstance().getVisionFaceDetector(detectorOptions);
    detector.detectInImage(visionImage).addOnSuccessListener(new OnSuccessListener<List<FirebaseVisionFace>>() {
        @Override
        public void onSuccess(List<FirebaseVisionFace> firebaseVisionFaces) {
            listSize = firebaseVisionFaces.size();
            Bitmap originalCapture = Bitmap.createScaledBitmap(capturedImage, cameraView.getWidth(), cameraView.getHeight(), false); // scaled bitmap created from captured image
            saveImageOriginal(originalCapture);
            //for (FirebaseVisionFace face : firebaseVisionFaces) {
            for (i = 0; i < firebaseVisionFaces.size(); i++) {
                FirebaseVisionFace face = firebaseVisionFaces.get(i);
                Rect rect = face.getBoundingBox();
                faceCrop = Bitmap.createBitmap(originalCapture, rect.left, rect.top, rect.width(), rect.height()); // face cropped using rect values
                RectOverlay rectOverlay = new RectOverlay(graphicOverlay, rect);
                graphicOverlay.add(rectOverlay); // draw box around face
                showAddItemDialog(Camera.CurrentContext); // prompt for name, save cropped face
            }
        }
    });
}
private void showAddItemDialog(Context c) {
    final EditText inputName = new EditText(c);
    AlertDialog dialog = new AlertDialog.Builder(c)
            .setTitle("Input Person's Name" + i)
            .setMessage("Format: LastName, FirstName")
            .setView(inputName)
            .setPositiveButton("Add", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    nameIn = String.valueOf(inputName.getText());
                    try {
                        saveImage(faceCrop); // give read write permission
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            })
            .setNegativeButton("Cancel", null)
            .create();
    dialog.show();
}
public String saveImage(Bitmap croppedFace) {
    String eventFaces, event;
    event = "/Summer Event 2020";
    eventFaces = "/Event_Faces";
    final ByteArrayOutputStream bytes = new ByteArrayOutputStream();
    croppedFace.compress(Bitmap.CompressFormat.JPEG, 90, bytes);
    final File facesDirectory = new File(getApplicationContext().getExternalFilesDir(null).getAbsolutePath() + event + eventFaces); // crop
    if (!facesDirectory.exists()) {
        Log.d("directory SAVING", "" + facesDirectory.mkdirs()); // mkdirs() inside the log call already creates the directory, no second call needed
    }
    try {
        croppedFile = new File(facesDirectory, nameIn + ".jpg");
        croppedFile.createNewFile();
        FileOutputStream fo = new FileOutputStream(croppedFile);
        fo.write(bytes.toByteArray());
        MediaScannerConnection.scanFile(Camera.CurrentContext, new String[]{croppedFile.getPath()}, new String[]{"image/jpeg"}, null);
        fo.close();
        Log.d("TAG", "File Saved::--->" + croppedFile.getAbsolutePath());
        Toast.makeText(Camera.this, nameIn + " " + "i" + i + " list" + listSize + " " + "Face Cropped and Saved to -> " + croppedFile.getPath(), Toast.LENGTH_SHORT).show();
        return croppedFile.getAbsolutePath();
    } catch (IOException e1) {
        e1.printStackTrace();
    }
    return "";
} // end of save image
If anyone is experiencing this same type of issue: I divided the code in the for loop into two separate loops and incorporated a flag that is set in the AlertDialog once the user input (keyboard in) is confirmed. Once the flag is true, following the AlertDialog user input, the conditions for the second for loop are met and the save for that face can run.
Hope this helps.
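In case a concrete example is useful, here is a minimal sketch of one alternative way to get the same sequencing by chaining the dialogs instead of looping (this is not exactly the two-loop/flag version described above, and promptForFace is just a made-up helper name): each dialog's OK button saves its own crop and only then triggers the dialog for the next face, so nothing can run ahead of the user's input.
// Sketch only: prompt for face index + 1 only after face index has been handled.
private void promptForFace(final Context c, final Bitmap source,
                           final List<FirebaseVisionFace> faces, final int index) {
    if (index >= faces.size()) {
        return; // all detected faces have been handled
    }
    Rect rect = faces.get(index).getBoundingBox();
    final Bitmap crop = Bitmap.createBitmap(source, rect.left, rect.top, rect.width(), rect.height());
    final EditText inputName = new EditText(c);
    new AlertDialog.Builder(c)
            .setTitle("Input Person's Name " + (index + 1))
            .setMessage("Format: LastName, FirstName")
            .setView(inputName)
            .setPositiveButton("Add", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    nameIn = String.valueOf(inputName.getText());
                    saveImage(crop);                            // save this face's own crop
                    promptForFace(c, source, faces, index + 1); // then move on to the next face
                }
            })
            .setNegativeButton("Cancel", new DialogInterface.OnClickListener() {
                @Override
                public void onClick(DialogInterface dialog, int which) {
                    promptForFace(c, source, faces, index + 1); // skip this face and continue
                }
            })
            .show();
}
It would be called once from onSuccess in place of the original for loop, e.g. promptForFace(Camera.CurrentContext, originalCapture, firebaseVisionFaces, 0);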