I am building an iOS app that lets you record a short video, which is subsequently split into multiple images that are in turn classified by a neural network. I am using AVAssetImageGenerator's generateCGImagesAsynchronously function for that.
func splitImages(imgURL: URL) {
    let videoAsset = AVAsset(url: imgURL)
    var timesArray = [NSValue]()
    let loops = round(videoAsset.duration.seconds * 60)
    for i in stride(from: 0, to: loops, by: 5) {
        let t = CMTimeMake(value: Int64(i), timescale: 60)
        timesArray.append(NSValue(time: t))
    }
    let generator = AVAssetImageGenerator(asset: videoAsset)
    generator.requestedTimeToleranceBefore = CMTime.zero
    generator.requestedTimeToleranceAfter = CMTime.zero
    generator.generateCGImagesAsynchronously(forTimes: timesArray, completionHandler: { requestedTime, image, actualTime, result, error in
        DispatchQueue.main.async {
            if let image = image {
                let ciImage = CIImage(cgImage: image)
                let guess = self.detect(ciImage: ciImage)
                if guess == self.selectedConcept {
                    self.correctGuesses.append(Classification(image: image, labelGuess: guess))
                } else {
                    self.otherGuesses.append(Classification(image: image, labelGuess: guess))
                }
            }
        }
    })
}
I call this function when the video has been recorded and selected by the user (in an ImagePickerView). The functionality works fine as far as the video splitting and image detection are concerned, but I can't figure out how to do something with the results only once all images have been processed (in this case, loading them into a collection view). I know that is what the completion handler is for, but unfortunately I am not at all versed in async programming, and I couldn't apply what I found about completion handlers on the web to my situation. Can somebody help me?
Thanks in advance.
You could add a handler closure that is called every time the routine has something to add to your collection view. If you have a long video, you might not want to wait for all of them. E.g.
@discardableResult
func splitImages(
    imgURL: URL,
    selectedConcept: String,
    handler: @escaping (Result<Classification, Error>) -> Void,
    completion: @escaping () -> Void
) -> AVAssetImageGenerator {
    let videoAsset = AVAsset(url: imgURL)
    var timesArray = [NSValue]()
    let loops = round(videoAsset.duration.seconds * 60)
    for i in stride(from: 0, to: loops, by: 5) {
        let t = CMTime(value: CMTimeValue(i), timescale: 60)
        timesArray.append(NSValue(time: t))
    }
    let count = timesArray.count
    let generator = AVAssetImageGenerator(asset: videoAsset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero
    var index = 0
    generator.generateCGImagesAsynchronously(forTimes: timesArray) { requestedTime, image, actualTime, result, error in
        // The handler is called serially, once per requested time, so we can
        // safely count invocations to detect when we are done.
        index += 1
        defer {
            if index == count {
                DispatchQueue.main.async { completion() }
            }
        }
        guard let image = image else {
            if let error = error {    // no image and no error means the request was canceled
                DispatchQueue.main.async { handler(.failure(error)) }
            }
            return
        }
        let ciImage = CIImage(cgImage: image)
        let guess = self.detect(ciImage: ciImage)
        let isCorrect = guess == selectedConcept
        let classification = Classification(image: image, labelGuess: guess, isCorrect: isCorrect)
        DispatchQueue.main.async {
            handler(.success(classification))
        }
    }
    return generator
}
A few observations on the above:
It should probably not be updating the model objects itself. You should let the caller do that; you want to keep this routine from being too tightly coupled to other objects in your app.
It probably should not be fetching selectedConcept itself, either. Supply that as a parameter to this method.
You probably don't want to run your detector on the main thread. I have moved only the call to the handler closure to the main thread.
You probably want to pass the Error object, too, in case the caller wants to reflect errors in the UI. We generally use a Result type to report the success or failure of some process.
I have added a property to Classification to distinguish between "success" and "other". You could use a separate parameter for that if you want, but IMHO that just makes things more confusing.
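For reference, the Classification type assumed above might look something like this (a hypothetical sketch; the type is never shown in the question, so adjust the stored properties to match your actual model):

```swift
import CoreGraphics

// Hypothetical model type assumed by the code above.
struct Classification {
    let image: CGImage      // the frame extracted from the video
    let labelGuess: String  // the label the neural network predicted
    let isCorrect: Bool     // whether the guess matched the selected concept
}
```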
Your caller would update the model and add the items to the appropriate section. E.g., if you had one section for successes and another for “other”, it might look like:
weak var generator: AVAssetImageGenerator?

func generate() {
    let url = URL(string: "")!
    generator?.cancelAllCGImageGeneration() // stop prior one, if any?
    activityIndicatorView.startAnimating()  // maybe show user that process has started; do whatever you want here
    generator = splitImages(imgURL: url, selectedConcept: selectedConcept) { [weak self] result in
        guard let self = self else { return }
        switch result {
        case .failure(let error):
            // optionally do something with the error
            print(error)
        case .success(let classification):
            if classification.isCorrect {
                let index = self.correctGuesses.count
                self.correctGuesses.append(classification)
                self.collectionView.insertItems(at: [IndexPath(item: index, section: 0)])
            } else {
                let index = self.otherGuesses.count
                self.otherGuesses.append(classification)
                self.collectionView.insertItems(at: [IndexPath(item: index, section: 1)])
            }
        }
    } completion: { [weak self] in
        // do something else to indicate that it's all done?
        self?.activityIndicatorView.stopAnimating()
    }
}
You may have noticed that I made splitImages return a discardable AVAssetImageGenerator. If you do not handle cancelation, you can ignore it; but if you do want to support cancelation, like above, you can.
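For example, if this lives in a view controller, you might cancel any in-flight generation when the screen goes away (a sketch, assuming the weak generator property shown above):

```swift
// Sketch: stop generating frames if the user leaves this screen.
override func viewDidDisappear(_ animated: Bool) {
    super.viewDidDisappear(animated)
    generator?.cancelAllCGImageGeneration()
}
```

Note that canceled requests still invoke the completion handler (with a nil image), so the count-based completion logic above still fires.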
Because we do not have an MCVE, I cannot test the above, so please forgive any errors. Hopefully, though, it illustrates the basic idea: give your routine closures for responses and completion, and call them at the appropriate times.
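As an aside, if you can target iOS 16 or later, AVAssetImageGenerator also offers an AsyncSequence-based images(for:) API, which makes "do something only when everything is done" fall out naturally, since code after the loop runs only once the sequence is exhausted. A sketch (untested for the same reason; detect(ciImage:) is your existing classifier method):

```swift
// iOS 16+ sketch: `times` is the same [CMTime] array built above.
func classifyFrames(of asset: AVAsset, at times: [CMTime]) async {
    let generator = AVAssetImageGenerator(asset: asset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    for await result in generator.images(for: times) {
        if let image = try? result.image {
            let guess = detect(ciImage: CIImage(cgImage: image))
            // append to your model / update the UI on the main actor…
        }
    }
    // Anything here runs only after all frames have been processed.
}
```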