I am building an iOS app that records a short video, which is subsequently split into multiple images, which are in turn classified by a neural network. I am using AVAssetImageGenerator's `generateCGImagesAsynchronously` function for that.
```swift
func splitImages(imgURL: URL) {
    let videoAsset = AVAsset(url: imgURL)
    var timesArray = [NSValue]()
    let loops = round(videoAsset.duration.seconds * 60)
    for i in stride(from: 0, to: loops, by: 5) {
        let t = CMTimeMake(value: Int64(i), timescale: 60)
        timesArray.append(NSValue(time: t))
    }
    let generator = AVAssetImageGenerator(asset: videoAsset)
    generator.requestedTimeToleranceBefore = CMTime.zero
    generator.requestedTimeToleranceAfter = CMTime.zero
    generator.generateCGImagesAsynchronously(forTimes: timesArray, completionHandler: { requestedTime, image, actualTime, result, error in
        DispatchQueue.main.async {
            if let image = image {
                let ciImage = CIImage(cgImage: image)
                let guess = self.detect(ciImage: ciImage)
                if guess == self.selectedConcept {
                    self.correctGuesses.append(Classification(image: image, labelGuess: guess))
                } else {
                    self.otherGuesses.append(Classification(image: image, labelGuess: guess))
                }
            }
        }
    })
}
```
I call this function once the video has been recorded and selected by the user (in an ImagePickerView). The video splitting and image detection work fine, but I can't figure out how to act on the results only once all the images have been processed (in this case, loading them into a collection view). I know that's what the completion handler is for, but unfortunately I am not at all versed in async programming, and I couldn't apply what I found about completion handlers on the web to my situation. Can somebody help me?
Thanks in advance.
You could add a handler closure that is called every time the routine has something to add to your collection view. If you have a long video, you might not want to wait for all of them. E.g.:
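The answer's original code listing appears to have been lost; here is a sketch of what the refactor it describes might look like. The `Classification` shape, the `isMatch` flag, and the assumption that this method lives in the same class as your `detect(ciImage:)` are all illustrative, not the answer's exact code:

```swift
import AVFoundation
import CoreImage

// Hypothetical model type: the answer suggests adding a flag to
// Classification to distinguish “success” from “other”.
struct Classification {
    let image: CGImage
    let labelGuess: String
    let isMatch: Bool
}

@discardableResult
func splitImages(
    videoURL: URL,
    selectedConcept: String,
    handler: @escaping (Result<Classification, Error>) -> Void,
    completion: @escaping () -> Void
) -> AVAssetImageGenerator {
    let videoAsset = AVAsset(url: videoURL)
    var timesArray = [NSValue]()
    let frames = Int(round(videoAsset.duration.seconds * 60))
    for i in stride(from: 0, to: frames, by: 5) {
        timesArray.append(NSValue(time: CMTimeMake(value: Int64(i), timescale: 60)))
    }

    let generator = AVAssetImageGenerator(asset: videoAsset)
    generator.requestedTimeToleranceBefore = .zero
    generator.requestedTimeToleranceAfter = .zero

    // In practice the completion handler is invoked serially, so a plain
    // counter suffices to detect when the last frame has been handled;
    // wrap it in a lock if you want to be defensive.
    var remaining = timesArray.count
    generator.generateCGImagesAsynchronously(forTimes: timesArray) { _, image, _, _, error in
        // Run the (potentially slow) classifier off the main thread;
        // only hop to main for the handler/completion calls.
        if let image = image {
            let guess = self.detect(ciImage: CIImage(cgImage: image))
            let classification = Classification(image: image,
                                                labelGuess: guess,
                                                isMatch: guess == selectedConcept)
            DispatchQueue.main.async { handler(.success(classification)) }
        } else if let error = error {
            DispatchQueue.main.async { handler(.failure(error)) }
        }

        remaining -= 1
        if remaining == 0 {
            DispatchQueue.main.async { completion() }
        }
    }
    return generator
}
```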
A few observations on the above:
- It should probably not be updating the model objects itself. You should let the caller do that; you want to keep this routine from being too tightly coupled with other objects in your app.
- It probably should not be fetching `selectedConcept`, either. Supply that as a parameter to this method.
- You probably don't want to run your detector on the main thread. I have moved only the call to the handler closure to the main thread.
- You probably want to pass the `Error` object, too, in case the caller might want to reflect the errors in the UI. We generally use a `Result` type to return the success/failure of some process.
- I have added a property to the `Classification` to distinguish between "success" and "other". You could have a separate parameter for that if you want, but it just makes it more confusing, IMHO.
- Your caller would update the model and add the items to the appropriate section. E.g., if you had one section for successes and another for "other", it might look like:
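The caller-side code the answer refers to is also missing; below is a hedged sketch of what it might look like. It assumes a handler/completion-based `splitImages` as described above, a `Classification` with an illustrative `isMatch` flag, and a `collectionView` outlet with successes in section 0 and "other" in section 1:

```swift
let generator = splitImages(
    videoURL: url,
    selectedConcept: selectedConcept,
    handler: { result in
        switch result {
        case .success(let classification):
            if classification.isMatch {
                self.correctGuesses.append(classification)
                let indexPath = IndexPath(item: self.correctGuesses.count - 1, section: 0)
                self.collectionView.insertItems(at: [indexPath])
            } else {
                self.otherGuesses.append(classification)
                let indexPath = IndexPath(item: self.otherGuesses.count - 1, section: 1)
                self.collectionView.insertItems(at: [indexPath])
            }
        case .failure(let error):
            // Surface the error in the UI however you see fit.
            print(error)
        }
    },
    completion: {
        // All frames have been processed, e.g. hide a progress indicator.
    }
)
```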
You may have noticed that I made `splitImages` return a discardable `AVAssetImageGenerator`. If you do not handle cancelation, you can ignore it; but if you do want to support cancelation, like above, you can.

Because we do not have a MCVE, I cannot test the above, so please forgive any errors. But hopefully it illustrates the basic idea: give your routine closures for responses and completion, and call them at the appropriate times.
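For example, keeping the returned generator lets you cancel in-flight requests. This sketch assumes the handler/completion-based `splitImages` signature discussed above; `cancelAllCGImageRequests()` is the standard `AVAssetImageGenerator` cancelation call:

```swift
// Keep a reference so you can cancel, e.g. if the user leaves the screen
// before processing finishes.
let generator = splitImages(videoURL: url,
                            selectedConcept: selectedConcept,
                            handler: { result in /* update model/UI */ },
                            completion: { /* all done */ })

// Later, on dismissal:
generator.cancelAllCGImageRequests()
```

Note that canceled requests still invoke the completion handler (with a `.cancelled` result), so any per-frame bookkeeping still runs.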