Swift Playground Bundle can't find Compiled CoreML Model (.mlmodelc)


I have been attempting to debug this for over 10 hours...

I am working on implementing Apple's MobileNetV2 CoreML model in a Swift Playground. I performed the following steps:

  • Compiled the CoreML model in a regular Xcode project
  • Moved the compiled model (MobileNetV2.mlmodelc) into the Resources folder of the Swift Playground
  • Copy-pasted the generated model class (MobileNetV2.swift) into the Sources folder of the Swift Playground
  • Used UIImage extensions to resize the UIImage and convert it to a CVPixelBuffer
  • Implemented basic code to run the model

However, every time I run this, it keeps giving me this error:

MobileNetV2.swift:100: Fatal error: Unexpectedly found nil while unwrapping an Optional value

The crash comes from this function in the automatically generated model class:

    /// URL of model assuming it was installed in the same bundle as this class
    class var urlOfModelInThisBundle : URL {
        let bundle = Bundle(for: self)
        return bundle.url(forResource: "MobileNetV2", withExtension:"mlmodelc")!
    }
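In an App Playground, files in Sources are compiled into the app itself, so `Bundle(for: self)` should resolve to the main bundle anyway; even so, a lookup that probes both bundles and avoids the force-unwrap makes the failure easier to diagnose. A sketch of what I mean (the `urlOfModelIfPresent` name is my own, not part of the generated class):

```swift
import CoreML
import Foundation

extension MobileNetV2 {
    /// Hypothetical defensive variant of the generated accessor:
    /// probe both the class's own bundle and Bundle.main, and return
    /// nil instead of crashing when the .mlmodelc is missing.
    class var urlOfModelIfPresent: URL? {
        let candidates = [Bundle(for: MobileNetV2.self), Bundle.main]
        for bundle in candidates {
            if let url = bundle.url(forResource: "MobileNetV2",
                                    withExtension: "mlmodelc") {
                return url
            }
        }
        return nil
    }
}
```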

The model class builds fine; this is my ContentView code:

import SwiftUI

struct ContentView: View {
    
    
    func test() -> String {

            // 1. Load the image from the 'Resources' folder.
            let newImage = UIImage(named: "img")
            
            // 2. Resize the image to the required input dimension of the Core ML model
            // Method from UIImage+Extension.swift
            let newSize = CGSize(width: 224, height: 224)
            guard let resizedImage = newImage?.resizeImageTo(size: newSize) else {
                fatalError("⚠️ The image could not be found or resized.")
            }

            // 3. Convert the resized image to CVPixelBuffer as it is the required input
            // type of the Core ML model. Method from UIImage+Extension.swift
            guard let convertedImage = resizedImage.convertToBuffer() else {
                fatalError("⚠️ The image could not be converted to CVPixelBuffer")
            }
            
            // 4. Create the ML model instance from the model class in the 'Sources' folder
            let mlModel = MobileNetV2()

            // 5. Get the prediction output
            guard let prediction = try? mlModel.prediction(image: convertedImage) else {
                fatalError("⚠️ The model could not return a prediction")
            }


            // 6. Checking the results of the prediction
            let mostLikelyImageCategory = prediction.classLabel
            let probabilityOfEachCategory = prediction.classLabelProbs

            var highestProbability: Double {
                let probability = probabilityOfEachCategory[mostLikelyImageCategory] ?? 0.0
                return (probability * 100).rounded(.toNearestOrEven)
            }

            return "\(mostLikelyImageCategory): \(highestProbability)%"
    
    }
    
    
    var body: some View {
        VStack {
            let _ = print(test())
            Image(systemName: "globe")
                .imageScale(.large)
                .foregroundColor(.accentColor)
            Text("Hello, world!")
            Image(uiImage: UIImage(named: "img")!)
        }
    }
}

Upon printing my bundle contents, I get these:

["_CodeSignature", "metadata.json", "__PlaceholderAppIcon76x76@2x~ipad.png", "Info.plist", "[email protected]", "coremldata.bin", "{App Name}", "PkgInfo", "Assets.car", "embedded.mobileprovision"]
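Notably, `coremldata.bin` appears at the top level of the bundle and there is no `MobileNetV2.mlmodelc` directory, which suggests the Playground flattened the .mlmodelc folder's contents into the bundle root instead of copying it as a directory. A small diagnostic sketch I used to confirm this (recursively lists the main bundle, marking directories):

```swift
import Foundation

/// Diagnostic sketch: walk the main bundle recursively to see whether
/// MobileNetV2.mlmodelc survived as a directory or was flattened.
func dumpBundleContents() {
    let keys: [URLResourceKey] = [.isDirectoryKey]
    guard let enumerator = FileManager.default.enumerator(
        at: Bundle.main.bundleURL,
        includingPropertiesForKeys: keys) else { return }
    for case let url as URL in enumerator {
        let isDir = (try? url.resourceValues(forKeys: Set(keys)))?.isDirectory ?? false
        print(isDir ? "[dir] \(url.lastPathComponent)" : url.lastPathComponent)
    }
}
```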

Any help would be appreciated.
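For completeness, one workaround I have been considering (untested sketch): bundle the raw, *uncompiled* MobileNetV2.mlmodel in Resources instead and compile it on device with `MLModel.compileModel(at:)`, which would sidestep the flattened-directory problem entirely:

```swift
import CoreML
import Foundation

/// Hypothetical workaround: compile the raw .mlmodel at runtime.
/// compileModel(at:) writes a .mlmodelc into a temporary directory
/// and returns its URL.
func loadMobileNet() throws -> MLModel {
    guard let rawURL = Bundle.main.url(forResource: "MobileNetV2",
                                       withExtension: "mlmodel") else {
        throw CocoaError(.fileNoSuchFile)
    }
    let compiledURL = try MLModel.compileModel(at: rawURL)
    return try MLModel(contentsOf: compiledURL)
}
```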

For additional reference, here are my UIImage extensions in ExtImage.swift:

// Huge thanks to @mprecke on GitHub for these UIImage extension functions.

import Foundation
import UIKit

extension UIImage {
    
    func resizeImageTo(size: CGSize) -> UIImage? {
        
        UIGraphicsBeginImageContextWithOptions(size, false, 0.0)
        self.draw(in: CGRect(origin: CGPoint.zero, size: size))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return resizedImage
    }
    
    func convertToBuffer() -> CVPixelBuffer? {
        
        let attributes = [
            kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
            kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue
        ] as CFDictionary
        
        var pixelBuffer: CVPixelBuffer?
        
        let status = CVPixelBufferCreate(
            kCFAllocatorDefault,
            Int(self.size.width),
            Int(self.size.height),
            kCVPixelFormatType_32ARGB,
            attributes,
            &pixelBuffer)
        
        guard (status == kCVReturnSuccess) else {
            return nil
        }
        
        CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        
        let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        
        let context = CGContext(
            data: pixelData,
            width: Int(self.size.width),
            height: Int(self.size.height),
            bitsPerComponent: 8,
            bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
            space: rgbColorSpace,
            bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        
        context?.translateBy(x: 0, y: self.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)
        
        UIGraphicsPushContext(context!)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        UIGraphicsPopContext()
        
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
        
        return pixelBuffer
    }

}