How to implement TensorFlow Lite in iOS using interpreter?


My problem: I have a pre-trained model that I retrained with custom data. The captured image is a UIImage, which I convert to a CVPixelBuffer, but when I run inference I get this error:

Failed to invoke the interpreter with error: Provided data count 605952 must match the required count 4915200.

I tried several approaches, such as resizing the image, but I still get the same error.
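For context, 4,915,200 bytes corresponds to, for example, a 640 × 640 × 3 Float32 input (640 × 640 × 3 × 4 bytes) or a 1280 × 1280 × 3 UInt8 input. One way to confirm what the interpreter actually expects is to read the input tensor's shape and data type before copying data in. A diagnostic sketch using the TensorFlowLiteSwift API (the function name is mine; it assumes an already-allocated `Interpreter`):

```swift
import TensorFlowLite

// Diagnostic sketch: print the byte count the model's input tensor requires.
func printExpectedInputByteCount(interpreter: Interpreter) throws {
    let inputTensor = try interpreter.input(at: 0)
    // Multiply all dimensions, e.g. [1, 640, 640, 3] -> 1_228_800 elements.
    let elementCount = inputTensor.shape.dimensions.reduce(1, *)
    // Float32 tensors use 4 bytes per element; quantized UInt8 tensors use 1.
    let bytesPerElement = (inputTensor.dataType == .float32) ? 4 : 1
    print("Expected input shape: \(inputTensor.shape.dimensions), " +
          "type: \(inputTensor.dataType), " +
          "bytes: \(elementCount * bytesPerElement)")
}
```

Comparing that printed byte count with the size of the data you pass to `interpreter.copy(_:toInputAt:)` shows exactly where the mismatch comes from.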

func runModel(pixelBuffer: CVPixelBuffer) {
    let sourcePixelFormat = CVPixelBufferGetPixelFormatType(pixelBuffer)
    assert(sourcePixelFormat == kCVPixelFormatType_32ARGB ||
           sourcePixelFormat == kCVPixelFormatType_32BGRA ||
           sourcePixelFormat == kCVPixelFormatType_32RGBA)

    let imageChannels = 4
    assert(imageChannels >= inputChannels)
    let scaledSize = CGSize(width: inputWidth, height: inputHeight)

    let thumbnailPixelBuffer = pixelBuffer
    secondImageView.image = toUIImage(from: pixelBuffer)

    let outputTensor: Tensor
    do {
        let inputTensor = try interpreter.input(at: 0)

        guard let rgbData = rgbDataFromBuffer(
            thumbnailPixelBuffer,
            byteCount: batchSize * inputWidth * inputHeight * inputChannels,
            isModelQuantized: inputTensor.dataType == .uInt8
        ) else {
            print("Failed to convert the image buffer to RGB data.")
            return
        }

        // error in this line
        try interpreter.copy(rgbData, toInputAt: 0)

        try interpreter.invoke()
        outputTensor = try interpreter.output(at: 0)
    } catch let error {
        print("Failed to invoke the interpreter with error: \(error.localizedDescription)")
        return
    }

    let results: [Float]
    switch outputTensor.dataType {
    case .uInt8:
        guard let quantization = outputTensor.quantizationParameters else {
            print("No results returned because the quantization values for the output tensor are nil.")
            return
        }
        let quantizedResults = [UInt8](outputTensor.data)
        results = quantizedResults.map {
            quantization.scale * Float(Int($0) - quantization.zeroPoint)
        }
    case .float32:
        results = [Float32](unsafeData: outputTensor.data) ?? []
    default:
        print("Output tensor data type \(outputTensor.dataType) is unsupported for this example app.")
        return
    }

    getTopNLabels(results: results)
}
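One thing that stands out in the code above: `scaledSize` is computed but never used, so `thumbnailPixelBuffer` is the full-size camera buffer, and `rgbDataFromBuffer` then produces a byte count that does not match the model's input tensor. A minimal sketch of scaling the buffer to the model's input size with Core Image before conversion (`resizePixelBuffer` is a hypothetical helper name, not part of TensorFlowLiteSwift; it assumes a 32BGRA buffer):

```swift
import CoreImage
import CoreVideo

// Sketch: scale a CVPixelBuffer to the model's input size via Core Image.
// Assumption: output format 32BGRA matches what rgbDataFromBuffer expects.
func resizePixelBuffer(_ source: CVPixelBuffer, to size: CGSize) -> CVPixelBuffer? {
    let ciImage = CIImage(cvPixelBuffer: source)
    let scaleX = size.width / CGFloat(CVPixelBufferGetWidth(source))
    let scaleY = size.height / CGFloat(CVPixelBufferGetHeight(source))
    let scaled = ciImage.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    var output: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        Int(size.width), Int(size.height),
                        kCVPixelFormatType_32BGRA,
                        nil, &output)
    guard let outputBuffer = output else { return nil }
    CIContext().render(scaled, to: outputBuffer)
    return outputBuffer
}

// Inside runModel, the scaled buffer would replace the raw one:
// guard let thumbnailPixelBuffer = resizePixelBuffer(pixelBuffer, to: scaledSize)
// else { return }
```

For comparison, the TensorFlow Lite iOS example app this code appears to be based on handles the same step with a `CVPixelBuffer.centerThumbnail(ofSize:)` extension, which also center-crops to preserve aspect ratio; plain scaling as above will stretch non-square input.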