Why use Caffe2 or Core ML instead of LibTorch (.pt file) on iOS?


It seems like there are several ways to run PyTorch models on iOS.

  1. PyTorch (.pt) -> ONNX -> Caffe2
  2. PyTorch (.pt) -> ONNX -> Core ML (.mlmodel)
  3. PyTorch (.pt) -> LibTorch (.pt)
  4. PyTorch Mobile?

What is the difference between these methods? Why do people use Caffe2 or Core ML (.mlmodel), which require a model format conversion, instead of LibTorch?
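For concreteness, the LibTorch route (option 3) is just tracing the model to TorchScript and saving it. A minimal sketch; the model and input shape are placeholders:

```python
import torch
import torchvision

# Placeholder model and dummy input, purely for illustration.
model = torchvision.models.mobilenet_v2(pretrained=True).eval()
example = torch.rand(1, 3, 224, 224)

# Trace the model into a TorchScript graph and save it; the resulting
# .pt file is what LibTorch loads on iOS.
traced = torch.jit.trace(model, example)
traced.save("model.pt")
```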

Matthijs Hollemans (best answer):

Core ML can use the Apple Neural Engine (ANE), which is much faster than running the model on the CPU or GPU. If a device has no ANE, Core ML can automatically fall back to the GPU or CPU.
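If you want to see that fallback behaviour for yourself, coremltools lets you restrict which compute units a model may use. A minimal sketch, assuming coremltools 5+ (the `compute_units` argument does not exist in older versions, and "model.mlmodel" is a placeholder path):

```python
import coremltools as ct

# ALL lets Core ML pick ANE -> GPU -> CPU as available;
# CPU_ONLY forces everything onto the CPU for comparison.
model_any = ct.models.MLModel("model.mlmodel", compute_units=ct.ComputeUnit.ALL)
model_cpu = ct.models.MLModel("model.mlmodel", compute_units=ct.ComputeUnit.CPU_ONLY)
```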

I haven't really looked into PyTorch Mobile in detail, but I think it currently only runs on the CPU, not on the GPU. And it definitely won't run on the ANE because only Core ML can do that.
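For reference, preparing a model for PyTorch Mobile looks roughly like this. A sketch only: mobilenet_v2 is a stand-in model, and `_save_for_lite_interpreter` assumes a reasonably recent PyTorch:

```python
import torch
import torchvision
from torch.utils.mobile_optimizer import optimize_for_mobile

model = torchvision.models.mobilenet_v2(pretrained=True).eval()
traced = torch.jit.trace(model, torch.rand(1, 3, 224, 224))

# Fuse and fold ops for the mobile CPU runtime, then save in the
# lite-interpreter format that PyTorch Mobile loads on device.
optimized = optimize_for_mobile(traced)
optimized._save_for_lite_interpreter("model.ptl")
```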

Converting models can be a hassle, especially from PyTorch, which requires going through ONNX first. But you do end up with a much faster way to run those models.
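At the time of the question, that ONNX-first route looked roughly like the sketch below. The `onnx_coreml` package and its `convert()` call are assumptions about the legacy tooling; newer coremltools (4+) can convert a traced TorchScript model directly, skipping ONNX entirely:

```python
import torch
import torchvision

model = torchvision.models.mobilenet_v2(pretrained=True).eval()
dummy = torch.rand(1, 3, 224, 224)

# Step 1: PyTorch -> ONNX.
torch.onnx.export(model, dummy, "model.onnx",
                  input_names=["image"], output_names=["scores"])

# Step 2: ONNX -> Core ML, via the (now-deprecated) onnx-coreml package.
from onnx_coreml import convert  # assumed legacy package: pip install onnx-coreml
mlmodel = convert(model="model.onnx")
mlmodel.save("model.mlmodel")
```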