How do I use PyTorch 1.12.0 (CPU-only) to quantize a model to float16?
I want to use PyTorch 1.12.0 to quantize ResNet-18 to both int8 and float16.
Alternatively, is TensorFlow a better choice than PyTorch when I need to do quantization?