Here is the documentation on CGImageCreateWithImageInRect.
It takes a CGRect as a parameter, and from what I understand, CGRect coordinates are specified in points.
However, the documentation says:
"References the pixels within the resulting rectangle, treating the first pixel within the rectangle as the origin of the subimage."
This seemed inaccurate, and it was proven so when I needed to resize my UIImages: I had to multiply the image dimensions by the screen scale, otherwise my image came out the wrong size:
var imageRef: CGImageRef = CGImageCreateWithImageInRect(image.CGImage, CGRectMake(0, 0, image.size.width * UIScreen.mainScreen().scale, image.size.height * UIScreen.mainScreen().scale))
If I didn't multiply by scale, the image came out too small.
Am I right that this is bad documentation (as in, it shouldn't take a CGRect, which is in points, and then read it as pixels), or am I not understanding something fundamental here?
I think the documentation is correct (pixels are pixels).
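Here is a minimal sketch of a test that demonstrates it (hypothetical code written with the Swift 1.x-era APIs the question uses, not the original test): render a solid 100 x 100-point image at 2x scale, so its backing CGImage is 200 x 200 pixels, then crop it with a 50 x 50 rect.

import UIKit

// Render a 100x100-POINT image at 2x scale; the backing CGImage is
// therefore 200x200 PIXELS.
UIGraphicsBeginImageContextWithOptions(CGSizeMake(100, 100), true, 2.0)
UIColor.redColor().setFill()
UIRectFill(CGRectMake(0, 0, 100, 100))
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()

println("UIImage: \(image.size.width) x \(image.size.height) points, scale \(image.scale)")
println("CGImage: \(CGImageGetWidth(image.CGImage)) x \(CGImageGetHeight(image.CGImage)) pixels")

// If the rect were read in points, this crop would yield a 100x100-pixel
// image. It yields 50x50 pixels, so the rect is read directly as pixels.
let cropped = CGImageCreateWithImageInRect(image.CGImage, CGRectMake(0, 0, 50, 50))
println("cropped: \(CGImageGetWidth(cropped)) x \(CGImageGetHeight(cropped)) pixels")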
Output (what the sketch above would print):
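UIImage: 100.0 x 100.0 points, scale 2.0
CGImage: 200 x 200 pixels
cropped: 50 x 50 pixels

The crop rect asked for 50 x 50 and got exactly 50 x 50 pixels, even though 50 points of that image corresponds to 100 pixels.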
I'm not sure what to say about your UIImage resizing without knowing exactly how you're trying to resize them and the exact results you saw. I do know that if you created the UIImage from an asset in an asset catalog (using UIImage(named:)), the actual pixel dimensions of the CGImage property will depend on the scale factor of the device or simulator. If there are multiple sizes for the same asset, the UIImage will load whichever asset matches the system's scale factor. In other words, in this scenario, you can't count on the UIImage's CGImage having consistent dimensions, and scaling code may go awry.
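If the goal is to crop in point coordinates, one pattern that sidesteps this (a hypothetical helper, not code from the question) is to convert the rect using the UIImage's own scale property rather than UIScreen.mainScreen().scale, since the image's scale factor may not match the screen's:

// Hypothetical helper, Swift 1.x style: crop a UIImage using a rect given
// in points. image.scale is the factor that relates this image's points to
// its CGImage's pixels, regardless of which screen the image is shown on.
// Note: this assumes .Up orientation; rotated images would need the rect
// mapped into the CGImage's unrotated coordinate space first.
func cropImage(image: UIImage, toRectInPoints rect: CGRect) -> UIImage? {
    let scale = image.scale
    let pixelRect = CGRectMake(rect.origin.x * scale,
                               rect.origin.y * scale,
                               rect.size.width * scale,
                               rect.size.height * scale)
    let croppedRef = CGImageCreateWithImageInRect(image.CGImage, pixelRect)
    return UIImage(CGImage: croppedRef, scale: scale, orientation: image.imageOrientation)
}

Because the result is rebuilt with the same scale, its size in points is exactly the rect you asked for, whether the source came from a 1x, 2x, or 3x asset.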