Core Image lets us specify a color space for a CIContext, as in:
let context = CIContext(options: [kCIContextOutputColorSpace: NSNull(),
                                  kCIContextWorkingColorSpace: NSNull()])
Or for a CIImage, as in:
let image = CIImage(cvImageBuffer: inputPixelBuffer,
                    options: [kCIImageColorSpace: NSNull()])
How are these three options related:
- kCIContextOutputColorSpace
- kCIContextWorkingColorSpace
- kCIImageColorSpace
What are the pros and cons of setting each of them?
Apple's documentation explains the differences:

If images are tagged with a color space, they are converted to the linear working color space before filtering. If you tag a CIImage with DeviceRGB, it is gamma-corrected to linear before filtering. Setting the keys to NSNull instructs Core Image to leave the color values as they are, which is referred to as an unmanaged color space.

In short: kCIImageColorSpace declares how an input image's pixel values should be interpreted, kCIContextWorkingColorSpace is the (normally linear) space in which the context's filters actually operate, and kCIContextOutputColorSpace is the space pixel values are converted to when the context renders its result. The main advantage of passing NSNull for any of them is performance, because the corresponding color-matching conversion is skipped; the main disadvantage is accuracy, since filters then run on unconverted (typically gamma-encoded) values and the output is not matched to a destination color space.
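To make the relationship concrete, here is a minimal sketch of a color-managed pipeline next to an unmanaged one. It uses the same pre-CIContextOption string keys as the question (newer SDKs expose these to Swift as CIContextOption.workingColorSpace, CIContextOption.outputColorSpace, and CIImageOption.colorSpace), and the 64x64 pixel buffer is just a hypothetical stand-in for the question's inputPixelBuffer:

import CoreImage
import CoreVideo

// Hypothetical stand-in for the question's inputPixelBuffer.
var pixelBuffer: CVPixelBuffer?
let status = CVPixelBufferCreate(kCFAllocatorDefault, 64, 64,
                                 kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
assert(status == kCVReturnSuccess)

// kCIImageColorSpace: how the input pixels are interpreted. Tagging with
// sRGB means Core Image linearizes (gamma-corrects) the values before
// any filter runs.
let sRGB = CGColorSpace(name: CGColorSpace.sRGB)!
let taggedImage = CIImage(cvImageBuffer: pixelBuffer!,
                          options: [kCIImageColorSpace: sRGB])

// kCIContextWorkingColorSpace: the space filters actually operate in.
// kCIContextOutputColorSpace: the space rendered pixels are converted to.
let managedContext = CIContext(options: [
    kCIContextWorkingColorSpace: CGColorSpace(name: CGColorSpace.extendedLinearSRGB)!,
    kCIContextOutputColorSpace: sRGB
])

// Opting out everywhere: pixel values pass through every stage untouched.
let unmanagedImage = CIImage(cvImageBuffer: pixelBuffer!,
                             options: [kCIImageColorSpace: NSNull()])
let unmanagedContext = CIContext(options: [
    kCIContextWorkingColorSpace: NSNull(),
    kCIContextOutputColorSpace: NSNull()
])

Rendering taggedImage through managedContext converts sRGB to the linear working space, filters there, and converts the result back to sRGB on output; rendering unmanagedImage through unmanagedContext performs no conversions at any stage.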