I am translating some Python image processing code to Swift; in Python the whole task only takes about 30 lines.
Since I just want a simple command-line tool with no UI, I am trying to keep dependence on Apple's high-level interface frameworks to a minimum.
My Python code looks roughly like this:
from PIL import Image
import numpy
# Load a 16bit grayscale image and convert it to raw data array
img = Image.open("mylocaldir/Gray16.png")
(sizex,sizey) = img.size
inputpix = numpy.array(img.getdata()).astype(numpy.uint16).reshape((sizey,sizex))
# Here, do whatever processing to fill a RGB raw data array.
outputpix = numpy.zeros((sizey,sizex,3),numpy.uint8)
# ...
# ...
# ...
# Write the array back as a jpg file
img = Image.frombytes("RGB", (sizex, sizey), outputpix.tobytes(), "raw")
img.save("mylocaldir/OutputRGB.jpg")
Not being very familiar with Apple's frameworks, I am struggling to figure out how to implement this as simply as possible. Should I use a CGImage, or is there a simpler object that handles image file I/O?
Could anybody help me write the most streamlined Swift version of the Python code above?
Here is what I came up with for loading and saving PNG images from a UInt16 array of predefined size, using only CoreGraphics.
Sorry, as a Swift beginner with a C++ background, I may not be organising my code in the best way!
Unfortunately, there is a problem with this code: repeated use fills up the memory. Is there some resource that should have been released manually?
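To give an idea of the shape of the code, here is a stripped-down sketch of the approach (not the exact code; function names and the fixed image size are only illustrative, and it uses CGImageSource/CGImageDestination from ImageIO alongside CoreGraphics):

import Foundation
import CoreGraphics
import ImageIO

// Load a 16-bit grayscale PNG of known size into a [UInt16] buffer.
// The image is drawn into a 16-bit grayscale bitmap context so the buffer layout is under our control.
func loadGray16PNG(at url: URL, width: Int, height: Int) -> [UInt16]? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil),
          let image = CGImageSourceCreateImageAtIndex(source, 0, nil) else {
        return nil
    }
    var pixels = [UInt16](repeating: 0, count: width * height)
    let bytesPerRow = width * MemoryLayout<UInt16>.stride
    let drawn = pixels.withUnsafeMutableBytes { buffer -> Bool in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 16,
                                      bytesPerRow: bytesPerRow,
                                      space: CGColorSpaceCreateDeviceGray(),
                                      bitmapInfo: CGImageAlphaInfo.none.rawValue) else {
            return false
        }
        // Caveat: I have not verified the byte order of the 16-bit samples produced here;
        // a byte-order flag or an explicit swap may be needed.
        context.draw(image, in: CGRect(x: 0, y: 0, width: width, height: height))
        return true
    }
    return drawn ? pixels : nil
}

// Save a [UInt16] grayscale buffer of known size as a 16-bit grayscale PNG.
func saveGray16PNG(_ pixels: [UInt16], width: Int, height: Int, to url: URL) -> Bool {
    let bytesPerRow = width * MemoryLayout<UInt16>.stride
    let data = pixels.withUnsafeBufferPointer { Data(buffer: $0) }
    guard let provider = CGDataProvider(data: data as CFData),
          let image = CGImage(width: width,
                              height: height,
                              bitsPerComponent: 16,
                              bitsPerPixel: 16,
                              bytesPerRow: bytesPerRow,
                              space: CGColorSpaceCreateDeviceGray(),
                              bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                              provider: provider,
                              decode: nil,
                              shouldInterpolate: false,
                              intent: .defaultIntent),
          let destination = CGImageDestinationCreateWithURL(url as CFURL,
                                                            "public.png" as CFString, 1, nil) else {
        return false
    }
    // Same byte-order caveat as above applies to the 16-bit samples written out.
    CGImageDestinationAddImage(destination, image, nil)
    return CGImageDestinationFinalize(destination)
}

// Example usage (paths and sizes are placeholders):
// let inURL  = URL(fileURLWithPath: "mylocaldir/Gray16.png")
// let outURL = URL(fileURLWithPath: "mylocaldir/Copy16.png")
// if let pix = loadGray16PNG(at: inURL, width: 640, height: 480) {
//     _ = saveGray16PNG(pix, width: 640, height: 480, to: outURL)
// }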