My use case is to upload an image's pixel data to a texture and then use that texture in my render pipeline to draw the image on screen.
Since the image's pixel data is large, I created a background thread that uploads the pixel data to the MTLTexture using the replaceRegion function.
My main thread's pseudocode looks like this (a code sketch follows the list):
- Create MTLSharedEvent.
- Create MTLTexture.
- Add task to my background queue with CPU memory pointer and MTLTexture to upload.
- Add encodeWaitForEvent in my render command encoder before encoding the draw command.
- Encode the draw command.
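For concreteness, here is a trimmed-down sketch of that flow. `uploadQueue`, `commandQueue`, `renderPassDescriptor`, `pixelData`, `imageWidth`, `imageHeight`, and the `uploadPixels:toTexture:event:` helper are stand-ins for my actual code, and the signal value 1 is arbitrary as long as both sides agree on it:

```objc
#import <Metal/Metal.h>

// Trimmed-down main-thread flow. uploadQueue, commandQueue,
// renderPassDescriptor, pixelData, imageWidth and imageHeight all
// come from elsewhere in my app.
id<MTLDevice> device = MTLCreateSystemDefaultDevice();
id<MTLSharedEvent> sharedEvent = [device newSharedEvent];

MTLTextureDescriptor *desc =
    [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:MTLPixelFormatRGBA8Unorm
                                                       width:imageWidth
                                                      height:imageHeight
                                                   mipmapped:NO];
id<MTLTexture> texture = [device newTextureWithDescriptor:desc];

// Kick off the upload with the CPU pixel pointer and the texture.
dispatch_async(uploadQueue, ^{
    [self uploadPixels:pixelData toTexture:texture event:sharedEvent];
});

id<MTLCommandBuffer> commandBuffer = [commandQueue commandBuffer];

// encodeWaitForEvent:value: is a MTLCommandBuffer method, so the wait
// is encoded before the render command encoder is created.
[commandBuffer encodeWaitForEvent:sharedEvent value:1];

id<MTLRenderCommandEncoder> encoder =
    [commandBuffer renderCommandEncoderWithDescriptor:renderPassDescriptor];
// ... set the pipeline state, bind texture, encode the draw call ...
[encoder endEncoding];
```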
Whenever my background thread finishes the replaceRegion calls, it signals the event:
```objc
_sharedEvent.signaledValue = <value_for_which_I_encoded_wait>;
```
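The background task itself is essentially the following (again simplified; the RGBA8 layout and the helper name are just for illustration):

```objc
// Background upload task. Assumes a tightly packed RGBA8 buffer,
// i.e. bytesPerRow == width * 4. The signaled value (1 here) must
// match the value the main thread passed to encodeWaitForEvent:value:.
- (void)uploadPixels:(const void *)pixels
           toTexture:(id<MTLTexture>)texture
               event:(id<MTLSharedEvent>)event
{
    MTLRegion region = MTLRegionMake2D(0, 0, texture.width, texture.height);
    [texture replaceRegion:region
               mipmapLevel:0
                 withBytes:pixels
               bytesPerRow:texture.width * 4];

    // Setting signaledValue from the CPU is what releases the GPU wait.
    event.signaledValue = 1;
}
```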
Finally, after doing all the other work I wanted to do, I commit the command buffer and present the drawable.
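That last step is just the usual present-then-commit pair:

```objc
// drawable is the CAMetalDrawable obtained from my CAMetalLayer.
[commandBuffer presentDrawable:drawable];
[commandBuffer commit];
```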
The final content on the screen looks like uninitialized image data: either garbage values where the image's pixels should be, or a half-drawn image. This suggests the GPU executed the draw call before the upload completed.
Looking at https://developer.apple.com/documentation/metal/resource_synchronization/synchronizing_events_between_a_gpu_and_the_cpu?language=objc, I think what I'm doing is correct, and this seems to be exactly why MTLEvent was introduced in the first place. Is my understanding wrong?
I'm doing all of this on a single GPU and with a single command buffer.
Thanks