I'm trying to read back a 3D texture I rendered using an FBO. This texture is so large that glGetTexImage fails with a GL_OUT_OF_MEMORY error, because the nvidia driver can't allocate memory for intermediate storage* (needed, I suppose, to avoid modifying the destination buffer in case of an error).
So I then thought of reading the texture layer by layer, using glReadPixels after rendering each layer. But glReadPixels doesn't take a layer index as a parameter. The only place where a layer index actually appears as something that directs I/O to a particular layer is the gl_Layer output in the geometry shader, and that is for the writing stage, not reading.
When I tried simply calling glReadPixels anyway after rendering each layer, I only got the texels of layer 0. So glReadPixels at least doesn't fail to return something.
But the question is: can I read an arbitrary layer of a 3D texture using glReadPixels? And if not, what should I use instead, given the memory constraints described above? Do I have to sample the layer from the 3D texture in a shader, render the result to a 2D texture, and read that 2D texture afterwards?
*This is not a guess: I've actually tracked it down to a failing malloc call (with the size of the texture as its argument) inside the nvidia driver's shared library.
Yes, glReadPixels can read other slices of the 3D texture. One just has to use glFramebufferTextureLayer to attach the current slice to the FBO, instead of attaching the full 3D texture as the color attachment. The replacement for glGetTexImage then simply loops over the layers, attaching and reading one slice at a time (a special FBO for this, fboForTextureSaving, should be generated beforehand).

Anyway, this is not a long-term solution to the problem. The first reason for
GL_OUT_OF_MEMORY errors with large textures is actually not a lack of RAM or VRAM. It's subtler: each texture allocated on the GPU is mapped into the process' address space (at least on Linux/nvidia). So even if your process hasn't malloc'ed half of the RAM available to it, its address space may already be used up by these large mappings. Add a bit of memory fragmentation to that, and you get either GL_OUT_OF_MEMORY, or a malloc failure, or std::bad_alloc somewhere even earlier than expected.

The proper long-term solution is to embrace the 64-bit reality and compile your app as 64-bit code. This is what I ended up doing, ditching all this layer-by-layer kludge and simplifying the code quite a bit.
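For reference, a minimal sketch of the layer-by-layer readback described above. It assumes an RGBA8 3D texture, a pre-generated FBO named fboForTextureSaving as in the answer, and a destination buffer large enough for the whole texture; the function name and signature are illustrative, not from the original:

```cpp
// Assumes an OpenGL 3.0+ context and a loader header providing the
// GL function pointers (e.g. GLEW or libepoxy) has been included.

extern GLuint fboForTextureSaving; // generated beforehand with glGenFramebuffers

// Reads all depth slices of an RGBA8 3D texture into dst,
// one width*height slice at a time.
void readTexture3D(GLuint texture, GLsizei width, GLsizei height,
                   GLsizei depth, unsigned char* dst)
{
    GLint prevFBO = 0;
    glGetIntegerv(GL_READ_FRAMEBUFFER_BINDING, &prevFBO);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, fboForTextureSaving);

    for (GLsizei layer = 0; layer < depth; ++layer)
    {
        // Attach only the current slice of the 3D texture, not the
        // whole texture, as the color attachment.
        glFramebufferTextureLayer(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  texture, 0, layer);
        if (glCheckFramebufferStatus(GL_READ_FRAMEBUFFER) !=
            GL_FRAMEBUFFER_COMPLETE)
            break; // handle the error as appropriate

        glReadBuffer(GL_COLOR_ATTACHMENT0);
        // Read this slice into its portion of the destination buffer.
        glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE,
                     dst + static_cast<size_t>(layer) * width * height * 4);
    }

    glBindFramebuffer(GL_READ_FRAMEBUFFER, prevFBO);
}
```

Unlike glGetTexImage, this never asks the driver for a texture-sized intermediate allocation: each glReadPixels call only moves one slice.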