I have an OpenGL 2.1 app in which the user can load multiple images of varying widths and heights as textures, each rendered onto its own quad on screen.
I was initially pre-loading all of the images as textures before rendering, but the problem arises when the user loads images by the hundreds, and in rare cases by the thousands, which results in very high RAM usage. For what it's worth, I am already deallocating each bitmap immediately after converting it to a texture. I also tried downscaling the images to match the sizes of their target quads, but that only puts a bandage on the problem: the user can always load more images and create more quads, so the memory footprint keeps growing.
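For reference, my upload path boils down to this (simplified; the actual code uses a different image loader, and GL_RGBA here stands in for whatever format the bitmap really has):

```cpp
#include <GL/gl.h>

// One GL texture per loaded image; the CPU-side bitmap is freed
// as soon as this function returns.
GLuint uploadImage(const unsigned char* pixels, int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // GL copies the pixel data during this call, so the bitmap
    // can be deallocated immediately afterwards.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```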
Short of capping the number of images the user can load, I'm at a loss as to how to manage textures properly. I have also read about a technique using Pixel Buffer Objects: transfer the image data into the buffer, update one reusable texture from it, render, and repeat. But I'm a bit stumped on how to proceed from there, since the technique seems to assume all the images are the same size before the texture is updated. There's also the possibility of a performance hit, such as a drastic drop in frame rate while images are being uploaded as textures, though I'm very much willing to be proven wrong on that.
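To make the question concrete, here is my untested understanding of the PBO route. Everything here is an assumption on my part: the GL_RGBA format, the 2048x2048 upper bound on image size, GLEW for loading the buffer entry points, and the idea of allocating the reusable texture once at maximum size and updating only a sub-region of it for each image:

```cpp
#include <GL/glew.h>  // for the buffer-object entry points (assumption)
#include <cstring>

GLuint pbo = 0, reusableTex = 0;
const int maxW = 2048, maxH = 2048;  // assumed upper bound on image size

void initStreaming()
{
    glGenTextures(1, &reusableTex);
    glBindTexture(GL_TEXTURE_2D, reusableTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // Allocate storage once at the maximum size; no pixel data yet.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, maxW, maxH, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    glGenBuffers(1, &pbo);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, maxW * maxH * 4, NULL, GL_STREAM_DRAW);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}

// Stream one image of size w x h into the reusable texture.
void streamImage(const unsigned char* pixels, int w, int h)
{
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    // Re-specify the buffer so the driver can orphan the old storage
    // instead of stalling until the previous transfer finishes.
    glBufferData(GL_PIXEL_UNPACK_BUFFER, w * h * 4, NULL, GL_STREAM_DRAW);
    void* dst = glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    if (dst) {
        memcpy(dst, pixels, (size_t)w * h * 4);
        glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);
    }
    // Update only the w x h sub-region; the last argument is an offset
    // into the bound PBO, not a client pointer. The quad would then be
    // drawn with texture coordinates scaled to w/maxW and h/maxH.
    glBindTexture(GL_TEXTURE_2D, reusableTex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, (const void*)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}
```

In particular, I'm unsure whether updating a sub-region with glTexSubImage2D like this is the right way to handle the varying image sizes, and whether the orphaning step actually avoids the frame-rate hit I'm worried about.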
Can anyone shed some light on this issue or point me in the right direction?