This is something I have been struggling with a bit. I even went out of my way to try and implement my own indexing system, which turned out to be suboptimal in the end.
But I finally think I got it. And I just wanted to ask if I am implementing this correctly.
First off, as I understand it, the benefit of index buffer objects is that you avoid storing redundant vertices in the vertex array that you bind, upload to GPU memory, and draw.
If we imagine a vertex to be a glm::vec3, a glm::vec3 and a glm::vec2 (position, normal and texture coordinates respectively), that's 12 + 12 + 8 bytes (32 bytes) per vertex. A cube has 12 triangles (I'm not using quads) of 3 vertices each, so that's 36 × 32 = 1152 bytes for a single crate. But a cube only has 8 unique corner positions, so a lot of redundant vertices get created.
With index buffers, we can instead keep only the unique vertices and select which one to use by supplying an index (in the case of a cube, 0 through 7).
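To make that concrete with the simplest possible case, a quad (my own example, separate from the cube below): without indexing you store 6 full vertices for its two triangles; with indexing you store 4 unique vertices plus 6 two-byte indices.
const std::vector<glm::vec3> quadPositions =
{
    {-1.0f, -1.0f, 0.0f}, // 0: bottom left
    {+1.0f, -1.0f, 0.0f}, // 1: bottom right
    {+1.0f, +1.0f, 0.0f}, // 2: top right
    {-1.0f, +1.0f, 0.0f}, // 3: top left
};
// Two triangles that reuse vertices 0 and 2 instead of duplicating them.
const std::vector<uint16_t> quadIndices = { 0, 1, 2, 2, 3, 0 };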
When I was testing with a plane, this was very simple, because I had 4 positions and 4 texture coordinates: an equal number of unique values per attribute. But when meshes become more complicated, I think an uneven count between attributes is unavoidable. For a cube, for example, I was getting 8 unique position coords and 10 unique texture coords.
This is where I got confused. But then I figured maybe it's expected, in Vulkan (or GL/D3D), that you have to suck it up and create redundant vertices, but only to cover the mismatch between the number of unique position coords and texture coords. In the case of the cube, I had to reuse two position coords and map them to the last 2 unique texture coords.
I'll post my code. I won't show any of the Vulkan buffer/render creation, just the index creation. For simplicity's sake, I tested this with hard-coded values and default normals. I plan to write a DAE mesh loader later once I'm confident that I'm on the right path.
Take this OBJ as an example (it's a cube; I used a cylindrical projection for the UV map for visual testing in my engine):
v 1.000000 -1.000000 -1.000000
v 1.000000 1.000000 -1.000000
v 1.000000 -1.000000 1.000000
v 1.000000 1.000000 1.000000
v -1.000000 -1.000000 -1.000000
v -1.000000 1.000000 -1.000000
v -1.000000 -1.000000 1.000000
v -1.000000 1.000000 1.000000
vt 0.922181 0.858908
vt 0.767621 0.094101
vt 0.853231 0.084485
vt 0.672173 0.891780
vt 0.530632 0.045162
vt 0.425062 0.842841
vt 0.260906 0.067522
vt 0.432410 0.958225
vt 0.033512 0.061229
vt 0.088379 0.806583
s off
f 2/1 3/2 1/3
f 4/4 7/5 3/2
f 8/6 5/7 7/5
f 6/8 1/9 5/7
f 7/5 1/3 3/2
f 4/4 6/8 8/6
f 2/1 4/4 3/2
f 4/4 8/6 7/5
f 8/6 6/8 5/7
f 6/8 2/10 1/9
f 7/5 5/7 1/3
f 4/4 2/1 6/8
Here's how I mapped the array of vertices/indices. The normals are just default values and not derived from the OBJ, so it's okay to ignore them. You'll notice entries 0-7 of my vertices array are all unique pos/tex combinations; entries 8 and 9 then reuse positions 0 and 1 (as per the OBJ faces), paired with the last two unique texture coords.
struct Vertex
{
glm::vec3 pos;
glm::vec3 normal;
glm::vec2 texCoord;
};
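// Sanity check for the 32-byte-per-vertex figure above (this assumes default,
// non-aligned glm types, i.e. GLM_FORCE_DEFAULT_ALIGNED_GENTYPES is not defined,
// in which case the struct is tightly packed at 12 + 12 + 8 bytes):
static_assert(sizeof(Vertex) == 32, "Vertex should be tightly packed");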
const std::vector<Vertex> vertices =
{
{ {+1.000000, -1.000000, -1.000000}, {0.0f, 0.0f, 1.0f}, {0.853231f, 0.084485f} },
{ {+1.000000, +1.000000, -1.000000}, {0.0f, 0.0f, 1.0f}, {0.922181f, 0.858908f} },
{ {+1.000000, -1.000000, +1.000000}, {0.0f, 0.0f, 1.0f}, {0.767621f, 0.094101f} },
{ {+1.000000, +1.000000, +1.000000}, {0.0f, 0.0f, 1.0f}, {0.672173f, 0.891780f} },
{ {-1.000000, -1.000000, -1.000000}, {0.0f, 0.0f, 1.0f}, {0.260906f, 0.067522f} },
{ {-1.000000, +1.000000, -1.000000}, {0.0f, 0.0f, 1.0f}, {0.432410f, 0.958225f} },
{ {-1.000000, -1.000000, +1.000000}, {0.0f, 0.0f, 1.0f}, {0.530632f, 0.045162f} },
{ {-1.000000, +1.000000, +1.000000}, {0.0f, 0.0f, 1.0f}, {0.425062f, 0.842841f} },
// reuses the position of vertex 0, paired with unique texcoord vt 9
{ {+1.000000, -1.000000, -1.000000}, {0.0f, 0.0f, 1.0f}, {0.033512f, 0.061229f} },
// reuses the position of vertex 1, paired with unique texcoord vt 10
{ {+1.000000, +1.000000, -1.000000}, {0.0f, 0.0f, 1.0f}, {0.088379f, 0.806583f} },
};
const std::vector<uint16_t> indices =
{
1,2,0,
3,6,2,
7,4,6,
5,0,4,
6,0,2,
3,5,7,
1,3,2,
3,7,6,
7,5,4,
5,1,0,
6,4,0,
3,1,5
};
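For context, this is roughly how those two arrays get consumed at draw time (a sketch only; the buffer creation is omitted as mentioned above, and commandBuffer, vertexBuffer, and indexBuffer are placeholder handles):
// Bind the vertex buffer holding `vertices` and the index buffer holding `indices`.
VkDeviceSize offsets[] = { 0 };
vkCmdBindVertexBuffers(commandBuffer, 0, 1, &vertexBuffer, offsets);
// VK_INDEX_TYPE_UINT16 matches the std::vector<uint16_t> above.
vkCmdBindIndexBuffer(commandBuffer, indexBuffer, 0, VK_INDEX_TYPE_UINT16);
// 36 indices -> 12 triangles, even though only 10 vertices are stored.
vkCmdDrawIndexed(commandBuffer, static_cast<uint32_t>(indices.size()), 1, 0, 0, 0);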
There is no other way to achieve indexed buffer objects except with this method, right?
A vertex's uniqueness is not defined by its position, but by all of the attributes that make up that vertex. Two vertices with the same position but different texture coordinates are not the same vertex.
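In code, that definition of uniqueness for the Vertex struct above would look something like this (a sketch of mine, not from the post):
bool operator==(const Vertex& a, const Vertex& b)
{
    // Position alone isn't enough; every attribute has to match.
    return a.pos == b.pos && a.normal == b.normal && a.texCoord == b.texCoord;
}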
In almost every mesh you're going to have some portion of the triangles which share borders with other triangles, and some portion that are discontinuous and share no border.
The Wavefront OBJ format solves this by treating each attribute of a vertex as independent, storing a separate index per attribute for each face corner. While this optimization helps the storage format (not that the format is particularly efficient to begin with), it doesn't map to GPUs, which expect a single index per vertex during vertex fetch.
If GPUs did support indexing like the OBJ format, you'd almost certainly end up using more memory, as you would need an index buffer for every attribute.
Your OBJ parser is going to have to filter through the indices and form new buffers that contain all of the unique vertices, even if some of the individual attributes are duplicates.
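A minimal sketch of that pass, keyed on the (position index, texcoord index) pairs from the `f` lines; Vertex is the struct from the question, and buildIndexedMesh and its parameters are hypothetical names for what a parser would have filled in:
#include <cstdint>
#include <map>
#include <utility>
#include <vector>
#include <glm/glm.hpp>

void buildIndexedMesh(const std::vector<glm::vec3>& positions,              // "v" lines
                      const std::vector<glm::vec2>& texCoords,              // "vt" lines
                      const std::vector<std::pair<int, int>>& faceCorners,  // "f v/vt" pairs, 1-based
                      std::vector<Vertex>& outVertices,
                      std::vector<uint16_t>& outIndices)
{
    std::map<std::pair<int, int>, uint16_t> seen; // (v, vt) -> slot in outVertices
    for (const auto& corner : faceCorners)
    {
        auto it = seen.find(corner);
        if (it == seen.end())
        {
            // First time this exact (position, texcoord) pairing appears:
            // emit a brand-new vertex, even if the position alone was seen before.
            Vertex vert{};
            vert.pos      = positions[corner.first - 1];   // OBJ indices are 1-based
            vert.normal   = {0.0f, 0.0f, 1.0f};            // placeholder, as in the post
            vert.texCoord = texCoords[corner.second - 1];
            it = seen.emplace(corner, static_cast<uint16_t>(outVertices.size())).first;
            outVertices.push_back(vert);
        }
        outIndices.push_back(it->second);
    }
}
Run over the OBJ above, this yields the same 10 unique vertices and 36 indices the question arrived at by hand (though ordered by first appearance rather than hand-sorted).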
That's nothing. For reference, you can't even allocate a page that small on the GPU; you have to suballocate from a larger allocation. Meshes are tiny in comparison to textures. A 50,000 vertex mesh with position, normal, texture coordinates, and 32-bit indices, assuming no duplicate attributes, comes to about 1,800,000 bytes, whereas a single 2K RGB texture with a full mip chain (the chain adds roughly a third on top of the base level) is more than nine times larger at 16,777,215 bytes. And that's only a single texture; most games typically have a base color, metalness, roughness, normal map, and possibly an AO map.
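To spell out the arithmetic (assuming tightly packed 32-byte vertices, as in the question, and one 32-bit index per vertex):
50,000 vertices x 32 bytes = 1,600,000 bytes
50,000 indices x 4 bytes = 200,000 bytes (1,800,000 bytes total)
2048 x 2048 x 3 bytes (RGB8 base level) = 12,582,912 bytes
full mip chain = base x (1 + 1/4 + 1/16 + ...) = 16,777,215 bytes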