I have an OBJ file that I've parsed, and, not surprisingly, the indexing for vertex positions and texture coordinates is separate.
Here are a couple of OBJ face lines to show what I mean by separate indexing. These are quads, where the first index references the XYZ position and the second index references the UV coordinates:
f 3899/8605 3896/8606 720/8607 3897/8608
f 3898/8609 3899/8610 3897/8611 721/8612
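For reference, after parsing I end up with data shaped roughly like this (the names here are just illustrative, not my actual code; OBJ indices are 1-based in the file and assumed already converted to 0-based):

```cpp
#include <cstdint>
#include <vector>

// Positions and texture coordinates live in separate arrays,
// and each face corner carries its own pair of indices into them.
struct Vec3 { float x, y, z; };
struct Vec2 { float u, v; };

struct FaceCorner {
    uint32_t posIndex; // the "3899" part of 3899/8605
    uint32_t uvIndex;  // the "8605" part of 3899/8605
};

struct ParsedObj {
    std::vector<Vec3> positions;
    std::vector<Vec2> texCoords;
    std::vector<FaceCorner> corners; // 4 per quad face, in order
};
```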
I know that one solution is to do some duplication, but what's the cleverest way to proceed? So far I've had these two options in mind:
1) Use the indexing to build two big arrays of vertex positions and texture coordinates. This means I duplicate everything, blindly ending up with one vertex per v/vt pair in the faces. If, for example, 1/3 appears in one face and the same 1/3 appears in a different face, I end up with two separate vertices. I'd then draw with glDrawArrays on the newly created arrays (full of duplicates), without using indices anymore.
2) Examine each face vertex to build unique "GL vertices" (in my case a vertex is position + texture coordinate) and figure out a way of indexing into those. Unlike 1), the same v/vt pair found multiple times is not treated as separate vertices. I'd then create a new index list over these new vertices and draw with glDrawElements using the new indices (see the sketch after this list).
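Here is a rough sketch of what I mean by option 2, assuming the ParsedObj layout from the snippet above (the map-based deduplication is just one possible way to do it):

```cpp
#include <cstdint>
#include <map>
#include <utility>
#include <vector>

// Final GL-friendly vertex: position and UV interleaved.
struct GLVertex {
    float x, y, z;
    float u, v;
};

// Build deduplicated vertex/index buffers from the parsed OBJ data.
// Each unique (posIndex, uvIndex) pair becomes exactly one GLVertex.
void buildIndexedBuffers(const ParsedObj& obj,
                         std::vector<GLVertex>& outVertices,
                         std::vector<uint32_t>& outIndices)
{
    std::map<std::pair<uint32_t, uint32_t>, uint32_t> remap;

    for (const FaceCorner& c : obj.corners) {
        auto key = std::make_pair(c.posIndex, c.uvIndex);
        auto it = remap.find(key);
        if (it == remap.end()) {
            // First time this v/vt pair is seen: emit a new vertex.
            const Vec3& p = obj.positions[c.posIndex];
            const Vec2& t = obj.texCoords[c.uvIndex];
            uint32_t newIndex = static_cast<uint32_t>(outVertices.size());
            outVertices.push_back({p.x, p.y, p.z, t.u, t.v});
            remap.emplace(key, newIndex);
            outIndices.push_back(newIndex);
        } else {
            // Seen before: just reuse the existing vertex.
            outIndices.push_back(it->second);
        }
    }
}
```

Option 1 would be the same loop without the map: push a GLVertex for every corner and call glDrawArrays on the result. Note that the output here still has one index per quad corner, so triangulation would be a separate step before drawing with GL_TRIANGLES.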
Now, I believe the first option is way easier, but I guess each glDrawArrays call will be a bit slower than a glDrawElements call, right? How big is the advantage I'd gain?
At first glance the second option looks slower as a preprocessing step and more complicated to implement. But will it give me much better performance overall?
Is there any other way to deal with this issue?
If you have a few low-poly models, go for option #1: it's way easier to implement, and the performance difference will be unnoticeable.
Option #2 would be the proper way if you have some high-poly models (looking at the sample, you have at least 9k vertices in there).
Generally you should not worry about model loading time, since that is done only once; after that you can convert/save the model to whatever format is most optimal for you (serialize it just the way it is stored in your code).
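For example (just a sketch, error handling omitted, assuming interleaved float vertex data like position + UV), you could dump the processed buffers to a binary cache file once and load that on subsequent runs instead of re-parsing the OBJ:

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Dump the already-processed, GL-ready buffers as raw binary.
// Loading is the mirror image: read the two counts, resize, read the data.
void saveMeshCache(const char* path,
                   const std::vector<float>& vertexData,
                   const std::vector<uint32_t>& indices)
{
    std::ofstream out(path, std::ios::binary);
    uint64_t vCount = vertexData.size();
    uint64_t iCount = indices.size();
    out.write(reinterpret_cast<const char*>(&vCount), sizeof(vCount));
    out.write(reinterpret_cast<const char*>(&iCount), sizeof(iCount));
    out.write(reinterpret_cast<const char*>(vertexData.data()),
              static_cast<std::streamsize>(vCount * sizeof(float)));
    out.write(reinterpret_cast<const char*>(indices.data()),
              static_cast<std::streamsize>(iCount * sizeof(uint32_t)));
}
```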
Where's the dividing line between these two approaches? It's impossible to say without real-life profiling on the target hardware with your vertex rendering pipeline (skeletal animation, shadows, everything takes its toll).