Let's say we have a set of polygons and a camera that can be rotated and translated in the 3D environment. From certain view angles some of these polygons are completely occluded by one or more of the other polygons. For each drawn frame we know the exact coordinates of each polygon and can iterate through them in order of either increasing or decreasing distance to the camera.
Now my question:
What is an efficient way to determine, before rendering, whether a polygon is completely occluded by others, so that we can simply skip it during drawing to boost performance?
The technique you're looking for is called occlusion culling, and it is a rather complex task.
Being able to iterate through the polygons in order of increasing camera distance (front-to-back) gives you some advantages. Simply rendering them in this order lets you profit from the early z-testing features of today's graphics hardware: occluded polygons still go through vertex processing and rasterization, but their fragments are rejected by the depth test before fragment shading runs. The same effect can be achieved without sorting by rendering the polygons (in arbitrary order) in a so-called depth prepass, where you disable color writes and only render the polygons' depth values. In the next rendering pass (the real one) you then profit from early z-rejection against the prefilled depth buffer.
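Both ideas can be sketched in software with a tiny depth buffer. This is only an illustration of the rejection logic, not real GPU code: the `Span` type (a flat run of pixels at constant depth on a 1-pixel-high screen) and both function names are made up for this example; on real hardware the per-fragment test and the prepass are done by the GPU (e.g. via `glColorMask`/`glDepthMask` in OpenGL).

```cpp
#include <limits>
#include <vector>

// Illustrative stand-in for a polygon: a horizontal run of pixels
// [x0, x1) at a constant depth (smaller depth = closer to the camera).
struct Span { int x0, x1; float depth; };

// Render spans that are already sorted front-to-back against a software
// depth buffer. A fragment failing the depth test is rejected before any
// (expensive) shading would run -- the early-z idea in miniature.
// Returns how many fragments actually survived to be shaded.
int renderFrontToBack(const std::vector<Span>& spans, int width,
                      std::vector<float>& zbuf) {
    zbuf.assign(width, std::numeric_limits<float>::infinity());
    int shaded = 0;
    for (const Span& s : spans)
        for (int x = s.x0; x < s.x1; ++x)
            if (s.depth < zbuf[x]) {   // early z-test
                zbuf[x] = s.depth;
                ++shaded;              // only now would fragment shading run
            }
    return shaded;
}

// Depth-prepass variant: pass 1 writes only depth, in any order and with
// no shading; pass 2 shades only the fragments whose depth matches the
// buffer, i.e. the visible ones.
int renderWithDepthPrepass(const std::vector<Span>& spans, int width,
                           std::vector<float>& zbuf) {
    zbuf.assign(width, std::numeric_limits<float>::infinity());
    for (const Span& s : spans)        // pass 1: depth only
        for (int x = s.x0; x < s.x1; ++x)
            if (s.depth < zbuf[x]) zbuf[x] = s.depth;
    int shaded = 0;
    for (const Span& s : spans)        // pass 2: shade visible fragments
        for (int x = s.x0; x < s.x1; ++x)
            if (s.depth == zbuf[x]) ++shaded;
    return shaded;
}
```

With a near span fully covering a far one, both versions shade only the visible fragments; the trade-off is sorting cost (front-to-back) versus rasterizing every polygon twice (prepass).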
You might also use the hardware occlusion queries of modern GPUs, as explained in this GPU Gems article.
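A hardware occlusion query (e.g. `GL_SAMPLES_PASSED` in OpenGL) asks the GPU how many fragments of some cheap proxy geometry, typically the polygon's bounding volume, would pass the depth test; if the answer is zero, the real geometry is fully occluded and can be skipped. A software analogue of that counting logic, reusing the hypothetical `Span` model from above, might look like this:

```cpp
#include <vector>

// Illustrative stand-in for a polygon: a horizontal run of pixels
// [x0, x1) at a constant depth (smaller depth = closer to the camera).
struct Span { int x0, x1; float depth; };

// Software analogue of an occlusion query: count how many fragments of a
// candidate span would pass the depth test against the current depth
// buffer, without writing to it. A result of 0 means the candidate is
// completely occluded and its real geometry can be skipped entirely.
int occlusionQuery(const Span& s, const std::vector<float>& zbuf) {
    int samplesPassed = 0;
    for (int x = s.x0; x < s.x1; ++x)
        if (s.depth < zbuf[x]) ++samplesPassed;
    return samplesPassed;
}
```

On real hardware the query result arrives asynchronously, so naively waiting for it stalls the pipeline; the GPU Gems article linked above discusses how to batch and overlap queries to hide that latency.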
But as Hannesh said, you should always weigh whether the overhead of the occlusion culling is worth it. I assume the front-to-back sorting in your case doesn't just come for free. Maybe the depth prepass is a viable alternative, since it requires no sorting. You can also use occlusion queries in a way that doesn't require any sorting (as described in the link), but in that case they are not as effective as with front-to-back sorting.