
Presence of Holes while Simulating Quantization on 3D Meshes


I'm using a database of 3D meshes that have been distorted by quantization of their 3D coordinates. My question is: why are there holes in these 3D meshes? Can you explain in depth, because I'm not familiar with quantization? (I only know that quantization of 2D images assigns a number (symbol) to a fraction of the signal being sampled.)


Solution

  • If your meshes are built correctly then you shouldn't get any holes after quantization. At most, some of your triangles may end up with two vertices snapped to the same position, which makes their area 0.

    If you do get holes then you probably have an issue with the geometry itself. Two situations come to mind (both are illustrated in the sketch after this list):

    1. You may have two polygons that look like they share a vertex or edge (like a corner between two walls) but whose ends in fact differ slightly. One of them may end at e.g. 2.499 while the other starts at 2.501. Before quantization the error is too small to notice, but after it you end up with one wall snapped to 2.0 and the other starting at 3.0.

    2. Another problem, quite common in 3D models, is when one polygon touches another but they don't share vertices. In other words, a vertex of the first polygon lies on an edge of the second one. Picture a desk standing against a wall: the desk touches the wall, but they don't share a vertex. If you quantize such a model you may get a mesh where the touch point on the wall (interpolated between its two quantized endpoints) no longer matches the point on the desk (which was quantized separately). This can also give you holes in your mesh. The fix for this issue is to split the wall at the touch point, so that it is made of several polygons that really do share one or more vertices with the desk.
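
    Here is a minimal Python sketch of both failure modes, assuming a plain round-to-nearest quantizer on the vertex coordinates; the step size and the example coordinates are made up for illustration:

    ```python
    # A toy demonstration of how coordinate quantization opens cracks.
    # The quantization step and the coordinates are illustrative only.

    def quantize(value, step=1.0):
        """Snap a coordinate to the nearest multiple of `step`."""
        return round(value / step) * step

    # Case 1: two walls that almost share an edge near x = 2.5.
    wall_a_end   = 2.499                # end of wall A
    wall_b_start = 2.501                # start of wall B: a 0.002 gap
    print(quantize(wall_a_end))         # 2.0
    print(quantize(wall_b_start))       # 3.0 -> the gap grew to 1.0

    # Case 2: a T-junction. A desk touches the wall's edge at y = 4.3,
    # but the wall has no vertex there; its edge runs from y = 0 to y = 10.
    wall_y0, wall_y1 = 0.0, 10.0
    desk_touch_y = 4.3

    # The point where the wall passes the desk is *interpolated* between
    # the wall's quantized endpoints, while the desk's own vertex is
    # quantized directly -- and the two no longer agree.
    t = (desk_touch_y - wall_y0) / (wall_y1 - wall_y0)
    point_on_wall = quantize(wall_y0) + t * (quantize(wall_y1) - quantize(wall_y0))
    point_on_desk = quantize(desk_touch_y)
    print(point_on_wall, point_on_desk)  # ~4.3 vs 4.0 -> a crack opens
    ```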

    I can't do any drawings right now and I realize it's hard to explain in words, but I hope this description will do. If you still have trouble picturing it, I can try to draw some examples; let me know.

    Edit:

    Quantization is just a process that maps numeric values onto a smaller set of possible values, like casting a float to an int or rounding a price to 10-cent coins. It's that simple, and you can apply it to any numeric value.
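
    In code terms, a minimal sketch (the helper name and the step size are just for illustration):

    ```python
    def quantize(value, step):
        """Map `value` onto the nearest multiple of `step`."""
        return round(value / step) * step

    print(int(3.7))                       # casting a float to int: 3
    print(f"{quantize(1.23, 0.10):.2f}")  # price rounded to 10-cent coins: 1.20
    ```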

    What makes it hard to understand is a wrong assumption in the mesh-image analogy. Remember that typical meshes are vector-based, while bitmap images are raster-based. The 3D analogy of a 2D raster image is a 3D raster image, such as a 3D texture; we sometimes call its elements voxels (from "volume pixels"). The 2D analogy of a 3D mesh, on the other hand, would be a vector image, like SVG.

    In raster-based data you have a grid of pixels, each holding values such as color, brightness, etc.; in vector-based data you have separate vertices holding values such as position, UV coordinates, normals, etc., and all of those numbers can be quantized.
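
    As a sketch, each vertex attribute can be quantized independently with its own precision; the attribute names and step sizes here are assumptions, not any standard:

    ```python
    def quantize(value, step):
        """Map `value` onto the nearest multiple of `step`."""
        return round(value / step) * step

    # One vertex with a few typical attributes; values are made up.
    vertex = {
        "position": (1.2371, 5.9018, -0.4442),
        "uv":       (0.3331, 0.7209),
        "normal":   (0.7072, 0.7070, 0.0001),
    }

    # Each attribute gets its own quantization step.
    steps = {"position": 1 / 256, "uv": 1 / 1024, "normal": 1 / 127}

    quantized = {
        name: tuple(quantize(v, steps[name]) for v in values)
        for name, values in vertex.items()
    }
    print(quantized)
    ```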

    You may also quantize values several times, each time onto a different number of possible values. Say your digital camera has 10-bit CCDs and performs the first quantization when you take a picture; that data then gets saved as JPEG, which supports 8 bits per channel; then you convert it to GIF, which supports just 256 colors, etc. Each step applies another quantization to your data.
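
    A sketch of that chain, assuming plain linear rescaling between bit depths (real JPEG and GIF encoders do far more than this; the point is only the repeated loss of levels):

    ```python
    def requantize(value, src_bits, dst_bits):
        """Linearly rescale an integer sample from src_bits to dst_bits."""
        src_max = (1 << src_bits) - 1
        dst_max = (1 << dst_bits) - 1
        return round(value * dst_max / src_max)

    sensor = 700                        # a 10-bit CCD sample, range 0..1023
    jpeg   = requantize(sensor, 10, 8)  # 8 bits per channel: 700 -> 174
    gif    = requantize(jpeg, 8, 5)     # crude 5-bit stand-in for a GIF palette
    print(sensor, jpeg, gif)            # 700 174 21
    ```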