Tags: c++, opengl, textures, shader, vertex-shader

Scene voxelization not working due to lack of comprehension of texture coordinates


The goal is to take an arbitrary geometry and create a 3D texture containing the voxel approximation of the scene. However, right now we only have cubes.

The scene looks as follows:

[Screenshot: the voxel cube scene]

The two most important aspects of this scene are the following:

Each cube in the scene is supposed to correspond to one voxel in the 3D texture.

The scene geometry becomes smaller as the height increases (similar to a pyramid), and it is hollow (i.e. if you go inside one of these hills the interior has no cubes, only the outline does).

To voxelize the scene we render layer by layer as follows:

glViewport(0, 0, 7*16, 7*16);
glBindFramebuffer(GL_FRAMEBUFFER, FBOs[FBO_TEXTURE]);

for(int i=0; i<4*16; i++)
{
    //Attach layer i of the 3D texture as the color render target
    glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_3D, 
        vMap->textureID, 0, i);

    glClearColor(0.f, 0.f, 0.f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    load_uniform((float)i, "level");
    draw();
}

Where "level" corresponds to the current layer.

Then in the vertex shader we attempt to create a single layer as follows:

#version 450

layout(location = 0) in vec3 position; //(x,y,z) coordinates of a vertex

layout(std430, binding = 3) buffer instance_buffer
{
    vec4 cubes_info[];//first 3 values are position of object 
};

out vec3 normalized_pos;

out float test;

uniform float width = 128;
uniform float depth = 128;
uniform float height = 128;

uniform float voxel_size = 1;

uniform float level=0;

void main()
{
    vec4 pos = (vec4(position, 1.0) + vec4(vec3(cubes_info[gl_InstanceID]),0));

    pos.x = (2.f*pos.x-width)/(width);
    pos.y = (2.f*pos.y-depth)/(depth);
    pos.z = floor(pos.z);

    test = pos.z;
    pos.z -= level;

    gl_Position = pos;
}
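For reference, the x/y remap in the shader above is a plain texel-to-NDC mapping. A minimal CPU-side sketch of the same arithmetic (the function name `to_ndc` is mine, not from the shader):

```cpp
#include <cassert>

// Sketch of the shader's (2.f*pos.x - width)/width remap: maps a
// coordinate in [0, extent] to normalized device coordinates in [-1, 1].
float to_ndc(float v, float extent)
{
    return (2.0f * v - extent) / extent;
}
```

With `extent = 128` the cube grid spans the full viewport: 0 maps to -1, 64 to 0, and 128 to 1.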

Finally the fragment shader:

#version 450

in vec3 normalized_pos;
in float l;

in float test;

out vec4 outColor;//Final color of the pixel

void main()
{
    outColor = vec4(vec3(test)/10.f, 1.0);

}

Using RenderDoc I have taken some screenshots of what the resulting texture looks like:

Layer 0:

[Screenshot: texture layer 0]

Layer 2:

[Screenshot: texture layer 2]

The two immediately noticeable problems are that:

A layer should not have multiple tones of gray, only one (since each layer corresponds to a different height, there should not be multiple heights being rendered to the same layer).

The darkest section of layer 2 looks like what layer 0 should look like (i.e. a filled shape with no "holes"). So not only do I seem to be rendering multiple heights to the same layer, there also seems to be an offset of 2 when rendering, which should not happen.

Does anyone have any idea as to what the problem could be?

EDIT:

In case anyone is wondering, the cubes have dimensions of [1,1,1] and their coordinate system is aligned with the texture, i.e. the bottom, left, front corner of the first cube is at (0,0,0).

EDIT 2:

Changing

pos.z = floor(pos.z);

to

pos.z = floor(pos.z) + 0.1;

partially fixes the problem. The lowest layer is now correct; however, instead of 3 different colors (height values) there are now 2.
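A numeric sketch of why the +0.1 bias only partially helps. Assuming the fixed-function clip test keeps a vertex when -w <= z <= w (here w = 1), this hypothetical helper checks whether a cube at height h survives clipping when rendering layer `level`:

```cpp
#include <cassert>
#include <cmath>

// Does a cube at height h survive clipping when rendering texture
// layer `level`, given the shader's z = floor(h) + bias - level?
// Assumes the standard clip test -1 <= z <= 1 (w = 1).
bool survives_clip(float h, float level, float bias)
{
    float z = std::floor(h) + bias - level;
    return z >= -1.0f && z <= 1.0f;
}
```

With bias = 0, heights level-1, level, and level+1 all land exactly on or inside the clip boundary (z = -1, 0, 1), giving three gray tones per layer; with bias = 0.1, height level+1 moves to z = 1.1 and is clipped, leaving the two tones observed.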

EDIT 3:

It seems the problem comes from drawing the geometry multiple times, i.e. my actual draw call looks like:

for(uint i=0; i<render_queue.size(); i++)
    {
        Object_3D *render_data = render_queue[i]; 
        //Render multiple instances of the current object
        multi_render(render_data->VAO, &(render_data->VBOs), 
            &(render_data->types), render_data->layouts, 
            render_data->mesh_indices, render_data->render_instances);
    }

void Renderer::multi_render(GLuint VAO, vector<GLuint> *VBOs, 
    vector<GLuint> *buffer_types, GLuint layout_num, 
    GLuint index_num, GLuint instances)
{
    //error check
    if(VBOs->size() != buffer_types->size())
    {
        cerr << "Mismatching VBOs and buffer_types sizes" << endl;
        return;
    }

    //Bind vertex array object and rendering program
    glBindVertexArray(VAO);
    glUseProgram(current_program);

    //enable shader layouts
    for(int i=0; i<layout_num;i++)
        glEnableVertexAttribArray(i);

    //Bind VBO's storing rendering data
    for(uint i=0; i<buffer_types->size(); i++)
    {
        if((*buffer_types)[i]==GL_SHADER_STORAGE_BUFFER)
        {
            glBindBuffer((*buffer_types)[i], (*VBOs)[i]);
            glBindBufferBase(GL_SHADER_STORAGE_BUFFER, i, (*VBOs)[i]);
        }
    }
    //Draw call
    glDrawElementsInstanced(GL_TRIANGLES, index_num, GL_UNSIGNED_INT, (void*)0, instances);
}

It seems, then, that due to rendering multiple subsets of the scene at a time, I end up with different cubes being mapped to the same voxel in 2 different draw calls.
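One way to sidestep this overlap entirely (a sketch of an alternative, not code from the question) would be to filter instances on the CPU before each draw call, submitting only the cubes whose base z falls in the layer currently attached to the framebuffer. The `Instance` struct below is a hypothetical stand-in for the `cubes_info` SSBO entries:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Hypothetical mirror of one cubes_info entry (vec4: position + padding).
struct Instance { float x, y, z, w; };

// Keep only the instances whose base z coordinate lies in the given layer.
std::vector<Instance> instances_for_layer(const std::vector<Instance> &all, int level)
{
    std::vector<Instance> out;
    for (const Instance &c : all)
        if (static_cast<int>(std::floor(c.z)) == level)
            out.push_back(c);
    return out;
}
```

The filtered list would then be uploaded (or indexed) per layer, so no draw call can write cubes from two heights into the same texture layer.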


Solution

  • I have figured out the problem.

    Since my geometry matches the voxel grid 1 to 1, cubes from different layers could be mapped to the same voxel, causing them to overlap in the same layer.

    Modifying the vertex shader to the following:

    #version 450
    
    layout(location = 0) in vec3 position; //(x,y,z) coordinates of a vertex
    
    layout(std430, binding = 3) buffer instance_buffer
    {
        vec4 cubes_info[];//first 3 values are position of object 
    };
    
    out vec3 normalized_pos;
    
    out float test;
    
    uniform float width = 128;
    uniform float depth = 128;
    uniform float height = 128;
    
    uniform float voxel_size = 1;
    
    uniform float level=0;
    
    void main()
    {
        vec4 pos = (vec4(position, 1.0) + vec4(vec3(cubes_info[gl_InstanceID]),0));
    
        pos.x = (2.f*pos.x-width)/(width);
        pos.y = (2.f*pos.y-depth)/(depth);
    
        pos.z = cubes_info[gl_InstanceID].z;
    
        test = pos.z + 1;
        pos.z -= level;
    
        if(pos.z >=0 && pos.z < 0.999f)
            pos.z = 1; //cube belongs to the current layer: keep it inside the clip volume
        else 
            pos.z = 2; //cube belongs to another layer: push it outside so it is clipped
    
        gl_Position = pos;
    
        normalized_pos = vec3(pos);
    }
    

    Fixes the issue.

    The if statement guarantees that geometry from a different layer, which could otherwise be mapped to the current layer, is clipped away.

    There are probably better ways to do this. So I will accept as an answer anything that produces an equivalent result in a more elegant way.
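    For clarity, the shader's layer test can be restated as a CPU-side predicate (`in_current_layer` is a hypothetical helper of mine, not part of the shader): a cube belongs to the layer being rendered iff its z coordinate, after subtracting the layer index, falls in [0, 1).

```cpp
#include <cassert>

// Restates the shader's check: keep a cube only when its base z lies in
// the current layer, i.e. 0 <= cube_z - level < 1 (0.999f as in the shader,
// to stay safely below the next layer boundary).
bool in_current_layer(float cube_z, float level)
{
    float z = cube_z - level;
    return z >= 0.0f && z < 0.999f;
}
```

    Cubes failing the predicate are the ones the shader pushes to z = 2, outside the clip volume.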

    This is what layer 0 looks like now:

    [Screenshot: corrected texture layer 0]

    And this is what layer 2 looks like:

    [Screenshot: corrected texture layer 2]