Here is a conversion from 32-bit float per channel to "unsigned byte" per channel color, to save some PCI-Express bandwidth for other things. Sometimes there are stripes of color and they look unnatural.
How can I avoid this, especially on the edges of the spheres?
Float color channels:
Unsigned byte channels:
Here, the yellow edge on the blue sphere and the blue edge on the red one should not exist.
The normalization I used (from the OpenCL kernel):
// multiplying by r doesn't help; the picture gets too bright and reddish.
float r=rsqrt(pixel0.x*pixel0.x+pixel0.y*pixel0.y+pixel0.z*pixel0.z+0.001f);
unsigned char rgb0=(unsigned char)(pixel0.x*255.0);
unsigned char rgb1=(unsigned char)(pixel0.y*255.0);
unsigned char rgb2=(unsigned char)(pixel0.z*255.0);
rgba_byte[i*4+0]=rgb0>255?255:rgb0;
rgba_byte[i*4+1]=rgb1>255?255:rgb1;
rgba_byte[i*4+2]=rgb2>255?255:rgb2;
rgba_byte[i*4+3]=255;
Binding to buffer:
GL11.glEnableClientState(GL11.GL_COLOR_ARRAY);
GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, id);
// 4 unsigned bytes per color, 4-byte stride (tightly packed), starting at offset 0
GL11.glColorPointer(4, GL11.GL_UNSIGNED_BYTE, 4, 0);
Using LWJGL (GLFW context) in a Java environment.
As Andon M. said, I clamped before casting (I couldn't see it; I badly needed sleep) and that solved it.
The color quality is not great, by the way, but using the smaller color buffer did improve performance.
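A minimal sketch of that clamp-before-cast fix, using the same pixel0 and rgba_byte names as in the kernel above:

// Clamp each float channel to [0, 1] before the multiply and cast,
// so the intermediate value can never exceed what an unsigned char holds.
float3 c = clamp((float3)(pixel0.x, pixel0.y, pixel0.z), 0.0f, 1.0f);
rgba_byte[i*4+0] = (unsigned char)(c.x * 255.0f);
rgba_byte[i*4+1] = (unsigned char)(c.y * 255.0f);
rgba_byte[i*4+2] = (unsigned char)(c.z * 255.0f);
rgba_byte[i*4+3] = 255;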
Your original data set contains floating-point values outside the normalized [0.0, 1.0] range, which, after multiplying by 255.0 and casting to unsigned char, produces overflow. The false coloring you are seeing occurs in areas of the scene that are exceptionally bright in one or more color components.
It seems you knew to expect this overflow when you wrote rgb0>255?255:rgb0, but that logic cannot work: when the value overflows an unsigned char it wraps around toward 0 rather than holding a number larger than 255, so the comparison is never true.
The minimal solution to this would be to clamp the floating-point colors into the range [0.0, 1.0] before converting to fixed-point 0.8 (8-bit unsigned normalized) color, to avoid overflow.
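OpenCL also offers saturating conversion built-ins (e.g. convert_uchar4_sat) that clamp to [0, 255] during the cast itself. A sketch, assuming pixel0 has x/y/z components as above and rgba_out is a hypothetical __global uchar4* view of the same byte buffer:

// The _sat suffix saturates each component to the uchar range during conversion,
// so no separate clamp or comparison is needed.
float4 c = (float4)(pixel0.x, pixel0.y, pixel0.z, 1.0f) * 255.0f;
rgba_out[i] = convert_uchar4_sat(c);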
However, if this is a frequent problem, you may be better off implementing an HDR-to-LDR post-process. You would identify the brightest pixel in some region (or all) of your scene and then normalize all of the colors into that range. You were sort of implementing this to begin with (with r = rsqrt(...)), but it was only using the magnitude of the current pixel to normalize the color.
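A sketch of what that post-process could look like as a second kernel, assuming the brightest color component in the scene (scene_max below, a hypothetical parameter) has already been found by a separate max-reduction pass over the float buffer:

// Hypothetical second pass: scale every color relative to the scene's brightest
// component so everything lands in [0, 1], then convert to bytes with saturation.
__kernel void hdr_to_ldr(__global const float4 *pixels,
                         __global uchar4 *rgba_byte,
                         const float scene_max)
{
    int i = get_global_id(0);
    float4 c = pixels[i] / fmax(scene_max, 1e-6f);  // avoid divide-by-zero
    c.w = 1.0f;                                     // opaque alpha
    rgba_byte[i] = convert_uchar4_sat(c * 255.0f);
}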