Before starting:
A2B10G10R10 (2 bits for the alpha, 10 bits for each color channel)
A8B8G8R8 (8 bits for every channel)
Correct me if I'm wrong, but is it right that the A2B10G10R10 pixel format cannot be displayed directly on screens?
If so, I would like to convert my A2B10G10R10 image to a displayable A8B8G8R8 one, either using OpenCV, Direct3D 9, or manually. I'm really bad when it comes to bitwise operations, which is why I need your help.
So here I am:
// Get the surface description and a pointer to the texture bits
D3DSURFACE_DESC desc;
offscreenSurface->GetDesc(&desc);
D3DLOCKED_RECT memDesc;
offscreenSurface->LockRect(&memDesc, NULL, 0);
// Wrap the locked bits in a cv::Mat (no copy; Pitch is the row stride in bytes)
cv::Mat m(desc.Height, desc.Width, CV_8UC4, memDesc.pBits, memDesc.Pitch);
// Convert from A2B10G10R10 to A8B8G8R8
???
Here is how I think I should handle each 32-bit pack:
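Very roughly, this is the kind of thing I have in mind (just a sketch; the bit layout of R in bits 0-9, G in bits 10-19, B in bits 20-29 and A in bits 30-31, as well as the helper name, are my guesses from the format name, so please correct me if they are wrong):

#include <cstdint>

// Naive attempt: unpack each channel, drop the low 2 bits of the colors,
// and shift the 2-bit alpha up to 8 bits.
uint32_t ConvertPixelNaive(uint32_t abgr2101010)
{
    uint32_t r = (abgr2101010      ) & 0x3FF; // bits 0-9
    uint32_t g = (abgr2101010 >> 10) & 0x3FF; // bits 10-19
    uint32_t b = (abgr2101010 >> 20) & 0x3FF; // bits 20-29
    uint32_t a = (abgr2101010 >> 30) & 0x3;   // bits 30-31

    uint32_t r8 = r >> 2;
    uint32_t g8 = g >> 2;
    uint32_t b8 = b >> 2;
    uint32_t a8 = a << 6; // 3 becomes 192 here, which doesn't feel right

    // Repack as A8B8G8R8: R in the low byte, A in the high byte
    return r8 | (g8 << 8) | (b8 << 16) | (a8 << 24);
}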
So to sum up, the question is: how do I convert an A2B10G10R10 pixel format texture to an A8B8G8R8 one?
Thanks. Best regards.
I'm not sure why you are using legacy Direct3D 9 instead of DirectX 11. In any case, the naming scheme between the Direct3D 9 era D3DFMT and the modern DXGI_FORMAT is flipped, so it can be a bit confusing.
D3DFMT_A8B8G8R8 is the same as DXGI_FORMAT_R8G8B8A8_UNORM.
D3DFMT_A2B10G10R10 is the same as DXGI_FORMAT_R10G10B10A2_UNORM.
D3DFMT_A8R8G8B8 is the same as DXGI_FORMAT_B8G8R8A8_UNORM.
There is no direct equivalent of D3DFMT_A2R10G10B10 in DXGI, but you can swap the red/blue channels to get it.
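If you need that red/blue swap in code, something along these lines works on the packed 32-bit values (a minimal sketch; the helper name is mine):

#include <cstdint>

// Exchange the 10-bit channels in bits 0-9 and 20-29 of a packed 10:10:10:2
// value, leaving the middle channel (bits 10-19) and the 2-bit alpha untouched.
uint32_t SwapRedBlue1010102(uint32_t v)
{
    uint32_t lo = v & 0x000003FF;         // bits 0-9
    uint32_t hi = (v >> 20) & 0x000003FF; // bits 20-29
    return (v & 0xC00FFC00) | hi | (lo << 20);
}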
There's also a long-standing bug in the deprecated D3DX9, D3DX10, and D3DX11 helper libraries where the DDS file format's DDPIXELFORMAT has the red and blue masks backwards for both 10:10:10:2 formats. My DDS texture readers solve this by flipping the mapping of the masks to the formats on read, and by always writing DDS files using the more modern DX10 header where I explicitly use DXGI_FORMAT_R10G10B10A2_UNORM. See this post for more details.
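For reference, assuming the R-in-the-low-bits layout of DXGI_FORMAT_R10G10B10A2_UNORM, these are the channel masks a correct DDPIXELFORMAT entry would carry (the constant names are just for illustration):

#include <cstdint>

// Masks for 10:10:10:2 data with R in bits 0-9, G in bits 10-19,
// B in bits 20-29 and A in bits 30-31.
const uint32_t kRBitMask = 0x000003FF;
const uint32_t kGBitMask = 0x000FFC00;
const uint32_t kBBitMask = 0x3FF00000;
const uint32_t kABitMask = 0xC0000000;
// The buggy legacy writers emit the red and blue masks exchanged for this layout.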
The biggest problem with converting 10:10:10:2 to 8:8:8:8 is that you are losing 2 bits of data from the R, G, B color channels. You can do a naïve bit-shift, but the results are usually crap. To handle the color conversion where you are losing precision, you want to use something like error diffusion or ordered dithering.
Furthermore, for the 2-bit alpha you don't want 3 (11) to map to 192 (11000000), because 3 (11) is fully opaque in 2-bit alpha while fully opaque in 8-bit alpha is 255 (11111111).
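For reference, here is a minimal sketch of the kind of rescaling you want instead of a plain shift (no dithering, just round-to-nearest for the colors and a proper 2-bit alpha expansion; the function name is mine):

#include <cstdint>

// Rescale each 10-bit channel to 8 bits with rounding rather than truncation,
// and expand the 2-bit alpha so that 0, 1, 2, 3 map to 0, 85, 170, 255.
uint32_t ConvertA2B10G10R10ToA8B8G8R8(uint32_t v)
{
    uint32_t r = (v      ) & 0x3FF;
    uint32_t g = (v >> 10) & 0x3FF;
    uint32_t b = (v >> 20) & 0x3FF;
    uint32_t a = (v >> 30) & 0x3;

    uint32_t r8 = (r * 255 + 511) / 1023; // 10 -> 8 bits, rounded
    uint32_t g8 = (g * 255 + 511) / 1023;
    uint32_t b8 = (b * 255 + 511) / 1023;
    uint32_t a8 = a * 85;                 // 2 -> 8 bits, 3 maps to 255

    return r8 | (g8 << 8) | (b8 << 16) | (a8 << 24);
}

Dithering (ordered or error diffusion) goes one step further and spreads the rounding error across neighboring pixels instead of rounding each pixel independently.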
Take a look at DirectXTex, which is an open source library that does conversions for every DXGI_FORMAT and can handle legacy conversions of most D3DFMT formats. It implements all the stuff I just mentioned.
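As a sketch of what that looks like with DirectXTex (the wrapper function, the assumption of tightly packed rows, and the exact Convert overload are mine for illustration; check the library's documentation for the version you use):

#include <cstdint>
#include <DirectXTex.h>

// Wrap raw 10:10:10:2 pixels in a DirectXTex Image and convert them to
// 8:8:8:8 with error-diffusion dithering.
HRESULT ConvertWithDirectXTex(const uint32_t* srcPixels, size_t width, size_t height,
                              DirectX::ScratchImage& converted)
{
    using namespace DirectX;

    Image src = {};
    src.width      = width;
    src.height     = height;
    src.format     = DXGI_FORMAT_R10G10B10A2_UNORM;
    src.rowPitch   = width * sizeof(uint32_t); // assumes tightly packed rows
    src.slicePitch = src.rowPitch * height;
    src.pixels     = reinterpret_cast<uint8_t*>(const_cast<uint32_t*>(srcPixels));

    return Convert(src, DXGI_FORMAT_R8G8B8A8_UNORM,
                   TEX_FILTER_DITHER_DIFFUSION, TEX_THRESHOLD_DEFAULT,
                   converted);
}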
The library uses float4 intermediate values because it's built on DirectXMath, and that provides a more general solution than having a bunch of special-case conversion combinations. For special-case high-performance use, you could write a direct 10-bit to 8-bit converter with all the dithering, but that's a pretty unusual situation.
With all that discussion of image format conversion out of the way, you can in fact render a 10:10:10:2 texture onto an 8:8:8:8 render target for display. You can use 10:10:10:2 as a render target backbuffer format as well, and it will get converted to 8:8:8:8 as part of the present. Hardware support for 10:10:10:2 is optional on Direct3D 9, but required for Direct3D Feature Level 10 or better cards when using DirectX 11. You can even get true 10-bit display scan-out when using the "exclusive" full screen rendering mode, and Windows 10 is implementing HDR display out natively later this year.
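If you go the Direct3D 9 route, you can ask the runtime up front whether 10:10:10:2 is usable on the current hardware; here is a minimal sketch (the adapter format and usage flags are assumptions for a typical desktop setup):

#include <d3d9.h>

// Returns true if the default HAL adapter can use A2B10G10R10 surfaces as
// render targets while the display is in X8R8G8B8 mode.
bool SupportsA2B10G10R10RenderTarget(IDirect3D9* d3d)
{
    HRESULT hr = d3d->CheckDeviceFormat(D3DADAPTER_DEFAULT,
                                        D3DDEVTYPE_HAL,
                                        D3DFMT_X8R8G8B8,       // current display format
                                        D3DUSAGE_RENDERTARGET, // intended usage
                                        D3DRTYPE_SURFACE,
                                        D3DFMT_A2B10G10R10);
    return SUCCEEDED(hr);
}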