My compute shader (written in HLSL) compiled and worked with the earlier SDK 1.0.65.0. I updated to 1.0.68.0 and recompiled it; now I get this error when calling vkCreateShaderModule:
Vulkan error: [SC], code: 5: SPIR-V module not valid: AtomicSMax: expected Result Type to be int scalar type
I verified that the error comes from this function in my shader:
groupshared uint ldsZMax;
groupshared uint ldsZMin;
...
void CalculateMinMaxDepthInLds( uint3 globalThreadIdx, uint depthBufferSampleIdx )
{
    float viewPosZ = depthTexture.Load( uint3( globalThreadIdx.x, globalThreadIdx.y, 0 ) ).x;
    // Reinterpret the float depth bits as uint so it can be fed to the atomic intrinsics.
    uint z = asuint( viewPosZ );
    if (viewPosZ != 0.f)
    {
        InterlockedMax( ldsZMax, z );
        InterlockedMin( ldsZMin, z );
    }
}
I compile the shader with this command:
C:\VulkanSDK\1.0.68.0\Bin\glslangValidator -D -V -S comp -e CSMain LightCuller.hlsl -o LightCuller.spv
The error goes away if I don't use those Interlocked* methods. I also tried using ints instead of uints, but the problem persists. What am I doing wrong, or could this be a codegen bug?
If the SPIR-V validator says that something generated by glslangValidator is invalid, then that is either a glslangValidator bug or a SPIR-V validator bug. Probably best to file a bug at https://github.com/KhronosGroup/glslang; if the maintainers there think they're doing the right thing, they'll follow up with the spirv-tools people.
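Before filing, it may help to confirm exactly what glslang emitted. The Vulkan SDK ships spirv-dis, which disassembles a SPIR-V module into readable text; assuming the same Bin directory layout as your compile command, something like this should show which atomic opcode and result type are generated for the InterlockedMax call:

C:\VulkanSDK\1.0.68.0\Bin\spirv-dis LightCuller.spv -o LightCuller.spvasm

Searching the disassembly for OpAtomic and comparing what you find against the validator message would make the bug report much easier to act on.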
Though looking at the HLSL InterlockedMax docs, isn't it supposed to take three parameters? The glslangValidator bug here may just be a failure to issue an error for invalid input. But I'm not an HLSL expert; maybe there's some other variant with two parameters and a return value.
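If it is a parameter-count issue, one cheap experiment is to call the three-parameter form described in those docs and see whether the validator still rejects the module. A minimal sketch of that change to the shader above (origMax and origMin are hypothetical names, not part of the original code):

uint origMax, origMin;
// Hypothetical experiment: three-parameter overload that also returns
// the value the destination held before the atomic operation.
InterlockedMax( ldsZMax, z, origMax );
InterlockedMin( ldsZMin, z, origMin );

If that version produces valid SPIR-V while the two-parameter version does not, that detail is worth including in the glslang issue.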