This question is about the mad() function available in OpenCL, which promises significant improvements for calculations of the form a * b + c when written as mad(a, b, c) and compiled with -cl-mad-enable.
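For concreteness, the pattern I mean looks like this (a minimal sketch; the variable names are just illustrative):

    // mad(a, b, c) computes a * b + c, ideally via a single
    // hardware multiply-add instruction.
    double a = 1.5, b = 2.5, c = 3.5;
    double plain = a * b + c;       // written out
    double fused = mad(a, b, c);    // built-in mad()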
I tried a calculation of the form a + b * c + d * e using mad() over a very large data set, expecting a significant improvement. Surprisingly, it took the same time.
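Concretely, I folded the expression into nested mad() calls, roughly like this (a sketch; variable names are illustrative):

    // a + b*c + d*e as two chained mad() calls:
    // the inner call computes b*c + a, the outer adds d*e on top.
    double result = mad(d, e, mad(b, c, a));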
If anybody has experience with this, I would appreciate some insight. My hunch is that it should work, because most of the resources I have found are full of praise for mad(). Note: the data types I am using are all doubles, and, if it matters, my use of mad() resulted in a very large precision loss.
There's a big difference between being able to handle doubles and being able to handle double precision efficiently. Most recent GPUs handle doubles, but are roughly 2x-4x slower than at single precision.
However, AFAIK all of the GPUs that handle doubles have mad instructions. AMD documents this - e.g. see the R600-Family ISA manual, dated 2008, which describes the MULADD_64 instruction. I've seen less detailed documentation from Nvidia, but docs like Floating Point for NVIDIA GPUs say Nvidia has FMA (fused multiply-add). The manuals for Intel GPUs at https://www.x.org/docs/intel/ don't mention double precision (at least not anywhere Google can find).
However, probably the main reason you are seeing no difference when using mad() is that the compiler already recognizes that a mad can be used.
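That is, with the right build options the compiler is free to contract a * b + c into a mad on its own, so spelling out mad() changes nothing. A sketch of the host-side build call, assuming the usual cl_program/cl_device_id setup:

    /* -cl-mad-enable lets the compiler substitute mad for a * b + c
       by itself; -cl-fast-relaxed-math implies it, among other things. */
    const char *options = "-cl-mad-enable";
    cl_int err = clBuildProgram(program, 1, &device, options, NULL, NULL);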
On some GPUs you can look at the code generated, e.g. with AMD CodeAnalyst, or with the AMD GPU ShaderAnalyzer for OpenGL code.
I have spent a lot of time looking at code generated with these tools, and IIRC the plain a * b + c form was already optimized into a mad. A sketch of the kind of comparison you could run is below.
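In lieu of a dump from those tools, here is the kind of kernel pair you could feed to them and compare (a sketch, assuming the cl_khr_fp64 extension is available; kernel and argument names are illustrative):

    #pragma OPENCL EXTENSION cl_khr_fp64 : enable

    // Build both with -cl-mad-enable and compare the generated ISA.
    // If both forms compile to the same multiply-add instructions
    // (e.g. MULADD_64 on AMD), identical timings are expected.
    __kernel void plain_form(__global const double *a,
                             __global const double *b,
                             __global const double *c,
                             __global const double *d,
                             __global const double *e,
                             __global double *out)
    {
        size_t i = get_global_id(0);
        out[i] = a[i] + b[i] * c[i] + d[i] * e[i];
    }

    __kernel void mad_form(__global const double *a,
                           __global const double *b,
                           __global const double *c,
                           __global const double *d,
                           __global const double *e,
                           __global double *out)
    {
        size_t i = get_global_id(0);
        out[i] = mad(d[i], e[i], mad(b[i], c[i], a[i]));
    }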