I wrote an application that prefers NFC. When I get a filename from OS X, it's normalized as NFD though. As far as I know I shouldn't convert the data, as mentioned here:
http://www.win.tue.nl/~aeb/linux/uc/nfc_vs_nfd.html
[…] Not because something is wrong with NFD, or this version of NFD, but because one should never change data. Filenames must not be normalized. […]
When I compare the filename with the user input (which is in NFC), I have to implement a compare function that takes Unicode equivalence into account. But that could be much slower than necessary. Wouldn't it be better to normalize the filename to NFC instead? It would be much faster if a plain memory compare sufficed.
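To make the problem concrete, here is a small sketch in Python's `unicodedata` module (the string and variable names are my own illustration) showing that the NFC and NFD forms of the same name are canonically equivalent but compare unequal byte for byte:

```python
import unicodedata

# "café.txt" as the filesystem might return it (decomposed, NFD)
# versus as the user typically types it (precomposed, NFC).
name_nfd = unicodedata.normalize("NFD", "café.txt")
user_nfc = unicodedata.normalize("NFC", "café.txt")

# A naive comparison fails even though both are the "same" name:
assert name_nfd != user_nfc

# Bringing both sides to the same form makes them compare equal:
assert unicodedata.normalize("NFC", name_nfd) == user_nfc
```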
The accuracy of the advice you link to depends on the filesystem in question.
The 'standard' Linux file systems do not prescribe an encoding for filenames (they are treated as raw bytes), so assuming they are UTF-8 and normalising them is an error and may cause problems.
On the other hand, the default filesystem on Mac OS X (HFS+) requires all filenames to be valid UTF-16 in a variant of NFD. If you need to compare file paths, you should do so in a similar format – ideally using the APIs provided by the system, as its NFD form is tied to an older version of Unicode.
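One way to follow that advice without rewriting the on-disk names is to normalize only for the comparison itself, leaving the stored filename untouched. A sketch in Python (`find_file` is a hypothetical helper; real macOS code should prefer the platform's own comparison APIs, since the system's NFD variant can differ from a generic Unicode library's):

```python
import os
import unicodedata

def find_file(directory, wanted):
    """Look up `wanted` (in any normalization form) among the entries of
    `directory`, without altering the names as stored on disk."""
    wanted_nfd = unicodedata.normalize("NFD", wanted)
    for entry in os.listdir(directory):
        if unicodedata.normalize("NFD", entry) == wanted_nfd:
            # Return the name exactly as the filesystem reports it.
            return entry
    return None
```

The key point is that normalization happens on transient copies used for the comparison; the original filename bytes are never changed, which is what the linked advice is warning about.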