Visual Studio tries to insist on using TCHARs, which, when the UNICODE option is defined at compile time, resolve to wchar_t, so you effectively end up using the wide versions of the Windows API (and other APIs).
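For example, a call like this silently switches between the ANSI and wide API depending on whether UNICODE is defined (a minimal sketch; ReadTitle is just an illustrative name):

```cpp
#include <windows.h>
#include <tchar.h>

// With UNICODE (and _UNICODE for the tchar.h mappings) defined, TCHAR is
// wchar_t and the generic API names resolve to their wide "W" versions;
// without it, they resolve to the ANSI "A" versions.
void ReadTitle(HWND hwnd)
{
    TCHAR buffer[256];
    GetWindowText(hwnd, buffer, 256); // expands to GetWindowTextW under UNICODE
}
```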
Is there then any danger in using UTF-8 internally in the application (which makes using the C++ STL easier and also enables more readable cross-platform code), and only converting to UTF-16 when you need to call any of the OS APIs?
I'm specifically asking about developing for more than one OS: Windows, which doesn't use UTF-8, and others, like the Mac, that do.
As others have said, there is no danger in using UTF-8 internally and then converting when you need to call Windows functions.
However, be aware that the cost of converting every time you do so might become prohibitive if you're displaying a lot of text. (Remember, you don't just have the conversion itself; you may also have the cost of allocating and freeing the buffers that hold the temporary, converted strings.)
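For illustration, a boundary conversion might look something like this (a minimal sketch; Utf8ToWide is a hypothetical helper name, and error handling is omitted):

```cpp
#include <string>
#include <windows.h>

// Hypothetical helper: convert a UTF-8 std::string to UTF-16 at the Windows
// API boundary. The returned std::wstring is the temporary buffer whose
// allocation and deallocation contribute to the per-call cost mentioned above.
std::wstring Utf8ToWide(const std::string& utf8)
{
    if (utf8.empty()) return std::wstring();
    int len = MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                                  static_cast<int>(utf8.size()), nullptr, 0);
    std::wstring wide(len, L'\0');
    MultiByteToWideChar(CP_UTF8, 0, utf8.data(),
                        static_cast<int>(utf8.size()), &wide[0], len);
    return wide;
}

// Usage at the boundary:
//   SetWindowTextW(hwnd, Utf8ToWide(windowTitle).c_str());
```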
I should also point out that there is wide-character support built into the STL (std::wstring, et al.), so there's really no reason you need to do this.
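For example, a std::wstring works with the usual STL machinery and can be passed straight to the wide API without any conversion step (a minimal sketch; the window handle and title text are placeholders):

```cpp
#include <string>
#include <windows.h>

void SetTitle(HWND hwnd)
{
    // std::wstring supports the same containers and algorithms as
    // std::string, and its buffer goes directly to the wide "W" API.
    std::wstring title = L"My Application";
    title += L" - Document 1";
    SetWindowTextW(hwnd, title.c_str()); // no conversion needed
}
```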
Additionally, working exclusively with UTF-8 is fine for English, but if you plan on supporting East Asian character sets, your storage requirements for text might turn out to be larger than with UTF-16, because most CJK characters require three bytes in UTF-8 but only two in UTF-16 (most Eastern European and Arabic characters take two bytes in either encoding). Again, this will probably only be an issue if you're dealing with large volumes of text, but it's something to consider; doubly so if you're going to be transferring this text over a network connection at any time.
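To make the size difference concrete, here's a small sketch comparing the encoded size of the same three-character Japanese string in both encodings (the UTF-8 byte values are written out explicitly so the example compiles under any C++11-or-later standard):

```cpp
#include <iostream>
#include <string>

int main()
{
    // "Nihongo" (Japanese) as three CJK characters: U+65E5 U+672C U+8A9E.
    std::string    utf8("\xE6\x97\xA5\xE6\x9C\xAC\xE8\xAA\x9E"); // UTF-8 bytes
    std::u16string utf16(u"\u65E5\u672C\u8A9E");                 // UTF-16 units

    std::cout << "UTF-8:  " << utf8.size() << " bytes\n";        // 9 (3 per char)
    std::cout << "UTF-16: " << utf16.size() * sizeof(char16_t)
              << " bytes\n";                                     // 6 (2 per char)
    return 0;
}
```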