As the title says, I have two questions.

Edit: To clarify, they don't actually use char and short; the SDK guarantees 8-bit and 16-bit widths via specific typedefs. The resulting types are then called UInt8 and UInt16.
1. Question
The iTunes SDK uses unsigned short* wherever a string is needed. What are the advantages of using it instead of char*/unsigned char*? How do I convert it to char*, and what is different when working with this type?
2. Question
So far I've only seen char* used when a string must be stored. When should I use unsigned char* instead, or does it make no difference?
unsigned short arrays can be used for wide-character strings - for instance if you have UTF-16 encoded text - although I'd expect to see wchar_t in those cases. But they may have their reasons, such as staying compatible between MacOS and Windows. (If my sources are right, MacOS' wchar_t is 32 bits, while Windows' is 16 bits.)
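If you want to check that on your own toolchains, a tiny standard-C++ program (nothing SDK-specific) will tell you:

```cpp
#include <cstdio>

int main()
{
    // Typically prints 4 on MacOS/Linux toolchains and 2 with MSVC on Windows.
    std::printf("sizeof(wchar_t) = %zu bytes\n", sizeof(wchar_t));
    return 0;
}
```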
You convert between the two types of string by calling the appropriate library function. Which function is appropriate depends on the situation. Doesn't the SDK come with one?
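If it doesn't, and the data really is UTF-16, the standard library can do the transcoding. The following is only a sketch under that assumption - utf16_to_utf8 is my own helper name, not an SDK function, and <codecvt> is deprecated since C++17 although current compilers still ship it:

```cpp
#include <codecvt>
#include <cstdint>
#include <locale>
#include <string>

// Sketch: convert a NUL-terminated array of 16-bit code units (UTF-16)
// into a UTF-8 encoded std::string, whose .c_str() then gives a char*.
std::string utf16_to_utf8(const std::uint16_t* s)
{
    // Collect the code units into a char16_t string first.
    std::u16string u16;
    for (; *s != 0; ++s)
        u16.push_back(static_cast<char16_t>(*s));

    // Let the standard facet do the UTF-16 -> UTF-8 transcoding.
    std::wstring_convert<std::codecvt_utf8_utf16<char16_t>, char16_t> conv;
    return conv.to_bytes(u16);
}
```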
And as for char instead of unsigned char: all strings have historically been defined in terms of char, so switching to unsigned char would introduce incompatibilities. (Switching to signed char would also cause incompatibilities, but somehow not as many...)
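To make that concrete: the C string functions are declared in terms of char*, so an unsigned char* can't be passed to them without an explicit cast. A minimal illustration:

```cpp
#include <cstring>

int main()
{
    unsigned char raw[] = { 'h', 'i', '\0' };   // bytes rather than a char string

    // std::strlen(raw);                        // error: unsigned char* doesn't convert to const char*
    std::size_t n = std::strlen(reinterpret_cast<const char*>(raw));

    return static_cast<int>(n);                 // 2
}
```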
Edit: Now that the question has been edited, let me say that I didn't see the edits before I typed my answer. But yes, UInt16 is a better representation of a 16-bit entity than wchar_t, for the reason above.
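To illustrate why (this is my own sketch, not copied from the SDK headers): on platforms where char is 8 bits and short is 16 bits, such typedefs boil down to something like the following, and the standard fixed-width types let you verify the assumption:

```cpp
#include <cstdint>

// Conventional definitions where char is 8 bits and short is 16 bits;
// the SDK's own header is the authoritative source.
typedef unsigned char  UInt8;
typedef unsigned short UInt16;

// Unlike wchar_t, whose width varies by platform, these widths are fixed.
static_assert(sizeof(UInt8)  == sizeof(std::uint8_t),  "UInt8 must be 8 bits");
static_assert(sizeof(UInt16) == sizeof(std::uint16_t), "UInt16 must be 16 bits");
```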