Tags: c++, wchar-t

Vector won't store correct datatype (wchar_t instead of uint16_t)


I have some code from the net that reads hyperspectral data (an image, so lots of integers giving pixel intensities) into a vector. I used the code successfully on a Linux system, but now I need the same thing on a Windows system, using Visual Studio 2008.

Reading the data on Linux, I get a vector full of integers. On Windows I get the integers followed by what looks like extra character or byte data; I don't know enough to describe it better.

The vector is initialized by

std::vector< unsigned short int > data;
data.resize( samples * lines * bands );    // resize() already value-initializes the elements to 0
std::fill( data.begin(), data.end(), 0 );  // so this fill is redundant, but harmless

and the relevant code is

for( unsigned int i = 0; i < num_pixels && file; ++i ){
    char number[sizeof(DataType)];
    file.read( number, sizeof(DataType) );   // read one raw pixel value
    int l = sizeof(DataType) - 1;
    if (machine_endian != header.big_endian) {
        // reverse the byte order in place with XOR swaps
        for (int j = 0; j < l; j++, l--){
            number[j] ^= number[l];
            number[l] ^= number[j];
            number[j] ^= number[l];
        }
    }
    unsigned short temp = *((unsigned short int*)number);   // reinterpret the bytes as one value
    data[i] = temp;
}

The machine_endian branch is never run. The temp variable is just there to test whether I can cast the bytes to an int, and that works fine. However, when I put temp into the vector, more information shows up than just the int, and it's listed as wchar_t. See the image below. I guess it has something to do with type size, but I am clueless as to why. Is it my fault or Visual Studio's? Any ideas?
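For what it's worth, a quick way to check what actually lands in the vector, independent of how the debugger renders it, is to print the elements as plain integers. A minimal sketch along those lines (the dump_first helper is hypothetical, not part of the original code):

#include <cstdio>
#include <vector>

// Hypothetical helper: print the first n elements as plain integers,
// bypassing any debugger interpretation of the element type.
void dump_first( const std::vector<unsigned short>& data, unsigned n )
{
    for( unsigned i = 0; i < n && i < data.size(); ++i )
        std::printf( "data[%u] = %u\n", i, (unsigned)data[i] );
}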


Solution

  • Actually, it is working fine. Microsoft just wants to make it easy to inspect wchar_t values (which are 16-bit values representing UTF-16 coded characters on Windows), so their debugger displays short ints as wchar_t. It's just a matter of interpretation.

    If you had used chars for your data, you would have encountered the same phenomenon on almost any architecture, as the sketch below illustrates.
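To see that this is purely a display issue, here is a minimal sketch, assuming a Windows toolchain where wchar_t is 16 bits wide (its size is implementation-defined on other platforms):

#include <cassert>
#include <cstdio>
#include <cwchar>

int main()
{
    // On Windows, wchar_t is 16 bits, the same size as unsigned short,
    // so the same bit pattern can be shown as a number or as a character.
    assert( sizeof(wchar_t) == sizeof(unsigned short) );

    unsigned short value = 0x0041;        // the integer 65
    wchar_t as_wide = (wchar_t)value;     // identical bits, read as UTF-16 'A'

    std::printf( "%u\n", (unsigned)value );    // prints 65
    std::printf( "%lc\n", (wint_t)as_wide );   // prints A (locale permitting)
    return 0;
}

The stored bits never change; only the label the debugger attaches to them does.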