I'm reading this explanation of DataView and there's an example there:
var littleEndian = (function() {
  var buffer = new ArrayBuffer(2);
  new DataView(buffer).setInt16(0, 256, true /* littleEndian */);
  // Int16Array uses the platform's endianness.
  return new Int16Array(buffer)[0] === 256;
})();
I don't really understand what this line does:
new DataView(buffer).setInt16(0, 256, true /* littleEndian */);
Does it mean that the data stored in the bit range [0; 256] should be stored in little-endian order?
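As far as I can tell, the third argument to setInt16 only selects the byte order of that single write; it is not a bit range. Here is a small sketch of my own that unrolls the snippet and adds a Uint8Array view so we can look at the raw bytes it produces:

var buffer = new ArrayBuffer(2);
// Write 256 (0x0100) at byte offset 0, low byte first (little-endian).
new DataView(buffer).setInt16(0, 256, true);

// The raw bytes are the same on every platform: [0x00, 0x01].
console.log(new Uint8Array(buffer)); // [0, 1]

// Int16Array interprets those two bytes using the platform's own byte
// order, so the comparison is true only on a little-endian machine
// (a big-endian machine would read 0x0001 === 1 instead).
console.log(new Int16Array(buffer)[0] === 256);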
Suppose we create an ArrayBuffer and a DataView over it like this:
var dv = new DataView(new ArrayBuffer(4));
It means that we've got 32 bits in memory:
0000 0000 0000 0000 0000 0000 0000 0000
Now, we want to store the number 0x0103, which has the pattern:
0000 0001 0000 0011
Now, let's store this number in the first two bytes using little-endianness, and in the second two bytes using big-endianness, and see how it is laid out in memory:
dv.setInt16(0, 0x0103, true);
dv.setInt16(2, 0x0103, false);
Now, the bits in the DataView have this pattern:
0000 0011 0000 0001 0000 0001 0000 0011
Here is the code to test that behavior:
// getUint16 reads as big-endian when the endianness argument is omitted.
var little = dv.getUint16(0); // bytes 0x03 0x01 read big-endian
little === 0x0103 // false
little === 0x0301 // true
var big = dv.getUint16(2); // bytes 0x01 0x03 read big-endian
big === 0x0103 // true
big === 0x0301 // false
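To double-check the byte layout claimed above, we can also look at the raw bytes directly and read each value back with an explicit endianness flag (a small sketch of my own, reusing dv from above):

// The underlying bytes are 0x03 0x01 0x01 0x03, matching the pattern above.
console.log(new Uint8Array(dv.buffer)); // [3, 1, 1, 3]

// Passing the littleEndian flag to getUint16 reads each value back
// exactly as it was written:
dv.getUint16(0, true)  === 0x0103 // true  (little-endian read)
dv.getUint16(2, false) === 0x0103 // true  (big-endian read)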