Tags: javascript, json, arraybuffer, typed-arrays

How do you decide which typed array to use?


I am trying to create a view of ArrayBuffer object in order to JSONify it.

var data = { data: new Uint8Array(arrayBuffer) };
var json = JSON.stringify(data);

It seems that the size of the ArrayBuffer does not matter, even with the smallest Uint8Array; I have not gotten a RangeError so far :) If so, how do I decide which typed array to use?
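
A quick check of what JSON.stringify actually produces for a typed array (it serializes the view as a plain object keyed by index, so converting to a regular array first gives a normal JSON array):

var u8 = new Uint8Array([1, 2, 3]);
console.log(JSON.stringify(u8));             // {"0":1,"1":2,"2":3}
console.log(JSON.stringify(Array.from(u8))); // [1,2,3]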


Solution

  • You decide based on the data stored in the buffer, or, better said, based on your interpretation of that data.

    Also, a Uint8Array is not an 8-bit array; it's an array of unsigned 8-bit integers, and it can have any length. A Uint8Array created from the same ArrayBuffer as a Uint16Array is going to be twice as long, because every byte in the ArrayBuffer is "placed" as one element of the Uint8Array, while for the Uint16Array each pair of bytes "becomes" one element in the array.
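
    For instance, two views over the same 4-byte buffer report different lengths (a quick sketch):

    var buf = new ArrayBuffer(4);
    console.log(new Uint8Array(buf).length);  // 4 (one element per byte)
    console.log(new Uint16Array(buf).length); // 2 (one element per 2 bytes)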

    A good way to see what happens is to think in binary. Try running this:

    var buffer = new ArrayBuffer(2);
    var uint8View = new Uint8Array(buffer);   // two elements, one per byte
    var uint16View = new Uint16Array(buffer); // one element covering both bytes
    
    // write two bytes through the 8-bit view
    uint8View[0] = 2;
    uint8View[1] = 1;
    
    console.log(uint8View[0].toString(2));
    console.log(uint8View[1].toString(2));
    console.log(uint16View[0].toString(2));
    

    The output is going to be

    10
    1
    100000010
    

    because, displayed as unsigned 8-bit integers in binary, 2 is 00000010 and 1 is 00000001 (toString strips leading zeroes).
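
    If you want the leading zeroes back for readability, you can pad the strings yourself (padStart is available in modern engines):

    console.log(uint8View[0].toString(2).padStart(8, '0')); // "00000010"
    console.log(uint8View[1].toString(2).padStart(8, '0')); // "00000001"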

    Uint8Array represents an array of bytes. As I said, each element is an unsigned 8-bit integer. We just wrote two bytes into it.

    In memory those two bytes are stored side by side, in address order, as 00000010 00000001 (binary form again, used to make things clearer).

    Now when you initialize a Uint16Array over the same buffer, it's going to contain the same bytes, but because an element is an unsigned 16-bit integer (two bytes), accessing uint16View[0] takes the first two bytes together. Typed arrays use the platform's byte order, and on virtually all modern machines that is little-endian: the first byte in memory is the least significant. So the value is 0000000100000010, which is 100000010 with no leading zeroes.
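
    You can make the byte order explicit by reading the same buffer through a DataView, which takes the endianness as a parameter on each read:

    var view = new DataView(buffer);
    console.log(view.getUint16(0, true));  // 258, read as little-endian
    console.log(view.getUint16(0, false)); // 513, read as big-endian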

    If you interpret this data as a decimal (base 10) integer, 0000000100000010 in binary is 258.
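
    You can verify the conversion directly:

    console.log(parseInt('0000000100000010', 2)); // 258
    console.log(uint16View[0]);                   // 258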

    Neither Uint8Array nor Uint16Array stores any data itself. They are simply different ways of accessing the bytes in an ArrayBuffer.

    So how does one choose which one to use? It's not based on preference but on the underlying data. An ArrayBuffer is what you use when you receive binary data from some external source (a web socket, maybe) and already know what the data represents. It might be a list of unsigned 8-bit integers, or a list of signed 16-bit ones, or even a mixed layout where you know the first value is an 8-bit integer and the next one is a 16-bit integer. In that case you can use a DataView to read typed items from it, as in the sketch below. If you don't know what the data represents, you can't choose what to use.
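
    As an illustration of the mixed case, here is a minimal sketch; the layout (one unsigned 8-bit value followed by one signed 16-bit value, little-endian) is made up for the example:

    // hypothetical layout: byte 0 = uint8, bytes 1-2 = int16 (little-endian)
    var packet = new ArrayBuffer(3);
    var dv = new DataView(packet);
    dv.setUint8(0, 200);
    dv.setInt16(1, -1234, true);

    console.log(dv.getUint8(0));       // 200
    console.log(dv.getInt16(1, true)); // -1234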