Tags: c++, arm, intrinsics, neon

Efficiently reshuffle and combine 16 3-bit numbers in ARM NEON


I have an array of sixteen unsigned numbers, where each number is less than 8 (i.e. each fits in 3 bits). These sixteen numbers are loaded into a uint8x16_t q-register. I need to reshuffle and combine them, similarly to this pseudo-code:

#include <stdint.h>

void reshuffleCombine(uint8_t src[16], uint64_t* dst)
{
    uint64_t d = 0;

    // bits 47..36: elements 0, 4, 8, 12
    d |= (uint64_t)src[0]  << 45;
    d |= (uint64_t)src[4]  << 42;
    d |= (uint64_t)src[8]  << 39;
    d |= (uint64_t)src[12] << 36;

    // bits 35..24: elements 1, 5, 9, 13
    d |= (uint64_t)src[1]  << 33;
    d |= (uint64_t)src[5]  << 30;
    d |= (uint64_t)src[9]  << 27;
    d |= (uint64_t)src[13] << 24;

    // bits 23..12: elements 2, 6, 10, 14
    d |= (uint64_t)src[2]  << 21;
    d |= (uint64_t)src[6]  << 18;
    d |= (uint64_t)src[10] << 15;
    d |= (uint64_t)src[14] << 12;

    // bits 11..0: elements 3, 7, 11, 15
    d |= (uint64_t)src[3]  << 9;
    d |= (uint64_t)src[7]  << 6;
    d |= (uint64_t)src[11] << 3;
    d |= (uint64_t)src[15] << 0;

    *dst = d;
}

#include <arm_neon.h>

void reshuffleCombineNeon(uint8x16_t src, uint64_t* dst)
{
    uint64x1_t res;
    // ?? how to build the packed 48-bit value here?
    vst1_u64(dst, res);
}

I can reshuffle them with 1 vld followed by 1 vtbl; however, this entire operation is one of the final steps and isn't repeated many times (i.e. the vld of the index table cannot be amortized across multiple reshuffleCombine calls), so it might be better to use vtrn/vzip if that is possible and more efficient than vld+vtbl. The main point of the question, though, is: how can I merge all these sixteen 3-bit numbers into a single 48-bit value (stored in a 64-bit uint)? This function runs at the end of the algorithm; the 16 3-bit numbers come from NEON and the result of the function is stored in memory.

My NEON version:

void reshuffleCombineNeon(uint8x16_t src, uint32_t* dst)
{
    // Gather the bytes in the order needed so the shift-insert merging below
    // produces the required bit layout.
    static const uint8_t idx0[] = { 15, 7, 14, 6, 13, 5, 12, 4 };
    static const uint8_t idx1[] = { 11, 3, 10, 2, 9,  1, 8,  0 };

    uint8x8x2_t y;
    y.val[0] = vget_low_u8(src);
    y.val[1] = vget_high_u8(src);

    uint8x8_t vidx0 = vld1_u8(idx0);
    uint8x8_t vidx1 = vld1_u8(idx1);
    uint8x8_t x0 = vtbl2_u8(y, vidx0);
    uint8x8_t x1 = vtbl2_u8(y, vidx1);

    // Merge pairs: each byte now holds two 3-bit values (6 bits used).
    uint8x8_t x01 = vsli_n_u8(x0, x1, 3);
    uint16x8_t x01L = vmovl_u8(x01);

    // Merge pairs of 16-bit lanes within each 32-bit lane: 12 bits used.
    // The narrow/widen pair clears the leftover upper bits.
    uint32x4_t x01LL = vsraq_n_u32(vreinterpretq_u32_u16(x01L), vreinterpretq_u32_u16(x01L), 10);
    x01LL = vmovl_u16(vmovn_u32(x01LL));

    // Merge pairs of 32-bit lanes within each 64-bit lane: 24 bits used.
    uint64x2_t x01X = vsraq_n_u64(vreinterpretq_u64_u32(x01LL), vreinterpretq_u64_u32(x01LL), 20);
    x01X = vmovl_u32(vmovn_u64(x01X));

    // Final merge of the two 24-bit halves into one 48-bit value,
    // stored as two 32-bit words.
    uint64x1_t X0 = vget_low_u64(x01X);
    uint64x1_t X1 = vget_high_u64(x01X);
    X0 = vsli_n_u64(X0, X1, 24);
    vst1_u32(dst, vreinterpret_u32_u64(X0));
}
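
For completeness, here is a minimal driver (my own sketch, not part of the original code) that feeds both versions above the same arbitrary 3-bit inputs and compares the packed results; it assumes a little-endian target, where the two 32-bit halves stored by the NEON version reassemble into the same 64-bit value:

#include <arm_neon.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

// Hypothetical test driver; assumes reshuffleCombine and reshuffleCombineNeon
// from above are in scope and the target is little-endian.
int main(void)
{
    uint8_t src[16];
    for (int i = 0; i < 16; ++i)
        src[i] = (uint8_t)((i * 5 + 3) & 7);   // arbitrary values below 8

    uint64_t scalar = 0;
    reshuffleCombine(src, &scalar);

    uint32_t out[2] = { 0, 0 };
    reshuffleCombineNeon(vld1q_u8(src), out);

    uint64_t vector;
    memcpy(&vector, out, sizeof vector);       // reassemble the two 32-bit halves

    printf("scalar: 0x%012llx\nneon:   0x%012llx\n",
           (unsigned long long)scalar, (unsigned long long)vector);
    return scalar == vector ? 0 : 1;
}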

Solution

  • OK, for whatever it's worth, below is a NEON (AArch64 assembly) version optimized to the best of my knowledge:


    rev64   v16.16b, v16.16b            // reverse byte order within each 64-bit half
    usra    v16.2d, v16.2d, #32-3       // 8 elements (8bit)
    ushll   v17.8h, v16.8b, #6          // widen low 8 bytes to 16 bits, shift left by 6
    uaddw2  v16.8h, v17.8h, v16.16b     // 4 elements (16bit)
    uxtl    v16.4s, v16.4h              // zero-extend low 4 halfwords to 32 bits
    usra    v16.2d, v16.2d, #32-12      // 2 elements (32bit)
    ushll2  v17.2d, v16.4s, #24         // widen high 2 words to 64 bits, shift left by 24
    uaddw   v16.2d, v17.2d, v16.2s      // 1 element (64bit), d16 contains the result
    

    It should be considerably faster than yours, but again, it won't make much sense on in-order machines.
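
    For readers who prefer intrinsics, a rough AArch64-intrinsics rendering of the assembly above might look like the sketch below. This is my own mechanical, instruction-by-instruction translation, not the answerer's code: the function name reshuffleCombineNeonA64 is made up, and vaddw_high_u8 / vshll_high_n_u32 are AArch64-only intrinsics, so it won't build for 32-bit ARM.

    #include <arm_neon.h>
    #include <stdint.h>

    // Hypothetical intrinsics version of the assembly answer above (AArch64 only).
    void reshuffleCombineNeonA64(uint8x16_t src, uint64_t* dst)
    {
        uint8x16_t v = vrev64q_u8(src);                    // rev64  v16.16b, v16.16b
        uint64x2_t d = vreinterpretq_u64_u8(v);
        d = vsraq_n_u64(d, d, 32 - 3);                     // usra   v16.2d, v16.2d, #32-3
        v = vreinterpretq_u8_u64(d);

        uint16x8_t h = vshll_n_u8(vget_low_u8(v), 6);      // ushll  v17.8h, v16.8b, #6
        h = vaddw_high_u8(h, v);                           // uaddw2 v16.8h, v17.8h, v16.16b

        uint32x4_t s = vmovl_u16(vget_low_u16(h));         // uxtl   v16.4s, v16.4h
        uint64x2_t d2 = vreinterpretq_u64_u32(s);
        d2 = vsraq_n_u64(d2, d2, 32 - 12);                 // usra   v16.2d, v16.2d, #32-12
        s = vreinterpretq_u32_u64(d2);

        uint64x2_t hi = vshll_high_n_u32(s, 24);           // ushll2 v17.2d, v16.4s, #24
        uint64x2_t res = vaddw_u32(hi, vget_low_u32(s));   // uaddw  v16.2d, v17.2d, v16.2s

        vst1_u64(dst, vget_low_u64(res));                  // low 64-bit lane (d16) holds the result
    }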