Tags: simd, sse, avx

Is there a SIMD intrinsic like scatter, but between registers?


So as far as I know there is _mm_shuffle_epi8 if you want to do

dst[i] = a[b[i]]

but my question is whether there is an intrinsic that does

dst[b[i]] = a[i]

I want it to work with 16 elements of 8 bits (it doesn't matter whether they're signed or not).

I looked through the Intel Intrinsics Guide but didn't find anything there except permute and shuffle, which seem close, but I don't think they do dst[b[i]] = a[i]. I tried looking on the web but I'm not sure what to search for. The only thing I found out is that I need a "scatter" kind of "permutation" between SIMD registers.


Solution

  • There is no direct way to scatter between registers (and I don't expect we will ever get one; we do have scatter with a memory destination, but it's not great). What you can do, surprisingly, is invert the permutation, and then use that inverse permutation in a normal source-indexed shuffle. An obvious requirement for this to work is that there are no "collisions", i.e. no two elements sent to the same position (but what would that even mean for a scatter anyway).

    Here is some AVX2 code, taken from (Not) transposing a 16x16 bitmatrix (which I also wrote), modified slightly so it takes and returns a __m128i directly. Just use _mm_shuffle_epi8(a, invert_permutation_avx2(b)).

    __m256i nottranspose16x16(__m256i x)
    {
        // exchange off-diagonal quadrants
        x = _mm256_shuffle_epi8(x, _mm256_setr_epi8(
            0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15,
            0, 2, 4, 6, 8, 10, 12, 14, 1, 3, 5, 7, 9, 11, 13, 15));
        x = _mm256_permute4x64_epi64(x, _MM_SHUFFLE(3, 1, 2, 0));
        x = _mm256_shuffle_epi8(x, _mm256_setr_epi8(
            0, 8, 1, 9, 2, 10, 3, 11, 4, 12, 5, 13, 6, 14, 7, 15,
            0, 8, 1, 9, 2, 10, 3, 11, 4, 12, 5, 13, 6, 14, 7, 15));
        // rotate every row by its y coordinate
        __m256i shifts = _mm256_setr_epi16(
            1 << 0, 1 << 1, 1 << 2, 1 << 3,
            1 << 4, 1 << 5, 1 << 6, 1 << 7,
            1 << 8, 1 << 9, 1 << 10, 1 << 11,
            1 << 12, 1 << 13, 1 << 14, 1 << 15);
        __m256i sll = _mm256_mullo_epi16(x, shifts);
        __m256i srl = _mm256_mulhi_epu16(x, shifts);
        x = _mm256_or_si256(sll, srl);
        // within each quadrant independently, 
        // rotate every column by its x coordinate
        __m256i x0, x1, m;
        // rotate by 4
        m = _mm256_set1_epi8(0x0F);
        x0 = _mm256_and_si256(x, m);
        x1 = _mm256_andnot_si256(m, _mm256_alignr_epi8(x, x, 8));
        x = _mm256_or_si256(x0, x1);
        // rotate by 2
        m = _mm256_set1_epi8(0x33);
        x0 = _mm256_and_si256(x, m);
        x1 = _mm256_andnot_si256(m, _mm256_alignr_epi8(x, x, 4));
        x = _mm256_or_si256(x0, x1);
        // rotate by 1
        m = _mm256_set1_epi8(0x55);
        x0 = _mm256_and_si256(x, m);
        x1 = _mm256_andnot_si256(m, _mm256_alignr_epi8(x, x, 2));
        x = _mm256_or_si256(x0, x1);
        return x;
    }
    
    __m128i invert_permutation_avx2(__m128i p)
    {
        __m256i v = _mm256_cvtepu8_epi16(p);
        // indexes to masks
        v = _mm256_or_si256(v, _mm256_slli_epi64(v, 8));
        v = _mm256_add_epi8(v, _mm256_set1_epi16(0xF878));
        __m256i m = _mm256_shuffle_epi8(_mm256_setr_epi8(
            1, 2, 4, 8, 16, 32, 64, 128,
            1, 2, 4, 8, 16, 32, 64, 128,
            1, 2, 4, 8, 16, 32, 64, 128,
            1, 2, 4, 8, 16, 32, 64, 128), v);
        // transpose the 16x16 bit matrix:
        // the transpose of a permutation matrix is its inverse
        m = nottranspose16x16(m);
        // masks to indexes
        __m256i deBruijn = _mm256_and_si256(_mm256_mulhi_epu16(m, _mm256_set1_epi16(0x9AF0)), _mm256_set1_epi16(0x000F));
        __m128i i = _mm_packs_epi16(_mm256_castsi256_si128(deBruijn), _mm256_extracti128_si256(deBruijn, 1));
        i = _mm_shuffle_epi8(_mm_setr_epi8(
            0, 1, 2, 5, 3, 9, 6, 11, 15, 4, 8, 10, 14, 7, 13, 12), i);
        // un-mess-up the indexes
        i = _mm_sub_epi8(i, _mm_setr_epi8(0, 7, 6, 5, 4, 3, 2, 1, 8, 15, 14, 13, 12, 11, 10, 9));
        i = _mm_and_si128(i, _mm_set1_epi8(0x0F));
        i = _mm_shuffle_epi8(i, _mm_setr_epi8(0, 7, 6, 5, 4, 3, 2, 1, 8, 15, 14, 13, 12, 11, 10, 9));
        return i;
    }
    

    This would be easier to do with GFNI (since GF2P8AFFINEQB is very useful for transposing bit matrices), especially if we also had AVX-512 for some other operations that AVX2 has trouble with. Based on the tags of this question, I chose not to use those newer instruction set extensions.