SHUFPS — Packed Interleave Shuffle of Quadruplets of Single-Precision Floating-Point Values

Opcode/Instruction Op / En 64/32 bit Mode Support CPUID Feature Flag Description
NP 0F C6 /r ib SHUFPS xmm1, xmm2/m128, imm8 A V/V SSE Select from quadruplet of single-precision floating-point values in xmm1 and xmm2/m128 using imm8, interleaved result pairs are stored in xmm1.
VEX.128.0F.WIG C6 /r ib VSHUFPS xmm1, xmm2, xmm3/m128, imm8 B V/V AVX Select from quadruplet of single-precision floating-point values in xmm2 and xmm3/m128 using imm8, interleaved result pairs are stored in xmm1.
VEX.256.0F.WIG C6 /r ib VSHUFPS ymm1, ymm2, ymm3/m256, imm8 B V/V AVX Select from quadruplet of single-precision floating-point values in ymm2 and ymm3/m256 using imm8, interleaved result pairs are stored in ymm1.
EVEX.128.0F.W0 C6 /r ib VSHUFPS xmm1{k1}{z}, xmm2, xmm3/m128/m32bcst, imm8 C V/V AVX512VL AVX512F Select from quadruplet of single-precision floating-point values in xmm2 and xmm3/m128/m32bcst using imm8, interleaved result pairs are stored in xmm1, subject to writemask k1.
EVEX.256.0F.W0 C6 /r ib VSHUFPS ymm1{k1}{z}, ymm2, ymm3/m256/m32bcst, imm8 C V/V AVX512VL AVX512F Select from quadruplet of single-precision floating-point values in ymm2 and ymm3/m256/m32bcst using imm8, interleaved result pairs are stored in ymm1, subject to writemask k1.
EVEX.512.0F.W0 C6 /r ib VSHUFPS zmm1{k1}{z}, zmm2, zmm3/m512/m32bcst, imm8 C V/V AVX512F Select from quadruplet of single-precision floating-point values in zmm2 and zmm3/m512/m32bcst using imm8, interleaved result pairs are stored in zmm1, subject to writemask k1.

Instruction Operand Encoding

Op/En Tuple Type Operand 1 Operand 2 Operand 3 Operand 4
A NA ModRM:reg (r, w) ModRM:r/m (r) Imm8 NA
B NA ModRM:reg (w) VEX.vvvv (r) ModRM:r/m (r) Imm8
C Full ModRM:reg (w) EVEX.vvvv (r) ModRM:r/m (r) Imm8

Description

Selects a single-precision floating-point value from an input quadruplet using a two-bit control and moves it to a designated element of the destination operand. Each 64-bit element-pair of a 128-bit lane of the destination operand is interleaved between the corresponding lane of the first source operand and the second source operand at a granularity of 128 bits. Each two-bit field in the imm8 byte, starting from bit 0, is the select control for the corresponding element of a 128-bit lane of the destination to receive the shuffled result of an input quadruplet. The two lower elements of a 128-bit lane in the destination receive shuffle results from the quadruplet of the first source operand. The next two elements of the destination receive shuffle results from the quadruplet of the second source operand.

EVEX encoded versions: The first source operand is a ZMM/YMM/XMM register. The second source operand can be a ZMM/YMM/XMM register, a 512/256/128-bit memory location or a 512/256/128-bit vector broadcast from a 32-bit memory location. The destination operand is a ZMM/YMM/XMM register updated according to the writemask. Imm8[7:0] provides 4 select controls for each applicable 128-bit lane of the destination.
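For example, the writemask behavior of the EVEX-encoded form can be exercised from C with the AVX-512 intrinsics listed later in this entry. The following is a minimal sketch (element values and mask are illustrative; it assumes an AVX512F-capable compiler and CPU) using the zeroing-masked form, where elements whose mask bit is 0 are forced to zero:

#include <stdio.h>
#include <immintrin.h>   /* AVX-512 intrinsics */

int main(void) {
    __m512 a = _mm512_set1_ps(1.0f);
    __m512 b = _mm512_set1_ps(2.0f);

    /* imm8 = 0x00 replicates element 0 of each source within every 128-bit
       lane; the zeroing mask 0x5555 keeps only the even destination elements. */
    __mmask16 k = 0x5555;
    __m512 r = _mm512_maskz_shuffle_ps(k, a, b, 0x00);

    float out[16];
    _mm512_storeu_ps(out, r);
    for (int i = 0; i < 16; i++)
        printf("%g ", out[i]);          /* prints 1 0 2 0 repeated four times */
    printf("\n");
    return 0;
}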

VEX.256 encoded version: The first source operand is a YMM register. The second source operand can be a YMM register or a 256-bit memory location. The destination operand is a YMM register. Imm8[7:0] provides 4 select controls for the high and low 128-bit lanes of the destination.
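As a concrete illustration of the per-lane behavior (values are illustrative; assumes an AVX-capable compiler and CPU), the same imm8 selection is applied independently to the low and high 128-bit lanes:

#include <stdio.h>
#include <immintrin.h>   /* AVX intrinsics */

int main(void) {
    /* Elements 0-3 form the low 128-bit lane, elements 4-7 the high lane. */
    __m256 a = _mm256_setr_ps(0, 1, 2, 3, 4, 5, 6, 7);
    __m256 b = _mm256_setr_ps(10, 11, 12, 13, 14, 15, 16, 17);

    /* imm8 = 0x1B selects elements (3,2) from a and (1,0) from b in each lane:
       low lane -> a3 a2 b1 b0, high lane -> a7 a6 b5 b4                       */
    __m256 r = _mm256_shuffle_ps(a, b, 0x1B);

    float out[8];
    _mm256_storeu_ps(out, r);
    for (int i = 0; i < 8; i++)
        printf("%g ", out[i]);          /* prints 3 2 11 10 7 6 15 14 */
    printf("\n");
    return 0;
}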

VEX.128 encoded version: The first source operand is an XMM register. The second source operand can be an XMM register or a 128-bit memory location. The destination operand is an XMM register. The upper bits (MAXVL-1:128) of the corresponding ZMM register destination are zeroed. Imm8[7:0] provides 4 select controls for each element of the destination.

128-bit Legacy SSE version: The source can be an XMM register or a 128-bit memory location. The destination is not distinct from the first source XMM register and the upper bits (MAXVL-1:128) of the corresponding ZMM register destination are unmodified. Imm8[7:0] provides 4 select controls for each element of the destination.
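A minimal example of the 128-bit form, using the _mm_shuffle_ps intrinsic and the _MM_SHUFFLE helper macro (element values are illustrative):

#include <stdio.h>
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void) {
    __m128 a = _mm_setr_ps(10.0f, 11.0f, 12.0f, 13.0f);   /* a0..a3 */
    __m128 b = _mm_setr_ps(20.0f, 21.0f, 22.0f, 23.0f);   /* b0..b3 */

    /* _MM_SHUFFLE(3,2,1,0) = 0xE4:
       result[0] = a[imm8[1:0]] = a0, result[1] = a[imm8[3:2]] = a1,
       result[2] = b[imm8[5:4]] = b2, result[3] = b[imm8[7:6]] = b3  */
    __m128 r = _mm_shuffle_ps(a, b, _MM_SHUFFLE(3, 2, 1, 0));

    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  /* 10 11 22 23 */
    return 0;
}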

[Figure: SRC1 elements X7..X0 and SRC2 elements Y7..Y0 feed the destination; within each 128-bit lane the two low destination elements are selected from SRC1 (X) and the two high destination elements from SRC2 (Y).]
Figure 4-26. 256-bit VSHUFPS Operation of Selection from Input Quadruplet and Pair-wise Interleaved Result

Operation

Select4(SRC, control) {
CASE (control[1:0]) OF
    0: TMP ←SRC[31:0];
    1: TMP ←SRC[63:32];
    2: TMP ←SRC[95:64];
    3: TMP ←SRC[127:96];
ESAC;
RETURN TMP
}
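A scalar C model of the Select4 helper above, for reference (the function name select4 is illustrative, not a library API):

/* 'src' holds one 128-bit lane as four single-precision values;
   'control' is a two-bit field taken from imm8.                 */
static float select4(const float src[4], unsigned control) {
    return src[control & 0x3];
}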

VSHUFPS (EVEX encoded versions when SRC2 is a vector register)

(KL, VL) = (4, 128), (8, 256), (16, 512)
TMP_DEST[31:0]←Select4(SRC1[127:0], imm8[1:0]);
TMP_DEST[63:32]←Select4(SRC1[127:0], imm8[3:2]);
TMP_DEST[95:64]←Select4(SRC2[127:0], imm8[5:4]);
TMP_DEST[127:96]←Select4(SRC2[127:0], imm8[7:6]);
IF VL >= 256
    TMP_DEST[159:128]←Select4(SRC1[255:128], imm8[1:0]);
    TMP_DEST[191:160]←Select4(SRC1[255:128], imm8[3:2]);
    TMP_DEST[223:192]←Select4(SRC2[255:128], imm8[5:4]);
    TMP_DEST[255:224]←Select4(SRC2[255:128], imm8[7:6]);
FI;
IF VL >= 512
    TMP_DEST[287:256]←Select4(SRC1[383:256], imm8[1:0]);
    TMP_DEST[319:288]←Select4(SRC1[383:256], imm8[3:2]);
    TMP_DEST[351:320]←Select4(SRC2[383:256], imm8[5:4]);
    TMP_DEST[383:352]←Select4(SRC2[383:256], imm8[7:6]);
    TMP_DEST[415:384]←Select4(SRC1[511:384], imm8[1:0]);
    TMP_DEST[447:416]←Select4(SRC1[511:384], imm8[3:2]);
    TMP_DEST[479:448]←Select4(SRC2[511:384], imm8[5:4]);
    TMP_DEST[511:480]←Select4(SRC2[511:384], imm8[7:6]);
FI;
FOR j←0 TO KL-1
    i←j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i]←TMP_DEST[i+31:i]
        ELSE
            IF *merging-masking*            ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking*      ; zeroing-masking
                    DEST[i+31:i] ← 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] ← 0
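The register-register flow above can be modeled in scalar C roughly as follows (a sketch only; the function name and parameters are hypothetical, and KL/VL are passed explicitly rather than derived from the encoding):

#include <stdint.h>

static void vshufps_evex_reg(float *dest, const float *src1, const float *src2,
                             uint8_t imm8, int vl_bits, uint32_t k1, int zeroing) {
    int kl = vl_bits / 32;
    float tmp[16];

    /* Per 128-bit lane: two selections from SRC1, two from SRC2,
       every lane reusing the same imm8 controls.                 */
    for (int lane = 0; lane < vl_bits / 128; lane++) {
        const float *s1 = src1 + lane * 4;
        const float *s2 = src2 + lane * 4;
        tmp[lane * 4 + 0] = s1[(imm8 >> 0) & 3];
        tmp[lane * 4 + 1] = s1[(imm8 >> 2) & 3];
        tmp[lane * 4 + 2] = s2[(imm8 >> 4) & 3];
        tmp[lane * 4 + 3] = s2[(imm8 >> 6) & 3];
    }

    /* Writemask: zeroing-masking clears unselected elements,
       merging-masking leaves them unchanged.                  */
    for (int j = 0; j < kl; j++) {
        if ((k1 >> j) & 1)
            dest[j] = tmp[j];
        else if (zeroing)
            dest[j] = 0.0f;
    }
}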

VSHUFPS (EVEX encoded versions when SRC2 is memory)

(KL, VL) = (4, 128), (8, 256), (16, 512)
FOR j←0 TO KL-1
    i←j * 32
    IF (EVEX.b = 1)
        THEN TMP_SRC2[i+31:i]←SRC2[31:0]
        ELSE TMP_SRC2[i+31:i]←SRC2[i+31:i]
    FI;
ENDFOR;
TMP_DEST[31:0]←Select4(SRC1[127:0], imm8[1:0]);
TMP_DEST[63:32]←Select4(SRC1[127:0], imm8[3:2]);
TMP_DEST[95:64]←Select4(TMP_SRC2[127:0], imm8[5:4]);
TMP_DEST[127:96]←Select4(TMP_SRC2[127:0], imm8[7:6]);
IF VL >= 256
    TMP_DEST[159:128]←Select4(SRC1[255:128], imm8[1:0]);
    TMP_DEST[191:160]←Select4(SRC1[255:128], imm8[3:2]);
    TMP_DEST[223:192]←Select4(TMP_SRC2[255:128], imm8[5:4]);
    TMP_DEST[255:224]←Select4(TMP_SRC2[255:128], imm8[7:6]);
FI;
IF VL >= 512
    TMP_DEST[287:256]←Select4(SRC1[383:256], imm8[1:0]);
    TMP_DEST[319:288]←Select4(SRC1[383:256], imm8[3:2]);
    TMP_DEST[351:320]←Select4(TMP_SRC2[383:256], imm8[5:4]);
    TMP_DEST[383:352]←Select4(TMP_SRC2[383:256], imm8[7:6]);
    TMP_DEST[415:384]←Select4(SRC1[511:384], imm8[1:0]);
    TMP_DEST[447:416]←Select4(SRC1[511:384], imm8[3:2]);
    TMP_DEST[479:448]←Select4(TMP_SRC2[511:384], imm8[5:4]);
    TMP_DEST[511:480]←Select4(TMP_SRC2[511:384], imm8[7:6]);
FI;
FOR j←0 TO KL-1
    i←j * 32
    IF k1[j] OR *no writemask*
        THEN DEST[i+31:i]←TMP_DEST[i+31:i]
        ELSE
            IF *merging-masking*            ; merging-masking
                THEN *DEST[i+31:i] remains unchanged*
                ELSE *zeroing-masking*      ; zeroing-masking
                    DEST[i+31:i] ← 0
            FI
    FI;
ENDFOR
DEST[MAXVL-1:VL] ← 0
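The embedded-broadcast step in the loop above (EVEX.b = 1) replicates the single 32-bit element at the memory operand into every 32-bit position of TMP_SRC2 before the shuffle. A scalar sketch of that step (hypothetical helper, not a library API):

/* 'kl' corresponds to KL in the pseudocode; 'mem' points at the
   32-bit granular memory operand.                                */
static void build_tmp_src2(float *tmp_src2, const float *mem, int kl, int evex_b) {
    for (int j = 0; j < kl; j++)
        tmp_src2[j] = evex_b ? mem[0] : mem[j];
}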

VSHUFPS (VEX.256 encoded version)

DEST[31:0]←Select4(SRC1[127:0], imm8[1:0]);
DEST[63:32]←Select4(SRC1[127:0], imm8[3:2]);
DEST[95:64]←Select4(SRC2[127:0], imm8[5:4]);
DEST[127:96]←Select4(SRC2[127:0], imm8[7:6]);
DEST[159:128]←Select4(SRC1[255:128], imm8[1:0]);
DEST[191:160]←Select4(SRC1[255:128], imm8[3:2]);
DEST[223:192]←Select4(SRC2[255:128], imm8[5:4]);
DEST[255:224]←Select4(SRC2[255:128], imm8[7:6]);
DEST[MAXVL-1:256] ←0

VSHUFPS (VEX.128 encoded version)

DEST[31:0]←Select4(SRC1[127:0], imm8[1:0]);
DEST[63:32]←Select4(SRC1[127:0], imm8[3:2]);
DEST[95:64]←Select4(SRC2[127:0], imm8[5:4]);
DEST[127:96]←Select4(SRC2[127:0], imm8[7:6]);
DEST[MAXVL-1:128] ←0

SHUFPS (128-bit Legacy SSE version)

DEST[31:0]←Select4(SRC1[127:0], imm8[1:0]);
DEST[63:32]←Select4(SRC1[127:0], imm8[3:2]);
DEST[95:64]←Select4(SRC2[127:0], imm8[5:4]);
DEST[127:96]←Select4(SRC2[127:0], imm8[7:6]);
DEST[MAXVL-1:128] (Unmodified)

Intel C/C++ Compiler Intrinsic Equivalent

VSHUFPS __m512 _mm512_shuffle_ps(__m512 a, __m512 b, int imm);
VSHUFPS __m512 _mm512_mask_shuffle_ps(__m512 s, __mmask16 k, __m512 a, __m512 b, int imm);
VSHUFPS __m512 _mm512_maskz_shuffle_ps(__mmask16 k, __m512 a, __m512 b, int imm);
VSHUFPS __m256 _mm256_shuffle_ps (__m256 a, __m256 b, const int select);
VSHUFPS __m256 _mm256_mask_shuffle_ps(__m256 s, __mmask8 k, __m256 a, __m256 b, int imm);
VSHUFPS __m256 _mm256_maskz_shuffle_ps(__mmask8 k, __m256 a, __m256 b, int imm);
SHUFPS __m128 _mm_shuffle_ps (__m128 a, __m128 b, const int select);
VSHUFPS __m128 _mm_mask_shuffle_ps(__m128 s, __mmask8 k, __m128 a, __m128 b, int imm);
VSHUFPS __m128 _mm_maskz_shuffle_ps(__mmask8 k, __m128 a, __m128 b, int imm);
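A brief usage sketch of the merging-masked 256-bit form (illustrative values; assumes AVX512F and AVX512VL support), where elements whose mask bit is 0 keep the prior destination value:

#include <stdio.h>
#include <immintrin.h>   /* AVX-512VL intrinsics */

int main(void) {
    __m256 old = _mm256_set1_ps(-1.0f);                      /* prior DEST contents */
    __m256 a   = _mm256_setr_ps(0, 1, 2, 3, 4, 5, 6, 7);
    __m256 b   = _mm256_setr_ps(10, 11, 12, 13, 14, 15, 16, 17);

    /* Mask 0x0F writes only the low four elements; the rest merge from 'old'. */
    __m256 r = _mm256_mask_shuffle_ps(old, 0x0F, a, b, 0x1B);

    float out[8];
    _mm256_storeu_ps(out, r);
    for (int i = 0; i < 8; i++)
        printf("%g ", out[i]);          /* prints 3 2 11 10 -1 -1 -1 -1 */
    printf("\n");
    return 0;
}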

SIMD Floating-Point Exceptions

None

Other Exceptions

Non-EVEX-encoded instruction, see Exceptions Type 4.

EVEX-encoded instruction, see Exceptions Type E4NF.