x86/kvm: Fix SETcc emulation for return thunks
Prepare the SETcc fastop stuff for when RET can be larger still.

The tricky bit here is that the expressions must not only be
constant C expressions, but also absolute GAS expressions. This means
no ?: operator, and 'true' is ~0 rather than 1.
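
A worked example, assuming CONFIG_RETPOLINE=y, CONFIG_SLS=y and no IBT
(an illustrative configuration, not fixed by the patch): the alignment
expression below must come out the same whether the C preprocessor or
GAS evaluates it.

	RET_LENGTH   = 1 + 4*1 + 1 = 6
	SETCC_LENGTH = 0 + 3 + 6   = 9
	SETCC_ALIGN  = 4 << ((9 > 4) & 1) << ((9 > 8) & 1)
	             = 4 << 1 << 1 = 16

C evaluates (9 > 4) as 1, GAS evaluates it as ~0; masking with "& 1"
gives the same shift count in both, whereas a ?: would only work on
the C side.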

Also ensure em_setcc() has the same alignment as the actual FOP_SETCC()
ops; this ensures there cannot be an alignment hole between em_setcc()
and the first op.
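
The alignment matters because the emulator reaches each SETcc op by
offsetting from the em_setcc label in SETCC_ALIGN-sized steps, roughly
as in this sketch (modeled on test_cc() in emulate.c; details may
differ slightly from the tree):

	static __always_inline u8 test_cc(unsigned int condition, unsigned long flags)
	{
		u8 rc;

		/* Each SETcc op lives at em_setcc + cc * SETCC_ALIGN. */
		void (*fop)(void) = (void *)em_setcc + SETCC_ALIGN * (condition & 0xf);

		flags = (flags & EFLAGS_MASK) | X86_EFLAGS_IF;
		asm("push %[flags]; popf; " CALL_NOSPEC
		    : "=a"(rc) : [thunk_target]"r"(fop), [flags]"r"(flags));
		return rc;
	}

Any padding hole between em_setcc and the first op would shift every
computed fop pointer into the padding instead of the intended op.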

Additionally, add a .skip directive to the FOP_SETCC() macro to fill
any remaining space with INT3 traps; however, the primary purpose of
this directive is to generate AS warnings when the remaining space
goes negative, which is a very good indication the alignment magic
went sideways.
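
With the same assumed configuration (retpoline + SLS, no IBT, so
SETCC_ALIGN is 16), one FOP_SETCC() instance then assembles to roughly
this sketch (sizes illustrative):

		.align	16
	seto:
		seto	%al			/* 3 bytes */
		jmp	__x86_return_thunk	/* the RET, 5 bytes under retpoline */
		int3				/* 1 byte under SLS */
		.skip	16 - (. - seto), 0xcc	/* pad the remaining 7 bytes with INT3 */

Should the instructions ever grow past SETCC_ALIGN, the .skip count
goes negative and the assembler warns, instead of the ops silently
spilling into the next slot.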

Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Josh Poimboeuf <jpoimboe@kernel.org>
Signed-off-by: Borislav Petkov <bp@suse.de>
Peter Zijlstra authored and suryasaimadhu committed Jun 27, 2022
1 parent d77cfe5 commit af2e140
Showing 1 changed file with 15 additions and 13 deletions: arch/x86/kvm/emulate.c

--- a/arch/x86/kvm/emulate.c
+++ b/arch/x86/kvm/emulate.c
@@ -325,13 +325,15 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
 #define FOP_RET(name) \
 	__FOP_RET(#name)
 
-#define FOP_START(op) \
+#define __FOP_START(op, align) \
 	extern void em_##op(struct fastop *fake); \
 	asm(".pushsection .text, \"ax\" \n\t" \
 	    ".global em_" #op " \n\t" \
-	    ".align " __stringify(FASTOP_SIZE) " \n\t" \
+	    ".align " __stringify(align) " \n\t" \
 	    "em_" #op ":\n\t"
 
+#define FOP_START(op) __FOP_START(op, FASTOP_SIZE)
+
 #define FOP_END \
 	    ".popsection")
 
@@ -435,16 +437,15 @@ static int fastop(struct x86_emulate_ctxt *ctxt, fastop_t fop);
 /*
  * Depending on .config the SETcc functions look like:
  *
- * ENDBR      [4 bytes; CONFIG_X86_KERNEL_IBT]
- * SETcc %al  [3 bytes]
- * RET        [1 byte]
- * INT3       [1 byte; CONFIG_SLS]
- *
- * Which gives possible sizes 4, 5, 8 or 9. When rounded up to the
- * next power-of-two alignment they become 4, 8 or 16 resp.
+ * ENDBR                          [4 bytes; CONFIG_X86_KERNEL_IBT]
+ * SETcc %al                      [3 bytes]
+ * RET | JMP __x86_return_thunk   [1,5 bytes; CONFIG_RETPOLINE]
+ * INT3                           [1 byte; CONFIG_SLS]
  */
-#define SETCC_LENGTH	(ENDBR_INSN_SIZE + 4 + IS_ENABLED(CONFIG_SLS))
-#define SETCC_ALIGN	(4 << IS_ENABLED(CONFIG_SLS) << HAS_KERNEL_IBT)
+#define RET_LENGTH	(1 + (4 * IS_ENABLED(CONFIG_RETPOLINE)) + \
+			 IS_ENABLED(CONFIG_SLS))
+#define SETCC_LENGTH	(ENDBR_INSN_SIZE + 3 + RET_LENGTH)
+#define SETCC_ALIGN	(4 << ((SETCC_LENGTH > 4) & 1) << ((SETCC_LENGTH > 8) & 1))
 static_assert(SETCC_LENGTH <= SETCC_ALIGN);
 
 #define FOP_SETCC(op) \
@@ -453,9 +454,10 @@ static_assert(SETCC_LENGTH <= SETCC_ALIGN);
 	#op ": \n\t" \
 	ASM_ENDBR \
 	#op " %al \n\t" \
-	__FOP_RET(#op)
+	__FOP_RET(#op) \
+	".skip " __stringify(SETCC_ALIGN) " - (.-" #op "), 0xcc \n\t"
 
-FOP_START(setcc)
+__FOP_START(setcc, SETCC_ALIGN)
 FOP_SETCC(seto)
 FOP_SETCC(setno)
 FOP_SETCC(setc)
