author		Borislav Petkov <bp@suse.de>	2013-06-09 12:07:34 +0200
committer	H. Peter Anvin <hpa@linux.intel.com>	2013-06-20 17:38:22 -0700
commit		5f8c4218148822fde6eebbeefc34bd0a6061e031 (patch)
tree		1490d5de1b80ed71c737a28841cc2dcae930ca71 /arch/x86/include
parent		4a90a99c4f8002edaa6be11bd756872ebf3f3d97 (diff)
x86, fpu: Use static_cpu_has_safe before alternatives
The call stack below shows how this happens: basically eager_fpu_init()
calls __thread_fpu_begin(current) which then does if (!use_eager_fpu()),
which, in turn, uses static_cpu_has. And we're executing before
alternatives so static_cpu_has doesn't work there yet.

Use the safe variant in this path which becomes optimal after
alternatives have run.

WARNING: at arch/x86/kernel/cpu/common.c:1368 warn_pre_alternatives+0x1e/0x20()
You're using static_cpu_has before alternatives have run!
Modules linked in:
Pid: 0, comm: swapper Not tainted 3.9.0-rc8+ #1
Call Trace:
 warn_slowpath_common
 warn_slowpath_fmt
 ? fpu_finit
 warn_pre_alternatives
 eager_fpu_init
 fpu_init
 cpu_init
 trap_init
 start_kernel
 ? repair_env_string
 x86_64_start_reservations
 x86_64_start_kernel

Signed-off-by: Borislav Petkov <bp@suse.de>
Link: http://lkml.kernel.org/r/1370772454-6106-6-git-send-email-bp@alien8.de
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
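[Editor's illustration] To make the ordering problem concrete, here is a small self-contained user-space model, not kernel code: alternatives_applied, my_static_cpu_has(), my_static_cpu_has_safe() and my_apply_alternatives() are hypothetical names used only for illustration. The "static" check returns an answer that is only baked in once the boot-time patching step has run, while the "safe" variant falls back to a dynamic test until then and uses the patched answer afterwards.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical stand-ins modelling the kernel's boot flow. */
static bool alternatives_applied;      /* set by the patching step        */
static bool cpu_has_eager_fpu = true;  /* what the CPU actually supports  */
static bool patched_in_answer;         /* answer baked in by the patcher  */

/* Models static_cpu_has(): only meaningful after patching has run. */
static bool my_static_cpu_has(void)
{
	if (!alternatives_applied)
		fprintf(stderr, "You're using static_cpu_has before alternatives have run!\n");
	return patched_in_answer;          /* still false before patching */
}

/* Models static_cpu_has_safe(): dynamic fallback until patching runs. */
static bool my_static_cpu_has_safe(void)
{
	if (!alternatives_applied)
		return cpu_has_eager_fpu;  /* slower, but always correct  */
	return patched_in_answer;          /* fast path once patched      */
}

/* Models apply_alternatives(): bakes the CPU's answer into the check. */
static void my_apply_alternatives(void)
{
	patched_in_answer = cpu_has_eager_fpu;
	alternatives_applied = true;
}

int main(void)
{
	/* Early-boot path (eager_fpu_init()): runs before the patcher. */
	printf("pre-patch  static: %d  safe: %d\n",
	       (int)my_static_cpu_has(), (int)my_static_cpu_has_safe());

	my_apply_alternatives();

	printf("post-patch static: %d  safe: %d\n",
	       (int)my_static_cpu_has(), (int)my_static_cpu_has_safe());
	return 0;
}

In this toy model the "static" check silently gives the wrong (unpatched) answer when called early, which is exactly the class of bug warn_pre_alternatives() exists to flag; the "safe" variant pays a small dynamic cost only until patching completes.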
Diffstat (limited to 'arch/x86/include')
-rw-r--r--	arch/x86/include/asm/fpu-internal.h	2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/include/asm/fpu-internal.h b/arch/x86/include/asm/fpu-internal.h
index fb808d71cd70..4d0bda7b11e3 100644
--- a/arch/x86/include/asm/fpu-internal.h
+++ b/arch/x86/include/asm/fpu-internal.h
@@ -343,7 +343,7 @@ static inline void __thread_fpu_end(struct task_struct *tsk)
 static inline void __thread_fpu_begin(struct task_struct *tsk)
 {
-	if (!use_eager_fpu())
+	if (!static_cpu_has_safe(X86_FEATURE_EAGER_FPU))
 		clts();
 	__thread_set_has_fpu(tsk);
 }
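[Editor's note] The only change is the feature test in __thread_fpu_begin(): the _safe variant tolerates being called before apply_alternatives() has patched the kernel text, and, as the commit message notes, it becomes optimal once alternatives have run, so the steady-state cost of this path is presumably unchanged; the price is only a slightly slower dynamic check during the one early call from eager_fpu_init().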