author     Borislav Petkov <bp@amd64.org>           2010-08-19 20:10:29 +0200
committer  Greg Kroah-Hartman <gregkh@suse.de>      2010-08-26 16:43:26 -0700
commit     858ba8a4118f7b1cef6cc70d85ffbf5fbaf9ec78
tree       912d6b2fd641daaa8afa031166e228c04ced3eb0 /arch
parent     a1c9f6469110c9962b7e74acba2fe05741650501
x86, hotplug: Serialize CPU hotplug to avoid bringup concurrency issues
commit d7c53c9e822a4fefa13a0cae76f3190bfd0d5c11 upstream.
When testing cpu hotplug code on 32-bit we kept hitting the "CPU%d:
Stuck ??" message due to multiple cores concurrently accessing the
cpu_callin_mask, among others.
These codepaths are deliberately not protected against concurrent access -
there is no sane reason to make already complex code even more complex, and
the issue shows up only when cores are switched off- and online insanely
fast. So serialize hotplugging of cores at the sysfs level and be done
with it.
[ v2.1: fix !HOTPLUG_CPU build ]
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
LKML-Reference: <20100819181029.GC17171@aftab>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
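
For reference, the pattern the new mutex enforces looks roughly like the sketch
below. This is illustrative only and not code from the patch:
serialized_cpu_set_online() is a hypothetical wrapper, while
cpu_hotplug_driver_lock()/cpu_hotplug_driver_unlock() are the helpers added in
the diff and cpu_up()/cpu_down() are existing kernel entry points.

#include <linux/cpu.h>

/*
 * Illustrative sketch only: any sysfs-level path that brings a core up or
 * takes it down grabs the hotplug driver mutex first, so cpu_callin_mask
 * and the trampoline state are never touched by two bringups at once.
 * serialized_cpu_set_online() is hypothetical and not part of the patch.
 */
static int serialized_cpu_set_online(unsigned int cpu, bool online)
{
	int ret;

	cpu_hotplug_driver_lock();	/* x86_cpu_hotplug_driver_mutex */
	ret = online ? cpu_up(cpu) : cpu_down(cpu);
	cpu_hotplug_driver_unlock();

	return ret;
}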
Diffstat (limited to 'arch')
-rw-r--r--  arch/x86/Kconfig          |  5
-rw-r--r--  arch/x86/kernel/smpboot.c | 19
2 files changed, 24 insertions(+), 0 deletions(-)
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 9458685902b..b99909cbebf 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -240,6 +240,11 @@ config X86_32_LAZY_GS
 
 config KTIME_SCALAR
 	def_bool X86_32
+
+config ARCH_CPU_PROBE_RELEASE
+	def_bool y
+	depends on HOTPLUG_CPU
+
 source "init/Kconfig"
 source "kernel/Kconfig.freezer"
 
diff --git a/arch/x86/kernel/smpboot.c b/arch/x86/kernel/smpboot.c
index 763d815e27a..8931d05ee25 100644
--- a/arch/x86/kernel/smpboot.c
+++ b/arch/x86/kernel/smpboot.c
@@ -91,6 +91,25 @@ DEFINE_PER_CPU(int, cpu_state) = { 0 };
 static DEFINE_PER_CPU(struct task_struct *, idle_thread_array);
 #define get_idle_for_cpu(x)	(per_cpu(idle_thread_array, x))
 #define set_idle_for_cpu(x, p)	(per_cpu(idle_thread_array, x) = (p))
+
+/*
+ * We need this for trampoline_base protection from concurrent accesses when
+ * off- and onlining cores wildly.
+ */
+static DEFINE_MUTEX(x86_cpu_hotplug_driver_mutex);
+
+void cpu_hotplug_driver_lock()
+{
+	mutex_lock(&x86_cpu_hotplug_driver_mutex);
+}
+
+void cpu_hotplug_driver_unlock()
+{
+	mutex_unlock(&x86_cpu_hotplug_driver_mutex);
+}
+
+ssize_t arch_cpu_probe(const char *buf, size_t count) { return -1; }
+ssize_t arch_cpu_release(const char *buf, size_t count) { return -1; }
 #else
 static struct task_struct *idle_thread_array[NR_CPUS] __cpuinitdata ;
 #define get_idle_for_cpu(x)	(idle_thread_array[(x)])
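
As a usage note, the kind of wild off-/onlining described in the commit
message can be provoked from userspace by toggling a core's sysfs online
file in a tight loop. The standalone program below is a hedged sketch of
such a stress test, not part of the patch; it assumes cpu1 exists, is
hotpluggable, and that the program runs as root.

/*
 * Hypothetical stress test: rapidly offline and online CPU 1 through the
 * standard sysfs online attribute. Without serialization on the hotplug
 * path, loops like this were what triggered the "CPU%d: Stuck ??" message.
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int write_online(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0)
		return -1;
	if (write(fd, val, strlen(val)) < 0) {
		close(fd);
		return -1;
	}
	return close(fd);
}

int main(void)
{
	const char *path = "/sys/devices/system/cpu/cpu1/online";
	int i;

	for (i = 0; i < 1000; i++) {
		/* offline, then immediately online again */
		if (write_online(path, "0") || write_online(path, "1")) {
			perror(path);
			return 1;
		}
	}
	puts("done");
	return 0;
}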