author	Yang Xiaowei <xiaowei.yang@intel.com>	2009-09-09 12:44:52 -0700
committer	Greg Kroah-Hartman <gregkh@suse.de>	2009-10-05 09:31:45 -0700
commit	7bb8fd4ee497a34c68e35d259a20d98cbad7fc94 (patch)
tree	81e7418ff8521f813125e1f7b6b4a15704118715 /arch
parent	64aa234c396c3f595bc08aed93fd753102a0a0ec (diff)
xen: use stronger barrier after unlocking lock
commit 2496afbf1e50c70f80992656bcb730c8583ddac3 upstream.

We need a stronger barrier between releasing the lock and checking for
any waiting spinners.  A compiler barrier is not sufficient because the
CPU's ordering rules do not prevent the read of xl->spinners from
happening before the unlock assignment, as they are different memory
locations.  We need an explicit barrier to enforce the write-read
ordering to different memory locations.

Without this barrier, I can't bring up more than 4 HVM guests on one
SMP machine.

[ Code and commit comments expanded -J ]

[ Impact: avoid deadlock when using Xen PV spinlocks ]

Signed-off-by: Yang Xiaowei <xiaowei.yang@intel.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
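The pattern at issue can be sketched outside the kernel.  Below is a
minimal standalone illustration, not the kernel code: the names
lock_word, spinners, and unlock_and_kick are invented for this sketch,
and C11 atomics stand in for the kernel primitives.  The store to
lock_word and the load of spinners touch different memory locations, so
only a full fence (the analogue of the kernel's mb()) rules out the CPU
reordering them; a plain compiler barrier does not.

/* build: cc -std=c11 barrier_sketch.c */
#include <stdatomic.h>
#include <stdio.h>

static atomic_int lock_word = 1;   /* 1 = held, 0 = free (hypothetical) */
static atomic_int spinners  = 0;   /* count of waiting CPUs (hypothetical) */

static void unlock_and_kick(void)
{
	/* Release the lock: a store to one memory location... */
	atomic_store_explicit(&lock_word, 0, memory_order_release);

	/*
	 * ...followed by a read of a *different* memory location.
	 * A compiler barrier only stops the compiler from reordering
	 * the two; the CPU's store buffer can still satisfy the load
	 * before the store is globally visible.  A full fence (the
	 * kernel's mb()) forbids that store->load reordering.
	 */
	atomic_thread_fence(memory_order_seq_cst);

	if (atomic_load_explicit(&spinners, memory_order_relaxed))
		printf("kick the waiting spinners\n"); /* slow-path stand-in */
}

int main(void)
{
	unlock_and_kick();
	return 0;
}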
Diffstat (limited to 'arch')
-rw-r--r--	arch/x86/xen/spinlock.c	9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
diff --git a/arch/x86/xen/spinlock.c b/arch/x86/xen/spinlock.c
index 2f91e565192..36a5141108d 100644
--- a/arch/x86/xen/spinlock.c
+++ b/arch/x86/xen/spinlock.c
@@ -326,8 +326,13 @@ static void xen_spin_unlock(struct raw_spinlock *lock)
 	smp_wmb();		/* make sure no writes get moved after unlock */
 	xl->lock = 0;		/* release lock */
 
-	/* make sure unlock happens before kick */
-	barrier();
+	/*
+	 * Make sure unlock happens before checking for waiting
+	 * spinners.  We need a strong barrier to enforce the
+	 * write-read ordering to different memory locations, as the
+	 * CPU makes no implied guarantees about their ordering.
+	 */
+	mb();
 
 	if (unlikely(xl->spinners))
 		xen_spin_unlock_slow(xl);
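
The reordering the full barrier prevents can be observed directly with
the classic store-buffering litmus test, sketched below as a standalone
demo (again hypothetical, not part of the patch).  Each thread stores 1
to its own flag and then loads the other thread's flag; without the
fences, x86 permits runs where both loads return 0, which is exactly
the write-read reordering across distinct locations described above.
Uncommenting the fence in both threads forbids that outcome.  The
result is timing-dependent, so the reordering may take many iterations
to appear.

/* build: cc -std=c11 -pthread sb_litmus.c */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int x, y;
static int r0, r1;

static void *t0(void *arg)
{
	(void)arg;
	atomic_store_explicit(&x, 1, memory_order_relaxed);
	/* atomic_thread_fence(memory_order_seq_cst);
	   -- uncomment here and in t1 to forbid r0 == r1 == 0 */
	r0 = atomic_load_explicit(&y, memory_order_relaxed);
	return NULL;
}

static void *t1(void *arg)
{
	(void)arg;
	atomic_store_explicit(&y, 1, memory_order_relaxed);
	/* atomic_thread_fence(memory_order_seq_cst);
	   -- uncomment here and in t0 to forbid r0 == r1 == 0 */
	r1 = atomic_load_explicit(&x, memory_order_relaxed);
	return NULL;
}

int main(void)
{
	for (int i = 0; i < 1000000; i++) {
		atomic_store(&x, 0);
		atomic_store(&y, 0);
		pthread_t a, b;
		pthread_create(&a, NULL, t0, NULL);
		pthread_create(&b, NULL, t1, NULL);
		pthread_join(a, NULL);
		pthread_join(b, NULL);
		if (r0 == 0 && r1 == 0) {
			printf("store->load reordering seen at iteration %d\n", i);
			return 0;
		}
	}
	puts("no reordering observed this run (timing-dependent)");
	return 0;
}

This also explains the asymmetry in the patch: x86's memory model
reorders only stores against later loads, so the smp_wmb() before the
unlock store suffices there, while the cross-location read of
xl->spinners needs the full mb().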