From 919fc6e34831d1c2b58bfb5ae261dc3facc9b269 Mon Sep 17 00:00:00 2001
From: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Date: Wed, 11 Dec 2013 13:59:11 -0800
Subject: [PATCH] powerpc: Full barrier for smp_mb__after_unlock_lock()

The powerpc lock acquisition sequence is as follows:

	lwarx; cmpwi; bne; stwcx.; lwsync;

Lock release is as follows:

	lwsync; stw;

If CPU 0 does a store (say, x=1) then a lock release, and CPU 1 does a
lock acquisition then a load (say, r1=y), then there is no guarantee of
a full memory barrier between the store to 'x' and the load from 'y'.
To see this, suppose that CPUs 0 and 1 are hardware threads in the same
core that share a store buffer, and that CPU 2 is in some other core,
and that CPU 2 does the following:

	y = 1; sync; r2 = x;

If 'x' and 'y' are both initially zero, then the lock acquisition and
release sequences above can result in r1 and r2 both being equal to
zero, which could not happen if unlock+lock were a full barrier.

This commit therefore makes powerpc's smp_mb__after_unlock_lock() be a
full barrier.

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt
Reviewed-by: Peter Zijlstra
Cc: Paul Mackerras
Cc: linuxppc-dev@lists.ozlabs.org
Cc:
Cc: Linus Torvalds
Cc: Andrew Morton
Link: http://lkml.kernel.org/r/1386799151-2219-8-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar
---
 arch/powerpc/include/asm/spinlock.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 5f54a744dcc..f6e78d63fb6 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -28,6 +28,8 @@
 #include <asm/synch.h>
 #include <asm/ppc-opcode.h>
 
+#define smp_mb__after_unlock_lock()	smp_mb()	/* Full ordering for lock. */
+
 #define arch_spin_is_locked(x)		((x)->slock != 0)
 
 #ifdef CONFIG_PPC64
--
2.43.2
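
For illustration, the scenario in the commit message can be written out
as a store-buffering style litmus test in kernel-style C. This is a
sketch, not part of the patch: the lock name 'lck' is made up, and the
placement of the new primitive follows the intended usage of
smp_mb__after_unlock_lock() (immediately after the lock acquisition):

	/* x, y, r1, r2 are initially zero; lck is a hypothetical spinlock. */

	/* CPU 0 */
	x = 1;
	spin_unlock(&lck);

	/* CPU 1 (hardware thread in the same core as CPU 0,
	 * sharing its store buffer) */
	spin_lock(&lck);
	smp_mb__after_unlock_lock();	/* the primitive this patch defines */
	r1 = y;

	/* CPU 2 (some other core) */
	y = 1;
	smp_mb();			/* emits "sync" on powerpc */
	r2 = x;

Without the smp_mb__after_unlock_lock() line, powerpc's lwsync-based
unlock and lock sequences allow the outcome r1 == 0 && r2 == 0; with it,
the unlock on CPU 0 combined with the lock on CPU 1 acts as a full
barrier, so at least one of r1 and r2 must end up equal to 1.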
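
As context, and again not part of this patch: the primitive is meant to
cost nothing on architectures where unlock followed by lock is already a
full barrier, so only powerpc pays for the smp_mb(). A sketch of the
generic fallback, assuming the usual convention that an architecture's
spinlock.h may override it before the generic header tests for it:

	/*
	 * Generic fallback: architectures whose unlock+lock sequence
	 * already provides full ordering need no extra barrier.
	 */
	#ifndef smp_mb__after_unlock_lock
	#define smp_mb__after_unlock_lock()	do { } while (0)
	#endif

Callers invoke it right after taking a lock when they need the prior
release of that lock, together with this acquisition, to order all
earlier memory accesses against all later ones.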