author    Catalin Marinas <catalin.marinas@arm.com>    2013-05-31 16:30:58 +0100
committer Catalin Marinas <catalin.marinas@arm.com>    2013-06-07 17:58:31 +0100
commit    4ecf7ccb1973fd826456b6ab1e6dfafe9023c753 (patch)
tree      318c48fdfa422908da871d25d2e47903321aabe3 /arch/arm64/include
parent    ebd88367de80f9509bd30a09342d0a19c925b23e (diff)
arm64: spinlock: retry trylock operation if strex fails on free lock
An exclusive store instruction may fail for reasons other than lock contention (e.g. a cache eviction during the critical section), so, in line with other architectures using similar exclusive instructions (alpha, mips, powerpc), retry the trylock operation if the lock appears to be free but the strex reported failure.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Reported-by: Tony Thompson <anthony.thompson@arm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Diffstat (limited to 'arch/arm64/include')
-rw-r--r--  arch/arm64/include/asm/spinlock.h  3
1 file changed, 2 insertions, 1 deletion
diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index 7065e920149..0defa0728a9 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -59,9 +59,10 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
 	unsigned int tmp;
 
 	asm volatile(
-	"	ldaxr	%w0, %1\n"
+	"2:	ldaxr	%w0, %1\n"
 	"	cbnz	%w0, 1f\n"
 	"	stxr	%w0, %w2, %1\n"
+	"	cbnz	%w0, 2b\n"
 	"1:\n"
 	: "=&r" (tmp), "+Q" (lock->lock)
 	: "r" (1)