| author | Sebastian Ott <sebott@linux.vnet.ibm.com> | 2014-06-04 15:58:24 +0200 |
|---|---|---|
| committer | Tejun Heo <tj@kernel.org> | 2014-06-04 12:12:29 -0400 |
| commit | 0c36b390a546055b6815d4b93a2c9fed4d980ffb (patch) | |
| tree | c0ef99ba5fa35881ba8c50ac5d2238daa7e5138a /include | |
| parent | 5a838c3b60e3a36ade764cf7751b8f17d7c9c2da (diff) | |
percpu-refcount: fix usage of this_cpu_ops
The percpu-refcount infrastructure uses the underscore variants of the
this_cpu operations (e.g. __this_cpu_inc()) to modify its percpu
reference counters.
However, the underscore variants do not update the percpu variable
atomically; they may be implemented as a read-modify-write sequence of
more than one instruction. It is therefore only safe to use them if the
counter is always accessed from the same context (process, softirq, or
hardirq). Otherwise updates can be lost.
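To make the lost update concrete, here is a minimal userspace sketch (an analogy for illustration, not code from the kernel or this patch): the main loop plays the role of process context doing an open-coded read-modify-write, and a signal handler plays the role of softirq context. A signal arriving between the load and the store makes the loop overwrite the handler's increment.

```c
/*
 * Userspace analogy of the lost-update window left open by
 * __this_cpu_inc()-style non-atomic read-modify-write updates.
 */
#include <signal.h>
#include <stdio.h>
#include <sys/time.h>

static volatile long counter;
static volatile long handler_incs;

static void on_tick(int sig)
{
	(void)sig;
	counter = counter + 1;	/* the "softirq" increment */
	handler_incs++;
}

int main(void)
{
	const long loops = 100 * 1000 * 1000;
	struct itimerval it = {
		.it_interval = { .tv_usec = 100 },
		.it_value    = { .tv_usec = 100 },
	};

	signal(SIGALRM, on_tick);
	setitimer(ITIMER_REAL, &it, NULL);

	for (long i = 0; i < loops; i++) {
		long tmp = counter;	/* read... */
		counter = tmp + 1;	/* ...modify-write: a signal in
					 * between loses its increment */
	}

	it.it_interval.tv_usec = it.it_value.tv_usec = 0;
	setitimer(ITIMER_REAL, &it, NULL);	/* stop the timer */

	/* any shortfall below is a lost update */
	printf("expected %ld, got %ld\n", loops + handler_incs, counter);
	return 0;
}
```

Whether losses actually show up depends on the compiler and timer granularity, but the window is real, and it is the same window the underscore percpu ops leave open between contexts.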
Sebastian hit exactly this problem in the aio subsystem, which uses
percpu refcounters from both process and softirq context: reference
counts never dropped to zero even though the number of "get" and "put"
calls matched.
Fix this by using the non-underscore this_cpu_ops variants, which
provide the required per-CPU atomic semantics and fix the corrupted
reference counts.
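For reference, the generic (non-x86) this_cpu_inc() of this era closes the window by disabling local interrupts around the read-modify-write. The sketch below paraphrases the fallback in include/linux/percpu.h; the exact macro names in the kernel differ. On x86 the operation is instead a single gs-prefixed instruction, which is why this_cpu_inc() costs essentially no more than __this_cpu_inc() there.

```c
/* Approximate shape of the generic this_cpu_inc() fallback. */
#define this_cpu_inc_sketch(pcp)				\
do {								\
	unsigned long __flags;					\
	raw_local_irq_save(__flags);	/* no irq can interleave */ \
	__this_cpu_inc(pcp);		/* plain RMW is now safe */ \
	raw_local_irq_restore(__flags);				\
} while (0)
```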
Cc: Kent Overstreet <kmo@daterainc.com>
Cc: <stable@vger.kernel.org> # v3.11+
Reported-by: Sebastian Ott <sebott@linux.vnet.ibm.com>
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
References: http://lkml.kernel.org/g/alpine.LFD.2.11.1406041540520.21183@denkbrett
Diffstat (limited to 'include')
| -rw-r--r-- | include/linux/percpu-refcount.h | 6 |
1 file changed, 3 insertions(+), 3 deletions(-)
```diff
diff --git a/include/linux/percpu-refcount.h b/include/linux/percpu-refcount.h
index 95961f0bf62..0afb48fd449 100644
--- a/include/linux/percpu-refcount.h
+++ b/include/linux/percpu-refcount.h
@@ -110,7 +110,7 @@ static inline void percpu_ref_get(struct percpu_ref *ref)
 	pcpu_count = ACCESS_ONCE(ref->pcpu_count);
 
 	if (likely(REF_STATUS(pcpu_count) == PCPU_REF_PTR))
-		__this_cpu_inc(*pcpu_count);
+		this_cpu_inc(*pcpu_count);
 	else
 		atomic_inc(&ref->count);
 
@@ -139,7 +139,7 @@ static inline bool percpu_ref_tryget(struct percpu_ref *ref)
 	pcpu_count = ACCESS_ONCE(ref->pcpu_count);
 
 	if (likely(REF_STATUS(pcpu_count) == PCPU_REF_PTR)) {
-		__this_cpu_inc(*pcpu_count);
+		this_cpu_inc(*pcpu_count);
 		ret = true;
 	}
 
@@ -164,7 +164,7 @@ static inline void percpu_ref_put(struct percpu_ref *ref)
 	pcpu_count = ACCESS_ONCE(ref->pcpu_count);
 
 	if (likely(REF_STATUS(pcpu_count) == PCPU_REF_PTR))
-		__this_cpu_dec(*pcpu_count);
+		this_cpu_dec(*pcpu_count);
 	else if (unlikely(atomic_dec_and_test(&ref->count)))
 		ref->release(ref);
```
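For context, a hedged sketch of how such a refcounted object is typically used across contexts, loosely modeled on the aio scenario from the changelog; my_ctx, my_submit() and my_complete() are hypothetical names, while the percpu_ref calls match the v3.11-era API:

```c
#include <linux/percpu-refcount.h>
#include <linux/slab.h>

struct my_ctx {
	struct percpu_ref	users;
};

/* release callback: runs once the last reference is dropped */
static void my_ctx_free(struct percpu_ref *ref)
{
	kfree(container_of(ref, struct my_ctx, users));
}

static struct my_ctx *my_ctx_alloc(void)
{
	struct my_ctx *ctx = kzalloc(sizeof(*ctx), GFP_KERNEL);

	if (ctx && percpu_ref_init(&ctx->users, my_ctx_free)) {
		kfree(ctx);
		return NULL;
	}
	return ctx;
}

/* process context: take a reference while submitting work */
static void my_submit(struct my_ctx *ctx)
{
	percpu_ref_get(&ctx->users);	/* this_cpu_inc() under the hood */
	/* ... queue work that completes in softirq context ... */
}

/* softirq context: drop the reference on completion */
static void my_complete(struct my_ctx *ctx)
{
	percpu_ref_put(&ctx->users);	/* this_cpu_dec() under the hood */
}
```

With the underscore ops, a my_complete() softirq interrupting my_submit() mid-update on the same CPU could lose one of the two counter updates, which is exactly the imbalance described above.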