| author | Daniel J Blueman <daniel.blueman@gmail.com> | 2010-06-01 14:06:13 +0100 |
|---|---|---|
| committer | Paul Gortmaker <paul.gortmaker@windriver.com> | 2011-01-06 18:07:51 -0500 |
| commit | c9ecb99443a55d80019f4f2153de26d2715f6485 (patch) | |
| tree | 80c7d16e9bcce6b999b7d740d9aecc1d04be0a39 /kernel | |
| parent | 64c0d770b67e0bdab7addafd1e19cdcd94be87e3 (diff) | |
rcu: apply RCU protection to wake_affine()
commit f3b577dec1f2ce32d2db6d2ca6badff7002512af upstream.
The task_group() function returns a pointer that must be protected
by either RCU, the ->alloc_lock, or the cgroup lock (see the
rcu_dereference_check() in task_subsys_state(), which is invoked by
task_group()). The wake_affine() function currently does none of these,
which means that a concurrent update would be within its rights to free
the structure returned by task_group(). Because wake_affine() uses this
structure only to compute load-balancing heuristics, there is no reason
to acquire either of the two locks.
Therefore, this commit introduces an RCU read-side critical section that
starts before the first call to task_group() and ends after the last use
of the "tg" pointer returned from task_group(). Thanks to Li Zefan for
pointing out the need to extend the RCU read-side critical section from
that proposed by the original patch.
Signed-off-by: Daniel J Blueman <daniel.blueman@gmail.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
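The check the commit message refers to lives in task_subsys_state(). The sketch below reconstructs roughly how that helper read in kernels of this vintage; the exact wording of the condition is an approximation from memory, not quoted from this tree. It shows why an unprotected task_group() call is flagged: rcu_dereference_check() complains (under CONFIG_PROVE_RCU) unless the caller holds the RCU read lock, the task's ->alloc_lock, or the cgroup lock, exactly the three protections listed above.

```c
/*
 * Approximate reconstruction of task_subsys_state() from this era,
 * shown for illustration only (not part of this patch).
 */
static inline struct cgroup_subsys_state *task_subsys_state(
	struct task_struct *task, int subsys_id)
{
	/* Warn unless one of the three accepted protections is held. */
	return rcu_dereference_check(task->cgroups->subsys[subsys_id],
				     rcu_read_lock_held() ||
				     lockdep_is_held(&task->alloc_lock) ||
				     cgroup_lock_is_held());
}
```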
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/sched_fair.c | 2 |
1 file changed, 2 insertions(+), 0 deletions(-)
```diff
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 461d312d54d..94993ac575c 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1272,6 +1272,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	 * effect of the currently running task from the load
 	 * of the current CPU:
 	 */
+	rcu_read_lock();
 	if (sync) {
 		tg = task_group(current);
 		weight = current->se.load.weight;
@@ -1297,6 +1298,7 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p, int sync)
 	balanced = !this_load ||
 		100*(this_load + effective_load(tg, this_cpu, weight, weight)) <=
 		imbalance*(load + effective_load(tg, prev_cpu, 0, weight));
+	rcu_read_unlock();
 
 	/*
 	 * If the currently running task will sleep within
```
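For a standalone feel of the pattern the patch applies, here is a minimal userspace analogue built on liburcu rather than the kernel's RCU; the struct and function names (group_stats, read_load, update_load) are invented for illustration. The reader brackets every use of an RCU-protected pointer with rcu_read_lock()/rcu_read_unlock(), and the updater defers freeing the old object until synchronize_rcu() guarantees no reader can still hold it.

```c
/*
 * Minimal userspace analogue of the fix, using liburcu (this is NOT the
 * kernel code; names such as group_stats are invented for illustration).
 * Build: gcc rcu_demo.c -o rcu_demo -lurcu
 */
#include <stdio.h>
#include <stdlib.h>
#include <urcu.h>		/* liburcu: rcu_read_lock(), rcu_dereference(), ... */

struct group_stats {		/* stand-in for the kernel's struct task_group */
	long load;
};

static struct group_stats *current_stats;	/* RCU-protected published pointer */

/* Reader: analogous to wake_affine() after the patch. */
static long read_load(void)
{
	struct group_stats *g;
	long load;

	rcu_read_lock();			/* start of read-side critical section */
	g = rcu_dereference(current_stats);	/* analogous to task_group() */
	load = g ? g->load : 0;			/* last use of g stays inside the section */
	rcu_read_unlock();			/* only now may an updater free g */

	return load;
}

/* Updater: replaces the structure and frees the old copy safely. */
static void update_load(long new_load)
{
	struct group_stats *g = malloc(sizeof(*g));
	struct group_stats *old = current_stats;	/* single updater, so a plain read is fine */

	g->load = new_load;
	rcu_assign_pointer(current_stats, g);	/* publish the new version */
	synchronize_rcu();			/* wait for readers such as read_load() */
	free(old);				/* no reader can still see the old copy */
}

int main(void)
{
	rcu_register_thread();			/* required before rcu_read_lock() */
	update_load(42);
	printf("load = %ld\n", read_load());
	rcu_unregister_thread();
	return 0;
}
```

The same reasoning explains why Li Zefan's extension of the critical section matters: dropping the read lock before the last use of "tg" would let a concurrent updater free the structure while the reader is still dereferencing it.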