author     Peter Zijlstra <a.p.zijlstra@chello.nl>   2010-03-24 18:34:10 +0100
committer  Greg Kroah-Hartman <gregkh@suse.de>       2010-09-20 13:18:09 -0700
commit     6d94134f5f3f8bede26d4f700e17154d590d6d6e (patch)
tree       6e090e3328da4e40c12a536f34d54032f6883cd7 /kernel/sched_fair.c
parent     81695bf0ee1a6b3f2a8f183273d945decc1d3f18 (diff)
sched: Fix TASK_WAKING vs fork deadlock
commit 0017d735092844118bef006696a750a0e4ef6ebd upstream
Oleg noticed a few races with the TASK_WAKING usage on fork.
- since TASK_WAKING is basically a spinlock, it should be IRQ safe
- since we set TASK_WAKING (*) without holding rq->lock, there could
  still be an rq->lock holder at that point, so TASK_WAKING does not
  actually provide full serialization.
(*) in fact we clear PF_STARTING, which in effect enables TASK_WAKING.
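To see why TASK_WAKING behaves like a spinlock: other paths busy-wait
for the flag to clear before taking rq->lock and touching the task. A
rough sketch of that waiting pattern (modeled on the
set_cpus_allowed_ptr() path of this era; illustrative, not part of this
patch):

	/*
	 * A waiter spins until the task leaves TASK_WAKING before it
	 * takes rq->lock.  If this waiter ran from an IRQ that
	 * interrupted the TASK_WAKING holder on the same CPU, it would
	 * spin forever -- hence the flag must only be held with IRQs
	 * disabled.
	 */
	while (p->state == TASK_WAKING)
		cpu_relax();

	rq = task_rq_lock(p, &flags);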
Cure the second issue by not setting TASK_WAKING in sched_fork(), but
only temporarily in wake_up_new_task() while calling select_task_rq().
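The wake_up_new_task() side of the fix lives in kernel/sched.c, so it
does not appear in the diffstat below (which is limited to
kernel/sched_fair.c). A rough sketch of the resulting flow, assuming
the upstream shape of the function:

	/* sketch: wake_up_new_task() after this patch */
	rq = task_rq_lock(p, &flags);	/* disables IRQs */
	p->state = TASK_WAKING;		/* held only across select_task_rq() */

	/*
	 * Fork balancing now runs under rq->lock; TASK_WAKING is set so
	 * that select_task_rq_fair() may drop and retake rq->lock
	 * internally without anyone poking at ->cpus_allowed.
	 */
	cpu = select_task_rq(rq, p, SD_BALANCE_FORK, 0);
	set_task_cpu(p, cpu);

	p->state = TASK_RUNNING;
	task_rq_unlock(rq, &flags);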
Cure the first by holding rq->lock around the select_task_rq() call;
this will disable IRQs. However, it requires that we push the rq->lock
release down into select_task_rq_fair()'s cgroup code.
Because select_task_rq_fair() still needs to drop the rq->lock we
cannot fully get rid of TASK_WAKING.
Reported-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <new-submission>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'kernel/sched_fair.c')

 -rw-r--r--  kernel/sched_fair.c |  8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 04ec8b82ce5..ee89be8571b 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -1392,7 +1392,8 @@ find_idlest_cpu(struct sched_group *group, struct task_struct *p, int this_cpu)
  *
  * preempt must be disabled.
  */
-static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flags)
+static int
+select_task_rq_fair(struct rq *rq, struct task_struct *p, int sd_flag, int wake_flags)
 {
 	struct sched_domain *tmp, *affine_sd = NULL, *sd = NULL;
 	int cpu = smp_processor_id();
@@ -1492,8 +1493,11 @@ static int select_task_rq_fair(struct task_struct *p, int sd_flag, int wake_flag
 				  cpumask_weight(sched_domain_span(sd))))
 			tmp = affine_sd;
 
-		if (tmp)
+		if (tmp) {
+			spin_unlock(&rq->lock);
 			update_shares(tmp);
+			spin_lock(&rq->lock);
+		}
 	}
 
 	if (affine_sd && wake_affine(affine_sd, p, sync)) {
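For context, the caller side in kernel/sched.c (also outside this
diffstat) threads rq through to the sched-class hook; a sketch of the
wrapper after the signature change, following the upstream commit:

	static inline
	int select_task_rq(struct rq *rq, struct task_struct *p, int sd_flags,
			   int wake_flags)
	{
		/*
		 * Called with rq->lock held (IRQs off); the class hook
		 * may drop and retake it, see select_task_rq_fair() above.
		 */
		int cpu = p->sched_class->select_task_rq(rq, p, sd_flags,
							 wake_flags);

		/* fall back to a sane CPU if ->cpus_allowed changed under us */
		if (unlikely(!cpumask_test_cpu(cpu, &p->cpus_allowed) ||
			     !cpu_online(cpu)))
			cpu = select_fallback_rq(task_cpu(p), p);

		return cpu;
	}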