author     Peter Zijlstra <a.p.zijlstra@chello.nl>       2009-12-17 13:16:31 +0100
committer  Greg Kroah-Hartman <gregkh@suse.de>           2010-09-20 13:18:01 -0700
commit     9f2243e5817e778ebb95101722492937b845d8c9 (patch)
tree       3a209fa9d9dd5b20a726007fa59c193c5b4ec38d /kernel
parent     07ad01064059aa5ac2e174ba519cd6a0c43301fa (diff)
sched: Fix broken assertion
commit 077614ee1e93245a3b9a4e1213659405dbeb0ba6 upstream
There's a preemption race in the set_task_cpu() debug check: when a
task gets preempted after setting task->state, it is still on the rq
proper but fails the test.
Check for preempted tasks, since those are always on the rq.
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20091217121830.137155561@chello.nl>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'kernel')
-rw-r--r--  kernel/sched.c  3 +-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched.c b/kernel/sched.c
index 94b1ca17db3..947b26df11d 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -2071,7 +2071,8 @@ void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
 	 * We should never call set_task_cpu() on a blocked task,
 	 * ttwu() will sort out the placement.
 	 */
-	WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING);
+	WARN_ON_ONCE(p->state != TASK_RUNNING && p->state != TASK_WAKING &&
+		     !(task_thread_info(p)->preempt_count & PREEMPT_ACTIVE));
 #endif
 	trace_sched_migrate_task(p, new_cpu);