author | Lai Jiangshan <laijs@cn.fujitsu.com> | 2014-02-15 22:02:28 +0800
---|---|---
committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2014-03-06 21:30:11 -0800
commit | 4403be9e25c9d9b82f881cec4fe9a126de02fb9b (patch) |
tree | 003364a5eaccc9fb936ad70b578e41fe69494d12 /kernel |
parent | 755ac7af96bdc50bf65254ac3699e0279a342440 (diff) |
workqueue: ensure @task is valid across kthread_stop()
commit 5bdfff96c69a4d5ab9c49e60abf9e070ecd2acbb upstream.
When a kworker should die, the kworker is notified through the
WORKER_DIE flag instead of kthread_should_stop(). This, IIRC, is
primarily to keep the test synchronized inside the worker_pool lock.
WORKER_DIE is first set while holding pool->lock, then the lock is
dropped and kthread_stop() is called.
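For context, the kworker side of this handshake tests the flag roughly
as in the sketch below (a simplified excerpt of the WORKER_DIE check in
worker_thread(); the exact surrounding code varies between kernel
versions):

	/* in worker_thread(), after waking up, with pool->lock held */
	if (unlikely(worker->flags & WORKER_DIE)) {
		spin_unlock_irq(&pool->lock);
		WARN_ON_ONCE(!list_empty(&worker->entry));
		worker->task->flags &= ~PF_WQ_WORKER;
		/*
		 * Returning ends the kthread; without the pin added by
		 * this patch, the task_struct may then be freed before
		 * or during kthread_stop().
		 */
		return 0;
	}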
Unfortunately, this means there is a slight window in which the target
kworker may see WORKER_DIE before kthread_stop() finishes, exit on its
own, and have its task freed before or during kthread_stop().
Fix it by pinning the target task before setting WORKER_DIE and
putting it after kthread_stop() is done.
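The resulting get/stop/put ordering is what the hunks below implement.
A condensed sketch of destroy_worker() after the change (sketch only:
the idle accounting, list removal, and kfree() are elided):

	/* called with pool->lock held */
	/*
	 * Pin first: once WORKER_DIE is visible, the kworker may exit
	 * at any point and its task_struct could otherwise go away.
	 */
	get_task_struct(worker->task);

	worker->flags |= WORKER_DIE;

	spin_unlock_irq(&pool->lock);

	kthread_stop(worker->task);	/* task is guaranteed to be valid */
	put_task_struct(worker->task);	/* drop the pin taken above */

	spin_lock_irq(&pool->lock);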
tj: Improved patch description and comment. Moved the pinning above
the WORKER_DIE assignment to better signify what it's protecting.
Signed-off-by: Lai Jiangshan <laijs@cn.fujitsu.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/workqueue.c | 7 |
1 file changed, 7 insertions, 0 deletions
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 68086a34b8e..db7a6ac7c0a 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1823,6 +1823,12 @@ static void destroy_worker(struct worker *worker)
 	if (worker->flags & WORKER_IDLE)
 		pool->nr_idle--;
 
+	/*
+	 * Once WORKER_DIE is set, the kworker may destroy itself at any
+	 * point.  Pin to ensure the task stays until we're done with it.
+	 */
+	get_task_struct(worker->task);
+
 	list_del_init(&worker->entry);
 	worker->flags |= WORKER_DIE;
 
@@ -1831,6 +1837,7 @@ static void destroy_worker(struct worker *worker)
 	spin_unlock_irq(&pool->lock);
 
 	kthread_stop(worker->task);
+	put_task_struct(worker->task);
 	kfree(worker);
 
 	spin_lock_irq(&pool->lock);