| author | Tejun Heo <tj@kernel.org> | 2014-02-13 13:29:31 -0500 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2014-03-06 22:06:18 -0800 |
| commit | 432564bca7e73404311b231c864f41c2f6921f9e (patch) | |
| tree | b761a120b45302a0babd315da1aca1ae88964e7e /kernel/cgroup.c | |
| parent | 3c9c65d3ea1f5e1b06d513b9acb7e3235aff4afe (diff) | |
cgroup: update cgroup_enable_task_cg_lists() to grab siglock
commit 532de3fc72adc2a6525c4d53c07bf81e1732083d upstream.
Currently, nothing prevents cgroup_enable_task_cg_lists() from missing
a concurrently set PF_EXITING and racing against cgroup_exit().
Depending on the timing, cgroup_exit() may finish with the task still
linked on its css_set, leading to list corruption. Fix it by grabbing
siglock in cgroup_enable_task_cg_lists() so that PF_EXITING is
guaranteed to be visible.
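To make the race concrete, here is a minimal userspace sketch of the locking pattern the fix relies on. It is not kernel code: a pthread mutex stands in for sighand->siglock, and the `exiting` and `on_list` flags are hypothetical stand-ins for PF_EXITING and p->cg_list. Because both sides take the same lock, the walker either sees `exiting` and skips the task, or links it before the exit path runs, in which case the exit path's unlink still finds it. Without the lock, the walker can link a task whose exit path has already finished, which is the list corruption described above.

```c
/* Userspace analogue of the race fix; all names are illustrative. */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t siglock = PTHREAD_MUTEX_INITIALIZER; /* models sighand->siglock */
static bool exiting;  /* models PF_EXITING */
static bool on_list;  /* models !list_empty(&p->cg_list) */

/* Exit path: mark the task exiting and unlink it, in one critical section. */
static void task_exit(void)
{
	pthread_mutex_lock(&siglock);
	exiting = true;
	on_list = false;  /* the list_del() done by the exit path */
	pthread_mutex_unlock(&siglock);
}

/* Enable path: the flag check and the link are indivisible because they
 * run under the same lock as task_exit(). */
static void enable_cg_lists(void)
{
	pthread_mutex_lock(&siglock);
	if (!exiting && !on_list)
		on_list = true;  /* the list_add() done by the walker */
	pthread_mutex_unlock(&siglock);
}

static void *exit_thread(void *arg)   { (void)arg; task_exit(); return NULL; }
static void *enable_thread(void *arg) { (void)arg; enable_cg_lists(); return NULL; }

int main(void)
{
	pthread_t a, b;

	pthread_create(&a, NULL, exit_thread, NULL);
	pthread_create(&b, NULL, enable_thread, NULL);
	pthread_join(a, NULL);
	pthread_join(b, NULL);

	/* Invariant the lock preserves: an exited task is never left linked. */
	printf("exiting=%d on_list=%d\n", exiting, on_list);
	return 0;
}
```

Build with `gcc -pthread`; whatever the interleaving, `on_list` is never 1 once `exiting` is 1.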
This whole on-demand cg_list optimization is extremely fragile and
leaves ample room for bugs that show up as things like a once-a-year
oops during boot. I'm wondering whether the better approach would be
to just add "cgroup_disable=all" handling, which disables cgroup as a
whole, rather than tempting fate with this on-demand craziness.
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'kernel/cgroup.c')
-rw-r--r-- | kernel/cgroup.c | 5
1 file changed, 5 insertions(+), 0 deletions(-)
```diff
diff --git a/kernel/cgroup.c b/kernel/cgroup.c
index 4037e3abc40..271acd83491 100644
--- a/kernel/cgroup.c
+++ b/kernel/cgroup.c
@@ -2985,9 +2985,14 @@ static void cgroup_enable_task_cg_lists(void)
 		 * We should check if the process is exiting, otherwise
 		 * it will race with cgroup_exit() in that the list
 		 * entry won't be deleted though the process has exited.
+		 * Do it while holding siglock so that we don't end up
+		 * racing against cgroup_exit().
 		 */
+		spin_lock_irq(&p->sighand->siglock);
 		if (!(p->flags & PF_EXITING) && list_empty(&p->cg_list))
 			list_add(&p->cg_list, &task_css_set(p)->tasks);
+		spin_unlock_irq(&p->sighand->siglock);
+
 		task_unlock(p);
 	} while_each_thread(g, p);
 	read_unlock(&tasklist_lock);
```
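For readability, this is how the patched region of cgroup_enable_task_cg_lists() reads once the hunk applies, assembled only from the hunk's context and added lines; code outside the hunk is elided:

```c
		/*
		 * We should check if the process is exiting, otherwise
		 * it will race with cgroup_exit() in that the list
		 * entry won't be deleted though the process has exited.
		 * Do it while holding siglock so that we don't end up
		 * racing against cgroup_exit().
		 */
		spin_lock_irq(&p->sighand->siglock);
		if (!(p->flags & PF_EXITING) && list_empty(&p->cg_list))
			list_add(&p->cg_list, &task_css_set(p)->tasks);
		spin_unlock_irq(&p->sighand->siglock);

		task_unlock(p);
	} while_each_thread(g, p);
	read_unlock(&tasklist_lock);
```

Note that both the PF_EXITING test and the list_add() now sit inside the siglock critical section, so the walker can no longer observe a stale flag and link a task that cgroup_exit() has already unlinked.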