author     Nathan Lynch <ntl@pobox.com>          2009-04-09 18:20:02 +0000
committer  Chris Wright <chrisw@sous-sol.org>    2009-04-27 10:37:00 -0700
commit     3f22ebd2f4d191631a7b867addc8bd99e948873f (patch)
tree       71b2444f2c088dc82ac813f9f4d66daf2d471875 /include
parent     94835d2bc7374d26ae8602a96c866c9ddf2e2c45 (diff)
sched: do not count frozen tasks toward load
upstream commit: e3c8ca8336707062f3f7cb1cd7e6b3c753baccdd

Freezing tasks via the cgroup freezer causes the load average to climb
because the freezer's current implementation puts frozen tasks in
uninterruptible sleep (D state).

Some applications which perform job-scheduling functions consult the load
average when making decisions.  If a cgroup is frozen, the load average
does not provide a useful measure of the system's utilization to such
applications.  This is especially inconvenient if the job scheduler
employs the cgroup freezer as a mechanism for preempting low priority
jobs.  Contrast this with using SIGSTOP for the same purpose: the stopped
tasks do not count toward system load.

Change task_contributes_to_load() to return false if the task is frozen.
This results in /proc/loadavg behavior that better meets users'
expectations.

Signed-off-by: Nathan Lynch <ntl@pobox.com>
Acked-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Nigel Cunningham <nigel@tuxonice.net>
Tested-by: Nigel Cunningham <nigel@tuxonice.net>
Cc: <stable@kernel.org>
Cc: containers@lists.linux-foundation.org
Cc: linux-pm@lists.linux-foundation.org
Cc: Matt Helsley <matthltc@us.ibm.com>
LKML-Reference: <20090408194512.47a99b95@manatee.lan>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
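For background, /proc/loadavg is an exponentially damped average of the
number of runnable plus uninterruptible tasks, so every frozen task left in
D state nudges the reported figure upward. The sketch below is a minimal
user-space model of that fixed-point averaging; the FSHIFT/FIXED_1/EXP_1
constants mirror the definitions in include/linux/sched.h of this era,
while the calc_load() helper and the four-task scenario are illustrative
only, not kernel code.

#include <stdio.h>

/* Constants as defined in include/linux/sched.h of this era. */
#define FSHIFT   11                  /* bits of fixed-point precision   */
#define FIXED_1  (1 << FSHIFT)       /* 1.0 in fixed point              */
#define EXP_1    1884                /* 1/exp(5sec/1min) in fixed point */

/* One 5-second update step of the 1-minute average (the CALC_LOAD idea). */
static unsigned long calc_load(unsigned long load, unsigned long exp,
			       unsigned long active)
{
	load *= exp;
	load += active * (FIXED_1 - exp);
	return load >> FSHIFT;
}

int main(void)
{
	unsigned long avenrun = 0;
	/* Four frozen (D-state) tasks counted as active: the average climbs
	 * toward 4.00 even though nothing is actually runnable. */
	unsigned long active = 4 * FIXED_1;
	int i;

	for (i = 0; i < 60; i++)		/* ~5 minutes of 5s ticks */
		avenrun = calc_load(avenrun, EXP_1, active);

	printf("loadavg ~ %lu.%02lu\n", avenrun >> FSHIFT,
	       ((avenrun & (FIXED_1 - 1)) * 100) >> FSHIFT);
	return 0;
}

Run long enough, the 1-minute average approaches 4.00 although none of the
four tasks can run; keeping frozen tasks out of the "active" count is
exactly what this patch achieves.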
Diffstat (limited to 'include')
-rw-r--r--	include/linux/sched.h	3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 011db2f4c94..f8af1671a93 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -202,7 +202,8 @@ extern unsigned long long time_sync_thresh;
#define task_is_stopped_or_traced(task) \
((task->state & (__TASK_STOPPED | __TASK_TRACED)) != 0)
#define task_contributes_to_load(task) \
- ((task->state & TASK_UNINTERRUPTIBLE) != 0)
+ ((task->state & TASK_UNINTERRUPTIBLE) != 0 && \
+ (task->flags & PF_FROZEN) == 0)
#define __set_task_state(tsk, state_value) \
do { (tsk)->state = (state_value); } while (0)
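To see the effect of the new PF_FROZEN test in isolation, the following
self-contained sketch re-creates the macro exactly as it reads after this
hunk, with the TASK_UNINTERRUPTIBLE and PF_FROZEN values taken from the
same header; the struct task_struct stub and main() are illustrative
scaffolding, not kernel code.

#include <stdio.h>

/* Values as defined in include/linux/sched.h of this era. */
#define TASK_UNINTERRUPTIBLE	2
#define PF_FROZEN		0x00010000	/* frozen for system suspend */

/* Stub carrying only the fields the macro looks at. */
struct task_struct {
	volatile long state;
	unsigned long flags;
};

/* The macro as it reads after this patch. */
#define task_contributes_to_load(task)	\
		((task->state & TASK_UNINTERRUPTIBLE) != 0 && \
		 (task->flags & PF_FROZEN) == 0)

int main(void)
{
	struct task_struct d_state = { .state = TASK_UNINTERRUPTIBLE, .flags = 0 };
	struct task_struct frozen  = { .state = TASK_UNINTERRUPTIBLE,
				       .flags = PF_FROZEN };
	struct task_struct *dp = &d_state, *fp = &frozen;

	/* An ordinary D-state sleeper still counts toward the load average. */
	printf("D-state task contributes: %d\n", task_contributes_to_load(dp));
	/* A task parked by the freezer no longer does. */
	printf("frozen task contributes:  %d\n", task_contributes_to_load(fp));
	return 0;
}

The first line prints 1 and the second prints 0; before this patch both
would have printed 1, which is the load-average inflation described in the
commit message.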