| author | Alex Shi <alex.shi@intel.com> | 2010-06-17 14:08:13 +0800 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@suse.de> | 2010-08-02 10:29:46 -0700 |
| commit | 581c88153a829dbe8c24faea91311b1701be4b7e (patch) | |
| tree | a013076031f50ab07bf53b63b82065076bfd4dbf /kernel | |
| parent | 0dd6ec3a33cab9ee890c9ff671edea9202246c27 (diff) | |
sched: Fix over-scheduling bug
commit 3c93717cfa51316e4dbb471e7c0f9d243359d5f8 upstream.
Commit e70971591 ("sched: Optimize unused cgroup configuration") introduced
an imbalanced scheduling bug.
If cgroups are not used, update_h_load() does not update h_load. When the
system has far more tasks than logical CPUs, the incorrect
cfs_rq[cpu]->h_load value causes load_balance() to pull too many tasks to
the local CPU from the busiest CPU, so the role of busiest CPU just rotates
among the CPUs in a round robin. That hurts performance.
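As a rough illustration (not the literal kernel/sched.c code of that era; the struct and function names below are placeholders), the group-aware balancing path scales the load it still wants to move by the busiest group runqueue's weight relative to its h_load, so an h_load that is never refreshed inflates that estimate:

```c
/*
 * Illustrative sketch only (placeholder types, not the real kernel API):
 * group-aware balancing scales the remaining load to move by roughly
 * weight / (h_load + 1) for the busiest group runqueue.
 */
struct cfs_rq_view {
	unsigned long weight;   /* group's load weight on the busiest CPU     */
	unsigned long h_load;   /* hierarchical load; stale if never updated  */
};

static unsigned long load_to_move(const struct cfs_rq_view *busiest,
				  unsigned long rem_load_move)
{
	/*
	 * With h_load kept up to date this ratio is sane; if update_h_load()
	 * bails out early, h_load keeps its initial (near-zero) value, the
	 * divisor collapses to ~1, and the balancer massively overestimates
	 * how much it may pull to the local CPU.
	 */
	unsigned long long rem_load =
		(unsigned long long)rem_load_move * busiest->weight;

	return (unsigned long)(rem_load / (busiest->h_load + 1));
}
```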
The issue was originally found with a scientific-computation workload
developed by Yanmin. With that commit, the workload's performance drops by
about 40%.
| CPU | before | after |
|---|---|---|
| 00 | 2 | 7 |
| 01 | 1 | 7 |
| 02 | 11 | 6 |
| 03 | 12 | 7 |
| 04 | 6 | 6 |
| 05 | 11 | 7 |
| 06 | 10 | 6 |
| 07 | 12 | 7 |
| 08 | 11 | 6 |
| 09 | 12 | 6 |
| 10 | 1 | 6 |
| 11 | 1 | 6 |
| 12 | 6 | 6 |
| 13 | 2 | 6 |
| 14 | 2 | 6 |
| 15 | 1 | 6 |
Reviewed-by: Yanmin zhang <yanmin.zhang@intel.com>
Signed-off-by: Alex Shi <alex.shi@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1276754893.9452.5442.camel@debian>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/sched.c | 3 |
1 file changed, 0 insertions, 3 deletions
```
diff --git a/kernel/sched.c b/kernel/sched.c
index 37b88b0cf10..a4a38016d1f 100644
--- a/kernel/sched.c
+++ b/kernel/sched.c
@@ -1675,9 +1675,6 @@ static void update_shares(struct sched_domain *sd)
 
 static void update_h_load(long cpu)
 {
-	if (root_task_group_empty())
-		return;
-
 	walk_tg_tree(tg_load_down, tg_nop, (void *)cpu);
 }
 
```
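With the early return removed, update_h_load() always performs the walk_tg_tree(tg_load_down, tg_nop, (void *)cpu) pass. As a hedged sketch of what that walk recomputes (placeholder struct, arithmetic only approximating the era's tg_load_down() logic): the root group's h_load is the raw runqueue load, and each child group's h_load is the parent's h_load scaled by the child's share of the parent's load.

```c
/*
 * Illustrative sketch (placeholder struct, approximate arithmetic):
 * top-down per-CPU h_load propagation in the spirit of tg_load_down().
 */
struct group_view {
	struct group_view *parent;  /* NULL for the root task group         */
	unsigned long shares;       /* this group's shares on the CPU       */
	unsigned long load_weight;  /* this group's queued load on the CPU  */
	unsigned long h_load;       /* hierarchical load being recomputed   */
};

static void propagate_h_load(struct group_view *g, unsigned long rq_load)
{
	if (!g->parent) {
		/* Root group: hierarchical load is the raw runqueue load. */
		g->h_load = rq_load;
	} else {
		/* Child group: parent's h_load scaled by our share of it. */
		unsigned long long load = g->parent->h_load;

		load *= g->shares;
		load /= g->parent->load_weight + 1;
		g->h_load = (unsigned long)load;
	}
}
```

Skipping this walk when no child groups exist looked like a harmless optimization, but it is precisely what left cfs_rq[cpu]->h_load stale for the balancing path described above.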