author	Suresh Siddha <suresh.b.siddha@intel.com>	2011-02-10 10:23:28 +0100
committer	AK <andi@firstfloor.org>	2011-03-31 11:58:02 -0700
commit	118de48b5183dbb3d6281302528ad1a93624854f (patch)
tree	8a63f75c5da1822edf0b0d0dcbf7aaf4434784f7 /include
parent	43c34b42474e03b54e11f6f17d87ce1160a83bc0 (diff)
sched: Use group weight, idle cpu metrics to fix imbalances during idle
Commit aae6d3ddd8b90f5b2c8d79a2b914d1706d124193 upstream.

Currently we consider a sched domain to be well balanced when the imbalance is less than the domain's imbalance_pct. As the number of cores and threads increases, current values of imbalance_pct (for example, 25% for a NUMA domain) are not enough to detect imbalances like:

a) On a WSM-EP system (two sockets, each having 6 cores and 12 logical threads), 24 cpu-hogging tasks get scheduled as 13 on one socket and 11 on the other, leaving one HT cpu idle.

b) On a hypothetical 2-socket NHM-EX system (each socket having 8 cores and 16 logical threads), 16 cpu-hogging tasks can get scheduled as 9 on one socket and 7 on the other, leaving one core idle in one socket while the other socket has a core with both of its HT siblings busy.

While this issue can be fixed by decreasing the domain's imbalance_pct (by making it a function of the number of logical cpus in the domain), that can potentially cause more task migrations across sched groups in the overloaded case.

Fix this by using imbalance_pct only during newly_idle and busy load balancing. During idle load balancing, instead check whether there is an imbalance in the number of idle cpus between the busiest group and this sched_group, or whether the busiest group has more tasks than its weight that the idle cpu in this group can pull.

Reported-by: Nikhil Rao <ncrao@google.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
LKML-Reference: <1284760952.2676.11.camel@sbsiddha-MOBL3.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
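To make the heuristic concrete, here is a minimal standalone C sketch of the decision the changelog describes. The names (struct group_stats, domain_balanced) are illustrative and are not the kernel's identifiers; only the shape of the check follows the commit message: a percentage threshold for busy and newly-idle balancing, and idle-cpu counts plus group weight for idle balancing.

	#include <stdbool.h>
	#include <stdio.h>

	/* Illustrative per-group statistics; names do not match the kernel's. */
	struct group_stats {
		unsigned int nr_running;   /* tasks running in the group */
		unsigned int group_weight; /* number of cpus in the group */
		unsigned int idle_cpus;    /* idle cpus in the group */
		unsigned long load;        /* aggregate group load */
	};

	/*
	 * Decide whether the domain looks balanced. When the balancing cpu is
	 * fully idle, skip the imbalance_pct threshold and instead compare
	 * idle-cpu counts and check whether the busiest group carries more
	 * tasks than it has cpus.
	 */
	static bool domain_balanced(bool cpu_idle, unsigned int imbalance_pct,
				    const struct group_stats *local,
				    const struct group_stats *busiest)
	{
		if (cpu_idle)
			return local->idle_cpus <= busiest->idle_cpus + 1 &&
			       busiest->nr_running <= busiest->group_weight;

		/* newly_idle / busy balancing: conservative percentage check. */
		return 100UL * busiest->load <= imbalance_pct * local->load;
	}

	int main(void)
	{
		/* Case (a) from the changelog: 13 vs 11 tasks on 12-cpu sockets. */
		struct group_stats local   = { .nr_running = 11, .group_weight = 12,
					       .idle_cpus = 1, .load = 1100 };
		struct group_stats busiest = { .nr_running = 13, .group_weight = 12,
					       .idle_cpus = 0, .load = 1300 };

		/* With imbalance_pct = 125 (25%), the percentage check calls
		 * this balanced (130000 <= 137500)... */
		printf("pct check balanced:  %d\n",
		       domain_balanced(false, 125, &local, &busiest));
		/* ...but the idle-cpu check does not, since the busiest group
		 * runs 13 tasks on 12 cpus, so the idle cpu can pull one. */
		printf("idle check balanced: %d\n",
		       domain_balanced(true, 125, &local, &busiest));
		return 0;
	}

The example reproduces scenario (a): the old 25% threshold sees 13-vs-11 as balanced, while the new idle-path check notices the busiest group is overloaded relative to its weight.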
Diffstat (limited to 'include')
-rw-r--r--	include/linux/sched.h	| 1 +
1 file changed, 1 insertion(+), 0 deletions(-)
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 0c2e5b5d02c..fc531a78012 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -856,6 +856,7 @@ struct sched_group {
* single CPU.
*/
unsigned int cpu_power;
+ unsigned int group_weight;
/*
* The CPUs this group covers.
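The hunk above only declares the new group_weight field; the patch presumably initializes it from the group's cpumask (in the kernel that would be something like cpumask_weight(sched_group_cpus(sg)), both real helpers, though this hunk is not shown here). A standalone model of that computation, with a plain 64-bit word standing in for struct cpumask:

	#include <stdint.h>
	#include <stdio.h>

	/* Standalone model: a group's weight is the population count of its
	 * cpu mask. A uint64_t stands in for the kernel's struct cpumask. */
	static unsigned int group_weight_from_mask(uint64_t cpu_mask)
	{
		return (unsigned int)__builtin_popcountll(cpu_mask);
	}

	int main(void)
	{
		/* A 12-cpu socket: bits 0..11 set. */
		printf("%u\n", group_weight_from_mask((1ULL << 12) - 1)); /* prints 12 */
		return 0;
	}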