author | Venkatesh Pallipadi <venki@google.com> | 2011-02-10 10:23:27 +0100
committer | AK <andi@firstfloor.org> | 2011-03-31 11:58:01 -0700
commit | fd72c5feeb61857dbcc4fac1c98157925fbb085e (patch)
tree | 9a3bf2ec55ffe3d1ce0ee38ec41dc15cf1d7f36b /kernel/sched_fair.c
parent | a3fe22ee824895aafdc1b788e19c081a2e6dd9da (diff)
sched: Remove irq time from available CPU power
Commit: aa483808516ca5cacfa0e5849691f64fec25828e upstream
The idea was suggested by Peter Zijlstra here:
http://marc.info/?l=linux-kernel&m=127476934517534&w=2
irq time is technically not available to the tasks running on the CPU.
This patch removes irq time from CPU power piggybacking on
sched_rt_avg_update().
Tested this by keeping CPU X busy with a network-intensive task spending 75%
of a single CPU in irq processing (hard+soft) on a 4-way system, and starting
seven cycle soakers on the system. Without this change, there would be two tasks
on each CPU. With this change, there is a single task on the irq-busy CPU X and
the remaining 7 tasks are spread among the other 3 CPUs.
Signed-off-by: Venkatesh Pallipadi <venki@google.com>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Andi Kleen <ak@linux.intel.com>
LKML-Reference: <1286237003-12406-8-git-send-email-venki@google.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Mike Galbraith <efault@gmx.de>
Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Diffstat (limited to 'kernel/sched_fair.c')
-rw-r--r-- | kernel/sched_fair.c | 7
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
index 32112033b7b..4a9793e0967 100644
--- a/kernel/sched_fair.c
+++ b/kernel/sched_fair.c
@@ -2276,8 +2276,13 @@ unsigned long scale_rt_power(int cpu)
 	u64 total, available;
 
 	total = sched_avg_period() + (rq->clock - rq->age_stamp);
-	available = total - rq->rt_avg;
+
+	if (unlikely(total < rq->rt_avg)) {
+		/* Ensures that power won't end up being negative */
+		available = 0;
+	} else {
+		available = total - rq->rt_avg;
+	}
 
 	if (unlikely((s64)total < SCHED_LOAD_SCALE))
 		total = SCHED_LOAD_SCALE;