<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel/sched_features.h, branch v2.6.27.59</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/kernel/sched_features.h?h=v2.6.27.59</id>
<link rel='self' href='https://git.amat.us/linux/atom/kernel/sched_features.h?h=v2.6.27.59'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2010-04-01T22:52:21Z</updated>
<entry>
<title>sched: wakeup preempt when small overlap</title>
<updated>2010-04-01T22:52:21Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-09-20T21:38:02Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ecfb7fb9b13c617447a7f6b5925da26798c1a8a1'/>
<id>urn:sha1:ecfb7fb9b13c617447a7f6b5925da26798c1a8a1</id>
<content type='text'>
commit 15afe09bf496ae10c989e1a375a6b5da7bd3e16e upstream.

Lin Ming reported a 10% OLTP regression against 2.6.27-rc4.

The difference seems to come from different preemption aggressiveness,
which affects the cache footprint of the workload and its effective
cache thrashing.

Aggressively preempt a task if its avg overlap is very small; this should
avoid the task going to sleep and then being found still running when we
schedule back to it - saving a wakeup.

Reported-by: Lin Ming &lt;ming.m.lin@intel.com&gt;
Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>sched: disable the hrtick for now</title>
<updated>2008-11-07T03:05:50Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@elte.hu</email>
</author>
<published>2008-10-26T22:21:40Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=612f39d5e7baeb0518cfe50d53e37e14c0ca1475'/>
<id>urn:sha1:612f39d5e7baeb0518cfe50d53e37e14c0ca1475</id>
<content type='text'>
commit 0c4b83da58ec2e96ce9c44c211d6eac5f9dae478 upstream.

sched: disable the hrtick for now

David Miller reported that hrtick update overhead has tripled the
wakeup overhead on Sparc64.

That is too much - disable the HRTICK feature for now by default,
until a faster implementation is found.

Reported-by: David Miller &lt;davem@davemloft.net&gt;
Acked-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: Chuck Ebbert &lt;cebbert@redhat.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>sched: enable LB_BIAS by default</title>
<updated>2008-08-21T06:18:02Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-08-20T10:44:55Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=efc2dead2c82cae31943828f6d977c483942b0eb'/>
<id>urn:sha1:efc2dead2c82cae31943828f6d977c483942b0eb</id>
<content type='text'>
Yanmin reported a significant regression on his 16-core machine due to:

  commit 93b75217df39e6d75889cc6f8050343286aff4a5
  Author: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
  Date:   Fri Jun 27 13:41:33 2008 +0200

      sched: disable source/target_load bias

Flip back to the old behaviour.

Reported-by: "Zhang, Yanmin" &lt;yanmin_zhang@linux.intel.com&gt;
Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: bias effective_load() error towards failing wake_affine().</title>
<updated>2008-06-27T12:31:47Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-06-27T11:41:39Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=f5bfb7d9ff73d72ee4f2f4830a6f0c9088d00f92'/>
<id>urn:sha1:f5bfb7d9ff73d72ee4f2f4830a6f0c9088d00f92</id>
<content type='text'>
Measurement shows that the difference between cgroup:/ and cgroup:/foo
wake_affine() results is that the latter succeeds significantly more.

Therefore bias the calculations towards failing the test.

Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Srivatsa Vaddagiri &lt;vatsa@linux.vnet.ibm.com&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: update shares on wakeup</title>
<updated>2008-06-27T12:31:45Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-06-27T11:41:35Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=2398f2c6d34b43025f274fc42eaca34d23ec2320'/>
<id>urn:sha1:2398f2c6d34b43025f274fc42eaca34d23ec2320</id>
<content type='text'>
We found that the affine wakeup code needs rather accurate load figures
to be effective. The trouble is that updating the load figures is fairly
expensive with group scheduling. Therefore ratelimit the updating.
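The ratelimiting pattern itself is simple; a standalone sketch (the class
and interval are illustrative, not the kernel's implementation):

```python
import time

# Illustrative ratelimit pattern: recompute an expensive load figure
# at most once per interval, reusing the cached value in between.

UPDATE_INTERVAL = 0.25  # seconds; hypothetical period

class SharesCache:
    def __init__(self, compute):
        self.compute = compute            # the expensive recomputation
        self.last_update = float("-inf")  # force the first update
        self.value = None

    def get(self, now=None):
        if now is None:
            now = time.monotonic()
        # reuse the cached figure if it was refreshed recently
        if UPDATE_INTERVAL > now - self.last_update:
            return self.value
        self.value = self.compute()
        self.last_update = now
        return self.value
```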

Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Srivatsa Vaddagiri &lt;vatsa@linux.vnet.ibm.com&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: disable source/target_load bias</title>
<updated>2008-06-27T12:31:44Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-06-27T11:41:33Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=93b75217df39e6d75889cc6f8050343286aff4a5'/>
<id>urn:sha1:93b75217df39e6d75889cc6f8050343286aff4a5</id>
<content type='text'>
The bias given by the source/target_load functions can be very large;
disable it by default to get faster convergence.

Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Srivatsa Vaddagiri &lt;vatsa@linux.vnet.ibm.com&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: fix calc_delta_asym()</title>
<updated>2008-06-27T12:31:28Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-06-27T11:41:12Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=c9c294a630e28eec5f2865f028ecfc58d45c0a5a'/>
<id>urn:sha1:c9c294a630e28eec5f2865f028ecfc58d45c0a5a</id>
<content type='text'>
calc_delta_asym() is supposed to do the same as calc_delta_fair() except
linearly shrink the result for negative nice processes - this causes them
to have a smaller preemption threshold so that they are more easily preempted.

The problem is that for task groups se-&gt;load.weight is the per cpu share of
the actual task group weight; take that into account.

Also provide a debug switch to disable the asymmetry (which I still don't
like - but it does greatly benefit some workloads).

This would explain the interactivity issues reported against group scheduling.

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Cc: Srivatsa Vaddagiri &lt;vatsa@linux.vnet.ibm.com&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: revert the revert of: weight calculations</title>
<updated>2008-06-27T12:31:27Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-06-27T11:41:11Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=a7be37ac8e1565e00880531f4e2aff421a21c803'/>
<id>urn:sha1:a7be37ac8e1565e00880531f4e2aff421a21c803</id>
<content type='text'>
Try again...

initial commit: 8f1bc385cfbab474db6c27b5af1e439614f3025c
revert: f9305d4a0968201b2818dbed0dc8cb0d4ee7aeb3

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Cc: Srivatsa Vaddagiri &lt;vatsa@linux.vnet.ibm.com&gt;
Cc: Mike Galbraith &lt;efault@gmx.de&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: trivial sched_features cleanup</title>
<updated>2008-06-10T10:38:17Z</updated>
<author>
<name>Mike Galbraith</name>
<email>efault@gmx.de</email>
</author>
<published>2008-06-08T07:27:13Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=6492c7f83e88a3a9521793b6934d882b97afe287'/>
<id>urn:sha1:6492c7f83e88a3a9521793b6934d882b97afe287</id>
<content type='text'>
Remove unused debug/tuning features.

Signed-off-by: Mike Galbraith &lt;efault@gmx.de&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
<entry>
<title>sched: /debug/sched_features</title>
<updated>2008-04-19T17:45:00Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>a.p.zijlstra@chello.nl</email>
</author>
<published>2008-04-19T17:45:00Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=f00b45c145981b43c7e7f66315ac77534c938cbf'/>
<id>urn:sha1:f00b45c145981b43c7e7f66315ac77534c938cbf</id>
<content type='text'>
Provide a text-based interface to the scheduler features; this saves the
'user' from setting bits using decimal arithmetic.
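The text protocol can be modelled in a few lines. This is a sketch under
assumptions: the feature names are examples from this era of
sched_features.h, and the kernel tracks a bitmask rather than a set.

```python
# Sketch of the text protocol only, not the kernel implementation.
# Writing a token enables a feature; a "NO_" prefix disables it.
# Reading prints every feature, with disabled ones prefixed "NO_".

FEATURES = {"NEW_FAIR_SLEEPERS", "WAKEUP_PREEMPT", "HRTICK", "LB_BIAS"}

def write_features(text, enabled):
    """Apply space-separated tokens: NAME enables, NO_NAME disables."""
    enabled = set(enabled)
    for tok in text.split():
        if tok.startswith("NO_") and tok[3:] in FEATURES:
            enabled.discard(tok[3:])
        elif tok in FEATURES:
            enabled.add(tok)
    return enabled

def show_features(enabled):
    """Render the readable view: disabled features carry a NO_ prefix."""
    return " ".join(f if f in enabled else "NO_" + f for f in sorted(FEATURES))
```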

Signed-off-by: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
</content>
</entry>
</feed>
