<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/lib, branch v2.6.16.41</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/lib?h=v2.6.16.41</id>
<link rel='self' href='https://git.amat.us/linux/atom/lib?h=v2.6.16.41'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2006-11-19T23:11:42Z</updated>
<entry>
<title>disable debugging version of write_lock()</title>
<updated>2006-11-19T23:11:42Z</updated>
<author>
<name>Andrew Morton</name>
<email>akpm@osdl.org</email>
</author>
<published>2006-11-19T23:11:42Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=de6c0ccfa9ab24a9104c8791030e8ebecb1e2c5a'/>
<id>urn:sha1:de6c0ccfa9ab24a9104c8791030e8ebecb1e2c5a</id>
<content type='text'>
We've confirmed that the debug version of write_lock() can get stuck for long
enough to cause NMI watchdog timeouts and hence a crash.

We don't know why yet.  Disable it for now.

Also disable the similar read_lock() code.  Just in case.

Thanks to Dave Olson &lt;olson@unixfolk.com&gt; for reporting and testing.

Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Adrian Bunk &lt;bunk@stusta.de&gt;
</content>
</entry>
<entry>
<title>Convert idr's internal locking to _irqsave variant</title>
<updated>2006-09-18T17:28:17Z</updated>
<author>
<name>Roland Dreier</name>
<email>rolandd@cisco.com</email>
</author>
<published>2006-09-18T17:28:17Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=94744ac0cd723f01ca21c98ff21fedb79eef3c61'/>
<id>urn:sha1:94744ac0cd723f01ca21c98ff21fedb79eef3c61</id>
<content type='text'>
Currently, the code in lib/idr.c uses a bare spin_lock(&amp;idp-&gt;lock) to do
internal locking.  This is a nasty trap for code that might call idr
functions from different contexts; for example, it seems perfectly
reasonable to call idr_get_new() from process context and idr_remove() from
interrupt context -- but with the current locking this would lead to a
potential deadlock.

The simplest fix for this is to just convert the idr locking to use
spin_lock_irqsave().
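The hazard and the fix can be pictured with a tiny userspace model
(illustrative only: ToySpinlock and its methods are made up for this
example; only spin_lock() and spin_lock_irqsave() are real kernel APIs):

```python
class ToySpinlock:
    def __init__(self):
        self.held = False
        self.irqs_enabled = True

    def lock_plain(self):
        # spin_lock(): takes the lock but leaves interrupts enabled
        if self.held:
            return "deadlock"     # spinning forever on our own lock
        self.held = True
        return "locked"

    def lock_irqsave(self):
        # spin_lock_irqsave(): disables local interrupts, then locks,
        # so no interrupt handler can run here while the lock is held
        self.irqs_enabled = False
        self.held = True
        return "locked"

    def interrupt_arrives(self):
        # an interrupt handler that also wants the same lock
        if not self.irqs_enabled:
            return "deferred"     # runs after the lock is released
        return self.lock_plain()

cpu = ToySpinlock()
cpu.lock_plain()                  # process context takes the bare lock
print(cpu.interrupt_arrives())    # deadlock

cpu = ToySpinlock()
cpu.lock_irqsave()                # process context uses _irqsave
print(cpu.interrupt_arrives())    # deferred
```

The model shows why mixing contexts is only safe with the _irqsave
variant: the interrupt can no longer run on a CPU that owns the lock.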

In particular, this fixes a very complicated locking issue detected by
lockdep, involving the ib_ipoib driver's priv-&gt;lock and dev-&gt;_xmit_lock,
which get involved with the ib_sa module's query_idr.lock.

Signed-off-by: Roland Dreier &lt;rolandd@cisco.com&gt;
Signed-off-by: Adrian Bunk &lt;bunk@stusta.de&gt;
</content>
</entry>
<entry>
<title>[TEXTSEARCH]: Fix Boyer Moore initialization bug</title>
<updated>2006-09-18T17:26:29Z</updated>
<author>
<name>Michael Rash</name>
<email>mbr@cipherdyne.org</email>
</author>
<published>2006-09-18T17:26:29Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ec2ffcb891b9a2d00deb38177e1267516ff2be15'/>
<id>urn:sha1:ec2ffcb891b9a2d00deb38177e1267516ff2be15</id>
<content type='text'>
The pattern is set after trying to compute the prefix table, which tries
to use it. Initialize it before calling compute_prefix_tbl, make
compute_prefix_tbl consistently use only the data from struct ts_bm
and remove the now unnecessary arguments.
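The ordering rule can be shown with a generic, self-contained
Boyer-Moore sketch (the class and field names here are illustrative,
not the kernel's struct ts_bm): the pattern field is filled in first,
and only then is the shift table derived from it.

```python
class BoyerMoore:
    def __init__(self, pattern):
        self.pattern = pattern                   # set the pattern FIRST
        self.bad_char = self._compute_bad_char() # then derive the table

    def _compute_bad_char(self):
        # last-occurrence (bad character) table, read from self.pattern;
        # this is why the pattern must already be initialized here
        table = {}
        for i, ch in enumerate(self.pattern):
            table[ch] = i
        return table

    def search(self, text):
        m = len(self.pattern)
        s = 0
        while s in range(len(text) - m + 1):
            mismatch = -1
            for j in reversed(range(m)):         # compare right to left
                if text[s + j] != self.pattern[j]:
                    mismatch = j
                    break
            if mismatch == -1:
                return s                         # full match at offset s
            last = self.bad_char.get(text[s + mismatch], -1)
            s += max(1, mismatch - last)         # bad-character shift
        return -1

print(BoyerMoore("moore").search("boyer moore search"))  # 6
```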

Signed-off-by: Michael Rash &lt;mbr@cipherdyne.org&gt;
Signed-off-by: Patrick McHardy &lt;kaber@trash.net&gt;
Acked-by: David Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Adrian Bunk &lt;bunk@stusta.de&gt;
</content>
</entry>
<entry>
<title>idr: fix race in idr code</title>
<updated>2006-09-06T14:23:48Z</updated>
<author>
<name>Sonny Rao</name>
<email>sonny@burdell.org</email>
</author>
<published>2006-09-06T14:23:48Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=eebf6e7fd7915da8ad18380107243a1faa7a8c20'/>
<id>urn:sha1:eebf6e7fd7915da8ad18380107243a1faa7a8c20</id>
<content type='text'>
I ran into a bug where the kernel died in the idr code:

cpu 0x1d: Vector: 300 (Data Access) at [c000000b7096f710]
    pc: c0000000001f8984: .idr_get_new_above_int+0x140/0x330
    lr: c0000000001f89b4: .idr_get_new_above_int+0x170/0x330
    sp: c000000b7096f990
   msr: 800000000000b032
   dar: 0
 dsisr: 40010000
  current = 0xc000000b70d43830
  paca    = 0xc000000000556900
    pid   = 2022, comm = hwup
1d:mon&gt; t
[c000000b7096f990] c0000000000d2ad8 .expand_files+0x2e8/0x364 (unreliable)
[c000000b7096faa0] c0000000001f8bf8 .idr_get_new_above+0x18/0x68
[c000000b7096fb20] c00000000002a054 .init_new_context+0x5c/0xf0
[c000000b7096fbc0] c000000000049dc8 .copy_process+0x91c/0x1404
[c000000b7096fcd0] c00000000004a988 .do_fork+0xd8/0x224
[c000000b7096fdc0] c00000000000ebdc .sys_clone+0x5c/0x74
[c000000b7096fe30] c000000000008950 .ppc_clone+0x8/0xc
-- Exception: c00 (System Call) at 000000000fde887c
SP (f8b4e7a0) is in userspace

Turned out to be a race-condition and NULL ptr deref, here's my fix:

Users of the idr code are supposed to call idr_pre_get without locking, so the
idr code must serialize itself with respect to layer allocations.  However, it
fails to do so in an error path in idr_get_new_above_int().  I added the
missing locking to fix this.

Signed-off-by: Sonny Rao &lt;sonny@burdell.org&gt;
Signed-off-by: Adrian Bunk &lt;bunk@stusta.de&gt;
</content>
</entry>
<entry>
<title>Revert mount/umount uevent removal</title>
<updated>2006-02-22T17:39:02Z</updated>
<author>
<name>Greg Kroah-Hartman</name>
<email>gregkh@suse.de</email>
</author>
<published>2006-02-22T17:39:02Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=fa675765afed59bb89adba3369094ebd428b930b'/>
<id>urn:sha1:fa675765afed59bb89adba3369094ebd428b930b</id>
<content type='text'>
This change reverts the 033b96fd30db52a710d97b06f87d16fc59fee0f1 commit
from Kay Sievers that removed the mount/umount uevents from the kernel.
Some older versions of HAL still depend on these events to detect when a
new device has been mounted.  These events are not correctly emitted and
are broken by design, so they should not be relied upon by any future
program.  Instead, the /proc/mounts file should be polled to properly
detect this kind of event.

A feature-removal-schedule.txt entry has been added, noting when this
interface will be removed from the kernel.

Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;
</content>
</entry>
<entry>
<title>[PATCH] iomap_copy fallout (m68k)</title>
<updated>2006-02-18T21:30:40Z</updated>
<author>
<name>Al Viro</name>
<email>viro@zeniv.linux.org.uk</email>
</author>
<published>2006-02-03T07:06:42Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ad6b97fc929e5844bfd1d708ab1d74d131d7960d'/>
<id>urn:sha1:ad6b97fc929e5844bfd1d708ab1d74d131d7960d</id>
<content type='text'>
added __raw_writel(), sanitized include order in iomap_copy.c

Signed-off-by: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
</content>
</entry>
<entry>
<title>[PATCH] Fix over-zealous tag clearing in radix_tree_delete</title>
<updated>2006-02-16T16:45:50Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2006-02-16T03:43:01Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=90f9dd8f72773152b69042debd6b9ed6d224703a'/>
<id>urn:sha1:90f9dd8f72773152b69042debd6b9ed6d224703a</id>
<content type='text'>
If a tag is set for a node being deleted from a radix_tree, then that
tag gets cleared from the parent of the node, even if it is set for some
siblings of the node being deleted.

This patch changes the logic to include a test for any_tag_set, similar
to the logic a little further down.  Care is taken to ensure that
'nr_cleared_tags' remains equal to the number of entries in the 'tags'
array which are set to '0' (which means that this tag is not set in the
tree below pathp-&gt;node, and should be cleared at pathp-&gt;node and
possibly above).
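The corrected rule - clear the tag one level up only when no sibling
slot still carries it - can be sketched in miniature (illustrative
Python, not the kernel code; any_tag_set is the only name borrowed
from the radix tree code):

```python
def any_tag_set(tags):
    # counterpart of the kernel's any_tag_set(): is the tag still
    # present on any slot of this node?
    return any(tags)

def delete_slot(parent_tags, slot_tags, idx):
    slot_tags[idx] = 0          # the slot being deleted drops its tag
    if not any_tag_set(slot_tags):
        parent_tags[0] = 0      # safe: no sibling still has the tag
    return parent_tags, slot_tags

# a sibling (slot 2) is still tagged: the parent tag must survive
parent, slots = delete_slot([1], [0, 1, 1, 0], 1)
print(parent)   # [1]

# the last tagged slot is removed: now the parent tag is cleared too
parent, slots = delete_slot([1], [0, 0, 1, 0], 2)
print(parent)   # [0]
```

The bug was the first case: the parent tag was cleared even though a
sibling below it was still tagged.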

[ Nick says: "Linus FYI, I was able to modify the radix tree test
  harness to catch the bug and can no longer trigger it after the fix.
  Resulting code passes all other harness tests as well of course." ]

Signed-off-by: Neil Brown &lt;neilb@suse.de&gt;
Acked-by: Nick Piggin &lt;npiggin@suse.de&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
<entry>
<title>Merge master.kernel.org:/pub/scm/linux/kernel/git/gregkh/driver-2.6</title>
<updated>2006-02-08T00:29:55Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@g5.osdl.org</email>
</author>
<published>2006-02-08T00:29:55Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=92118c739df879497b8cc5a2eb3a9dc255f01b20'/>
<id>urn:sha1:92118c739df879497b8cc5a2eb3a9dc255f01b20</id>
<content type='text'>
</content>
</entry>
<entry>
<title>[PATCH] Fix spinlock debugging delays to not time out too early</title>
<updated>2006-02-08T00:12:33Z</updated>
<author>
<name>Ingo Molnar</name>
<email>mingo@elte.hu</email>
</author>
<published>2006-02-07T20:58:54Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=e0a602963485a2f109ae1521c0c55507304c63ed'/>
<id>urn:sha1:e0a602963485a2f109ae1521c0c55507304c63ed</id>
<content type='text'>
The spinlock-debug wait-loop was using loops_per_jiffy to detect overly
long spinlock waits - but on fast CPUs this led to a far too short
timeout and false warning messages.

The fix is to include a __delay(1) call in the loop, to correctly approximate
the intended delay timeout of 1 second.  The code assumes that every
architecture implements __delay(1) to last around 1/(loops_per_jiffy*HZ)
seconds.
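The shape of the fixed loop can be modeled in userspace (illustrative
Python; loops_per_jiffy, hz and the delay callback are stand-ins for
the kernel symbols, not the real implementations):

```python
def debug_spin_wait(try_lock, loops_per_jiffy, hz, delay):
    # budget of loops_per_jiffy * HZ iterations; with one __delay(1)
    # call per iteration, the loop approximates a one-second timeout
    # instead of finishing in microseconds on a fast CPU
    for _ in range(loops_per_jiffy * hz):
        if try_lock():
            return "acquired"
        delay(1)                 # the added __delay(1) call
    return "timeout"             # reached only after roughly a second

calls = []
result = debug_spin_wait(lambda: False, loops_per_jiffy=4, hz=250,
                         delay=calls.append)
print(result, len(calls))   # timeout 1000
```

Without the delay call, the same budget of iterations completes almost
instantly on fast hardware, which is exactly the false-timeout the
patch fixes.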

Signed-off-by: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: Andi Kleen &lt;ak@muc.de&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] Fix uevent buffer overflow in input layer</title>
<updated>2006-02-06T20:17:18Z</updated>
<author>
<name>Benjamin Herrenschmidt</name>
<email>benh@kernel.crashing.org</email>
</author>
<published>2006-01-24T23:21:32Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=d87499ed1a3ba0f6dbcff8d91c96ef132c115d08'/>
<id>urn:sha1:d87499ed1a3ba0f6dbcff8d91c96ef132c115d08</id>
<content type='text'>
The buffer used for kobject uevent is too small for some of the events generated
by the input layer. Bump it to 2k.

Signed-off-by: Benjamin Herrenschmidt &lt;benh@kernel.crashing.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;
</content>
</entry>
</feed>
