<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/fs, branch v2.6.20-rc4</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/fs?h=v2.6.20-rc4</id>
<link rel='self' href='https://git.amat.us/linux/atom/fs?h=v2.6.20-rc4'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2007-01-06T21:28:21Z</updated>
<entry>
<title>Revert "[PATCH] binfmt_elf: randomize PIE binaries (2nd try)"</title>
<updated>2007-01-06T21:28:21Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@woody.osdl.org</email>
</author>
<published>2007-01-06T21:28:21Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=90cb28e8f76e57751ffe14abd09c2d53a6aea7c8'/>
<id>urn:sha1:90cb28e8f76e57751ffe14abd09c2d53a6aea7c8</id>
<content type='text'>
This reverts commit 59287c0913cc9a6c75712a775f6c1c1ef418ef3b.

Hugh Dickins reports that it causes random failures on x86 with SuSE
10.2, and points out

  "Isn't that randomization, anywhere from 0x10000 to ELF_ET_DYN_BASE,
   sure to place the ET_DYN from time to time just where the comment
   says it's trying to avoid? I assume that somehow results in the error
   reported."

(where the comment in question is the existing comment in the source
code about mmap/brk clashes).

Suggested-by: Hugh Dickins &lt;hugh@veritas.com&gt;
Acked-by: Marcus Meissner &lt;meissner@suse.de&gt;
Cc: Andrew Morton &lt;akpm@osdl.org&gt;
Cc: Andi Kleen &lt;ak@suse.de&gt;
Cc: Ingo Molnar &lt;mingo@elte.hu&gt;
Cc: Dave Jones &lt;davej@codemonkey.org.uk&gt;
Cc: Arjan van de Ven &lt;arjan@linux.intel.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] fix garbage instead of zeroes in UFS</title>
<updated>2007-01-06T07:55:29Z</updated>
<author>
<name>Evgeniy Dushistov</name>
<email>dushistov@mail.ru</email>
</author>
<published>2007-01-06T00:37:04Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=d63b70902befe189ba2672925f28ec3f4db41352'/>
<id>urn:sha1:d63b70902befe189ba2672925f28ec3f4db41352</id>
<content type='text'>
This looks like the problem that Al Viro pointed out some time ago:

ufs's get_block callback allocates 16k of disk at a time, and links that
entire 16k into the file's metadata.  But because get_block is called for only
a single buffer_head (a 2k buffer_head in this case?) we are only able to tell
the VFS that this 2k is buffer_new().

So when ufs_getfrag_block() is later called to map some more data in the file,
and when that data resides within the remaining 14k of this fragment,
ufs_getfrag_block() will incorrectly return a !buffer_new() buffer_head.

I don't see a _right_ way to do nullification of the whole block: if we
use the inode page cache, some pages may lie outside the inode limits
(the inode size) and will be lost; if we use the blockdev page cache, it
is possible to zero real data if the inode page cache is used later.

The simplest way, as far as I can see, is to use the block device page
cache, and not only mark it dirty but also sync it during the
"nullification".  I used my simple test collection, which I use to check
that create, open, write, read and close work on ufs, and I see that
this patch makes the ufs code 18% slower than before.

Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] fix memory corruption from misinterpreted bad_inode_ops return values</title>
<updated>2007-01-06T07:55:23Z</updated>
<author>
<name>Eric Sandeen</name>
<email>sandeen@redhat.com</email>
</author>
<published>2007-01-06T00:36:36Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=be6aab0e9fa6d3c6d75aa1e38ac972d8b4ee82b8'/>
<id>urn:sha1:be6aab0e9fa6d3c6d75aa1e38ac972d8b4ee82b8</id>
<content type='text'>
CVE-2006-5753 is for a case where an inode can be marked bad, switching
the ops to bad_inode_ops, which are all connected as:

static int return_EIO(void)
{
        return -EIO;
}

#define EIO_ERROR ((void *) (return_EIO))

static struct inode_operations bad_inode_ops =
{
        .create         = bad_inode_create
...etc...

The problem here is that the void cast causes return types to not be
promoted, and for ops such as listxattr which expect more than 32 bits of
return value, the 32-bit -EIO is interpreted as a large positive 64-bit
number, i.e. 0x00000000fffffffb instead of 0xfffffffffffffffb.

This goes particularly badly when the return value is taken as a number of
bytes to copy into, say, a user's buffer for example...

I originally had coded up the fix by creating a return_EIO_&lt;TYPE&gt; macro
for each return type, like this:

static int return_EIO_int(void)
{
	return -EIO;
}
#define EIO_ERROR_INT ((void *) (return_EIO_int))

static struct inode_operations bad_inode_ops =
{
	.create		= EIO_ERROR_INT,
...etc...

but Al felt that it was probably better to create an EIO-returner for each
actual op signature.  Since so few ops share a signature, I just went ahead
&amp; created an EIO function for each individual file &amp; inode op that returns
a value.

Signed-off-by: Eric Sandeen &lt;sandeen@redhat.com&gt;
Cc: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] adfs: fix filename handling</title>
<updated>2007-01-06T07:55:22Z</updated>
<author>
<name>James Bursa</name>
<email>james@zamez.org</email>
</author>
<published>2007-01-06T00:36:28Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=3223ea8cca5936b8e78450dd5b8ba88372e9c0a8'/>
<id>urn:sha1:3223ea8cca5936b8e78450dd5b8ba88372e9c0a8</id>
<content type='text'>
Fix filenames on adfs discs being terminated at the first character greater
than 128 (adfs filenames are Latin 1).  I saw this problem when using a
loopback adfs image on a 2.6.17-rc5 x86_64 machine, and the patch fixed it
there.

Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mfasheh/ocfs2</title>
<updated>2006-12-30T20:02:53Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@woody.osdl.org</email>
</author>
<published>2006-12-30T20:02:53Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=bfff6e92a33dce6121a3d83ef3809e9063b2734e'/>
<id>urn:sha1:bfff6e92a33dce6121a3d83ef3809e9063b2734e</id>
<content type='text'>
* 'upstream-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/mfasheh/ocfs2:
  ocfs2: export heartbeat thread pid via configfs
  ocfs2: always unmap in ocfs2_data_convert_worker()
  ocfs2: ignore NULL vfsmnt in ocfs2_should_update_atime()
  ocfs2: Allow direct I/O read past end of file
  ocfs2: don't print error in ocfs2_permission()
</content>
</entry>
<entry>
<title>[PATCH] ramfs breaks without CONFIG_BLOCK</title>
<updated>2006-12-30T18:56:42Z</updated>
<author>
<name>Dimitri Gorokhovik</name>
<email>dimitri.gorokhovik@free.fr</email>
</author>
<published>2006-12-30T00:48:24Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=131612dfe7923bd0ce5f82d6ed8303a7ef96e574'/>
<id>urn:sha1:131612dfe7923bd0ce5f82d6ed8303a7ef96e574</id>
<content type='text'>
ramfs doesn't provide the .set_page_dirty a_op, and when the BLOCK layer is
not configured in, 'set_page_dirty' makes a call via a NULL pointer.

Signed-off-by: Dimitri Gorokhovik &lt;dimitri.gorokhovik@free.fr&gt;
Cc: &lt;stable@kernel.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
<entry>
<title>[PATCH] Fix lock inversion aio_kick_handler()</title>
<updated>2006-12-30T18:55:54Z</updated>
<author>
<name>Zach Brown</name>
<email>zach.brown@oracle.com</email>
</author>
<published>2006-12-30T00:47:02Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=1ebb1101c556b1915ff041655e629a072e64dcda'/>
<id>urn:sha1:1ebb1101c556b1915ff041655e629a072e64dcda</id>
<content type='text'>
lockdep found an AB-BC-CA lock inversion in retry-based AIO:

1) The task struct's alloc_lock (A) is acquired in process context with
   interrupts enabled.  An interrupt might arrive and call wake_up() which
   grabs the wait queue's q-&gt;lock (B).

2) When performing retry-based AIO the AIO core registers
   aio_wake_function() as the wake function for iocb-&gt;ki_wait.  It is called
   with the wait queue's q-&gt;lock (B) held and then tries to add the iocb to
   the run list after acquiring the ctx_lock (C).

3) aio_kick_handler() holds the ctx_lock (C) while acquiring the
   alloc_lock (A) via lock_task() and unuse_mm().  Lockdep emits a warning
   saying that we're trying to connect the irq-safe q-&gt;lock to the
   irq-unsafe alloc_lock via ctx_lock.

This fixes the inversion by calling unuse_mm() in the AIO kick handling path
after we've released the ctx_lock.  As Ben LaHaise pointed out __put_ioctx
could set ctx-&gt;mm to NULL, so we must only access ctx-&gt;mm while we have the
lock.

Signed-off-by: Zach Brown &lt;zach.brown@oracle.com&gt;
Signed-off-by: Suparna Bhattacharya &lt;suparna@in.ibm.com&gt;
Acked-by: Benjamin LaHaise &lt;bcrl@kvack.org&gt;
Cc: "Chen, Kenneth W" &lt;kenneth.w.chen@intel.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@osdl.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@osdl.org&gt;
</content>
</entry>
<entry>
<title>ocfs2: export heartbeat thread pid via configfs</title>
<updated>2006-12-29T00:40:32Z</updated>
<author>
<name>Zhen Wei</name>
<email>zwei@novell.com</email>
</author>
<published>2006-12-08T07:48:17Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=92efc15241ceebc23451691971897020e8563a70'/>
<id>urn:sha1:92efc15241ceebc23451691971897020e8563a70</id>
<content type='text'>
The patch allows the ocfs2 heartbeat thread to prioritize its I/O, which
may help cut down on spurious fencing. Most of this will be in the tools:
we can have a pid configfs attribute and let userspace (ocfs2_hb_ctl)
call the ioprio_set syscall after starting heartbeat, though only the
cfq scheduler supports I/O priorities at the moment.

Signed-off-by: Zhen Wei &lt;zwei@novell.com&gt;
Signed-off-by: Mark Fasheh &lt;mark.fasheh@oracle.com&gt;
</content>
</entry>
<entry>
<title>ocfs2: always unmap in ocfs2_data_convert_worker()</title>
<updated>2006-12-29T00:38:59Z</updated>
<author>
<name>Mark Fasheh</name>
<email>mark.fasheh@oracle.com</email>
</author>
<published>2006-12-11T19:06:36Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=7f4a2a97e324e8c826d1d983bc8efb5c59194f02'/>
<id>urn:sha1:7f4a2a97e324e8c826d1d983bc8efb5c59194f02</id>
<content type='text'>
Mmap-heavy clustered workloads were sometimes finding stale data on mmap
reads. The solution is to call unmap_mapping_range() on any down convert of
a data lock.

Signed-off-by: Mark Fasheh &lt;mark.fasheh@oracle.com&gt;
</content>
</entry>
<entry>
<title>ocfs2: ignore NULL vfsmnt in ocfs2_should_update_atime()</title>
<updated>2006-12-29T00:38:32Z</updated>
<author>
<name>Mark Fasheh</name>
<email>mark.fasheh@oracle.com</email>
</author>
<published>2006-12-19T23:25:52Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=6c2aad0567e693f9588d0a0683f96ed872fb4641'/>
<id>urn:sha1:6c2aad0567e693f9588d0a0683f96ed872fb4641</id>
<content type='text'>
This can come from NFSD.

Signed-off-by: Mark Fasheh &lt;mark.fasheh@oracle.com&gt;
</content>
</entry>
</feed>
