<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/fs, branch v3.15-rc2</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/fs?h=v3.15-rc2</id>
<link rel='self' href='https://git.amat.us/linux/atom/fs?h=v3.15-rc2'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2014-04-19T20:23:31Z</updated>
<entry>
<title>coredump: fix va_list corruption</title>
<updated>2014-04-19T20:23:31Z</updated>
<author>
<name>Eric Dumazet</name>
<email>edumazet@google.com</email>
</author>
<published>2014-04-19T17:15:07Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=404ca80eb5c2727d78cd517d12108b040c522e12'/>
<id>urn:sha1:404ca80eb5c2727d78cd517d12108b040c522e12</id>
<content type='text'>
A va_list needs to be copied in case it needs to be used twice.

Thanks to Hugh for debugging this issue, leading to various panics.
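A minimal user-space sketch of the bug class (hypothetical helper, not the kernel's cn_vprintf): vsnprintf() leaves its va_list in an indeterminate state, so code that formats twice must take a va_copy() before the first use.

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper: calls vsnprintf() twice, once to measure and
 * once to write.  The first call consumes the va_list, so a copy must
 * be taken with va_copy() before the first use -- reusing the
 * original list is exactly the corruption this commit fixes. */
static int format_measured(char *buf, size_t len, const char *fmt, ...)
{
    va_list args, copy;
    int need;

    va_start(args, fmt);
    va_copy(copy, args);                  /* copy before first use */
    need = vsnprintf(NULL, 0, fmt, args); /* pass 1: measure only */
    if (need >= 0)
        vsnprintf(buf, len, fmt, copy);   /* pass 2: pristine copy */
    va_end(copy);
    va_end(args);
    return need;
}
```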

Tested:

  lpq84:~# echo "|/foobar12345 %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h" &gt;/proc/sys/kernel/core_pattern

'produce_core' is simply: int main() { *(int *)0 = 1; }

  lpq84:~# ./produce_core
  Segmentation fault (core dumped)
  lpq84:~# dmesg | tail -1
  [  614.352947] Core dump to |/foobar12345 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 lpq84 (null) pipe failed

Notice that the last argument was replaced by NULL (we were lucky enough
not to crash, but do not try this on your production machine!)

After the fix:

  lpq83:~# echo "|/foobar12345 %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h %h" &gt;/proc/sys/kernel/core_pattern
  lpq83:~# ./produce_core
  Segmentation fault
  lpq83:~# dmesg | tail -1
  [  740.800441] Core dump to |/foobar12345 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 lpq83 pipe failed

Fixes: 5fe9d8ca21cc ("coredump: cn_vprintf() has no reason to call vsnprintf() twice")
Signed-off-by: Eric Dumazet &lt;edumazet@google.com&gt;
Diagnosed-by: Hugh Dickins &lt;hughd@google.com&gt;
Acked-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Cc: Neil Horman &lt;nhorman@tuxdriver.com&gt;
Cc: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Cc: stable@vger.kernel.org # 3.11+
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
</content>
</entry>
<entry>
<title>Merge branch 'for-next' of git://git.samba.org/sfrench/cifs-2.6</title>
<updated>2014-04-19T00:52:39Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2014-04-19T00:52:39Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=6e66d5dab5d530a368314eb631201a02aabb075d'/>
<id>urn:sha1:6e66d5dab5d530a368314eb631201a02aabb075d</id>
<content type='text'>
Pull cifs fixes from Steve French:
 "A set of 5 small cifs fixes"

* 'for-next' of git://git.samba.org/sfrench/cifs-2.6:
  cif: fix dead code
  cifs: fix error handling cifs_user_readv
  fs: cifs: remove unused variable.
  Return correct error on query of xattr on file with empty xattrs
  cifs: Wait for writebacks to complete before attempting write.
</content>
</entry>
<entry>
<title>Merge tag 'driver-core-3.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core</title>
<updated>2014-04-18T23:59:52Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2014-04-18T23:59:52Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=60fbf2bda140f27b0e9ab5b6d17342c9a5f9eacf'/>
<id>urn:sha1:60fbf2bda140f27b0e9ab5b6d17342c9a5f9eacf</id>
<content type='text'>
Pull driver core fixes from Greg KH:
 "Here are some driver core fixes for 3.15-rc2.  Also in here are some
  documentation updates, as well as an API removal that had to wait
  until after -rc1 due to the cleanups coming to you from multiple
  developer trees (this one and the PPC tree).

  All have been in linux-next successfully"

* tag 'driver-core-3.15-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core:
  drivers/base/dd.c incorrect pr_debug() parameters
  Documentation: Update stable address in Chinese and Japanese translations
  topology: Fix compilation warning when not in SMP
  Chinese: add translation of io_ordering.txt
  stable_kernel_rules: spelling/word usage
  sysfs, driver-core: remove unused {sysfs|device}_schedule_callback_owner()
  kernfs: protect lazy kernfs_iattrs allocation with mutex
  fs: Don't return 0 from get_anon_bdev
</content>
</entry>
<entry>
<title>cif: fix dead code</title>
<updated>2014-04-17T04:08:57Z</updated>
<author>
<name>Michael Opdenacker</name>
<email>michael.opdenacker@free-electrons.com</email>
</author>
<published>2014-04-15T08:06:50Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=1f80c0cc39e587edd06a36b43ba3a3b09d4ac428'/>
<id>urn:sha1:1f80c0cc39e587edd06a36b43ba3a3b09d4ac428</id>
<content type='text'>
This issue was found by Coverity (CID 1202536)

This fixes a statement that creates dead code.  The "rc &lt; 0" check
sits inside code that only runs when "rc &gt; 0", so it can never be
true.

It seems that "err &lt; 0" was meant to be used here.  This way, the
error code is returned by the function.
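The pattern can be sketched like this (made-up function names, not the actual cifs code):

```c
#include <assert.h>

/* Buggy shape: inside a branch taken only when rc > 0, testing
 * rc < 0 is dead code, and the real error in err is dropped. */
static int finish_op_buggy(int rc, int err)
{
    (void)err;             /* the error is never consulted */
    if (rc > 0) {
        if (rc < 0)        /* dead: rc is known positive here */
            return rc;
    }
    return rc;
}

/* Fixed: test the variable that can actually carry the error. */
static int finish_op_fixed(int rc, int err)
{
    if (rc > 0) {
        if (err < 0)       /* the intended check */
            return err;
    }
    return rc;
}
```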

Signed-off-by: Michael Opdenacker &lt;michael.opdenacker@free-electrons.com&gt;
Acked-by: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Steve French &lt;smfrench@gmail.com&gt;
</content>
</entry>
<entry>
<title>cifs: fix error handling cifs_user_readv</title>
<updated>2014-04-17T03:54:30Z</updated>
<author>
<name>Jeff Layton</name>
<email>jlayton@redhat.com</email>
</author>
<published>2014-04-15T16:48:49Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=bae9f746a18ee31bbeeb25ae6615805ed6eca173'/>
<id>urn:sha1:bae9f746a18ee31bbeeb25ae6615805ed6eca173</id>
<content type='text'>
Coverity says:

*** CID 1202537:  Dereference after null check  (FORWARD_NULL)
/fs/cifs/file.c: 2873 in cifs_user_readv()
2867     		cur_len = min_t(const size_t, len - total_read, cifs_sb-&gt;rsize);
2868     		npages = DIV_ROUND_UP(cur_len, PAGE_SIZE);
2869
2870     		/* allocate a readdata struct */
2871     		rdata = cifs_readdata_alloc(npages,
2872     					    cifs_uncached_readv_complete);
&gt;&gt;&gt;     CID 1202537:  Dereference after null check  (FORWARD_NULL)
&gt;&gt;&gt;     Comparing "rdata" to null implies that "rdata" might be null.
2873     		if (!rdata) {
2874     			rc = -ENOMEM;
2875     			goto error;
2876     		}
2877
2878     		rc = cifs_read_allocate_pages(rdata, npages);

...when we "goto error", rc will be non-zero, and then we end up trying
to do a kref_put on the rdata (which is NULL). Fix this by replacing
the "goto error" with a "break".
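The control flow can be sketched in isolation (hypothetical names, not the cifs loop itself): when the allocation fails there is no object to drop a reference on, so the loop must break out with the error rather than jump to cleanup code that touches the NULL pointer.

```c
#include <assert.h>
#include <stdlib.h>

struct rdata { int dummy; };

/* Allocate per-iteration state; fail_at simulates an allocation
 * failure on that iteration. */
static int read_loop(int niter, int fail_at)
{
    struct rdata *rdata;
    int rc = 0;

    for (int i = 0; i < niter; i++) {
        rdata = (i == fail_at) ? NULL : malloc(sizeof(*rdata));
        if (!rdata) {
            rc = -12;  /* -ENOMEM */
            break;     /* was "goto error", whose cleanup used rdata */
        }
        free(rdata);   /* real code: drop the reference when done */
    }
    return rc;
}
```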

Reported-by: &lt;scan-admin@coverity.com&gt;
Signed-off-by: Jeff Layton &lt;jlayton@redhat.com&gt;
Signed-off-by: Steve French &lt;smfrench@gmail.com&gt;
</content>
</entry>
<entry>
<title>xfs: fix tmpfile/selinux deadlock and initialize security</title>
<updated>2014-04-16T22:15:30Z</updated>
<author>
<name>Brian Foster</name>
<email>bfoster@redhat.com</email>
</author>
<published>2014-04-16T22:15:30Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=330033d697ed8d296fa52b5303db9d802ad901cc'/>
<id>urn:sha1:330033d697ed8d296fa52b5303db9d802ad901cc</id>
<content type='text'>
xfstests generic/004 reproduces an ilock deadlock using the tmpfile
interface when selinux is enabled. This occurs because
xfs_create_tmpfile() takes the ilock and then calls d_tmpfile(). The
latter eventually calls into xfs_xattr_get() which attempts to get the
lock again. E.g.:

xfs_io          D ffffffff81c134c0  4096  3561   3560 0x00000080
ffff8801176a1a68 0000000000000046 ffff8800b401b540 ffff8801176a1fd8
00000000001d5800 00000000001d5800 ffff8800b401b540 ffff8800b401b540
ffff8800b73a6bd0 fffffffeffffffff ffff8800b73a6bd8 ffff8800b5ddb480
Call Trace:
[&lt;ffffffff8177f969&gt;] schedule+0x29/0x70
[&lt;ffffffff81783a65&gt;] rwsem_down_read_failed+0xc5/0x120
[&lt;ffffffffa05aa97f&gt;] ? xfs_ilock_attr_map_shared+0x1f/0x50 [xfs]
[&lt;ffffffff813b3434&gt;] call_rwsem_down_read_failed+0x14/0x30
[&lt;ffffffff810ed179&gt;] ? down_read_nested+0x89/0xa0
[&lt;ffffffffa05aa7f2&gt;] ? xfs_ilock+0x122/0x250 [xfs]
[&lt;ffffffffa05aa7f2&gt;] xfs_ilock+0x122/0x250 [xfs]
[&lt;ffffffffa05aa97f&gt;] xfs_ilock_attr_map_shared+0x1f/0x50 [xfs]
[&lt;ffffffffa05701d0&gt;] xfs_attr_get+0x90/0xe0 [xfs]
[&lt;ffffffffa0565e07&gt;] xfs_xattr_get+0x37/0x50 [xfs]
[&lt;ffffffff8124842f&gt;] generic_getxattr+0x4f/0x70
[&lt;ffffffff8133fd9e&gt;] inode_doinit_with_dentry+0x1ae/0x650
[&lt;ffffffff81340e0c&gt;] selinux_d_instantiate+0x1c/0x20
[&lt;ffffffff813351bb&gt;] security_d_instantiate+0x1b/0x30
[&lt;ffffffff81237db0&gt;] d_instantiate+0x50/0x70
[&lt;ffffffff81237e85&gt;] d_tmpfile+0xb5/0xc0
[&lt;ffffffffa05add02&gt;] xfs_create_tmpfile+0x362/0x410 [xfs]
[&lt;ffffffffa0559ac8&gt;] xfs_vn_tmpfile+0x18/0x20 [xfs]
[&lt;ffffffff81230388&gt;] path_openat+0x228/0x6a0
[&lt;ffffffff810230f9&gt;] ? sched_clock+0x9/0x10
[&lt;ffffffff8105a427&gt;] ? kvm_clock_read+0x27/0x40
[&lt;ffffffff8124054f&gt;] ? __alloc_fd+0xaf/0x1f0
[&lt;ffffffff8123101a&gt;] do_filp_open+0x3a/0x90
[&lt;ffffffff817845e7&gt;] ? _raw_spin_unlock+0x27/0x40
[&lt;ffffffff8124054f&gt;] ? __alloc_fd+0xaf/0x1f0
[&lt;ffffffff8121e3ce&gt;] do_sys_open+0x12e/0x210
[&lt;ffffffff8121e4ce&gt;] SyS_open+0x1e/0x20
[&lt;ffffffff8178eda9&gt;] system_call_fastpath+0x16/0x1b

xfs_vn_tmpfile() also fails to initialize security on the newly created
inode.

Pull the d_tmpfile() call up into xfs_vn_tmpfile() after the transaction
has been committed and the inode unlocked. Also, initialize security on
the inode based on the parent directory provided via the tmpfile call.
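The deadlock shape can be sketched with a toy non-recursive lock (all names made up, not the xfs code): calling a callback that re-takes the lock while it is held deadlocks, so the fix unlocks first and runs the callback afterwards.

```c
#include <assert.h>

struct ilock { int held; };

/* A non-recursive lock: re-taking it while held would deadlock on
 * the real rwsem; here we return -1 instead of hanging. */
static int lock_take(struct ilock *l)
{
    if (l->held)
        return -1;
    l->held = 1;
    return 0;
}

static void lock_drop(struct ilock *l) { l->held = 0; }

/* The getxattr path reached from d_tmpfile(): needs the lock. */
static int xattr_get(struct ilock *l)
{
    if (lock_take(l))
        return -1;
    lock_drop(l);
    return 0;
}

/* Buggy ordering: callback runs with the lock still held. */
static int tmpfile_buggy(struct ilock *l)
{
    lock_take(l);
    int rc = xattr_get(l);   /* re-takes the held lock */
    lock_drop(l);
    return rc;
}

/* Fixed ordering: commit and unlock, then run the callback. */
static int tmpfile_fixed(struct ilock *l)
{
    lock_take(l);
    lock_drop(l);            /* unlock before the callback */
    return xattr_get(l);
}
```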

Signed-off-by: Brian Foster &lt;bfoster@redhat.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Dave Chinner &lt;david@fromorbit.com&gt;

</content>
</entry>
<entry>
<title>xfs: fix buffer use after free on IO error</title>
<updated>2014-04-16T22:15:28Z</updated>
<author>
<name>Eric Sandeen</name>
<email>sandeen@redhat.com</email>
</author>
<published>2014-04-16T22:15:28Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=8d6c121018bf60d631c05a4a2efc468a392b97bb'/>
<id>urn:sha1:8d6c121018bf60d631c05a4a2efc468a392b97bb</id>
<content type='text'>
When testing exhaustion of dm snapshots, the following appeared
with CONFIG_DEBUG_OBJECTS_FREE enabled:

ODEBUG: free active (active state 0) object type: work_struct hint: xfs_buf_iodone_work+0x0/0x1d0 [xfs]

indicating that we'd freed a buffer which still had a pending reference,
down this path:

[  190.867975]  [&lt;ffffffff8133e6fb&gt;] debug_check_no_obj_freed+0x22b/0x270
[  190.880820]  [&lt;ffffffff811da1d0&gt;] kmem_cache_free+0xd0/0x370
[  190.892615]  [&lt;ffffffffa02c5924&gt;] xfs_buf_free+0xe4/0x210 [xfs]
[  190.905629]  [&lt;ffffffffa02c6167&gt;] xfs_buf_rele+0xe7/0x270 [xfs]
[  190.911770]  [&lt;ffffffffa034c826&gt;] xfs_trans_read_buf_map+0x7b6/0xac0 [xfs]

At issue is the fact that if IO fails in xfs_buf_iorequest,
we'll queue completion unconditionally, and then call
xfs_buf_rele; but if IO failed, there are no IOs remaining,
and xfs_buf_rele will free the bp while work is still queued.

Fix this by not scheduling completion if the buffer has
an error on it; run it immediately.  The rest is only comment
changes.
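The fix's logic can be sketched as follows (made-up structures, not the xfs ones): if the IO has already failed there are no IOs remaining, so completion must run synchronously; queueing it as deferred work lets the caller's final put free the buffer while the work is still pending.

```c
#include <assert.h>

struct sbuf { int error; int done; };

static void buf_iodone(struct sbuf *bp) { bp->done = 1; }

/* Returns 1 if completion ran before the caller drops its reference
 * (safe), 0 if it was left queued (use-after-free hazard). */
static int buf_io_finish(struct sbuf *bp)
{
    if (bp->error) {
        buf_iodone(bp);  /* fix: run immediately, nothing in flight */
        return 1;
    }
    return 0;            /* real code: queue the completion work */
}
```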

Thanks to dchinner for spotting the root cause.

Signed-off-by: Eric Sandeen &lt;sandeen@redhat.com&gt;
Reviewed-by: Brian Foster &lt;bfoster@redhat.com&gt;
Signed-off-by: Dave Chinner &lt;david@fromorbit.com&gt;

</content>
</entry>
<entry>
<title>xfs: wrong error sign conversion during failed DIO writes</title>
<updated>2014-04-16T22:15:27Z</updated>
<author>
<name>Dave Chinner</name>
<email>dchinner@redhat.com</email>
</author>
<published>2014-04-16T22:15:27Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=07d5035a289f8bebe0ea86c293b2d5412478c481'/>
<id>urn:sha1:07d5035a289f8bebe0ea86c293b2d5412478c481</id>
<content type='text'>
We incorrectly negate the error value returned from a generic
function.  The code path it runs in already returns negative errors,
so there is no need to negate the value to get the correct error
sign here.

This was uncovered by generic/019.
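A sketch of the sign bug (hypothetical names): the callee already returns negative errno values, so negating its result flips a failure like -EIO (-5) into a positive value that no caller treats as an error.

```c
#include <assert.h>

/* Callee following the kernel convention: negative errno on failure. */
static int generic_write_op(int fail)
{
    return fail ? -5 : 0;            /* -EIO on failure, 0 on success */
}

static int dio_complete_buggy(int fail)
{
    return -generic_write_op(fail);  /* wrong: -5 becomes +5 */
}

static int dio_complete_fixed(int fail)
{
    return generic_write_op(fail);   /* sign is already correct */
}
```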

Signed-off-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Dave Chinner &lt;david@fromorbit.com&gt;

</content>
</entry>
<entry>
<title>xfs: unmount does not wait for shutdown during unmount</title>
<updated>2014-04-16T22:15:26Z</updated>
<author>
<name>Dave Chinner</name>
<email>dchinner@redhat.com</email>
</author>
<published>2014-04-16T22:15:26Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=9c23eccc1e746f64b18fab070a37189b4422e44a'/>
<id>urn:sha1:9c23eccc1e746f64b18fab070a37189b4422e44a</id>
<content type='text'>
An interesting situation can occur if a log IO error occurs during
the unmount of a filesystem. The cases reported have the same
signature - the update of the superblock counters fails due to a log
write IO error:

XFS (dm-16): xfs_do_force_shutdown(0x2) called from line 1170 of file fs/xfs/xfs_log.c.  Return address = 0xffffffffa08a44a1
XFS (dm-16): Log I/O Error Detected.  Shutting down filesystem
XFS (dm-16): Unable to update superblock counters. Freespace may not be correct on next mount.
XFS (dm-16): xfs_log_force: error 5 returned.
XFS (¿-¿¿¿): Please umount the filesystem and rectify the problem(s)

It can be seen that the last line of output contains a corrupt
device name - this is because the log and xfs_mount structures have
already been freed by the time this message is printed. A kernel
oops closely follows.

The issue is that the shutdown is occurring in an IO completion
thread separate from the unmount. Once the shutdown processing has
started and all the iclogs are marked with XLOG_STATE_IOERROR, the
log shutdown code wakes anyone waiting on a log force so they can
process the shutdown error. This wakes up the unmount code that
is doing a synchronous transaction to update the superblock
counters.

The unmount path now sees all the iclogs are marked with
XLOG_STATE_IOERROR and so never waits on them again, knowing that if
it does, there will not be a wakeup trigger for it and we will hang
the unmount if we do. Hence the unmount runs through all the
remaining code and frees all the filesystem structures while the
xlog_iodone() is still processing the shutdown. When the log
shutdown processing completes, xfs_do_force_shutdown() emits the
"Please umount the filesystem and rectify the problem(s)" message,
and xlog_iodone() then aborts all the objects attached to the iclog.
An iclog that has already been freed....

The real issue here is that there is no serialisation point between
the log IO and the unmount. We have serialisation points for log
writes, log forces, reservations, etc, but we don't actually have
any code that waits for log IO to fully complete. We do that for all
other types of object, so why not iclogbufs?

Well, it turns out that we can easily do this. We've got xfs_buf
handles, and that's what everyone else uses for IO serialisation.
i.e. bp-&gt;b_sema. So, let's hold iclogbufs locked over IO, and only
release the lock in xlog_iodone() when we are finished with the
buffer. That way, before we tear down an iclog, we can lock and
unlock its buffer to ensure IO completion has finished completely.

Signed-off-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Tested-by: Mike Snitzer &lt;snitzer@redhat.com&gt;
Tested-by: Bob Mastors &lt;bob.mastors@solidfire.com&gt;
Reviewed-by: Brian Foster &lt;bfoster@redhat.com&gt;
Signed-off-by: Dave Chinner &lt;david@fromorbit.com&gt;

</content>
</entry>
<entry>
<title>xfs: collapse range is delalloc challenged</title>
<updated>2014-04-16T22:15:25Z</updated>
<author>
<name>Dave Chinner</name>
<email>dchinner@redhat.com</email>
</author>
<published>2014-04-16T22:15:25Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=d39a2ced0fa0172faa46df0866fc22419b876e2a'/>
<id>urn:sha1:d39a2ced0fa0172faa46df0866fc22419b876e2a</id>
<content type='text'>
FSX has been detecting data corruption after collapse range calls.
The key observation is that the offset of the last extent in the
file was not being shifted, and hence when the file size was
adjusted it was truncating away data because the extents had not
been correctly shifted.

Tracing indicated that before the collapse, the extent list looked
like:

....
ino 0x5788 state  idx 6 offset 26 block 195904 count 10 flag 0
ino 0x5788 state  idx 7 offset 39 block 195917 count 35 flag 0
ino 0x5788 state  idx 8 offset 86 block 195964 count 32 flag 0

and after the shift of 2 blocks:

ino 0x5788 state  idx 6 offset 24 block 195904 count 10 flag 0
ino 0x5788 state  idx 7 offset 37 block 195917 count 35 flag 0
ino 0x5788 state  idx 8 offset 86 block 195964 count 32 flag 0

Note that the last extent did not change offset. After the file size
was changed:

ino 0x5788 state  idx 6 offset 24 block 195904 count 10 flag 0
ino 0x5788 state  idx 7 offset 37 block 195917 count 35 flag 0
ino 0x5788 state  idx 8 offset 86 block 195964 count 30 flag 0

You can see that the last extent had its length truncated,
indicating that we've lost data.

The reason for this is that the xfs_bmap_shift_extents() loop uses
XFS_IFORK_NEXTENTS() to determine how many extents are in the inode.
This, unfortunately, doesn't take into account delayed allocation
extents - it's a count of physically allocated extents - and hence
when the file being collapsed has a delalloc extent like this one
does prior to the range being collapsed:

....
ino 0x5788 state  idx 4 offset 11 block 4503599627239429 count 1 flag 0
....

it gets the count wrong and terminates the shift loop early.

Fix it by using the in-memory extent array size that includes
delayed allocation extents to determine the number of extents on the
inode.
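The off-by-count can be sketched with made-up names (not the xfs structures): the inode records both a count of physically allocated extents and an in-memory extent array that also contains delayed-allocation extents; bounding the shift loop by the on-disk count terminates early when a delalloc extent is present.

```c
#include <assert.h>

struct ext { long long offset; };

struct sino {
    struct ext ext[4];
    int nextents_disk;  /* physically allocated extents only */
    int nextents_mem;   /* includes delalloc extents */
};

/* Shift every extent left by 'blocks'; returns how many shifted.
 * The fix corresponds to use_mem = 1. */
static int shift_extents(struct sino *ip, long long blocks, int use_mem)
{
    int n = use_mem ? ip->nextents_mem : ip->nextents_disk;
    for (int i = 0; i < n; i++)
        ip->ext[i].offset -= blocks;
    return n;
}
```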

Signed-off-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Tested-by: Brian Foster &lt;bfoster@redhat.com&gt;
Reviewed-by: Christoph Hellwig &lt;hch@lst.de&gt;
Signed-off-by: Dave Chinner &lt;david@fromorbit.com&gt;

</content>
</entry>
</feed>
