<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/fs/xfs, branch v3.12.22</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/fs/xfs?h=v3.12.22</id>
<link rel='self' href='https://git.amat.us/linux/atom/fs/xfs?h=v3.12.22'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2014-05-05T12:24:39Z</updated>
<entry>
<title>xfs: fix directory hash ordering bug</title>
<updated>2014-05-05T12:24:39Z</updated>
<author>
<name>Mark Tinguely</name>
<email>tinguely@sgi.com</email>
</author>
<published>2014-04-03T20:10:49Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=fd4037cadecf7b5c0e288c19d958917ac1c62a83'/>
<id>urn:sha1:fd4037cadecf7b5c0e288c19d958917ac1c62a83</id>
<content type='text'>
commit c88547a8119e3b581318ab65e9b72f27f23e641d upstream.

Commit f5ea1100 ("xfs: add CRCs to dir2/da node blocks"), introduced
in 3.10, incorrectly converted the btree hash index array pointer in
xfs_da3_fixhashpath(). It resulted in the current hash always
being compared against the first entry in the btree rather than the
current block index into the btree block's hash entry array. As a
result, it was comparing the wrong hashes, and so could misorder the
entries in the btree.

For most cases, this doesn't cause any problems as it requires hash
collisions to expose the ordering problem. However, when there are
hash collisions within a directory there is a very good probability
that the entries will be ordered incorrectly and that actually
matters when duplicate hashes are placed into or removed from the
btree block hash entry array.

This bug results in on-disk directory corruption, which causes the
directory verifier functions to throw corruption warnings into the
logs. While no data or directory entries are lost, access to them
may be compromised, and attempts to remove entries from a directory
that has suffered from this corruption may result in a filesystem
shutdown.  xfs_repair will fix the directory hash ordering without
data loss occurring.
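
The nature of the bug is easy to sketch in isolation; this is a
hypothetical userspace analogue with made-up names, not the actual
kernel code:

```c
/* Hypothetical analogue of the broken conversion: the hash used in
 * the comparison must come from the current block index, not from
 * entry 0 of the hash entry array. */
struct hash_entry { unsigned int hashval; };

/* Buggy: ignores the index and always reads the first entry. */
static unsigned int hash_at_buggy(const struct hash_entry *btree, int index)
{
    (void)index;
    return btree[0].hashval;
}

/* Fixed: reads the entry at the current block index. */
static unsigned int hash_at_fixed(const struct hash_entry *btree, int index)
{
    return btree[index].hashval;
}
```

Every comparison through the buggy variant sees the same first hash,
which is how colliding hashes end up misordered in the array.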

[dchinner: wrote a useful commit message]

Reported-by: Hannes Frederic Sowa &lt;hannes@stressinduktion.org&gt;
Signed-off-by: Mark Tinguely &lt;tinguely@sgi.com&gt;
Reviewed-by: Ben Myers &lt;bpm@sgi.com&gt;
Signed-off-by: Dave Chinner &lt;david@fromorbit.com&gt;
Signed-off-by: Jiri Slaby &lt;jslaby@suse.cz&gt;
</content>
</entry>
<entry>
<title>xfs: fix infinite loop by detaching the group/project hints from user dquot</title>
<updated>2014-01-09T20:25:09Z</updated>
<author>
<name>Jie Liu</name>
<email>jeff.liu@oracle.com</email>
</author>
<published>2013-11-26T13:38:49Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=57a0ea2153fb2f7f432cf62fe78e5093ce85f61e'/>
<id>urn:sha1:57a0ea2153fb2f7f432cf62fe78e5093ce85f61e</id>
<content type='text'>
commit 718cc6f88cbfc4fbd39609f28c4c86883945f90d upstream.

xfs_quota(8) hangs if it tries to turn group/project quota off
before the user quota is off; this can be reproduced 100% of the
time by:
  # mount -ouquota,gquota /dev/sda7 /xfs
  # mkdir /xfs/test
  # xfs_quota -xc 'off -g' /xfs &lt;-- hangs up
  # echo w &gt; /proc/sysrq-trigger
  # dmesg

  SysRq : Show Blocked State
  task                        PC stack   pid father
  xfs_quota       D 0000000000000000     0 27574   2551 0x00000000
  [snip]
  Call Trace:
  [&lt;ffffffff81aaa21d&gt;] schedule+0xad/0xc0
  [&lt;ffffffff81aa327e&gt;] schedule_timeout+0x35e/0x3c0
  [&lt;ffffffff8114b506&gt;] ? mark_held_locks+0x176/0x1c0
  [&lt;ffffffff810ad6c0&gt;] ? call_timer_fn+0x2c0/0x2c0
  [&lt;ffffffffa0c25380&gt;] ? xfs_qm_shrink_count+0x30/0x30 [xfs]
  [&lt;ffffffff81aa3306&gt;] schedule_timeout_uninterruptible+0x26/0x30
  [&lt;ffffffffa0c26155&gt;] xfs_qm_dquot_walk+0x235/0x260 [xfs]
  [&lt;ffffffffa0c059d8&gt;] ? xfs_perag_get+0x1d8/0x2d0 [xfs]
  [&lt;ffffffffa0c05805&gt;] ? xfs_perag_get+0x5/0x2d0 [xfs]
  [&lt;ffffffffa0b7707e&gt;] ? xfs_inode_ag_iterator+0xae/0xf0 [xfs]
  [&lt;ffffffffa0c22280&gt;] ? xfs_trans_free_dqinfo+0x50/0x50 [xfs]
  [&lt;ffffffffa0b7709f&gt;] ? xfs_inode_ag_iterator+0xcf/0xf0 [xfs]
  [&lt;ffffffffa0c261e6&gt;] xfs_qm_dqpurge_all+0x66/0xb0 [xfs]
  [&lt;ffffffffa0c2497a&gt;] xfs_qm_scall_quotaoff+0x20a/0x5f0 [xfs]
  [&lt;ffffffffa0c2b8f6&gt;] xfs_fs_set_xstate+0x136/0x180 [xfs]
  [&lt;ffffffff8136cf7a&gt;] do_quotactl+0x53a/0x6b0
  [&lt;ffffffff812fba4b&gt;] ? iput+0x5b/0x90
  [&lt;ffffffff8136d257&gt;] SyS_quotactl+0x167/0x1d0
  [&lt;ffffffff814cf2ee&gt;] ? trace_hardirqs_on_thunk+0x3a/0x3f
  [&lt;ffffffff81abcd19&gt;] system_call_fastpath+0x16/0x1b

It's fine if we turn the user quota off first and then turn off the
other kinds of quotas if they are enabled, since the group/project
dquot refcounts drop to zero once the user quota is off. Otherwise,
those refcounts remain non-zero because the user dquots may refer to
them as hints. Hence, the operation above causes an infinite loop in
xfs_qm_dquot_walk() while trying to purge the dquot cache.

This problem has been around since Linux 3.4; it was introduced by:
  [ b84a3a9675 xfs: remove the per-filesystem list of dquots ]

Originally, xfs_qm_detach_gdquots() would release the group dquot
pointers that the user dquots might be carrying around as hints.
However, with the above change, no such work is done before purging
the group/project dquot cache.

To solve this problem, this patch introduces a special routine,
xfs_qm_dqpurge_hints(), which releases the group/project dquot
pointers that the user dquots may be carrying around as hints, and
then proceeds to purge the user dquot cache if requested.
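
The refcount dependency can be sketched in plain C; the names and
structures here are illustrative stand-ins, not the kernel's:

```c
#include <stddef.h>

/* Hypothetical stand-ins for the kernel's dquot structures. */
struct dquot { int refcount; };

struct user_dquot {
    struct dquot *gdquot;   /* group dquot hint */
    struct dquot *pdquot;   /* project dquot hint */
};

/* Drop the hint references a user dquot carries, so the hinted
 * group/project dquots can reach a refcount of zero. */
static void dqpurge_hints(struct user_dquot *uq)
{
    if (uq->gdquot) {
        uq->gdquot->refcount--;
        uq->gdquot = NULL;
    }
    if (uq->pdquot) {
        uq->pdquot->refcount--;
        uq->pdquot = NULL;
    }
}

/* A dquot can only be purged once nothing references it; retrying the
 * purge of a dquot whose refcount never drops is the infinite loop. */
static int can_purge(const struct dquot *dq)
{
    return dq->refcount == 0;
}
```

Until the hint is released, can_purge() never succeeds, which is why
the walk in xfs_qm_dquot_walk() spun forever.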

(cherry picked from commit df8052e7dae00bde6f21b40b6e3e1099770f3afc)

Signed-off-by: Jie Liu &lt;jeff.liu@oracle.com&gt;
Reviewed-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Signed-off-by: Ben Myers &lt;bpm@sgi.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;


</content>
</entry>
<entry>
<title>xfs: underflow bug in xfs_attrlist_by_handle()</title>
<updated>2013-12-20T15:48:53Z</updated>
<author>
<name>Dan Carpenter</name>
<email>dan.carpenter@oracle.com</email>
</author>
<published>2013-10-31T18:00:10Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=f5e6d588f847fba87394926284cc4a7a3b79c6bf'/>
<id>urn:sha1:f5e6d588f847fba87394926284cc4a7a3b79c6bf</id>
<content type='text'>
commit 31978b5cc66b8ba8a7e8eef60b12395d41b7b890 upstream.

If we allocate less than sizeof(struct attrlist) then we end up
corrupting memory or doing a ZERO_SIZE_PTR dereference.

This can only be triggered with CAP_SYS_ADMIN.
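
A hedged sketch of the added bounds check, with an illustrative
header struct and size cap rather than the kernel's real ones:

```c
#include <stdint.h>

/* Illustrative stand-ins; the real struct and cap differ. */
struct attrlist_hdr { int32_t al_count; int32_t al_more; int32_t al_offset[1]; };
#define ATTRLIST_MAX_BUFLEN (16u * 1024u * 1024u)

/* The fix adds the lower bound: without it, a tiny buflen allocates
 * less than the header and later writes run past the allocation. */
static int attrlist_buflen_ok(uint32_t buflen)
{
    if (buflen < sizeof(struct attrlist_hdr))
        return 0;
    if (buflen > ATTRLIST_MAX_BUFLEN)
        return 0;
    return 1;
}
```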

Reported-by: Nico Golde &lt;nico@ngolde.de&gt;
Reported-by: Fabian Yamaguchi &lt;fabs@goesec.de&gt;
Signed-off-by: Dan Carpenter &lt;dan.carpenter@oracle.com&gt;
Reviewed-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Signed-off-by: Ben Myers &lt;bpm@sgi.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>xfs: growfs overruns AGFL buffer on V4 filesystems</title>
<updated>2013-12-20T15:48:53Z</updated>
<author>
<name>Dave Chinner</name>
<email>dchinner@redhat.com</email>
</author>
<published>2013-11-21T04:41:06Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=7b115360756c00867dfeb633daaf092c0a3996ba'/>
<id>urn:sha1:7b115360756c00867dfeb633daaf092c0a3996ba</id>
<content type='text'>
commit f94c44573e7c22860e2c3dfe349c45f72ba35ad3 upstream.

This loop in xfs_growfs_data_private() is incorrect for V4
superblock filesystems:

		for (bucket = 0; bucket &lt; XFS_AGFL_SIZE(mp); bucket++)
			agfl-&gt;agfl_bno[bucket] = cpu_to_be32(NULLAGBLOCK);

For V4 filesystems, we don't have an AGFL header structure, and so
XFS_AGFL_SIZE() returns an entire sector's worth of entries, which
we then index from an offset into the sector. Hence: buffer overrun.

This problem was introduced in 3.10 by commit 77c95bba ("xfs: add
CRC checks to the AGFL") which changed the AGFL structure but failed
to update the growfs code to handle the different structures.

Fix it by using the correct offset into the buffer for both V4 and
V5 filesystems.
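
The layout mismatch can be sketched as follows; the header size and
sector size here are illustrative, not the on-disk values:

```c
#include <stddef.h>
#include <stdint.h>

#define SECTOR_SIZE 512u
struct agfl_hdr { uint8_t pad[36]; };   /* illustrative V5 header size */

/* Number of 32-bit free-list slots that genuinely fit in the sector. */
static size_t agfl_size(int v5)
{
    size_t hdr = v5 ? sizeof(struct agfl_hdr) : 0;
    return (SECTOR_SIZE - hdr) / sizeof(uint32_t);
}

/* Byte offset of slot 'bucket' from the start of the AGFL buffer. */
static size_t agfl_bno_offset(int v5, size_t bucket)
{
    size_t hdr = v5 ? sizeof(struct agfl_hdr) : 0;
    return hdr + bucket * sizeof(uint32_t);
}
```

Indexing a V4 buffer (a full sector of slots) through the V5 layout
(slots after a header) pushes the last slots past the sector end.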

Signed-off-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Reviewed-by: Jie Liu &lt;jeff.liu@oracle.com&gt;
Signed-off-by: Ben Myers &lt;bpm@sgi.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>xfs: add capability check to free eofblocks ioctl</title>
<updated>2013-12-08T15:29:15Z</updated>
<author>
<name>Dwight Engen</name>
<email>dwight.engen@oracle.com</email>
</author>
<published>2013-08-15T18:08:03Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=eaeeaec383f3228446715e660851f73423501eba'/>
<id>urn:sha1:eaeeaec383f3228446715e660851f73423501eba</id>
<content type='text'>
commit 8c567a7fab6e086a0284eee2db82348521e7120c upstream.

Check for CAP_SYS_ADMIN since otherwise the caller can truncate
preallocated blocks from files they neither own nor have write
access to. A more fine-grained access check was considered: require
the caller to specify their own uid/gid and to use inode_permission
to check for write access, but this would not catch the case of an
inode not reachable via path traversal from the caller's mount
namespace.

Also add a check for a read-only filesystem to the free eofblocks ioctl.
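
A minimal userspace sketch of the two gates described above, with
capable() and the mount state stubbed out as plain flags:

```c
#include <errno.h>
#include <stdbool.h>

/* Stand-ins for capable(CAP_SYS_ADMIN) and the mount's ro state. */
static bool has_cap_sys_admin;
static bool fs_readonly;

/* Gate the eofblocks ioctl: admin-only, and never on a read-only fs. */
static int eofblocks_ioctl_checks(void)
{
    if (!has_cap_sys_admin)
        return -EPERM;  /* caller could truncate others' preallocations */
    if (fs_readonly)
        return -EROFS;  /* no modifications on a read-only filesystem */
    return 0;
}
```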

Reviewed-by: Brian Foster &lt;bfoster@redhat.com&gt;
Reviewed-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Reviewed-by: Gao feng &lt;gaofeng@cn.fujitsu.com&gt;
Signed-off-by: Dwight Engen &lt;dwight.engen@oracle.com&gt;
Signed-off-by: Ben Myers &lt;bpm@sgi.com&gt;
Cc: Kees Cook &lt;keescook@google.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>xfs: be more forgiving of a v4 secondary sb w/ junk in v5 fields</title>
<updated>2013-11-29T19:27:52Z</updated>
<author>
<name>Eric Sandeen</name>
<email>sandeen@sandeen.net</email>
</author>
<published>2013-09-09T20:33:29Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=31fcbef62d8142c0c173b0b8255e9b0c28a7a038'/>
<id>urn:sha1:31fcbef62d8142c0c173b0b8255e9b0c28a7a038</id>
<content type='text'>
commit 10e6e65dfcedff63275c3d649d329c044caa8e26 upstream.

Today, if xfs_sb_read_verify encounters a v4 superblock with junk
past the v4 fields, including data in sb_crc, it is treated as a
checksum failure and a significant corruption.

There are known prior bugs which leave junk at the end
of the V4 superblock; we don't need to actually fail the
verification in this case if other checks pan out ok.

So if this is a secondary superblock, and the primary
superblock doesn't indicate that this is a V5 filesystem,
don't treat this as an actual checksum failure.
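
The resulting decision can be sketched as a hypothetical helper
(not the actual xfs_sb_read_verify code):

```c
#include <stdbool.h>

/* Decide whether a failed sb_crc should be treated as corruption,
 * following the logic described above; names are illustrative. */
static bool crc_failure_is_corruption(bool is_primary_sb,
                                      bool primary_says_v5)
{
    /* The primary superblock gets no leniency. */
    if (is_primary_sb)
        return true;
    /* A secondary sb on a filesystem the primary calls V4 may simply
     * carry old junk in the V5 fields, including sb_crc: forgive it. */
    return primary_says_v5;
}
```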

We should probably check the garbage condition as
we do in xfs_repair, and possibly warn about it
or self-heal, but that's a different scope of work.

Stable folks: This can go back to v3.10, which is what
introduced the sb CRC checking that is tripped up by old,
stale, incorrect V4 superblocks w/ unzeroed bits.

Signed-off-by: Eric Sandeen &lt;sandeen@redhat.com&gt;
Acked-by: Dave Chinner &lt;david@fromorbit.com&gt;
Reviewed-by: Mark Tinguely &lt;tinguely@sgi.com&gt;
Signed-off-by: Ben Myers &lt;bpm@sgi.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>xfs: Use kmem_free() instead of free()</title>
<updated>2013-10-04T18:56:12Z</updated>
<author>
<name>Thierry Reding</name>
<email>thierry.reding@gmail.com</email>
</author>
<published>2013-10-01T14:47:53Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=b2a42f78ab475f4730300b0e9568bc3b2587d112'/>
<id>urn:sha1:b2a42f78ab475f4730300b0e9568bc3b2587d112</id>
<content type='text'>
This fixes a build failure caused by calling the free() function which
does not exist in the Linux kernel.

Signed-off-by: Thierry Reding &lt;treding@nvidia.com&gt;
Reviewed-by: Mark Tinguely &lt;tinguely@sgi.com&gt;
Signed-off-by: Ben Myers &lt;bpm@sgi.com&gt;

(cherry picked from commit aaaae98022efa4f3c31042f1fdf9e7a0c5f04663)
</content>
</entry>
<entry>
<title>xfs: fix memory leak in xlog_recover_add_to_trans</title>
<updated>2013-10-04T18:56:03Z</updated>
<author>
<name>tinguely@sgi.com</name>
<email>tinguely@sgi.com</email>
</author>
<published>2013-09-27T14:00:55Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=9b3b77fe661875f19ed748b67fb1eeb57d602b7e'/>
<id>urn:sha1:9b3b77fe661875f19ed748b67fb1eeb57d602b7e</id>
<content type='text'>
Free the memory in error path of xlog_recover_add_to_trans().
Normally this memory is freed in recovery pass2, but is leaked
in the error path.
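
The leak pattern and its fix, sketched with malloc/free standing in
for the kernel's kmem allocators (hypothetical names):

```c
#include <stdlib.h>
#include <string.h>

/* Copy a log item for a later recovery pass; on error, the copy must
 * be freed here, because pass2 (which normally frees it) never runs. */
static char *add_to_trans(const char *buf, size_t len, int simulate_error)
{
    char *ptr = malloc(len);
    if (!ptr)
        return NULL;
    memcpy(ptr, buf, len);
    if (simulate_error) {
        free(ptr);      /* the fix: the error path frees its allocation */
        return NULL;
    }
    return ptr;         /* ownership passes on; the later pass frees it */
}
```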

Signed-off-by: Mark Tinguely &lt;tinguely@sgi.com&gt;
Reviewed-by: Eric Sandeen &lt;sandeen@redhat.com&gt;
Signed-off-by: Ben Myers &lt;bpm@sgi.com&gt;

(cherry picked from commit 519ccb81ac1c8e3e4eed294acf93be00b43dcad6)
</content>
</entry>
<entry>
<title>xfs: dirent dtype presence is dependent on directory magic numbers</title>
<updated>2013-10-04T18:55:48Z</updated>
<author>
<name>Dave Chinner</name>
<email>dchinner@redhat.com</email>
</author>
<published>2013-09-29T23:37:04Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=6d313498f035abc9d8ad3a1b3295f133bfab9638'/>
<id>urn:sha1:6d313498f035abc9d8ad3a1b3295f133bfab9638</id>
<content type='text'>
The determination of whether a directory entry contains a dtype
field originally was dependent on the filesystem having CRCs
enabled. This meant that whether dtype was enabled could be
determined by checking the directory block magic number rather than
doing a feature bit check. This was useful in that it meant that we
didn't need to pass a struct xfs_mount around to functions that
were already supplied with a directory block header.

Unfortunately, the introduction of dtype fields into the v4
structure via a feature bit meant this "use the directory block
magic number" method of discriminating the dirent entry sizes is
broken. Hence we need to convert the places that use magic number
checks to use feature bit checks so that they work correctly and not
by chance.

The current code works on v4 filesystems only because the dirent
size roundup covers the extra byte needed by the dtype field in the
places where this problem occurs.
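
Why the roundup hides the problem only sometimes can be sketched
with illustrative dirent field sizes (not the real on-disk layout):

```c
#include <stddef.h>

#define ROUNDUP8(x) (((x) + 7) & ~(size_t)7)

/* Whether the entry carries a 1-byte ftype must come from the feature
 * bit; the field sizes here (inode, namelen, name, tag) are made up. */
static size_t dirent_size(size_t namelen, int has_ftype)
{
    return ROUNDUP8(8 + 1 + namelen + 2 + (has_ftype ? 1 : 0));
}
```

For some name lengths the 8-byte roundup absorbs the extra byte and
both answers coincide; for others they diverge, so getting the
feature check right matters.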

Signed-off-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Reviewed-by: Ben Myers &lt;bpm@sgi.com&gt;
Signed-off-by: Ben Myers &lt;bpm@sgi.com&gt;

(cherry picked from commit 367993e7c6428cb7617ab7653d61dca54e2fdede)
</content>
</entry>
<entry>
<title>xfs: lockdep needs to know about 3 dquot-deep nesting</title>
<updated>2013-10-04T18:55:33Z</updated>
<author>
<name>Dave Chinner</name>
<email>dchinner@redhat.com</email>
</author>
<published>2013-09-29T23:37:03Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=89c6c89af2ef41cb127c9694ef7783e585e96337'/>
<id>urn:sha1:89c6c89af2ef41cb127c9694ef7783e585e96337</id>
<content type='text'>
Michael Semon reported that xfs/299 generated this lockdep warning:

=============================================
[ INFO: possible recursive locking detected ]
3.12.0-rc2+ #2 Not tainted
---------------------------------------------
touch/21072 is trying to acquire lock:
 (&amp;xfs_dquot_other_class){+.+...}, at: [&lt;c12902fb&gt;] xfs_trans_dqlockedjoin+0x57/0x64

but task is already holding lock:
 (&amp;xfs_dquot_other_class){+.+...}, at: [&lt;c12902fb&gt;] xfs_trans_dqlockedjoin+0x57/0x64

other info that might help us debug this:
 Possible unsafe locking scenario:

       CPU0
       ----
  lock(&amp;xfs_dquot_other_class);
  lock(&amp;xfs_dquot_other_class);

 *** DEADLOCK ***

 May be due to missing lock nesting notation

7 locks held by touch/21072:
 #0:  (sb_writers#10){++++.+}, at: [&lt;c11185b6&gt;] mnt_want_write+0x1e/0x3e
 #1:  (&amp;type-&gt;i_mutex_dir_key#4){+.+.+.}, at: [&lt;c11078ee&gt;] do_last+0x245/0xe40
 #2:  (sb_internal#2){++++.+}, at: [&lt;c122c9e0&gt;] xfs_trans_alloc+0x1f/0x35
 #3:  (&amp;(&amp;ip-&gt;i_lock)-&gt;mr_lock/1){+.+...}, at: [&lt;c126cd1b&gt;] xfs_ilock+0x100/0x1f1
 #4:  (&amp;(&amp;ip-&gt;i_lock)-&gt;mr_lock){++++-.}, at: [&lt;c126cf52&gt;] xfs_ilock_nowait+0x105/0x22f
 #5:  (&amp;dqp-&gt;q_qlock){+.+...}, at: [&lt;c12902fb&gt;] xfs_trans_dqlockedjoin+0x57/0x64
 #6:  (&amp;xfs_dquot_other_class){+.+...}, at: [&lt;c12902fb&gt;] xfs_trans_dqlockedjoin+0x57/0x64

The lockdep annotation for dquot lock nesting only understands
locking for user and "other" dquots, not user, group and project
dquots. Fix the annotations to match the locking hierarchy we now
have.

Reported-by: Michael L. Semon &lt;mlsemon35@gmail.com&gt;
Signed-off-by: Dave Chinner &lt;dchinner@redhat.com&gt;
Reviewed-by: Ben Myers &lt;bpm@sgi.com&gt;
Signed-off-by: Ben Myers &lt;bpm@sgi.com&gt;

(cherry picked from commit f112a049712a5c07de25d511c3c6587a2b1a015e)
</content>
</entry>
</feed>
