<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/fs, branch v3.3.8</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/fs?h=v3.3.8</id>
<link rel='self' href='https://git.amat.us/linux/atom/fs?h=v3.3.8'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2012-06-01T07:15:40Z</updated>
<entry>
<title>vfs: make AIO use the proper rw_verify_area() helpers</title>
<updated>2012-06-01T07:15:40Z</updated>
<author>
<name>Linus Torvalds</name>
<email>torvalds@linux-foundation.org</email>
</author>
<published>2012-05-21T23:06:20Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ea15dd3cd4a8f3b097cccf7739edde04b69e0a03'/>
<id>urn:sha1:ea15dd3cd4a8f3b097cccf7739edde04b69e0a03</id>
<content type='text'>
commit a70b52ec1aaeaf60f4739edb1b422827cb6f3893 upstream.

We had for some reason overlooked the AIO interface: it didn't use the
proper rw_verify_area() helper function, which checks (for example)
mandatory locking on the file and that the size of the access doesn't
overflow the provided offset limits.

Instead, AIO did just the security_file_permission() thing (that
rw_verify_area() also does) directly.

This fixes AIO to use the proper helper functions, which not only
means that mandatory file locking now works with AIO too, but also
lets us actually remove lines of code.
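
A minimal sketch of the shape of the change (simplified, not the exact
diff; the kiocb field names are from the 3.x struct kiocb):

    /* Before (sketch): AIO performed only the security hook. */
    ret = security_file_permission(file, MAY_READ);

    /* After (sketch): the common helper performs the security check
     * plus mandatory-lock and offset/size validation. */
    ret = rw_verify_area(READ, file, &amp;iocb-&gt;ki_pos, iocb-&gt;ki_left);
    if (ret &lt; 0)
            return ret;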

Reported-by: Manish Honap &lt;manish_honap_vit@yahoo.co.in&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>block: don't mark buffers beyond end of disk as mapped</title>
<updated>2012-06-01T07:15:39Z</updated>
<author>
<name>Jeff Moyer</name>
<email>jmoyer@redhat.com</email>
</author>
<published>2012-05-11T14:34:10Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=3735b0a1d73af536484ddefef4d8438dd468c4a6'/>
<id>urn:sha1:3735b0a1d73af536484ddefef4d8438dd468c4a6</id>
<content type='text'>
commit 080399aaaf3531f5b8761ec0ac30ff98891e8686 upstream.

Hi,

We have a bug report open where a squashfs image mounted on ppc64 would
exhibit errors due to trying to read beyond the end of the disk.  It can
easily be reproduced by doing the following:

[root@ibm-p750e-02-lp3 ~]# ls -l install.img
-rw-r--r-- 1 root root 142032896 Apr 30 16:46 install.img
[root@ibm-p750e-02-lp3 ~]# mount -o loop ./install.img /mnt/test
[root@ibm-p750e-02-lp3 ~]# dd if=/dev/loop0 of=/dev/null
dd: reading `/dev/loop0': Input/output error
277376+0 records in
277376+0 records out
142016512 bytes (142 MB) copied, 0.9465 s, 150 MB/s

In dmesg, you'll find the following:

squashfs: version 4.0 (2009/01/31) Phillip Lougher
[   43.106012] attempt to access beyond end of device
[   43.106029] loop0: rw=0, want=277410, limit=277408
[   43.106039] Buffer I/O error on device loop0, logical block 138704
[   43.106053] attempt to access beyond end of device
[   43.106057] loop0: rw=0, want=277412, limit=277408
[   43.106061] Buffer I/O error on device loop0, logical block 138705
[   43.106066] attempt to access beyond end of device
[   43.106070] loop0: rw=0, want=277414, limit=277408
[   43.106073] Buffer I/O error on device loop0, logical block 138706
[   43.106078] attempt to access beyond end of device
[   43.106081] loop0: rw=0, want=277416, limit=277408
[   43.106085] Buffer I/O error on device loop0, logical block 138707
[   43.106089] attempt to access beyond end of device
[   43.106093] loop0: rw=0, want=277418, limit=277408
[   43.106096] Buffer I/O error on device loop0, logical block 138708
[   43.106101] attempt to access beyond end of device
[   43.106104] loop0: rw=0, want=277420, limit=277408
[   43.106108] Buffer I/O error on device loop0, logical block 138709
[   43.106112] attempt to access beyond end of device
[   43.106116] loop0: rw=0, want=277422, limit=277408
[   43.106120] Buffer I/O error on device loop0, logical block 138710
[   43.106124] attempt to access beyond end of device
[   43.106128] loop0: rw=0, want=277424, limit=277408
[   43.106131] Buffer I/O error on device loop0, logical block 138711
[   43.106135] attempt to access beyond end of device
[   43.106139] loop0: rw=0, want=277426, limit=277408
[   43.106143] Buffer I/O error on device loop0, logical block 138712
[   43.106147] attempt to access beyond end of device
[   43.106151] loop0: rw=0, want=277428, limit=277408
[   43.106154] Buffer I/O error on device loop0, logical block 138713
[   43.106158] attempt to access beyond end of device
[   43.106162] loop0: rw=0, want=277430, limit=277408
[   43.106166] attempt to access beyond end of device
[   43.106169] loop0: rw=0, want=277432, limit=277408
...
[   43.106307] attempt to access beyond end of device
[   43.106311] loop0: rw=0, want=277470, limit=277408

Squashfs manages to read in the end block(s) of the disk during the
mount operation.  Then, when dd reads the block device, it leads to
block_read_full_page being called with buffers that are beyond the end
of the disk but are marked as mapped.  Thus, it would end up submitting
read I/O against them, resulting in the errors mentioned above.  I
fixed the problem by modifying init_page_buffers to only set the buffer
mapped if it falls inside i_size.
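
A hedged sketch of that check (simplified; the real patch reuses the
max_block value already computed in the function):

    /* Only mark the buffer mapped when the block lies within the
     * device.  Blocks at or beyond the end stay unmapped, so
     * block_read_full_page() treats them as holes instead of
     * submitting read I/O past the end of the disk. */
    if (block &lt; max_block) {
            bh-&gt;b_bdev = bdev;
            bh-&gt;b_blocknr = block;
            set_buffer_mapped(bh);
    }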

Cheers,
Jeff

Signed-off-by: Jeff Moyer &lt;jmoyer@redhat.com&gt;
Acked-by: Nick Piggin &lt;npiggin@kernel.dk&gt;

--

Changes from v1-&gt;v2: re-used max_block, as suggested by Nick Piggin.
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>bio allocation failure due to bio_get_nr_vecs()</title>
<updated>2012-06-01T07:15:39Z</updated>
<author>
<name>Bernd Schubert</name>
<email>bernd.schubert@itwm.fraunhofer.de</email>
</author>
<published>2012-05-11T14:36:44Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ce973c4010eee47734f46808ca666470e80f3034'/>
<id>urn:sha1:ce973c4010eee47734f46808ca666470e80f3034</id>
<content type='text'>
commit f908ee9463b09ddd05e1c1a0111132212dc05fac upstream.

The number returned by bio_get_nr_vecs() is passed down via bio_alloc()
to bvec_alloc_bs(), which fails the bio allocation if
nr_iovecs &gt; BIO_MAX_PAGES.  For the underlying caller this causes an
unexpected bio allocation failure.
Limiting the value to queue_max_segments() is not sufficient, as
max_segments can also be very large.

The failing call chain, innermost first:

bvec_alloc_bs(gfp_mask, nr_iovecs, ...) =&gt; NULL when nr_iovecs &gt; BIO_MAX_PAGES
bio_alloc_bioset(gfp_mask, nr_iovecs, ...)
bio_alloc(GFP_NOIO, nvecs)
xfs_alloc_ioend_bio()
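
A rough sketch of the fix in bio_get_nr_vecs(): cap the returned count
so a subsequent bio_alloc() cannot fail on it (the exact expression in
the patch may differ):

    /* Never advertise more vecs than one bio can actually hold. */
    nr_pages = min_t(unsigned, queue_max_segments(q),
                     queue_max_sectors(q) &gt;&gt; (PAGE_SHIFT - 9));
    return min_t(unsigned, nr_pages, BIO_MAX_PAGES);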

Signed-off-by: Bernd Schubert &lt;bernd.schubert@itwm.fraunhofer.de&gt;
Signed-off-by: Jens Axboe &lt;axboe@kernel.dk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>Avoid reading past buffer when calling GETACL</title>
<updated>2012-05-21T17:46:24Z</updated>
<author>
<name>Sachin Prabhu</name>
<email>sprabhu@redhat.com</email>
</author>
<published>2012-04-17T13:35:39Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=22594eea7aea374c0680f70fd89eeb1fc47ae635'/>
<id>urn:sha1:22594eea7aea374c0680f70fd89eeb1fc47ae635</id>
<content type='text'>
commit 5a00689930ab975fdd1b37b034475017e460cf2a upstream.

Bug noticed in commit
bf118a342f10dafe44b14451a1392c3254629a1f

When calling GETACL, if the combined size of the bitmap array, the
length attribute, and the ACL returned by the server is greater than
the allocated buffer (args.acl_len), we can Oops with a general
protection fault in _copy_from_pages() when we attempt to read past the
allocated pages.

This patch allocates an extra page for the bitmap and checks that the
bitmap + attribute length + ACLs don't exceed the buffer space
allocated to it.
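
A hypothetical sketch of the added bounds check (names illustrative,
not the exact patch):

    /* Refuse to copy past the buffer allocated for the reply. */
    if (bitmap_len + attr_len + acl_len &gt; args-&gt;acl_len)
            return -ERANGE;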

Signed-off-by: Sachin Prabhu &lt;sprabhu@redhat.com&gt;
Reported-by: Jian Li &lt;jiali@redhat.com&gt;
[Trond: Fixed a size_t vs unsigned int printk() warning]
Signed-off-by: Trond Myklebust &lt;Trond.Myklebust@netapp.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>Avoid out-of-bounds copy while caching ACL</title>
<updated>2012-05-21T17:46:24Z</updated>
<author>
<name>Sachin Prabhu</name>
<email>sprabhu@redhat.com</email>
</author>
<published>2012-04-17T13:36:40Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=4262cd1f4b734ea5fa05c37043c89106f38b2daa'/>
<id>urn:sha1:4262cd1f4b734ea5fa05c37043c89106f38b2daa</id>
<content type='text'>
commit 5794d21ef4639f0e33440927bb903f9598c21e92 upstream.

When attempting to cache ACLs returned from the server, if the bitmap
size + the ACL size is greater than PAGE_SIZE but the ACL size itself
is smaller than PAGE_SIZE, we can read past the buffer page boundary.
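
A hypothetical sketch of the guard (names illustrative):

    /* The cache is backed by a single page; refuse to cache
     * anything whose total size would read past that boundary. */
    if (bitmap_len + acl_len &gt; PAGE_SIZE)
            return;
    memcpy(page_address(page), acl_data, acl_len);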

Signed-off-by: Sachin Prabhu &lt;sprabhu@redhat.com&gt;
Reported-by: Jian Li &lt;jiali@redhat.com&gt;
Signed-off-by: Trond Myklebust &lt;Trond.Myklebust@netapp.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>cifs: fix revalidation test in cifs_llseek()</title>
<updated>2012-05-21T17:46:22Z</updated>
<author>
<name>Dan Carpenter</name>
<email>dan.carpenter@oracle.com</email>
</author>
<published>2012-04-30T14:36:21Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=4c7a674cc364f699fa5b6a6176c6cff4f586e607'/>
<id>urn:sha1:4c7a674cc364f699fa5b6a6176c6cff4f586e607</id>
<content type='text'>
commit 48a5730e5b71201e226ff06e245bf308feba5f10 upstream.

This test is always true, so we revalidate the length every time,
which generates more network traffic.  When the origin is SEEK_SET or
SEEK_CUR, we don't need to revalidate.
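
A sketch of the pattern (simplified; the classic ||-for-&amp;&amp; mixup):

    /* Buggy: true for every origin, since origin can never equal
     * both SEEK_SET and SEEK_CUR at once. */
    if (origin != SEEK_SET || origin != SEEK_CUR)
            retval = cifs_revalidate_file_attr(file);

    /* Fixed: only revalidate when the length is actually needed. */
    if (origin != SEEK_SET &amp;&amp; origin != SEEK_CUR)
            retval = cifs_revalidate_file_attr(file);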

Signed-off-by: Dan Carpenter &lt;dan.carpenter@oracle.com&gt;
Reviewed-by: Jeff Layton &lt;jlayton@redhat.com&gt;
Signed-off-by: Steve French &lt;sfrench@us.ibm.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>ext4: avoid deadlock on sync-mounted FS w/o journal</title>
<updated>2012-05-21T17:46:22Z</updated>
<author>
<name>Eric Sandeen</name>
<email>sandeen@redhat.com</email>
</author>
<published>2012-02-21T04:06:18Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=277ff221e8b2a9d4397c247313883d1160c0403e'/>
<id>urn:sha1:277ff221e8b2a9d4397c247313883d1160c0403e</id>
<content type='text'>
commit c1bb05a657fb3d8c6179a4ef7980261fae4521d7 upstream.

Processes hang forever on a sync-mounted ext2 file system that
is mounted with the ext4 module (default in Fedora 16).

I can reproduce this reliably by mounting an ext2 partition with
"-o sync" and opening a new file on that partition with vim. vim
will hang in "D" state forever.  The same happens on ext4 without
a journal.

I am attaching a small patch here that solves this issue for me.
In the sync-mounted case without a journal,
ext4_handle_dirty_metadata() may call sync_dirty_buffer(), which
can't be called with the buffer lock held.
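
A simplified sketch of why this deadlocks and the reordering that
avoids it (based on the EA block path described here):

    lock_buffer(bh);
    /* ... modify the extended-attribute block ... */
    mb_cache_entry_release(ce);     /* now inside the lock */
    unlock_buffer(bh);
    /* Only safe here: on a sync mount without a journal this may
     * call sync_dirty_buffer(), which takes the buffer lock itself
     * and would self-deadlock if we still held it. */
    error = ext4_handle_dirty_metadata(handle, inode, bh);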

Also move mb_cache_entry_release inside the lock to avoid the race
fixed previously by 8a2bfdcb ("ext[34]: EA block reference count racing
fix").  Note too that ext2 fixed this same problem in 2006 with
b2f49033 ("[PATCH] fix deadlock in ext2").

Signed-off-by: Martin.Wilck@ts.fujitsu.com
[sandeen@redhat.com: move mb_cache_entry_release before unlock, edit commit msg]
Signed-off-by: Eric Sandeen &lt;sandeen@redhat.com&gt;
Signed-off-by: "Theodore Ts'o" &lt;tytso@mit.edu&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>jffs2: Fix lock acquisition order bug in gc path</title>
<updated>2012-05-21T17:46:20Z</updated>
<author>
<name>Josh Cartwright</name>
<email>joshc@linux.com</email>
</author>
<published>2012-03-29T23:34:53Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=66a02109c4a5891a43b437e0de677de759ea2310'/>
<id>urn:sha1:66a02109c4a5891a43b437e0de677de759ea2310</id>
<content type='text'>
commit 226bb7df3d22bcf4a1c0fe8206c80cc427498eae upstream.

The locking policy is such that the erase_complete_block spinlock is
nested within the alloc_sem mutex.  This fixes a case in which the
acquisition order was erroneously reversed.  This issue was caught by
the following lockdep splat:

   =======================================================
   [ INFO: possible circular locking dependency detected ]
   3.0.5 #1
   -------------------------------------------------------
   jffs2_gcd_mtd6/299 is trying to acquire lock:
    (&amp;c-&gt;alloc_sem){+.+.+.}, at: [&lt;c01f7714&gt;] jffs2_garbage_collect_pass+0x314/0x890

   but task is already holding lock:
    (&amp;(&amp;c-&gt;erase_completion_lock)-&gt;rlock){+.+...}, at: [&lt;c01f7708&gt;] jffs2_garbage_collect_pass+0x308/0x890

   which lock already depends on the new lock.

   the existing dependency chain (in reverse order) is:

   -&gt; #1 (&amp;(&amp;c-&gt;erase_completion_lock)-&gt;rlock){+.+...}:
          [&lt;c008bec4&gt;] validate_chain+0xe6c/0x10bc
          [&lt;c008c660&gt;] __lock_acquire+0x54c/0xba4
          [&lt;c008d240&gt;] lock_acquire+0xa4/0x114
          [&lt;c046780c&gt;] _raw_spin_lock+0x3c/0x4c
          [&lt;c01f744c&gt;] jffs2_garbage_collect_pass+0x4c/0x890
          [&lt;c01f937c&gt;] jffs2_garbage_collect_thread+0x1b4/0x1cc
          [&lt;c0071a68&gt;] kthread+0x98/0xa0
          [&lt;c000f264&gt;] kernel_thread_exit+0x0/0x8

   -&gt; #0 (&amp;c-&gt;alloc_sem){+.+.+.}:
          [&lt;c008ad2c&gt;] print_circular_bug+0x70/0x2c4
          [&lt;c008c08c&gt;] validate_chain+0x1034/0x10bc
          [&lt;c008c660&gt;] __lock_acquire+0x54c/0xba4
          [&lt;c008d240&gt;] lock_acquire+0xa4/0x114
          [&lt;c0466628&gt;] mutex_lock_nested+0x74/0x33c
          [&lt;c01f7714&gt;] jffs2_garbage_collect_pass+0x314/0x890
          [&lt;c01f937c&gt;] jffs2_garbage_collect_thread+0x1b4/0x1cc
          [&lt;c0071a68&gt;] kthread+0x98/0xa0
          [&lt;c000f264&gt;] kernel_thread_exit+0x0/0x8

   other info that might help us debug this:

    Possible unsafe locking scenario:

          CPU0                    CPU1
          ----                    ----
     lock(&amp;(&amp;c-&gt;erase_completion_lock)-&gt;rlock);
                                  lock(&amp;c-&gt;alloc_sem);
                                  lock(&amp;(&amp;c-&gt;erase_completion_lock)-&gt;rlock);
     lock(&amp;c-&gt;alloc_sem);

    *** DEADLOCK ***

   1 lock held by jffs2_gcd_mtd6/299:
    #0:  (&amp;(&amp;c-&gt;erase_completion_lock)-&gt;rlock){+.+...}, at: [&lt;c01f7708&gt;] jffs2_garbage_collect_pass+0x308/0x890

   stack backtrace:
   [&lt;c00155dc&gt;] (unwind_backtrace+0x0/0x100) from [&lt;c0463dc0&gt;] (dump_stack+0x20/0x24)
   [&lt;c0463dc0&gt;] (dump_stack+0x20/0x24) from [&lt;c008ae84&gt;] (print_circular_bug+0x1c8/0x2c4)
   [&lt;c008ae84&gt;] (print_circular_bug+0x1c8/0x2c4) from [&lt;c008c08c&gt;] (validate_chain+0x1034/0x10bc)
   [&lt;c008c08c&gt;] (validate_chain+0x1034/0x10bc) from [&lt;c008c660&gt;] (__lock_acquire+0x54c/0xba4)
   [&lt;c008c660&gt;] (__lock_acquire+0x54c/0xba4) from [&lt;c008d240&gt;] (lock_acquire+0xa4/0x114)
   [&lt;c008d240&gt;] (lock_acquire+0xa4/0x114) from [&lt;c0466628&gt;] (mutex_lock_nested+0x74/0x33c)
   [&lt;c0466628&gt;] (mutex_lock_nested+0x74/0x33c) from [&lt;c01f7714&gt;] (jffs2_garbage_collect_pass+0x314/0x890)
   [&lt;c01f7714&gt;] (jffs2_garbage_collect_pass+0x314/0x890) from [&lt;c01f937c&gt;] (jffs2_garbage_collect_thread+0x1b4/0x1cc)
   [&lt;c01f937c&gt;] (jffs2_garbage_collect_thread+0x1b4/0x1cc) from [&lt;c0071a68&gt;] (kthread+0x98/0xa0)
   [&lt;c0071a68&gt;] (kthread+0x98/0xa0) from [&lt;c000f264&gt;] (kernel_thread_exit+0x0/0x8)

This was introduced in '81cfc9f jffs2: Fix serious write stall due to erase'.
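
A hedged sketch of the reordering (simplified from the gc path):

    /* Buggy: taking the mutex while holding the spinlock inverts
     * the documented nesting (spinlock inside alloc_sem). */
    spin_lock(&amp;c-&gt;erase_completion_lock);
    mutex_lock(&amp;c-&gt;alloc_sem);

    /* Fixed: drop the spinlock first, then retake it. */
    spin_unlock(&amp;c-&gt;erase_completion_lock);
    mutex_lock(&amp;c-&gt;alloc_sem);
    spin_lock(&amp;c-&gt;erase_completion_lock);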

Signed-off-by: Josh Cartwright &lt;joshc@linux.com&gt;
Signed-off-by: Artem Bityutskiy &lt;artem.bityutskiy@linux.intel.com&gt;
Signed-off-by: David Woodhouse &lt;David.Woodhouse@intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>nfsd: don't fail unchecked creates of non-special files</title>
<updated>2012-05-12T16:32:21Z</updated>
<author>
<name>J. Bruce Fields</name>
<email>bfields@redhat.com</email>
</author>
<published>2012-04-09T22:06:49Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=502fd05d816f4a74a1a7bb45896ba35a33f8da10'/>
<id>urn:sha1:502fd05d816f4a74a1a7bb45896ba35a33f8da10</id>
<content type='text'>
commit 9dc4e6c4d1182d34604ea40fef641775f5b15456 upstream.

Allow a v3 unchecked open of a non-regular file to succeed as if it
were a lookup; typically a client in such a case will want to fall back
on a local open, so succeeding and giving it the filehandle is more
useful than failing with nfserr_exist, which makes it appear that
nothing at all exists by that name.
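
A hypothetical sketch of the v3 unchecked-create path (labels and
names illustrative, not the exact patch):

    case NFS3_CREATE_UNCHECKED:
            if (dchild-&gt;d_inode &amp;&amp; !S_ISREG(dchild-&gt;d_inode-&gt;i_mode))
                    goto out;       /* succeed as a plain lookup */
            break;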

Similarly for v4, on an open-create, return the same errors we would on
an attempt to open a non-regular file, instead of returning
nfserr_exist.

This fixes a problem found doing a v4 open of a symlink with
O_RDONLY|O_CREAT, which resulted in the current client returning EEXIST.

Thanks also to Trond for analysis.

Reported-by: Orion Poplawski &lt;orion@cora.nwra.com&gt;
Tested-by: Orion Poplawski &lt;orion@cora.nwra.com&gt;
Signed-off-by: J. Bruce Fields &lt;bfields@redhat.com&gt;
[bwh: Backported to 3.2 and 3.3: use &amp;resfh, not resfh]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>hugepages: fix use after free bug in "quota" handling</title>
<updated>2012-05-12T16:32:21Z</updated>
<author>
<name>David Gibson</name>
<email>david@gibson.dropbear.id.au</email>
</author>
<published>2012-03-21T23:34:12Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=c6e143d5d46c8c84247c5ca9202c4d677a6162a7'/>
<id>urn:sha1:c6e143d5d46c8c84247c5ca9202c4d677a6162a7</id>
<content type='text'>
commit 90481622d75715bfcb68501280a917dbfe516029 upstream.

hugetlbfs_{get,put}_quota() are badly named.  They don't interact with the
general quota handling code, and they don't much resemble its behaviour.
Rather than being about maintaining limits on on-disk block usage by
particular users, they are instead about maintaining limits on in-memory
page usage (including anonymous MAP_PRIVATE copied-on-write pages)
associated with a particular hugetlbfs filesystem instance.

Worse, they work by having callbacks to the hugetlbfs filesystem code from
the low-level page handling code, in particular from free_huge_page().
This is a layering violation in itself, but more importantly, if the
kernel does a get_user_pages() on hugepages (which can happen from KVM
amongst others), then the free_huge_page() can be delayed until after the
associated inode has already been freed.  If an unmount occurs at the
wrong time, even the hugetlbfs superblock where the "quota" limits are
stored may have been freed.

Andrew Barry proposed a patch to fix this by having hugepages store
pointers directly to the superblock, instead of a pointer to their
address_space from which the superblock is reached, bumping the
superblock's reference count as appropriate to avoid it being freed.
Andrew Morton rejected that version, however, on the grounds that it made
the existing layering violation worse.

This is a reworked version of Andrew's patch, which removes the extra, and
some of the existing, layering violation.  It works by introducing the
concept of a hugepage "subpool" at the lower hugepage mm layer - that is a
finite logical pool of hugepages to allocate from.  hugetlbfs now creates
a subpool for each filesystem instance with a page limit set, and a
pointer to the subpool gets added to each allocated hugepage, instead of
the address_space pointer used now.  The subpool has its own lifetime and
is only freed once all pages in it _and_ all other references to it (i.e.
superblocks) are gone.

Subpools are optional - a NULL subpool pointer is taken by the code to
mean that no subpool limits are in effect.
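
A hedged sketch of the subpool object (field names illustrative of the
concept, not necessarily the exact patch):

    struct hugepage_subpool {
            spinlock_t lock;
            long count;             /* references, incl. superblocks */
            long max_hpages;        /* page limit, or -1 for no limit */
            long used_hpages;       /* pages charged against the limit */
    };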

Previous discussion of this bug can be found in "Fix refcounting in
hugetlbfs quota handling".  See:  https://lkml.org/lkml/2011/8/11/28 or
http://marc.info/?l=linux-mm&amp;m=126928970510627&amp;w=1

v2: Fixed a bug spotted by Hillf Danton, and removed the extra parameter to
alloc_huge_page() - since it already takes the vma, it is not necessary.

Signed-off-by: Andrew Barry &lt;abarry@cray.com&gt;
Signed-off-by: David Gibson &lt;david@gibson.dropbear.id.au&gt;
Cc: Hugh Dickins &lt;hughd@google.com&gt;
Cc: Mel Gorman &lt;mgorman@suse.de&gt;
Cc: Minchan Kim &lt;minchan.kim@gmail.com&gt;
Cc: Hillf Danton &lt;dhillf@gmail.com&gt;
Cc: Paul Mackerras &lt;paulus@samba.org&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
