2009-10-22  Linux 2.6.27.38 (tag: v2.6.27.38)  [Greg Kroah-Hartman]
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-21  USB: digi_acceleport: Fix broken unthrottle.  [Johan Hovold]
commit ba6b702f85a61561d329c4c11d3ed95604924f9a upstream. This patch fixes a regression introduced in 39892da44b21b5362eb848ca424d73a25ccc488f. Signed-off-by: Johan Hovold <jhovold@gmail.com> Acked-by: Oliver Neukum <oliver@neukum.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-21  SCSI: Fix protection scsi_data_buffer leak  [Martin K. Petersen]
commit b4c2554d40ceac130a8d062eaa8838ed22158c45 upstream. We would leak a scsi_data_buffer if the free_list command was of the protected variety. Reported-by: Boaz Harrosh <bharrosh@panasas.com> Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com> Signed-off-by: James Bottomley <James.Bottomley@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-21  usb-serial: fix crash when sub-driver updates firmware  [Alan Stern]
commit 0a3c8549ea7e94d74a41096d42bc6cdf43d183bf upstream. This patch (as1244) fixes a crash in usb-serial that occurs when a sub-driver returns a positive value from its attach method, indicating that new firmware was loaded and the device will disconnect and reconnect. The usb-serial core then skips the step of registering the port devices; when the disconnect occurs, the attempt to unregister the ports fails dramatically. This problem shows up with Keyspan devices and it might affect others as well. When the attach method returns a positive value, the patch sets num_ports to 0. This tells usb_serial_disconnect() not to try unregistering any of the ports; instead they are cleaned up by destroy_serial(). Signed-off-by: Alan Stern <stern@rowland.harvard.edu> Tested-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-12  Linux 2.6.27.37 (tag: v2.6.27.37)  [Greg Kroah-Hartman]
2009-10-12  time: catch xtime_nsec underflows and fix them  [john stultz]
commit 6c9bacb41c10ba84ff68f238e234d96f35fb64f7 upstream. Impact: fix time warp bug. Alex Shi and Yanmin Zhang have been noticing occasional time inconsistencies recently. Through their diagnosis, they found that the xtime_nsec value used in update_wall_time was occasionally going negative. After looking through the code for a while, I realized we have the possibility of an underflow when three conditions are met in update_wall_time():
  1) We have accumulated a second's worth of nanoseconds, so we incremented xtime.tv_sec and appropriately decremented xtime_nsec. (This doesn't cause xtime_nsec to go negative, but it can cause it to be small.)
  2) The remaining offset value is large, but just slightly less than cycle_interval.
  3) clocksource_adjust() is speeding up the clock, causing a corrective amount (compensating for the increase in the multiplier being multiplied against the unaccumulated offset value) to be subtracted from xtime_nsec. This can cause xtime_nsec to underflow.
Unfortunately, since we notify the NTP subsystem via second_overflow() whenever we accumulate a full second, and this affects the error accumulation that has already occurred, we cannot simply revert the accumulated second from xtime nor move the second accumulation to after the clocksource_adjust call without a change in behavior. This leaves us with (at least) two options:
  1) Simply return from clocksource_adjust() without making a change if we notice the adjustment would cause xtime_nsec to go negative. This would work, but I'm concerned that if a large adjustment was needed (due to the error being large), it may be possible to get stuck with an ever-increasing error that becomes too large to correct (since it may always force xtime_nsec negative). This may just be paranoia on my part.
  2) Catch xtime_nsec if it is negative, then add back the amount it is negative to both xtime_nsec and the error. This second method is consistent with how we've handled earlier rounding issues, and also has the benefit that the error being added is always in the opposite direction and always equal to or smaller than the correction being applied, so the risk of a corner case where things get out of control is lessened.
This patch fixes bug 11970, as tested by Yanmin Zhang: http://bugzilla.kernel.org/show_bug.cgi?id=11970 Reported-by: alex.shi@intel.com Signed-off-by: John Stultz <johnstul@us.ibm.com> Acked-by: Yanmin Zhang <yanmin_zhang@linux.intel.com> Tested-by: Yanmin Zhang <yanmin_zhang@linux.intel.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
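Option 2) above is the approach the patch takes. The following is only a toy, stand-alone sketch of the idea (plain C, illustrative names, not the kernel's timekeeping code):

    #include <stdio.h>
    #include <stdint.h>

    /* Toy model: if an adjustment drives the accumulated nanosecond count
     * negative, clamp it to zero and feed the same amount back into the
     * error term so it gets corrected gradually on later ticks. */
    static void catch_underflow(int64_t *xtime_nsec, int64_t *error, int shift)
    {
        if (*xtime_nsec < 0) {
            int64_t neg = -*xtime_nsec;
            *xtime_nsec = 0;
            *error += neg << shift;   /* same magnitude, opposite direction */
        }
    }

    int main(void)
    {
        int64_t xtime_nsec = -1500;   /* underflowed by a correction */
        int64_t error = 0;

        catch_underflow(&xtime_nsec, &error, 8);
        printf("xtime_nsec=%lld error=%lld\n",
               (long long)xtime_nsec, (long long)error);
        return 0;
    }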
2009-10-12  hpwdt.c: Add new HP BMC controller.  [Thomas Mingarelli]
commit d8100c3abfd32986a8820ce4e614b0223a2d22a9 upstream. Add the PCI-ID for the upcoming new BMC controller for HP hardware. Signed-off-by: Thomas Mingarelli <Thomas.Mingarelli@hp.com> Signed-off-by: Wim Van Sebroeck <wim@iguana.be> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-12  KVM: x86: Disallow hypercalls for guest callers in rings > 0 [CVE-2009-3290]  [Jan Kiszka]
[ backport to 2.6.27 by Chuck Ebbert <cebbert@redhat.com> ] commit 07708c4af1346ab1521b26a202f438366b7bcffd upstream. So far unprivileged guest callers running in ring 3 can issue, e.g., MMU hypercalls. Normally, such callers cannot provide any hand-crafted MMU command structure as it has to be passed by its physical address, but they can still crash the guest kernel by passing random addresses. To close the hole, this patch considers hypercalls valid only if issued from guest ring 0. This may be relaxed on a per-hypercall basis in the future if required. Signed-off-by: Jan Kiszka <jan.kiszka@siemens.com> Signed-off-by: Avi Kivity <avi@redhat.com> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-12  x86: Increase MIN_GAP to include randomized stack  [Michal Hocko]
[ trivial backport to 2.6.27: Chuck Ebbert <cebbert@redhat.com> ] commit 80938332d8cf652f6b16e0788cf0ca136befe0b5 upstream. Currently we are not including the randomized stack size when calculating the mmap_base address in arch_pick_mmap_layout for the topdown case. This might cause mmap_base to start in the stack's reserved area, because the stack is randomized by up to 1GB for 64-bit (8MB for 32-bit) while the minimum gap is only 128MB. If the stack really grows down to mmap_base, then mmap regions can be silently overwritten by stack values. Let's include the maximum stack randomization size in MIN_GAP, which is used as the lower bound for the gap in mmap. Signed-off-by: Michal Hocko <mhocko@suse.cz> LKML-Reference: <1252400515-6866-1-git-send-email-mhocko@suse.cz> Acked-by: Jiri Kosina <jkosina@suse.cz> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
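The numbers quoted above make the overlap concrete; a small arithmetic check (illustrative values, not kernel code):

    #include <stdio.h>

    /* On 64-bit the stack may be randomized downward by up to 1 GB, while
     * the old minimum gap between stack and mmap_base was only 128 MB, so
     * mmap_base could land inside the randomized stack area. */
    int main(void)
    {
        unsigned long mb = 1024UL * 1024UL;
        unsigned long stack_rnd_max = 1024UL * mb;  /* up to 1 GB */
        unsigned long min_gap_old = 128UL * mb;
        unsigned long min_gap_new = min_gap_old + stack_rnd_max;

        printf("old MIN_GAP %lu MB < max stack randomization %lu MB\n",
               min_gap_old / mb, stack_rnd_max / mb);
        printf("new MIN_GAP %lu MB\n", min_gap_new / mb);
        return 0;
    }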
2009-10-12  eCryptfs: Prevent lower dentry from going negative during unlink (CVE-2009-2908)  [Tyler Hicks]
commit 9c2d2056647790c5034d722bd24e9d913ebca73c upstream. When calling vfs_unlink() on the lower dentry, d_delete() turns the dentry into a negative dentry when the d_count is 1. This eventually caused a NULL pointer deref when a read() or write() was done and the negative dentry's d_inode was dereferenced in ecryptfs_read_update_atime() or ecryptfs_getxattr(). Placing mutt's tmpdir in an eCryptfs mount is what initially triggered the oops and I was able to reproduce it with the following sequence:
    open("/tmp/upper/foo", O_RDWR|O_CREAT|O_EXCL|O_NOFOLLOW, 0600) = 3
    link("/tmp/upper/foo", "/tmp/upper/bar") = 0
    unlink("/tmp/upper/foo") = 0
    open("/tmp/upper/bar", O_RDWR|O_CREAT|O_NOFOLLOW, 0600) = 4
    unlink("/tmp/upper/bar") = 0
    write(4, "eCryptfs test\n"..., 14 <unfinished ...>
    +++ killed by SIGKILL +++
https://bugs.launchpad.net/ecryptfs/+bug/387073 Reported-by: Loïc Minier <loic.minier@canonical.com> Cc: Serge Hallyn <serue@us.ibm.com> Cc: Dave Kleikamp <shaggy@linux.vnet.ibm.com> Cc: ecryptfs-devel@lists.launchpad.net Signed-off-by: Tyler Hicks <tyhicks@linux.vnet.ibm.com> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-12  x86: Don't leak 64-bit kernel register values to 32-bit processes  [Jan Beulich]
commit 24e35800cdc4350fc34e2bed37b608a9e13ab3b6 upstream x86: Don't leak 64-bit kernel register values to 32-bit processes While 32-bit processes can't directly access R8...R15, they can gain access to these registers by temporarily switching themselves into 64-bit mode. Therefore, registers not preserved anyway by called C functions (i.e. R8...R11) must be cleared prior to returning to user mode. Signed-off-by: Jan Beulich <jbeulich@novell.com> LKML-Reference: <4AC34D73020000780001744A@vpn.id2.novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-12  x86-64: slightly stream-line 32-bit syscall entry code  [Jan Beulich]
commit 295286a89107c353b9677bc604361c537fd6a1c0 upstream x86-64: slightly stream-line 32-bit syscall entry code [ required for following patch to apply properly ] Avoid updating registers or memory twice as well as needlessly loading or copying registers. Signed-off-by: Jan Beulich <jbeulich@novell.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-12  net: Fix wrong sizeof  [Jean Delvare]
commit b607bd900051efc3308c4edc65dd98b34b230021 upstream. Which is why I have always preferred sizeof(struct foo) over sizeof(var). Signed-off-by: Jean Delvare <khali@linux-fr.org> Acked-by: Randy Dunlap <rdunlap@xenotime.net> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
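A minimal, self-contained example of the sizeof pitfall this refers to (the struct and names are hypothetical, not the patched driver code):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    struct foo { int a; char name[32]; };

    int main(void)
    {
        struct foo *p = malloc(sizeof(struct foo));
        if (!p)
            return 1;

        /* buggy pattern: memset(p, 0, sizeof(p)) clears only 4 or 8 bytes */
        memset(p, 0, sizeof(*p));           /* clears the whole structure */

        printf("cleared %zu bytes, not %zu\n", sizeof(*p), sizeof(p));
        free(p);
        return 0;
    }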
2009-10-05  Linux 2.6.27.36 (tag: v2.6.27.36)  [Greg Kroah-Hartman]
2009-10-05  mmap: avoid unnecessary anon_vma lock acquisition in vma_adjust()  [Lee Schermerhorn]
commit 252c5f94d944487e9f50ece7942b0fbf659c5c31 upstream.
We noticed very erratic behavior [throughput] with the AIM7 shared workload running on recent distro [SLES11] and mainline kernels on an 8-socket, 32-core, 256GB x86_64 platform. On the SLES11 kernel [2.6.27.19+] with Barcelona processors, as we increased the load [10s of thousands of tasks], the throughput would vary between two "plateaus"--one at ~65K jobs per minute and one at ~130K jpm. The simple patch below causes the results to smooth out at the ~130K plateau.
But wait, there's more: we do not see this behavior on smaller platforms--e.g., 4 socket/8 core. This could be the result of the larger number of cpus on the larger platform--a scalability issue--or it could be the result of the larger number of interconnect "hops" between some nodes in this platform and how the tasks for a given load end up distributed over the nodes' cpus and memories--a stochastic NUMA effect. The variability in the results is less pronounced [on the same platform] with Shanghai processors and with mainline kernels. With 31-rc6 on Shanghai processors and 288 file systems on 288 fibre attached storage volumes, the curves [jpm vs load] are both quite flat, with the patched kernel consistently producing ~3.9% better throughput [~80K jpm vs ~77K jpm] than the unpatched kernel.
Profiling indicated that the "slow" runs were incurring high[er] contention on an anon_vma lock in vma_adjust(), apparently called from the sbrk() system call.
The patch: a comment in mm/mmap.c:vma_adjust() suggests that we don't really need the anon_vma lock when we're only adjusting the end of a vma, as is the case for brk(). The comment questions whether it's worthwhile to optimize for this case. Apparently, on the newer, larger x86_64 platforms, with interesting NUMA topologies, it is worthwhile--especially considering that the patch [if correct!] is quite simple. We can detect this condition--no overlap with next vma--by noting a NULL "importer". The anon_vma pointer will also be NULL in this case, so simply avoid loading vma->anon_vma to avoid the lock. However, we DO need to take the anon_vma lock when we're inserting a vma ['insert' non-NULL] even when we have no overlap [NULL "importer"], so we need to check for 'insert', as well. And Hugh points out that we should also take it when adjusting vm_start (so that rmap.c can rely upon vma_address() while it holds the anon_vma lock).
akpm: Zhang Yanmin reports a 150% throughput improvement with aim7, so it might be -stable material even though this isn't a regression: "this issue is not clear on dual socket Nehalem machine (2*4*2 cpu), but is severe on large machine (4*8*2 cpu)"
[hugh.dickins@tiscali.co.uk: test vma start too] Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com> Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk> Cc: Nick Piggin <npiggin@suse.de> Cc: Eric Whitney <eric.whitney@hp.com> Tested-by: "Zhang, Yanmin" <yanmin_zhang@linux.intel.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-05  mm: fix anonymous dirtying  [Hugh Dickins]
commit 1ac0cb5d0e22d5e483f56b2bc12172dec1cf7536 upstream. do_anonymous_page() has been wrong to dirty the pte regardless. If it's not going to mark the pte writable, then it won't help to mark it dirty here, and clogs up memory with pages which will need swap instead of being thrown away. Especially wrong if no overcommit is chosen, and this vma is not yet VM_ACCOUNTed - we could exceed the limit and OOM despite no overcommit. Signed-off-by: Hugh Dickins <hugh.dickins@tiscali.co.uk> Acked-by: Rik van Riel <riel@redhat.com> Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com> Cc: KOSAKI Motohiro <kosaki.motohiro@jp.fujitsu.com> Cc: Nick Piggin <npiggin@suse.de> Cc: Mel Gorman <mel@csn.ul.ie> Cc: Minchan Kim <minchan.kim@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-05  hugetlb: restore interleaving of bootmem huge pages (2.6.31)  [Lee Schermerhorn]
Not upstream as it is fixed differently in .32
I noticed that alloc_bootmem_huge_page() will only advance to the next node on failure to allocate a huge page. I asked about this on linux-mm and linux-numa, cc'ing the usual huge page suspects.
Mel Gorman responded: I strongly suspect that the same node being used until allocation failure instead of round-robin is an oversight and not deliberate at all. It appears to be a side-effect of a fix made way back in commit 63b4613c3f0d4b724ba259dc6c201bb68b884e1a ["hugetlb: fix hugepage allocation with memoryless nodes"]. Prior to that patch it looked like allocations would always round-robin even when allocation was successful.
Andy Whitcroft countered that the existing behavior looked like Andi Kleen's original implementation and suggested that we ask him. We did, and Andi replied that his intention was to interleave the allocations. So, ...
This patch moves the advance of the hstate next node from which to allocate up before the test for success of the attempted allocation. This will unconditionally advance the next node from which to alloc, interleaving successful allocations over the nodes with sufficient contiguous memory, and skipping over nodes that fail the huge page allocation attempt. Note that alloc_bootmem_huge_page() will only be called for huge pages of order > MAX_ORDER.
Signed-off-by: Lee Schermerhorn <lee.schermerhorn@hp.com> Reviewed-by: Andi Kleen <ak@linux.intel.com> Cc: Mel Gorman <mel@csn.ul.ie> Cc: David Rientjes <rientjes@google.com> Cc: Adam Litke <agl@us.ibm.com> Cc: Andy Whitcroft <apw@canonical.com> Cc: Eric Whitney <eric.whitney@hp.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-05  netfilter: bridge: refcount fix  [Patrick McHardy]
netfilter: bridge: refcount fix Upstream commit f3abc9b9: commit f216f082b2b37c4943f1e7c393e2786648d48f6f ([NETFILTER]: bridge netfilter: deal with martians correctly) added a refcount leak on in_dev. Instead of using in_dev_get(), we can use __in_dev_get_rcu(), as netfilter hooks are running under rcu_read_lock(), as pointed by Patrick. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: Patrick McHardy <kaber@trash.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-05  net: Make the copy length in af_packet sockopt handler unsigned  [Arjan van de Ven]
Fixed upstream in commit b7058842c940ad2c08dd829b21e5c92ebe3b8758 in a different way. The length of the to-copy data structure is currently stored in a signed integer. However, many comparisons are done against sizeof(..), which is unsigned. It is more suitable for this variable to be unsigned, so that these comparisons are naturally done correctly. Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Ingo Molnar <mingo@elte.hu> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-05  net ax25: Fix signed comparison in the sockopt handler  [Arjan van de Ven]
Fixed upstream in commit b7058842c940ad2c08dd829b21e5c92ebe3b8758 in a different way. The ax25 code tried to use
    if (optlen < sizeof(int))
        return -EINVAL;
as a security check against optlen being negative (or zero) in the set socket option. Unfortunately, "sizeof(int)" is an unsigned quantity, with the result that the whole comparison is done as unsigned, letting negative values slip through. This patch changes this to
    if (optlen < (int)sizeof(int))
        return -EINVAL;
so that the comparison is done as signed, and negative values get properly caught. Signed-off-by: Arjan van de Ven <arjan@linux.intel.com> Cc: David S. Miller <davem@davemloft.net> Cc: Ingo Molnar <mingo@elte.hu> Cc: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
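A self-contained demonstration of the signed/unsigned trap described above (stand-alone C, not the ax25 code itself):

    #include <stdio.h>

    static int check_buggy(int optlen)
    {
        if (optlen < sizeof(int))       /* sizeof is unsigned: -1 wraps to a huge value */
            return -1;                  /* would be -EINVAL */
        return 0;
    }

    static int check_fixed(int optlen)
    {
        if (optlen < (int)sizeof(int))  /* signed comparison catches negatives */
            return -1;
        return 0;
    }

    int main(void)
    {
        /* the buggy check lets optlen == -1 through; the fixed one rejects it */
        printf("optlen=-1: buggy=%d fixed=%d\n", check_buggy(-1), check_fixed(-1));
        return 0;
    }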
2009-10-05  Fix incorrect stable backport to bas_gigaset  [Tilman Schmidt]
bas_gigaset: correctly allocate USB interrupt transfer buffer [ Upstream commit 170ebf85160dd128e1c4206cc197cce7d1424705 ] This incorrect backport to 2.6.28.10 placed some code into the probe function which used a pointer before it was initialized. Move this code to the correct place (as it is upstream). Signed-off-by: Stefan Bader <stefan.bader@canonical.com> Acked-by: Tim Gardner <tim.gardner@canonical.com> Acked-by: Steve Conklin <steve.conklin@canonical.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-05  pcnet_cs: Fix misuse of the equality operator.  [Cord Walter]
commit a9d3a146923d374b945aa388dc884df69564a818 upstream. Signed-off-by: Cord Walter <qord@cwalter.net> Signed-off-by: Komuro <komurojun-mbn@nifty.com> Signed-off-by: David S. Miller <davem@davemloft.net> Cc: Christoph Biedl <linux-kernel.bfrz@manchmal.in-ulm.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-05  enc28j60: fix RX buffer overflow  [Baruch Siach]
commit 22692018b93f0782cda5a843cecfffda1854eb8d upstream. The enc28j60 driver doesn't check whether the length of the packet as reported by the hardware fits into the preallocated buffer. When stressed, the hardware may report insanely large packets even though the "Receive OK" bit is set. Fix this. Signed-off-by: Baruch Siach <baruch@tkos.co.il> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
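A sketch of the kind of bounds check involved, as a stand-alone illustration (buffer size and names are assumptions, not the driver's code):

    #include <stdio.h>
    #include <string.h>

    #define RX_BUF_SIZE 1518        /* typical maximum Ethernet frame */

    /* Validate the hardware-reported length before copying into the
     * preallocated buffer; an insane length is rejected instead of
     * overflowing the buffer. */
    static int receive_packet(unsigned char *buf, size_t buf_len,
                              const unsigned char *hw_data, size_t hw_len)
    {
        if (hw_len == 0 || hw_len > buf_len)
            return -1;
        memcpy(buf, hw_data, hw_len);
        return (int)hw_len;
    }

    int main(void)
    {
        unsigned char rx[RX_BUF_SIZE];
        unsigned char frame[64] = { 0 };

        printf("%d\n", receive_packet(rx, sizeof(rx), frame, 65535));         /* rejected */
        printf("%d\n", receive_packet(rx, sizeof(rx), frame, sizeof(frame))); /* copied   */
        return 0;
    }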
2009-10-05  p54usb: add Zcomax XG-705A usbid  [Christian Lamparter]
commit f7f71173ea69d4dabf166533beffa9294090b7ef upstream. This patch adds a new usbid for Zcomax XG-705A to the device table. Reported-by: Jari Jaakola <jari.jaakola@gmail.com> Signed-off-by: Christian Lamparter <chunkeey@googlemail.com> Signed-off-by: John W. Linville <linville@tuxdriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-10-05  fs: make sure data stored into inode is properly seen before unlocking new inode  [Jan Kara]
commit 580be0837a7a59b207c3d5c661d044d8dd0a6a30 upstream. In theory it could happen that on one CPU we initialize a new inode but the clearing of I_NEW | I_LOCK gets reordered before some of the initialization. Thus on another CPU we would return a not fully uptodate inode from iget_locked(). This seems to fix a corruption issue on ext3 mounted over NFS. [akpm@linux-foundation.org: add some commentary] Signed-off-by: Jan Kara <jack@suse.cz> Cc: Christoph Hellwig <hch@infradead.org> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
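The ordering problem is the classic publication pattern; a toy stand-alone version using C11 atomics (illustrative names, not the VFS code):

    #include <stdatomic.h>
    #include <stdio.h>

    struct toy_inode {
        int data;               /* fields set up by the initializer */
        atomic_int is_new;      /* 1 = still I_NEW, 0 = published   */
    };

    static void publish(struct toy_inode *ino)
    {
        ino->data = 42;
        /* release ordering plays the role of the barrier the patch adds:
         * the flag must not be seen clear before the fields are visible */
        atomic_store_explicit(&ino->is_new, 0, memory_order_release);
    }

    static int lookup(struct toy_inode *ino)
    {
        if (atomic_load_explicit(&ino->is_new, memory_order_acquire) == 0)
            return ino->data;   /* guaranteed fully initialized */
        return -1;              /* still being set up */
    }

    int main(void)
    {
        struct toy_inode ino = { .data = 0, .is_new = 1 };
        publish(&ino);
        printf("%d\n", lookup(&ino));
        return 0;
    }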
2009-09-24  Linux 2.6.27.35 (tag: v2.6.27.35)  [Greg Kroah-Hartman]
2009-09-24  nfsd: fix hung up of nfs client while sync write data to nfs server  [Wei Yongjun]
commit a0d24b295aed7a9daf4ca36bd4784e4d40f82303 upstream. nfsd: fix hung up of nfs client while sync write data to nfs server. Commit 'Short write in nfsd becomes a full write to the client' (31dec2538e45e9fff2007ea1f4c6bae9f78db724) broke sync writes. Reproduce with the following commands:
    $ mount -t nfs -o sync 192.168.0.21:/nfsroot /mnt
    $ cd /mnt
    $ echo aaaa > temp.txt
The nfs client then hangs. In SYNC mode the server always returns a write count of 0 to the client. This is because the value of host_err in nfsd_vfs_write() is overwritten in SYNC mode by 'host_err = nfsd_sync(file);', and we then return host_err (which is now 0) as the write count. This patch fixes the problem. Signed-off-by: Wei Yongjun <yjwei@cn.fujitsu.com> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
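A toy reproduction of the bug class, with stub functions standing in for nfsd_vfs_write()/nfsd_sync() (not the nfsd code itself):

    #include <stdio.h>

    static long do_write(long len) { return len; }  /* bytes written */
    static long do_sync(void)      { return 0; }    /* 0 on success  */

    static long write_buggy(long len)
    {
        long err = do_write(len);
        if (err >= 0)
            err = do_sync();    /* clobbers the byte count */
        return err;             /* client is told 0 bytes were written */
    }

    static long write_fixed(long len)
    {
        long cnt = do_write(len);
        if (cnt >= 0 && do_sync() < 0)
            cnt = -1;           /* only a sync failure replaces the count */
        return cnt;
    }

    int main(void)
    {
        printf("buggy=%ld fixed=%ld\n", write_buggy(4096), write_fixed(4096));
        return 0;
    }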
2009-09-24  Short write in nfsd becomes a full write to the client  [David Shaw]
commit 31dec2538e45e9fff2007ea1f4c6bae9f78db724 upstream. Short write in nfsd becomes a full write to the client. If a filesystem being written to via NFS returns a short write count (as opposed to an error) to nfsd, nfsd treats that as a success for the entire write, rather than the short count that actually succeeded. For example, given an 8192 byte write, if the underlying filesystem only writes 4096 bytes, nfsd will ack back to the nfs client that all 8192 bytes were written. The nfs client does have retry logic for short writes, but this is never called as the client is told the complete write succeeded. There are probably other ways it could happen, but in my case it happened with a fuse (filesystem in userspace) filesystem, which can rather easily have a partial write. Here is a patch to properly return the short write count to the client. Signed-off-by: David Shaw <dshaw@jabberwocky.com> Signed-off-by: J. Bruce Fields <bfields@citi.umich.edu> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-24  sound: oxygen: work around MCE when changing volume  [Clemens Ladisch]
commit f1bc07af9a9edc5c1d4bdd971f7099316ed2e405 upstream. When the volume is changed continuously (e.g., when the user drags a volume slider with the mouse), the driver does lots of I2C writes. Apparently, the sound chip can get confused when we poll the I2C status register too much, and fails to complete a read from it. On the PCI-E models, the PCI-E/PCI bridge gets upset by this and generates a machine check exception. To avoid this, this patch replaces the polling with an unconditional wait that is guaranteed to be long enough. Signed-off-by: Clemens Ladisch <clemens@ladisch.de> Tested-by: Johann Messner <johann.messner at jku.at> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-24  powerpc/pseries: Fix to handle slb resize across migration  [Brian King]
commit 46db2f86a3b2a94e0b33e0b4548fb7b7b6bdff66 upstream. The SLB can change sizes across a live migration, which was not being handled, resulting in possible machine crashes during migration if migrating to a machine which has a smaller max SLB size than the source machine. Fix this by first reducing the SLB size to the minimum possible value, which is 32, prior to migration. Then during the device tree update which occurs after migration, we make the call to ensure the SLB gets updated. Also add the slb_size to the lparcfg output so that the migration tools can check to make sure the kernel has this capability before allowing migration in scenarios where the SLB size will change. BenH: Fixed #include <asm/mmu-hash64.h> -> <asm/mmu.h> to avoid breaking ppc32 build Signed-off-by: Brian King <brking@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-24  libata: fix off-by-one error in ata_tf_read_block()  [Tejun Heo]
commit ac8672ea922bde59acf50eaa1eaa1640a6395fd2 upstream. ata_tf_read_block() has an off-by-one error when converting a CHS address to LBA. The bug isn't very visible because ata_tf_read_block() is used only when generating sense data for a failed RW command, and CHS addressing isn't used too often these days. This problem was spotted by Atsushi Nemoto. Signed-off-by: Tejun Heo <tj@kernel.org> Reported-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp> Signed-off-by: Jeff Garzik <jgarzik@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
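For reference, the standard CHS-to-LBA conversion; the sector field is 1-based, which is the usual source of such an off-by-one (the geometry values below are just examples):

    #include <stdio.h>

    static unsigned long long chs_to_lba(unsigned int cyl, unsigned int head,
                                         unsigned int sect,
                                         unsigned int heads, unsigned int sectors)
    {
        /* sectors are numbered from 1, hence the "- 1" */
        return ((unsigned long long)cyl * heads + head) * sectors + (sect - 1);
    }

    int main(void)
    {
        /* C/H/S 0/0/1 is the first sector on the medium: LBA 0 */
        printf("%llu\n", chs_to_lba(0, 0, 1, 16, 63));
        return 0;
    }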
2009-09-24  ALSA: cs46xx - Fix minimum period size  [Sophie Hamilton]
commit 6148b130eb84edc76e4fa88da1877b27be6c2f06 upstream. Fix minimum period size for cs46xx cards. This fixes a problem in the case where neither a period size nor a buffer size is passed to ALSA; this is the case in Audacious, OpenAL, and others. Signed-off-by: Sophie Hamilton <kernel@theblob.org> Signed-off-by: Takashi Iwai <tiwai@suse.de> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-24  udf: Use device size when drive reported bogus number of written blocks  [Jan Kara]
commit 24a5d59f3477bcff4c069ff4d0ca9a3e037d0235 upstream. Some drives report 0 as the number of written blocks when there are some blocks recorded. Use device size in such case so that we can automagically mount such media. Signed-off-by: Jan Kara <jack@suse.cz> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-24  TPM: Fixup boot probe timeout for tpm_tis driver  [Jason Gunthorpe]
commit ec57935837a78f9661125b08a5d08b697568e040 upstream. When probing the device in tpm_tis_init, the call to request_locality uses timeout_a, which wasn't being initialized until after request_locality. This results in request_locality falsely timing out if the chip is still starting. Move the initialization to before request_locality. This probably only matters for embedded cases (i.e. mine); a BIOS likely gets the TPM into a state where this code path isn't necessary. Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com> Acked-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com> Signed-off-by: James Morris <jmorris@namei.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-24  powerpc/ps3: Workaround for flash memory I/O error  [Geoff Levand]
commit bc00351edd5c1b84d48c3fdca740fedfce4ae6ce upstream. A workaround for flash memory I/O errors when the PS3 internal hard disk has not been formatted for OtherOS use. This error condition mainly affects 'Live CD' users who have not formatted the PS3's internal hard disk for OtherOS. Fixes errors similar to these when using the ps3-flash-util or ps3-boot-game-os programs:
    ps3flash read failed 0x2050000
    os_area_header_read: read error: os_area_header: Input/output error
    main:627: os_area_read_hp error.
    ERROR: can't change boot flag
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-24  binfmt_elf: fix PT_INTERP bss handling  [Roland McGrath]
commit 9f0ab4a3f0fdb1ff404d150618ace2fa069bb2e1 upstream. In fs/binfmt_elf.c, load_elf_interp() calls padzero() for .bss even if the PT_LOAD has no PROT_WRITE and no .bss. This generates EFAULT. Here is a small test case. (Yes, there are other, useful PT_INTERP which have only .text and no .data/.bss.)
    ----- ptinterp.S
    _start: .globl _start
            nop
            int3
    -----
    $ gcc -m32 -nostartfiles -nostdlib -o ptinterp ptinterp.S
    $ gcc -m32 -Wl,--dynamic-linker=ptinterp -o hello hello.c
    $ ./hello
    Segmentation fault        # during execve() itself
After applying the patch:
    $ ./hello
    Trace trap                # user-mode execution after execve() finishes
If the ELF headers are actually self-inconsistent, then dying is fine. But having no PROT_WRITE segment is perfectly normal and correct if there is no segment with p_memsz > p_filesz (i.e. bss). John Reiser suggested checking for PROT_WRITE in the bss logic. I think it makes most sense to simply apply the bss logic only when there is bss. This patch looks less trivial than it is due to some reindentation. It just moves the "if (last_bss > elf_bss) {" test up to include the partial-page bss logic as well as the more-pages bss logic. Reported-by: John Reiser <jreiser@bitwagon.com> Signed-off-by: Roland McGrath <roland@redhat.com> Signed-off-by: James Morris <jmorris@namei.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-15  Linux 2.6.27.34 (tag: v2.6.27.34)  [Greg Kroah-Hartman]
2009-09-15  slub: Fix kmem_cache_destroy() with SLAB_DESTROY_BY_RCU  [Eric Dumazet]
commit d76b1590e06a63a3d8697168cd0aabf1c4b3cb3a upstream. kmem_cache_destroy() should call rcu_barrier() *after* kmem_cache_close() and *before* sysfs_slab_remove() or risk rcu_free_slab() being called after kmem_cache is deleted (kfreed). rmmod nf_conntrack can crash the machine because it has to kmem_cache_destroy() a SLAB_DESTROY_BY_RCU enabled cache. Reported-by: Zdenek Kabelac <zdenek.kabelac@gmail.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Pekka Enberg <penberg@cs.helsinki.fi> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-15  JFFS2: add missing verify buffer allocation/deallocation  [Massimo Cirillo]
commit bc8cec0dff072f1a45ce7f6b2c5234bb3411ac51 upstream. The function jffs2_nor_wbuf_flash_setup() doesn't allocate the verify buffer when CONFIG_JFFS2_FS_WBUF_VERIFY is defined, causing a kernel panic when that macro is enabled and the verify function is called. Similarly, jffs2_nor_wbuf_flash_cleanup() must free the buffer if CONFIG_JFFS2_FS_WBUF_VERIFY is enabled. The following patch fixes the problem; it applies to the 2.6.30 kernel. Signed-off-by: Massimo Cirillo <maxcir@gmail.com> Signed-off-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com> Signed-off-by: David Woodhouse <David.Woodhouse@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-15  net: net_assign_generic() fix  [Eric Dumazet]
[ Upstream commit 144586301f6af5ae5943a002f030d8c626fa4fdd ] memcpy() should take into account the size of the pointers, not only the number of pointers to copy. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Acked-by: Pavel Emelyanov <xemul@openvz.org> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
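A stand-alone illustration of the bug class (copying an array of pointers; not the net namespace code):

    #include <stdio.h>
    #include <string.h>

    int main(void)
    {
        const char *src[3] = { "a", "b", "c" };
        const char *dst[3] = { 0 };

        /* buggy: memcpy(dst, src, 3) copies only 3 bytes, not 3 pointers */
        memcpy(dst, src, 3 * sizeof(src[0]));

        printf("%s %s %s\n", dst[0], dst[1], dst[2]);
        return 0;
    }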
2009-09-15  E100: fix interaction with swiotlb on X86.  [Krzysztof Hałasa]
[ Upstream commit 6ff9c2e7fa8ca63a575792534b63c5092099c286 ] E100 places its RX packet descriptors inside skb->data and uses them with a bidirectional streaming DMA mapping. Data in the descriptors is accessed simultaneously by the chip (writing status and size when a packet is received) and the CPU (reading to check if the packet was received). This isn't a valid usage of the PCI DMA API, which requires the use of coherent (consistent) memory for such a purpose. Unfortunately, e100 chips working in "simplified" RX mode have to store received data directly after the descriptor. Fixing the driver to conform to the API would require using the unsupported "flexible" RX mode, or receiving data into coherent memory and using the CPU to copy it to network buffers. This patch, while not yet making the driver conform to the PCI DMA API, allows it to work correctly on X86 with swiotlb (while not breaking other architectures). Signed-off-by: Krzysztof Hałasa <khc@pm.waw.pl> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-09  Linux 2.6.27.33 (tag: v2.6.27.33)  [Greg Kroah-Hartman]
2009-09-09  OCFS2: fix build error  [Greg Kroah-Hartman]
Somehow a previous patch did not get committed correctly. This fixes the build. Thanks to Jayson King, Michael Tokarev, Joel Becker, and Chuck Ebbert for pointing out the problem, and the solution. Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  Linux 2.6.27.32 (tag: v2.6.27.32)  [Greg Kroah-Hartman]
2009-09-08  ocfs2: ocfs2_write_begin_nolock() should handle len=0  [Sunil Mushran]
commit 8379e7c46cc48f51197dd663fc6676f47f2a1e71 upstream. Bug introduced by mainline commit e7432675f8ca868a4af365759a8d4c3779a3d922 The bug causes ocfs2_write_begin_nolock() to oops when len=0. Signed-off-by: Sunil Mushran <sunil.mushran@oracle.com> Signed-off-by: Joel Becker <joel.becker@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  SUNRPC: Fix tcp reconnection  [Trond Myklebust]
This fixes a problem that was reported as Red Hat Bugzilla entry number 485339, in which rpciod starts looping on the TCP connection code, rendering the NFS client unusable for 1/2 minute or so. It is basically a backport of commit f75e6745aa3084124ae1434fd7629853bdaf6798 (SUNRPC: Fix the problem of EADDRNOTAVAIL syslog floods on reconnect) Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  SCSI: sr: report more accurate drive status after closing the tray.  [Peter Jones]
commit 96bcc722c47d07b6fd05c9d0cb3ab8ea5574c5b1 upstream [SCSI] sr: report more accurate drive status after closing the tray. So, what's happening here is that the drive is reporting a sense of 2/4/1 ("logical unit is becoming ready") from sr_test_unit_ready(), and then we ask for the media event notification before checking that result at all. The check_media_event_descriptor() call isn't getting a check condition, but it's also reporting that the tray is closed and that there's no media. In actuality it doesn't yet know if there's media or not, but there's no way to express that in the media event status field. My current thought is that if it told us the device isn't yet ready, we should return that immediately, since there's nothing that'll tell us any more data than that reliably: Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  Remove low_latency flag setting from nozomi and mxser drivers  [Chuck Ebbert]
commit 4d8d4d251df8eaaa3dae71c8cfa7fbf4510d967d upstream [ cebbert@redhat.com: backport to 2.6.27 ] Remove low_latency flag setting from nozomi and mxser drivers The kernel oopses if this flag is set. [and neither driver should set it as they call tty_flip_buffer_push from IRQ paths so have always been buggy] Signed-off-by: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  USB: removal of tty->low_latency hack dating back to the old serial code  [Oliver Neukum]
commit 2400a2bfbd0e912193fe3b077f492d4980141813 upstream [ cebbert@redhat.com: backport to 2.6.27 ] USB: removal of tty->low_latency hack dating back to the old serial code This removes tty->low_latency from all USB serial drivers that push data into the tty layer at hard interrupt context. It's no longer needed and actually harmful. Signed-off-by: Oliver Neukum <oliver@neukum.org> Cc: Alan Cox <alan@lxorguk.ukuu.org.uk> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2009-09-08  parport: quickfix the proc registration bug  [Alan Cox]
commit 05ad709d04799125ed85dd816fdb558258102172 upstream. parport: quickfix the proc registration bug. Ideally we should have a directory of drivers and a link to the 'active' driver. For now, just show the first device, which is effectively the existing semantics without a warning. This is an update on the original buggy patch that I then forgot to resubmit. Confusingly, it was proposed by Red Hat, written by Etched Pixels, fixed and submitted by Intel ... Resolves-Bug: http://bugzilla.kernel.org/show_bug.cgi?id=9749 Signed-off-by: Alan Cox <alan@linux.intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Chuck Ebbert <cebbert@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>