path: root/arch
Age  Commit message  Author
2010-12-14nmi: fix clock comparator revalidationHeiko Carstens
commit e8129c642155616d9e2160a75f103e127c8c3708 upstream. On each machine check all registers are revalidated. The save area for the clock comparator however only contains the uppermost seven bytes of the former contents, if valid. Therefore the machine check handler uses a store clock instruction to get the current time and writes that to the clock comparator register, which in turn will generate an immediate timer interrupt. However, within the lowcore the expected time of the next timer interrupt is stored. If the interrupt happens before that time the handler won't be called. In turn the clock comparator won't be reprogrammed and therefore the interrupt condition stays pending, which causes an interrupt loop until the expected time is reached. On NOHZ machines this can result in unresponsive machines since the time of the next expected interrupt can be a couple of days in the future. To fix this, just revalidate the clock comparator register with the expected value. In addition the special handling for udelay must be changed as well. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
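A minimal sketch of the revalidation described above; the helper and field names are illustrative placeholders, not the real s390 lowcore layout or machine check code:

    /*
     * Sketch: on machine check, restore the clock comparator from the
     * expected value kept in the lowcore instead of loading the current
     * TOD clock, so no premature timer interrupt is raised.
     */
    static void revalidate_clock_comparator_sketch(struct lowcore_sketch *lc)
    {
            /* old behaviour (problematic): set_comparator(read_tod_clock()); */
            set_comparator(lc->clock_comparator);   /* expected next interrupt */
    }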
2010-12-14OMAP3: DMA: Errata i541: sDMA FIFO draining does not finishPeter Ujfalusi
commit 0e4905c0199d683497833be60a428c784d7575b8 upstream. Implement the suggested workaround for OMAP3 regarding the sDMA draining issue when the channel is disabled on the fly. This errata affects the following configuration: sDMA transfer is source synchronized, Buffering is enabled, SmartStandby is selected. The issue can be easily reproduced by creating an overrun situation while recording audio. Either introduce load on the CPU: nice -19 arecord -D hw:0 -M -B 10000 -F 5000 -f dat > /dev/null & \ dd if=/dev/urandom of=/dev/null or suspend and resume arecord: arecord -D hw:0 -M -B 10000 -F 5000 -f dat > /dev/null CTRL+Z; fg; CTRL+Z; fg; ... In case of overrun audio stops DMA, and restarts it (without resetting the sDMA channel). When we hit this errata in the stop case (sDMA drain did not complete), at the next start the sDMA is not going to be operational (it is still draining). This leads to a DMA stall condition. On OMAP3 we can recover with an sDMA channel reset; it has been observed that introducing unrelated sDMA activity might also help (reading from MMC for example). The same errata exists for OMAP2, where the suggestion is to disable the buffering to avoid this type of error. On OMAP3 the suggestion is to set sDMA to NoStandby before disabling the channel, wait for the drain to finish, then configure sDMA to SmartStandby again. Signed-off-by: Peter Ujfalusi <peter.ujfalusi@nokia.com> Acked-by: Jarkko Nikula <jhnikula@gmail.com> Signed-off-by: Andi Kleen <ak@linux.intel.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Manjunath Kondaiah G <manjugk@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
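In outline, the workaround sequence reads roughly like the sketch below; the register accessors are made-up placeholders, not the actual omap sDMA API:

    /* Errata i541 workaround, source-synchronized transfers only (sketch). */
    static void omap3_disable_src_synced_channel(int ch)
    {
            sdma_set_standby_mode(SDMA_NOSTANDBY);     /* leave SmartStandby   */
            sdma_clear_enable_bit(ch);                 /* disable the channel  */
            while (sdma_fifo_draining(ch))             /* wait for FIFO drain  */
                    cpu_relax();
            sdma_set_standby_mode(SDMA_SMARTSTANDBY);  /* restore SmartStandby */
    }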
2010-12-14omap: dma: Fix buffering disable bit setting for omap24xxJarkko Nikula
commit 3e57f1626b5febe5cc99aa6870377deef3ae03cc upstream. An errata workaround for omap24xx is not setting the buffering disable bit 25, which is its purpose, but the channel enable bit 7 instead. Background for this fix is the DMA stalling issue with the ASoC omap-mcbsp driver. Peter Ujfalusi <peter.ujfalusi@nokia.com> has found an issue in recording where the DMA stall could happen if there was a buffer overrun detected by ALSA and the DMA was stopped and restarted due to that. This problem is known to occur on both OMAP2420 and OMAP3. It can recover on OMAP3 after a dma free, dma request and reconfiguration cycle. However, on OMAP2420 it seems that the only way to recover is a reset. The problem was not visible before commit c12abc0. That commit changed the McBSP transmitter/receiver to be released from reset only when needed. That is, an enabled McBSP transmitter without transmission was able to prevent this DMA stall problem on the receiving side, and the underlying problem did not show up until now. The McBSP transmitter itself does not seem to be the reason, since the DMA stall does not recover by enabling the transmission after the stall. Debugging showed that there was a DMA write active during DMA stop time and it never completed even when restarting the DMA. Experimenting showed that the DMA buffering disable bit could be used to avoid stalling when using source synchronized transfers. However that could have a performance hit, and the OMAP3 TRM states that buffering disable is not allowed for destination synchronized transfers, so a subsequent patch will implement a method to complete DMA writes when stopping. This patch is based on the assumption that the complete lock-up on OMAP2420 is a different but related problem. I don't have access to the OMAP2420 errata, but I believe this old workaround is here for a reason; unfortunately a wrong bit was typed and the problem showed up only now. Signed-off-by: Jarkko Nikula <jhnikula@gmail.com> Signed-off-by: Peter Ujfalusi <peter.ujfalusi@nokia.com> Acked-by: Manjunath Kondaiah G <manjugk@ti.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
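The whole fix amounts to a one-bit change; roughly (the register accessor names are placeholders for the omap DMA register helpers):

    u32 ccr = dma_read_ccr(lch);
    /* wrong:  ccr |= 1 << 7;    -- bit 7 is the channel ENABLE bit     */
    ccr |= 1 << 25;              /* bit 25 is the buffering disable bit */
    dma_write_ccr(lch, ccr);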
2010-12-14nohz/s390: fix arch_needs_cpu() return value on offline cpusHeiko Carstens
commit 398812159e328478ae49b4bd01f0d71efea96c39 upstream. This fixes the same problem as described in the patch "nohz: fix printk_needs_cpu() return value on offline cpus" for the arch_needs_cpu() primitive: arch_needs_cpu() may return 1 if called on offline cpus. When a cpu gets offlined it schedules the idle process which, before killing its own cpu, will call tick_nohz_stop_sched_tick(). That function in turn will call arch_needs_cpu() in order to check if the local tick can be disabled. On offline cpus this function should naturally return 0 since regardless of whether the tick gets disabled or not the cpu will be dead shortly after. That is besides the fact that __cpu_disable() should already have made sure that no interrupts on the offlined cpu will be delivered anyway. In this case it prevents tick_nohz_stop_sched_tick() from calling select_nohz_load_balancer(). No idea if that really is a problem. However what made me debug this is that on 2.6.32 the function get_nohz_load_balancer() is used within __mod_timer() to select a cpu on which a timer gets enqueued. If arch_needs_cpu() returns 1 then the nohz_load_balancer cpu doesn't get updated when a cpu gets offlined. It may contain the cpu number of an offline cpu. In turn timers get enqueued on an offline cpu and not very surprisingly they never expire and cause system hangs. This has been observed on 2.6.32 kernels. On current kernels __mod_timer() uses get_nohz_timer_target() which doesn't have that problem. However there might be other problems because of the too early exit from tick_nohz_stop_sched_tick() in case a cpu goes offline. This specific bug was introduced with 3c5d92a0 "nohz: Introduce arch_needs_cpu". In this case a cpu hotplug notifier is used to fix the issue in order to keep the normal/fast path small. All we need to do is to clear the condition that makes arch_needs_cpu() return 1 since it is just a performance improvement which is supposed to keep the local tick running for a short period if a cpu goes idle. Nothing special needs to be done except for clearing the condition. Acked-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
2010-12-14ARM: 6482/2: Fix find_next_zero_bit and related assemblyJames Jones
commit 0e91ec0c06d2cd15071a6021c94840a50e6671aa upstream. The find_next_bit, find_first_bit, find_next_zero_bit and find_first_zero_bit functions were not properly clamping to the maxbit argument at the bit level. They were instead only checking maxbit at the byte level. To fix this, add a compare and a conditional move instruction to the end of the common bit-within-the-byte code used by all the functions and be sure not to clobber the maxbit argument before it is used. Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org> Tested-by: Stephen Warren <swarren@nvidia.com> Signed-off-by: James Jones <jajones@nvidia.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
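The real change is in the ARM assembler sources, but the intended behaviour can be modelled in plain C: the search must clamp to maxbit at bit granularity, not just at byte granularity. A stand-alone sketch:

    #include <stddef.h>

    /* Return the first zero bit at or after 'start', never past 'maxbit';
     * 'maxbit' itself is returned when no zero bit is found in range. */
    static unsigned int find_next_zero_bit_model(const unsigned char *p,
                                                 unsigned int maxbit,
                                                 unsigned int start)
    {
            unsigned int i;

            for (i = start; i < maxbit; i++)
                    if (!(p[i / 8] & (1u << (i % 8))))
                            return i;
            return maxbit;   /* clamp: never report a bit beyond maxbit */
    }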
2010-12-14ARM: 6489/1: thumb2: fix incorrect optimisation in usraccWill Deacon
commit 1142b71d85894dcff1466dd6c871ea3c89e0352c upstream. Commit 8b592783 added a Thumb-2 variant of usracc which, when it is called with \rept=2, calls usraccoff once with an offset of 0 and secondly with a hard-coded offset of 4 in order to avoid incrementing the pointer again. If \inc != 4 then we will store the data to the wrong offset from \ptr. Luckily, the only caller that passes \rept=2 to this function is __clear_user so we haven't been actively corrupting user data. This patch fixes usracc to pass \inc instead of #4 to usraccoff when it is called a second time. Reported-by: Tony Thompson <tony.thompson@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
2010-12-14ARM: 6464/2: fix spinlock recursion in adjust_pte()Mika Westerberg
commit 4e54d93d3c9846ba1c2644ad06463dafa690d1b7 upstream. When running following code in a machine which has VIVT caches and USE_SPLIT_PTLOCKS is not defined: fd = open("/etc/passwd", O_RDONLY); addr = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0); addr2 = mmap(NULL, 4096, PROT_READ, MAP_SHARED, fd, 0); v = *((int *)addr); we will hang in spinlock recursion in the page fault handler: BUG: spinlock recursion on CPU#0, mmap_test/717 lock: c5e295d8, .magic: dead4ead, .owner: mmap_test/717, .owner_cpu: 0 [<c0026604>] (unwind_backtrace+0x0/0xec) [<c014ee48>] (do_raw_spin_lock+0x40/0x140) [<c0027f68>] (update_mmu_cache+0x208/0x250) [<c0079db4>] (__do_fault+0x320/0x3ec) [<c007af7c>] (handle_mm_fault+0x2f0/0x6d8) [<c0027834>] (do_page_fault+0xdc/0x1cc) [<c00202d0>] (do_DataAbort+0x34/0x94) This comes from the fact that when USE_SPLIT_PTLOCKS is not defined, the only lock protecting the page tables is mm->page_table_lock which is already locked before update_mmu_cache() is called. Signed-off-by: Mika Westerberg <mika.westerberg@iki.fi> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
2010-12-14x86: Ignore trap bits on single step exceptionsFrederic Weisbecker
commit 6c0aca288e726405b01dacb12cac556454d34b2a upstream. When a single step exception fires, the trap bits, used to signal hardware breakpoints, are in a random state. These trap bits might be set if another exception will follow, like a breakpoint in the next instruction, or a watchpoint in the previous one. Or there can be any junk there. So if we handle these trap bits during the single step exception, we are going to handle an exception twice, or we are going to handle junk. Just ignore them in this case. This fixes https://bugzilla.kernel.org/show_bug.cgi?id=21332 Reported-by: Michael Stefaniuc <mstefani@redhat.com> Signed-off-by: Frederic Weisbecker <fweisbec@gmail.com> Cc: Rafael J. Wysocki <rjw@sisk.pl> Cc: Maciej Rutecki <maciej.rutecki@gmail.com> Cc: Alexandre Julliard <julliard@winehq.org> Cc: Jason Wessel <jason.wessel@windriver.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
2010-12-14uml: disable winch irq before freeing handler dataWill Newton
commit 69e83dad5207f8f03c9699e57e1febb114383cb8 upstream. Disable the winch irq early to make sure we don't take an interrupt part way through the freeing of the handler data, resulting in a crash on shutdown: winch_interrupt : read failed, errno = 9 fd 13 is losing SIGWINCH support ------------[ cut here ]------------ WARNING: at lib/list_debug.c:48 list_del+0xc6/0x100() list_del corruption, next is LIST_POISON1 (00100100) 082578c8: [<081fd77f>] dump_stack+0x22/0x24 082578e0: [<0807a18a>] warn_slowpath_common+0x5a/0x80 08257908: [<0807a23e>] warn_slowpath_fmt+0x2e/0x30 08257920: [<08172196>] list_del+0xc6/0x100 08257940: [<08060244>] free_winch+0x14/0x80 08257958: [<080606fb>] winch_interrupt+0xdb/0xe0 08257978: [<080a65b5>] handle_IRQ_event+0x35/0xe0 08257998: [<080a8717>] handle_edge_irq+0xb7/0x170 082579bc: [<08059bc4>] do_IRQ+0x34/0x50 082579d4: [<08059e1b>] sigio_handler+0x5b/0x80 082579ec: [<0806a374>] sig_handler_common+0x44/0xb0 08257a68: [<0806a538>] sig_handler+0x38/0x50 08257a78: [<0806a77c>] handle_signal+0x5c/0xa0 08257a9c: [<0806be28>] hard_handler+0x18/0x20 08257aac: [<00c14400>] 0xc14400 Signed-off-by: Will Newton <will.newton@gmail.com> Acked-by: WANG Cong <xiyou.wangcong@gmail.com> Cc: Jeff Dike <jdike@addtoit.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
2010-12-14acpi-cpufreq: fix a memleak when unloading driverZhang Rui
commit dab5fff14df2cd16eb1ad4c02e83915e1063fece upstream. We didn't free per_cpu(acfreq_data, cpu)->freq_table when acpi_freq driver is unloaded. Resulting in the following messages in /sys/kernel/debug/kmemleak: unreferenced object 0xf6450e80 (size 64): comm "modprobe", pid 1066, jiffies 4294677317 (age 19290.453s) hex dump (first 32 bytes): 00 00 00 00 e8 a2 24 00 01 00 00 00 00 9f 24 00 ......$.......$. 02 00 00 00 00 6a 18 00 03 00 00 00 00 35 0c 00 .....j.......5.. backtrace: [<c123ba97>] kmemleak_alloc+0x27/0x50 [<c109f96f>] __kmalloc+0xcf/0x110 [<f9da97ee>] acpi_cpufreq_cpu_init+0x1ee/0x4e4 [acpi_cpufreq] [<c11cd8d2>] cpufreq_add_dev+0x142/0x3a0 [<c11920b7>] sysdev_driver_register+0x97/0x110 [<c11cce56>] cpufreq_register_driver+0x86/0x140 [<f9dad080>] 0xf9dad080 [<c1001130>] do_one_initcall+0x30/0x160 [<c10626e9>] sys_init_module+0x99/0x1e0 [<c1002d97>] sysenter_do_call+0x12/0x26 [<ffffffff>] 0xffffffff https://bugzilla.kernel.org/show_bug.cgi?id=15807#c21 Tested-by: Toralf Forster <toralf.foerster@gmx.de> Signed-off-by: Zhang Rui <rui.zhang@intel.com> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
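A hedged sketch of the corresponding teardown; the names acfreq_data and freq_table follow the commit text, but treat the code as illustrative rather than the exact driver source:

    static int acpi_cpufreq_cpu_exit_sketch(struct cpufreq_policy *policy)
    {
            struct acpi_cpufreq_data *data = per_cpu(acfreq_data, policy->cpu);

            if (data) {
                    per_cpu(acfreq_data, policy->cpu) = NULL;
                    kfree(data->freq_table);   /* this was the leaked allocation */
                    kfree(data);
            }
            return 0;
    }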
2010-12-14KVM: Correct ordering of ldt reload wrt fs/gs reloadAvi Kivity
commit 0a77fe4c188e25917799f2356d4aa5e6d80c39a2 upstream. If fs or gs refer to the ldt, they must be reloaded after the ldt. Reorder the code to that effect. Userspace code that uses the ldt with kvm is nonexistent, so this doesn't fix a user-visible bug. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
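The ordering constraint can be written out in a few lines; the helpers below are deliberately generic placeholders rather than the literal kvm code:

    reload_ldt(ldt_selector);   /* must happen first                       */
    reload_fs(fs_selector);     /* fs and gs may hold LDT-based selectors, */
    reload_gs(gs_selector);     /* so they are only reloaded after the LDT */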
2010-12-14KVM: x86: fix information leak to userlandVasiliy Kulikov
commit 97e69aa62f8b5d338d6cff49be09e37cc1262838 upstream. Structures kvm_vcpu_events, kvm_debugregs, kvm_pit_state2 and kvm_clock_data are copied to userland with some padding and reserved fields uninitialized. It leads to leaking the contents of kernel stack memory. We have to initialize them to zero. In patch v1 Jan Kiszka suggested to fill reserved fields with zeros instead of memset'ting the whole struct. It makes sense as these fields are explicitly marked as padding. No more fields need zeroing. Signed-off-by: Vasiliy Kulikov <segooon@gmail.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
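The defensive pattern is simple; a hedged sketch of it (as the commit notes, the final fix zeroes only the explicitly reserved/padding fields rather than the whole structure):

    struct kvm_vcpu_events events;

    memset(&events, 0, sizeof(events));   /* no stack contents leak out */
    /* ... fill in the architectural state the ioctl reports ... */
    if (copy_to_user(argp, &events, sizeof(events)))
            return -EFAULT;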
2010-12-14KVM: Write protect memory after slot swapMichael S. Tsirkin
commit edde99ce05290e50ce0b3495d209e54e6349ab47 upstream. I have observed the following bug trigger: 1. userspace calls GET_DIRTY_LOG 2. kvm_mmu_slot_remove_write_access is called and makes a page ro 3. page fault happens and makes the page writeable fault is logged in the bitmap appropriately 4. kvm_vm_ioctl_get_dirty_log swaps slot pointers a lot of time passes 5. guest writes into the page 6. userspace calls GET_DIRTY_LOG At point (5), bitmap is clean and page is writeable, thus, guest modification of memory is not logged and GET_DIRTY_LOG returns an empty bitmap. The rule is that all pages are either dirty in the current bitmap, or write-protected, which is violated here. It seems that just moving kvm_mmu_slot_remove_write_access down to after the slot pointer swap should fix this bug. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
2010-12-14xen: don't bother to stop other cpus on shutdown/rebootJeremy Fitzhardinge
commit 31e323cca9d5c8afd372976c35a5d46192f540d1 upstream. Xen will shoot all the VCPUs when we do a shutdown hypercall, so there's no need to do it manually. In any case it will fail because all the IPI irqs have been pulled down by this point, so the cross-CPU calls will simply hang forever. Until change 76fac077db6b34e2c6383a7b4f3f4f7b7d06d8ce the function calls were not synchronously waited for, so this wasn't apparent. However after that change the calls became synchronous leading to a hang on shutdown on multi-VCPU guests. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Cc: Alok Kataria <akataria@vmware.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
2010-12-14um: fix global timer issue when using CONFIG_NO_HZRichard Weinberger
commit 482db6df1746c4fa7d64a2441d4cb2610249c679 upstream. This fixes an issue which was introduced by fe2cc53e ("uml: track and make up lost ticks"). timeval_to_ns() returns long long and not int. Due to that UML's timer did not work properly and caused timer freezes. Signed-off-by: Richard Weinberger <richard@nod.at> Acked-by: Pekka Enberg <penberg@kernel.org> Cc: Jeff Dike <jdike@addtoit.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
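This class of bug is easy to reproduce in a stand-alone program: assigning a nanosecond count to an int silently truncates it. A sketch, where timeval_to_ns_model() is a local stand-in, not the kernel helper:

    #include <stdio.h>

    struct tv_model { long tv_sec; long tv_usec; };

    static long long timeval_to_ns_model(const struct tv_model *tv)
    {
            return (long long)tv->tv_sec * 1000000000LL + tv->tv_usec * 1000LL;
    }

    int main(void)
    {
            struct tv_model tv = { .tv_sec = 5, .tv_usec = 0 };
            int wrong = timeval_to_ns_model(&tv);        /* truncated to int */
            long long right = timeval_to_ns_model(&tv);  /* keeps full value */

            printf("wrong=%d right=%lld\n", wrong, right);
            return 0;
    }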
2010-12-14um: remove PAGE_SIZE alignment in linker script causing kernel segfault.Richard Weinberger
commit 6915e04f8847bea16d0890f559694ad8eedd026c upstream. The linker script cleanup that I did in commit 5d150a97f93 ("um: Clean up linker script using standard macros.") (2.6.32) accidentally introduced an ALIGN(PAGE_SIZE) when converting to use INIT_TEXT_SECTION; Richard Weinberger reported that this causes the kernel to segfault with CONFIG_STATIC_LINK=y. I'm not certain why this extra alignment is a problem, but it seems likely it is because previously __init_begin = _stext = _text = _sinittext and with the extra ALIGN(PAGE_SIZE), _sinittext becomes different from the rest. So there is likely a bug here where something is assuming that _sinittext is the same as one of those other symbols. But reverting the accidental change fixes the regression, so it seems worth committing that now. Signed-off-by: Tim Abbott <tabbott@ksplice.com> Reported-by: Richard Weinberger <richard@nod.at> Cc: Jeff Dike <jdike@addtoit.com> Signed-off-by: Andi Kleen <ak@linux.intel.com> Tested by: Antoine Martin <antoine@nagafix.co.uk> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-12-14SH: Add missing consts to sys_execve() declarationDavid Howells
commit d8b5fc01683c66060edc202d6bb5635365822181 upstream. Add missing consts to the sys_execve() declaration which result in the following error: arch/sh/kernel/process_32.c:303: error: conflicting types for 'sys_execve' /warthog/nfs/linux-2.6-fscache/arch/sh/include/asm/syscalls_32.h:24: error: previous declaration of 'sys_execve' was here Signed-off-by: David Howells <dhowells@redhat.com> Cc: Nobuhiro Iwamatsu <iwamatsu@nigauri.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
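For reference, the shape of the corrected declaration (abridged and hedged; the exact SH argument list may differ):

    asmlinkage int sys_execve(const char __user *ufilename,
                              const char __user *const __user *uargv,
                              const char __user *const __user *uenvp,
                              struct pt_regs *regs);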
2010-12-14microblaze: Fix build with make 3.82Thomas Backlund
commit b843e4ec01991a386a9e0e9030703524446e03da upstream. When running make headers_install_all on x86_64 and make 3.82 I hit this: arch/microblaze/Makefile:80: *** mixed implicit and normal rules. Stop. make: *** [headers_install_all] Error 2 So split the rules to satisfy make 3.82. Signed-off-by: Thomas Backlund <tmb@mandriva.org> Signed-off-by: Michal Simek <monstr@monstr.eu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de> Signed-off-by: Andi Kleen <ak@linux.intel.com>
2010-12-14powerpc: Fix call to subpage_protection()Michael Neuling
commit 1c2c25c78740b2796c7c06640784cb6732fa4907 upstream. In: powerpc/mm: Fix pgtable cache cleanup with CONFIG_PPC_SUBPAGE_PROT commit d28513bc7f675d28b479db666d572e078ecf182d Author: David Gibson <david@gibson.dropbear.id.au> subpage_protection() was changed to take an mm rather than a pgdir, but it didn't change the calling site in hash_preload(). The change wasn't noticed at compile time since hash_preload() used a void* as the parameter to subpage_protection(). This is obviously wrong and can trigger the following crash when CONFIG_SLAB, CONFIG_DEBUG_SLAB, CONFIG_PPC_64K_PAGES and CONFIG_PPC_SUBPAGE_PROT are enabled. Freeing unused kernel memory: 704k freed Unable to handle kernel paging request for data at address 0x6b6b6b6b6b6c49b7 Faulting instruction address: 0xc0000000000410f4 cpu 0x2: Vector: 300 (Data Access) at [c00000004233f590] pc: c0000000000410f4: .hash_preload+0x258/0x338 lr: c000000000041054: .hash_preload+0x1b8/0x338 sp: c00000004233f810 msr: 8000000000009032 dar: 6b6b6b6b6b6c49b7 dsisr: 40000000 current = 0xc00000007e2c0070 paca = 0xc000000007fe0500 pid = 1, comm = init enter ? for help [c00000004233f810] c000000000041020 .hash_preload+0x184/0x338 (unreliable) [c00000004233f8f0] c00000000003ed98 .update_mmu_cache+0xb0/0xd0 [c00000004233f990] c000000000157754 .__do_fault+0x48c/0x5dc [c00000004233faa0] c000000000158fd0 .handle_mm_fault+0x508/0xa8c [c00000004233fb90] c0000000006acdd4 .do_page_fault+0x428/0x6ac [c00000004233fe30] c000000000005260 handle_page_fault+0x20/0x74 Reported-by: Jim Keniston <jkenisto@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Andi Kleen <ak@linux.intel.com> cc: David Gibson <david@gibson.dropbear.id.au> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
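In outline the fix is a one-argument change at the call site; a sketch per the commit description, not the verbatim diff:

    /* before: compiled only because the old parameter type was void *  */
    /* subpage_prot = subpage_protection(pgdir, ea);                    */
    subpage_prot = subpage_protection(mm, ea);   /* pass the mm instead */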
2010-11-22KVM: x86 emulator: fix regression with cmpxchg8b on i386 hostsAvi Kivity
commit 16518d5ada690643453eb0aef3cc7841d3623c2d upstream. operand::val and operand::orig_val are 32-bit on i386, whereas cmpxchg8b operands are 64-bit. Fix by adding val64 and orig_val64 union members to struct operand, and using them where needed. Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-11-22ARM: cns3xxx: Fixup the missing second parameter to addruart macro to allow them to buildMac Lin
It can't be merged into Linus' tree because this file has already been changed in incompatible ways. Fixup the missing second parameter to the addruart macro to allow them to build, according to commit 0e17226f7cd289504724466f4298abc9bdfca3fe. Enabling DEBUG in head.S would cause: arch/arm/boot/compressed/head.S: Assembler messages: arch/arm/boot/compressed/head.S:1037: Error: too many positional arguments arch/arm/boot/compressed/head.S:1055: Error: too many positional arguments Signed-off-by: Mac Lin <mkl0301@gmail.com> Acked-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-11-22KVM: SVM: Restore correct registers after sel_cr0 intercept emulationJoerg Roedel
commit cda0008299a06f0d7218c6037c3c02d7a865e954 upstream. This patch implements restoring of the correct rip, rsp, and rax after the svm emulation in KVM injected a selective_cr0 write intercept into the guest hypervisor. The problem was that the vmexit is emulated in the instruction emulation which later commits the registers right after the write-cr0 instruction. So the l1 guest will continue to run with the l2 rip, rsp and rax resulting in unpredictable behavior. This patch is not the final word, it is just an easy patch to fix the issue. The real fix will be done when the instruction emulator is made aware of nested virtualization. Until this is done this patch fixes the issue and provides an easy way to fix this in -stable too. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-11-22KVM: X86: Report SVM bit to userspace only when supportedJoerg Roedel
commit 4c62a2dc92518c5adf434df8e5c2283c6762672a upstream. This patch fixes a bug in KVM where it _always_ reports the support of the SVM feature to userspace. But KVM only supports SVM on AMD hardware and only when it is enabled in the kernel module. This patch fixes the wrong reporting. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-11-22x86, vm86: Fix preemption bug for int1 debug and int3 breakpoint handlers.Bart Oldeman
commit 6554287b1de0448f1e02e200d02b43914e997d15 upstream. Impact: fix kernel bug such as: BUG: scheduling while atomic: dosemu.bin/19680/0x00000004 See also Ubuntu bug 455067 at https://bugs.launchpad.net/ubuntu/+source/linux/+bug/455067 Commits 4915a35e35a037254550a2ba9f367a812bc37d40 ("Use preempt_conditional_sti/cli in do_int3, like on x86_64.") and 3d2a71a596bd9c761c8487a2178e95f8a61da083 ("x86, traps: converge do_debug handlers") started disabling preemption in int1 and int3 handlers on i386. The problem with vm86 is that the call to handle_vm86_trap() may jump straight to entry_32.S and never return, so preempt is never enabled again, and there is an imbalance in the preempt count. Commit be716615fe596ee117292dc615e95f707fb67fd1 ("x86, vm86: fix preemption bug"), which was later (accidentally?) reverted by commit 08d68323d1f0c34452e614263b212ca556dae47f ("hw-breakpoints: modifying generic debug exception to use thread-specific debug registers") fixed the problem for debug exceptions but not for breakpoints. There are three solutions to this problem. 1. Reenable preemption before calling handle_vm86_trap(). This was the approach that was later reverted. 2. Do not disable preemption for i386 in breakpoint and debug handlers. This was the situation before October 2008. As far as I understand preemption only needs to be disabled on x86_64 because a separate stack is used, but it's nice to have things work the same way on i386 and x86_64. 3. Let handle_vm86_trap() return instead of jumping to assembly code. By setting a flag in _TIF_WORK_MASK, either TIF_IRET or TIF_NOTIFY_RESUME, the code in entry_32.S is instructed to return to 32 bit mode from V86 mode. The logic in entry_32.S was already present to handle signals. (I chose TIF_IRET because it's slightly more efficient in do_notify_resume() in signal.c, but in fact TIF_IRET can probably be replaced by TIF_NOTIFY_RESUME everywhere.) I'm submitting approach 3, because I believe it is the most elegant and prevents future confusion. Still, an obvious preempt_conditional_cli(regs); is necessary in traps.c to correct the bug. [ hpa: This is technically a regression, but because: 1. the regression is so old, 2. the patch seems relatively high risk, justifying more testing, and 3. we're late in the 2.6.36-rc cycle, I'm queuing it up for the 2.6.37 merge window. It might, however, justify as a -stable backport at a later time, hence Cc: stable. ] Signed-off-by: Bart Oldeman <bartoldeman@users.sourceforge.net> LKML-Reference: <alpine.DEB.2.00.1009231312330.4732@localhost.localdomain> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: K.Prasad <prasad@linux.vnet.ibm.com> Cc: Alan Stern <stern@rowland.harvard.edu> Cc: Alexander van Heukelum <heukelum@fastmail.fm> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-11-22x86, kdump: Change copy_oldmem_page() to use cached addressingCliff Wickman
commit 37a2f9f30a360fb03522d15c85c78265ccd80287 upstream. The copy of /proc/vmcore to a user buffer proceeds much faster if the kernel addresses memory as cached. With this patch we have seen an increase in transfer rate from less than 15MB/s to 80-460MB/s, depending on size of the transfer. This makes a big difference in time needed to save a system dump. Signed-off-by: Cliff Wickman <cpw@sgi.com> Acked-by: "Eric W. Biederman" <ebiederm@xmission.com> Cc: kexec@lists.infradead.org LKML-Reference: <E1OtMLz-0001yp-Ia@eag09.americas.sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
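The core of the change is the choice of mapping; roughly (a sketch of the pattern, not the verbatim diff):

    /* Map the old kernel's page cacheable while copying it out. */
    void *vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);   /* was ioremap() */
    if (!vaddr)
            return -ENOMEM;
    /* ... memcpy()/copy_to_user() from vaddr + offset ... */
    iounmap(vaddr);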
2010-11-22x86, intr-remap: Set redirection hint in the IRTESuresh Siddha
commit 75e3cfbed6f71a8f151dc6e413b6ce3c390030cb upstream. Currently the redirection hint in the interrupt-remapping table entry is set to 0, which means the remapped interrupt is directed to the processors listed in the destination. So in logical flat mode in the presence of intr-remapping, this results in a single interrupt multi-casted to multiple cpu's as specified by the destination bit mask. But what we really want is to send that interrupt to one of the cpus based on the lowest priority delivery mode. Set the redirection hint in the IRTE to '1' to indicate that we want the remapped interrupt to be directed to only one of the processors listed in the destination. This fixes the issue of same interrupt getting delivered to multiple cpu's in the logical flat mode in the presence of interrupt-remapping. While there is no functional issue observed with this behavior, this will impact performance of such configurations (<=8 cpu's using logical flat mode in the presence of interrupt-remapping) Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> LKML-Reference: <20100827181049.013051492@sbsiddha-MOBL3.sc.intel.com> Cc: Weidong Han <weidong.han@intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-11-22x86, mtrr: Assume SYS_CFG[Tom2ForceMemTypeWB] exists on all future AMD CPUsAndreas Herrmann
commit 3fdbf004c1706480a7c7fac3c9d836fa6df20d7d upstream. Instead of adapting the CPU family check in amd_special_default_mtrr() for each new CPU family assume that all new AMD CPUs support the necessary bits in SYS_CFG MSR. Tom2Enabled is architectural (defined in APM Vol.2). Tom2ForceMemTypeWB is defined in all BKDGs starting with K8 NPT. In pre K8-NPT BKDG this bit is reserved (read as zero). W/o this adaption Linux would unnecessarily complain about bad MTRR settings on every new AMD CPU family, e.g. [ 0.000000] WARNING: BIOS bug: CPU MTRRs don't cover all of memory, losing 4863MB of RAM. Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> LKML-Reference: <20100930123235.GB20545@loge.amd.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-11-22x86, olpc: Don't retry EC commands foreverPaul Fox
commit 286e5b97eb22baab9d9a41ca76c6b933a484252c upstream. Avoids a potential infinite loop. It was observed once, during an EC hacking/debugging session - not in regular operation. Signed-off-by: Daniel Drake <dsd@laptop.org> Cc: dilinger@queued.net Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
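A stand-alone model of the change, bounding the retry loop instead of spinning forever; the limit and helper are illustrative, not the driver's actual values:

    enum { EC_MAX_TRIES = 10 };   /* illustrative bound */

    static int ec_send_cmd_model(int (*try_once)(void))
    {
            int i;

            for (i = 0; i < EC_MAX_TRIES; i++)
                    if (try_once() == 0)
                            return 0;     /* the EC accepted the command      */
            return -1;                    /* give up rather than loop forever */
    }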
2010-11-22x86, kexec: Make sure to stop all CPUs before exiting the kernelAlok Kataria
commit 76fac077db6b34e2c6383a7b4f3f4f7b7d06d8ce upstream. x86 smp_ops now has a new op, stop_other_cpus, which takes a parameter "wait"; this allows the caller to specify if it wants to stop until all the cpus have processed the stop IPI. This is required specifically for the kexec case where we should wait for all the cpus to be stopped before starting the new kernel. We now wait for the cpus to stop in all cases except for panic/kdump where we expect things to be broken and we are doing our best to make things work anyway. This patch fixes a legitimate regression, which was introduced during 2.6.30, by commit id 4ef702c10b5df18ab04921fc252c26421d4d6c75. Signed-off-by: Alok N Kataria <akataria@vmware.com> LKML-Reference: <1286833028.1372.20.camel@ank32.eng.vmware.com> Cc: Eric W. Biederman <ebiederm@xmission.com> Cc: Jeremy Fitzhardinge <jeremy@xensource.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-11-22x86, cpu: Fix renamed, not-yet-shipping AMD CPUID feature bitAndre Przywara
commit 7ef8aa72ab176e0288f363d1247079732c5d5792 upstream. The AMD SSE5 feature set as such has been replaced by some extensions to the AVX instruction set. Thus the bit formerly advertised as SSE5 is re-used for one of these extensions (XOP). Although this changes the /proc/cpuinfo output, it is not user visible, as there are no CPUs (yet) having this feature. To avoid confusion this should be added to the stable series, too. Signed-off-by: Andre Przywara <andre.przywara@amd.com> LKML-Reference: <1283778860-26843-2-git-send-email-andre.przywara@amd.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-11-22mm, x86: Saving vmcore with non-lazy freeing of vmasCliff Wickman
commit 3ee48b6af49cf534ca2f481ecc484b156a41451d upstream. During the reading of /proc/vmcore the kernel is doing ioremap()/iounmap() repeatedly. And the buildup of un-flushed vm_area_struct's is causing a great deal of overhead. (rb_next() is chewing up most of that time). This solution is to provide function set_iounmap_nonlazy(). It causes a subsequent call to iounmap() to immediately purge the vma area (with try_purge_vmap_area_lazy()). With this patch we have seen the time for writing a 250MB compressed dump drop from 71 seconds to 44 seconds. Signed-off-by: Cliff Wickman <cpw@sgi.com> Cc: Andrew Morton <akpm@linux-foundation.org> Cc: kexec@lists.infradead.org LKML-Reference: <E1OwHZ4-0005WK-Tw@eag09.americas.sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
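Usage of the new helper in the vmcore read path looks roughly like this sketch:

    void *vaddr = ioremap_cache(pfn << PAGE_SHIFT, PAGE_SIZE);
    /* ... copy the old kernel's page to the user buffer ... */
    set_iounmap_nonlazy();   /* make the following iounmap() purge immediately */
    iounmap(vaddr);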
2010-11-22powerpc/perf: Fix sampling enable for PPC970Paul Mackerras
commit 9f5f9ffe50e90ed73040d2100db8bfc341cee352 upstream. The logic to distinguish marked instruction events from ordinary events on PPC970 and derivatives was flawed. The result is that instruction sampling didn't get enabled in the PMU for some marked instruction events, so they would never trigger. This fixes it by adding the appropriate break statements in the switch statement. Reported-by: David Binderman <dcb314@hotmail.com> Signed-off-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
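The bug class is plain fall-through in a switch; illustratively (the identifiers are made up, the real code classifies PPC970 PMC event selectors):

    switch (event_kind) {
    case MARKED_INSTRUCTION_EVENT:
            enable_instruction_sampling = 1;
            break;                  /* the break that was missing */
    case ORDINARY_EVENT:
    default:
            enable_instruction_sampling = 0;
            break;
    }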
2010-11-22perf_events: Fix bogus AMD64 generic TLB eventsStephane Eranian
commit ba0cef3d149ce4db293c572bf36ed352b11ce7b9 upstream. PERF_COUNT_HW_CACHE_DTLB:READ:MISS had a bogus umask value of 0 which counts nothing. Needed to be 0x7 (to count all possibilities). PERF_COUNT_HW_CACHE_ITLB:READ:MISS had a bogus umask value of 0 which counts nothing. Needed to be 0x3 (to count all possibilities). Signed-off-by: Stephane Eranian <eranian@google.com> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> Cc: Robert Richter <robert.richter@amd.com> LKML-Reference: <4cb85478.41e9d80a.44e2.3f00@mx.google.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28x86, mm: Fix CONFIG_VMSPLIT_1G and 2G_OPT trampolineHugh Dickins
commit b7d460897739e02f186425b7276e3fdb1595cea7 upstream. rc2 kernel crashes when booting second cpu on this CONFIG_VMSPLIT_2G_OPT laptop: whereas cloning from kernel to low mappings pgd range does need to limit by both KERNEL_PGD_PTRS and KERNEL_PGD_BOUNDARY, cloning kernel pgd range itself must not be limited by the smaller KERNEL_PGD_BOUNDARY. Signed-off-by: Hugh Dickins <hughd@google.com> LKML-Reference: <alpine.LSU.2.00.1008242235120.2515@sister.anvils> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28x86-32: Fix dummy trampoline-related inline stubsH. Peter Anvin
commit 8848a91068c018bc91f597038a0f41462a0f88a4 upstream. Fix dummy inline stubs for trampoline-related functions when no trampolines exist (until we get rid of the no-trampoline case entirely.) Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: Joerg Roedel <joerg.roedel@amd.com> Cc: Borislav Petkov <borislav.petkov@amd.com> LKML-Reference: <4C6C294D.3030404@zytor.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28x86-32: Separate 1:1 pagetables from swapper_pg_dirJoerg Roedel
commit fd89a137924e0710078c3ae855e7cec1c43cb845 upstream. This patch fixes machine crashes which occur when heavily exercising the CPU hotplug codepaths on a 32-bit kernel. These crashes are caused by AMD Erratum 383 and result in a fatal machine check exception. Here's the scenario: 1. On 32-bit, the swapper_pg_dir page table is used as the initial page table for booting a secondary CPU. 2. To make this work, swapper_pg_dir needs a direct mapping of physical memory in it (the low mappings). By adding those low, large page (2M) mappings (PAE kernel), we create the necessary conditions for Erratum 383 to occur. 3. Other CPUs which do not participate in the off- and onlining game may use swapper_pg_dir while the low mappings are present (when leave_mm is called). For all steps below, the CPU referred to is a CPU that is using swapper_pg_dir, and not the CPU which is being onlined. 4. The presence of the low mappings in swapper_pg_dir can result in TLB entries for addresses below __PAGE_OFFSET to be established speculatively. These TLB entries are marked global and large. 5. When the CPU with such TLB entry switches to another page table, this TLB entry remains because it is global. 6. The process then generates an access to an address covered by the above TLB entry but there is a permission mismatch - the TLB entry covers a large global page not accessible to userspace. 7. Due to this permission mismatch a new 4kb, user TLB entry gets established. Further, Erratum 383 provides for a small window of time where both TLB entries are present. This results in an uncorrectable machine check exception signalling a TLB multimatch which panics the machine. There are two ways to fix this issue: 1. Always do a global TLB flush when a new cr3 is loaded and the old page table was swapper_pg_dir. I consider this a hack hard to understand and with performance implications 2. Do not use swapper_pg_dir to boot secondary CPUs like 64-bit does. This patch implements solution 2. It introduces a trampoline_pg_dir which has the same layout as swapper_pg_dir with low_mappings. This page table is used as the initial page table of the booting CPU. Later in the bringup process, it switches to swapper_pg_dir and does a global TLB flush. This fixes the crashes in our test cases. -v2: switch to swapper_pg_dir right after entering start_secondary() so that we are able to access percpu data which might not be mapped in the trampoline page table. Signed-off-by: Joerg Roedel <joerg.roedel@amd.com> LKML-Reference: <20100816123833.GB28147@aftab> Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> Signed-off-by: H. Peter Anvin <hpa@zytor.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28x86: detect scattered cpuid features earlierJacob Pan
commit 1dedefd1a066a795a87afca9c0236e1a94de9bf6 upstream. Some extra CPU features such as ARAT is needed in early boot so that x86_init function pointers can be set up properly. http://lkml.org/lkml/2010/5/18/519 At start_kernel() level, this patch moves init_scattered_cpuid_features() from check_bugs() to setup_arch() -> early_cpu_init() which is earlier than platform specific x86_init layer setup. Suggested by HPA. Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com> LKML-Reference: <1274295685-6774-2-git-send-email-jacob.jun.pan@linux.intel.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28powerpc: Don't use kernel stack with translation offMichael Neuling
commit 54a834043314c257210db2a9d59f8cc605571639 upstream. In f761622e59433130bc33ad086ce219feee9eb961 we changed early_setup_secondary so it's called using the proper kernel stack rather than the emergency one. Unfortunately, this stack pointer can't be used when translation is off on PHYP as this stack pointer might be outside the RMO. This results in the following on all non zero cpus: cpu 0x1: Vector: 300 (Data Access) at [c00000001639fd10] pc: 000000000001c50c lr: 000000000000821c sp: c00000001639ff90 msr: 8000000000001000 dar: c00000001639ffa0 dsisr: 42000000 current = 0xc000000016393540 paca = 0xc000000006e00200 pid = 0, comm = swapper The original patch was only tested on bare metal system, so it never caught this problem. This changes __secondary_start so that we calculate the new stack pointer but only start using it after we've called early_setup_secondary. With this patch, the above problem goes away. Signed-off-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28powerpc: Initialise paca->kstack before early_setup_secondaryMatt Evans
commit f761622e59433130bc33ad086ce219feee9eb961 upstream. As early setup calls down to slb_initialize(), we must have kstack initialised before checking "should we add a bolted SLB entry for our kstack?" Failing to do so means stack access requires an SLB miss exception to refill an entry dynamically, if the stack isn't accessible via SLB(0) (kernel text & static data). It's not always allowable to take such a miss, and intermittent crashes will result. Primary CPUs don't have this issue; an SLB entry is not bolted for their stack anyway (as that lives within SLB(0)). This patch therefore only affects the init of secondaries. Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28KVM: x86: Move TSC reset out of vmcb_initMarcelo Tosatti
commit 47008cd887c1836bcadda123ba73e1863de7a6c4 upstream. The VMCB is reset whenever we receive a startup IPI, so Linux setting the TSC back to zero happens very late in the boot process and destabilizes the TSC. Instead, just set TSC to zero once at VCPU creation time. Why the separate patch? So git-bisect is your friend. Signed-off-by: Zachary Amsden <zamsden@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28KVM: x86: Fix SVM VMCB resetMarcelo Tosatti
commit 58877679fd393d3ef71aa383031ac7817561463d upstream. On reset, VMCB TSC should be set to zero. Instead, code was setting tsc_offset to zero, which passes through the underlying TSC. Signed-off-by: Zachary Amsden <zamsden@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28KVM: i8259: fix migrationMarcelo Tosatti
commit eebb5f31b8d9a2620dcf32297096f8ce1240b179 upstream. Top of kvm_kpic_state structure should have the same memory layout as kvm_pic_state since it is copied by memcpy. Signed-off-by: Gleb Natapov <gleb@redhat.com> Signed-off-by: Avi Kivity <avi@redhat.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
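The constraint the fix restores can be stated as a layout rule; a hedged sketch, not the literal kvm definitions:

    /* Everything that the memcpy() exports to userspace must sit at the top
     * of the kernel-side structure, with the same layout as kvm_pic_state;
     * kernel-only bookkeeping has to come after it. */
    struct kvm_kpic_state_sketch {
            struct kvm_pic_state shared;   /* byte-for-byte userspace layout */
            void *kernel_private;          /* anything below is not copied   */
    };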
2010-10-28x86, AMD, MCE thresholding: Fix the MCi_MISCj iteration orderBorislav Petkov
commit 6dcbfe4f0b4e17e289d56fa534b7ce5a6b7f63a3 upstream. This fixes possible cases of not collecting valid error info in the MCE error thresholding groups on F10h hardware. The current code contains a subtle problem of checking only the Valid bit of MSR0000_0413 (which is MC4_MISC0 - DRAM thresholding group) in its first iteration and breaking out if the bit is cleared. But (!), this MSR contains an offset value, BlkPtr[31:24], which points to the remaining MSRs in this thresholding group which might contain valid information too. But if we bail out only after we checked the valid bit in the first MSR and not the block pointer too, we miss that other information. The thing is, MC4_MISC0[BlkPtr] is not predicated on MCi_STATUS[MiscV] or MC4_MISC0[Valid] and should be checked prior to iterating over the MCI_MISCj thresholding group, irrespective of the MC4_MISC0[Valid] setting. Signed-off-by: Borislav Petkov <borislav.petkov@amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28x86, numa: For each node, register the memory blocks actually usedYinghai Lu
commit 73cf624d029d776a33d0a80c695485b3f9b36231 upstream. Russ reported SGI UV is broken recently. He said: | The SRAT table shows that memory range is spread over two nodes. | | SRAT: Node 0 PXM 0 100000000-800000000 | SRAT: Node 1 PXM 1 800000000-1000000000 | SRAT: Node 0 PXM 0 1000000000-1080000000 | |Previously, the kernel early_node_map[] would show three entries |with the proper node. | |[ 0.000000] 0: 0x00100000 -> 0x00800000 |[ 0.000000] 1: 0x00800000 -> 0x01000000 |[ 0.000000] 0: 0x01000000 -> 0x01080000 | |The problem is recent community kernel early_node_map[] shows |only two entries with the node 0 entry overlapping the node 1 |entry. | | 0: 0x00100000 -> 0x01080000 | 1: 0x00800000 -> 0x01000000 After looking at the changelog, Found out that it has been broken for a while by following commit |commit 8716273caef7f55f39fe4fc6c69c5f9f197f41f1 |Author: David Rientjes <rientjes@google.com> |Date: Fri Sep 25 15:20:04 2009 -0700 | | x86: Export srat physical topology Before that commit, register_active_regions() is called for every SRAT memory entry right away. Use nodememblk_range[] instead of nodes[] in order to make sure we capture the actual memory blocks registered with each node. nodes[] contains an extended range which spans all memory regions associated with a node, but that does not mean that all the memory in between are included. Reported-by: Russ Anderson <rja@sgi.com> Tested-by: Russ Anderson <rja@sgi.com> Signed-off-by: Yinghai Lu <yinghai@kernel.org> LKML-Reference: <4CB27BDF.5000800@kernel.org> Acked-by: David Rientjes <rientjes@google.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2010-10-28ubd: fix incorrect sector handling during request restartTejun Heo
commit 47526903feb52f4c26a6350370bdf74e337fcdb1 upstream. Commit f81f2f7c (ubd: drop unnecessary rq->sector manipulation) dropped request->sector manipulation in preparation for global request handling cleanup; unfortunately, it incorrectly assumed that the updated sector wasn't being used. ubd tries to issue as many requests as possible to io_thread. When issuing fails due to memory pressure or other reasons, the device is put on the restart list and issuing stops. On IO c