path: root/arch
Age  Commit message  Author
2011-06-26  x86, binutils, xen: Fix another wrong size directive  (Alexander van Heukelum)
commit 371c394af27ab7d1e58a66bc19d9f1f3ac1f67b4 upstream.

The latest binutils (2.21.0.20110302/Ubuntu) breaks the build yet another time, under CONFIG_XEN=y, due to a .size directive that refers to a slightly differently named (hence, to the now very strict and unforgiving assembler, non-existent) symbol.

[ mingo: This unnecessary build breakage caused by new binutils version 2.21 gets escalated back several kernel releases spanning several years of Linux history, affecting over 130,000 upstream kernel commits (!), on CONFIG_XEN=y 64-bit kernels (i.e. essentially affecting all major Linux distro kernel configs).

  Git annotate tells us that this slight debug symbol code mismatch bug has been introduced in 2008 in commit 3d75e1b8:

    3d75e1b8 (Jeremy Fitzhardinge 2008-07-08 15:06:49 -0700 1231) ENTRY(xen_do_hypervisor_callback)   # do_hypervisor_callback(struct *pt_regs)

  The 'bug' is just a slight asymmetry in ENTRY()/END() debug-symbols sequences, with lots of assembly code between the ENTRY() and the END():

    ENTRY(xen_do_hypervisor_callback)   # do_hypervisor_callback(struct *pt_regs)
      ...
    END(do_hypervisor_callback)

  Human reviewers almost never catch such small mismatches, and binutils never even warned about it either. This new binutils version thus breaks the Xen build on all upstream kernels since v2.6.27, out of the blue.

  This makes a straightforward Git bisection of all 64-bit Xen-enabled kernels impossible on such binutils, for a bisection window of over a hundred thousand historic commits. (!)

  This is a major fail on the side of binutils and binutils needs to turn this show-stopper build failure into a warning ASAP. ]

Signed-off-by: Alexander van Heukelum <heukelum@fastmail.fm>
Cc: Jeremy Fitzhardinge <jeremy@goop.org>
Cc: Jan Beulich <jbeulich@novell.com>
Cc: H.J. Lu <hjl.tools@gmail.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Kees Cook <kees.cook@canonical.com>
LKML-Reference: <1299877178-26063-1-git-send-email-heukelum@fastmail.fm>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc: rtas_flash needs to use rtas_data_buf  (Milton Miller)
commit bd2b64a12bf55bec0d1b949e3dca3f8863409646 upstream.

When trying to flash a machine via the update_flash command, Anton received the following error:

    Restarting system.
    FLASH: kernel bug...flash list header addr above 4GB

The code in question has a comment that the flash list should be in the kernel data and therefore under 4GB:

    /* NOTE: the "first" block list is a global var with no data
     * blocks in the kernel data segment.  We do this because
     * we want to ensure this block_list addr is under 4GB.
     */

Unfortunately the Kconfig option is marked tristate, which means the variable may not be in the kernel data and could be above 4GB.

Instead of relying on the data segment being below 4GB, use the static data buffer allocated by the kernel for use by rtas. Since we don't use the header struct directly anymore, convert it to a simple pointer.

Reported-By: Anton Blanchard <anton@samba.org>
Signed-Off-By: Milton Miller <miltonm@bga.com>
Tested-By: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc: Fix default_machine_crash_shutdown #ifdef botch  (Paul E. McKenney)
commit c2be05481f6125254c45b78f334d4dd09c701c82 upstream. crash_kexec_wait_realmode() is defined only if CONFIG_PPC_STD_MMU_64 and CONFIG_SMP, but is called if CONFIG_PPC_STD_MMU_64 even if !CONFIG_SMP. Fix the conditional compilation around the invocation. [PG: commit b3df895aebe091b1657 "powerpc/kexec: Add support for FSL-BookE" introduced the PPC_STD_MMU_64 check this refers to, but we don't want to cherry pick that whole addition to stable just for one #ifdef line.] Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Acked-by: Michael Neuling <mikey@neuling.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc/kdump: Fix race in kdump shutdown  (Michael Neuling)
commit 60adec6226bbcf061d4c2d10944fced209d1847d upstream.

When we are crashing, the crashing/primary CPU IPIs the secondaries to turn off IRQs, go into real mode and wait in kexec_wait. While this is happening, the primary tears down all the MMU maps. Unfortunately the primary doesn't check to make sure the secondaries have entered real mode before doing this.

On PHYP machines, the secondaries can take a long time shutting down the IRQ controller as RTAS calls are needed. These RTAS calls need to be serialised, which results in the secondaries contending in lock_rtas() and hence taking a long time to shut down. We've hit this on large POWER7 machines, where some secondaries are still waiting in lock_rtas() when the primary tears down the HPTEs.

This patch makes sure all secondaries are in real mode before the primary tears down the MMU. It uses the new kexec_state entry in the paca. It times out if the secondaries don't reach real mode after 10sec.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc/kexec: Fix race in kexec shutdown  (Michael Neuling)
commit 1fc711f7ffb01089efc58042cfdbac8573d1b59a upstream.

In kexec_prepare_cpus, the primary CPU IPIs the secondary CPUs to kexec_smp_down(). kexec_smp_down() calls kexec_smp_wait(), which sets the hw_cpu_id() to -1. The primary does this while leaving IRQs on, which means the primary can take a timer interrupt, which can lead to IPIing one of the secondary CPUs (say, for a scheduler re-balance), but since the secondary CPU now has a hw_cpu_id = -1, we IPI CPU -1... Kaboom!

We are hitting this case regularly on POWER7 machines.

There is also a second race, where the primary will tear down the MMU mappings before knowing the secondaries have entered real mode.

Also, the secondaries are clearing out any pending IPIs before guaranteeing that no more will be received.

This changes kexec_prepare_cpus() so that we turn off IRQs in the primary CPU much earlier. It adds a paca flag to say that the secondaries have entered the kexec_smp_down() IPI and turned off IRQs, rather than overloading hw_cpu_id with -1. This new paca flag is also used to indicate when the secondaries have entered real mode.

It also ensures that all CPUs have their IRQs off before we clear out any pending IPI requests (in kexec_cpu_down()) to ensure there are no trailing IPIs left unacknowledged.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  fix per-cpu flag problem in the cpu affinity checkers  (Thomas Gleixner)
commit 9804c9eaeacfe78651052c5ddff31099f60ef78c upstream.

The CHECK_IRQ_PER_CPU check is wrong: it should be testing irq_to_desc(irq)->status, not just irq.

Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
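[Editor's note: in the 2.6.3x generic IRQ code the per-CPU property lives in the descriptor's status flags, so the corrected test looks roughly like the sketch below. This is illustrative only, not the hunk from this commit, and the helper name is made up.]

    #include <linux/types.h>
    #include <linux/irq.h>

    /* Hypothetical helper: the per-CPU flag must be read from the irq
     * descriptor's status word, not derived from the irq number itself. */
    static bool irq_is_per_cpu(unsigned int irq)
    {
            struct irq_desc *desc = irq_to_desc(irq);

            return desc && (desc->status & IRQ_PER_CPU);
    }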
2011-06-26  x86: Flush TLB if PGD entry is changed in i386 PAE mode  (Shaohua Li)
commit 4981d01eada5354d81c8929d5b2836829ba3df7b upstream.

According to the Intel CPU manual, every time a PGD entry is changed in i386 PAE mode, we need to do a full TLB flush. The current code follows this, and there is a comment for it in the code too.

But the current code misses the multi-threaded case. A changed page table might be used by several CPUs, and every such CPU should flush the TLB. Usually this isn't a problem, because we prepopulate all PGD entries at process fork. But when the process does munmap and follows it with a new mmap, this issue is triggered. When it happens, some CPUs keep doing page faults:

    http://marc.info/?l=linux-kernel&m=129915020508238&w=2

Reported-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Tested-by: Yasunori Goto <y-goto@jp.fujitsu.com>
Reviewed-by: Rik van Riel <riel@redhat.com>
Signed-off-by: Shaohua Li <shaohua.li@intel.com>
Cc: Mallick Asit K <asit.k.mallick@intel.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: linux-mm <linux-mm@kvack.org>
LKML-Reference: <1300246649.2337.95.camel@sli10-conroe>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
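[Editor's note: a minimal sketch of the multi-CPU flush that rule implies, not the actual diff; the helper name is invented. flush_tlb_mm() IPIs every CPU in the mm's cpumask rather than flushing only locally.]

    #include <linux/mm_types.h>
    #include <asm/tlbflush.h>

    /* Hypothetical illustration: after a PGD entry changes under PAE,
     * every CPU that may cache translations for this mm must flush. */
    static void pae_pgd_changed(struct mm_struct *mm)
    {
            flush_tlb_mm(mm);       /* reaches all CPUs in mm_cpumask(mm) */
    }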
2011-06-26  perf, powerpc: Handle events that raise an exception without overflowing  (Anton Blanchard)
commit 0837e3242c73566fc1c0196b4ec61779c25ffc93 upstream. Events on POWER7 can roll back if a speculative event doesn't eventually complete. Unfortunately in some rare cases they will raise a performance monitor exception. We need to catch this to ensure we reset the PMC. In all cases the PMC will be 256 or less cycles from overflow. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <20110309143842.6c22845e@kryten> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  x86, quirk: Fix SB600 revision check  (Andreas Herrmann)
commit 1d3e09a304e6c4e004ca06356578b171e8735d3c upstream. Commit 7f74f8f28a2bd9db9404f7d364e2097a0c42cc12 (x86 quirk: Fix polarity for IRQ0 pin2 override on SB800 systems) introduced a regression. It removed some SB600 specific code to determine the revision ID without adapting a corresponding revision ID check for SB600. See this mail thread: http://marc.info/?l=linux-kernel&m=129980296006380&w=2 This patch adapts the corresponding check to cover all SB600 revisions. Tested-by: Wang Lei <f3d27b@gmail.com> Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Cc: Andrew Morton <akpm@linux-foundation.org> LKML-Reference: <20110315143137.GD29499@alberich.amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  x86: Emit "mem=nopentium ignored" warning when not supported  (Kamal Mostafa)
commit 9a6d44b9adb777ca9549e88cd55bd8f2673c52a2 upstream.

Emit a warning when "mem=nopentium" is specified on any arch other than x86_32 (the only arch that supports it).

Signed-off-by: Kamal Mostafa <kamal@canonical.com>
BugLink: http://bugs.launchpad.net/bugs/553464
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
LKML-Reference: <1296783486-23033-2-git-send-email-kamal@canonical.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  x86: Fix panic when handling "mem={invalid}" param  (Kamal Mostafa)
commit 77eed821accf5dd962b1f13bed0680e217e49112 upstream.

Avoid removing all of memory and panicking when "mem={invalid}" is specified, e.g. mem=blahblah, mem=0, or mem=nopentium (on platforms other than x86_32).

Signed-off-by: Kamal Mostafa <kamal@canonical.com>
BugLink: http://bugs.launchpad.net/bugs/553464
Cc: Yinghai Lu <yinghai@kernel.org>
Cc: Len Brown <len.brown@intel.com>
Cc: Rafael J. Wysocki <rjw@sisk.pl>
LKML-Reference: <1296783486-23033-1-git-send-email-kamal@canonical.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
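[Editor's note: the guard these two mem= entries describe can be sketched as below. This is hedged, not the actual hunks; parse_mem_opt_sketch is a made-up name. A value that does not parse to a positive size is rejected instead of being treated as a zero memory limit.]

    #include <linux/init.h>
    #include <linux/kernel.h>
    #include <linux/errno.h>

    /* Hypothetical sketch of the early "mem=" handling described above. */
    static int __init parse_mem_opt_sketch(char *p)
    {
            u64 mem_size;

            if (!p)
                    return -EINVAL;

            mem_size = memparse(p, &p);
            if (mem_size == 0)              /* mem=0, mem=blahblah, ... */
                    return -EINVAL;         /* leave the memory map alone */

            /* ... only now clamp the memory map to mem_size ... */
            return 0;
    }
    early_param("mem", parse_mem_opt_sketch);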
2011-06-26  x86/mm: Handle mm_fault_error() in kernel space  (Andrey Vagin)
commit f86268549f424f83b9eb0963989270e14fbfc3de upstream.

mm_fault_error() should not execute the oom-killer if the page fault occurred in kernel space, e.g. in copy_from_user()/copy_to_user(). This would happen if we find ourselves in OOM on a copy_to_user(), or a copy_from_user() which faults.

Without this patch, the kernel hangs in copy_from_user(), because the OOM killer sends SIG_KILL to the current process, but it can't handle a signal while in a syscall; the kernel then returns to copy_from_user(), re-executes the current instruction and provokes the page fault again.

With this patch the kernel returns -EFAULT from copy_from_user().

The code which checks that the page fault occurred in kernel space has been copied from do_sigbus(). This situation is handled the same way on powerpc, xtensa, tile, ...

Signed-off-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
LKML-Reference: <201103092322.p29NMNPH001682@imap1.linux-foundation.org>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
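[Editor's note: a sketch of the kernel-mode early-out being described, modelled on do_sigbus() as the message says. It is illustrative, assumes the context of arch/x86/mm/fault.c (where no_context() lives), and the PF_USER bit value and helper name are stated from memory; treat them as assumptions.]

    /* Hypothetical sketch: a fault taken in kernel mode goes to the
     * exception-table fixup path via no_context(), which is what makes
     * copy_from_user()/copy_to_user() fail cleanly instead of letting
     * the OOM killer signal a task stuck inside a syscall. */
    #define PF_USER_SKETCH  (1 << 2)        /* assumed: fault came from user mode */

    static void mm_fault_error_sketch(struct pt_regs *regs,
                                      unsigned long error_code,
                                      unsigned long address)
    {
            if (!(error_code & PF_USER_SKETCH)) {
                    no_context(regs, error_code, address);
                    return;
            }
            /* user-mode fault: OOM/signal handling continues as before */
    }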
2011-06-26  MIPS: MTX-1: Make au1000_eth probe all PHY addresses  (Florian Fainelli)
commit bf3a1eb85967dcbaae42f4fcb53c2392cec32677 upstream. When au1000_eth probes the MII bus for PHY address, if we do not set au1000_eth platform data's phy_search_highest_address, the MII probing logic will exit early and will assume a valid PHY is found at address 0. For MTX-1, the PHY is at address 31, and without this patch, the link detection/speed/duplex would not work correctly. Signed-off-by: Florian Fainelli <florian@openwrt.org> To: linux-mips@linux-mips.org Patchwork: https://patchwork.linux-mips.org/patch/2111/ Signed-off-by: Ralf Baechle <ralf@linux-mips.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc/kexec: Fix orphaned offline CPUs across kexec  (Matt Evans)
commit e8e5c2155b0035b6e04f29be67f6444bc914005b upstream. When CPU hotplug is used, some CPUs may be offline at the time a kexec is performed. The subsequent kernel may expect these CPUs to be already running, and will declare them stuck. On pseries, there's also a soft-offline (cede) state that CPUs may be in; this can also cause problems as the kexeced kernel may ask RTAS if they're online -- and RTAS would say they are. The CPU will either appear stuck, or will cause a crash as we replace its cede loop beneath it. This patch kicks each present offline CPU awake before the kexec, so that none are forever lost to these assumptions in the subsequent kernel. Now, the behaviour is that all available CPUs that were offlined are now online & usable after the kexec. This mimics the behaviour of a full reboot (on which all CPUs will be restarted). Signed-off-by: Matt Evans <matt@ozlabs.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
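[Editor's note: the mechanism this entry describes can be pictured with a short hedged sketch, not the actual powerpc code; the function name is invented. It walks the present CPUs and onlines any that are offline before starting the kexec.]

    #include <linux/cpu.h>
    #include <linux/cpumask.h>

    /* Hypothetical sketch: wake all present-but-offline CPUs so the
     * kexec'd kernel never inherits a CPU parked in an unknown (possibly
     * ceded) state. Real code would also check the cpu_up() return value. */
    static void wake_offline_cpus_sketch(void)
    {
            int cpu;

            for_each_present_cpu(cpu) {
                    if (!cpu_online(cpu))
                            cpu_up(cpu);
            }
    }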
2011-06-26  powerpc/crashdump: Do not fail on NULL pointer dereferencing  (Maxim Uvarov)
commit 426b6cb478e60352a463a0d1ec75c1c9fab30b13 upstream. Signed-off-by: Maxim Uvarov <muvarov@gmail.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc/kexec: Speedup kexec hash PTE tear down  (Michael Neuling)
commit d504bed676caad29a3dba3d3727298c560628f5c upstream.

Currently for kexec the PTE tear down on 1TB segment systems normally requires 3 hcalls for each PTE removal. On a machine with 32GB of memory it can take around a minute to remove all the PTEs.

This optimises the path so that we only remove PTEs that are valid. It also uses the "read 4 PTEs at once" HCALL. For the common case where a PTE is invalid in a 1TB segment, this turns the 3 HCALLs per PTE down to 1 HCALL per 4 PTEs.

This gives a >10x speedup in kexec times on PHYP, taking a 32GB machine from around 1 minute down to a few seconds.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc/pseries: Add hcall to read 4 ptes at a time in real mode  (Michael Neuling)
commit f90ece28c1f5b3ec13fe481406857fe92f4bc7d1 upstream.

This adds plpar_pte_read_4_raw(), which can be used to read 4 PTEs from PHYP at a time, while in real mode.

It also creates a new hcall9 which can be used in real mode. It's the same as plpar_hcall9 but minus the tracing hcall statistics, which may require variables outside the RMO.

Signed-off-by: Michael Neuling <mikey@neuling.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc: Use more accurate limit for first segment memory allocations  (Anton Blanchard)
commit 095c7965f4dc870ed2b65143b1e2610de653416c upstream. Author: Milton Miller <miltonm@bga.com> On large machines we are running out of room below 256MB. In some cases we only need to ensure the allocation is in the first segment, which may be 256MB or 1TB. Add slb0_limit and use it to specify the upper limit for the irqstack and emergency stacks. On a large ppc64 box, this fixes a panic at boot when the crashkernel= option is specified (previously we would run out of memory below 256MB). Signed-off-by: Milton Miller <miltonm@bga.com> Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc/kdump: Use chip->shutdown to disable IRQs  (Anton Blanchard)
commit 5d7a87217de48b234b3c8ff8a73059947d822e07 upstream.

I saw this in a kdump kernel:

    IOMMU table initialized, virtual merging enabled
    Interrupt 155954 (real) is invalid, disabling it.
    Interrupt 155953 (real) is invalid, disabling it.

ie we took some spurious interrupts. default_machine_crash_shutdown tries to disable all interrupt sources but uses chip->disable which maps to the default action of:

    static void default_disable(unsigned int irq)
    {
    }

If we use chip->shutdown, then we actually mask the IRQ:

    static void default_shutdown(unsigned int irq)
    {
            struct irq_desc *desc = irq_to_desc(irq);

            desc->chip->mask(irq);
            desc->status |= IRQ_MASKED;
    }

Not sure why we don't implement a ->disable action for xics.c, or why default_disable doesn't mask the interrupt.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc/kdump: CPUs assume the context of the oopsing CPU  (Anton Blanchard)
commit 0644079410065567e3bb31fcb8e6441f2b7685a9 upstream.

We wrap the crash_shutdown_handles[] calls with longjmp/setjmp, so if any of them fault we can recover. The problem is we add a hook to the debugger fault handler hook which calls longjmp unconditionally.

This first part of kdump is run before we marshall the other CPUs, so there is a very good chance some CPU on the box is going to page fault. And when it does, it hits the longjmp code and assumes the context of the oopsing CPU. The machine gets very confused when it has 10 CPUs all with the same stack, all thinking they have the same CPU id. I get even more confused trying to debug it.

The patch below adds crash_shutdown_cpu and uses it to specify which cpu is in the protected region. Since it can only be -1 or the oopsing CPU, we don't need to use memory barriers since it is only valid on the local CPU - no other CPU will ever see a value that matches its local CPU id.

Eventually we should switch the order and marshall all CPUs before doing the crash_shutdown_handles[] calls, but that is a bigger fix.

Signed-off-by: Anton Blanchard <anton@samba.org>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  x86: Use u32 instead of long to set reset vector back to 0  (Don Zickus)
commit 299c56966a72b9109d47c71a6db52097098703dd upstream.

A customer of ours complained that when setting the reset vector back to 0, it trashed other data and hung their box. They noticed that when only 4 bytes were set to 0 instead of 8, everything worked correctly.

Matthew pointed out:

    |
    | We're supposed to be resetting trampoline_phys_low and
    | trampoline_phys_high here, which are two 16-bit values.
    | Writing 64 bits is definitely going to overwrite space
    | that we're not supposed to be touching.
    |

So limit the area modified to u32.

Signed-off-by: Don Zickus <dzickus@redhat.com>
Acked-by: Matthew Garrett <mjg@redhat.com>
LKML-Reference: <1297139100-424-1-git-send-email-dzickus@redhat.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
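[Editor's note: in code terms the change narrows the store width, roughly as below. This is a hedged sketch, not the actual hunk; the function name and argument are illustrative.]

    #include <linux/types.h>
    #include <asm/io.h>

    /* Hypothetical sketch: the reset-vector fields are two 16-bit values,
     * so clearing them must write exactly 32 bits, not a 64-bit long. */
    static void clear_reset_vector_sketch(unsigned long phys_addr)
    {
            /* was: *((volatile long *)phys_to_virt(phys_addr)) = 0; */
            *((volatile u32 *)phys_to_virt(phys_addr)) = 0;
    }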
2011-06-26  x86 quirk: Fix polarity for IRQ0 pin2 override on SB800 systems  (Andreas Herrmann)
commit 7f74f8f28a2bd9db9404f7d364e2097a0c42cc12 upstream. On some SB800 systems polarity for IOAPIC pin2 is wrongly specified as low active by BIOS. This caused system hangs after resume from S3 when HPET was used in one-shot mode on such systems because a timer interrupt was missed (HPET signal is high active). For more details see: http://marc.info/?l=linux-kernel&m=129623757413868 Tested-by: Manoj Iyer <manoj.iyer@canonical.com> Tested-by: Andre Przywara <andre.przywara@amd.com> Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Cc: Borislav Petkov <borislav.petkov@amd.com> LKML-Reference: <20110224145346.GD3658@alberich.amd.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  ARM: Ensure predictable endian state on signal handler entry  (Russell King)
commit 53399053eb505cf541b2405bd9d9bca5ecfb96fb upstream. Ensure a predictable endian state when entering signal handlers. This avoids programs which use SETEND to momentarily switch their endian state from having their signal handlers entered with an unpredictable endian state. Acked-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  s390: remove task_show_regs  (Martin Schwidefsky)
commit 261cd298a8c363d7985e3482946edb4bfedacf98 upstream. task_show_regs used to be a debugging aid in the early bringup days of Linux on s390. /proc/<pid>/status is a world readable file, it is not a good idea to show the registers of a process. The only correct fix is to remove task_show_regs. Reported-by: Al Viro <viro@zeniv.linux.org.uk> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  x86/pvclock: Zero last_value on resume  (Jeremy Fitzhardinge)
commit e7a3481c0246c8e45e79c629efd63b168e91fcda upstream. If the guest domain has been suspend/resumed or migrated, then the system clock backing the pvclock clocksource may revert to a smaller value (ie, can be non-monotonic across the migration/save-restore). Make sure we zero last_value in that case so that the domain continues to see clock updates. Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  x86, mm: avoid possible bogus tlb entries by clearing prev mm_cpumask after switching mm  (Suresh Siddha)
commit 831d52bc153971b70e64eccfbed2b232394f22f8 upstream.

Clearing the cpu in prev's mm_cpumask early will avoid the flush tlb IPIs while the cr3 is still pointing to the prev mm. And this window can lead to the possibility of bogus TLB fills resulting in strange failures. One such problematic scenario is mentioned below.

T1. CPU-1 is context switching from mm1 to mm2 context and got a NMI etc between the point of clearing the cpu from the mm_cpumask(mm1) and before reloading the cr3 with the new mm2.

T2. CPU-2 is tearing down a specific vma for mm1 and will proceed with flushing the TLB for mm1. It doesn't send the flush TLB to CPU-1 as it doesn't see that cpu listed in the mm_cpumask(mm1).

T3. After the TLB flush is complete, CPU-2 goes ahead and frees the page-table pages associated with the removed vma mapping.

T4. CPU-2 now allocates those freed page-table pages for something else.

T5. As the CR3 and TLB caches for mm1 are still active on CPU-1, CPU-1 can potentially speculate and walk through the page-table caches and can insert new TLB entries. As the page-table pages are already freed and being used on CPU-2, this page walk can potentially insert a bogus global TLB entry depending on the (random) contents of the page that is being used on CPU-2.

T6. This bogus TLB entry being global will be active across future CR3 changes and can result in weird memory corruption etc.

To avoid this issue, for the prev mm that is handing over the cpu to another mm, clear the cpu from the mm_cpumask(prev) after the cr3 is changed.

Marking it for -stable, though we haven't seen any reported failure that can be attributed to this.

Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
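[Editor's note: the corrected ordering in switch_mm() can be sketched as follows. Simplified and hedged; this is not the literal diff, and the helper name is invented.]

    #include <linux/cpumask.h>
    #include <linux/mm_types.h>
    #include <asm/processor.h>

    /* Hypothetical sketch of the ordering fix: keep this CPU visible in
     * mm_cpumask(prev) until cr3 no longer points at prev's page tables,
     * so a concurrent TLB-flush IPI for prev is never skipped too early. */
    static void switch_mm_order_sketch(struct mm_struct *prev,
                                       struct mm_struct *next,
                                       unsigned int cpu)
    {
            cpumask_set_cpu(cpu, mm_cpumask(next));
            load_cr3(next->pgd);                       /* switch cr3 first ... */
            cpumask_clear_cpu(cpu, mm_cpumask(prev));  /* ... then drop prev */
    }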
2011-06-26  parisc: Remove broken line wrapping handling pdc_iodc_print()  (Guy Martin)
commit fbea668498e93bb38ac9226c7af9120a25957375 upstream.

Remove the broken line wrapping handling in pdc_iodc_print(). It is broken in 3 ways:

 - It doesn't keep track of the current screen position, it just assumes that the new buffer will be printed at the beginning of the screen.
 - It doesn't take into account that non printable characters won't increase the current position on the screen.
 - And last but not least, it triggers a kernel panic if a backspace is the first char in the provided buffer:

    Backtrace:
     [<0000000040128ec4>] pdc_console_write+0x44/0x78
     [<0000000040128f18>] pdc_console_tty_write+0x20/0x38
     [<000000004032f1ac>] n_tty_write+0x2a4/0x550
     [<000000004032b158>] tty_write+0x1e0/0x2d8
     [<00000000401bb420>] vfs_write+0xb8/0x188
     [<00000000401bb630>] sys_write+0x68/0xb8
     [<0000000040104eb8>] syscall_exit+0x0/0x14

Most terminals handle the line wrapping just fine. I've confirmed that it works correctly on a C8000 with both vga and serial output.

Signed-off-by: Guy Martin <gmsoft@tuxicoman.be>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  powerpc: Fix some 6xx/7xxx CPU setup functions  (Benjamin Herrenschmidt)
commit 1f1936ff3febf38d582177ea319eaa278f32c91f upstream. Some of those functions try to adjust the CPU features, for example to remove NAP support on some revisions. However, they seem to use r5 as an index into the CPU table entry, which might have been right a long time ago but no longer is. r4 is the right register to use. This probably caused some off behaviours on some PowerMac variants using 750cx or 7455 processor revisions. Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-06-26  x86, mtrr: Avoid MTRR reprogramming on BP during boot on UP platforms  (Suresh Siddha)
commit f7448548a9f32db38f243ccd4271617758ddfe2c upstream.

Markus Kohn ran into a hard hang regression on an acer aspire 1310, when acpi is enabled. git bisect showed the following commit as the bad one that introduced the boot regression.

    commit d0af9eed5aa91b6b7b5049cae69e5ea956fd85c3
    Author: Suresh Siddha <suresh.b.siddha@intel.com>
    Date:   Wed Aug 19 18:05:36 2009 -0700

        x86, pat/mtrr: Rendezvous all the cpus for MTRR/PAT init

Because of the UP configuration of that platform, native_smp_prepare_cpus() bailed out (in smp_sanity_check()) before doing the set_mtrr_aps_delayed_init().

Further down the boot path, native_smp_cpus_done() will call the delayed MTRR initialization for the AP's (mtrr_aps_init()) with mtrr_aps_delayed_init not set. This resulted in the boot processor reprogramming its MTRR's to the values seen during the start of the OS boot. While this is not needed ideally, this shouldn't have caused any side-effects. This is because the reprogramming of MTRR's (set_mtrr_state() that gets called via set_mtrr()) will check if the live register contents are different from what is being asked to write and will do the actual write only if they are different.

BP's mtrr state is read during the start of the OS boot and typically nothing would have changed when we ask to reprogram it on BP again because of the above scenario on a UP platform. So on a normal UP platform no reprogramming of BP MTRR MSR's happens and all is well.

However, on this platform, the BIOS seems to be modifying the fixed mtrr range registers between the start of OS boot and when we double check the live registers for reprogramming BP MTRR registers. And as the live registers are modified, we end up reprogramming the MTRR's to the state seen during the start of the OS boot.

During ACPI initialization, something in the BIOS (probably an SMI handler?) doesn't like this fact and results in a hard lockup.

We didn't see this boot hang issue on this platform before the commit d0af9eed5aa91b6b7b5049cae69e5ea956fd85c3, because only the AP's (if any) will program their MTRR's to the value that the BP had at the start of the OS boot.

Fix this issue by checking mtrr_aps_delayed_init before continuing further in the mtrr_aps_init(). Now, only AP's (if any) will program their MTRR's to the BP values during boot.

Addresses https://bugzilla.novell.com/show_bug.cgi?id=623393

[ By the way, this behavior of the BIOS modifying MTRR's after the start of the OS boot is not common and the kernel is not prepared to handle this situation well. Irrespective of this issue, during suspend/resume, the linux kernel will try to reprogram the BP's MTRR values to the values seen during the start of the OS boot. So suspend/resume might be already broken on this platform for all linux kernel versions. ]

Reported-and-bisected-by: Markus Kohn <jabber@gmx.org>
Tested-by: Markus Kohn <jabber@gmx.org>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Cc: Thomas Renninger <trenn@novell.com>
Cc: Rafael Wysocki <rjw@novell.com>
Cc: Venkatesh Pallipadi <venki@google.com>
LKML-Reference: <1296694975.4418.402.camel@sbsiddha-MOBL3.sc.intel.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
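[Editor's note: paraphrased in code, the fix is an early return, roughly like this. It is a sketch reconstructed from the description; use_intel(), set_mtrr() and mtrr_aps_delayed_init are internal MTRR helpers/state as recalled, so treat the details as assumptions, not the literal patch.]

    /* Hypothetical sketch of mtrr_aps_init() after the fix: if the delayed
     * AP init was never armed (e.g. the UP boot path bailed out early),
     * do nothing, so the BP never re-writes its own MTRRs. */
    void mtrr_aps_init_sketch(void)
    {
            if (!use_intel())
                    return;
            if (!mtrr_aps_delayed_init)     /* the new check */
                    return;

            set_mtrr(~0U, 0, 0, 0);         /* sync APs to the saved BP state */
            mtrr_aps_delayed_init = false;
    }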
2011-06-26  rapidio: fix hang on RapidIO doorbell queue full condition  (Thomas Taranowski)
commit 12a4dc43911785f51a596f771ae0701b18d436f1 upstream. In fsl_rio_dbell_handler() the code currently simply acknowledges the QFI queue full interrupt, but does nothing to resolve the queue full condition. Instead, it jumps to the end of the isr. When a queue full condition occurs, the isr is then re-entered immediately and continually, forever. The fix is to just fall through and read out current doorbell entries. Signed-off-by: Thomas Taranowski <tom@baringforge.com> Cc: Alexandre Bounine <alexandre.bounine@idt.com> Cc: Kumar Gala <galak@kernel.crashing.org> Cc: Matt Porter <mporter@kernel.crashing.org> Cc: Li Yang <leoli@freescale.com> Cc: Thomas Moll <thomas.moll@sysgo.com> Cc: Micha Nelissen <micha@neli.hopto.org> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org> Cc: Grant Likely <grant.likely@secretlab.ca> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86, vt-d: Fix the vt-d fault handling irq migration in the x2apic mode  (Kenji Kaneshige)
commit 086e8ced65d9bcc4a8e8f1cd39b09640f2883f90 upstream. In x2apic mode, we need to set the upper address register of the fault handling interrupt register of the vt-d hardware. Without this irq migration of the vt-d fault handling interrupt is broken. Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com> LKML-Reference: <1291225233.2648.39.camel@sbsiddha-MOBL3> Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com> Acked-by: Chris Wright <chrisw@sous-sol.org> Tested-by: Takao Indoh <indou.takao@jp.fujitsu.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86: Enable the intr-remap fault handling after local APIC setup  (Kenji Kaneshige)
commit 7f7fbf45c6b748074546f7f16b9488ca71de99c1 upstream.

Interrupt-remapping gets enabled very early in the boot, as it determines the apic mode that the processor can use. And the current code enables the vt-d fault handling before the setup_local_APIC(). Hence the APIC LDR registers and data structure in the memory may not be initialized, so the vt-d fault handling in logical xapic/x2apic modes was broken.

Fix this by enabling the vt-d fault handling in the end_local_APIC_setup().

A cleaner fix of enabling fault handling while enabling intr-remapping will be addressed for v2.6.38. [ Enabling intr-remapping determines the usage of x2apic mode and the apic mode determines the fault-handling configuration. ]

Signed-off-by: Kenji Kaneshige <kaneshige.kenji@jp.fujitsu.com>
LKML-Reference: <20101201062244.541996375@intel.com>
Signed-off-by: Suresh Siddha <suresh.b.siddha@intel.com>
Acked-by: Chris Wright <chrisw@sous-sol.org>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86, gcc-4.6: Use gcc -m options when building vdso  (H. Peter Anvin)
commit de2a8cf98ecdde25231d6c5e7901e2cffaf32af9 upstream. The vdso Makefile passes linker-style -m options not to the linker but to gcc. This happens to work with earlier gcc, but fails with gcc 4.6. Pass gcc-style -m options, instead. Note: all currently supported versions of gcc supports -m32, so there is no reason to conditionalize it any more. Reported-by: H. J. Lu <hjl.tools@gmail.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> LKML-Reference: <tip-*@git.kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86, hotplug: In the MWAIT case of play_dead, CLFLUSH the cache line  (H. Peter Anvin)
commit ce5f68246bf2385d6174856708d0b746dc378f20 upstream. When we're using MWAIT for play_dead, explicitly CLFLUSH the cache line before executing MONITOR. This is a potential workaround for the Xeon 7400 erratum AAI65 after having a spurious wakeup and returning around the loop. "Potential" here because it is not certain that that erratum could actually trigger; however, the CLFLUSH should be harmless. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Acked-by: Venkatesh Pallipadi <venki@google.com> Cc: Asit Mallick <asit.k.mallick@intel.com> Cc: Arjan van de Ven <arjan@linux.kernel.org> Cc: Len Brown <lenb@kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
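[Editor's note: together with the two play_dead entries that follow (the WBINVD move and the original MWAIT offline patch), the idle loop being described has roughly this shape. A hedged sketch, not the upstream function; the monitor address and hint arguments, header choices and function name are assumptions.]

    #include <asm/processor.h>
    #include <asm/system.h>

    /* Hypothetical sketch of the MWAIT-based offline loop: flush the
     * monitored cache line (Xeon 7400 AAI65 workaround), re-arm MONITOR,
     * then MWAIT; a spurious wakeup simply goes around the loop again. */
    static void mwait_play_dead_sketch(void *monitor_addr, unsigned long hint)
    {
            for (;;) {
                    clflush(monitor_addr);
                    __monitor(monitor_addr, 0, 0);
                    mb();
                    __mwait(hint, 0);
            }
    }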
2011-04-17  x86, hotplug: Move WBINVD back outside the play_dead loop  (H. Peter Anvin)
commit a68e5c94f7d3dd64fef34dd5d97e365cae4bb42a upstream. On processors with hyperthreading, when only one thread is offlined the other thread can cause a spurious wakeup on the idled thread. We do not want to re-WBINVD when that happens. Ideally, we should simply skip WBINVD unless we're the last thread on a particular core to shut down, but there might be similar issues elsewhere in the system. Thus, revert to previous behavior of only WBINVD outside the loop. Partly as a result, remove the mb()'s around it: they are not necessary since wbinvd() is a serializing instruction, but they were intended to make sure the compiler didn't do any funny loop optimizations. Reported-by: Asit Mallick <asit.k.mallick@intel.com> Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> Cc: Arjan van de Ven <arjan@linux.kernel.org> Cc: Len Brown <lenb@kernel.org> Cc: Venkatesh Pallipadi <venki@google.com> Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> LKML-Reference: <tip-ea53069231f9317062910d6e772cca4ce93de8c8@git.kernel.org> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86, hotplug: Use mwait to offline a processor, fix the legacy case  (H. Peter Anvin)
commit ea53069231f9317062910d6e772cca4ce93de8c8 upstream.

The code in native_play_dead() has a number of problems:

1. We should use MWAIT when available, to put ourselves into a deeper sleep state.
2. We use the existence of CLFLUSH to determine if WBINVD is safe, but that is totally bogus -- WBINVD is 486+, whereas CLFLUSH is a much later addition.
3. We should do WBINVD inside the loop, just in case of something like setting an A bit on page tables. Pointed out by Arjan van de Ven.

This code is based in part on a previous patch by Venki Pallipadi, but unlike that patch this one keeps all the detection code local instead of pre-caching a bunch of information. We're shutting down the CPU; there is absolutely no hurry.

This patch moves all the code to C and deletes the global wbinvd_halt(), which is broken anyway.

Originally-by: Venkatesh Pallipadi <venkatesh.pallipadi@intel.com>
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: Venkatesh Pallipadi <venki@google.com>
Cc: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <20090522232230.162239000@intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86, mwait: Move mwait constants to a common header file  (H. Peter Anvin)
commit bc83cccc761953f878088cdfa682de0970b5561f upstream.

We have MWAIT constants spread across three different .c files, for no good reason. Move them all into a common header file.

[PG: required for cherry pick of ce5f68246b - to avoid duplicating mwait fields in smpboot.c; drop the intel_idle.c chunk, as .34 doesn't have it]

Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Reviewed-by: Arjan van de Ven <arjan@linux.intel.com>
Cc: Len Brown <lenb@kernel.org>
LKML-Reference: <tip-*@git.kernel.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  nmi: fix clock comparator revalidation  (Heiko Carstens)
commit e8129c642155616d9e2160a75f103e127c8c3708 upstream.

On each machine check all registers are revalidated. The save area for the clock comparator however only contains the uppermost seven bytes of the former contents, if valid. Therefore the machine check handler uses a store clock instruction to get the current time and writes that to the clock comparator register, which in turn will generate an immediate timer interrupt.

However, within the lowcore the expected time of the next timer interrupt is stored. If the interrupt happens before that time, the handler won't be called. In turn the clock comparator won't be reprogrammed and therefore the interrupt condition stays pending, which causes an interrupt loop until the expected time is reached.

On NOHZ machines this can result in unresponsive machines since the time of the next expected interrupt can be a couple of days in the future.

To fix this just revalidate the clock comparator register with the expected value. In addition the special handling for udelay must be changed as well.

Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86-32: Fix dummy trampoline-related inline stubs  (H. Peter Anvin)
commit 8848a91068c018bc91f597038a0f41462a0f88a4 upstream. Fix dummy inline stubs for trampoline-related functions when no trampolines exist (until we get rid of the no-trampoline case entirely.) Signed-off-by: H. Peter Anvin <hpa@zytor.com> Cc: Joerg Roedel <joerg.roedel@amd.com> Cc: Borislav Petkov <borislav.petkov@amd.com> LKML-Reference: <4C6C294D.3030404@zytor.com> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86, mm: Fix CONFIG_VMSPLIT_1G and 2G_OPT trampoline  (Hugh Dickins)
commit b7d460897739e02f186425b7276e3fdb1595cea7 upstream.

The rc2 kernel crashes when booting the second cpu on this CONFIG_VMSPLIT_2G_OPT laptop: whereas cloning from the kernel to the low-mappings pgd range does need to be limited by both KERNEL_PGD_PTRS and KERNEL_PGD_BOUNDARY, cloning the kernel pgd range itself must not be limited by the smaller KERNEL_PGD_BOUNDARY.

Signed-off-by: Hugh Dickins <hughd@google.com>
LKML-Reference: <alpine.LSU.2.00.1008242235120.2515@sister.anvils>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86-32: Separate 1:1 pagetables from swapper_pg_dir  (Joerg Roedel)
commit fd89a137924e0710078c3ae855e7cec1c43cb845 upstream.

This patch fixes machine crashes which occur when heavily exercising the CPU hotplug codepaths on a 32-bit kernel. These crashes are caused by AMD Erratum 383 and result in a fatal machine check exception. Here's the scenario:

1. On 32-bit, the swapper_pg_dir page table is used as the initial page table for booting a secondary CPU.

2. To make this work, swapper_pg_dir needs a direct mapping of physical memory in it (the low mappings). By adding those low, large page (2M) mappings (PAE kernel), we create the necessary conditions for Erratum 383 to occur.

3. Other CPUs which do not participate in the off- and onlining game may use swapper_pg_dir while the low mappings are present (when leave_mm is called). For all steps below, the CPU referred to is a CPU that is using swapper_pg_dir, and not the CPU which is being onlined.

4. The presence of the low mappings in swapper_pg_dir can result in TLB entries for addresses below __PAGE_OFFSET to be established speculatively. These TLB entries are marked global and large.

5. When the CPU with such a TLB entry switches to another page table, this TLB entry remains because it is global.

6. The process then generates an access to an address covered by the above TLB entry but there is a permission mismatch - the TLB entry covers a large global page not accessible to userspace.

7. Due to this permission mismatch a new 4kb, user TLB entry gets established. Further, Erratum 383 provides for a small window of time where both TLB entries are present. This results in an uncorrectable machine check exception signalling a TLB multimatch which panics the machine.

There are two ways to fix this issue:

1. Always do a global TLB flush when a new cr3 is loaded and the old page table was swapper_pg_dir. I consider this a hack hard to understand and with performance implications.

2. Do not use swapper_pg_dir to boot secondary CPUs like 64-bit does.

This patch implements solution 2. It introduces a trampoline_pg_dir which has the same layout as swapper_pg_dir with low_mappings. This page table is used as the initial page table of the booting CPU. Later in the bringup process, it switches to swapper_pg_dir and does a global TLB flush. This fixes the crashes in our test cases.

-v2: switch to swapper_pg_dir right after entering start_secondary() so that we are able to access percpu data which might not be mapped in the trampoline page table.

Signed-off-by: Joerg Roedel <joerg.roedel@amd.com>
LKML-Reference: <20100816123833.GB28147@aftab>
Signed-off-by: Borislav Petkov <borislav.petkov@amd.com>
Signed-off-by: H. Peter Anvin <hpa@zytor.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
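[Editor's note: together with the CONFIG_VMSPLIT trampoline fix above, the construction of such a trampoline page table can be pictured roughly as below. A hedged sketch built around clone_pgd_range(); not the actual setup code, and the function name is invented.]

    #include <asm/pgtable.h>

    /* Hypothetical sketch: share the kernel half of swapper_pg_dir with the
     * trampoline page table; the temporary low 1:1 mappings then live only
     * in the trampoline copy, never in swapper_pg_dir itself. */
    static void build_trampoline_pgd_sketch(pgd_t *tramp_pgd)
    {
            /* kernel mappings: copy the full KERNEL_PGD_PTRS range */
            clone_pgd_range(tramp_pgd + KERNEL_PGD_BOUNDARY,
                            swapper_pg_dir + KERNEL_PGD_BOUNDARY,
                            KERNEL_PGD_PTRS);

            /* ... and install the temporary low (1:1) mappings below
             * KERNEL_PGD_BOUNDARY here, in tramp_pgd only. */
    }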
2011-04-17  x86, UV: Fix initialization of max_pnode  (Jack Steiner)
commit 36ac4b987bea9a95217e1af552252f275ca7fc44 upstream.

Fix the calculation of "max_pnode" for systems where the highest blade has neither cpus nor memory. (And, yes, although rare this does occur.)

Signed-off-by: Jack Steiner <steiner@sgi.com>
LKML-Reference: <20100910150808.GA19802@sgi.com>
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  x86, UV: Delete unneeded boot messages  (Jack Steiner)
commit 2acebe9ecb2b77876e87a1480729cfb2db4570dd upstream. SGI:UV: Delete extra boot messages that describe the system topology. These messages are no longer useful. Signed-off-by: Jack Steiner <steiner@sgi.com> LKML-Reference: <20100317154038.GA29346@sgi.com> Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  sparc: Prevent no-handler signal syscall restart recursion.  (David S. Miller)
commit c27852597829128a9c9d96d79ec454a83c6b0da5 upstream. Explicitly clear the "in-syscall" bit when we have no signal handler and back up the program counters to back up the system call. Reported-by: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  sparc: Don't mask signal when we can't setup signal frame.  (David S. Miller)
commit 392c21802ee3aa85cee0e703105f797a8a7b9416 upstream. Don't invoke the signal handler tracehook in that situation either. Reported-by: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  sparc64: Fix race in signal instruction flushing.  (David S. Miller)
commit 05c5e7698bdc54b3079a3517d86077f49ebcc788 upstream. If another cpu does a very wide munmap() on the signal frame area, it can tear down the page table hierarchy from underneath us. Borrow an idea from the 64-bit fault path's get_user_insn(), and disable cross call interrupts during the page table traversal to lock them in place while we operate. Reported-by: Al Viro <viro@ZenIV.linux.org.uk> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  ARM: 6482/2: Fix find_next_zero_bit and related assembly  (James Jones)
commit 0e91ec0c06d2cd15071a6021c94840a50e6671aa upstream.

The find_next_bit, find_first_bit, find_next_zero_bit and find_first_zero_bit functions were not properly clamping to the maxbit argument at the bit level. They were instead only checking maxbit at the byte level. To fix this, add a compare and a conditional move instruction to the end of the common bit-within-the-byte code used by all the functions, and be sure not to clobber the maxbit argument before it is used.

Reviewed-by: Nicolas Pitre <nicolas.pitre@linaro.org>
Tested-by: Stephen Warren <swarren@nvidia.com>
Signed-off-by: James Jones <jajones@nvidia.com>
Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
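[Editor's note: the clamp that the assembly change adds corresponds to the C-level behaviour below. Illustrative only; the find_* family conventionally returns maxbit when no matching bit is found.]

    /* Hypothetical C rendering of the post-fix clamping: whatever index the
     * byte-level scan produced, never report a bit at or beyond maxbit. */
    static unsigned int clamp_found_bit(unsigned int found, unsigned int maxbit)
    {
            return (found < maxbit) ? found : maxbit;   /* maxbit == "not found" */
    }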
2011-04-17  ARM: 6489/1: thumb2: fix incorrect optimisation in usracc  (Will Deacon)
commit 1142b71d85894dcff1466dd6c871ea3c89e0352c upstream. Commit 8b592783 added a Thumb-2 variant of usracc which, when it is called with \rept=2, calls usraccoff once with an offset of 0 and secondly with a hard-coded offset of 4 in order to avoid incrementing the pointer again. If \inc != 4 then we will store the data to the wrong offset from \ptr. Luckily, the only caller that passes \rept=2 to this function is __clear_user so we haven't been actively corrupting user data. This patch fixes usracc to pass \inc instead of #4 to usraccoff when it is called a second time. Reported-by: Tony Thompson <tony.thompson@arm.com> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
2011-04-17  uml: disable winch irq before freeing handler data  (Will Newton)
commit 69e83dad5207f8f03c9699e57e1febb114383cb8 upstream.

Disable the winch irq early to make sure we don't take an interrupt part way through the freeing of the handler data, resulting in a crash on shutdown:

    winch_interrupt : read failed, errno = 9
    fd 13 is losing SIGWINCH support
    ------------[ cut here ]------------
    WARNING: at lib/list_debug.c:48 list_del+0xc6/0x100()
    list_del corruption, next is LIST_POISON1 (00100100)
    082578c8: [<081fd77f>] dump_stack+0x22/0x24
    082578e0: [<0807a18a>] warn_slowpath_common+0x5a/0x80
    08257908: [<0807a23e>] warn_slowpath_fmt+0x2e/0x30
    08257920: [<08172196>] list_del+0xc6/0x100
    08257940: [<08060244>] free_winch+0x14/0x80
    08257958: [<080606fb>] winch_interrupt+0xdb/0xe0
    08257978: [<080a65b5>] handle_IRQ_event+0x35/0xe0
    08257998: [<080a8717>] handle_edge_irq+0xb7/0x170
    082579bc: [<08059bc4>] do_IRQ+0x34/0x50
    082579d4: [<08059e1b>] sigio_handler+0x5b/0x80
    082579ec: [<0806a374>] sig_handler_common+0x44/0xb0
    08257a68: [<0806a538>] sig_handler+0x38/0x50
    08257a78: [<0806a77c>] handle_signal+0x5c/0xa0
    08257a9c: [<0806be28>] hard_handler+0x18/0x20
    08257aac: [<00c14400>] 0xc14400

Signed-off-by: Will Newton <will.newton@gmail.com>
Acked-by: WANG Cong <xiyou.wangcong@gmail.com>
Cc: Jeff Dike <jdike@addtoit.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
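[Editor's note: the ordering the commit enforces looks roughly like this. A hedged sketch; the function name, parameters and the winch data layout are placeholders, not the UML driver's actual code.]

    #include <linux/interrupt.h>
    #include <linux/slab.h>

    /* Hypothetical sketch: unregister the interrupt handler first, so
     * winch_interrupt() can no longer run against data we are about to free. */
    static void free_winch_sketch(int winch_irq, void *winch_data)
    {
            free_irq(winch_irq, winch_data);  /* no further callbacks after this */
            /* ... close any file descriptors the handler used ... */
            kfree(winch_data);
    }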
2011-04-17  acpi-cpufreq: fix a memleak when unloading driver  (Zhang Rui)
commit dab5fff14df2cd16eb1ad4c02e83915e1063fece upstream.

We didn't free per_cpu(acfreq_data, cpu)->freq_table when the acpi_freq driver is unloaded, resulting in the following messages in /sys/kernel/debug/kmemleak:

    unreferenced object 0xf6450e80 (size 64):
      comm "modprobe", pid 1066, jiffies 4294677317 (age 19290.453s)
      hex dump (first 32 bytes):
        00 00 00 00 e8 a2 24 00 01 00 00 00 00 9f 24 00  ......$.......$.
        02 00 00 00 00 6a 18 00 03 00 00 00 00 35 0c 00  .....j.......5..
      backtrace:
        [<c123ba97>] kmemleak_alloc+0x27/0x50
        [<c109f96f>] __kmalloc+0xcf/0x110
        [<f9da97ee>] acpi_cpufreq_cpu_init+0x1ee/0x4e4 [acpi_cpufreq]
        [<c11cd8d2>] cpufreq_add_dev+0x142/0x3a0
        [<c11920b7>] sysdev_driver_register+0x97/0x110
        [<c11cce56>] cpufreq_register_driver+0x86/0x140
        [<f9dad080>] 0xf9dad080
        [<c1001130>] do_one_initcall+0x30/0x160
        [<c10626e9>] sys_init_module+0x99/0x1e0
        [<c1002d97>] sysenter_do_call+0x12/0x26
        [<ffffffff>] 0xffffffff

https://bugzilla.kernel.org/show_bug.cgi?id=15807#c21

Tested-by: Toralf Forster <toralf.foerster@gmx.de>
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
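[Editor's note: the corresponding teardown, sketched from the description above. Hedged; acfreq_data and the freq_table member follow the driver's naming as quoted in the message, but this is not the literal patch and the real exit path does additional cleanup.]

    #include <linux/slab.h>
    #include <linux/percpu.h>

    /* Hypothetical sketch of the exit path: free the per-CPU frequency table
     * that acpi_cpufreq_cpu_init() kmalloc'ed, then the data block itself. */
    static void acpi_cpufreq_cpu_exit_sketch(unsigned int cpu)
    {
            struct acpi_cpufreq_data *data = per_cpu(acfreq_data, cpu);

            if (!data)
                    return;

            per_cpu(acfreq_data, cpu) = NULL;
            kfree(data->freq_table);        /* previously leaked on unload */
            kfree(data);
    }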