path: root/arch
2012-03-12  S390: KEYS: Enable the compat keyctl wrapper on s390x  (David Howells)
commit 1d057720609ed052a6371fe1d53300e5e6328e94 upstream. Enable the compat keyctl wrapper on s390x so that 32-bit s390 userspace can call the keyctl() syscall. There's an s390x assembly wrapper that truncates all the register values to 32-bits and this then calls compat_sys_keyctl() - but the latter only exists if CONFIG_KEYS_COMPAT is enabled, and the s390 Kconfig doesn't enable it. Without this patch, 32-bit calls to the keyctl() syscall are given an ENOSYS error: [root@devel4 ~]# keyctl show Session Keyring -3: key inaccessible (Function not implemented) Signed-off-by: David Howells <dhowells@redhat.com> Acked-by: dan@danny.cz Cc: Carsten Otte <cotte@de.ibm.com> Reviewed-by: Christian Borntraeger <borntraeger@de.ibm.com> Cc: linux-s390@vger.kernel.org Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-12  ARM: LPC32xx: Fix irq on GPI_28  (Roland Stigge)
commit f6737055c1c432a9628a9a731f9881ad8e0a9eee upstream. The GPI_28 IRQ was not registered properly. The registration of IRQ_LPC32XX_GPI_28 was added and the (wrong) IRQ_LPC32XX_GPI_11 at LPC32XX_SIC1_IRQ(4) was replaced by IRQ_LPC32XX_GPI_28 (see manual of LPC32xx / interrupt controller). Signed-off-by: Roland Stigge <stigge@antcom.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-12  ARM: LPC32xx: Fix interrupt controller init  (Roland Stigge)
commit 35dd0a75d4a382e7f769dd0277732e7aa5235718 upstream. This patch fixes the initialization of the interrupt controller of the LPC32xx by correctly setting up SIC1 and SIC2 instead of (wrongly) using the same value as for the Main Interrupt Controller (MIC). Signed-off-by: Roland Stigge <stigge@antcom.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-12  ARM: LPC32xx: irq.c: Clear latched event  (Roland Stigge)
commit 94ed7830cba4dce57b18a2926b5d826bfd184bd6 upstream. This patch fixes the wakeup disable function by clearing latched events. Signed-off-by: Roland Stigge <stigge@antcom.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-12  ARM: LPC32xx: serial.c: Fixed loop limit  (Roland Stigge)
commit ff424aa4c89d19082e8ae5a3351006bc8a4cd91b upstream. This patch fixes a wrong loop limit on UART init. Signed-off-by: Roland Stigge <stigge@antcom.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-12  ARM: LPC32xx: serial.c: HW bug workaround  (Roland Stigge)
commit 2707208ee8a80dbbd5426f5aa1a934f766825bb5 upstream. This patch fixes a HW bug by flushing RX FIFOs of the UARTs on init. It was ported from NXP's git.lpclinux.com tree. Signed-off-by: Roland Stigge <stigge@antcom.de> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-03-12  compat: fix compile breakage on s390  (Heiko Carstens)
commit 048cd4e51d24ebf7f3552226d03c769d6ad91658 upstream. The new is_compat_task() define for the !COMPAT case in include/linux/compat.h conflicts with a similar define in arch/s390/include/asm/compat.h. This is the minimal patch which fixes the build issues. Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Cc: Jonathan Nieder <jrnieder@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  x86/amd: Fix L1i and L2 cache sharing information for AMD family 15h processors  (Andreas Herrmann)
commit 32c3233885eb10ac9cb9410f2f8cd64b8df2b2a1 upstream. For L1 instruction cache and L2 cache the shared CPU information is wrong. On current AMD family 15h CPUs those caches are shared between both cores of a compute unit. This fixes https://bugzilla.kernel.org/show_bug.cgi?id=42607 Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Cc: Petkov Borislav <Borislav.Petkov@amd.com> Cc: Dave Jones <davej@redhat.com> Link: http://lkml.kernel.org/r/20120208195229.GA17523@alberich.amd.com Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  i387: re-introduce FPU state preloading at context switch time  (Linus Torvalds)
commit 34ddc81a230b15c0e345b6b253049db731499f7e upstream. After all the FPU state cleanups and finally finding the problem that caused all our FPU save/restore problems, this re-introduces the preloading of FPU state that was removed in commit b3b0870ef3ff ("i387: do not preload FPU state at task switch time"). However, instead of simply reverting the removal, this reimplements preloading with several fixes, most notably - properly abstracted as a true FPU state switch, rather than as open-coded save and restore with various hacks. In particular, implementing it as a proper FPU state switch allows us to optimize the CR0.TS flag accesses: there is no reason to set the TS bit only to then almost immediately clear it again. CR0 accesses are quite slow and expensive, so don't flip the bit back and forth for no good reason. - Make sure that the same model works for both x86-32 and x86-64, so that there are no gratuitous differences between the two due to the way they save and restore segment state differently due to architectural differences that really don't matter to the FPU state. - Avoid exposing the "preload" state to the context switch routines, and in particular allow the concept of lazy state restore: if nothing else has used the FPU in the meantime, and the process is still on the same CPU, we can avoid restoring state from memory entirely, just re-expose the state that is still in the FPU unit. That optimized lazy restore isn't actually implemented here, but the infrastructure is set up for it. Of course, older CPUs that use 'fnsave' to save the state cannot take advantage of this, since the state saving also trashes the state. In other words, there is now an actual _design_ to the FPU state saving, rather than just random historical baggage. Hopefully it's easier to follow as a result. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  i387: move TS_USEDFPU flag from thread_info to task_struct  (Linus Torvalds)
commit f94edacf998516ac9d849f7bc6949a703977a7f3 upstream. This moves the bit that indicates whether a thread has ownership of the FPU from the TS_USEDFPU bit in thread_info->status to a word of its own (called 'has_fpu') in task_struct->thread.has_fpu. This fixes two independent bugs at the same time: - changing 'thread_info->status' from the scheduler causes nasty problems for the other users of that variable, since it is defined to be thread-synchronous (that's what the "TS_" part of the naming was supposed to indicate). So perfectly valid code could (and did) do ti->status |= TS_RESTORE_SIGMASK; and the compiler was free to do that as separate load, or and store instructions. Which can cause problems with preemption, since a task switch could happen in between, and change the TS_USEDFPU bit. The change to TS_USEDFPU would be overwritten by the final store. In practice, this seldom happened, though, because the 'status' field was seldom used more than once, so gcc would generally tend to generate code that used a read-modify-write instruction and thus happened to avoid this problem - RMW instructions are naturally low fat and preemption-safe. - On x86-32, the current_thread_info() pointer would, during interrupts and softirqs, point to a *copy* of the real thread_info, because x86-32 uses %esp to calculate the thread_info address, and thus the separate irq (and softirq) stacks would cause these kinds of odd thread_info copy aliases. This is normally not a problem, since interrupts aren't supposed to look at thread information anyway (what thread is running at interrupt time really isn't very well-defined), but it confused the heck out of irq_fpu_usable() and the code that tried to squirrel away the FPU state. (It also caused untold confusion for us poor kernel developers). It also turns out that using 'task_struct' is actually much more natural for most of the call sites that care about the FPU state, since they tend to work with the task struct for other reasons anyway (ie scheduling). And the FPU data that we are going to save/restore is found there too. Thanks to Arjan Van De Ven <arjan@linux.intel.com> for pointing us to the %esp issue. Cc: Arjan van de Ven <arjan@linux.intel.com> Reported-and-tested-by: Raphael Prevost <raphael@buro.asia> Acked-and-tested-by: Suresh Siddha <suresh.b.siddha@intel.com> Tested-by: Peter Anvin <hpa@zytor.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  i387: move AMD K7/K8 fpu fxsave/fxrstor workaround from save to restore  (Linus Torvalds)
commit 4903062b5485f0e2c286a23b44c9b59d9b017d53 upstream. The AMD K7/K8 CPUs don't save/restore FDP/FIP/FOP unless an exception is pending. In order to not leak FIP state from one process to another, we need to do a floating point load after the fxsave of the old process, and before the fxrstor of the new FPU state. That resets the state to the (uninteresting) kernel load, rather than some potentially sensitive user information. We used to do this directly after the FPU state save, but that is actually very inconvenient, since it (a) corrupts what is potentially perfectly good FPU state that we might want to lazily avoid restoring later and (b) on x86-64 it resulted in a very annoying ordering constraint, where "__unlazy_fpu()" in the task switch needs to be delayed until after the DS segment has been reloaded just to get the new DS value. Coupling it to the fxrstor instead of the fxsave automatically avoids both of these issues, and also ensures that we only do it when actually necessary (the FP state after a save may never actually get used). It's simply a much more natural place for the leaked state cleanup. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
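The workaround pattern described above, sketched as kernel-style C (a sketch, not the exact patch; static_cpu_has(), the X86_FEATURE_FXSAVE_LEAK flag and safe_address are assumed to exist as in the x86 FPU code of this era):

    /* Before fxrstor on affected AMD CPUs, clear pending exceptions and
     * load a harmless kernel value, so any leaked FDP/FIP/FOP contents
     * point at an uninteresting kernel address instead of the previous
     * task's user state. */
    static inline void fpu_fxrstor(struct i387_fxsave_struct *fx)
    {
            if (static_cpu_has(X86_FEATURE_FXSAVE_LEAK))
                    asm volatile("fnclex\n\t"
                                 "emms\n\t"
                                 "fildl %P[addr]"   /* dummy FP load */
                                 : : [addr] "m" (safe_address));
            asm volatile("fxrstor %0" : : "m" (*fx));
    }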
2012-02-29  i387: do not preload FPU state at task switch time  (Linus Torvalds)
commit b3b0870ef3ffed72b92415423da864f440f57ad6 upstream. Yes, taking the trap to re-load the FPU/MMX state is expensive, but so is spending several days looking for a bug in the state save/restore code. And the preload code has some rather subtle interactions with both paravirtualization support and segment state restore, so it's not nearly as simple as it should be. Also, now that we no longer necessarily depend on a single bit (ie TS_USEDFPU) for keeping track of the state of the FPU, we might be able to do better. If we are really switching between two processes that keep touching the FP state, save/restore is inevitable, but in the case of having one process that does most of the FPU usage, we may actually be able to do much better than the preloading. In particular, we may be able to keep track of which CPU the process ran on last, and also per CPU keep track of which process' FP state that CPU has. For modern CPUs that don't destroy the FPU contents at save time, that would allow us to do a lazy restore by just re-enabling the existing FPU state - with no restore cost at all! Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  i387: don't ever touch TS_USEDFPU directly, use helper functions  (Linus Torvalds)
commit 6d59d7a9f5b723a7ac1925c136e93ec83c0c3043 upstream. This creates three helper functions that do the TS_USEDFPU accesses, and makes everybody that used to do it by hand use those helpers instead. In addition, there's a couple of helper functions for the "change both CR0.TS and TS_USEDFPU at the same time" case, and the places that do that together have been changed to use those. That means that we have fewer random places that open-code this situation. The intent is partly to clarify the code without actually changing any semantics yet (since we clearly still have some hard to reproduce bug in this area), but also to make it much easier to use another approach entirely to caching the CR0.TS bit for software accesses. Right now we use a bit in the thread-info 'status' variable (this patch does not change that), but we might want to make it a full field of its own or even make it a per-cpu variable. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
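The helpers have roughly this shape (a sketch following the commit's description; at this stage the bit still lives in thread_info->status):

    static inline bool __thread_has_fpu(struct task_struct *tsk)
    {
            return task_thread_info(tsk)->status & TS_USEDFPU;
    }

    static inline void __thread_set_has_fpu(struct task_struct *tsk)
    {
            task_thread_info(tsk)->status |= TS_USEDFPU;
    }

    static inline void __thread_clear_has_fpu(struct task_struct *tsk)
    {
            task_thread_info(tsk)->status &= ~TS_USEDFPU;
    }

    /* the combined "change CR0.TS and TS_USEDFPU together" cases */
    static inline void __thread_fpu_begin(struct task_struct *tsk)
    {
            __thread_set_has_fpu(tsk);
            clts();                 /* clear CR0.TS: FPU usable */
    }

    static inline void __thread_fpu_end(struct task_struct *tsk)
    {
            __thread_clear_has_fpu(tsk);
            stts();                 /* set CR0.TS: next FPU use traps */
    }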
2012-02-29  i387: move TS_USEDFPU clearing out of __save_init_fpu and into callers  (Linus Torvalds)
commit b6c66418dcad0fcf83cd1d0a39482db37bf4fc41 upstream. Touching TS_USEDFPU without touching CR0.TS is confusing, so don't do it. By moving it into the callers, we always do the TS_USEDFPU next to the CR0.TS accesses in the source code, and it's much easier to see how the two go hand in hand. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  i387: fix x86-64 preemption-unsafe user stack save/restore  (Linus Torvalds)
commit 15d8791cae75dca27bfda8ecfe87dca9379d6bb0 upstream. Commit 5b1cbac37798 ("i387: make irq_fpu_usable() tests more robust") added a sanity check to the #NM handler to verify that we never cause the "Device Not Available" exception in kernel mode. However, that check actually pinpointed a (fundamental) race where we do cause that exception as part of the signal stack FPU state save/restore code. Because we use the floating point instructions themselves to save and restore state directly from user mode, we cannot do that atomically with testing the TS_USEDFPU bit: the user mode access itself may cause a page fault, which causes a task switch, which saves and restores the FP/MMX state from the kernel buffers. This kind of "recursive" FP state save is fine per se, but it means that when the signal stack save/restore gets restarted, it will now take the '#NM' exception we originally tried to avoid. With preemption this can happen even without the page fault - but because of the user access, we cannot just disable preemption around the save/restore instruction. There are various ways to solve this, including using the "enable/disable_page_fault()" helpers to not allow page faults at all during the sequence, and fall back to copying things by hand without the use of the native FP state save/restore instructions. However, the simplest thing to do is to just allow the #NM from kernel space, but fix the race in setting and clearing CR0.TS that this all exposed: the TS bit changes and the TS_USEDFPU bit absolutely have to be atomic wrt scheduling, so while the actual state save/restore can be interrupted and restarted, the act of actually clearing/setting CR0.TS and the TS_USEDFPU bit together must not. Instead of just adding random "preempt_disable/enable()" calls to what is already excessively ugly code, this introduces some helper functions that mostly mirror the "kernel_fpu_begin/end()" functionality, just for the user state instead. Those helper functions should probably eventually replace the other ad-hoc CR0.TS and TS_USEDFPU tests too, but I'll need to think about it some more: the task switching functionality in particular needs to expose the difference between the 'prev' and 'next' threads, while the new helper functions intentionally were written to only work with 'current'. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  i387: fix sense of sanity check  (Linus Torvalds)
commit c38e23456278e967f094b08247ffc3711b1029b2 upstream. The check for save_init_fpu() (introduced in commit 5b1cbac37798: "i387: make irq_fpu_usable() tests more robust") was the wrong way around, but I hadn't noticed, because my "tests" were bogus: the FPU exceptions are disabled by default, so even doing a divide by zero never actually triggers this code at all unless you do extra work to enable them. So if anybody did enable them, they'd get one spurious warning. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  i387: make irq_fpu_usable() tests more robust  (Linus Torvalds)
commit 5b1cbac37798805c1fee18c8cebe5c0a13975b17 upstream. Some code - especially the crypto layer - wants to use the x86 FP/MMX/AVX register set in what may be interrupt (typically softirq) context. That *can* be ok, but the tests for when it was ok were somewhat suspect. We cannot touch the thread-specific status bits either, so we'd better check that we're not going to try to save FP state or anything like that. Now, it may be that the TS bit is always cleared *before* we set the USEDFPU bit (and only set when we had already cleared the USEDFPU bit before), so the TS bit test may actually have been sufficient, but it certainly was not obviously so. So this explicitly verifies that we will not touch the TS_USEDFPU bit, and adds a few related sanity-checks. Because it seems that somehow AES-NI is corrupting user FP state. The cause is not clear, and this patch doesn't fix it, but while debugging it I really wanted the code to be more obviously correct and robust. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  i387: math_state_restore() isn't called from asm  (Linus Torvalds)
commit be98c2cdb15ba26148cd2bd58a857d4f7759ed38 upstream. It was marked asmlinkage for some really old and stale legacy reasons. Fix that and the equally stale comment. Noticed when debugging the irq_fpu_usable() bugs. Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  ARM: 7325/1: fix v7 boot with lockdep enabled  (Rabin Vincent)
commit 8e43a905dd574f54c5715d978318290ceafbe275 upstream. Bootup with lockdep enabled has been broken on v7 since b46c0f74657d ("ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR"). This is because v7_setup (which is called very early during boot) calls v7_flush_dcache_all, and the save_and_disable_irqs added by that patch ends up attempting to call into lockdep C code (trace_hardirqs_off()) when we are in no position to execute it (no stack, MMU off). Fix this by using a notrace variant of save_and_disable_irqs. The code already uses the notrace variant of restore_irqs. Reviewed-by: Nicolas Pitre <nico@linaro.org> Acked-by: Stephen Boyd <sboyd@codeaurora.org> Cc: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Rabin Vincent <rabin@rab.in> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-29  ARM: 7321/1: cache-v7: Disable preemption when reading CCSIDR  (Stephen Boyd)
commit b46c0f74657d1fe1c1b0c1452631cc38a9e6987f upstream. armv7's flush_cache_all() flushes caches via set/way. To determine the cache attributes (line size, number of sets, etc.) the assembly first writes the CSSELR register to select a cache level and then reads the CCSIDR register. The CSSELR register is banked per-cpu and is used to determine which cache level CCSIDR reads. If the task is migrated between when the CSSELR is written and the CCSIDR is read the CCSIDR value may be for an unexpected cache level (for example L1 instead of L2) and incorrect cache flushing could occur. Disable interrupts across the write and read so that the correct cache attributes are read and used for the cache flushing routine. We disable interrupts instead of disabling preemption because the critical section is only 3 instructions and we want to call v7_dcache_flush_all from __v7_setup which doesn't have a full kernel stack with a struct thread_info. This fixes a problem we see in scm_call() when flush_cache_all() is called from preemptible context and sometimes the L2 cache is not properly flushed out. Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Reviewed-by: Nicolas Pitre <nico@linaro.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
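Written as C rather than the assembly the fix actually touches, the critical section looks roughly like this (a sketch; CP15 accessor encodings per the ARMv7 architecture manual):

    /* CSSELR is banked per-CPU and selects which cache level CCSIDR
     * reads, so the write and the read must happen on the same CPU.
     * Interrupts are disabled rather than preemption because this runs
     * from __v7_setup, which has no stack/thread_info for the preempt
     * count. */
    static u32 read_ccsidr(u32 csselr)
    {
            unsigned long flags;
            u32 ccsidr;

            raw_local_irq_save(flags);
            asm volatile("mcr p15, 2, %0, c0, c0, 0" : : "r" (csselr));
            isb();  /* ensure the CSSELR write takes effect before the read */
            asm volatile("mrc p15, 1, %0, c0, c0, 0" : "=r" (ccsidr));
            raw_local_irq_restore(flags);
            return ccsidr;
    }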
2012-02-29  powerpc/perf: power_pmu_start restores incorrect values, breaking frequency events  (Anton Blanchard)
commit 9a45a9407c69d068500923480884661e2b9cc421 upstream. perf on POWER stopped working after commit e050e3f0a71b (perf: Fix broken interrupt rate throttling). That patch exposed a bug in the POWER perf_events code. Since the PMCs count upwards and take an exception when the top bit is set, we want to write 0x80000000 - left in power_pmu_start. We were instead programming in left which effectively disables the counter until we eventually hit 0x80000000. This could take seconds or longer. With the patch applied I get the expected number of samples: SAMPLE events: 9948 Signed-off-by: Anton Blanchard <anton@samba.org> Acked-by: Paul Mackerras <paulus@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
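The arithmetic at the heart of the fix, sketched (write_pmc() and the period bookkeeping as in the POWER perf_events code):

    /* PMCs count upward and take an exception when bit 31 becomes set,
     * so to fire after 'left' more events the counter must start at
     * 0x80000000 - left. Programming 'left' itself (the bug) parks the
     * counter far below the overflow point. */
    s64 left = local64_read(&event->hw.period_left);
    unsigned long val = 0;

    if (left < 0x80000000L)
            val = 0x80000000L - left;
    write_pmc(event->hw.idx, val);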
2012-02-20  xen pvhvm: do not remap pirqs onto evtchns if !xen_have_vector_callback  (Stefano Stabellini)
commit 207d543f472c1ac9552df79838dc807cbcaa9740 upstream. Signed-off-by: Stefano Stabellini <stefano.stabellini@eu.citrix.com> Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-13  ARM: OMAP2+: GPMC: fix device size setup  (Yegor Yefremov)
commit 8ef5d844cc3a644ea6f7665932a4307e9fad01fa upstream. The following statement can only change the device size from 8-bit (0) to 16-bit (1), but not vice versa: regval |= GPMC_CONFIG1_DEVICESIZE(wval); Since this field has one reserved bit that could be used in the future, just clear both bits and then OR in the desired value. Signed-off-by: Yegor Yefremov <yegorslists@googlemail.com> Signed-off-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
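In code, the fix amounts to a read-modify-write that clears the whole field first (a sketch using the register accessors and macro named in the commit):

    u32 regval = gpmc_cs_read_reg(cs, GPMC_CS_CONFIG1);

    regval &= ~GPMC_CONFIG1_DEVICESIZE(3);   /* clear both bits, incl. the reserved one */
    regval |= GPMC_CONFIG1_DEVICESIZE(wval); /* 0 = 8-bit, 1 = 16-bit */
    gpmc_cs_write_reg(cs, GPMC_CS_CONFIG1, regval);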
2012-02-13  ARM: 7308/1: vfp: flush thread hwstate before copying ptrace registers  (Will Deacon)
commit 8130b9d7b9d858aa04ce67805e8951e3cb6e9b2f upstream. If we are context switched whilst copying into a thread's vfp_hard_struct then the partial copy may be corrupted by the VFP context switching code (see "ARM: vfp: flush thread hwstate before restoring context from sigframe"). This patch updates the ptrace VFP set code so that the thread state is flushed before the copy, therefore disabling VFP and preventing corruption from occurring. Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
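The ordering this establishes, sketched (vfp_flush_hwstate() is the flush helper these VFP patches rely on; new_state stands in for the ptrace-supplied registers):

    /* Flushing first disables VFP for the thread and clears the CPU's
     * last-VFP-context pointer, so a context switch during the copy can
     * no longer write stale hardware state over the new values. */
    vfp_flush_hwstate(thread);
    thread->vfpstate.hard = new_state;   /* now safe to update */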
2012-02-13  ARM: 7307/1: vfp: fix ptrace regset modification race  (Dave Martin)
commit 247f4993a5974e6759606c4d380748eecfd273ff upstream. In a preemptible kernel, vfp_set() can be preempted, causing the hardware VFP context to be switched while the thread vfp state is being read and modified. This leads to a race condition which can cause the thread vfp state to become corrupted if lazy VFP context save occurs due to preemption in between the time thread->vfpstate is read and the time the modified state is written back. This may occur if preemption occurs during the execution of a ptrace() call which modifies the VFP register state of a thread. Such instances should be very rare in most realistic scenarios -- none has been reported, so far as I am aware. Only uniprocessor systems should be affected, since VFP context save is not currently lazy in SMP kernels. The problem was introduced by my earlier patch migrating to use regsets to implement ptrace. This patch does a vfp_sync_hwstate() before reading thread->vfpstate, to make sure that the thread's VFP state is not live in the hardware registers while the registers are modified. Thanks to Will Deacon for spotting this. Signed-off-by: Dave Martin <dave.martin@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-13  ARM: 7306/1: vfp: flush thread hwstate before restoring context from sigframe  (Will Deacon)
commit 2af276dfb1722e97b190bd2e646b079a2aa674db upstream. Following execution of a signal handler, we currently restore the VFP context from the ucontext in the signal frame. This involves copying from the user stack into the current thread's vfp_hard_struct and then flushing the new data out to the hardware registers. This is problematic when using a preemptible kernel because we could be context switched whilst updating the vfp_hard_struct. If the current thread has made use of VFP since the last context switch, the VFP notifier will copy from the hardware registers into the vfp_hard_struct, overwriting any data that had been partially copied by the signal code. Disabling preemption across copy_from_user calls is a terrible idea, so instead we move the VFP thread flush *before* we update the vfp_hard_struct. Since the flushing is performed lazily, this has the effect of disabling VFP and clearing the CPU's VFP state pointer, therefore preventing the thread from being updated with stale data on the next context switch. Tested-by: Peter Maydell <peter.maydell@linaro.org> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-03  net: bpf_jit: fix divide by 0 generation  (Eric Dumazet)
[ Upstream commit d00a9dd21bdf7908b70866794c8313ee8a5abd5c ] Several problems fixed in this patch: 1) The target of the conditional jump emitted when a BPF program performs a divide by 0 was wrong. 2) The full function prologue/epilogue must be 'generated' at pass=0, or else we can stop too early in pass=1 if the proglen doesn't change (e.g. if the growth of the prologue/epilogue exactly offsets the shrinkage from jumps being converted to near jumps). 3) The wrong-length detection at the end of code generation now issues a more explicit message, with no need for a full stack trace. Reported-by: Phil Oester <kernel@linuxace.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
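Point 2 concerns the sizing loop: every pass must emit the complete image so that the total length can converge before memory is allocated. A minimal sketch of such a loop, with hypothetical emit_*() helpers that return byte counts and only write when image is non-NULL:

    unsigned int proglen = 0, oldproglen = 0;
    u8 *image = NULL;
    int pass;

    for (pass = 0; pass < 10; pass++) {
            proglen  = emit_prologue(image);
            proglen += emit_body(image, proglen);
            proglen += emit_epilogue(image, proglen);
            if (image) {
                    if (proglen != oldproglen)   /* point 3: explicit message */
                            pr_err("bpf_jit: proglen=%u != oldproglen=%u\n",
                                   proglen, oldproglen);
                    break;                       /* final pass wrote the code */
            }
            if (pass > 0 && proglen == oldproglen)
                    image = module_alloc(proglen);  /* sizes converged */
            oldproglen = proglen;
    }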
2012-02-03  ARM: 7296/1: proc-v7.S: remove HARVARD_CACHE preprocessor guards  (Will Deacon)
commit 612539e81f655f6ac73c7af1da8701c1ee618aee upstream. On v7, we use the same cache maintenance instructions for data lines as for unified lines. This was not the case for v6, where HARVARD_CACHE was defined to indicate the L1 cache topology. This patch removes the erroneous compile-time check for HARVARD_CACHE in proc-v7.S, ensuring that we perform I-side invalidation at boot. Reported-and-Acked-by: Shawn Guo <shawn.guo@linaro.org> Acked-by: Catalin Marinas <Catalin.Marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-03  mach-ux500: enable ARM errata 764369  (Srinidhi KASAGAR)
commit d65015f7c5c5be9fd3f5e567889c844ba81bdc9c upstream. This applies ARM errata 764369 for all ux500 platforms. Signed-off-by: Srinidhi Kasagar <srinidhi.kasagar@stericsson.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-02-03  x86/microcode_amd: Add support for CPU family specific container files  (Andreas Herrmann)
commit 5b68edc91cdc972c46f76f85eded7ffddc3ff5c2 upstream. We've decided to provide CPU family specific container files (starting with CPU family 15h). E.g. for family 15h we have to load microcode_amd_fam15h.bin instead of microcode_amd.bin. The rationale is that starting with family 15h the patch size is larger than 2KB, which was hard coded as the maximum patch size in various microcode loaders (not just Linux). Container files which include patches larger than 2KB cause different kinds of trouble with such old patch loaders. Thus we have to ensure that the default container file provides only patches with size less than 2KB. Signed-off-by: Andreas Herrmann <andreas.herrmann3@amd.com> Cc: Borislav Petkov <borislav.petkov@amd.com> Cc: <stable@kernel.org> Link: http://lkml.kernel.org/r/20120120164412.GD24508@alberich.amd.com [ documented the naming convention and tidied the code a bit. ] Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
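The convention translates to a firmware-name selection along these lines (a sketch close to the naming documented above):

    char fw_name[36] = "amd-ucode/microcode_amd.bin";

    /* family 15h+ patches exceed the 2KB limit old loaders assume,
     * so those families get their own container file */
    if (c->x86 >= 0x15)
            snprintf(fw_name, sizeof(fw_name),
                     "amd-ucode/microcode_amd_fam%.2xh.bin", c->x86);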
2012-02-03  x86/uv: Fix uv_gpa_to_soc_phys_ram() shift  (Russ Anderson)
commit 5a51467b146ab7948d2f6812892eac120a30529c upstream. uv_gpa_to_soc_phys_ram() was inadvertently ignoring the shift values. This fix takes the shift into account. Signed-off-by: Russ Anderson <rja@sgi.com> Link: http://lkml.kernel.org/r/20120119020753.GA7228@sgi.com Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2012-01-25  score: fix off-by-one index into syscall table  (Dan Rosenberg)
commit c25a785d6647984505fa165b5cd84cfc9a95970b upstream. If the provided system call number is equal to __NR_syscalls, the current check will pass and a function pointer just after the system call table may be called, since sys_call_table is an array with total size __NR_syscalls. Whether or not this is a security bug depends on what the compiler puts immediately after the system call table. It's likely that this won't do anything bad because there is an additional NULL check on the syscall entry, but if there happens to be a non-NULL value immediately after the system call table, this may result in local privilege escalation. Signed-off-by: Dan Rosenberg <drosenberg@vsecurity.com> Cc: Chen Liqin <liqin.chen@sunplusct.com> Cc: Lennox Wu <lennox.wu@gmail.com> Cc: Eugene Teo <eugeneteo@kernel.sg> Cc: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
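The off-by-one in C terms: with __NR_syscalls entries, valid indices run from 0 to __NR_syscalls - 1, so the guard must reject equality as well (a sketch of the check, not the score assembly):

    /* was: if (syscall_nr > __NR_syscalls), which lets
     * syscall_nr == __NR_syscalls index one slot past the table */
    if (syscall_nr >= __NR_syscalls)
            return -ENOSYS;
    call = sys_call_table[syscall_nr];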
2012-01-25  x86/UV2: Fix BAU destination timeout initialization  (Cliff Wickman)
commit d059f9fa84a30e04279c6ff615e9e2cf3b260191 upstream. Move the call to enable_timeouts() forward so that BAU_MISC_CONTROL is initialized before using it in calculate_destination_timeout(). Fix the calculation of a BAU destination timeout for UV2 (in calculate_destination_timeout()). Signed-off-by: Cliff Wickman <cpw@sgi.com> Link: http://lkml.kernel.org/r/20120116211848.GB5767@sgi.com Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-01-25  ACPI, ia64: Use SRAT table rev to use 8bit or 16/32bit PXM fields (ia64)  (Kurt Garloff)
commit 9f10f6a520deb3639fac78d81151a3ade88b4e7f upstream. In SRAT v1, we had 8bit proximity domain (PXM) fields; SRAT v2 provides 32bits for these. The new fields were reserved before. According to the ACPI spec, the OS must disregard reserved fields. ia64 did handle the PXM fields almost consistently, but depending on sgi's sn2 platform. This patch leaves the sn2 logic in, but also uses 16/32 bits for PXM if the SRAT has rev 2 or higher. The patch also adds __init to the two pxm accessor functions, as they access __initdata now and are called from an __init function only anyway. Note that the code only uses 16 bits for the PXM field in the processor proximity field; the patch does not address this as 16 bits are more than enough. Signed-off-by: Kurt Garloff <kurt@garloff.de> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-01-25  ACPI, x86: Use SRAT table rev to use 8bit or 32bit PXM fields (x86/x86-64)  (Kurt Garloff)
commit cd298f60a2451a16e0f077404bf69b62ec868733 upstream. In SRAT v1, we had 8bit proximity domain (PXM) fields; SRAT v2 provides 32bits for these. The new fields were reserved before. According to the ACPI spec, the OS must disregard reserved fields. x86/x86-64 was rather inconsistent prior to this patch; it used 8 bits for the pxm field in cpu_affinity, but 32 bits in mem_affinity. This patch makes it consistent: Either use 8 bits consistently (SRAT rev 1 or lower) or 32 bits (SRAT rev 2 or higher). cc: x86@kernel.org Signed-off-by: Kurt Garloff <kurt@garloff.de> Signed-off-by: Len Brown <len.brown@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
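A sketch of the revision-dependent extraction for the CPU affinity entry (struct acpi_srat_cpu_affinity keeps the low byte and the three high bytes of the PXM in separate fields):

    static int srat_cpu_pxm(struct acpi_srat_cpu_affinity *pa, int srat_rev)
    {
            int pxm = pa->proximity_domain_lo;

            if (srat_rev >= 2)      /* high bytes are reserved before rev 2 */
                    pxm |= (pa->proximity_domain_hi[0] << 8)  |
                           (pa->proximity_domain_hi[1] << 16) |
                           (pa->proximity_domain_hi[2] << 24);
            return pxm;
    }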
2012-01-25  x86, UV: Update Boot messages for SGI UV2 platform  (Jack Steiner)
commit da517a08ac5913cd80ce3507cddd00f2a091b13c upstream. SGI UV systems print a message during boot: UV: Found <num> blades Due to packaging changes, the blade count is not accurate on the next generation of the platform. This patch corrects the count. Signed-off-by: Jack Steiner <steiner@sgi.com> Link: http://lkml.kernel.org/r/20120106191900.GA19772@sgi.com Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-01-25  x86: Fix mmap random address range  (Ludwig Nussel)
commit 9af0c7a6fa860698d080481f24a342ba74b68982 upstream. On x86_32 casting the unsigned int result of get_random_int() to long may result in a negative value. On x86_32 the range of mmap_rnd() therefore was -255 to 255. The 32bit mode on x86_64 used 0 to 255 as intended. The bug was introduced by 675a081 ("x86: unify mmap_{32|64}.c") in January 2008. Signed-off-by: Ludwig Nussel <ludwig.nussel@suse.de> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: harvey.harrison@gmail.com Cc: "H. Peter Anvin" <hpa@zytor.com> Cc: Harvey Harrison <harvey.harrison@gmail.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Link: http://lkml.kernel.org/r/201111152246.pAFMklOB028527@wpaz5.hot.corp.google.com Signed-off-by: Ingo Molnar <mingo@elte.hu> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
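The bug reproduces in isolation; this small demo (compile with -m32 so that long is 32 bits) shows the cast widening the intended 0..255 range to -255..255:

    #include <stdio.h>

    int main(void)
    {
            unsigned int r = 0xffffff01u;        /* stand-in for get_random_int() */
            long buggy = (long)r % (1 << 8);     /* -255 when long is 32-bit */
            unsigned long fixed = r % (1 << 8);  /* always 0..255 */

            printf("buggy=%ld fixed=%lu\n", buggy, fixed);
            return 0;
    }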
2012-01-25  x86/PCI: build amd_bus.o only when CONFIG_AMD_NB=y  (Bjorn Helgaas)
commit 5cf9a4e69c1ff0ccdd1d2b7404f95c0531355274 upstream. We only need amd_bus.o for AMD systems with PCI. arch/x86/pci/Makefile already depends on CONFIG_PCI=y, so this patch just adds the dependency on CONFIG_AMD_NB. Cc: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-01-25  x86/PCI: amd: factor out MMCONFIG discovery  (Bjorn Helgaas)
commit 24d25dbfa63c376323096660bfa9ad45a08870ce upstream. This factors out the AMD native MMCONFIG discovery so we can use it outside amd_bus.c. amd_bus.c reads AMD MSRs so it can remove the MMCONFIG area from the PCI resources. We may also need the MMCONFIG information to work around BIOS defects in the ACPI MCFG table. Cc: Borislav Petkov <borislav.petkov@amd.com> Cc: Yinghai Lu <yinghai@kernel.org> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com> Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-01-25  x86/PCI: Ignore CPU non-addressable _CRS reserved memory resources  (Gary Hade)
commit ae5cd86455381282ece162966183d3f208c6fad7 upstream. This assures that a _CRS reserved host bridge window or window region is not used if it is not addressable by the CPU. The new code either trims the window to exclude the non-addressable portion or totally ignores the window if the entire window is non-addressable. The current code has been shown to be problematic with 32-bit non-PAE kernels on systems where _CRS reserves resources above 4GB. Signed-off-by: Gary Hade <garyhade@us.ibm.com> Reviewed-by: Bjorn Helgaas <bhelgaas@google.com> Cc: Thomas Renninger <trenn@novell.com> Cc: linux-kernel@vger.kernel.org Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
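The trimming logic, sketched (deriving the limit from x86_phys_bits is an illustrative assumption):

    /* clip a _CRS host bridge window to the CPU-addressable range and
     * drop it entirely when nothing addressable remains */
    u64 max_addr = ((u64)1 << boot_cpu_data.x86_phys_bits) - 1;

    if (start > max_addr)
            return;                 /* whole window non-addressable: ignore it */
    if (end > max_addr)
            end = max_addr;         /* trim the non-addressable tail */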
2012-01-12  powerpc: Fix unpaired probe_hcall_entry and probe_hcall_exit  (Li Zhong)
commit e4f387d8db3ba3c2dae4d8bdfe7bb5f4fe1bcb0d upstream. Unpaired calling of probe_hcall_entry and probe_hcall_exit might happen as following, which could cause incorrect preempt count. __trace_hcall_entry => trace_hcall_entry -> probe_hcall_entry => get_cpu_var => preempt_disable __trace_hcall_exit => trace_hcall_exit -> probe_hcall_exit => put_cpu_var => preempt_enable where: A => B and A -> B means A calls B, but => means A will call B through function name, and B will definitely be called. -> means A will call B through function pointer, so B might not be called if the function pointer is not set. So error happens when only one of probe_hcall_entry and probe_hcall_exit get called during a hcall. This patch tries to move the preempt count operations from probe_hcall_entry and probe_hcall_exit to its callers. Reported-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Li Zhong <zhong@linux.vnet.ibm.com> Tested-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
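A minimal sketch of the restructuring: the probes may or may not be attached, so the preempt-count handling moves into the __trace_hcall_* callers, which always run and therefore always pair up:

    void __trace_hcall_entry(unsigned long opcode, unsigned long *args)
    {
            preempt_disable();               /* was inside probe_hcall_entry() */
            trace_hcall_entry(opcode, args); /* probe may be a no-op */
    }

    void __trace_hcall_exit(long opcode, unsigned long retval,
                            unsigned long *retbuf)
    {
            trace_hcall_exit(opcode, retval, retbuf);
            preempt_enable();                /* was inside probe_hcall_exit() */
    }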
2012-01-12  powerpc/time: Handle wrapping of decrementer  (Anton Blanchard)
commit 37fb9a0231ee43d42d069863bdfd567fca2b61af upstream. When re-enabling interrupts we have code to handle edge sensitive decrementers by resetting the decrementer to 1 whenever it is negative. If interrupts were disabled long enough that the decrementer wrapped to positive we do nothing. This means interrupts can be delayed for a long time until it finally goes negative again. While we hope interrupts are never disabled long enough for the decrementer to go positive, we have a very good test team that can drive any kernel into the ground. The softlockup data we get back from these failures could be seconds in the future, completely missing the cause of the lockup. We already keep track of the timebase of the next event so use that to work out if we should trigger a decrementer exception. Signed-off-by: Anton Blanchard <anton@samba.org> Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
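The idea in sketch form (the per-CPU variable name is illustrative; the kernel already records the timebase of the next event):

    /* on re-enabling interrupts: don't test whether the decrementer is
     * currently negative (it may have wrapped back to positive), compare
     * against the recorded next-event time instead */
    u64 next_tb = __get_cpu_var(decrementers_next_tb);

    if (get_tb_or_rtc() >= next_tb)
            set_dec(1);     /* force an immediate decrementer exception */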
2012-01-06  net: bpf_jit: fix an off-by-one bug in x86_64 cond jump target  (Markus Kötter)
[ Upstream commit a03ffcf873fe0f2565386ca8ef832144c42e67fa ] x86 jump instruction size is 2 or 5 bytes (near/long jump), not 2 or 6 bytes. In case a conditional jump is followed by a long jump, conditional jump target is one byte past the start of target instruction. Signed-off-by: Markus Kötter <nepenthesdev@gmail.com> Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
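The encoding facts behind the off-by-one, as a sketch:

    /* x86 jumps: short forms take a rel8, near forms a rel32.
     *   jcc rel8 = 2 bytes      jcc rel32 = 6 bytes (0x0f-prefixed)
     *   jmp rel8 = 2 bytes      jmp rel32 = 5 bytes
     * Assuming 6 bytes for a near jmp (the bug) makes a conditional
     * jump over it land one byte past the next instruction. */
    static int jump_len(int offset, int conditional)
    {
            if (offset >= -128 && offset <= 127)
                    return 2;
            return conditional ? 6 : 5;
    }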
2012-01-06  sparc: Fix handling of orig_i0 wrt. debugging when restarting syscalls.  (David S. Miller)
[ A combination of upstream commits 1d299bc7732c34d85bd43ac1a8745f5a2fed2078 and e88d2468718b0789b4c33da2f7e1cef2a1eee279 ] Although we provide a proper way for a debugger to control whether syscall restart occurs, we run into problems because orig_i0 is not saved and restored properly. Luckily we can solve this problem without having to make debuggers aware of the issue. Across system calls, several registers are considered volatile and can be safely clobbered. Therefore we use the pt_regs save area of one of those registers, %g6, as a place to save and restore orig_i0. Debuggers transparently will do the right thing because they save and restore this register already. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-01-06  sparc64: Fix masking and shifting in VIS fpcmp emulation.  (David S. Miller)
[ Upstream commit 2e8ecdc008a16b9a6c4b9628bb64d0d1c05f9f92 ] Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-01-06  sparc32: Correct the return value of memcpy.  (David S. Miller)
[ Upstream commit a52312b88c8103e965979a79a07f6b34af82ca4b ] Properly return the original destination buffer pointer. Signed-off-by: David S. Miller <davem@davemloft.net> Tested-by: Kjetil Oftedal <oftedal@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
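The contract the assembly must honour, as a trivial C model:

    #include <stddef.h>

    /* memcpy hands back its original first argument; callers such as
     * 'p = memcpy(dst, src, n)' rely on p == dst */
    void *my_memcpy(void *dst, const void *src, size_t n)
    {
            char *d = dst;
            const char *s = src;

            while (n--)
                    *d++ = *s++;
            return dst;     /* the saved original, not the advanced d */
    }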
2012-01-06  sparc32: Remove uses of %g7 in memcpy implementation.  (David S. Miller)
[ Upstream commit 21f74d361dfd6a7d0e47574e315f780d8172084a ] This is setting things up so that we can correct the return value, so that it properly returns the original destination buffer pointer. Signed-off-by: David S. Miller <davem@davemloft.net> Tested-by: Kjetil Oftedal <oftedal@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-01-06  sparc32: Remove non-kernel code from memcpy implementation.  (David S. Miller)
[ Upstream commit 045b7de9ca0cf09f1adc3efa467f668b89238390 ] Signed-off-by: David S. Miller <davem@davemloft.net> Tested-by: Kjetil Oftedal <oftedal@gmail.com> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
2012-01-06  sparc: Kill custom io_remap_pfn_range().  (David S. Miller)
[ Upstream commit 3e37fd3153ac95088a74f5e7c569f7567e9f993a ] To handle the large physical addresses, just make a simple wrapper around remap_pfn_range() like MIPS does. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
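The wrapper has roughly this shape (a sketch following the commit's description; GET_IOSPACE extracts the I/O-space bits sparc encodes in the pfn):

    static inline int io_remap_pfn_range(struct vm_area_struct *vma,
                                         unsigned long from, unsigned long pfn,
                                         unsigned long size, pgprot_t prot)
    {
            unsigned long long offset, space, phys_base;

            offset = ((unsigned long long)pfn) << PAGE_SHIFT;
            space = GET_IOSPACE(pfn);
            phys_base = offset | (space << 32ULL);

            return remap_pfn_range(vma, from, phys_base >> PAGE_SHIFT,
                                   size, prot);
    }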
2012-01-06  sparc64: Patch sun4v code sequences properly on module load.  (David S. Miller)
[ Upstream commit 0b64120cceb86e93cb1bda0dc055f13016646907 ] Some of the sun4v code patching occurs in inline functions visible to, and usable by, modules. Therefore we have to patch them up during module load. Signed-off-by: David S. Miller <davem@davemloft.net> Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>