path: root/arch
Age | Commit message | Author
2014-04-18  arc,hexagon: Delete asm/barrier.h  (Peter Zijlstra)
Both already use asm-generic/barrier.h as per their include/asm/Kbuild. Remove the stale files. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Link: http://lkml.kernel.org/n/tip-c7vlkshl3tblim0o8z2p70kt@git.kernel.org Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Richard Kuo <rkuo@codeaurora.org> Cc: Vineet Gupta <vgupta@synopsys.com> Cc: linux-hexagon@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-18  ia64: Fix up smp_mb__{before,after}_clear_bit()  (Peter Zijlstra)
IA64 doesn't actually have acquire/release barriers, it's a lie! Add a comment explaining this and fix up the bitop barriers. Signed-off-by: Peter Zijlstra <peterz@infradead.org> Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com> Link: http://lkml.kernel.org/n/tip-akevfh136um9dqvb1ohm55ca@git.kernel.org Cc: Akinobu Mita <akinobu.mita@gmail.com> Cc: Fenghua Yu <fenghua.yu@intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Tony Luck <tony.luck@intel.com> Cc: linux-ia64@vger.kernel.org Cc: linux-kernel@vger.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org>
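A hedged sketch of the resulting ia64 definitions (the exact arch/ia64 header may differ in detail):

    /*
     * Sketch only: ia64 bitops do not give the acquire/release ordering
     * the old definitions pretended to rely on, so both helpers fall
     * back to a full memory barrier.
     */
    #define smp_mb__before_clear_bit()  smp_mb()
    #define smp_mb__after_clear_bit()   smp_mb()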
2014-04-17  genirq: Allow forcing cpu affinity of interrupts  (Thomas Gleixner)
The current implementation of irq_set_affinity() rightfully refuses to route an interrupt to an offline cpu. But there is a special case, where this is actually desired. Some of the ARM SoCs have per cpu timers which require setting the affinity during cpu startup where the cpu is not yet in the online mask. If we can't do that, then the local timer interrupt for the about-to-become-online cpu is routed to some random online cpu. The developers of the affected machines tried to work around that issue, but that results in a massive mess in that timer code. We have a yet unused argument in the set_affinity callbacks of the irq chips, which I added back then for a similar reason. It was never required so it never got used. But I'm happy that I never removed it. That allows us to implement a sane handling of the above scenario. So the affected SoC drivers can add the required force handling to their interrupt chip, switch the timer code to irq_force_affinity() and things just work. This does not affect any existing user of irq_set_affinity(). Tagged for stable to allow a simple fix of the affected SoC clock event drivers. Reported-and-tested-by: Krzysztof Kozlowski <k.kozlowski@samsung.com> Signed-off-by: Thomas Gleixner <tglx@linutronix.de> Cc: Kyungmin Park <kyungmin.park@samsung.com> Cc: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Cc: Tomasz Figa <t.figa@samsung.com>, Cc: Daniel Lezcano <daniel.lezcano@linaro.org>, Cc: Kukjin Kim <kgene.kim@samsung.com> Cc: linux-arm-kernel@lists.infradead.org, Cc: stable@vger.kernel.org Link: http://lkml.kernel.org/r/20140416143315.717251504@linutronix.de Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
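A hedged sketch of how an affected per-cpu timer driver would use the new call during CPU bringup; the function name and the way the IRQ number is obtained are placeholders:

    #include <linux/interrupt.h>
    #include <linux/cpumask.h>

    /* Sketch: runs for the starting CPU before it is marked online. */
    static void soc_local_timer_setup(unsigned int cpu, unsigned int irq)
    {
        /*
         * irq_set_affinity() would refuse because the CPU is not yet in
         * cpu_online_mask; the force variant lets the chip route the
         * per-cpu timer interrupt to the CPU that is coming up anyway.
         */
        irq_force_affinity(irq, cpumask_of(cpu));
    }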
2014-04-17  Merge branch 'parisc-3.15' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux Pull parisc updates from Helge Deller: "There are two major changes in this patchset: The major fix is that the epoll_pwait() syscall for 32bit userspace was not using the compat wrapper on a 64bit kernel. Secondly we changed the value of SHMLBA from 4MB to PAGE_SIZE to reflect that we can actually mmap to any multiple of PAGE_SIZE. The only thing which needs care is that shared mmaps need to be mapped at the same offset inside the 4MB cache window" * 'parisc-3.15' of git://git.kernel.org/pub/scm/linux/kernel/git/deller/parisc-linux: parisc: fix epoll_pwait syscall on compat kernel parisc: change value of SHMLBA from 0x00400000 to PAGE_SIZE parisc: Replace __get_cpu_var uses for address calculation
2014-04-17  uprobes/x86: Emulate relative conditional "near" jmp's  (Oleg Nesterov)
Change branch_setup_xol_ops() to simply use opc1 = OPCODE2(insn) - 0x10 if OPCODE1() == 0x0f; this matches the "short" jmp which checks the same condition. Thanks to lib/insn.c, it does the rest correctly. branch->ilen/offs are correct no matter if this jmp is "near" or "short". Reported-by: Jonathan Lebon <jlebon@redhat.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
2014-04-17  uprobes/x86: Emulate relative conditional "short" jmp's  (Oleg Nesterov)
Teach branch_emulate_op() to emulate the conditional "short" jmp's which check regs->flags. Note: this doesn't support jcxz/jecxz, loope/loopz, and loopne/loopnz. They all are rel8 and thus they can't trigger the problem, but perhaps we will add the support in the future just for completeness. Reported-by: Jonathan Lebon <jlebon@redhat.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
2014-04-17  uprobes/x86: Emulate relative call's  (Oleg Nesterov)
See the previous "Emulate unconditional relative jmp's" which explains why we can not execute "jmp" out-of-line; the same applies to "call". Emulating a rip-relative call is trivial, we only need to additionally push the ret-address. If this fails, we execute this instruction out of line and this should trigger the trap; the probed application should die or the same insn will be restarted if a signal handler expands the stack. We do not even need ->post_xol() for this case. But there is a corner (and almost theoretical) case: another thread can expand the stack right before we execute this insn out of line. In this case it hits the same problem we are trying to solve. So we simply turn the probed insn into "call 1f; 1:" and add ->post_xol() which restores ->sp and restarts. Many thanks to Jonathan who finally found the standalone reproducer, otherwise I would never have resolved the "random SIGSEGV's under systemtap" bug-report. Now that the problem is clear we can write the simplified test-case: void probe_func(void), callee(void); int failed = 1; asm ( ".text\n" ".align 4096\n" ".globl probe_func\n" "probe_func:\n" "call callee\n" "ret" ); /* * This assumes that: * * - &probe_func = 0x401000 + a_bit, aligned = 0x402000 * * - xol_vma->vm_start = TASK_SIZE_MAX - PAGE_SIZE = 0x7fffffffe000 * as xol_add_vma() asks; the 1st slot = 0x7fffffffe080 * * so we can target the non-canonical address from xol_vma using * the simple math below, 100 * 4096 is just the random offset */ asm (".org . + 0x800000000000 - 0x7fffffffe080 - 5 - 1 + 100 * 4096\n"); void callee(void) { failed = 0; } int main(void) { probe_func(); return failed; } It SIGSEGV's if you probe "probe_func" (although this is not very reliable, randomize_va_space/etc can change the placement of the xol area). Note: as Denys Vlasenko pointed out, AMD and Intel treat "callw" (0x66 0xe8) differently. This patch relies on lib/insn.c and thus implements Intel's behaviour: 0x66 is simply ignored. Fortunately nothing sane should ever use this insn, so we postpone the fix until we decide what we should do: emulate or not, support or not, etc. Reported-by: Jonathan Lebon <jlebon@redhat.com> Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
2014-04-17  uprobes/x86: Emulate nop's using ops->emulate()  (Oleg Nesterov)
Finally we can kill the ugly (and very limited) code in __skip_sstep(). Just change branch_setup_xol_ops() to treat "nop" as jmp to the next insn. Thanks to lib/insn.c, it is clever enough. OPCODE1() == 0x90 includes "(rep;)+ nop;" at least, and (afaics) much more. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
2014-04-17  uprobes/x86: Emulate unconditional relative jmp's  (Oleg Nesterov)
Currently we always execute all insns out-of-line, including relative jmp's and call's. This assumes that even if regs->ip points to nowhere after the single-step, default_post_xol_op(UPROBE_FIX_IP) logic will update it correctly. However, this doesn't work if this regs->ip == xol_vaddr + insn_offset is not canonical. In this case CPU generates #GP and general_protection() kills the task which tries to execute this insn out-of-line. Now that we have uprobe_xol_ops we can teach uprobes to emulate these insns and solve the problem. This patch adds branch_xol_ops which has a single branch_emulate_op() hook, so far it can only handle rel8/32 relative jmp's. TODO: move ->fixup into the union along with rip_rela_target_address. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reported-by: Jonathan Lebon <jlebon@redhat.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
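The emulation itself is just arithmetic on regs->ip; a hedged sketch using the branch->ilen/offs fields mentioned in the follow-up commits (the real code may differ):

    /* Sketch: emulate a rel8/rel32 jmp instead of single-stepping it out of line. */
    static bool branch_emulate_op(struct arch_uprobe *auprobe, struct pt_regs *regs)
    {
        /* next-instruction address plus the sign-extended displacement */
        regs->ip += auprobe->branch.ilen + auprobe->branch.offs;
        return true;
    }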
2014-04-17  uprobes/x86: Introduce sizeof_long(), cleanup adjust_ret_addr() and ↵  (Oleg Nesterov)
arch_uretprobe_hijack_return_addr() 1. Add the trivial sizeof_long() helper and change other callers of is_ia32_task() to use it. TODO: is_ia32_task() is not what we actually want, TS_COMPAT does not necessarily mean 32bit. Fortunately syscall-like insns can't be probed so it actually works, but it would be better to rename and use is_ia32_frame(). 2. As Jim pointed out "ncopied" in arch_uretprobe_hijack_return_addr() and adjust_ret_addr() should be named "nleft". And in fact only the last copy_to_user() in arch_uretprobe_hijack_return_addr() actually needs to inspect the non-zero error code. TODO: adjust_ret_addr() should die. We can always calculate the value we need to write into *regs->sp, just UPROBE_FIX_CALL should record insn->length. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
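The helper itself is trivial; a hedged sketch of what it most likely looks like (see the TODO above about is_ia32_task() vs is_ia32_frame()):

    /* Sketch: width of a "long" in the probed task, not in the kernel. */
    static inline int sizeof_long(void)
    {
        return is_ia32_task() ? 4 : 8;
    }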
2014-04-17  uprobes/x86: Teach arch_uprobe_post_xol() to restart if possible  (Oleg Nesterov)
SIGILL after the failed arch_uprobe_post_xol() should only be used as a last resort; we should try to restart the probed insn if possible. Currently only adjust_ret_addr() can fail, and this can only happen if another thread unmapped our stack after we executed "call" out-of-line. Most probably the application is buggy, but even in this case it can have a handler for SIGSEGV/etc. And in theory it can be even correct and do something non-trivial with its memory. Of course we can't restart unconditionally, so arch_uprobe_post_xol() does this only if ->post_xol() returns -ERESTART even if currently this is the only possible error. default_post_xol_op(UPROBE_FIX_CALL) can always restart, but as Jim pointed out it should not forget to pop off the return address pushed by this insn executed out-of-line. Note: this is not "perfect", we do not want the extra handler_chain() after restart, but I think this is the best solution we can realistically do without too much uglifications. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
2014-04-17  uprobes/x86: Send SIGILL if arch_uprobe_post_xol() fails  (Oleg Nesterov)
Currently the error from arch_uprobe_post_xol() is silently ignored. This doesn't look good and this can lead to hard-to-debug problems. 1. Change handle_singlestep() to loudly complain and send SIGILL. Note: this only affects x86, ppc/arm can't fail. 2. Change arch_uprobe_post_xol() to call arch_uprobe_abort_xol() and avoid TF games if it is going to return an error. This can help to analyze the problem, if nothing else we should not report ->ip = xol_slot in the core-file. Note: this means that handle_riprel_post_xol() can be called twice, but this is fine because it is idempotent. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
2014-04-17  uprobes/x86: Conditionalize the usage of handle_riprel_insn()  (Oleg Nesterov)
arch_uprobe_analyze_insn() calls handle_riprel_insn() at the start, but only "0xff" and "default" cases need the UPROBE_FIX_RIP_ logic. Move the callsite into "default" case and change the "0xff" case to fall-through. We are going to add the various hooks to handle the rip-relative jmp/call instructions (and more), we need this change to enforce the fact that the new code can not conflict with is_riprel_insn() logic which, after this change, can only be used by default_xol_ops. Note: arch_uprobe_abort_xol() still calls handle_riprel_post_xol() directly. This is fine unless another _xol_ops we may add later will need to reuse "UPROBE_FIX_RIP_AX|UPROBE_FIX_RIP_CX" bits in ->fixup. In this case we can add uprobe_xol_ops->abort() hook, which (perhaps) we will need anyway in the long term. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com> Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
2014-04-17  uprobes/x86: Introduce uprobe_xol_ops and arch_uprobe->ops  (Oleg Nesterov)
Introduce arch_uprobe->ops pointing to the "struct uprobe_xol_ops", move the current UPROBE_FIX_{RIP*,IP,CALL} code into the default set of methods and change arch_uprobe_pre/post_xol() accordingly. This way we can add the new uprobe_xol_ops's to handle the insns which need the special processing (rip-relative jmp/call at least). Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com> Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
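A hedged sketch of the ops structure this introduces; the hook names follow the commit description, the real declaration may differ:

    /* Sketch: per-instruction-class hooks, selected at analyze time. */
    struct uprobe_xol_ops {
        bool (*emulate)(struct arch_uprobe *, struct pt_regs *);
        int  (*pre_xol)(struct arch_uprobe *, struct pt_regs *);
        int  (*post_xol)(struct arch_uprobe *, struct pt_regs *);
    };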
2014-04-17  uprobes/x86: move the UPROBE_FIX_{RIP,IP,CALL} code at the end of pre/post hooks  (Oleg Nesterov)
No functional changes. Preparation to simplify the review of the next change. Just reorder the code in arch_uprobe_pre/post_xol() functions so that UPROBE_FIX_{RIP_*,IP,CALL} logic goes to the end. Also change arch_uprobe_pre_xol() to use utask instead of autask, to make the code more symmetrical with arch_uprobe_post_xol(). Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com> Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
2014-04-17  uprobes/x86: Gather "riprel" functions together  (Oleg Nesterov)
Cosmetic. Move pre_xol_rip_insn() and handle_riprel_post_xol() up to the closely related handle_riprel_insn(). This way it is simpler to read and understand this code, and this lessens the number of ifdef's. While at it, update the comment in handle_riprel_post_xol() as Jim suggested. TODO: rename them somehow to make the naming consistent. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com>
2014-04-17  uprobes/x86: Kill the "ia32_compat" check in handle_riprel_insn(), remove ↵  (Oleg Nesterov)
"mm" arg Kill the "mm->context.ia32_compat" check in handle_riprel_insn(), if it is true insn_rip_relative() must return false. validate_insn_bits() passed "ia32_compat" as !x86_64 to insn_init(), and insn_rip_relative() checks insn->x86_64. Also, remove the no longer needed "struct mm_struct *mm" argument and the unnecessary "return" at the end. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com> Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
2014-04-17  uprobes/x86: Fold prepare_fixups() into arch_uprobe_analyze_insn()  (Oleg Nesterov)
No functional changes, preparation. Shift the code from prepare_fixups() to arch_uprobe_analyze_insn() with the following modifications: - Do not call insn_get_opcode() again, it was already called by validate_insn_bits(). - Move "case 0xea" up. This way "case 0xff" can fall through to default case. - change "case 0xff" to use the nested "switch (MODRM_REG)", this way the code looks a bit simpler. - Make the comments look consistent. While at it, kill the initialization of rip_rela_target_address and ->fixups, we can rely on kzalloc(). We will add the new members into arch_uprobe, it would be better to assume that everything is zero by default. TODO: cleanup/fix the mess in validate_insn_bits() paths: - validate_insn_64bits() and validate_insn_32bits() should be unified. - "ifdef" is not used consistently; if good_insns_64 depends on CONFIG_X86_64, then probably good_insns_32 should depend on CONFIG_X86_32/EMULATION - the usage of mm->context.ia32_compat looks wrong if the task is TIF_X32. Signed-off-by: Oleg Nesterov <oleg@redhat.com> Reviewed-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Reviewed-by: Jim Keniston <jkenisto@us.ibm.com> Acked-by: Srikar Dronamraju <srikar@linux.vnet.ibm.com>
2014-04-17  Merge tag 'stable/for-linus-3.15-rc1-tag' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip Pull Xen fixes from David Vrabel: "Xen regression and bug fixes for 3.15-rc1: - fix completely broken 32-bit PV guests caused by x86 refactoring 32-bit thread_info. - only enable ticketlock slow path on Xen (not bare metal) - fix two bugs with PV guests not shutting down when requested - fix a minor memory leak in xen-pciback error path" * tag 'stable/for-linus-3.15-rc1-tag' of git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip: xen/manage: Poweroff forcefully if user-space is not yet up. xen/xenbus: Avoid synchronous wait on XenBus stalling shutdown/restart. xen/spinlock: Don't enable them unconditionally. xen-pciback: silence an unwanted debug printk xen: fix memory leak in __xen_pcibk_add_pci_dev() x86/xen: Fix 32-bit PV guests's usage of kernel_stack
2014-04-17  KVM: VMX: speed up wildcard MMIO EVENTFD  (Michael S. Tsirkin)
With KVM, MMIO is much slower than PIO, due to the need to do a page walk and emulation. But with EPT, it does not have to be: we know the address from the VMCS so if the address is unique, we can look up the eventfd directly, bypassing emulation. Unfortunately, this only works if userspace does not need to match on access length and data. The implementation adds a separate FAST_MMIO bus internally. This serves two purposes: - minimize overhead for old userspace that does not use eventfd with length = 0 - minimize disruption in other code (since we don't know the length, devices on the MMIO bus only get a valid address in write, this way we don't need to touch all devices to teach them to handle an invalid length) At the moment, this optimization only has effect for EPT on x86. It will be possible to speed up MMIO for NPT and MMU using the same idea in the future. With this patch applied, on VMX MMIO EVENTFD is essentially as fast as PIO. I was unable to detect any measurable slowdown to non-eventfd MMIO. Making MMIO faster is important for the upcoming virtio 1.0 which includes an MMIO signalling capability. The idea was suggested by Peter Anvin. Lots of thanks to Gleb for pre-review and suggestions. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2014-04-17  KVM: support any-length wildcard ioeventfd  (Michael S. Tsirkin)
It is sometimes beneficial to ignore IO size, and only match on address. In hindsight this would have been a better default than matching length when KVM_IOEVENTFD_FLAG_DATAMATCH is not set. In particular, this kind of access can be optimized on VMX: there is no need to do page lookups. This can currently be done with many ioeventfds but in a suboptimal way. However we can't change the kernel/userspace ABI without risk of breaking some applications. Use len = 0 to mean "ignore length for matching" in a more optimal way. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
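From userspace this is just the existing KVM_IOEVENTFD ioctl with len = 0 and the DATAMATCH flag left clear; a hedged sketch (helper name and doorbell address are illustrative, error handling trimmed):

    #include <string.h>
    #include <sys/eventfd.h>
    #include <sys/ioctl.h>
    #include <linux/kvm.h>

    /* Sketch: fire an eventfd on any write to the doorbell address,
     * regardless of access size or of the data written. */
    static int add_wildcard_ioeventfd(int vm_fd, __u64 doorbell_gpa)
    {
        int efd = eventfd(0, EFD_NONBLOCK);
        struct kvm_ioeventfd args;

        memset(&args, 0, sizeof(args));
        args.addr  = doorbell_gpa;
        args.len   = 0;   /* 0 == ignore length when matching (needs this patch) */
        args.fd    = efd;
        args.flags = 0;   /* no KVM_IOEVENTFD_FLAG_DATAMATCH */

        if (ioctl(vm_fd, KVM_IOEVENTFD, &args) < 0)
            return -1;
        return efd;
    }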
2014-04-17  x86/efi: Save and restore FPU context around efi_calls (i386)  (Ricardo Neri)
Do a complete FPU context save/restore around the EFI calls. This is required as runtime EFI firmware may potentially use the FPU. This change covers only the i386 configuration. Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Cc: Borislav Petkov <bp@suse.de> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-04-17  x86/efi: Save and restore FPU context around efi_calls (x86_64)  (Ricardo Neri)
Do a complete FPU context save/restore around the EFI calls. This is required as runtime EFI firmware may potentially use the FPU. This change covers only the x86_64 configuration. Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Cc: Borislav Petkov <bp@suse.de> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
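A hedged sketch of the idea behind both patches; the real change lives inside the efi_call*/efi_call_virt macros rather than in a per-service wrapper like this one, and the FPU API header differs between kernel versions (asm/i387.h around v3.15, asm/fpu/api.h today):

    #include <linux/efi.h>

    /* Sketch: the firmware may clobber FPU state, so bracket every EFI
     * runtime service call with a full FPU context save/restore. */
    static efi_status_t efi_get_time_fpu_safe(efi_time_t *tm, efi_time_cap_t *tc)
    {
        efi_status_t status;

        kernel_fpu_begin();
        status = efi.get_time(tm, tc);
        kernel_fpu_end();

        return status;
    }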
2014-04-17  x86/efi: Implement a __efi_call_virt macro  (Ricardo Neri)
For i386, all the EFI system runtime services functions return efi_status_t except efi_reset_system. Therefore, not all functions can be covered by the same macro in case the macro needs to do more than call the function (i.e., return a value). The purpose of the __efi_call_virt macro is to be used when no return value is expected. For x86_64, this macro would not be needed as all the runtime services return u64. However, the same code is used for both x86_64 and i386. Thus, the macro __efi_call_virt is also defined to not break compilation. Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Cc: Borislav Petkov <bp@suse.de> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-04-17  x86, fpu: Extend the use of static_cpu_has_safe  (Matt Fleming)
It may be necessary to save and restore the FPU context during EFI runtime system services calls. However, this may happen during boot and before alternatives have run. Thus, we need to use static_cpu_has_safe instead. The rationale behind the use of static_cpu_has_safe is the same as in commit 5f8c4218148822fde6ee ("x86, fpu: Use static_cpu_has_safe before alternatives") by Borislav Petkov. Signed-off-by: Matt Fleming <matt.fleming@intel.com> Signed-off-by: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Cc: Borislav Petkov <bp@suse.de>
2014-04-17  x86/efi: Delete most of the efi_call* macros  (Matt Fleming)
We really only need one phys and one virt function call, and then only one assembly function to make firmware calls. Since we are not using the C type system anyway, we're not really losing much by deleting the macros apart from no longer having a check that we are passing the correct number of parameters. The lack of duplicated code seems like a worthwhile trade-off. Cc: Ricardo Neri <ricardo.neri-calderon@linux.intel.com> Cc: Borislav Petkov <bp@suse.de> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
2014-04-17  efi: x86: Handle arbitrary Unicode characters  (H. Peter Anvin)
Instead of truncating UTF-16 assuming all characters are ASCII, properly convert it to UTF-8. Signed-off-by: H. Peter Anvin <hpa@linux.intel.com> [ Bug and style fixes. ] Signed-off-by: Roy Franz <roy.franz@linaro.org> Signed-off-by: Leif Lindholm <leif.lindholm@linaro.org> Signed-off-by: Matt Fleming <matt.fleming@intel.com>
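A hedged, self-contained sketch of the conversion being described, including surrogate pairs (the in-kernel helper has a different name and signature):

    #include <stddef.h>
    #include <stdint.h>

    /* Sketch: convert 'n' UTF-16 code units to UTF-8, returning the number
     * of bytes written; unpaired surrogates are passed through unchecked. */
    static size_t utf16_to_utf8(const uint16_t *in, size_t n, uint8_t *out)
    {
        size_t o = 0;

        for (size_t i = 0; i < n; i++) {
            uint32_t c = in[i];

            /* Combine a high/low surrogate pair into one code point. */
            if (c >= 0xd800 && c <= 0xdbff && i + 1 < n &&
                in[i + 1] >= 0xdc00 && in[i + 1] <= 0xdfff)
                c = 0x10000 + ((c - 0xd800) << 10) + (in[++i] - 0xdc00);

            if (c < 0x80) {
                out[o++] = c;
            } else if (c < 0x800) {
                out[o++] = 0xc0 | (c >> 6);
                out[o++] = 0x80 | (c & 0x3f);
            } else if (c < 0x10000) {
                out[o++] = 0xe0 | (c >> 12);
                out[o++] = 0x80 | ((c >> 6) & 0x3f);
                out[o++] = 0x80 | (c & 0x3f);
            } else {
                out[o++] = 0xf0 | (c >> 18);
                out[o++] = 0x80 | ((c >> 12) & 0x3f);
                out[o++] = 0x80 | ((c >> 6) & 0x3f);
                out[o++] = 0x80 | (c & 0x3f);
            }
        }
        return o;
    }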
2014-04-17  kprobes/x86: Fix page-fault handling logic  (Masami Hiramatsu)
The current kprobes in-kernel page fault handler doesn't expect that its single-stepping can be interrupted by an NMI handler which may itself cause a page fault (e.g. perf with callback tracing). In that case, the page fault is handled by kprobes, which misunderstands it as having been caused by the single-stepping code and tries to recover the IP address to the probed address. But the truth is that the page fault has been caused by the NMI handler, and do_page_fault fails to handle the real page fault because the IP address has been modified, causing Kernel BUGs like below. ---- [ 2264.726905] BUG: unable to handle kernel NULL pointer dereference at 0000000000000020 [ 2264.727190] IP: [<ffffffff813c46e0>] copy_user_generic_string+0x0/0x40 To handle this correctly, I fixed the kprobes fault handler to ensure the faulting IP address is within its own single-step buffer instead of checking the current kprobe state. Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com> Cc: Andi Kleen <andi@firstfloor.org> Cc: Ananth N Mavinakayanahalli <ananth@in.ibm.com> Cc: Sandeepa Prabhu <sandeepa.prabhu@linaro.org> Cc: Frederic Weisbecker <fweisbec@gmail.com> Cc: Steven Rostedt <rostedt@goodmis.org> Cc: fche@redhat.com Cc: systemtap@sourceware.org Link: http://lkml.kernel.org/r/20140417081644.26341.52351.stgit@ltc230.yrl.intra.hitachi.co.jp Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-17  x86/mce: Fix CMCI preemption bugs  (Ingo Molnar)
The following commit: 27f6c573e0f7 ("x86, CMCI: Add proper detection of end of CMCI storms") Added two preemption bugs: - machine_check_poll() does a get_cpu_var() without a matching put_cpu_var(), which causes preemption imbalance and crashes upon bootup. - it does percpu ops without disabling preemption. Preemption is not disabled due to the mistaken use of a raw spinlock. To fix these bugs fix the imbalance and change cmci_discover_lock to a regular spinlock. Reported-by: Owen Kibel <qmewlo@gmail.com> Reported-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Ingo Molnar <mingo@kernel.org> Cc: Chen, Gong <gong.chen@linux.intel.com> Cc: Josh Boyer <jwboyer@fedoraproject.org> Cc: Tony Luck <tony.luck@intel.com> Cc: Peter Zijlstra <peterz@infradead.org> Cc: Alexander Todorov <atodorov@redhat.com> Cc: Borislav Petkov <bp@alien8.de> Link: http://lkml.kernel.org/n/tip-jtjptvgigpfkpvtQxpEk1at2@git.kernel.org Signed-off-by: Ingo Molnar <mingo@kernel.org> -- arch/x86/kernel/cpu/mcheck/mce.c | 4 +--- arch/x86/kernel/cpu/mcheck/mce_intel.c | 18 +++++++++--------- 2 files changed, 10 insertions(+), 12 deletions(-)
2014-04-17  ARM: orion5x: fix target ID for crypto SRAM window  (Thomas Petazzoni)
In commit 4ca2c04085a1caa903e92a5fc0da25362150aac2 ('ARM: orion5x: Move to ID based window creation'), the mach-orion5x code was changed to use the new mvebu-mbus API. However, in the process, a mistake was made on the crypto SRAM window target ID: it should have been 0x9 (verified in the datasheet) and not 0x0. Signed-off-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Sebastian Hesselbarth <sebastian.hesselbarth@gmail.com> Link: https://lkml.kernel.org/r/1397400006-4315-2-git-send-email-thomas.petazzoni@free-electrons.com Fixes: 4ca2c04085a1 ('ARM: orion5x: Move to ID based window creation') Cc: stable@vger.kernel.org # v3.12+ Signed-off-by: Jason Cooper <jason@lakedaemon.net>
2014-04-16  Merge branch 'x86-urgent-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull x86 fixes from Ingo Molnar: "Various fixes: - reboot regression fix - build message spam fix - GPU quirk fix - 'make kvmconfig' fix plus the wire-up of the renameat2() system call on i386" * 'x86-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: x86: Remove the PCI reboot method from the default chain x86/build: Suppress "Nothing to be done for ..." messages x86/gpu: Fix sign extension issue in Intel graphics stolen memory quirks x86/platform: Fix "make O=dir kvmconfig" i386: Wire up the renameat2() syscall
2014-04-16  Merge branch 'perf-urgent-for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip Pull perf fixes from Ingo Molnar: "Tooling fixes, plus a simple hardware-enablement patch for the Intel RAPL PMU (energy use measurement) on Haswell CPUs, which I hope is still fine at this stage" * 'perf-urgent-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip: perf tools: Instead of redirecting flex output, use -o perf tools: Fix double free in perf test 21 (code-reading.c) perf stat: Initialize statistics correctly perf bench: Set more defaults in the 'numa' suite perf bench: Fix segfault at the end of an 'all' execution perf bench: Update manpage to mention numa and futex perf probe: Use dwarf_getcfi_elf() instead of dwarf_getcfi() perf probe: Fix to handle errors in line_range searching perf probe: Fix --line option behavior perf tools: Pick up libdw without explicit LIBDW_DIR MAINTAINERS: Change e-mail to kernel.org one perf callchains: Disable unwind libraries when libelf isn't found tools lib traceevent: Do not call warning() directly tools lib traceevent: Print event name when show warning if possible perf top: Fix documentation of invalid -s option perf/x86: Enable DRAM RAPL support on Intel Haswell
2014-04-16  ARM: tegra: fix Venice2 SD card VQMMC supply  (Andrew Bresticker)
VDDIO_SDMMC3 is the VQMMC (I/O) supply, not the VMMC (core) supply, for the SD slot on Venice2. Signed-off-by: Andrew Bresticker <abrestic@chromium.org> Signed-off-by: Stephen Warren <swarren@nvidia.com>
2014-04-16  ARM: tegra: make Venice's +3.3V_RUN regulator always on  (Stephen Warren)
This regulator supplies power to pretty much everything on the board, so it doesn't make sense to allow it to turn off. Mark it boot-on and always-on so it doesn't get turned off. Without this, I see issues with the eMMC device; it can't be correctly detected during boot. Signed-off-by: Stephen Warren <swarren@nvidia.com>
2014-04-16  ARM: tegra: fix Jetson TK1 SD card supply  (Stephen Warren)
Regulator vddio_sdmmc3 provides the Tegra<->SD IO voltage, not the card core supply voltage. That is, it provides vqmmc, not vmmc. Fix the DT to correctly reflect this. Reported-by: Andrew Bresticker <abrestic@chromium.org> Signed-off-by: Stephen Warren <swarren@nvidia.com>
2014-04-16  Merge tag 'pinctrl-v3.15-2' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl Pull pin control fixes from Linus Walleij: "A first set of pin control fixes for the v3.15 series: - Fix a couple of teething problems in the Rockchip driver. - Remove an idiotic debug print I happened to leave behind in the Nomadik driver. - Fixup the Qualcomm MSM interrupt handling code for the TLMM v2. - Three patches renaming the Broadcom Capri driver to BCM28155. This has been falling between the chairs for some time due to some cross-tree synchronization misunderstandings, now I'm fed up with this and just rename it in this -rc1 phase" * tag 'pinctrl-v3.15-2' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl: pinctrl: fix typo in bindings documentation Update bcm_defconfig with new pinctrl CONFIG pinctrl: Rename Broadcom Capri pinctrl driver pinctrl: msm: Correct interrupt code for TLMM v2 pinctrl: nomadik: delete stray debug print pinctrl: rockchip: handle first half of rk3188-bank0 correctly pinctrl: rockchip: add return value to rockchip_set_mux pinctrl: rockchip: fix offset of mux registers for rk3188
2014-04-16  KVM: x86: Fix page-tables reserved bits  (Nadav Amit)
KVM does not handle the reserved bits of x86 page tables correctly: In PAE, bits 5:8 are reserved in the PDPTE. In IA-32e, bit 8 is not reserved. Signed-off-by: Nadav Amit <namit@cs.technion.ac.il> Signed-off-by: Marcelo Tosatti <mtosatti@redhat.com>
2014-04-16  Merge branch 'for-linus' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux Pull s390 patches from Martin Schwidefsky: "An update to the oops output with additional information about the crash. The renameat2 system call is enabled. Two patches in regard to the PTR_ERR_OR_ZERO cleanup. And a bunch of bug fixes" * 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/s390/linux: s390/sclp_cmd: replace PTR_RET with PTR_ERR_OR_ZERO s390/sclp: replace PTR_RET with PTR_ERR_OR_ZERO s390/sclp_vt220: Fix kernel panic due to early terminal input s390/compat: fix typo s390/uaccess: fix possible register corruption in strnlen_user_srst() s390: add 31 bit warning message s390: wire up sys_renameat2 s390: show_registers() should not map user space addresses to kernel symbols s390/mm: print control registers and page table walk on crash s390/smp: fix smp_stop_cpu() for !CONFIG_SMP s390: fix control register update
2014-04-16  Merge tag 'please-pull-ia64-erratum' of ↵  (Linus Torvalds)
git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux Pull itanium erratum fix from Tony Luck: "Small workaround for a rare, but annoying, erratum #237" * tag 'please-pull-ia64-erratum' of git://git.kernel.org/pub/scm/linux/kernel/git/aegl/linux: [IA64] Change default PSR.ac from '1' to '0' (Fix erratum #237)
2014-04-16  [IA64] Change default PSR.ac from '1' to '0' (Fix erratum #237)  (Tony Luck)
April 2014 Itanium processor specification update: http://www.intel.com/content/www/us/en/processors/itanium/itanium-specification-update.html describes this erratum: ========================================================================= 237. Under a complex set of conditions, store to load forwarding for a sub 8-byte load may complete incorrectly Problem: A load instruction may complete incorrectly when a code sequence using 4-byte or smaller load and store operations to the same address is executed in combination with specific timing of all the following concurrent conditions: store to load forwarding, alignment checking enabled, a mis-predicted branch, and complex cache utilization activity. Implication: The affected sub 8-byte instruction may complete incorrectly resulting in unpredictable system behavior. There is an extremely low probability of exposure due to the significant number of complex microarchitectural concurrent conditions required to encounter the erratum. Workaround: Set PSR.ac = 0 to completely avoid the erratum. Disabling Hyper-Threading will significantly reduce exposure to the conditions that contribute to encountering the erratum. Status: See the Summary Table of Changes for the affected steppings. ========================================================================= [Table of changes essentially lists all models from McKinley to Tukwila] The PSR.ac bit controls whether the processor will always generate an unaligned reference trap (0x5a00) for a misaligned data access (when PSR.ac=1) or if it will let the access succeed when running on a cpu that implements logic to handle some unaligned accesses. Way back in 2008 in commit b704882e70d87d7f56db5ff17e2253f3fa90e4f3 [IA64] Rationalize kernel mode alignment checking we made the decision to always enable strict checking. We were already doing so in trap/interrupt context because the common preamble code set this bit - but the rest of supervisor code (and by inheritance user code) ran with PSR.ac=0. We now reverse that decision and set PSR.ac=0 everywhere in the kernel (also inherited by user processes). This will avoid the erratum using the method described in the Itanium specification update. Net effect for users is that the processor will handle unaligned access when it can (typically with a tiny performance bubble in the pipeline ... but much less invasive than taking a trap and having the OS perform the access). Signed-off-by: Tony Luck <tony.luck@intel.com>
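For illustration only, the kind of access PSR.ac governs; a hedged userspace example (the deliberately misaligned cast is technically undefined behaviour in C and is shown only to make the trade-off concrete):

    #include <stdint.h>

    /*
     * Sketch: a 4-byte load from an address that is not 4-byte aligned.
     * With PSR.ac = 1 the CPU always raises the unaligned-reference trap
     * (0x5a00) and the OS has to emulate the access; with PSR.ac = 0 the
     * processor handles it in hardware when it can, typically costing
     * only a small pipeline bubble.
     */
    static uint32_t misaligned_load(const unsigned char *buf)
    {
        return *(const uint32_t *)(buf + 1);  /* deliberately misaligned */
    }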
2014-04-16  x86/build: Suppress realmode.bin is up to date message  (Peter Foley)
Suppress this unnecessary message during kernel re-build: make[3]: 'arch/x86/realmode/rm/realmode.bin' is up to date. Signed-off-by: Peter Foley <pefoley2@pefoley.com> Link: http://lkml.kernel.org/r/1397584693-15902-1-git-send-email-pefoley2@pefoley.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-16  Merge commit 'b13b1d2d8692' into x86/mm  (Ingo Molnar)
It got into x86/urgent but isn't really urgent material. Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-16  crypto: bfin_crc - access crc registers by readl and writel functions  (Sonic Zhang)
Move the architecture-independent CRC header file out of the blackfin folder. Signed-off-by: Sonic Zhang <sonic.zhang@analog.com> Reviewed-by: Marek Vasut <marex@denx.de> Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
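A hedged sketch of the access pattern the driver moves to; the register offsets and helper names are placeholders, not the real bfin_crc layout:

    #include <linux/io.h>

    #define CRC_CONTROL 0x00  /* illustrative offsets only */
    #define CRC_RESULT  0x34

    /* readl()/writel() instead of dereferencing a register struct, so the
     * driver no longer depends on Blackfin-only MMIO accessors. */
    static void crc_start(void __iomem *base, u32 ctrl)
    {
        writel(ctrl, base + CRC_CONTROL);
    }

    static u32 crc_read_result(void __iomem *base)
    {
        return readl(base + CRC_RESULT);
    }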
2014-04-16  x86/irq: Fix fixup_irqs() error handling  (Prarit Bhargava)
Several patches to fix cpu hotplug and the down'd cpu's irq relocations have been submitted in the past month or so. The patches should resolve the problems with cpu hotplug and irq relocation, however, there is always a possibility that a bug still exists. The big problem with debugging these irq reassignments is that the cpu down completes and then we get random stack traces from drivers for which irqs have not been properly assigned to a new cpu. The stack traces are a mix of storage, network, and other kernel subsystem (I once saw the serial port stop working ...) warnings and failures. The problem with these failures is that they are difficult to diagnose. There is no warning in the cpu hotplug down path to indicate that an IRQ has failed to be assigned to a new cpu, and all we are left with is a stack trace from a driver, or a non-functional device. If we had some information on the console, debugging these situations would be much easier; after all we can map an IRQ to a device by simply using lspci or /proc/interrupts. The current code, fixup_irqs(), which migrates IRQs from the down'd cpu and is called close to the end of the cpu down path, calls chip->set_irq_affinity which eventually calls __assign_irq_vector(). Errors are not propagated back from this function call and this results in silent irq relocation failures. This patch fixes this issue by returning the error codes up the call stack and printing out a warning if there is a relocation failure. Signed-off-by: Prarit Bhargava <prarit@redhat.com> Acked-by: Thomas Gleixner <tglx@linutronix.de> Cc: Rui Wang <rui.y.wang@intel.com> Cc: Liu Ping Fan <kernelfans@gmail.com> Cc: Bjorn Helgaas <bhelgaas@google.com> Cc: Yoshihiro YUNOMAE <yoshihiro.yunomae.ez@hitachi.com> Cc: Lv Zheng <lv.zheng@intel.com> Cc: Seiji Aguchi <seiji.aguchi@hds.com> Cc: Yang Zhang <yang.z.zhang@intel.com> Cc: Andi Kleen <ak@linux.intel.com> Cc: Steven Rostedt (Red Hat) <rostedt@goodmis.org> Cc: Li Fei <fei.li@intel.com> Cc: gong.chen@linux.intel.com Link: http://lkml.kernel.org/r/1396440673-18286-1-git-send-email-prarit@redhat.com [ Made small cleanliness tweaks. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-16  Merge branch 'x86/apic' into x86/irq, to consolidate branches.  (Ingo Molnar)
Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-16  x86/mm: In the PTE swapout page reclaim case clear the accessed bit instead ↵  (Shaohua Li)
of flushing the TLB We use the accessed bit to age a page at page reclaim time, and currently we also flush the TLB when doing so. But in some workloads TLB flush overhead is very heavy. In my simple multithreaded app with a lot of swap to several pcie SSDs, removing the tlb flush gives about 20% ~ 30% swapout speedup. Fortunately just removing the TLB flush is a valid optimization: on x86 CPUs, clearing the accessed bit without a TLB flush doesn't cause data corruption. It could cause incorrect page aging and the (mistaken) reclaim of hot pages, but the chance of that should be relatively low. So as a performance optimization don't flush the TLB when clearing the accessed bit, it will eventually be flushed by a context switch or a VM operation anyway. [ In the rare event of it not getting flushed for a long time the delay shouldn't really matter because there's no real memory pressure for swapout to react to. ] Suggested-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Shaohua Li <shli@fusionio.com> Acked-by: Rik van Riel <riel@redhat.com> Acked-by: Mel Gorman <mgorman@suse.de> Acked-by: Hugh Dickins <hughd@google.com> Acked-by: Johannes Weiner <hannes@cmpxchg.org> Cc: linux-mm@kvack.org Cc: Peter Zijlstra <a.p.zijlstra@chello.nl> Link: http://lkml.kernel.org/r/20140408075809.GA1764@kernel.org [ Rewrote the changelog and the code comments. ] Signed-off-by: Ingo Molnar <mingo@kernel.org>
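A hedged sketch of the x86 side of the change (the real function carries a longer comment justifying why skipping the flush is safe):

    /* Sketch: page aging only needs to clear the accessed bit; on x86 a
     * stale TLB entry with the A bit still set cannot corrupt data, so
     * skip the flush and let a later context switch or shootdown do it. */
    int ptep_clear_flush_young(struct vm_area_struct *vma,
                               unsigned long address, pte_t *ptep)
    {
        return ptep_test_and_clear_young(vma, address, ptep);
    }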
2014-04-16  x86: Remove the PCI reboot method from the default chain  (Ingo Molnar)
Steve reported a reboot hang and bisected it back to this commit: a4f1987e4c54 x86, reboot: Add EFI and CF9 reboot methods into the default list He heroically tested all reboot methods and found the following: reboot=t # triple fault ok reboot=k # keyboard ctrl FAIL reboot=b # BIOS ok reboot=a # ACPI FAIL reboot=e # EFI FAIL [system has no EFI] reboot=p # PCI 0xcf9 FAIL And I think it's pretty obvious that we should only try PCI 0xcf9 as a last resort - if at all. The other observation is that (on this box) we should never try the PCI reboot method, but close with either the 'triple fault' or the 'BIOS' (terminal!) reboot methods. Thirdly, CF9_COND is a total misnomer - it should be something like CF9_SAFE or CF9_CAREFUL, and 'CF9' should be 'CF9_FORCE' ... So this patch fixes the worst problems: - it orders the actual reboot logic to follow the reboot ordering pattern - it was in a pretty random order before for no good reason. - it fixes the CF9 misnomers and uses BOOT_CF9_FORCE and BOOT_CF9_SAFE flags to make the code more obvious. - it tries the BIOS reboot method before the PCI reboot method. (Since 'BIOS' is a terminal reboot method resulting in a hang if it does not work, this is essentially equivalent to removing the PCI reboot method from the default reboot chain.) - just for the miraculous possibility of terminal (resulting in hang) reboot methods of triple fault or BIOS returning without having done their job, there's an ordering between them as well. Reported-and-bisected-and-tested-by: Steven Rostedt <rostedt@goodmis.org> Cc: Li Aubrey <aubrey.li@linux.intel.com> Cc: Linus Torvalds <torvalds@linux-foundation.org> Cc: Matthew Garrett <mjg59@srcf.ucam.org> Link: http://lkml.kernel.org/r/20140404064120.GB11877@gmail.com Signed-off-by: Ingo Molnar <mingo@kernel.org>
2014-04-16  ARM: shmobile: armadillo-reference dts: Seiko Instruments, Inc is "sii"  (Geert Uytterhoeven)
Use "sii,s35390a" instead of "seiko,s35390a", cfr. Documentation/devicetree/bindings/i2c/trivial-devices.txt and Documentation/devicetree/bindings/vendor-prefixes.txt. Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Acked-by: Ulrich Hecht <ulrich.hecht+renesas@gmail.com> Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
2014-04-16  ARM: shmobile: r8a7740: Make r8a7740_meram_workaround() __init  (Geert Uytterhoeven)
It's called from eva_init() only, which is __init Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Simon Horman <horms+renesas@verge.net.au>
2014-04-16  ARM: shmobile: sh7372: Call sh7372_add_early_devices() instead of open coding  (Geert Uytterhoeven)
Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be> Reviewed-by: Laurent Pinchart <laurent.pinchart@ideasonboard.com> Signed-off-by: Simon Horman <horms+renesas@verge.net.au>