path: root/arch/arm/mm
Age  Commit message  Author
2013-04-17ARM: 7696/1: Fix kexec by setting outer_cache.inv_all for FeroceonIllia Ragozin
On Feroceon the L2 cache becomes non-coherent with the CPU when the L1 caches are disabled. Thus the L2 needs to be invalidated after both L1 caches are disabled. On kexec, before starting the code that relocates the kernel, the L1 caches are disabled in cpu_proc_fin (cpu_v7_proc_fin for Feroceon), but afterwards the L2 cache is never invalidated, because inv_all is not set in cache-feroceon-l2.c. So kernel relocation and decompression may have (and usually do have) errors. Setting the function enables L2 invalidation and fixes the issue. Cc: <stable@vger.kernel.org> Signed-off-by: Illia Ragozin <illia.ragozin@grapecom.com> Acked-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
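A minimal sketch of the shape of such a fix (not the verbatim patch; feroceon_l2_inv_all() is a hypothetical helper name here): wire an "invalidate all" callback into the shared outer_cache descriptor so that outer_inv_all() during kexec actually does something once the L1 caches are off.

#include <linux/init.h>
#include <asm/outercache.h>

/* Hypothetical helper: issue the Feroceon "invalidate entire L2" operation. */
static void feroceon_l2_inv_all(void)
{
	/* the cp15 write that invalidates the whole L2 would go here */
}

static void __init feroceon_l2_enable_inv_all(void)
{
	/* The bug was that this assignment was missing, so outer_inv_all()
	 * on the kexec path was a no-op and stale L2 lines survived. */
	outer_cache.inv_all = feroceon_l2_inv_all;
}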
2013-04-17ARM: 7694/1: ARM, TCM: initialize TCM in paging_init(), instead of setup_arch()Joonsoo Kim
tcm_init() calls iotable_init(), which uses early_alloc variants that do memblock allocation. Using memblock allocation directly after bootmem has been initialized should not be permitted, because bootmem cannot know what has additionally been reserved. So move tcm_init() to a safe place, before bootmem is initialized. (On the U300) Tested-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-04-17Merge branch 'for-rmk/740t' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into fixesRussell King
2013-04-08ARM: Do 15e0d9e37c (ARM: pm: let platforms select cpu_suspend support) properlyRussell King
Let's do the changes properly and fix the same problem everywhere, not just for one case. Cc: <stable@vger.kernel.org> # kernels containing 15e0d9e37c or equivalent Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-04-03ARM: 7684/1: errata: Workaround for Cortex-A15 erratum 798181 (TLBI/DSB operations)Catalin Marinas
On Cortex-A15 (r0p0..r3p2) the TLBI/DSB are not adequately shooting down all use of the old entries. This patch implements the erratum workaround which consists of: 1. Dummy TLBIMVAIS and DSB on the CPU doing the TLBI operation. 2. Send IPI to the CPUs that are running the same mm (and ASID) as the one being invalidated (or all the online CPUs for global pages). 3. CPU receiving the IPI executes a DMB and CLREX (part of the exception return code already). Signed-off-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-04-03ARM: 7682/1: cache-l2x0: fix masking of RTL revision numbering and set_debug initRob Herring
Commit b8db6b8 (ARM: 7547/4: cache-l2x0: add support for Aurora L2 cache ctrl) moved the masking of the part ID which caused the RTL version to be lost. Commit 6248d06 (ARM: 7545/1: cache-l2x0: make outer_cache_fns a field of l2x0_of_data) changed how .set_debug is initialized. Both commits break commit 74ddcdb (ARM: 7608/1: l2x0: Only set .set_debug on PL310 r3p0 and earlier) which uses the RTL version to conditionally set .set_debug function pointer. Commit b8db6b8 also caused the printed cache ID to be missing the version information. Fix this by reverting how the part number is masked so the RTL version info is maintained. The cache-id-part DT property does not set the RTL bits so masking them should have no effect. Also, re-arrange the order of the function pointer init so the .set_debug function can be overridden. Reported-by: Paolo Pisati <paolo.pisati@canonical.com> Signed-off-by: Rob Herring <rob.herring@calxeda.com> Cc: Gregory CLEMENT <gregory.clement@free-electrons.com> Cc: Yehuda Yitschak <yehuday@marvell.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-26ARM: modules: don't export cpu_set_pte_ext when !MMUWill Deacon
cpu_set_pte_ext is only guaranteed to be defined when CONFIG_MMU, so don't export it to modules otherwise. Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-03-26ARM: mm: remove broken condition check for v4 flushingWill Deacon
There's no point having a conditional cache flush if we don't know the state of the condition beforehand. This patch makes the cache flush in v4_flush_user_cache_range unconditional. Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-03-26ARM: mm: fix numerous hideous errors in proc-arm740.SWill Deacon
The setup code in proc-arm740.S is completely broken and, as far as I can tell, always has been. I was >this< close to ripping it out, when a 740t core-tile materialised in the office, so I've had a crack at fixing things up: - Fix the ram/flash area calculations so that we actually set the condition flags before testing them... - Fix the proc_info structure so that __cpu_io_mmu_flags are defined as 0, placing the __cpu_flush pointer at the correct offset - Re-number the registers used during __arm740_setup so that we don't clobber the machine ID et al - Advertise Thumb support via the hwcaps, since 740T is the only 740 implementation. Acked-by: Hyok S. Choi <hyok.choi@samsung.com> Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-03-26ARM: cache: remove ARMv3 support codeWill Deacon
This is only used by 740t, which is a v4 core and (by my reading of the datasheet for the CPU) ignores CRm for the cp15 cache flush operation, making the v4 cache implementation in cache-v4.S sufficient for this CPU. Tested with 740T core-tile on Integrator/AP baseboard. Acked-by: Hyok S. Choi <hyok.choi@samsung.com> Acked-by: Greg Ungerer <gerg@uclinux.org> Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-03-22ARM: 7678/1: Work around faulty ISAR0 register in some Krait CPUsStepan Moskovchenko
Some early versions of the Krait CPU design incorrectly indicate that they only support the UDIV and SDIV instructions in Thumb mode when they actually support them in ARM and Thumb mode. It seems that these CPUs follow the DDI0406B ARM ARM which has two possible values for the divide instructions field, instead of the DDI0406C document which has three possible values. Work around this problem by checking the MIDR against Krait CPUs with this faulty ISAR0 register and force the hwcaps to indicate support in both modes. [sboyd: Rewrote commit text to reflect real reasoning now that we autodetect udiv/sdiv] Signed-off-by: Stepan Moskovchenko <stepanm@codeaurora.org> Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
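As an illustration only (the MIDR mask and value below are placeholders, not the real affected Krait IDs), the workaround boils down to a check of this shape:

#include <linux/init.h>
#include <asm/cputype.h>
#include <asm/hwcap.h>

#define FAULTY_KRAIT_MIDR_MASK	0xff0ffc00	/* placeholder, not the real mask */
#define FAULTY_KRAIT_MIDR_VAL	0x510f0400	/* placeholder, not the real ID */

static void __init krait_isar0_workaround(void)
{
	/* ISAR0 under-reports on these parts (it claims Thumb-only divide),
	 * so force both the ARM and Thumb divide hwcaps. */
	if ((read_cpuid_id() & FAULTY_KRAIT_MIDR_MASK) == FAULTY_KRAIT_MIDR_VAL)
		elf_hwcap |= HWCAP_IDIVA | HWCAP_IDIVT;
}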
2013-03-22ARM: 7680/1: Detect support for SDIV/UDIV from ISAR0 registerStephen Boyd
The ISAR0 register indicates support for the SDIV and UDIV instructions in both the Thumb and ARM instruction set. Read the register to detect the supported instructions and update the elf_hwcap mask as appropriate. This is better than adding more and more cpuid checks in proc-v7.S for each new cpu variant that supports these instructions. Acked-by: Will Deacon <will.deacon@arm.com> Cc: Stepan Moskovchenko <stepanm@codeaurora.org> Signed-off-by: Stephen Boyd <sboyd@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
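A sketch of the detection logic described above (field position per the ARM ARM; treat this as an approximation of the kernel code rather than a verbatim copy): ID_ISAR0 bits [27:24] encode divide support, where 1 means Thumb only and 2 means ARM and Thumb.

#include <linux/init.h>
#include <asm/cputype.h>
#include <asm/hwcap.h>

static void __init detect_sdiv_udiv(void)
{
	unsigned int divide_instrs = (read_cpuid_ext(CPUID_EXT_ISAR0) >> 24) & 0xf;

	switch (divide_instrs) {
	case 2:				/* SDIV/UDIV in ARM and Thumb */
		elf_hwcap |= HWCAP_IDIVA;
		/* fall through */
	case 1:				/* SDIV/UDIV in Thumb only */
		elf_hwcap |= HWCAP_IDIVT;
	}
}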
2013-03-22ARM: 7677/1: LPAE: Fix mapping in alloc_init_section for unaligned addressesSricharan R
With LPAE enabled, alloc_init_section() does not map the entire address space for unaligned addresses. The issue is also reproducible with CMA + LPAE: CMA tries to map 16MB with page-granularity mappings during boot; alloc_init_pte() is called and, out of the 16MB, only 2MB gets mapped while the rest remains inaccessible. Because of this, OMAP5 boot is broken with CMA + LPAE enabled. Fix the issue by ensuring that the entire address range is mapped. Signed-off-by: R Sricharan <r.sricharan@ti.com> Cc: Catalin Marinas <catalin.marinas@arm.com> Cc: Christoffer Dall <chris@cloudcar.com> Cc: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Laura Abbott <lauraa@codeaurora.org> Acked-by: Catalin Marinas <catalin.marinas@arm.com> Acked-by: Christoffer Dall <chris@cloudcar.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-15Merge branch 'fixes-for-3.9' of git://git.linaro.org/people/mszyprowski/linux-dma-mappingLinus Torvalds
Pull DMA-mapping fix from Marek Szyprowski: "An important fix for all ARM architectures which use ZONE_DMA. Without it dma_alloc_* calls with GFP_ATOMIC flag might have allocated buffers outside the DMA zone." * 'fixes-for-3.9' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping: ARM: DMA-mapping: add missing GFP_DMA flag for atomic buffer allocation
2013-03-14ARM: DMA-mapping: add missing GFP_DMA flag for atomic buffer allocationMarek Szyprowski
The atomic pool should always be allocated from the DMA zone if such a zone is available in the system, to avoid issues caused by the limited DMA mask of any of the devices used for making an atomic allocation. Reported-by: Krzysztof Halasa <khc@pm.waw.pl> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Cc: Stable <stable@vger.kernel.org> [v3.6+]
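A hedged sketch of the rule the fix enforces (the helper name is illustrative, not the real dma-mapping.c code): when the kernel is built with a DMA zone, the atomic pool's pages should be requested with GFP_DMA so that every device's mask can reach them.

#include <linux/gfp.h>
#include <linux/kconfig.h>

/* Illustrative helper: pick the gfp flags used to populate the atomic pool. */
static gfp_t atomic_pool_gfp(void)
{
	gfp_t gfp = GFP_KERNEL;

	if (IS_ENABLED(CONFIG_ZONE_DMA))
		gfp |= GFP_DMA;		/* keep the pool reachable by any dma mask */
	return gfp;
}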
2013-03-03ARM: 7661/1: mm: perform explicit branch predictor maintenance when requiredWill Deacon
The ARM ARM requires branch predictor maintenance if, for a given ASID, the instructions at a specific virtual address appear to change. From the kernel's point of view, that means: - Changing the kernel's view of memory (e.g. switching to the identity map) - ASID rollover (since ASIDs will be re-allocated to new tasks) This patch adds explicit branch predictor maintenance when either of the two conditions above are met. Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-03ARM: 7659/1: mm: make mm->context.id an atomic64_t variableWill Deacon
mm->context.id is updated under asid_lock when a new ASID is allocated to an mm_struct. However, it is also read without the lock when a task is being scheduled and checking whether or not the current ASID generation is up-to-date. If two threads of the same process are being scheduled in parallel and the bottom bits of the generation in their mm->context.id match the current generation (that is, the mm_struct has not been used for ~2^24 rollovers) then the non-atomic, lockless access to mm->context.id may yield the incorrect ASID. This patch fixes this issue by making mm->context.id an atomic64_t, ensuring that the generation is always read consistently. For code that only requires access to the ASID bits (e.g. TLB flushing by mm), the value is accessed directly, which GCC converts to an ldrb. Cc: <stable@vger.kernel.org> # 3.8 Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
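A rough sketch of the access pattern being described (the ASID width below is illustrative, not necessarily the kernel's, and the helper is hypothetical): the generation and the ASID share one 64-bit value, and atomic64_read() is what guarantees that readers racing with a rollover see the two halves consistently.

#include <linux/atomic.h>
#include <linux/mm_types.h>

#define EX_ASID_BITS	8
#define EX_ASID_MASK	((1ULL << EX_ASID_BITS) - 1)

static inline u64 example_mm_asid(struct mm_struct *mm)
{
	/* A plain 64-bit load on 32-bit ARM is not single-copy atomic,
	 * so the atomic64_t accessor is what makes this read safe. */
	return atomic64_read(&mm->context.id) & EX_ASID_MASK;
}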
2013-03-03ARM: 7658/1: mm: fix race updating mm->context.id on ASID rolloverWill Deacon
If a thread triggers an ASID rollover, other threads of the same process must be made to wait until the mm->context.id for the shared mm_struct has been updated to the new generation and the associated book-keeping (e.g. TLB invalidation) has been performed. However, there is a *tiny* window where both mm->context.id and the relevant active_asids entry are updated to the new generation, but the TLB flush has not been performed, which could allow another thread to return to userspace with a dirty TLB, potentially leading to data corruption. In reality this will never occur because one CPU would need to perform a context-switch in the time it takes another to do a couple of atomic test/set operations, but we should plug the race anyway. This patch moves the active_asids update to after the potential TLB flush on context-switch. Cc: <stable@vger.kernel.org> # 3.8 Reviewed-by: Catalin Marinas <catalin.marinas@arm.com> Signed-off-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-03ARM: 7652/1: mm: fix missing use of 'asid' to get asid value from mm->context.idBen Dooks
Fix missing use of the asid macro when getting the ASID from the mm->context.id field. Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-03-03Merge branch 'for-linus' of git://git.linaro.org/people/rmk/linux-armLinus Torvalds
Pull late ARM updates from Russell King: "Here is the late set of ARM updates for this merge window; in here is: - The ARM parts of the broadcast timer support, core parts merged through tglx's tree. This was left over from the previous merge to allow the dependency on tglx's tree to be resolved. - A fix to the VFP code which shows up on Raspberry Pi's, as well as fixing the fallout from a previous commit in this area. - A number of smaller fixes scattered throughout the ARM tree" * 'for-linus' of git://git.linaro.org/people/rmk/linux-arm: ARM: Fix broken commit 0cc41e4a21d43 corrupting kernel messages ARM: fix scheduling while atomic warning in alignment handling code ARM: VFP: fix emulation of second VFP instruction ARM: 7656/1: uImage: Error out on build of multiplatform without LOADADDR ARM: 7640/1: memory: tegra_ahb_enable_smmu() depends on TEGRA_IOMMU_SMMU ARM: 7654/1: Preserve L_PTE_VALID in pte_modify() ARM: 7653/2: do not scale loops_per_jiffy when using a constant delay clock ARM: 7651/1: remove unused smp_timer_broadcast #define
2013-03-03Merge branches 'devel-stable', 'fixes' and 'mmci' into for-linusRussell King
2013-02-26Merge branch 'for-v3.9' of git://git.linaro.org/people/mszyprowski/linux-dma-mappingLinus Torvalds
Pull DMA-mapping updates from Marek Szyprowski: "This time all patches are related only to ARM DMA-mapping subsystem. The main extension provided by this pull request is highmem support. Besides that it contains a bunch of small bugfixes and cleanups." * 'for-v3.9' of git://git.linaro.org/people/mszyprowski/linux-dma-mapping: ARM: DMA-mapping: fix memory leak in IOMMU dma-mapping implementation ARM: dma-mapping: Add maximum alignment order for dma iommu buffers ARM: dma-mapping: use himem for DMA buffers for IOMMU-mapped devices ARM: dma-mapping: add support for CMA regions placed in highmem zone arm: dma mapping: export arm iommu functions ARM: dma-mapping: Add arm_iommu_detach_device() ARM: dma-mapping: Add macro to_dma_iommu_mapping() ARM: dma-mapping: Set arm_dma_set_mask() for iommu->set_dma_mask() ARM: iommu: Include linux/kref.h in asm/dma-iommu.h
2013-02-25ARM: fix scheduling while atomic warning in alignment handling codeRussell King
Paolo Pisati reports that IPv6 triggers this warning: BUG: scheduling while atomic: swapper/0/0/0x40000100 Modules linked in: [<c001b1c4>] (unwind_backtrace+0x0/0xf0) from [<c0503c5c>] (__schedule_bug+0x48/0x5c) [<c0503c5c>] (__schedule_bug+0x48/0x5c) from [<c0508608>] (__schedule+0x700/0x740) [<c0508608>] (__schedule+0x700/0x740) from [<c007007c>] (__cond_resched+0x24/0x34) [<c007007c>] (__cond_resched+0x24/0x34) from [<c05086dc>] (_cond_resched+0x3c/0x44) [<c05086dc>] (_cond_resched+0x3c/0x44) from [<c0021f6c>] (do_alignment+0x178/0x78c) [<c0021f6c>] (do_alignment+0x178/0x78c) from [<c00083e0>] (do_DataAbort+0x34/0x98) [<c00083e0>] (do_DataAbort+0x34/0x98) from [<c0509a60>] (__dabt_svc+0x40/0x60) Exception stack(0xc0763d70 to 0xc0763db8) 3d60: e97e805e e97e806e 2c000000 11000000 3d80: ea86bb00 0000002c 00000011 e97e807e c076d2a8 e97e805e e97e806e 0000002c 3da0: 3d000000 c0763dbc c04b98fc c02a8490 00000113 ffffffff [<c0509a60>] (__dabt_svc+0x40/0x60) from [<c02a8490>] (__csum_ipv6_magic+0x8/0xc8) Fix this by using probe_kernel_address() instead of __get_user(). Cc: <stable@vger.kernel.org> Reported-by: Paolo Pisati <p.pisati@gmail.com> Tested-by: Paolo Pisati <p.pisati@gmail.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
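A small sketch of the substitution (era-appropriate API; the wrapper name is hypothetical): probe_kernel_address() copies the word with page faults disabled and simply returns -EFAULT on failure, so it never tries to sleep the way a faulting __get_user() can.

#include <linux/uaccess.h>

/* Hypothetical wrapper showing the shape of the fix in do_alignment(). */
static int read_instr_nofault(unsigned long addr, unsigned long *instr)
{
	/* was: __get_user(*instr, (unsigned long *)addr); which may sleep */
	return probe_kernel_address((unsigned long *)addr, *instr);
}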
2013-02-25ARM: DMA-mapping: fix memory leak in IOMMU dma-mapping implementationMarek Szyprowski
This patch removes page_address() usage in the IOMMU-aware dma-mapping implementation and replaces it with direct use of the CPU virtual address provided by the caller. page_address() returned an incorrect address for pages remapped in the atomic pool, which caused a memory leak. Reported-by: Hiroshi Doyu <hdoyu@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Tested-by: Hiroshi Doyu <hdoyu@nvidia.com>
2013-02-25ARM: dma-mapping: Add maximum alignment order for dma iommu buffersSeung-Woo Kim
The alignment order for a dma iommu buffer is set by the buffer size. For large buffers this wastes IOMMU address space, so a configurable parameter that limits the maximum alignment order can reduce the waste. Signed-off-by: Seung-Woo Kim <sw0312.kim@samsung.com> Signed-off-by: Kyungmin.park <kyungmin.park@samsung.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25ARM: dma-mapping: use himem for DMA buffers for IOMMU-mapped devicesMarek Szyprowski
The IOMMU can provide access to any memory page, so there is no point in limiting the allocated pages to lowmem now that the other parts of the dma-mapping subsystem correctly support highmem pages. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25ARM: dma-mapping: add support for CMA regions placed in highmem zoneMarek Szyprowski
This patch adds missing pieces to correctly support memory pages served from CMA regions placed in high memory zones. Please note that the default global CMA area is still put into lowmem and is limited by optional architecture specific DMA zone. One can however put device specific CMA regions in high memory zone to reduce lowmem usage. Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Michal Nazarewicz <mina86@mina86.com>
2013-02-25arm: dma mapping: export arm iommu functionsPrathyush K
This patch adds EXPORT_SYMBOL_GPL calls to the three arm iommu functions - arm_iommu_create_mapping, arm_iommu_free_mapping and arm_iommu_attach_device. These three functions are arm specific wrapper functions for creating/freeing/using an iommu mapping and they are called by various drivers. If any of these drivers need to be built as dynamic modules, these functions need to be exported. Changelog v2: using EXPORT_SYMBOL_GPL as suggested by Marek. Signed-off-by: Prathyush K <prathyush.k@samsung.com> [m.szyprowski: extended with recently introduced EXPORT_SYMBOL_GPL(arm_iommu_detach_device)] Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
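For illustration (with a stand-in function, not the real dma-mapping.c code), the change is simply appending an export annotation after each existing definition so that modular drivers can link against the helpers:

#include <linux/export.h>
#include <linux/device.h>

/* Stand-in for one of the ARM IOMMU wrapper functions. */
void example_iommu_helper(struct device *dev)
{
	/* real body lives in arch/arm/mm/dma-mapping.c */
}
EXPORT_SYMBOL_GPL(example_iommu_helper);	/* the added line */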
2013-02-25ARM: dma-mapping: Add arm_iommu_detach_device()Hiroshi Doyu
A counterpart of arm_iommu_attach_device(). Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-25ARM: dma-mapping: Set arm_dma_set_mask() for iommu->set_dma_mask()Hiroshi Doyu
struct dma_map_ops iommu_ops doesn't have ->set_dma_mask, which causes a crash when dma_set_mask() is called from a driver. Signed-off-by: Hiroshi Doyu <hdoyu@nvidia.com> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
2013-02-21Merge tag 'soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-socLinus Torvalds
Pull ARM SoC-specific updates from Arnd Bergmann: "This is a larger set of new functionality for the existing SoC families, including: - vt8500 gains support for new CPU cores, notably the Cortex-A9 based wm8850 - prima2 gains support for the "marco" SoC family, its SMP based cousin - tegra gains support for the new Tegra4 (Tegra114) family - socfpga now supports a newer version of the hardware including SMP - i.mx31 and bcm2835 are now using DT probing for their clocks - lots of updates for sh-mobile - OMAP updates for clocks, power management and USB - i.mx6q and tegra now support cpuidle - kirkwood now supports PCIe hot plugging - tegra clock support is updated - tegra USB PHY probing gets implemented diffently" * tag 'soc' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (148 commits) ARM: prima2: remove duplicate v7_invalidate_l1 ARM: shmobile: r8a7779: Correct TMU clock support again ARM: prima2: fix __init section for cpu hotplug ARM: OMAP: Consolidate OMAP USB-HS platform data (part 3/3) ARM: OMAP: Consolidate OMAP USB-HS platform data (part 1/3) arm: socfpga: Add SMP support for actual socfpga harware arm: Add v7_invalidate_l1 to cache-v7.S arm: socfpga: Add entries to enable make dtbs socfpga arm: socfpga: Add new device tree source for actual socfpga HW ARM: tegra: sort Kconfig selects for Tegra114 ARM: tegra: enable ARCH_REQUIRE_GPIOLIB for Tegra114 ARM: tegra: Fix build error w/ ARCH_TEGRA_114_SOC w/o ARCH_TEGRA_3x_SOC ARM: tegra: Fix build error for gic update ARM: tegra: remove empty tegra_smp_init_cpus() ARM: shmobile: Register ARM architected timer ARM: MARCO: fix the build issue due to gic-vic-to-irqchip move ARM: shmobile: r8a7779: Correct TMU clock support ARM: mxs_defconfig: Select CONFIG_DEVTMPFS_MOUNT ARM: mxs: decrease mxs_clockevent_device.min_delta_ns to 2 clock cycles ARM: mxs: use apbx bus clock to drive the timers on timrotv2 ...
2013-02-20Merge branch 'for-linus-2' of git://git.linaro.org/people/rmk/linux-armLinus Torvalds
Pull ARM updates (part two) from Russell King: - breakpoint and perf updates from Will Deacon. - hypervisor boot mode updates from Will. - support for Power State Coordination Interface via the Hypervisor - core ARM support for KVM * 'for-linus-2' of git://git.linaro.org/people/rmk/linux-arm: (32 commits) KVM: ARM: Add maintainer entry for KVM/ARM KVM: ARM: Power State Coordination Interface implementation KVM: ARM: Handle I/O aborts KVM: ARM: Handle guest faults in KVM KVM: ARM: VFP userspace interface KVM: ARM: Demux CCSIDR in the userspace API KVM: ARM: User space API for getting/setting co-proc registers KVM: ARM: Emulation framework and CP15 emulation KVM: ARM: World-switch implementation KVM: ARM: Inject IRQs and FIQs from userspace KVM: ARM: Memory virtualization setup KVM: ARM: Hypervisor initialization KVM: ARM: Initial skeleton to compile KVM support ARM: Section based HYP idmap ARM: Add page table and page defines needed by KVM ARM: perf: simplify __hw_perf_event_init err handling ARM: perf: remove unnecessary checks for idx < 0 ARM: perf: handle armpmu_register failing ARM: perf: don't pretend to support counting of L1I writes ARM: perf: remove redundant NULL check on cpu_pmu ...
2013-02-20Merge branch 'misc' into for-linusRussell King
Conflicts: arch/arm/include/asm/memory.h
2013-02-19Merge tag 'omap-for-v3.9/usb-signed' of git://git.kernel.org/pub/scm/linux/kernel/git/tmlind/linux-omap into next/socArnd Bergmann
These changes contain the OMAP USB related platform data changes that were dropped from linux next because of the merge conflicts as requested by me and Olof. The reason was that at this point we really should be able to do the arch/arm related changes separately from driver changes to avoid dependencies between branches. These patches were initially part of the USB related MFD patches. Based on our comments, Roger Quadros quickly reworked these patches into a shared branch between ARM SoC tree and the MFD tree, then separate patches for the OMAP platform data and MFD driver. Note that this branch will conflict with c1d1cd597fc7 ("ARM: OMAP2+: omap_device: remove obsolete pm_lats and early_device code"). Please see http://lkml.org/lkml/2013/2/11/16 for the merge resolution. [arnd - resolved the merge conflict] Signed-off-by: Arnd Bergmann <arnd@arndb.de>
2013-02-16ARM: 7650/1: mm: replace direct access to mm->context.id with new macroBen Dooks
The mmid macro is meant to be used to get the mm->context.id data from the mm structure, but it seems to have been missed in a couple of files. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-02-16ARM: 7649/1: mm: mm->context.id fix for big-endianBen Dooks
Since the new ASID code in b5466f8728527a05a493cc4abe9e6f034a1bbaab ("ARM: mm: remove IPI broadcasting on ASID rollover") was changed to use 64-bit operations, it has broken big-endian operation due to an issue with the MM code accessing sub-fields of mm->context.id. When running in BE mode, the values in mm->context.id are stored with the most significant word first, so the LDR in arch/arm/mm/proc-macros.S reads the wrong part of this field. To resolve this, change the LDR in the mmid macro to load from +4. Acked-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Ben Dooks <ben.dooks@codethink.co.uk> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-02-16ARM: fix warnings introduced by previous patchRussell King
869486d5f51 (ARM: 7646/1: mm: use static_vm for managing static mapped areas) introduced new warnings: arch/arm/mm/mmu.c: In function 'pci_reserve_io': arch/arm/mm/mmu.c:888:16: warning: unused variable 'addr' arch/arm/mm/mmu.c:887:20: warning: unused variable 'vm' because it failed to delete the two local variables it no longer used. Fix this. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-02-16ARM: 7646/1: mm: use static_vm for managing static mapped areasJoonsoo Kim
A static mapped area is ARM-specific, so it is better not to use the generic vmalloc data structures (vmlist and vmlist_lock) for managing static mapped areas; doing so also causes some needless overhead, and reducing that overhead is worthwhile. With the newly introduced static_vm infrastructure we don't need to iterate over all mapped areas; instead, we iterate over static mapped areas only, which reduces the overhead of finding a matching area. It also removes the architecture dependency on the vmalloc layer, which helps the maintainability of that layer. Reviewed-by: Nicolas Pitre <nico@linaro.org> Acked-by: Rob Herring <rob.herring@calxeda.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-02-16ARM: 7645/1: ioremap: introduce an infrastructure for static mapped areaJoonsoo Kim
In the current implementation, we use an ARM-specific flag, VM_ARM_STATIC_MAPPING, to distinguish ARM-specific static mapped areas. The purpose of a static mapped area is to be re-used when the entire physical address range of an ioremap request can be covered by it. This implementation causes needless overhead in some cases. For example, assume there is only one static mapped area and vmlist has 300 areas: every time we call ioremap, we check 300 areas to decide whether one matches. Moreover, even if there is no static mapped area at all and vmlist has 300 areas, every ioremap call still checks all 300 areas. If we construct an extra list for static mapped areas, we can eliminate the above-mentioned overhead: with the extra list, if there is one static mapped area, we check just that one area and proceed to the next operation quickly. In fact, this is not a critical problem, because ioremap is not frequently used, but reducing the overhead is still worthwhile. Another reason for doing this work is to remove the architecture dependency on the vmalloc layer. vmlist and vmlist_lock are internal data structures of the vmalloc layer; some code for debugging and statistics inevitably uses them, but it is preferable that they are used as little as possible outside of vmalloc.c. This patch therefore introduces an ARM-specific infrastructure for static mapped areas; the following patch will use it to resolve the problem described above. Reviewed-by: Nicolas Pitre <nico@linaro.org> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
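A condensed sketch of the kind of infrastructure being introduced (names and fields are illustrative approximations, not necessarily the exact final API): static mappings live on their own short list, so an ioremap() lookup walks only those few entries instead of the whole vmlist.

#include <linux/list.h>
#include <linux/types.h>
#include <linux/vmalloc.h>

struct static_vm {
	struct vm_struct vm;		/* reuse the familiar vm_struct fields */
	struct list_head list;
};

static LIST_HEAD(static_vmlist);

/* Return the static mapping that fully covers [paddr, paddr + size), if any. */
static struct static_vm *find_static_vm_paddr(phys_addr_t paddr, size_t size)
{
	struct static_vm *svm;

	list_for_each_entry(svm, &static_vmlist, list) {
		struct vm_struct *vm = &svm->vm;

		if (vm->phys_addr <= paddr &&
		    paddr + size <= vm->phys_addr + vm->size)
			return svm;
	}
	return NULL;
}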
2013-02-16ARM: 7644/1: vmregion: remove vmregion code entirelyJoonsoo Kim
Now, there is no user for vmregion. So remove it. Acked-by: Nicolas Pitre <nico@linaro.org> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Joonsoo Kim <iamjoonsoo.kim@lge.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
2013-02-11arm: Add v7_invalidate_l1 to cache-v7.SDinh Nguyen
mach-socfpga is another platform that needs to use v7_invalidate_l1 to bring up additional cores. There was a comment that the ideal place for v7_invalidate_l1 should be in arm/mm/cache-v7.S. Signed-off-by: Dinh Nguyen <dinguyen@altera.com> Acked-by: Simon Horman <horms+renesas@verge.net.au> Acked-by: Stephen Warren <swarren@nvidia.com> Reviewed-by: Pavel Machek <pavel@denx.de> Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Pavel Machek <pavel@denx.de> Tested-by: Stephen Warren <swarren@nvidia.com> Cc: Arnd Bergmann <arnd@arndb.de> Cc: Russell King <linux@arm.linux.org.uk> Cc: Olof Johansson <olof@lixom.net> Cc: Thomas Gleixner <tglx@linutronix.de> Cc: Rob Herring <rob.herring@calxeda.com> Cc: Sascha Hauer <kernel@pengutronix.de> Cc: Magnus Damm <magnus.damm@gmail.com> Signed-off-by: Olof Johansson <olof@lixom.net>
2013-02-08ARM: DMA mapping: fix bad atomic testRussell King
Realview fails to boot with this warning: BUG: spinlock lockup suspected on CPU#0, init/1 lock: 0xcf8bde10, .magic: dead4ead, .owner: init/1, .owner_cpu: 0 Backtrace: [<c00185d8>] (dump_backtrace+0x0/0x10c) from [<c03294e8>] (dump_stack+0x18/0x1c) r6:cf8bde10 r5:cf83d1c0 r4:cf8bde10 r3:cf83d1c0 [<c03294d0>] (dump_stack+0x0/0x1c) from [<c018926c>] (spin_dump+0x84/0x98) [<c01891e8>] (spin_dump+0x0/0x98) from [<c0189460>] (do_raw_spin_lock+0x100/0x198) [<c0189360>] (do_raw_spin_lock+0x0/0x198) from [<c032cbac>] (_raw_spin_lock+0x3c/0x44) [<c032cb70>] (_raw_spin_lock+0x0/0x44) from [<c01c9224>] (pl011_console_write+0xe8/0x11c) [<c01c913c>] (pl011_console_write+0x0/0x11c) from [<c002aea8>] (call_console_drivers.clone.7+0xdc/0x104) [<c002adcc>] (call_console_drivers.clone.7+0x0/0x104) from [<c002b320>] (console_unlock+0x2e8/0x454) [<c002b038>] (console_unlock+0x0/0x454) from [<c002b8b4>] (vprintk_emit+0x2d8/0x594) [<c002b5dc>] (vprintk_emit+0x0/0x594) from [<c0329718>] (printk+0x3c/0x44) [<c03296dc>] (printk+0x0/0x44) from [<c002929c>] (warn_slowpath_common+0x28/0x6c) [<c0029274>] (warn_slowpath_common+0x0/0x6c) from [<c0029304>] (warn_slowpath_null+0x24/0x2c) [<c00292e0>] (warn_slowpath_null+0x0/0x2c) from [<c0070ab0>] (lockdep_trace_alloc+0xd8/0xf0) [<c00709d8>] (lockdep_trace_alloc+0x0/0xf0) from [<c00c0850>] (kmem_cache_alloc+0x24/0x11c) [<c00c082c>] (kmem_cache_alloc+0x0/0x11c) from [<c00bb044>] (__get_vm_area_node.clone.24+0x7c/0x16c) [<c00bafc8>] (__get_vm_area_node.clone.24+0x0/0x16c) from [<c00bb7b8>] (get_vm_area_caller+0x48/0x54) [<c00bb770>] (get_vm_area_caller+0x0/0x54) from [<c0020064>] (__alloc_remap_buffer.clone.15+0x38/0xb8) [<c002002c>] (__alloc_remap_buffer.clone.15+0x0/0xb8) from [<c0020244>] (__dma_alloc+0x160/0x2c8) [<c00200e4>] (__dma_alloc+0x0/0x2c8) from [<c00204d8>] (arm_dma_alloc+0x88/0xa0)[<c0020450>] (arm_dma_alloc+0x0/0xa0) from [<c00beb00>] (dma_pool_alloc+0xcc/0x1a8) [<c00bea34>] (dma_pool_alloc+0x0/0x1a8) from [<c01a9d14>] (pl08x_fill_llis_for_desc+0x28/0x568) [<c01a9cec>] (pl08x_fill_llis_for_desc+0x0/0x568) from [<c01aab8c>] (pl08x_prep_slave_sg+0x258/0x3b0) [<c01aa934>] (pl08x_prep_slave_sg+0x0/0x3b0) from [<c01c9f74>] (pl011_dma_tx_refill+0x140/0x288) [<c01c9e34>] (pl011_dma_tx_refill+0x0/0x288) from [<c01ca748>] (pl011_start_tx+0xe4/0x120) [<c01ca664>] (pl011_start_tx+0x0/0x120) from [<c01c54a4>] (__uart_start+0x48/0x4c) [<c01c545c>] (__uart_start+0x0/0x4c) from [<c01c632c>] (uart_start+0x2c/0x3c) [<c01c6300>] (uart_start+0x0/0x3c) from [<c01c795c>] (uart_write+0xcc/0xf4) [<c01c7890>] (uart_write+0x0/0xf4) from [<c01b0384>] (n_tty_write+0x1c0/0x3e4) [<c01b01c4>] (n_tty_write+0x0/0x3e4) from [<c01acfe8>] (tty_write+0x144/0x240) [<c01acea4>] (tty_write+0x0/0x240) from [<c01ad17c>] (redirected_tty_write+0x98/0xac) [<c01ad0e4>] (redirected_tty_write+0x0/0xac) from [<c00c371c>] (vfs_write+0xbc/0x150) [<c00c3660>] (vfs_write+0x0/0x150) from [<c00c39c0>] (sys_write+0x4c/0x78) [<c00c3974>] (sys_write+0x0/0x78) from [<c0014460>] (ret_fast_syscall+0x0/0x3c) This happens because the DMA allocation code is not respecting atomic allocations correctly. GFP flags should not be tested for GFP_ATOMIC to determine if an atomic allocation is being requested. GFP_ATOMIC is not a flag but a value. The GFP bitmask flags are all prefixed with __GFP_. The rest of the kernel tests for __GFP_WAIT not being set to indicate an atomic allocation. We need to do the same. Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
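A minimal sketch of the corrected test (using the __GFP_WAIT flag that existed in kernels of this era; the helper name is illustrative): atomic context is indicated by the absence of __GFP_WAIT, not by comparing the gfp mask against the composite GFP_ATOMIC value.

#include <linux/gfp.h>
#include <linux/types.h>

static bool is_atomic_allocation(gfp_t gfp)
{
	/* wrong: (gfp == GFP_ATOMIC) or (gfp & GFP_ATOMIC) -- GFP_ATOMIC is a
	 * composite value, not a single flag bit */
	return !(gfp & __GFP_WAIT);
}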
2013-02-04Merge branch 'for-rmk/broadcast' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stableRussell King
2013-02-04Merge branch 'for-rmk/virt/kvm/core' of git://git.kernel.org/pub/scm/linux/kernel/git/will/linux into devel-stableRussell King
2013-01-23ARM: Section based HYP idmapChristoffer Dall
Add a method (hyp_idmap_setup) to populate a hyp pgd with an identity mapping of the code contained in the .hyp.idmap.text section. Offer a method to drop this identity mapping through hyp_idmap_teardown. Make all the above depend on CONFIG_ARM_VIRT_EXT and CONFIG_ARM_LPAE. Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com> Signed-off-by: Marc Zyngier <marc.zyngier@arm.com> Signed-off-by: Christoffer Dall <c.dall@virtualopensystems.com>
2013-01-23ARM: Add page table and page defines needed by KVMChristoffer Dall
KVM uses the stage-2 page tables and the Hyp page table format, so we define the fields and page protection flags needed by KVM. The nomenclature is this: - page_hyp: PL2 code/data mappings - page_hyp_device: PL2 device mappings (vgic access) - page_s2: Stage-2 code/data page mappings - page_s2_device: Stage-2 device mappings (vgic access) Reviewed-by: Will Deacon <will.deacon@arm.com> Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com> Christoffer Dall <c.dall@virtualopensystems.com>
2013-01-19ARM: 7629/1: mm: Fix missing XN flag for MT_MEMORY_SOSantosh Shilimkar
Commit 8fb54284ba6a (ARM: mm: Add strongly ordered descriptor support) added the XN flag at section level but missed it at PTE level. Fix it by adding L_PTE_XN to the MT_MEMORY_SO PTE descriptor. Reported-by: Richard Woodruff <r-woodruff2@ti.com> Signed-off-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
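A sketch of the PTE protection template after the fix (bit names as in arch/arm; treat the exact combination as an approximation of the mem_types[] entry rather than a verbatim copy): L_PTE_XN is the bit that was missing.

#include <asm/pgtable.h>

/* Approximation of the MT_MEMORY_SO .prot_pte template after the fix. */
static const pteval_t mt_memory_so_prot_pte =
	L_PTE_PRESENT | L_PTE_YOUNG | L_PTE_DIRTY |
	L_PTE_MT_UNCACHED | L_PTE_XN;	/* L_PTE_XN is the added bit */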
2013-01-19ARM: DMA: Fix struct page iterator in dma_cache_maint() to work with sparsememRussell King
Subhash Jadavani reported this partial backtrace: Now consider this call stack from MMC block driver (this is on the ARMv7 based board): [<c001b50c>] (v7_dma_inv_range+0x30/0x48) from [<c0017b8c>] (dma_cache_maint_page+0x1c4/0x24c) [<c0017b8c>] (dma_cache_maint_page+0x1c4/0x24c) from [<c0017c28>] (___dma_page_cpu_to_dev+0x14/0x1c) [<c0017c28>] (___dma_page_cpu_to_dev+0x14/0x1c) from [<c0017ff8>] (dma_map_sg+0x3c/0x114) This is caused by incrementing the struct page pointer, and running off the end of the sparsemem page array. Fix this by incrementing by pfn instead, and convert the pfn to a struct page. Cc: <stable@vger.kernel.org> Suggested-by: James Bottomley <JBottomley@Parallels.com> Tested-by: Subhash Jadavani <subhashj@codeaurora.org> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
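A sketch of the fixed iteration pattern (simplified; the walker below is illustrative, not the real dma_cache_maint_page()): step by pfn and convert back with pfn_to_page() on each iteration, because with sparsemem consecutive struct pages need not be contiguous in the mem_map.

#include <linux/kernel.h>
#include <linux/mm.h>

/* Illustrative walker: apply op() to each page backing a buffer. */
static void for_each_buffer_page(struct page *page, size_t size,
				 void (*op)(struct page *))
{
	unsigned long pfn = page_to_pfn(page);

	while (size) {
		size_t len = min_t(size_t, size, PAGE_SIZE);

		op(pfn_to_page(pfn));	/* not: op(page); page++; */
		pfn++;
		size -= len;
	}
}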
2013-01-10ARM: virt: hide CONFIG_ARM_VIRT_EXT from userWill Deacon
ARM_VIRT_EXT is a property of CPU_V7, but does not adversely affect other CPUs that can be built into the same kernel image (i.e. ARMv6+). This patch defaults ARM_VIRT_EXT to y if CPU_V7, allowing hypervisors such as KVM to make better use of the option and being able to rely on hyp-mode boot support. Signed-off-by: Will Deacon <will.deacon@arm.com>
2013-01-07ARM: 7616/1: cache-l2x0: aurora: Use writel_relaxed instead of writelGregory CLEMENT
The use of writel instead of writel_relaxed led to a deadlock in some situations (SMP on Armada 370, for instance). Using writel_relaxed, as is done in the rest of this driver, fixes this bug. Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com> Tested-by: Thomas Petazzoni <thomas.petazzoni@free-electrons.com> Acked-by: Jason Cooper <jason@lakedaemon.net> Signed-off-by: Russell King <rmk+kernel@arm.linux.org.uk>
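A small sketch of the substitution (the register offset and helper name are illustrative): inside driver code that already orders its own accesses and may hold the cache controller's lock, the relaxed accessor avoids the implicit barrier and outer-cache sync that writel() adds, which is what led to the deadlock.

#include <linux/io.h>

static void aurora_reg_write(void __iomem *base, u32 val, u32 offset)
{
	/* was: writel(val, base + offset); */
	writel_relaxed(val, base + offset);
}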