Diffstat (limited to 'Documentation/arm')
-rw-r--r--  Documentation/arm/00-INDEX                        |  16
-rw-r--r--  Documentation/arm/Booting                         |  42
-rw-r--r--  Documentation/arm/IXP4xx                          |   2
-rw-r--r--  Documentation/arm/Marvell/README                  |  42
-rw-r--r--  Documentation/arm/OMAP/DSS                        |  10
-rw-r--r--  Documentation/arm/OMAP/omap_pm                    |   2
-rw-r--r--  Documentation/arm/Samsung-S3C24XX/GPIO.txt        |  13
-rw-r--r--  Documentation/arm/cluster-pm-race-avoidance.txt   | 498
-rw-r--r--  Documentation/arm/firmware.txt                    |  88
-rw-r--r--  Documentation/arm/kernel_mode_neon.txt            | 121
-rw-r--r--  Documentation/arm/memory.txt                      |   9
-rw-r--r--  Documentation/arm/sti/overview.txt                |  33
-rw-r--r--  Documentation/arm/sti/stih407-overview.txt        |  18
-rw-r--r--  Documentation/arm/sti/stih415-overview.txt        |  12
-rw-r--r--  Documentation/arm/sti/stih416-overview.txt        |  12
-rw-r--r--  Documentation/arm/sunxi/README                    |  52
-rw-r--r--  Documentation/arm/sunxi/clocks.txt                |  56
-rw-r--r--  Documentation/arm/uefi.txt                        |  64
-rw-r--r--  Documentation/arm/vlocks.txt                      | 211
19 files changed, 1262 insertions, 39 deletions
diff --git a/Documentation/arm/00-INDEX b/Documentation/arm/00-INDEX index 36420e116c9..3b08bc2b04c 100644 --- a/Documentation/arm/00-INDEX +++ b/Documentation/arm/00-INDEX @@ -4,6 +4,8 @@ Booting - requirements for booting Interrupts - ARM Interrupt subsystem documentation +IXP4xx + - Intel IXP4xx Network processor. msm - MSM specific documentation Netwinder @@ -24,8 +26,16 @@ SPEAr - ST SPEAr platform Linux Overview VFP/ - Release notes for Linux Kernel Vector Floating Point support code +cluster-pm-race-avoidance.txt + - Algorithm for CPU and Cluster setup/teardown empeg/ - Ltd's Empeg MP3 Car Audio Player +firmware.txt + - Secure firmware registration and calling. +kernel_mode_neon.txt + - How to use NEON instructions in kernel mode +kernel_user_helpers.txt + - Helper functions in kernel space made available for userspace. mem_alignment - alignment abort handler documentation memory.txt @@ -34,3 +44,9 @@ nwfpe/ - NWFPE floating point emulator documentation swp_emulation - SWP/SWPB emulation handler/logging description +tcm.txt + - ARM Tightly Coupled Memory +uefi.txt + - [U]EFI configuration and runtime services documentation +vlocks.txt + - Voting locks, low-level mechanism relying on memory system atomic writes. diff --git a/Documentation/arm/Booting b/Documentation/arm/Booting index 0c1f475fdf3..371814a3671 100644 --- a/Documentation/arm/Booting +++ b/Documentation/arm/Booting @@ -18,7 +18,8 @@ following: 2. Initialise one serial port. 3. Detect the machine type. 4. Setup the kernel tagged list. -5. Call the kernel image. +5. Load initramfs. +6. Call the kernel image. 1. Setup and initialise RAM @@ -120,12 +121,27 @@ tagged list. The boot loader must pass at a minimum the size and location of the system memory, and the root filesystem location. The dtb must be placed in a region of memory where the kernel decompressor will not -overwrite it. The recommended placement is in the first 16KiB of RAM -with the caveat that it may not be located at physical address 0 since -the kernel interprets a value of 0 in r2 to mean neither a tagged list -nor a dtb were passed. +overwrite it, whilst remaining within the region which will be covered +by the kernel's low-memory mapping. -5. Calling the kernel image +A safe location is just above the 128MiB boundary from start of RAM. + +5. Load initramfs. +------------------ + +Existing boot loaders: OPTIONAL +New boot loaders: OPTIONAL + +If an initramfs is in use then, as with the dtb, it must be placed in +a region of memory where the kernel decompressor will not overwrite it +while also with the region which will be covered by the kernel's +low-memory mapping. + +A safe location is just above the device tree blob which itself will +be loaded just above the 128MiB boundary from the start of RAM as +recommended above. + +6. Calling the kernel image --------------------------- Existing boot loaders: MANDATORY @@ -136,11 +152,17 @@ is stored in flash, and is linked correctly to be run from flash, then it is legal for the boot loader to call the zImage in flash directly. -The zImage may also be placed in system RAM (at any location) and -called there. Note that the kernel uses 16K of RAM below the image -to store page tables. The recommended placement is 32KiB into RAM. +The zImage may also be placed in system RAM and called there. The +kernel should be placed in the first 128MiB of RAM. It is recommended +that it is loaded above 32MiB in order to avoid the need to relocate +prior to decompression, which will make the boot process slightly +faster. 
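
As a rough illustration of the placement guidance above, the following
sketch computes candidate load addresses for a boot loader.  It is only a
sketch: DRAM_BASE and the struct/function names are hypothetical example
values, not something defined by this document or the kernel.

	/* Hypothetical helper -- example values only. */
	#define DRAM_BASE	0x80000000UL	/* physical start of system RAM */
	#define MiB		(1024UL * 1024UL)

	struct load_plan {
		unsigned long zimage;	/* zImage load address */
		unsigned long dtb;	/* device tree blob */
		unsigned long initrd;	/* initramfs, if used */
	};

	static struct load_plan pick_addresses(unsigned long dtb_size)
	{
		struct load_plan p;

		p.zimage = DRAM_BASE + 32 * MiB;	/* above 32MiB, within the first 128MiB */
		p.dtb    = DRAM_BASE + 128 * MiB;	/* just above the 128MiB boundary */
		p.initrd = p.dtb + dtb_size;		/* just above the dtb */
		return p;
	}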
+ +When booting a raw (non-zImage) kernel the constraints are tighter. +In this case the kernel must be loaded at an offset into system equal +to TEXT_OFFSET - PAGE_OFFSET. -In either case, the following conditions must be met: +In any case, the following conditions must be met: - Quiesce all DMA capable devices so that memory does not get corrupted by bogus network packets or disk data. This will save diff --git a/Documentation/arm/IXP4xx b/Documentation/arm/IXP4xx index 7b9351f2f55..e48b74de6ac 100644 --- a/Documentation/arm/IXP4xx +++ b/Documentation/arm/IXP4xx @@ -36,7 +36,7 @@ Linux currently supports the following features on the IXP4xx chips: - Timers (watchdog, OS) The following components of the chips are not supported by Linux and -require the use of Intel's proprietary CSR softare: +require the use of Intel's proprietary CSR software: - USB device interface - Network interfaces (HSS, Utopia, NPEs, etc) diff --git a/Documentation/arm/Marvell/README b/Documentation/arm/Marvell/README index 8f08a86e03b..2cce5401e32 100644 --- a/Documentation/arm/Marvell/README +++ b/Documentation/arm/Marvell/README @@ -83,13 +83,24 @@ EBU Armada family 88F6710 88F6707 88F6W11 + Product Brief: http://www.marvell.com/embedded-processors/armada-300/assets/Marvell_ARMADA_370_SoC.pdf + + Armada 375 Flavors: + 88F6720 + Product Brief: http://www.marvell.com/embedded-processors/armada-300/assets/ARMADA_375_SoC-01_product_brief.pdf + + Armada 380/385 Flavors: + 88F6810 + 88F6820 + 88F6828 Armada XP Flavors: MV78230 MV78260 MV78460 + NOTE: not to be confused with the non-SMP 78xx0 SoCs + Product Brief: http://www.marvell.com/embedded-processors/armada-xp/assets/Marvell-ArmadaXP-SoC-product%20brief.pdf - Product Brief: http://www.marvell.com/embedded-processors/armada-xp/assets/Marvell-ArmadaXP-SoC-product%20brief.pdf No public datasheet available. Core: Sheeva ARMv7 compatible @@ -210,6 +221,35 @@ MMP/MMP2 family (communication processor) Linux kernel mach directory: arch/arm/mach-mmp Linux kernel plat directory: arch/arm/plat-pxa +Berlin family (Digital Entertainment) +------------------------------------- + + Flavors: + 88DE3005, Armada 1500-mini + Design name: BG2CD + Core: ARM Cortex-A9, PL310 L2CC + Homepage: http://www.marvell.com/digital-entertainment/armada-1500-mini/ + 88DE3100, Armada 1500 + Design name: BG2 + Core: Marvell PJ4B (ARMv7), Tauros3 L2CC + Homepage: http://www.marvell.com/digital-entertainment/armada-1500/ + Product Brief: http://www.marvell.com/digital-entertainment/armada-1500/assets/Marvell-ARMADA-1500-Product-Brief.pdf + 88DE3114, Armada 1500 Pro + Design name: BG2-Q + Core: Quad Core ARM Cortex-A9, PL310 L2CC + Homepage: http://www.marvell.com/digital-entertainment/armada-1500-pro/ + Product Brief: http://www.marvell.com/digital-entertainment/armada-1500-pro/assets/Marvell_ARMADA_1500_PRO-01_product_brief.pdf + 88DE???? + Design name: BG3 + Core: ARM Cortex-A15, CA15 integrated L2CC + + Homepage: http://www.marvell.com/digital-entertainment/ + Directory: arch/arm/mach-berlin + + Comments: + * This line of SoCs is based on Marvell Sheeva or ARM Cortex CPUs + with Synopsys DesignWare (IRQ, GPIO, Timers, ...) and PXA IP (SDHCI, USB, ETH, ...). 
+ Long-term plans --------------- diff --git a/Documentation/arm/OMAP/DSS b/Documentation/arm/OMAP/DSS index a564ceea9e9..4484e021290 100644 --- a/Documentation/arm/OMAP/DSS +++ b/Documentation/arm/OMAP/DSS @@ -285,7 +285,10 @@ FB0 +-- GFX ---- LCD ---- LCD Misc notes ---------- -OMAP FB allocates the framebuffer memory using the OMAP VRAM allocator. +OMAP FB allocates the framebuffer memory using the standard dma allocator. You +can enable Contiguous Memory Allocator (CONFIG_CMA) to improve the dma +allocator, and if CMA is enabled, you use "cma=" kernel parameter to increase +the global memory area for CMA. Using DSI DPLL to generate pixel clock it is possible produce the pixel clock of 86.5MHz (max possible), and with that you get 1280x1024@57 output from DVI. @@ -301,11 +304,6 @@ framebuffer parameters. Kernel boot arguments --------------------- -vram=<size>[,<physaddr>] - - Amount of total VRAM to preallocate and optionally a physical start - memory address. For example, "10M". omapfb allocates memory for - framebuffers from VRAM. - omapfb.mode=<display>:<mode>[,...] - Default video mode for specified displays. For example, "dvi:800x400MR-24@60". See drivers/video/modedb.c. diff --git a/Documentation/arm/OMAP/omap_pm b/Documentation/arm/OMAP/omap_pm index 9012bb03909..4ae915a9f89 100644 --- a/Documentation/arm/OMAP/omap_pm +++ b/Documentation/arm/OMAP/omap_pm @@ -78,7 +78,7 @@ to NULL. Drivers should use the following idiom: The most common usage of these functions will probably be to specify the maximum time from when an interrupt occurs, to when the device becomes accessible. To accomplish this, driver writers should use the -set_max_mpu_wakeup_lat() function to to constrain the MPU wakeup +set_max_mpu_wakeup_lat() function to constrain the MPU wakeup latency, and the set_max_dev_wakeup_lat() function to constrain the device wakeup latency (from clk_enable() to accessibility). For example, diff --git a/Documentation/arm/Samsung-S3C24XX/GPIO.txt b/Documentation/arm/Samsung-S3C24XX/GPIO.txt index 8b46c79679c..0ebd7e2244d 100644 --- a/Documentation/arm/Samsung-S3C24XX/GPIO.txt +++ b/Documentation/arm/Samsung-S3C24XX/GPIO.txt @@ -85,21 +85,10 @@ between the calls. Headers ------- - See arch/arm/mach-s3c2410/include/mach/regs-gpio.h for the list + See arch/arm/mach-s3c24xx/include/mach/regs-gpio.h for the list of GPIO pins, and the configuration values for them. This is included by using #include <mach/regs-gpio.h> - The GPIO management functions are defined in the hardware - header arch/arm/mach-s3c2410/include/mach/hardware.h which can be - included by #include <mach/hardware.h> - - A useful amount of documentation can be found in the hardware - header on how the GPIO functions (and others) work. - - Whilst a number of these functions do make some checks on what - is passed to them, for speed of use, they may not always ensure - that the user supplied data to them is correct. - PIN Numbers ----------- diff --git a/Documentation/arm/cluster-pm-race-avoidance.txt b/Documentation/arm/cluster-pm-race-avoidance.txt new file mode 100644 index 00000000000..750b6fc24af --- /dev/null +++ b/Documentation/arm/cluster-pm-race-avoidance.txt @@ -0,0 +1,498 @@ +Cluster-wide Power-up/power-down race avoidance algorithm +========================================================= + +This file documents the algorithm which is used to coordinate CPU and +cluster setup and teardown operations and to manage hardware coherency +controls safely. 
+ +The section "Rationale" explains what the algorithm is for and why it is +needed. "Basic model" explains general concepts using a simplified view +of the system. The other sections explain the actual details of the +algorithm in use. + + +Rationale +--------- + +In a system containing multiple CPUs, it is desirable to have the +ability to turn off individual CPUs when the system is idle, reducing +power consumption and thermal dissipation. + +In a system containing multiple clusters of CPUs, it is also desirable +to have the ability to turn off entire clusters. + +Turning entire clusters off and on is a risky business, because it +involves performing potentially destructive operations affecting a group +of independently running CPUs, while the OS continues to run. This +means that we need some coordination in order to ensure that critical +cluster-level operations are only performed when it is truly safe to do +so. + +Simple locking may not be sufficient to solve this problem, because +mechanisms like Linux spinlocks may rely on coherency mechanisms which +are not immediately enabled when a cluster powers up. Since enabling or +disabling those mechanisms may itself be a non-atomic operation (such as +writing some hardware registers and invalidating large caches), other +methods of coordination are required in order to guarantee safe +power-down and power-up at the cluster level. + +The mechanism presented in this document describes a coherent memory +based protocol for performing the needed coordination. It aims to be as +lightweight as possible, while providing the required safety properties. + + +Basic model +----------- + +Each cluster and CPU is assigned a state, as follows: + + DOWN + COMING_UP + UP + GOING_DOWN + + +---------> UP ----------+ + | v + + COMING_UP GOING_DOWN + + ^ | + +--------- DOWN <--------+ + + +DOWN: The CPU or cluster is not coherent, and is either powered off or + suspended, or is ready to be powered off or suspended. + +COMING_UP: The CPU or cluster has committed to moving to the UP state. + It may be part way through the process of initialisation and + enabling coherency. + +UP: The CPU or cluster is active and coherent at the hardware + level. A CPU in this state is not necessarily being used + actively by the kernel. + +GOING_DOWN: The CPU or cluster has committed to moving to the DOWN + state. It may be part way through the process of teardown and + coherency exit. + + +Each CPU has one of these states assigned to it at any point in time. +The CPU states are described in the "CPU state" section, below. + +Each cluster is also assigned a state, but it is necessary to split the +state value into two parts (the "cluster" state and "inbound" state) and +to introduce additional states in order to avoid races between different +CPUs in the cluster simultaneously modifying the state. The cluster- +level states are described in the "Cluster state" section. + +To help distinguish the CPU states from cluster states in this +discussion, the state names are given a CPU_ prefix for the CPU states, +and a CLUSTER_ or INBOUND_ prefix for the cluster states. + + +CPU state +--------- + +In this algorithm, each individual core in a multi-core processor is +referred to as a "CPU". CPUs are assumed to be single-threaded: +therefore, a CPU can only be doing one thing at a single point in time. + +This means that CPUs fit the basic model closely. 
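
Before the formal definitions that follow, a minimal sketch of how such
shared state might be laid out in memory may help.  This is purely
illustrative: the kernel's real layout (in arch/arm/common/mcpm_entry.c and
mcpm_head.S) is padded and aligned differently for non-cached access, and
the struct name and array size below are hypothetical.

	/* Illustrative only -- not the kernel's actual definitions. */
	enum cpu_state     { CPU_DOWN, CPU_COMING_UP, CPU_UP, CPU_GOING_DOWN };
	enum cluster_state { CLUSTER_DOWN, CLUSTER_UP, CLUSTER_GOING_DOWN };
	enum inbound_state { INBOUND_NOT_COMING_UP, INBOUND_COMING_UP };

	#define MAX_CPUS_PER_CLUSTER	4	/* example value */

	struct cluster_sync {
		enum cpu_state     cpu[MAX_CPUS_PER_CLUSTER];	/* one per CPU */
		enum cluster_state cluster;	/* written by the outbound side */
		enum inbound_state inbound;	/* written by the inbound side */
	};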
+ +The algorithm defines the following states for each CPU in the system: + + CPU_DOWN + CPU_COMING_UP + CPU_UP + CPU_GOING_DOWN + + cluster setup and + CPU setup complete policy decision + +-----------> CPU_UP ------------+ + | v + + CPU_COMING_UP CPU_GOING_DOWN + + ^ | + +----------- CPU_DOWN <----------+ + policy decision CPU teardown complete + or hardware event + + +The definitions of the four states correspond closely to the states of +the basic model. + +Transitions between states occur as follows. + +A trigger event (spontaneous) means that the CPU can transition to the +next state as a result of making local progress only, with no +requirement for any external event to happen. + + +CPU_DOWN: + + A CPU reaches the CPU_DOWN state when it is ready for + power-down. On reaching this state, the CPU will typically + power itself down or suspend itself, via a WFI instruction or a + firmware call. + + Next state: CPU_COMING_UP + Conditions: none + + Trigger events: + + a) an explicit hardware power-up operation, resulting + from a policy decision on another CPU; + + b) a hardware event, such as an interrupt. + + +CPU_COMING_UP: + + A CPU cannot start participating in hardware coherency until the + cluster is set up and coherent. If the cluster is not ready, + then the CPU will wait in the CPU_COMING_UP state until the + cluster has been set up. + + Next state: CPU_UP + Conditions: The CPU's parent cluster must be in CLUSTER_UP. + Trigger events: Transition of the parent cluster to CLUSTER_UP. + + Refer to the "Cluster state" section for a description of the + CLUSTER_UP state. + + +CPU_UP: + When a CPU reaches the CPU_UP state, it is safe for the CPU to + start participating in local coherency. + + This is done by jumping to the kernel's CPU resume code. + + Note that the definition of this state is slightly different + from the basic model definition: CPU_UP does not mean that the + CPU is coherent yet, but it does mean that it is safe to resume + the kernel. The kernel handles the rest of the resume + procedure, so the remaining steps are not visible as part of the + race avoidance algorithm. + + The CPU remains in this state until an explicit policy decision + is made to shut down or suspend the CPU. + + Next state: CPU_GOING_DOWN + Conditions: none + Trigger events: explicit policy decision + + +CPU_GOING_DOWN: + + While in this state, the CPU exits coherency, including any + operations required to achieve this (such as cleaning data + caches). + + Next state: CPU_DOWN + Conditions: local CPU teardown complete + Trigger events: (spontaneous) + + +Cluster state +------------- + +A cluster is a group of connected CPUs with some common resources. +Because a cluster contains multiple CPUs, it can be doing multiple +things at the same time. This has some implications. In particular, a +CPU can start up while another CPU is tearing the cluster down. + +In this discussion, the "outbound side" is the view of the cluster state +as seen by a CPU tearing the cluster down. The "inbound side" is the +view of the cluster state as seen by a CPU setting the CPU up. + +In order to enable safe coordination in such situations, it is important +that a CPU which is setting up the cluster can advertise its state +independently of the CPU which is tearing down the cluster. 
For this +reason, the cluster state is split into two parts: + + "cluster" state: The global state of the cluster; or the state + on the outbound side: + + CLUSTER_DOWN + CLUSTER_UP + CLUSTER_GOING_DOWN + + "inbound" state: The state of the cluster on the inbound side. + + INBOUND_NOT_COMING_UP + INBOUND_COMING_UP + + + The different pairings of these states results in six possible + states for the cluster as a whole: + + CLUSTER_UP + +==========> INBOUND_NOT_COMING_UP -------------+ + # | + | + CLUSTER_UP <----+ | + INBOUND_COMING_UP | v + + ^ CLUSTER_GOING_DOWN CLUSTER_GOING_DOWN + # INBOUND_COMING_UP <=== INBOUND_NOT_COMING_UP + + CLUSTER_DOWN | | + INBOUND_COMING_UP <----+ | + | + ^ | + +=========== CLUSTER_DOWN <------------+ + INBOUND_NOT_COMING_UP + + Transitions -----> can only be made by the outbound CPU, and + only involve changes to the "cluster" state. + + Transitions ===##> can only be made by the inbound CPU, and only + involve changes to the "inbound" state, except where there is no + further transition possible on the outbound side (i.e., the + outbound CPU has put the cluster into the CLUSTER_DOWN state). + + The race avoidance algorithm does not provide a way to determine + which exact CPUs within the cluster play these roles. This must + be decided in advance by some other means. Refer to the section + "Last man and first man selection" for more explanation. + + + CLUSTER_DOWN/INBOUND_NOT_COMING_UP is the only state where the + cluster can actually be powered down. + + The parallelism of the inbound and outbound CPUs is observed by + the existence of two different paths from CLUSTER_GOING_DOWN/ + INBOUND_NOT_COMING_UP (corresponding to GOING_DOWN in the basic + model) to CLUSTER_DOWN/INBOUND_COMING_UP (corresponding to + COMING_UP in the basic model). The second path avoids cluster + teardown completely. + + CLUSTER_UP/INBOUND_COMING_UP is equivalent to UP in the basic + model. The final transition to CLUSTER_UP/INBOUND_NOT_COMING_UP + is trivial and merely resets the state machine ready for the + next cycle. + + Details of the allowable transitions follow. + + The next state in each case is notated + + <cluster state>/<inbound state> (<transitioner>) + + where the <transitioner> is the side on which the transition + can occur; either the inbound or the outbound side. + + +CLUSTER_DOWN/INBOUND_NOT_COMING_UP: + + Next state: CLUSTER_DOWN/INBOUND_COMING_UP (inbound) + Conditions: none + Trigger events: + + a) an explicit hardware power-up operation, resulting + from a policy decision on another CPU; + + b) a hardware event, such as an interrupt. + + +CLUSTER_DOWN/INBOUND_COMING_UP: + + In this state, an inbound CPU sets up the cluster, including + enabling of hardware coherency at the cluster level and any + other operations (such as cache invalidation) which are required + in order to achieve this. + + The purpose of this state is to do sufficient cluster-level + setup to enable other CPUs in the cluster to enter coherency + safely. + + Next state: CLUSTER_UP/INBOUND_COMING_UP (inbound) + Conditions: cluster-level setup and hardware coherency complete + Trigger events: (spontaneous) + + +CLUSTER_UP/INBOUND_COMING_UP: + + Cluster-level setup is complete and hardware coherency is + enabled for the cluster. Other CPUs in the cluster can safely + enter coherency. + + This is a transient state, leading immediately to + CLUSTER_UP/INBOUND_NOT_COMING_UP. All other CPUs on the cluster + should consider treat these two states as equivalent. 
+ + Next state: CLUSTER_UP/INBOUND_NOT_COMING_UP (inbound) + Conditions: none + Trigger events: (spontaneous) + + +CLUSTER_UP/INBOUND_NOT_COMING_UP: + + Cluster-level setup is complete and hardware coherency is + enabled for the cluster. Other CPUs in the cluster can safely + enter coherency. + + The cluster will remain in this state until a policy decision is + made to power the cluster down. + + Next state: CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP (outbound) + Conditions: none + Trigger events: policy decision to power down the cluster + + +CLUSTER_GOING_DOWN/INBOUND_NOT_COMING_UP: + + An outbound CPU is tearing the cluster down. The selected CPU + must wait in this state until all CPUs in the cluster are in the + CPU_DOWN state. + + When all CPUs are in the CPU_DOWN state, the cluster can be torn + down, for example by cleaning data caches and exiting + cluster-level coherency. + + To avoid wasteful unnecessary teardown operations, the outbound + should check the inbound cluster state for asynchronous + transitions to INBOUND_COMING_UP. Alternatively, individual + CPUs can be checked for entry into CPU_COMING_UP or CPU_UP. + + + Next states: + + CLUSTER_DOWN/INBOUND_NOT_COMING_UP (outbound) + Conditions: cluster torn down and ready to power off + Trigger events: (spontaneous) + + CLUSTER_GOING_DOWN/INBOUND_COMING_UP (inbound) + Conditions: none + Trigger events: + + a) an explicit hardware power-up operation, + resulting from a policy decision on another + CPU; + + b) a hardware event, such as an interrupt. + + +CLUSTER_GOING_DOWN/INBOUND_COMING_UP: + + The cluster is (or was) being torn down, but another CPU has + come online in the meantime and is trying to set up the cluster + again. + + If the outbound CPU observes this state, it has two choices: + + a) back out of teardown, restoring the cluster to the + CLUSTER_UP state; + + b) finish tearing the cluster down and put the cluster + in the CLUSTER_DOWN state; the inbound CPU will + set up the cluster again from there. + + Choice (a) permits the removal of some latency by avoiding + unnecessary teardown and setup operations in situations where + the cluster is not really going to be powered down. + + + Next states: + + CLUSTER_UP/INBOUND_COMING_UP (outbound) + Conditions: cluster-level setup and hardware + coherency complete + Trigger events: (spontaneous) + + CLUSTER_DOWN/INBOUND_COMING_UP (outbound) + Conditions: cluster torn down and ready to power off + Trigger events: (spontaneous) + + +Last man and First man selection +-------------------------------- + +The CPU which performs cluster tear-down operations on the outbound side +is commonly referred to as the "last man". + +The CPU which performs cluster setup on the inbound side is commonly +referred to as the "first man". + +The race avoidance algorithm documented above does not provide a +mechanism to choose which CPUs should play these roles. + + +Last man: + +When shutting down the cluster, all the CPUs involved are initially +executing Linux and hence coherent. Therefore, ordinary spinlocks can +be used to select a last man safely, before the CPUs become +non-coherent. + + +First man: + +Because CPUs may power up asynchronously in response to external wake-up +events, a dynamic mechanism is needed to make sure that only one CPU +attempts to play the first man role and do the cluster-level +initialisation: any other CPUs must wait for this to complete before +proceeding. 
+ +Cluster-level initialisation may involve actions such as configuring +coherency controls in the bus fabric. + +The current implementation in mcpm_head.S uses a separate mutual exclusion +mechanism to do this arbitration. This mechanism is documented in +detail in vlocks.txt. + + +Features and Limitations +------------------------ + +Implementation: + + The current ARM-based implementation is split between + arch/arm/common/mcpm_head.S (low-level inbound CPU operations) and + arch/arm/common/mcpm_entry.c (everything else): + + __mcpm_cpu_going_down() signals the transition of a CPU to the + CPU_GOING_DOWN state. + + __mcpm_cpu_down() signals the transition of a CPU to the CPU_DOWN + state. + + A CPU transitions to CPU_COMING_UP and then to CPU_UP via the + low-level power-up code in mcpm_head.S. This could + involve CPU-specific setup code, but in the current + implementation it does not. + + __mcpm_outbound_enter_critical() and __mcpm_outbound_leave_critical() + handle transitions from CLUSTER_UP to CLUSTER_GOING_DOWN + and from there to CLUSTER_DOWN or back to CLUSTER_UP (in + the case of an aborted cluster power-down). + + These functions are more complex than the __mcpm_cpu_*() + functions due to the extra inter-CPU coordination which + is needed for safe transitions at the cluster level. + + A cluster transitions from CLUSTER_DOWN back to CLUSTER_UP via + the low-level power-up code in mcpm_head.S. This + typically involves platform-specific setup code, + provided by the platform-specific power_up_setup + function registered via mcpm_sync_init. + +Deep topologies: + + As currently described and implemented, the algorithm does not + support CPU topologies involving more than two levels (i.e., + clusters of clusters are not supported). The algorithm could be + extended by replicating the cluster-level states for the + additional topological levels, and modifying the transition + rules for the intermediate (non-outermost) cluster levels. + + +Colophon +-------- + +Originally created and documented by Dave Martin for Linaro Limited, in +collaboration with Nicolas Pitre and Achin Gupta. + +Copyright (C) 2012-2013 Linaro Limited +Distributed under the terms of Version 2 of the GNU General Public +License, as defined in linux/COPYING. diff --git a/Documentation/arm/firmware.txt b/Documentation/arm/firmware.txt new file mode 100644 index 00000000000..c2e468fe7b0 --- /dev/null +++ b/Documentation/arm/firmware.txt @@ -0,0 +1,88 @@ +Interface for registering and calling firmware-specific operations for ARM. +---- +Written by Tomasz Figa <t.figa@samsung.com> + +Some boards are running with secure firmware running in TrustZone secure +world, which changes the way some things have to be initialized. This makes +a need to provide an interface for such platforms to specify available firmware +operations and call them when needed. + +Firmware operations can be specified using struct firmware_ops + + struct firmware_ops { + /* + * Enters CPU idle mode + */ + int (*do_idle)(void); + /* + * Sets boot address of specified physical CPU + */ + int (*set_cpu_boot_addr)(int cpu, unsigned long boot_addr); + /* + * Boots specified physical CPU + */ + int (*cpu_boot)(int cpu); + /* + * Initializes L2 cache + */ + int (*l2x0_init)(void); + }; + +and then registered with register_firmware_ops function + + void register_firmware_ops(const struct firmware_ops *ops) + +the ops pointer must be non-NULL. 
+ +There is a default, empty set of operations provided, so there is no need to +set anything if platform does not require firmware operations. + +To call a firmware operation, a helper macro is provided + + #define call_firmware_op(op, ...) \ + ((firmware_ops->op) ? firmware_ops->op(__VA_ARGS__) : (-ENOSYS)) + +the macro checks if the operation is provided and calls it or otherwise returns +-ENOSYS to signal that given operation is not available (for example, to allow +fallback to legacy operation). + +Example of registering firmware operations: + + /* board file */ + + static int platformX_do_idle(void) + { + /* tell platformX firmware to enter idle */ + return 0; + } + + static int platformX_cpu_boot(int i) + { + /* tell platformX firmware to boot CPU i */ + return 0; + } + + static const struct firmware_ops platformX_firmware_ops = { + .do_idle = exynos_do_idle, + .cpu_boot = exynos_cpu_boot, + /* other operations not available on platformX */ + }; + + /* init_early callback of machine descriptor */ + static void __init board_init_early(void) + { + register_firmware_ops(&platformX_firmware_ops); + } + +Example of using a firmware operation: + + /* some platform code, e.g. SMP initialization */ + + __raw_writel(virt_to_phys(exynos4_secondary_startup), + CPU1_BOOT_REG); + + /* Call Exynos specific smc call */ + if (call_firmware_op(cpu_boot, cpu) == -ENOSYS) + cpu_boot_legacy(...); /* Try legacy way */ + + gic_raise_softirq(cpumask_of(cpu), 1); diff --git a/Documentation/arm/kernel_mode_neon.txt b/Documentation/arm/kernel_mode_neon.txt new file mode 100644 index 00000000000..525452726d3 --- /dev/null +++ b/Documentation/arm/kernel_mode_neon.txt @@ -0,0 +1,121 @@ +Kernel mode NEON +================ + +TL;DR summary +------------- +* Use only NEON instructions, or VFP instructions that don't rely on support + code +* Isolate your NEON code in a separate compilation unit, and compile it with + '-mfpu=neon -mfloat-abi=softfp' +* Put kernel_neon_begin() and kernel_neon_end() calls around the calls into your + NEON code +* Don't sleep in your NEON code, and be aware that it will be executed with + preemption disabled + + +Introduction +------------ +It is possible to use NEON instructions (and in some cases, VFP instructions) in +code that runs in kernel mode. However, for performance reasons, the NEON/VFP +register file is not preserved and restored at every context switch or taken +exception like the normal register file is, so some manual intervention is +required. Furthermore, special care is required for code that may sleep [i.e., +may call schedule()], as NEON or VFP instructions will be executed in a +non-preemptible section for reasons outlined below. + + +Lazy preserve and restore +------------------------- +The NEON/VFP register file is managed using lazy preserve (on UP systems) and +lazy restore (on both SMP and UP systems). This means that the register file is +kept 'live', and is only preserved and restored when multiple tasks are +contending for the NEON/VFP unit (or, in the SMP case, when a task migrates to +another core). Lazy restore is implemented by disabling the NEON/VFP unit after +every context switch, resulting in a trap when subsequently a NEON/VFP +instruction is issued, allowing the kernel to step in and perform the restore if +necessary. 
+ +Any use of the NEON/VFP unit in kernel mode should not interfere with this, so +it is required to do an 'eager' preserve of the NEON/VFP register file, and +enable the NEON/VFP unit explicitly so no exceptions are generated on first +subsequent use. This is handled by the function kernel_neon_begin(), which +should be called before any kernel mode NEON or VFP instructions are issued. +Likewise, the NEON/VFP unit should be disabled again after use to make sure user +mode will hit the lazy restore trap upon next use. This is handled by the +function kernel_neon_end(). + + +Interruptions in kernel mode +---------------------------- +For reasons of performance and simplicity, it was decided that there shall be no +preserve/restore mechanism for the kernel mode NEON/VFP register contents. This +implies that interruptions of a kernel mode NEON section can only be allowed if +they are guaranteed not to touch the NEON/VFP registers. For this reason, the +following rules and restrictions apply in the kernel: +* NEON/VFP code is not allowed in interrupt context; +* NEON/VFP code is not allowed to sleep; +* NEON/VFP code is executed with preemption disabled. + +If latency is a concern, it is possible to put back to back calls to +kernel_neon_end() and kernel_neon_begin() in places in your code where none of +the NEON registers are live. (Additional calls to kernel_neon_begin() should be +reasonably cheap if no context switch occurred in the meantime) + + +VFP and support code +-------------------- +Earlier versions of VFP (prior to version 3) rely on software support for things +like IEEE-754 compliant underflow handling etc. When the VFP unit needs such +software assistance, it signals the kernel by raising an undefined instruction +exception. The kernel responds by inspecting the VFP control registers and the +current instruction and arguments, and emulates the instruction in software. + +Such software assistance is currently not implemented for VFP instructions +executed in kernel mode. If such a condition is encountered, the kernel will +fail and generate an OOPS. + + +Separating NEON code from ordinary code +--------------------------------------- +The compiler is not aware of the special significance of kernel_neon_begin() and +kernel_neon_end(), i.e., that it is only allowed to issue NEON/VFP instructions +between calls to these respective functions. Furthermore, GCC may generate NEON +instructions of its own at -O3 level if -mfpu=neon is selected, and even if the +kernel is currently compiled at -O2, future changes may result in NEON/VFP +instructions appearing in unexpected places if no special care is taken. + +Therefore, the recommended and only supported way of using NEON/VFP in the +kernel is by adhering to the following rules: +* isolate the NEON code in a separate compilation unit and compile it with + '-mfpu=neon -mfloat-abi=softfp'; +* issue the calls to kernel_neon_begin(), kernel_neon_end() as well as the calls + into the unit containing the NEON code from a compilation unit which is *not* + built with the GCC flag '-mfpu=neon' set. + +As the kernel is compiled with '-msoft-float', the above will guarantee that +both NEON and VFP instructions will only ever appear in designated compilation +units at any optimization level. + + +NEON assembler +-------------- +NEON assembler is supported with no additional caveats as long as the rules +above are followed. 
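
As a minimal sketch of the calling pattern described above; the helper
my_neon_xor_blocks() is hypothetical and would live in a separate
compilation unit built with '-mfpu=neon -mfloat-abi=softfp':

	/* this file is built *without* -mfpu=neon */
	#include <linux/kernel.h>
	#include <asm/neon.h>		/* kernel_neon_begin()/kernel_neon_end() */

	/* implemented elsewhere, in the NEON-enabled unit */
	void my_neon_xor_blocks(void *dst, const void *src, unsigned long len);

	static void xor_with_neon(void *dst, const void *src, unsigned long len)
	{
		kernel_neon_begin();			/* eager preserve; preemption disabled */
		my_neon_xor_blocks(dst, src, len);	/* must not sleep */
		kernel_neon_end();			/* user space lazy-restores on next use */
	}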
+ + +NEON code generated by GCC +-------------------------- +The GCC option -ftree-vectorize (implied by -O3) tries to exploit implicit +parallelism, and generates NEON code from ordinary C source code. This is fully +supported as long as the rules above are followed. + + +NEON intrinsics +--------------- +NEON intrinsics are also supported. However, as code using NEON intrinsics +relies on the GCC header <arm_neon.h>, (which #includes <stdint.h>), you should +observe the following in addition to the rules above: +* Compile the unit containing the NEON intrinsics with '-ffreestanding' so GCC + uses its builtin version of <stdint.h> (this is a C99 header which the kernel + does not supply); +* Include <arm_neon.h> last, or at least after <linux/types.h> diff --git a/Documentation/arm/memory.txt b/Documentation/arm/memory.txt index 4bfb9ffbdbc..38dc06d0a79 100644 --- a/Documentation/arm/memory.txt +++ b/Documentation/arm/memory.txt @@ -41,16 +41,9 @@ fffe8000 fffeffff DTCM mapping area for platforms with fffe0000 fffe7fff ITCM mapping area for platforms with ITCM mounted inside the CPU. -fff00000 fffdffff Fixmap mapping region. Addresses provided +ffc00000 ffdfffff Fixmap mapping region. Addresses provided by fix_to_virt() will be located here. -ffc00000 ffefffff DMA memory mapping region. Memory returned - by the dma_alloc_xxx functions will be - dynamically mapped here. - -ff000000 ffbfffff Reserved for future expansion of DMA - mapping region. - fee00000 feffffff Mapping of PCI I/O space. This is a static mapping within the vmalloc space. diff --git a/Documentation/arm/sti/overview.txt b/Documentation/arm/sti/overview.txt new file mode 100644 index 00000000000..1a4e93d6027 --- /dev/null +++ b/Documentation/arm/sti/overview.txt @@ -0,0 +1,33 @@ + STi ARM Linux Overview + ========================== + +Introduction +------------ + + The ST Microelectronics Multimedia and Application Processors range of + CortexA9 System-on-Chip are supported by the 'STi' platform of + ARM Linux. Currently STiH415, STiH416 SOCs are supported with both + B2000 and B2020 Reference boards. + + + configuration + ------------- + + A generic configuration is provided for both STiH415/416, and can be used as the + default by + make stih41x_defconfig + + Layout + ------ + All the files for multiple machine families (STiH415, STiH416, and STiG125) + are located in the platform code contained in arch/arm/mach-sti + + There is a generic board board-dt.c in the mach folder which support + Flattened Device Tree, which means, It works with any compatible board with + Device Trees. + + + Document Author + --------------- + + Srinivas Kandagatla <srinivas.kandagatla@st.com>, (c) 2013 ST Microelectronics diff --git a/Documentation/arm/sti/stih407-overview.txt b/Documentation/arm/sti/stih407-overview.txt new file mode 100644 index 00000000000..3343f32f58b --- /dev/null +++ b/Documentation/arm/sti/stih407-overview.txt @@ -0,0 +1,18 @@ + STiH407 Overview + ================ + +Introduction +------------ + + The STiH407 is the new generation of SoC for Multi-HD, AVC set-top boxes + and server/connected client application for satellite, cable, terrestrial + and IP-STB markets. 
+ + Features + - ARM Cortex-A9 1.5 GHz dual core CPU (28nm) + - SATA2, USB 3.0, PCIe, Gbit Ethernet + + Document Author + --------------- + + Maxime Coquelin <maxime.coquelin@st.com>, (c) 2014 ST Microelectronics diff --git a/Documentation/arm/sti/stih415-overview.txt b/Documentation/arm/sti/stih415-overview.txt new file mode 100644 index 00000000000..1383e33f265 --- /dev/null +++ b/Documentation/arm/sti/stih415-overview.txt @@ -0,0 +1,12 @@ + STiH415 Overview + ================ + +Introduction +------------ + + The STiH415 is the next generation of HD, AVC set-top box processors + for satellite, cable, terrestrial and IP-STB markets. + + Features + - ARM Cortex-A9 1.0 GHz, dual-core CPU + - SATA2x2,USB 2.0x3, PCIe, Gbit Ethernet MACx2 diff --git a/Documentation/arm/sti/stih416-overview.txt b/Documentation/arm/sti/stih416-overview.txt new file mode 100644 index 00000000000..558444c201c --- /dev/null +++ b/Documentation/arm/sti/stih416-overview.txt @@ -0,0 +1,12 @@ + STiH416 Overview + ================ + +Introduction +------------ + + The STiH416 is the next generation of HD, AVC set-top box processors + for satellite, cable, terrestrial and IP-STB markets. + + Features + - ARM Cortex-A9 1.2 GHz dual core CPU + - SATA2x2,USB 2.0x3, PCIe, Gbit Ethernet MACx2 diff --git a/Documentation/arm/sunxi/README b/Documentation/arm/sunxi/README new file mode 100644 index 00000000000..7945238453e --- /dev/null +++ b/Documentation/arm/sunxi/README @@ -0,0 +1,52 @@ +ARM Allwinner SoCs +================== + +This document lists all the ARM Allwinner SoCs that are currently +supported in mainline by the Linux kernel. This document will also +provide links to documentation and/or datasheet for these SoCs. + +SunXi family +------------ + Linux kernel mach directory: arch/arm/mach-sunxi + + Flavors: + * ARM926 based SoCs + - Allwinner F20 (sun3i) + + Not Supported + + * ARM Cortex-A8 based SoCs + - Allwinner A10 (sun4i) + + Datasheet + http://dl.linux-sunxi.org/A10/A10%20Datasheet%20-%20v1.21%20%282012-04-06%29.pdf + + User Manual + http://dl.linux-sunxi.org/A10/A10%20User%20Manual%20-%20v1.20%20%282012-04-09%2c%20DECRYPTED%29.pdf + + - Allwinner A10s (sun5i) + + Datasheet + http://dl.linux-sunxi.org/A10s/A10s%20Datasheet%20-%20v1.20%20%282012-03-27%29.pdf + + - Allwinner A13 (sun5i) + + Datasheet + http://dl.linux-sunxi.org/A13/A13%20Datasheet%20-%20v1.12%20%282012-03-29%29.pdf + + User Manual + http://dl.linux-sunxi.org/A13/A13%20User%20Manual%20-%20v1.2%20%282013-01-08%29.pdf + + * Dual ARM Cortex-A7 based SoCs + - Allwinner A20 (sun7i) + + User Manual + http://dl.linux-sunxi.org/A20/A20%20User%20Manual%202013-03-22.pdf + + - Allwinner A23 + + Not Supported + + * Quad ARM Cortex-A7 based SoCs + - Allwinner A31 (sun6i) + + Datasheet + http://dl.linux-sunxi.org/A31/A31%20Datasheet%20-%20v1.00%20(2012-12-24).pdf + + - Allwinner A31s (sun6i) + + Not Supported + + * Quad ARM Cortex-A15, Quad ARM Cortex-A7 based SoCs + - Allwinner A80 + + Not Supported
\ No newline at end of file diff --git a/Documentation/arm/sunxi/clocks.txt b/Documentation/arm/sunxi/clocks.txt new file mode 100644 index 00000000000..e09a88aa313 --- /dev/null +++ b/Documentation/arm/sunxi/clocks.txt @@ -0,0 +1,56 @@ +Frequently asked questions about the sunxi clock system +======================================================= + +This document contains useful bits of information that people tend to ask +about the sunxi clock system, as well as accompanying ASCII art when adequate. + +Q: Why is the main 24MHz oscillator gatable? Wouldn't that break the + system? + +A: The 24MHz oscillator allows gating to save power. Indeed, if gated + carelessly the system would stop functioning, but with the right + steps, one can gate it and keep the system running. Consider this + simplified suspend example: + + While the system is operational, you would see something like + + 24MHz 32kHz + | + PLL1 + \ + \_ CPU Mux + | + [CPU] + + When you are about to suspend, you switch the CPU Mux to the 32kHz + oscillator: + + 24Mhz 32kHz + | | + PLL1 | + / + CPU Mux _/ + | + [CPU] + + Finally you can gate the main oscillator + + 32kHz + | + | + / + CPU Mux _/ + | + [CPU] + +Q: Were can I learn more about the sunxi clocks? + +A: The linux-sunxi wiki contains a page documenting the clock registers, + you can find it at + + http://linux-sunxi.org/A10/CCM + + The authoritative source for information at this time is the ccmu driver + released by Allwinner, you can find it at + + https://github.com/linux-sunxi/linux-sunxi/tree/sunxi-3.0/arch/arm/mach-sun4i/clock/ccmu diff --git a/Documentation/arm/uefi.txt b/Documentation/arm/uefi.txt new file mode 100644 index 00000000000..d60030a1b90 --- /dev/null +++ b/Documentation/arm/uefi.txt @@ -0,0 +1,64 @@ +UEFI, the Unified Extensible Firmware Interface, is a specification +governing the behaviours of compatible firmware interfaces. It is +maintained by the UEFI Forum - http://www.uefi.org/. + +UEFI is an evolution of its predecessor 'EFI', so the terms EFI and +UEFI are used somewhat interchangeably in this document and associated +source code. As a rule, anything new uses 'UEFI', whereas 'EFI' refers +to legacy code or specifications. + +UEFI support in Linux +===================== +Booting on a platform with firmware compliant with the UEFI specification +makes it possible for the kernel to support additional features: +- UEFI Runtime Services +- Retrieving various configuration information through the standardised + interface of UEFI configuration tables. (ACPI, SMBIOS, ...) + +For actually enabling [U]EFI support, enable: +- CONFIG_EFI=y +- CONFIG_EFI_VARS=y or m + +The implementation depends on receiving information about the UEFI environment +in a Flattened Device Tree (FDT) - so is only available with CONFIG_OF. + +UEFI stub +========= +The "stub" is a feature that extends the Image/zImage into a valid UEFI +PE/COFF executable, including a loader application that makes it possible to +load the kernel directly from the UEFI shell, boot menu, or one of the +lightweight bootloaders like Gummiboot or rEFInd. + +The kernel image built with stub support remains a valid kernel image for +booting in non-UEFI environments. + +UEFI kernel support on ARM +========================== +UEFI kernel support on the ARM architectures (arm and arm64) is only available +when boot is performed through the stub. + +When booting in UEFI mode, the stub deletes any memory nodes from a provided DT. +Instead, the kernel reads the UEFI memory map. 
+ +The stub populates the FDT /chosen node with (and the kernel scans for) the +following parameters: +________________________________________________________________________________ +Name | Size | Description +================================================================================ +linux,uefi-system-table | 64-bit | Physical address of the UEFI System Table. +-------------------------------------------------------------------------------- +linux,uefi-mmap-start | 64-bit | Physical address of the UEFI memory map, + | | populated by the UEFI GetMemoryMap() call. +-------------------------------------------------------------------------------- +linux,uefi-mmap-size | 32-bit | Size in bytes of the UEFI memory map + | | pointed to in previous entry. +-------------------------------------------------------------------------------- +linux,uefi-mmap-desc-size | 32-bit | Size in bytes of each entry in the UEFI + | | memory map. +-------------------------------------------------------------------------------- +linux,uefi-mmap-desc-ver | 32-bit | Version of the mmap descriptor format. +-------------------------------------------------------------------------------- +linux,uefi-stub-kern-ver | string | Copy of linux_banner from build. +-------------------------------------------------------------------------------- + +For verbose debug messages, specify 'uefi_debug' on the kernel command line. diff --git a/Documentation/arm/vlocks.txt b/Documentation/arm/vlocks.txt new file mode 100644 index 00000000000..415960a9bab --- /dev/null +++ b/Documentation/arm/vlocks.txt @@ -0,0 +1,211 @@ +vlocks for Bare-Metal Mutual Exclusion +====================================== + +Voting Locks, or "vlocks" provide a simple low-level mutual exclusion +mechanism, with reasonable but minimal requirements on the memory +system. + +These are intended to be used to coordinate critical activity among CPUs +which are otherwise non-coherent, in situations where the hardware +provides no other mechanism to support this and ordinary spinlocks +cannot be used. + + +vlocks make use of the atomicity provided by the memory system for +writes to a single memory location. To arbitrate, every CPU "votes for +itself", by storing a unique number to a common memory location. The +final value seen in that memory location when all the votes have been +cast identifies the winner. + +In order to make sure that the election produces an unambiguous result +in finite time, a CPU will only enter the election in the first place if +no winner has been chosen and the election does not appear to have +started yet. 
+ + +Algorithm +--------- + +The easiest way to explain the vlocks algorithm is with some pseudo-code: + + + int currently_voting[NR_CPUS] = { 0, }; + int last_vote = -1; /* no votes yet */ + + bool vlock_trylock(int this_cpu) + { + /* signal our desire to vote */ + currently_voting[this_cpu] = 1; + if (last_vote != -1) { + /* someone already volunteered himself */ + currently_voting[this_cpu] = 0; + return false; /* not ourself */ + } + + /* let's suggest ourself */ + last_vote = this_cpu; + currently_voting[this_cpu] = 0; + + /* then wait until everyone else is done voting */ + for_each_cpu(i) { + while (currently_voting[i] != 0) + /* wait */; + } + + /* result */ + if (last_vote == this_cpu) + return true; /* we won */ + return false; + } + + bool vlock_unlock(void) + { + last_vote = -1; + } + + +The currently_voting[] array provides a way for the CPUs to determine +whether an election is in progress, and plays a role analogous to the +"entering" array in Lamport's bakery algorithm [1]. + +However, once the election has started, the underlying memory system +atomicity is used to pick the winner. This avoids the need for a static +priority rule to act as a tie-breaker, or any counters which could +overflow. + +As long as the last_vote variable is globally visible to all CPUs, it +will contain only one value that won't change once every CPU has cleared +its currently_voting flag. + + +Features and limitations +------------------------ + + * vlocks are not intended to be fair. In the contended case, it is the + _last_ CPU which attempts to get the lock which will be most likely + to win. + + vlocks are therefore best suited to situations where it is necessary + to pick a unique winner, but it does not matter which CPU actually + wins. + + * Like other similar mechanisms, vlocks will not scale well to a large + number of CPUs. + + vlocks can be cascaded in a voting hierarchy to permit better scaling + if necessary, as in the following hypothetical example for 4096 CPUs: + + /* first level: local election */ + my_town = towns[(this_cpu >> 4) & 0xf]; + I_won = vlock_trylock(my_town, this_cpu & 0xf); + if (I_won) { + /* we won the town election, let's go for the state */ + my_state = states[(this_cpu >> 8) & 0xf]; + I_won = vlock_lock(my_state, this_cpu & 0xf)); + if (I_won) { + /* and so on */ + I_won = vlock_lock(the_whole_country, this_cpu & 0xf]; + if (I_won) { + /* ... */ + } + vlock_unlock(the_whole_country); + } + vlock_unlock(my_state); + } + vlock_unlock(my_town); + + +ARM implementation +------------------ + +The current ARM implementation [2] contains some optimisations beyond +the basic algorithm: + + * By packing the members of the currently_voting array close together, + we can read the whole array in one transaction (providing the number + of CPUs potentially contending the lock is small enough). This + reduces the number of round-trips required to external memory. + + In the ARM implementation, this means that we can use a single load + and comparison: + + LDR Rt, [Rn] + CMP Rt, #0 + + ...in place of code equivalent to: + + LDRB Rt, [Rn] + CMP Rt, #0 + LDRBEQ Rt, [Rn, #1] + CMPEQ Rt, #0 + LDRBEQ Rt, [Rn, #2] + CMPEQ Rt, #0 + LDRBEQ Rt, [Rn, #3] + CMPEQ Rt, #0 + + This cuts down on the fast-path latency, as well as potentially + reducing bus contention in contended cases. + + The optimisation relies on the fact that the ARM memory system + guarantees coherency between overlapping memory accesses of + different sizes, similarly to many other architectures. 
Note that + we do not care which element of currently_voting appears in which + bits of Rt, so there is no need to worry about endianness in this + optimisation. + + If there are too many CPUs to read the currently_voting array in + one transaction then multiple transations are still required. The + implementation uses a simple loop of word-sized loads for this + case. The number of transactions is still fewer than would be + required if bytes were loaded individually. + + + In principle, we could aggregate further by using LDRD or LDM, but + to keep the code simple this was not attempted in the initial + implementation. + + + * vlocks are currently only used to coordinate between CPUs which are + unable to enable their caches yet. This means that the + implementation removes many of the barriers which would be required + when executing the algorithm in cached memory. + + packing of the currently_voting array does not work with cached + memory unless all CPUs contending the lock are cache-coherent, due + to cache writebacks from one CPU clobbering values written by other + CPUs. (Though if all the CPUs are cache-coherent, you should be + probably be using proper spinlocks instead anyway). + + + * The "no votes yet" value used for the last_vote variable is 0 (not + -1 as in the pseudocode). This allows statically-allocated vlocks + to be implicitly initialised to an unlocked state simply by putting + them in .bss. + + An offset is added to each CPU's ID for the purpose of setting this + variable, so that no CPU uses the value 0 for its ID. + + +Colophon +-------- + +Originally created and documented by Dave Martin for Linaro Limited, for +use in ARM-based big.LITTLE platforms, with review and input gratefully +received from Nicolas Pitre and Achin Gupta. Thanks to Nicolas for +grabbing most of this text out of the relevant mail thread and writing +up the pseudocode. + +Copyright (C) 2012-2013 Linaro Limited +Distributed under the terms of Version 2 of the GNU General Public +License, as defined in linux/COPYING. + + +References +---------- + +[1] Lamport, L. "A New Solution of Dijkstra's Concurrent Programming + Problem", Communications of the ACM 17, 8 (August 1974), 453-455. + + http://en.wikipedia.org/wiki/Lamport%27s_bakery_algorithm + +[2] linux/arch/arm/common/vlock.S, www.kernel.org. |
