author    | Russell King <rmk+kernel@arm.linux.org.uk> | 2010-05-17 17:24:04 +0100
committer | Russell King <rmk+kernel@arm.linux.org.uk> | 2010-05-17 17:24:04 +0100
commit    | ac1d426e825ab5778995f2f6f053ca2e6b45c622 (patch)
tree      | 75b91356ca39463e0112931aa6790802fb1e07a2 /Documentation
parent    | fda0e18c8a7a3e02747c2b045b4fcd2c920410b9 (diff)
parent    | a3685f00652af83f12b63e3b4ef48f29581ba48b (diff)
Merge branch 'devel-stable' into devel
Conflicts:
arch/arm/Kconfig
arch/arm/include/asm/system.h
arch/arm/mm/Kconfig
Diffstat (limited to 'Documentation')
37 files changed, 880 insertions, 182 deletions
diff --git a/Documentation/ABI/testing/sysfs-bus-usb b/Documentation/ABI/testing/sysfs-bus-usb index a986e9bbba3..bcebb9eaedc 100644 --- a/Documentation/ABI/testing/sysfs-bus-usb +++ b/Documentation/ABI/testing/sysfs-bus-usb @@ -160,7 +160,7 @@ Description: match the driver to the device. For example: # echo "046d c315" > /sys/bus/usb/drivers/foo/remove_id -What: /sys/bus/usb/device/.../avoid_reset +What: /sys/bus/usb/device/.../avoid_reset_quirk Date: December 2009 Contact: Oliver Neukum <oliver@neukum.org> Description: diff --git a/Documentation/PCI/PCI-DMA-mapping.txt b/Documentation/DMA-API-HOWTO.txt index 52618ab069a..52618ab069a 100644 --- a/Documentation/PCI/PCI-DMA-mapping.txt +++ b/Documentation/DMA-API-HOWTO.txt diff --git a/Documentation/DocBook/libata.tmpl b/Documentation/DocBook/libata.tmpl index ba997577150..ff3e5bec1c2 100644 --- a/Documentation/DocBook/libata.tmpl +++ b/Documentation/DocBook/libata.tmpl @@ -107,10 +107,6 @@ void (*dev_config) (struct ata_port *, struct ata_device *); issue of SET FEATURES - XFER MODE, and prior to operation. </para> <para> - Called by ata_device_add() after ata_dev_identify() determines - a device is present. - </para> - <para> This entry may be specified as NULL in ata_port_operations. </para> @@ -154,8 +150,8 @@ unsigned int (*mode_filter) (struct ata_port *, struct ata_device *, unsigned in <sect2><title>Taskfile read/write</title> <programlisting> -void (*tf_load) (struct ata_port *ap, struct ata_taskfile *tf); -void (*tf_read) (struct ata_port *ap, struct ata_taskfile *tf); +void (*sff_tf_load) (struct ata_port *ap, struct ata_taskfile *tf); +void (*sff_tf_read) (struct ata_port *ap, struct ata_taskfile *tf); </programlisting> <para> @@ -164,36 +160,35 @@ void (*tf_read) (struct ata_port *ap, struct ata_taskfile *tf); hardware registers / DMA buffers, to obtain the current set of taskfile register values. Most drivers for taskfile-based hardware (PIO or MMIO) use - ata_tf_load() and ata_tf_read() for these hooks. + ata_sff_tf_load() and ata_sff_tf_read() for these hooks. </para> </sect2> <sect2><title>PIO data read/write</title> <programlisting> -void (*data_xfer) (struct ata_device *, unsigned char *, unsigned int, int); +void (*sff_data_xfer) (struct ata_device *, unsigned char *, unsigned int, int); </programlisting> <para> All bmdma-style drivers must implement this hook. This is the low-level operation that actually copies the data bytes during a PIO data transfer. -Typically the driver -will choose one of ata_pio_data_xfer_noirq(), ata_pio_data_xfer(), or -ata_mmio_data_xfer(). +Typically the driver will choose one of ata_sff_data_xfer_noirq(), +ata_sff_data_xfer(), or ata_sff_data_xfer32(). </para> </sect2> <sect2><title>ATA command execute</title> <programlisting> -void (*exec_command)(struct ata_port *ap, struct ata_taskfile *tf); +void (*sff_exec_command)(struct ata_port *ap, struct ata_taskfile *tf); </programlisting> <para> causes an ATA command, previously loaded with ->tf_load(), to be initiated in hardware. - Most drivers for taskfile-based hardware use ata_exec_command() + Most drivers for taskfile-based hardware use ata_sff_exec_command() for this hook. </para> @@ -218,8 +213,8 @@ command. 
<sect2><title>Read specific ATA shadow registers</title> <programlisting> -u8 (*check_status)(struct ata_port *ap); -u8 (*check_altstatus)(struct ata_port *ap); +u8 (*sff_check_status)(struct ata_port *ap); +u8 (*sff_check_altstatus)(struct ata_port *ap); </programlisting> <para> @@ -227,20 +222,14 @@ u8 (*check_altstatus)(struct ata_port *ap); hardware. On some hardware, reading the Status register has the side effect of clearing the interrupt condition. Most drivers for taskfile-based hardware use - ata_check_status() for this hook. - </para> - <para> - Note that because this is called from ata_device_add(), at - least a dummy function that clears device interrupts must be - provided for all drivers, even if the controller doesn't - actually have a taskfile status register. + ata_sff_check_status() for this hook. </para> </sect2> <sect2><title>Select ATA device on bus</title> <programlisting> -void (*dev_select)(struct ata_port *ap, unsigned int device); +void (*sff_dev_select)(struct ata_port *ap, unsigned int device); </programlisting> <para> @@ -251,9 +240,7 @@ void (*dev_select)(struct ata_port *ap, unsigned int device); </para> <para> Most drivers for taskfile-based hardware use - ata_std_dev_select() for this hook. Controllers which do not - support second drives on a port (such as SATA contollers) will - use ata_noop_dev_select(). + ata_sff_dev_select() for this hook. </para> </sect2> @@ -441,13 +428,13 @@ void (*irq_clear) (struct ata_port *); to struct ata_host_set. </para> <para> - Most legacy IDE drivers use ata_interrupt() for the + Most legacy IDE drivers use ata_sff_interrupt() for the irq_handler hook, which scans all ports in the host_set, determines which queued command was active (if any), and calls - ata_host_intr(ap,qc). + ata_sff_host_intr(ap,qc). </para> <para> - Most legacy IDE drivers use ata_bmdma_irq_clear() for the + Most legacy IDE drivers use ata_sff_irq_clear() for the irq_clear() hook, which simply clears the interrupt and error flags in the DMA status register. </para> @@ -496,10 +483,6 @@ void (*host_stop) (struct ata_host_set *host_set); data from port at this time. </para> <para> - Many drivers use ata_port_stop() as this hook, which frees the - PRD table. - </para> - <para> ->host_stop() is called after all ->port_stop() calls have completed. The hook must finalize hardware shutdown, release DMA and other resources, etc. diff --git a/Documentation/DocBook/tracepoint.tmpl b/Documentation/DocBook/tracepoint.tmpl index 8bca1d5cec0..e8473eae2a2 100644 --- a/Documentation/DocBook/tracepoint.tmpl +++ b/Documentation/DocBook/tracepoint.tmpl @@ -16,6 +16,15 @@ </address> </affiliation> </author> + <author> + <firstname>William</firstname> + <surname>Cohen</surname> + <affiliation> + <address> + <email>wcohen@redhat.com</email> + </address> + </affiliation> + </author> </authorgroup> <legalnotice> @@ -91,4 +100,8 @@ !Iinclude/trace/events/signal.h </chapter> + <chapter id="block"> + <title>Block IO</title> +!Iinclude/trace/events/block.h + </chapter> </book> diff --git a/Documentation/HOWTO b/Documentation/HOWTO index f5395af88a4..40ada93b820 100644 --- a/Documentation/HOWTO +++ b/Documentation/HOWTO @@ -234,7 +234,7 @@ process is as follows: Linus, usually the patches that have already been included in the -next kernel for a few weeks. 
The preferred way to submit big changes is using git (the kernel's source management tool, more information - can be found at http://git.or.cz/) but plain patches are also just + can be found at http://git-scm.com/) but plain patches are also just fine. - After two weeks a -rc1 kernel is released it is now possible to push only patches that do not include new features that could affect the diff --git a/Documentation/RCU/NMI-RCU.txt b/Documentation/RCU/NMI-RCU.txt index a6d32e65d22..a8536cb8809 100644 --- a/Documentation/RCU/NMI-RCU.txt +++ b/Documentation/RCU/NMI-RCU.txt @@ -34,7 +34,7 @@ NMI handler. cpu = smp_processor_id(); ++nmi_count(cpu); - if (!rcu_dereference(nmi_callback)(regs, cpu)) + if (!rcu_dereference_sched(nmi_callback)(regs, cpu)) default_do_nmi(regs); nmi_exit(); @@ -47,12 +47,13 @@ function pointer. If this handler returns zero, do_nmi() invokes the default_do_nmi() function to handle a machine-specific NMI. Finally, preemption is restored. -Strictly speaking, rcu_dereference() is not needed, since this code runs -only on i386, which does not need rcu_dereference() anyway. However, -it is a good documentation aid, particularly for anyone attempting to -do something similar on Alpha. +In theory, rcu_dereference_sched() is not needed, since this code runs +only on i386, which in theory does not need rcu_dereference_sched() +anyway. However, in practice it is a good documentation aid, particularly +for anyone attempting to do something similar on Alpha or on systems +with aggressive optimizing compilers. -Quick Quiz: Why might the rcu_dereference() be necessary on Alpha, +Quick Quiz: Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only? @@ -99,17 +100,21 @@ invoke irq_enter() and irq_exit() on NMI entry and exit, respectively. Answer to Quick Quiz - Why might the rcu_dereference() be necessary on Alpha, given + Why might the rcu_dereference_sched() be necessary on Alpha, given that the code referenced by the pointer is read-only? Answer: The caller to set_nmi_callback() might well have - initialized some data that is to be used by the - new NMI handler. In this case, the rcu_dereference() - would be needed, because otherwise a CPU that received - an NMI just after the new handler was set might see - the pointer to the new NMI handler, but the old - pre-initialized version of the handler's data. - - More important, the rcu_dereference() makes it clear - to someone reading the code that the pointer is being - protected by RCU. + initialized some data that is to be used by the new NMI + handler. In this case, the rcu_dereference_sched() would + be needed, because otherwise a CPU that received an NMI + just after the new handler was set might see the pointer + to the new NMI handler, but the old pre-initialized + version of the handler's data. + + This same sad story can happen on other CPUs when using + a compiler with aggressive pointer-value speculation + optimizations. + + More important, the rcu_dereference_sched() makes it + clear to someone reading the code that the pointer is + being protected by RCU-sched. diff --git a/Documentation/RCU/checklist.txt b/Documentation/RCU/checklist.txt index cbc180f9019..790d1a81237 100644 --- a/Documentation/RCU/checklist.txt +++ b/Documentation/RCU/checklist.txt @@ -260,7 +260,8 @@ over a rather long period of time, but improvements are always welcome! 
The reason that it is permissible to use RCU list-traversal primitives when the update-side lock is held is that doing so can be quite helpful in reducing code bloat when common code is - shared between readers and updaters. + shared between readers and updaters. Additional primitives + are provided for this case, as discussed in lockdep.txt. 10. Conversely, if you are in an RCU read-side critical section, and you don't hold the appropriate update-side lock, you -must- @@ -344,8 +345,8 @@ over a rather long period of time, but improvements are always welcome! requiring SRCU's read-side deadlock immunity or low read-side realtime latency. - Note that, rcu_assign_pointer() and rcu_dereference() relate to - SRCU just as they do to other forms of RCU. + Note that, rcu_assign_pointer() relates to SRCU just as they do + to other forms of RCU. 15. The whole point of call_rcu(), synchronize_rcu(), and friends is to wait until all pre-existing readers have finished before diff --git a/Documentation/RCU/lockdep.txt b/Documentation/RCU/lockdep.txt index fe24b58627b..d7a49b2f699 100644 --- a/Documentation/RCU/lockdep.txt +++ b/Documentation/RCU/lockdep.txt @@ -32,9 +32,20 @@ checking of rcu_dereference() primitives: srcu_dereference(p, sp): Check for SRCU read-side critical section. rcu_dereference_check(p, c): - Use explicit check expression "c". + Use explicit check expression "c". This is useful in + code that is invoked by both readers and updaters. rcu_dereference_raw(p) Don't check. (Use sparingly, if at all.) + rcu_dereference_protected(p, c): + Use explicit check expression "c", and omit all barriers + and compiler constraints. This is useful when the data + structure cannot change, for example, in code that is + invoked only by updaters. + rcu_access_pointer(p): + Return the value of the pointer and omit all barriers, + but retain the compiler constraints that prevent duplicating + or coalescsing. This is useful when when testing the + value of the pointer itself, for example, against NULL. The rcu_dereference_check() check expression can be any boolean expression, but would normally include one of the rcu_read_lock_held() @@ -59,7 +70,20 @@ In case (1), the pointer is picked up in an RCU-safe manner for vanilla RCU read-side critical sections, in case (2) the ->file_lock prevents any change from taking place, and finally, in case (3) the current task is the only task accessing the file_struct, again preventing any change -from taking place. +from taking place. If the above statement was invoked only from updater +code, it could instead be written as follows: + + file = rcu_dereference_protected(fdt->fd[fd], + lockdep_is_held(&files->file_lock) || + atomic_read(&files->count) == 1); + +This would verify cases #2 and #3 above, and furthermore lockdep would +complain if this was used in an RCU read-side critical section unless one +of these two cases held. Because rcu_dereference_protected() omits all +barriers and compiler constraints, it generates better code than do the +other flavors of rcu_dereference(). On the other hand, it is illegal +to use rcu_dereference_protected() if either the RCU-protected pointer +or the RCU-protected data that it points to can change concurrently. 
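A minimal sketch of the pattern described above, assuming a hypothetical RCU-protected pointer gp guarded by gp_lock (struct foo, foo_get_a() and foo_update_a() are likewise invented for illustration): rcu_dereference_check() covers code shared by readers and updaters, while rcu_dereference_protected() is reserved for updater-only code that holds the lock.

    #include <linux/rcupdate.h>
    #include <linux/spinlock.h>
    #include <linux/slab.h>

    struct foo {
            int a;
    };

    static struct foo *gp;                  /* hypothetical RCU-protected pointer */
    static DEFINE_SPINLOCK(gp_lock);        /* hypothetical update-side lock */

    /* Invoked by both readers and updaters. */
    static int foo_get_a(void)
    {
            struct foo *p;
            int a;

            rcu_read_lock();
            p = rcu_dereference_check(gp,
                                      rcu_read_lock_held() ||
                                      lockdep_is_held(&gp_lock));
            a = p ? p->a : -1;
            rcu_read_unlock();
            return a;
    }

    /* Invoked only with gp_lock held, so no barriers are required. */
    static void foo_update_a(struct foo *newp)
    {
            struct foo *old;

            spin_lock(&gp_lock);
            old = rcu_dereference_protected(gp, lockdep_is_held(&gp_lock));
            rcu_assign_pointer(gp, newp);
            spin_unlock(&gp_lock);

            if (old) {
                    synchronize_rcu();      /* wait for pre-existing readers */
                    kfree(old);
            }
    }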
There are currently only "universal" versions of the rcu_assign_pointer() and RCU list-/tree-traversal primitives, which do not (yet) check for diff --git a/Documentation/RCU/whatisRCU.txt b/Documentation/RCU/whatisRCU.txt index 1dc00ee9716..cfaac34c455 100644 --- a/Documentation/RCU/whatisRCU.txt +++ b/Documentation/RCU/whatisRCU.txt @@ -840,6 +840,12 @@ SRCU: Initialization/cleanup init_srcu_struct cleanup_srcu_struct +All: lockdep-checked RCU-protected pointer access + + rcu_dereference_check + rcu_dereference_protected + rcu_access_pointer + See the comment headers in the source code (or the docbook generated from them) for more information. diff --git a/Documentation/block/biodoc.txt b/Documentation/block/biodoc.txt index 6fab97ea7e6..508b5b2b028 100644 --- a/Documentation/block/biodoc.txt +++ b/Documentation/block/biodoc.txt @@ -1162,8 +1162,8 @@ where a driver received a request ala this before: As mentioned, there is no virtual mapping of a bio. For DMA, this is not a problem as the driver probably never will need a virtual mapping. -Instead it needs a bus mapping (pci_map_page for a single segment or -use blk_rq_map_sg for scatter gather) to be able to ship it to the driver. For +Instead it needs a bus mapping (dma_map_page for a single segment or +use dma_map_sg for scatter gather) to be able to ship it to the driver. For PIO drivers (or drivers that need to revert to PIO transfer once in a while (IDE for example)), where the CPU is doing the actual data transfer a virtual mapping is needed. If the driver supports highmem I/O, diff --git a/Documentation/cgroups/cgroups.txt b/Documentation/cgroups/cgroups.txt index fd588ff0e29..a1ca5924faf 100644 --- a/Documentation/cgroups/cgroups.txt +++ b/Documentation/cgroups/cgroups.txt @@ -235,8 +235,7 @@ containing the following files describing that cgroup: - cgroup.procs: list of tgids in the cgroup. This list is not guaranteed to be sorted or free of duplicate tgids, and userspace should sort/uniquify the list if this property is required. - Writing a tgid into this file moves all threads with that tgid into - this cgroup. + This is a read-only file, for now. - notify_on_release flag: run the release agent on exit? - release_agent: the path to use for release notifications (this file exists in the top cgroup only) diff --git a/Documentation/cgroups/memory.txt b/Documentation/cgroups/memory.txt index f8bc802d70b..3a6aecd078b 100644 --- a/Documentation/cgroups/memory.txt +++ b/Documentation/cgroups/memory.txt @@ -340,7 +340,7 @@ Note: 5.3 swappiness Similar to /proc/sys/vm/swappiness, but affecting a hierarchy of groups only. - Following cgroups' swapiness can't be changed. + Following cgroups' swappiness can't be changed. - root cgroup (uses /proc/sys/vm/swappiness). - a cgroup which uses hierarchy and it has child cgroup. - a cgroup which uses hierarchy and not the root of hierarchy. diff --git a/Documentation/circular-buffers.txt b/Documentation/circular-buffers.txt new file mode 100644 index 00000000000..8117e5bf606 --- /dev/null +++ b/Documentation/circular-buffers.txt @@ -0,0 +1,234 @@ + ================ + CIRCULAR BUFFERS + ================ + +By: David Howells <dhowells@redhat.com> + Paul E. McKenney <paulmck@linux.vnet.ibm.com> + + +Linux provides a number of features that can be used to implement circular +buffering. There are two sets of such features: + + (1) Convenience functions for determining information about power-of-2 sized + buffers. 
+ + (2) Memory barriers for when the producer and the consumer of objects in the + buffer don't want to share a lock. + +To use these facilities, as discussed below, there needs to be just one +producer and just one consumer. It is possible to handle multiple producers by +serialising them, and to handle multiple consumers by serialising them. + + +Contents: + + (*) What is a circular buffer? + + (*) Measuring power-of-2 buffers. + + (*) Using memory barriers with circular buffers. + - The producer. + - The consumer. + + +========================== +WHAT IS A CIRCULAR BUFFER? +========================== + +First of all, what is a circular buffer? A circular buffer is a buffer of +fixed, finite size into which there are two indices: + + (1) A 'head' index - the point at which the producer inserts items into the + buffer. + + (2) A 'tail' index - the point at which the consumer finds the next item in + the buffer. + +Typically when the tail pointer is equal to the head pointer, the buffer is +empty; and the buffer is full when the head pointer is one less than the tail +pointer. + +The head index is incremented when items are added, and the tail index when +items are removed. The tail index should never jump the head index, and both +indices should be wrapped to 0 when they reach the end of the buffer, thus +allowing an infinite amount of data to flow through the buffer. + +Typically, items will all be of the same unit size, but this isn't strictly +required to use the techniques below. The indices can be increased by more +than 1 if multiple items or variable-sized items are to be included in the +buffer, provided that neither index overtakes the other. The implementer must +be careful, however, as a region more than one unit in size may wrap the end of +the buffer and be broken into two segments. + + +============================ +MEASURING POWER-OF-2 BUFFERS +============================ + +Calculation of the occupancy or the remaining capacity of an arbitrarily sized +circular buffer would normally be a slow operation, requiring the use of a +modulus (divide) instruction. However, if the buffer is of a power-of-2 size, +then a much quicker bitwise-AND instruction can be used instead. + +Linux provides a set of macros for handling power-of-2 circular buffers. These +can be made use of by: + + #include <linux/circ_buf.h> + +The macros are: + + (*) Measure the remaining capacity of a buffer: + + CIRC_SPACE(head_index, tail_index, buffer_size); + + This returns the amount of space left in the buffer[1] into which items + can be inserted. + + + (*) Measure the maximum consecutive immediate space in a buffer: + + CIRC_SPACE_TO_END(head_index, tail_index, buffer_size); + + This returns the amount of consecutive space left in the buffer[1] into + which items can be immediately inserted without having to wrap back to the + beginning of the buffer. + + + (*) Measure the occupancy of a buffer: + + CIRC_CNT(head_index, tail_index, buffer_size); + + This returns the number of items currently occupying a buffer[2]. + + + (*) Measure the non-wrapping occupancy of a buffer: + + CIRC_CNT_TO_END(head_index, tail_index, buffer_size); + + This returns the number of consecutive items[2] that can be extracted from + the buffer without having to wrap back to the beginning of the buffer. + + +Each of these macros will nominally return a value between 0 and buffer_size-1, +however: + + [1] CIRC_SPACE*() are intended to be used in the producer. 
To the producer + they will return a lower bound as the producer controls the head index, + but the consumer may still be depleting the buffer on another CPU and + moving the tail index. + + To the consumer it will show an upper bound as the producer may be busy + depleting the space. + + [2] CIRC_CNT*() are intended to be used in the consumer. To the consumer they + will return a lower bound as the consumer controls the tail index, but the + producer may still be filling the buffer on another CPU and moving the + head index. + + To the producer it will show an upper bound as the consumer may be busy + emptying the buffer. + + [3] To a third party, the order in which the writes to the indices by the + producer and consumer become visible cannot be guaranteed as they are + independent and may be made on different CPUs - so the result in such a + situation will merely be a guess, and may even be negative. + + +=========================================== +USING MEMORY BARRIERS WITH CIRCULAR BUFFERS +=========================================== + +By using memory barriers in conjunction with circular buffers, you can avoid +the need to: + + (1) use a single lock to govern access to both ends of the buffer, thus + allowing the buffer to be filled and emptied at the same time; and + + (2) use atomic counter operations. + +There are two sides to this: the producer that fills the buffer, and the +consumer that empties it. Only one thing should be filling a buffer at any one +time, and only one thing should be emptying a buffer at any one time, but the +two sides can operate simultaneously. + + +THE PRODUCER +------------ + +The producer will look something like this: + + spin_lock(&producer_lock); + + unsigned long head = buffer->head; + unsigned long tail = ACCESS_ONCE(buffer->tail); + + if (CIRC_SPACE(head, tail, buffer->size) >= 1) { + /* insert one item into the buffer */ + struct item *item = buffer[head]; + + produce_item(item); + + smp_wmb(); /* commit the item before incrementing the head */ + + buffer->head = (head + 1) & (buffer->size - 1); + + /* wake_up() will make sure that the head is committed before + * waking anyone up */ + wake_up(consumer); + } + + spin_unlock(&producer_lock); + +This will instruct the CPU that the contents of the new item must be written +before the head index makes it available to the consumer and then instructs the +CPU that the revised head index must be written before the consumer is woken. + +Note that wake_up() doesn't have to be the exact mechanism used, but whatever +is used must guarantee a (write) memory barrier between the update of the head +index and the change of state of the consumer, if a change of state occurs. + + +THE CONSUMER +------------ + +The consumer will look something like this: + + spin_lock(&consumer_lock); + + unsigned long head = ACCESS_ONCE(buffer->head); + unsigned long tail = buffer->tail; + + if (CIRC_CNT(head, tail, buffer->size) >= 1) { + /* read index before reading contents at that index */ + smp_read_barrier_depends(); + + /* extract one item from the buffer */ + struct item *item = buffer[tail]; + + consume_item(item); + + smp_mb(); /* finish reading descriptor before incrementing tail */ + + buffer->tail = (tail + 1) & (buffer->size - 1); + } + + spin_unlock(&consumer_lock); + +This will instruct the CPU to make sure the index is up to date before reading +the new item, and then it shall make sure the CPU has finished reading the item +before it writes the new tail pointer, which will erase the item. 
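A minimal, self-contained sketch of the power-of-2 index arithmetic behind CIRC_CNT() and CIRC_SPACE() as used by the producer and consumer above; the macro definitions mirror include/linux/circ_buf.h, and the size, head and tail values are made up for illustration:

    #include <stdio.h>

    /* Mirrors the definitions in include/linux/circ_buf.h. */
    #define CIRC_CNT(head, tail, size)   (((head) - (tail)) & ((size) - 1))
    #define CIRC_SPACE(head, tail, size) CIRC_CNT((tail), ((head) + 1), (size))

    int main(void)
    {
            unsigned long size = 8;           /* must be a power of 2 */
            unsigned long head = 6, tail = 3;

            /* Items occupy slots 3, 4 and 5; one slot always stays empty so
             * that a full buffer is distinguishable from an empty one. */
            printf("occupied: %lu\n", CIRC_CNT(head, tail, size));   /* 3 */
            printf("free:     %lu\n", CIRC_SPACE(head, tail, size)); /* 4 */

            /* After wrapping, the bitwise AND still gives the right count. */
            head = 1;
            tail = 6;
            printf("occupied after wrap: %lu\n", CIRC_CNT(head, tail, size)); /* 3 */
            return 0;
    }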
+ + +Note the use of ACCESS_ONCE() in both algorithms to read the opposition index. +This prevents the compiler from discarding and reloading its cached value - +which some compilers will do across smp_read_barrier_depends(). This isn't +strictly needed if you can be sure that the opposition index will _only_ be +used the once. + + +=============== +FURTHER READING +=============== + +See also Documentation/memory-barriers.txt for a description of Linux's memory +barrier facilities. diff --git a/Documentation/connector/cn_test.c b/Documentation/connector/cn_test.c index b07add3467f..7764594778d 100644 --- a/Documentation/connector/cn_test.c +++ b/Documentation/connector/cn_test.c @@ -25,6 +25,7 @@ #include <linux/module.h> #include <linux/moduleparam.h> #include <linux/skbuff.h> +#include <linux/slab.h> #include <linux/timer.h> #include <linux/connector.h> diff --git a/Documentation/fb/imacfb.txt b/Documentation/fb/efifb.txt index 316ec9bb7de..a59916c29b3 100644