path: root/drivers/net/ethernet/sfc/tx.c
Age  Commit message  Author
2013-10-31  sfc: Fix DMA unmapping issue with firmware assisted TSO  (Alexandre Rames)
When using firmware assisted TSO, we use a single DMA mapping for the linear area of a TSO skb. We still have to segment the super-packet and insert a descriptor containing the original headers before each segment of payload, so we can unmap the linear area only after the last segment is completed. The unmapping information for the linear area is therefore associated with the last header descriptor. We calculate the DMA address to unmap from using the map length and the invariant that the end of the DMA mapping matches the end of the data referenced by the last descriptor. But this invariant is broken when there is TCP payload in the linear area. Fix this by adding and using an explicit dma_offset field. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
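A minimal sketch of the pattern this fix describes, assuming hypothetical field names (dma_addr, dma_offset, unmap_len) rather than the real struct efx_tx_buffer layout: the unmap address is recovered from the descriptor's DMA address minus an explicit offset, instead of relying on the end-of-mapping invariant.

```c
#include <linux/dma-mapping.h>

/* Sketch only: field names are illustrative, not the driver's struct. */
struct tx_buf_sketch {
	dma_addr_t dma_addr;		/* address referenced by this descriptor */
	unsigned short dma_offset;	/* offset of dma_addr within the mapping */
	unsigned int unmap_len;		/* length of the whole mapping, 0 if none */
};

static void tx_buf_unmap_sketch(struct device *dev, struct tx_buf_sketch *buf)
{
	if (buf->unmap_len) {
		/* Recover the start of the mapping explicitly rather than
		 * assuming the mapping ends where the descriptor data ends.
		 */
		dma_addr_t unmap_addr = buf->dma_addr - buf->dma_offset;

		dma_unmap_single(dev, unmap_addr, buf->unmap_len,
				 DMA_TO_DEVICE);
		buf->unmap_len = 0;
	}
}
```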
2013-09-20  sfc: Use TX PIO for sufficiently small packets  (Jon Cooper)
Sufficiently small linear packets can be copied into the PIO buffer with a single call to memcpy_toio(). Non-linear packets require an intermediate cache-line-sized buffer. [bwh: I wrote the first version of this, but Jon did the hard work to handle non-linear packets.] Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
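A hedged sketch of the copy path described above, assuming a hypothetical PIO mapping and an illustrative size threshold; the real driver's PIO plumbing is more involved, but the core of the linear-packet case is a single memcpy_toio() into the write-combined region.

```c
#include <linux/io.h>
#include <linux/skbuff.h>

#define PIO_COPY_LIMIT 256	/* illustrative threshold, not the driver's value */

/* piobuf points into a write-combining mapping of the NIC's PIO buffer. */
static bool try_pio_copy_sketch(void __iomem *piobuf, struct sk_buff *skb)
{
	/* Only small, linear packets qualify for the single-copy path;
	 * non-linear packets go via an intermediate cache-line buffer.
	 */
	if (skb_shinfo(skb)->nr_frags || skb->len > PIO_COPY_LIMIT)
		return false;

	memcpy_toio(piobuf, skb->data, skb->len);
	return true;
}
```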
2013-09-20  sfc: Introduce inline functions to simplify TX insertion  (Ben Hutchings)
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: Allocate and link PIO buffers; map them with write-combining  (Ben Hutchings)
Try to allocate a segment of PIO buffer to each TX channel. If allocation fails, log an error but continue. PIO buffers must be mapped separately from the NIC registers, with write-combining enabled. Where the host page size is 4K, we could potentially map each VI's registers and PIO buffer separately. However, this would add significant complexity, and we also need to support architectures such as POWER which have a greater page size. So make a single contiguous write-combining mapping after the uncacheable mapping, aligned to the host page size, and link PIO buffers there. Where necessary, allocate additional VIs within the write-combining mapping purely for access to PIO buffers. Link all TX buffers to TX queues and the additional VIs in efx_ef10_dimension_resources() and in efx_ef10_init_nic() after an MC reboot. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
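A small sketch of the mapping split the commit describes, with illustrative BAR offsets and lengths (not the EF10 layout): registers stay uncacheable, while the PIO region gets a separate write-combining mapping placed after them and aligned to the host page size.

```c
#include <linux/io.h>
#include <linux/mm.h>
#include <linux/pci.h>

static int map_bar_sketch(struct pci_dev *pdev, void __iomem **regs,
			  void __iomem **pio, size_t reg_len, size_t pio_len)
{
	resource_size_t base = pci_resource_start(pdev, 0 /* illustrative BAR */);

	*regs = ioremap(base, reg_len);		/* uncacheable register mapping */
	if (!*regs)
		return -ENOMEM;

	/* Write-combining mapping for PIO buffers, page-aligned after the
	 * uncacheable mapping.
	 */
	*pio = ioremap_wc(base + PAGE_ALIGN(reg_len), pio_len);
	if (!*pio) {
		iounmap(*regs);
		return -ENOMEM;
	}
	return 0;
}
```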
2013-09-20  sfc: Implement firmware-assisted TSO for EF10  (Ben Hutchings)
Segmentation remains in the driver, but we generate option descriptors describing the required packet editing rather than making our own copies. Reduce tso_state::ipv4_id to 16 bits, so it doesn't overflow into the TCP_FLAGS field of the option descriptor. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-09-20  sfc: Fold tso_get_head_fragment() into tso_start()  (Ben Hutchings)
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-29  sfc: Update copyright banners  (Ben Hutchings)
Update the dates for files that have been added to in 2012-2013. Drop the 'Solarstorm' brand name that's still lingering here. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-29  sfc: Extend struct efx_tx_buffer to allow pushing option descriptors  (Ben Hutchings)
The TX path firmware for EF10 supports 'option descriptors' to control offloads and various other features. Add a flag and field for these in struct efx_tx_buffer, and don't treat them as DMA descriptors on completion. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-27  sfc: Add TX merged completion counter  (Ben Hutchings)
Add a counter for TX merged completion events. This is implemented in the common TX path, because the NIC event handlers only know how many descriptors were completed, not how many packets. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-21  sfc: Refactor queue teardown sequence to allow for EF10 flush behaviour  (Ben Hutchings)
Currently efx_stop_datapath() will try to flush our DMA queues (if DMA is enabled), then finalise software and hardware state for each queue. However, for EF10 we must ask the MC to finalise each queue, which implicitly starts flushing it, and then wait for the flush events. We therefore need to delegate more of this to the NIC type. Combine all the hardware operations into a new NIC-type operation efx_nic_type::fini_dmaq, and call this before tearing down the software state and buffers for all the DMA queues. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2013-08-21  sfc: Add GFP flags to efx_nic_alloc_buffer() and make most callers allow blocking  (Ben Hutchings)
Most call sites for efx_nic_alloc_buffer() are part of the probe or reconfiguration paths and can allocate with GFP_KERNEL. A few others should use GFP_NOIO (I think). Only one is in atomic context and must use the current GFP_ATOMIC. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
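A hedged illustration of the calling convention the commit adds, using a hypothetical allocator signature rather than the real efx_nic_alloc_buffer(): probe-path callers pass GFP_KERNEL, while the rare atomic-context caller keeps GFP_ATOMIC.

```c
#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/* Hypothetical stand-in for an allocator that now takes GFP flags
 * chosen by the caller.
 */
static int alloc_dma_buffer_sketch(struct device *dev, void **cpu_addr,
				   dma_addr_t *dma_addr, size_t len, gfp_t gfp)
{
	*cpu_addr = dma_alloc_coherent(dev, len, dma_addr, gfp);
	return *cpu_addr ? 0 : -ENOMEM;
}

/* Probe/reconfigure paths may sleep:
 *	alloc_dma_buffer_sketch(dev, &p, &dma, len, GFP_KERNEL);
 * The one atomic-context caller must not:
 *	alloc_dma_buffer_sketch(dev, &p, &dma, len, GFP_ATOMIC);
 */
```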
2012-09-19  sfc: Add support for IEEE-1588 PTP  (Stuart Hodgson)
Add PTP IEEE-1588 support and make it accessible via the PHC subsystem. This work is based on prior code by Andrew Jackson. Signed-off-by: Stuart Hodgson <smhodgson@solarflare.com> [bwh: - Add byte order conversion in efx_ptp_send_times() - Simplify conversion of PPS event times - Add the built-in vs module check to CONFIG_SFC_PTP dependencies] Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2012-08-24  sfc: Stash header offsets for TSO in struct tso_state  (Ben Hutchings)
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2012-08-24  sfc: Replace tso_state::full_packet_space with ip_base_len  (Ben Hutchings)
We only use tso_state::full_packet_space to calculate the IPv4 tot_len or IPv6 payload_len, not to set tso_state::packet_space. Replace it with an ip_base_len field holding the value of tot_len or payload_len before including the TCP payload, which is much more useful when constructing the new headers. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2012-08-24  sfc: Simplify TSO header buffer allocation  (Ben Hutchings)
TSO header buffers contain a control structure immediately followed by the packet headers, and are kept on a free list when not in use. This complicates buffer management and tends to result in cache read misses when we recycle such buffers (particularly if DMA-coherent memory requires caches to be disabled). Replace the free list with a simple mapping by descriptor index. We know that there is always a payload descriptor between any two descriptors with TSO header buffers, so we can allocate only one such buffer for each two descriptors. While we're at it, use a standard error code for allocation failure, not -1. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2012-08-24  sfc: Stop TX queues before they fill up  (Ben Hutchings)
We now have a definite upper bound on the number of descriptors per skb; use that to stop the queue when the next packet might not fit. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
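A hedged sketch of the stop-before-full pattern, with illustrative names for the fill-level arithmetic; the key point is stopping the queue when the next worst-case skb might not fit, rather than after the ring overflows.

```c
#include <linux/netdevice.h>

/* Sketch: counter names and the bound are illustrative, not the driver's. */
static void maybe_stop_queue_sketch(struct netdev_queue *txq,
				    unsigned int ring_size,
				    unsigned int insert_count,
				    unsigned int completed_count,
				    unsigned int max_descs_per_skb)
{
	unsigned int fill_level = insert_count - completed_count;

	/* Stop now if the *next* packet might not fit. */
	if (fill_level + max_descs_per_skb > ring_size)
		netif_tx_stop_queue(txq);
}
```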
2012-08-24  sfc: Refactor struct efx_tx_buffer to use a flags field  (Ben Hutchings)
Add a flags field to struct efx_tx_buffer, replacing the continuation and map_single booleans. Since a single descriptor cannot be both a TSO header and the last descriptor for an skb, unionise efx_tx_buffer::{skb,tsoh} and add flags for validity of these fields. Clear all flags in free buffers (whereas previously the continuation flag would be set). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
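A sketch of the kind of layout the commit describes, using made-up flag names and a simplified union; the real struct efx_tx_buffer has more members and different flag values.

```c
#include <linux/skbuff.h>
#include <linux/types.h>

/* Illustrative flag bits; not the driver's actual definitions. */
#define TXB_CONT	0x01	/* descriptor continues the same skb */
#define TXB_SKB		0x02	/* 'skb' union member is valid (last descriptor) */
#define TXB_HEAP	0x04	/* 'heap_buf' union member is valid (TSO header) */
#define TXB_MAP_SINGLE	0x08	/* mapped with dma_map_single(), not a page */

struct tx_buffer_sketch {
	union {
		const struct sk_buff *skb;	/* valid iff TXB_SKB */
		void *heap_buf;			/* valid iff TXB_HEAP */
	};
	dma_addr_t dma_addr;
	unsigned short flags;	/* replaces the old booleans; 0 when free */
	unsigned short len;
};
```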
2012-08-02  sfc: Fix maximum number of TSO segments and minimum TX queue size  (Ben Hutchings)
Currently an skb requiring TSO may not fit within a minimum-size TX queue. The TX queue selected for the skb may stall and trigger the TX watchdog repeatedly (since the problem skb will be retried after the TX reset). This issue is designated as CVE-2012-3412. Set the maximum number of TSO segments for our devices to 100. This should make no difference to behaviour unless the actual MSS is less than about 700. Increase the minimum TX queue size accordingly to allow for 2 worst-case skbs, so that there will definitely be space to add an skb after we wake a queue. To avoid invalidating existing configurations, change efx_ethtool_set_ringparam() to fix up values that are too small rather than returning -EINVAL. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: David S. Miller <davem@davemloft.net>
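The driver-visible side of the fix can be sketched as below; gso_max_segs is a real net_device field, while EFX_TSO_MAX_SEGS and the ring-size clamp are illustrative stand-ins for the driver's values.

```c
#include <linux/netdevice.h>

#define EFX_TSO_MAX_SEGS 100	/* the limit named in the commit message */

static void limit_tso_segs_sketch(struct net_device *netdev,
				  unsigned int *tx_ring_size,
				  unsigned int min_ring_size)
{
	/* Cap how many segments the stack may ask us to emit per TSO skb. */
	netdev->gso_max_segs = EFX_TSO_MAX_SEGS;

	/* Fix up undersized rings instead of rejecting them with -EINVAL,
	 * so two worst-case skbs always fit.
	 */
	if (*tx_ring_size < min_ring_size)
		*tx_ring_size = min_ring_size;
}
```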
2012-07-17  sfc: Stop changing header offsets on TX  (Ben Hutchings)
There is nothing in the VLAN driver or core VLAN support that invalidates the TCP and IP header offsets. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2012-07-17  sfc: Remove dead write to tso_state::packet_space  (Ben Hutchings)
tso_state::packet_space is always set in tso_start_packet(); the value set in tso_start() is not used, and is also incorrect. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2012-07-17  sfc: Use generic DMA API, not PCI-DMA API  (Ben Hutchings)
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
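The conversion this commit title refers to follows a mechanical pattern; a hedged before/after sketch, assuming a simple single-buffer mapping:

```c
#include <linux/dma-mapping.h>
#include <linux/pci.h>

static dma_addr_t map_tx_data_sketch(struct pci_dev *pdev, void *data,
				     size_t len)
{
	/* Old PCI-DMA API style (removed by the conversion):
	 *	addr = pci_map_single(pdev, data, len, PCI_DMA_TODEVICE);
	 *	if (pci_dma_mapping_error(pdev, addr)) ...
	 *
	 * Generic DMA API equivalent:
	 */
	dma_addr_t addr = dma_map_single(&pdev->dev, data, len, DMA_TO_DEVICE);

	if (dma_mapping_error(&pdev->dev, addr))
		return 0;	/* sketch convention: 0 means failure */
	return addr;
}
```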
2012-02-22  sfc: Minor formatting cleanup  (Ben Hutchings)
Fix some indentation and line continuations. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2012-02-13  sfc: Replace some literal constants with EFX_PAGE_SIZE/EFX_BUF_SIZE  (Ben Hutchings)
The 'page size' for PCIe DMA, i.e. the alignment of boundaries at which DMA must be broken, is 4KB. Name this value as EFX_PAGE_SIZE and use it in efx_max_tx_len(). Redefine EFX_BUF_SIZE as EFX_PAGE_SIZE since its value is also a result of that requirement, and use it in efx_init_special_buffer(). Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
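A sketch of the boundary rule, using the 4KB value named in the commit: a TX descriptor must not cross a 4KB DMA boundary, so the usable length starting at a given address is capped accordingly (this is the calculation efx_max_tx_len() enforces).

```c
#include <linux/types.h>

#define EFX_PAGE_SIZE 4096u	/* PCIe DMA boundary named in the commit */

/* Largest descriptor length usable from dma_addr without crossing
 * a 4KB boundary.
 */
static unsigned int max_tx_len_sketch(dma_addr_t dma_addr, unsigned int len)
{
	unsigned int to_boundary =
		EFX_PAGE_SIZE - ((unsigned int)dma_addr & (EFX_PAGE_SIZE - 1));

	return len < to_boundary ? len : to_boundary;
}
```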
2012-01-27  sfc: Remove remnants of on-load self-test  (Ben Hutchings)
The out-of-tree version of the sfc driver used to run a self-test on each device before registering it. Although this was never included in-tree, some functions still have checks for this special case, which is not actually possible. Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
2011-12-04  sfc: Use kcalloc instead of kzalloc to allocate array  (Thomas Meyer)
The advantage of kcalloc is that it prevents the integer overflows which could result from multiplying the number of elements by the element size, and it is also a bit nicer to read. The semantic patch that makes this change is available at https://lkml.org/lkml/2011/11/25/107 Signed-off-by: Thomas Meyer <thomas@m3y3r.de> Signed-off-by: David S. Miller <davem@davemloft.net>
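The substitution is mechanical; kcalloc() checks the count * size multiplication for overflow before allocating:

```c
#include <linux/slab.h>

/* Before: the multiplication can silently overflow.
 *	buf = kzalloc(count * sizeof(*buf), GFP_KERNEL);
 *
 * After: kcalloc() returns NULL if count * elem_size would overflow.
 */
static void *alloc_array_sketch(size_t count, size_t elem_size)
{
	return kcalloc(count, elem_size, GFP_KERNEL);
}
```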
2011-11-30  sfc: fix race in efx_enqueue_skb_tso()  (Eric Dumazet)
As soon as skb is pushed to hardware, it can be completed and freed, so we should not dereference skb anymore. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
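The general shape of the fix, with a hypothetical push helper: anything still needed from the skb (here its length) is captured before the descriptors are handed to the hardware, since the completion path may free the skb immediately afterwards.

```c
#include <linux/skbuff.h>

/* hw_push_sketch() stands in for the point at which descriptors become
 * visible to the NIC; it is hypothetical, not a driver function.
 */
extern void hw_push_sketch(struct sk_buff *skb);

static unsigned int enqueue_sketch(struct sk_buff *skb)
{
	unsigned int len = skb->len;	/* capture before the push */

	hw_push_sketch(skb);
	/* Do NOT touch skb here: it may already be completed and freed. */

	return len;	/* use the saved value for statistics, queue accounting, etc. */
}
```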
2011-11-29  sfc: Support for byte queue limits  (Tom Herbert)
Changes to sfc to use byte queue limits. Signed-off-by: Tom Herbert <therbert@google.com> Acked-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
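Byte queue limits hook into a driver's TX path through a small core API; a hedged sketch of where the calls go:

```c
#include <linux/netdevice.h>

/* On transmit: report the bytes handed to the hardware. */
static void tx_sent_sketch(struct netdev_queue *txq, unsigned int bytes)
{
	netdev_tx_sent_queue(txq, bytes);
}

/* On TX completion: report packets/bytes actually finished; this may
 * allow the stack to restart the queue once enough bytes have drained.
 */
static void tx_completed_sketch(struct netdev_queue *txq,
				unsigned int pkts, unsigned int bytes)
{
	netdev_tx_completed_queue(txq, pkts, bytes);
}

/* When the queue is torn down or flushed, reset the accounting. */
static void tx_reset_sketch(struct netdev_queue *txq)
{
	netdev_tx_reset_queue(txq);
}
```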
2011-10-19  net: add skb frag size accessors  (Eric Dumazet)
To ease skb->truesize sanitization, it's better to be able to localize all references to skb frag sizes. Define accessors: skb_frag_size() to fetch a frag's size, and skb_frag_size_{set|add|sub}() to manipulate it. Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com> Signed-off-by: David S. Miller <davem@davemloft.net>
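These accessors are part of the core skb API; a brief usage sketch iterating an skb's paged fragments:

```c
#include <linux/skbuff.h>

/* Sum the paged-fragment bytes of an skb using the accessors
 * introduced by this change.
 */
static unsigned int frag_bytes_sketch(const struct sk_buff *skb)
{
	unsigned int total = 0;
	int i;

	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
		const skb_frag_t *frag = &skb_shinfo(skb)->frags[i];

		total += skb_frag_size(frag);
	}
	return total;
}
```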
2011-10-06  net: use DMA_x_DEVICE and dma_mapping_error with skb_frag_dma_map  (Ian Campbell)
When I converted some drivers from pci_map_page to skb_frag_dma_map I neglected to convert PCI_DMA_xDEVICE into DMA_x_DEVICE and pci_dma_mapping_error into dma_mapping_error. Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Signed-off-by: David S. Miller <davem@davemloft.net>
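A hedged sketch of the corrected calling convention: skb_frag_dma_map() takes the generic DMA direction constants, and failures are checked with dma_mapping_error().

```c
#include <linux/dma-mapping.h>
#include <linux/skbuff.h>

static int map_frag_sketch(struct device *dev, const skb_frag_t *frag,
			   dma_addr_t *dma_addr)
{
	/* DMA_TO_DEVICE, not the old PCI_DMA_TODEVICE constant. */
	*dma_addr = skb_frag_dma_map(dev, frag, 0, skb_frag_size(frag),
				     DMA_TO_DEVICE);

	/* dma_mapping_error(), not pci_dma_mapping_error(). */
	if (dma_mapping_error(dev, *dma_addr))
		return -EIO;
	return 0;
}
```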
2011-09-22  sfc: convert to SKB paged frag API  (Ian Campbell)
Signed-off-by: Ian Campbell <ian.campbell@citrix.com> Cc: Solarflare linux maintainers <linux-net-drivers@solarflare.com> Cc: Steve Hodgson <shodgson@solarflare.com> Cc: Ben Hutchings <bhutchings@solarflare.com> Cc: netdev@vger.kernel.org Signed-off-by: David S. Miller <davem@davemloft.net>
2011-08-11  sfc: Move the Solarflare drivers  (Jeff Kirsher)
Move the Solarflare drivers into drivers/net/ethernet/sfc/ and make the necessary Kconfig and Makefile changes. CC: Steve Hodgson <shodgson@solarflare.com> CC: Ben Hutchings <bhutchings@solarflare.com> Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>