<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/drivers/net/sfc, branch v3.0.79</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/drivers/net/sfc?h=v3.0.79</id>
<link rel='self' href='https://git.amat.us/linux/atom/drivers/net/sfc?h=v3.0.79'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2013-03-28T19:06:01Z</updated>
<entry>
<title>sfc: Only use TX push if a single descriptor is to be written</title>
<updated>2013-03-28T19:06:01Z</updated>
<author>
<name>Ben Hutchings</name>
<email>bhutchings@solarflare.com</email>
</author>
<published>2013-02-27T16:50:38Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=9bb104c28a389c39812b15b39672aa87b91bcd79'/>
<id>urn:sha1:9bb104c28a389c39812b15b39672aa87b91bcd79</id>
<content type='text'>
[ Upstream commit fae8563b25f73dc584a07bcda7a82750ff4f7672 ]

Using TX push when notifying the NIC of multiple new descriptors in
the ring will very occasionally cause the TX DMA engine to re-use an
old descriptor.  This can result in a duplicated or partly duplicated
packet (new headers with old data), or an IOMMU page fault.  This does
not happen when the pushed descriptor is the only one written.

TX push also provides little latency benefit when a packet requires
more than one descriptor.

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sfc: Disable soft interrupt handling during efx_device_detach_sync()</title>
<updated>2013-03-28T19:06:01Z</updated>
<author>
<name>Ben Hutchings</name>
<email>bhutchings@solarflare.com</email>
</author>
<published>2013-03-05T01:03:47Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ad0c4a9fa31036fefb30385edfbd1feb8971de97'/>
<id>urn:sha1:ad0c4a9fa31036fefb30385edfbd1feb8971de97</id>
<content type='text'>
[ Upstream commit 35205b211c8d17a8a0b5e8926cb7c73e9a7ef1ad ]

efx_device_detach_sync() locks all TX queues before marking the device
detached and thus disabling further TX scheduling.  But it can still
be interrupted by TX completions which then result in TX scheduling in
soft interrupt context.  This will deadlock when it tries to acquire
a TX queue lock that efx_device_detach_sync() already acquired.

To avoid deadlock, we must use netif_tx_{,un}lock_bh().

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sfc: Detach net device when stopping queues for reconfiguration</title>
<updated>2013-03-28T19:06:01Z</updated>
<author>
<name>Ben Hutchings</name>
<email>bhutchings@solarflare.com</email>
</author>
<published>2013-01-28T19:01:06Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=c62fe657e9c08b273aac0c3a0556ccdce9ede49a'/>
<id>urn:sha1:c62fe657e9c08b273aac0c3a0556ccdce9ede49a</id>
<content type='text'>
[ Upstream commit 29c69a4882641285a854d6d03ca5adbba68c0034 ]

We must only ever stop TX queues when they are full or the net device
is not 'ready' so far as the net core, and specifically the watchdog,
is concerned.  Otherwise, the watchdog may fire *immediately* if no
packets have been added to the queue in the last 5 seconds.

The device is ready if all the following are true:

(a) It has a qdisc
(b) It is marked present
(c) It is running
(d) The link is reported up

(a) and (c) are normally true, and must not be changed by a driver.
(d) is under our control, but fake link changes may disturb userland.
This leaves (b).  We already mark the device absent during reset
and self-test, but we need to do the same during MTU changes and ring
reallocation.  We don't need to do this when the device is brought
down because then (c) is already false.
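
As a minimal model, the readiness test above is a conjunction of the
four conditions; the function and flag names here are illustrative,
not the net core's:

```python
def netif_ready(has_qdisc, present, running, link_up):
    # Watchdog model: only a device meeting all four conditions
    # (a) through (d) can trigger a TX timeout on a stopped queue.
    return has_qdisc and present and running and link_up

# Normal operation: all four hold, so stopped queues are watched.
assert netif_ready(True, True, True, True)
# Marking the device absent, condition (b), during an MTU change or
# ring reallocation keeps the watchdog quiet while queues are stopped.
assert not netif_ready(True, False, True, True)
```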

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
[bwh: Backported to 3.0: adjust context]
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sfc: Fix efx_rx_buf_offset() in the presence of swiotlb</title>
<updated>2013-03-28T19:06:01Z</updated>
<author>
<name>Ben Hutchings</name>
<email>bhutchings@solarflare.com</email>
</author>
<published>2013-01-10T23:51:54Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=67d8c1035e0c960a3d41abe532ea868bb3985f22'/>
<id>urn:sha1:67d8c1035e0c960a3d41abe532ea868bb3985f22</id>
<content type='text'>
[ Upstream commits 06e63c57acbb1df7c35ebe846ae416a8b88dfafa,
  b590ace09d51cd39744e0f7662c5e4a0d1b5d952 and
  c73e787a8db9117d59b5180baf83203a42ecadca ]

We assume that the mapping between DMA and virtual addresses is done
on whole pages, so we can find the page offset of an RX buffer using
the lower bits of the DMA address.  However, swiotlb maps in units of
2K, breaking this assumption.

Add an explicit page_offset field to struct efx_rx_buffer.
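
The broken assumption can be illustrated with a small model of the old
offset recovery; PAGE_SIZE and the addresses below are illustrative
values, not taken from real hardware:

```python
PAGE_SIZE = 4096
SWIOTLB_UNIT = 2048   # swiotlb bounces buffers in 2K-aligned units

def rx_buf_offset_from_dma(dma_addr):
    """Sketch of the old efx_rx_buf_offset() logic: recover the page
    offset of an RX buffer from the low bits of its DMA address."""
    return dma_addr % PAGE_SIZE

# Direct mapping: DMA and virtual addresses share their page offset.
assert rx_buf_offset_from_dma(0x12343100) == 0x100

# swiotlb: the buffer is copied to a 2K-aligned bounce slot, so the
# low DMA address bits no longer reflect the buffer's real offset
# within its original page.
real_offset = 0x900                  # true offset in the RX page
bounce_dma = 0x7F000800              # 2K-aligned bounce slot
assert rx_buf_offset_from_dma(bounce_dma) != real_offset
```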

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
[bwh: Backported to 3.0: adjust context]
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sfc: Properly sync RX DMA buffer when it is not the last in the page</title>
<updated>2013-03-28T19:06:01Z</updated>
<author>
<name>Ben Hutchings</name>
<email>bhutchings@solarflare.com</email>
</author>
<published>2012-12-20T18:48:20Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=bbd3cfb8cbb326f32f1daec0ea6ffbf855a7ecc8'/>
<id>urn:sha1:bbd3cfb8cbb326f32f1daec0ea6ffbf855a7ecc8</id>
<content type='text'>
[ Upstream commit 3a68f19d7afb80f548d016effbc6ed52643a8085 ]

We may currently allocate two RX DMA buffers to a page, and only unmap
the page when the second is completed.  We do not sync the first RX
buffer to be completed; this can result in packet loss or corruption
if the last RX buffer completed in a NAPI poll is the first in a page
and is not DMA-coherent.  (In the middle of a NAPI poll, we will
handle the following RX completion and unmap the page *before* looking
at the content of the first buffer.)

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
[bwh: Backported to 3.0: adjust context]
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sfc: Fix timekeeping in efx_mcdi_poll()</title>
<updated>2013-03-28T19:06:01Z</updated>
<author>
<name>Ben Hutchings</name>
<email>bhutchings@solarflare.com</email>
</author>
<published>2012-12-01T02:21:17Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=16cec22e5099020edb0ba8b6ae8f1b011e2ec4d5'/>
<id>urn:sha1:16cec22e5099020edb0ba8b6ae8f1b011e2ec4d5</id>
<content type='text'>
[ Upstream commit ebf98e797b4e26ad52ace1511a0b503ee60a6cd4 ]

efx_mcdi_poll() uses get_seconds() to read the current time and to
implement a polling timeout.  The use of this function was chosen
partly because it could easily be replaced in a co-sim environment
with a macro that read the simulated time.

Unfortunately the real get_seconds() returns the system time (real
time) which is subject to adjustment by e.g. ntpd.  If the system time
is adjusted forward during a polled MCDI operation, the effective
timeout can be shorter than the intended 10 seconds, resulting in a
spurious failure.  It is also possible for a backward adjustment to
delay detection of a real failure.

Use jiffies instead, and change MCDI_RPC_TIMEOUT to be denominated in
jiffies.  Also correct rounding of the timeout: check time &gt; finish
(or rather time_after(time, finish)) and not time &gt;= finish.
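
The wraparound-safe comparison that time_after() performs, and the
strict inequality at the deadline, can be sketched in userspace as
follows; this is a plain-Python model, not the kernel macro itself:

```python
BITS = 32                 # jiffies compared as a signed machine word
HALF = 2 ** (BITS - 1)

def time_after(a, b):
    """True when jiffies value a is strictly later than b, even if
    the counter wrapped around between the two readings."""
    diff = (a - b) % (2 ** BITS)   # unsigned wraparound difference
    return diff in range(1, HALF)  # i.e. signed difference is positive

# The poll loop must give up strictly after the deadline, never at it:
finish = 1000
assert not time_after(finish, finish)   # time equals finish: keep polling
assert time_after(finish + 1, finish)   # one tick past: timed out
# Correct even when the counter wraps between start and deadline:
assert time_after(3, 2 ** BITS - 5)
```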

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
[bwh: Backported to 3.0: adjust context]
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sfc: lock TX queues when calling netif_device_detach()</title>
<updated>2013-03-28T19:06:00Z</updated>
<author>
<name>Daniel Pieczko</name>
<email>dpieczko@solarflare.com</email>
</author>
<published>2012-10-17T12:21:23Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=376ed848f420a921325e9dc144c9cc7fa3829a38'/>
<id>urn:sha1:376ed848f420a921325e9dc144c9cc7fa3829a38</id>
<content type='text'>
[ Upstream commit c2f3b8e3a44b6fe9e36704e30157ebe1a88c08b1 ]

The assertion of netif_device_present() at the top of
efx_hard_start_xmit() may fail if we don't do this.

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
[bwh: Backported to 3.0: adjust context]
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sfc: Fix two causes of flush failure</title>
<updated>2013-03-28T19:06:00Z</updated>
<author>
<name>Ben Hutchings</name>
<email>bhutchings@solarflare.com</email>
</author>
<published>2011-05-23T11:18:45Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=442933f2b6a4c0e1d4a3b216c55c720a01c032be'/>
<id>urn:sha1:442933f2b6a4c0e1d4a3b216c55c720a01c032be</id>
<content type='text'>
[ Upstream commits a606f4325dca6950996abbae452d33f2af095f39,
  d5e8cc6c946e0857826dcfbb3585068858445bfe,
  525d9e824018cd7cc8d8d44832ddcd363abfe6e1 ]

The TX DMA engine issues upstream read requests when there is room in
the TX FIFO for the completion. However, the fetches for the rest of
the packet might be delayed by any back pressure.  Since a flush must
wait for an EOP, the entire flush may be delayed by back pressure.

Mitigate this by disabling flow control before the flushes are
started.  Since PF and VF flushes run in parallel, introduce
fc_disable, a reference count of the number of flushes outstanding.

The same principle could be applied to Falcon, but that
would bring its own testing requirements.

We sometimes hit a "failed to flush" timeout on some TX queues, but the
flushes have completed and the flush completion events seem to go missing.
In this case, we can check the TX_DESC_PTR_TBL register and drain the
queues if the flushes have finished.

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
[bwh: Backported to 3.0:
 - Call efx_nic_type::finish_flush() on both success and failure paths
 - Check the TX_DESC_PTR_TBL registers in the polling loop
 - Declare efx_mcdi_set_mac() extern]
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sfc: Convert firmware subtypes to native byte order in efx_mcdi_get_board_cfg()</title>
<updated>2013-03-28T19:06:00Z</updated>
<author>
<name>Ben Hutchings</name>
<email>bhutchings@solarflare.com</email>
</author>
<published>2012-09-06T23:58:10Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=886033e132d6d83d6e7738e3edcd1598a7b66cf0'/>
<id>urn:sha1:886033e132d6d83d6e7738e3edcd1598a7b66cf0</id>
<content type='text'>
[ Upstream commit bfeed902946a31692e7a24ed355b6d13ac37d014 ]

On big-endian systems the MTD partition names currently have mangled
subtype numbers and are not recognised by the firmware update tool
(sfupdate).

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
[bwh: Backported to 3.0: use old macros for length of firmware subtype array]
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>sfc: Do not attempt to flush queues if DMA is disabled</title>
<updated>2013-03-28T19:06:00Z</updated>
<author>
<name>Stuart Hodgson</name>
<email>smhodgson@solarflare.com</email>
</author>
<published>2012-03-30T12:04:51Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=c7c9da144089da9241afd57398144ba4860e91bd'/>
<id>urn:sha1:c7c9da144089da9241afd57398144ba4860e91bd</id>
<content type='text'>
[ Upstream commit 3dca9d2dc285faf1910d405b65df845cab061356 ]

efx_nic_fatal_interrupt() disables DMA before scheduling a reset.
After this, we need not and *cannot* flush queues.

Signed-off-by: Ben Hutchings &lt;bhutchings@solarflare.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
</feed>
