author     Michael S. Tsirkin <mst@redhat.com>                  2013-08-04 16:26:06 +0800
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>      2013-08-04 16:26:06 +0800
commit     40a017c96a98a29c9d39bf0ca34651288984e9ce (patch)
tree       9666fa3164cb824d79f85d088f16beb823140494 /drivers
parent     2010fa31f8f5c5fa8385459dff03cb844cf79dd1 (diff)
virtio_net: fix race in RX VQ processing
commit cbdadbbf0c790f79350a8f36029208944c5487d0 upstream.

virtio net called virtqueue_enable_cb on the RX path after napi_complete, so
with NAPI_STATE_SCHED clear - outside the implicit napi lock. This violates
the requirement to synchronize virtqueue_enable_cb wrt virtqueue_add_buf. In
particular, the used event can move backwards, causing us to lose interrupts.
In a debug build, this can trigger a panic within START_USE.

Jason Wang reports that he can trigger the races artificially, by adding
udelay() in virtqueue_enable_cb() after virtio_mb().

However, we must call napi_complete to clear NAPI_STATE_SCHED before polling
the virtqueue for used buffers, otherwise napi_schedule_prep in a callback
will fail, causing us to lose RX events.

To fix, call virtqueue_enable_cb_prepare with NAPI_STATE_SCHED set (under the
napi lock), and later call virtqueue_poll with NAPI_STATE_SCHED clear (outside
the lock).

Reported-by: Jason Wang <jasowang@redhat.com>
Tested-by: Jason Wang <jasowang@redhat.com>
Acked-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
[wg: Backported to 3.2]
Signed-off-by: Wolfram Gloger <wmglo@dent.med.uni-muenchen.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
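For reference, a minimal sketch of the corrected ordering in a NAPI poll
handler. The struct my_rx type and the names my_poll_done/rx_vq are
illustrative placeholders, not the actual virtio_net code; the virtqueue_*
and napi_* calls are the in-kernel APIs the patch relies on.

#include <linux/netdevice.h>	/* napi_*() */
#include <linux/virtio.h>	/* virtqueue_*() */

/* Illustrative private state: one RX virtqueue plus its NAPI context. */
struct my_rx {
	struct virtqueue *rx_vq;
	struct napi_struct napi;
};

/* Tail of a NAPI poll handler, following the ordering introduced by
 * this patch.  Returns the number of packets processed this round.
 */
static int my_poll_done(struct my_rx *rx, int received, int budget)
{
	unsigned int r;

	if (received < budget) {
		/* Arm the callback while NAPI_STATE_SCHED is still set,
		 * i.e. under the implicit napi lock, so the used event
		 * index cannot move backwards.
		 */
		r = virtqueue_enable_cb_prepare(rx->rx_vq);

		/* Clear NAPI_STATE_SCHED so a later interrupt's
		 * napi_schedule_prep() can succeed.
		 */
		napi_complete(&rx->napi);

		/* Outside the lock, check whether buffers were used
		 * since the snapshot; if so, re-schedule polling.
		 */
		if (unlikely(virtqueue_poll(rx->rx_vq, r)) &&
		    napi_schedule_prep(&rx->napi)) {
			virtqueue_disable_cb(rx->rx_vq);
			__napi_schedule(&rx->napi);
		}
	}
	return received;
}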
Diffstat (limited to 'drivers')
-rw-r--r--	drivers/net/virtio_net.c | 5
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index cbefe671bcc..bc177b996f4 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -518,7 +518,7 @@ static int virtnet_poll(struct napi_struct *napi, int budget)
 {
 	struct virtnet_info *vi = container_of(napi, struct virtnet_info, napi);
 	void *buf;
-	unsigned int len, received = 0;
+	unsigned int r, len, received = 0;
 
 again:
 	while (received < budget &&
@@ -535,8 +535,9 @@ again:
 
 	/* Out of packets? */
 	if (received < budget) {
+		r = virtqueue_enable_cb_prepare(vi->rvq);
 		napi_complete(napi);
-		if (unlikely(!virtqueue_enable_cb(vi->rvq)) &&
+		if (unlikely(virtqueue_poll(vi->rvq, r)) &&
 		    napi_schedule_prep(napi)) {
 			virtqueue_disable_cb(vi->rvq);
 			__napi_schedule(napi);