| author | Krishna Kumar <krkumar2@in.ibm.com> | 2007-06-24 19:57:27 -0700 |
|---|---|---|
| committer | David S. Miller <davem@sunset.davemloft.net> | 2007-07-10 22:15:36 -0700 |
| commit | e50c41b53d7aa48152dd9c633b04fc7abd536f1f (patch) | |
| tree | 3f9ecdbf7a685820ad06321dadc73441e850ba10 /net/sched | |
| parent | 6c1361a6f285bf3df4b502651c0dd38d0eedc044 (diff) | |
[NET]: qdisc_restart - couple of optimizations.
Changes:
- netif_queue_stopped() need not be called inside qdisc_restart(), as
  it has already been called in qdisc_run() before the first skb is
  sent, and in __qdisc_run() after each intermediate skb is sent
  (note: we are the only sender, so the queue cannot get stopped while
  the tx lock is held in the ~LLTX case). A sketch of both callers
  follows this list.
- BUG_ON((int) q->q.qlen < 0) was a relic from old times when -1 meant
  more packets are available, and __qdisc_run() used to loop while
  qdisc_restart() returned -1. In those days it was necessary to make
  sure that qlen never went below zero, since __qdisc_run() would get
  into an infinite loop if no packets were on the queue while a qdisc
  had this bug (and worse, no more skbs could ever get queued, as we
  hold the queue lock too). With Herbert's recent change to the return
  values, this check is no longer required. Hopefully Herbert can
  validate this change. If this check is required at all, it should be
  added to skb_dequeue() (in the failure case), not to qdisc_qlen().
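For reference, here is a minimal sketch of the two callers described above. It is a paraphrase for illustration, not the verbatim kernel source (the real code lives in net/sched/sch_generic.c and include/net/pkt_sched.h in this kernel version): qdisc_run() checks netif_queue_stopped() once before the first skb, and __qdisc_run() re-checks it after every skb that qdisc_restart() sends, so qdisc_restart() itself no longer needs the check, and the loop stops on a plain zero return rather than a -1 sentinel.

```c
/*
 * Minimal sketch of the callers, paraphrased for illustration only;
 * see net/sched/sch_generic.c and include/net/pkt_sched.h for the
 * actual code in this kernel version.
 */
#include <linux/netdevice.h>

int qdisc_restart(struct net_device *dev);	/* simplified: static inline in sch_generic.c */

void __qdisc_run(struct net_device *dev)
{
	do {
		/* After Herbert's return-value change, qdisc_restart()
		 * returns 0 when there is nothing left to send; the old
		 * -1 sentinel (and hence the qlen BUG_ON) is gone. */
		if (!qdisc_restart(dev))
			break;
	} while (!netif_queue_stopped(dev));	/* re-checked after each skb */

	clear_bit(__LINK_STATE_QDISC_RUNNING, &dev->state);
}

static inline void qdisc_run(struct net_device *dev)
{
	/* Checked here before the first skb is dequeued and sent. */
	if (!netif_queue_stopped(dev) &&
	    !test_and_set_bit(__LINK_STATE_QDISC_RUNNING, &dev->state))
		__qdisc_run(dev);
}
```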
Signed-off-by: Krishna Kumar <krkumar2@in.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Diffstat (limited to 'net/sched')
-rw-r--r-- | net/sched/sch_generic.c | 5 |
1 file changed, 1 insertion, 4 deletions
```diff
diff --git a/net/sched/sch_generic.c b/net/sched/sch_generic.c
index 983c32caf71..2488dbb17b6 100644
--- a/net/sched/sch_generic.c
+++ b/net/sched/sch_generic.c
@@ -61,7 +61,6 @@ void qdisc_unlock_tree(struct net_device *dev)
 
 static inline int qdisc_qlen(struct Qdisc *q)
 {
-	BUG_ON((int) q->q.qlen < 0);
 	return q->q.qlen;
 }
 
@@ -167,9 +166,7 @@ static inline int qdisc_restart(struct net_device *dev)
 		/* And release queue */
 		spin_unlock(&dev->queue_lock);
 
-		ret = NETDEV_TX_BUSY;
-		if (!netif_queue_stopped(dev))
-			ret = dev_hard_start_xmit(skb, dev);
+		ret = dev_hard_start_xmit(skb, dev);
 
 		if (!lockless)
 			netif_tx_unlock(dev);
```