<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/net/core, branch v2.6.22.5</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/net/core?h=v2.6.22.5</id>
<link rel='self' href='https://git.amat.us/linux/atom/net/core?h=v2.6.22.5'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2007-08-09T21:27:29Z</updated>
<entry>
<title>Netpoll leak</title>
<updated>2007-08-09T21:27:29Z</updated>
<author>
<name>Satyam Sharma</name>
<email>ssatyam@cse.iitk.ac.in</email>
</author>
<published>2007-07-18T09:54:19Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=acad36f93ea2afec9a73fb54283cbc359d1abf27'/>
<id>urn:sha1:acad36f93ea2afec9a73fb54283cbc359d1abf27</id>
<content type='text'>
[NETPOLL]: Fix a leak-n-bug in netpoll_cleanup()

93ec2c723e3f8a216dde2899aeb85c648672bc6b applied excessive duct tape to
the netpoll beast's netpoll_cleanup(), thus substituting one leak for
another and opening up a little buglet :-)

net_device-&gt;npinfo (netpoll_info) is a shared, refcounted object and
cannot simply be set to NULL the first time netpoll_cleanup() is called.
Otherwise, subsequent netpoll_cleanup() calls see np-&gt;dev-&gt;npinfo ==
NULL and become no-ops, thus leaking. And it's a bug too: the first call
to netpoll_cleanup() would (annoyingly) "disable" the other, still-alive
netpolls as well. Maybe nobody noticed because netconsole (the only user
of netpoll) never supported multiple netpoll objects before.

This is a trivial and obvious one-line fixlet.
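A minimal sketch of the fixed teardown logic (an illustration only, not
the literal patch; the refcount field name 'refcnt' is an assumption):

```c
/* Sketch: npinfo is shared and refcounted, so free it and clear the
 * shared pointer only when the last reference is dropped. */
void netpoll_cleanup(struct netpoll *np)
{
	struct netpoll_info *npinfo = np-&gt;dev-&gt;npinfo;

	if (npinfo &amp;&amp; atomic_dec_and_test(&amp;npinfo-&gt;refcnt)) {
		kfree(npinfo);
		np-&gt;dev-&gt;npinfo = NULL;	/* only now is this safe */
	}
}
```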

Signed-off-by: Satyam Sharma &lt;ssatyam@cse.iitk.ac.in&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>gen estimator deadlock fix</title>
<updated>2007-08-09T21:27:27Z</updated>
<author>
<name>Ranko Zivojnovic</name>
<email>ranko@spidernet.net</email>
</author>
<published>2007-07-18T09:49:48Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=8a1c1646795c03edc0c4f18d3ad97e18e56f888c'/>
<id>urn:sha1:8a1c1646795c03edc0c4f18d3ad97e18e56f888c</id>
<content type='text'>
[NET]: gen_estimator deadlock fix

-Fixes the ABBA deadlock noted by Patrick McHardy &lt;kaber@trash.net&gt;:

&gt; There is at least one ABBA deadlock, est_timer() does:
&gt; read_lock(&amp;est_lock)
&gt; spin_lock(e-&gt;stats_lock) (which is dev-&gt;queue_lock)
&gt;
&gt; and qdisc_destroy calls htb_destroy under dev-&gt;queue_lock, which
&gt; calls htb_destroy_class, then gen_kill_estimator and this
&gt; write_locks est_lock.

To fix the ABBA deadlock, the rate estimators are now kept on an RCU
list.

-The use of est_lock changes from protecting the list to protecting
the update of the 'bstat' pointer, in order to avoid a NULL dereference.

-The 'interval' member of the gen_estimator structure is removed, as it
is not needed.
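As a rough sketch of the RCU conversion (the list name 'est_list' and
the loop body are assumptions for illustration, not the literal patch):

```c
/* Sketch: est_timer() walks the estimators under RCU instead of
 * read_lock(&amp;est_lock), breaking the est_lock/queue_lock ABBA order. */
static void est_timer(unsigned long arg)
{
	struct gen_estimator *e;

	rcu_read_lock();
	list_for_each_entry_rcu(e, &amp;est_list, list) {
		spin_lock(e-&gt;stats_lock);
		/* ... update the rate estimate from e-&gt;bstats ... */
		spin_unlock(e-&gt;stats_lock);
	}
	rcu_read_unlock();
}
```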

Signed-off-by: Ranko Zivojnovic &lt;ranko@spidernet.net&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>gen estimator timer unload race</title>
<updated>2007-08-09T21:27:27Z</updated>
<author>
<name>Patrick McHardy</name>
<email>kaber@trash.net</email>
</author>
<published>2007-07-18T09:48:43Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=2e9d3cf88b10374bc7a863f4ad9906245d29d2b3'/>
<id>urn:sha1:2e9d3cf88b10374bc7a863f4ad9906245d29d2b3</id>
<content type='text'>
[NET]: Fix gen_estimator timer removal race

As noticed by Jarek Poplawski &lt;jarkao2@o2.pl&gt;, the timer removal in
gen_kill_estimator races with the timer function rearming the timer.

Fix this by checking whether the timer list is empty before rearming
the timer in the timer function.
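In outline, the check might look like this (a sketch; the list and
timer names are assumptions):

```c
/* Sketch: inside est_timer(), rearm only while estimators remain, so
 * del_timer() in gen_kill_estimator() cannot race with a rearm. */
if (!list_empty(&amp;est_list))
	mod_timer(&amp;est_timer, jiffies + EST_INTERVAL);
```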

Signed-off-by: Patrick McHardy &lt;kaber@trash.net&gt;
Acked-by: Jarek Poplawski &lt;jarkao2@o2.pl&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>[NETPOLL]: Fixups for 'fix soft lockup when removing module'</title>
<updated>2007-07-06T00:42:44Z</updated>
<author>
<name>Jarek Poplawski</name>
<email>jarkao2@o2.pl</email>
</author>
<published>2007-07-06T00:42:44Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=25442cafb8cc3d979418caccabc91260707a0947'/>
<id>urn:sha1:25442cafb8cc3d979418caccabc91260707a0947</id>
<content type='text'>
From my recent patch:

&gt; &gt;    #1
&gt; &gt;    Until kernel ver. 2.6.21 (including) cancel_rearming_delayed_work()
&gt; &gt;    required a work function should always (unconditionally) rearm with
&gt; &gt;    delay &gt; 0 - otherwise it would endlessly loop. This patch replaces
&gt; &gt;    this function with cancel_delayed_work(). Later kernel versions don't
&gt; &gt;    require this, so here it's only for uniformity.

But Oleg Nesterov &lt;oleg@tv-sign.ru&gt; found:

&gt; But 2.6.22 doesn't need this change, why it was merged?
&gt; 
&gt; In fact, I suspect this change adds a race,
...

His description was right (thanks), so this patch reverts #1.

Signed-off-by: Jarek Poplawski &lt;jarkao2@o2.pl&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[NET]: net/core/netevent.c should #include &lt;net/netevent.h&gt;</title>
<updated>2007-07-06T00:40:27Z</updated>
<author>
<name>Adrian Bunk</name>
<email>bunk@stusta.de</email>
</author>
<published>2007-07-06T00:06:21Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=94b83419e5b56a87410fd9c9939f0081fc155d65'/>
<id>urn:sha1:94b83419e5b56a87410fd9c9939f0081fc155d65</id>
<content type='text'>
Every file should include the headers containing the prototypes for
its global functions.

Signed-off-by: Adrian Bunk &lt;bunk@stusta.de&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[NET] skbuff: remove export of static symbol</title>
<updated>2007-07-06T00:40:19Z</updated>
<author>
<name>Johannes Berg</name>
<email>johannes@sipsolutions.net</email>
</author>
<published>2007-07-06T00:03:09Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=2cd052e44329dd2b42eb958f8f346b053de6e2cd'/>
<id>urn:sha1:2cd052e44329dd2b42eb958f8f346b053de6e2cd</id>
<content type='text'>
skb_clone_fraglist is static so it shouldn't be exported.

Signed-off-by: Johannes Berg &lt;johannes@sipsolutions.net&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[NETPOLL] netconsole: fix soft lockup when removing module</title>
<updated>2007-06-29T05:11:47Z</updated>
<author>
<name>Jarek Poplawski</name>
<email>jarkao2@o2.pl</email>
</author>
<published>2007-06-29T05:11:47Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=17200811cf539b9107a99a39bf71ba3567966285'/>
<id>urn:sha1:17200811cf539b9107a99a39bf71ba3567966285</id>
<content type='text'>
#1
Until kernel ver. 2.6.21 (including) cancel_rearming_delayed_work()
required a work function should always (unconditionally) rearm with
delay &gt; 0 - otherwise it would endlessly loop. This patch replaces
this function with cancel_delayed_work(). Later kernel versions don't
require this, so here it's only for uniformity.

#2
After the timer is deleted in cancel_[rearming_]delayed_work(), a last
skb could remain queued in npinfo-&gt;txq, causing a memory leak after
kfree(npinfo).
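The leak-avoiding teardown might be sketched as follows (illustration
only, not quoted from the patch):

```c
/* Sketch: cancel the delayed work, then purge any skb still queued
 * in npinfo-&gt;txq before the final kfree(). */
cancel_delayed_work(&amp;npinfo-&gt;tx_work);
flush_scheduled_work();
skb_queue_purge(&amp;npinfo-&gt;txq);
kfree(npinfo);
```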

Initial patch &amp; testing by: Jason Wessel &lt;jason.wessel@windriver.com&gt;

Signed-off-by: Jarek Poplawski &lt;jarkao2@o2.pl&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[NETPOLL]: tx lock deadlock fix</title>
<updated>2007-06-27T07:39:42Z</updated>
<author>
<name>Stephen Hemminger</name>
<email>shemminger@linux.foundation.org</email>
</author>
<published>2007-06-27T07:39:42Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=0db3dc73f7a3a73b0dc725b6a991253f5652c905'/>
<id>urn:sha1:0db3dc73f7a3a73b0dc725b6a991253f5652c905</id>
<content type='text'>
If the sky2 device poll routine is called from netpoll_send_skb, it
deadlocks: netpoll_send_skb holds the netif_tx_lock, and the poll
routine may acquire it to clean up skbs. Other drivers might use the
same locking model.

The driver is correct; netpoll should not introduce more locking
problems than it already does. So change the code to drop the lock
before calling the poll handler.
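In outline (a sketch; the exact locking helpers used by netpoll
differ):

```c
/* Sketch: drop the tx lock around the driver's poll handler so the
 * handler may take netif_tx_lock itself without deadlocking. */
netif_tx_unlock(dev);
dev-&gt;poll(dev, &amp;budget);	/* driver may take netif_tx_lock here */
netif_tx_lock(dev);
```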

Signed-off-by: Stephen Hemminger &lt;shemminger@linux.foundation.org&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[NET]: Make skb_seq_read unmap the last fragment</title>
<updated>2007-06-24T06:11:52Z</updated>
<author>
<name>Olaf Kirch</name>
<email>olaf.kirch@oracle.com</email>
</author>
<published>2007-06-24T06:11:52Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=5b5a60da281c767196427ce8144deae6ec46b389'/>
<id>urn:sha1:5b5a60da281c767196427ce8144deae6ec46b389</id>
<content type='text'>
Having walked through the entire skbuff, skb_seq_read would leave the
last fragment mapped.  As a consequence, the unwary caller would leak
kmaps, and proceed with preempt_count off by one. The only (kind of
non-intuitive) workaround is to use skb_seq_read_abort.

This patch makes sure skb_seq_read always unmaps frag_data after
having cycled through the skb's paged part.
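The end-of-walk cleanup might be sketched as (close in spirit to the
change, though not quoted from it):

```c
/* Sketch: when the paged part is exhausted, drop the last kmap so the
 * caller's preempt_count stays balanced. */
if (st-&gt;frag_data) {
	kunmap_skb_frag(st-&gt;frag_data);
	st-&gt;frag_data = NULL;
}
return 0;
```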

Signed-off-by: Olaf Kirch &lt;olaf.kirch@oracle.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>[NET]: Re-enable irqs before pushing pending DMA requests</title>
<updated>2007-06-24T06:09:23Z</updated>
<author>
<name>Shannon Nelson</name>
<email>shannon.nelson@intel.com</email>
</author>
<published>2007-06-24T06:09:23Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=515e06c4556bd8388db6b2bb2cd8859126932946'/>
<id>urn:sha1:515e06c4556bd8388db6b2bb2cd8859126932946</id>
<content type='text'>
This moves the local_irq_enable() call in net_rx_action() to before
the CONFIG_NET_DMA call to dma_async_memcpy_issue_pending() rather than
after it.  This shortens the irq-disabled window and allows for DMA
drivers that need to do their own irq hold.
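The reordering amounts to the following (sketch; the surrounding
net_rx_action() context is omitted and 'chan' is illustrative):

```c
/* Sketch: re-enable irqs first, then kick pending DMA copies outside
 * the irq-disabled window. */
local_irq_enable();
#ifdef CONFIG_NET_DMA
	dma_async_memcpy_issue_pending(chan);
#endif
```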

Signed-off-by: Shannon Nelson &lt;shannon.nelson@intel.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
</feed>
