<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/net/ipv4/proc.c, branch v3.3.5</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/net/ipv4/proc.c?h=v3.3.5</id>
<link rel='self' href='https://git.amat.us/linux/atom/net/ipv4/proc.c?h=v3.3.5'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2012-01-22T20:08:44Z</updated>
<entry>
<title>tcp: detect loss above high_seq in recovery</title>
<updated>2012-01-22T20:08:44Z</updated>
<author>
<name>Yuchung Cheng</name>
<email>ycheng@google.com</email>
</author>
<published>2012-01-19T14:42:21Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=974c12360dfe6ab01201fe9e708e7755c413f8b6'/>
<id>urn:sha1:974c12360dfe6ab01201fe9e708e7755c413f8b6</id>
<content type='text'>
Correctly implement a loss detection heuristic: new sequences (above
high_seq) sent during fast recovery are deemed lost when higher
sequences are SACKed.

The current code does not catch these losses, because tcp_mark_head_lost()
does not check packets beyond high_seq. The fix is straightforward:
check packets up to the highest SACKed packet. In addition, all the
FLAG_DATA_LOST logic is ineffective and redundant, so it can be removed.
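
A minimal sketch of the sequence test behind the heuristic (illustrative
code, not the patch itself):

#include &lt;net/tcp.h&gt;

/* A packet ending at end_seq is deemed lost once a higher sequence
 * has been SACKed, even when end_seq lies above high_seq. before()
 * is the kernel's wrap-safe sequence compare: (s32)(a - b) &lt; 0. */
static bool seq_deemed_lost(u32 end_seq, u32 highest_sack)
{
        return before(end_seq, highest_sack);
}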

Update the loss heuristic comments. The algorithm above is documented
as heuristic B, but it is redundant too because heuristic A already
covers B.

Note that this change only marks some forward-retransmitted packets LOST.
It does NOT forbid TCP from performing further CWR on new losses. A
potential follow-up patch under preparation is to perform another CWR on
"new" losses, such as:
1) a sequence above high_seq is lost (by resetting high_seq to snd_nxt)
2) a retransmission is lost.

Signed-off-by: Yuchung Cheng &lt;ycheng@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>foundations of per-cgroup memory pressure controlling.</title>
<updated>2011-12-13T00:04:10Z</updated>
<author>
<name>Glauber Costa</name>
<email>glommer@parallels.com</email>
</author>
<published>2011-12-11T21:47:02Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=180d8cd942ce336b2c869d324855c40c5db478ad'/>
<id>urn:sha1:180d8cd942ce336b2c869d324855c40c5db478ad</id>
<content type='text'>
This patch replaces all uses of the struct sock fields memory_pressure,
memory_allocated, sockets_allocated, and sysctl_mem with accessor
macros. Those macros can receive either a socket argument or a
mem_cgroup argument, depending on the context they live in.
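
As an illustration of the accessor pattern (the names below are
hypothetical, not the ones introduced by the patch):

/* Hypothetical accessors: callers stop touching the fields
 * directly, so each macro can later resolve through either a
 * socket or a mem_cgroup. */
#define sk_memory_pressure(sk)    ((sk)-&gt;sk_prot-&gt;memory_pressure)
#define sk_sockets_allocated(sk)  ((sk)-&gt;sk_prot-&gt;sockets_allocated)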

Since we are only wrapping the fields in macros here, no performance
impact at all is expected in the case where cgroups are disabled.

Signed-off-by: Glauber Costa &lt;glommer@parallels.com&gt;
Reviewed-by: Hiroyuki Kamezawa &lt;kamezawa.hiroyu@jp.fujitsu.com&gt;
CC: David S. Miller &lt;davem@davemloft.net&gt;
CC: Eric W. Biederman &lt;ebiederm@xmission.com&gt;
CC: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>ipv4: reduce percpu needs for icmpmsg mibs</title>
<updated>2011-11-09T21:04:20Z</updated>
<author>
<name>Eric Dumazet</name>
<email>eric.dumazet@gmail.com</email>
</author>
<published>2011-11-08T13:04:43Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=acb32ba3dee66d58704caeeb8c6ff95f60efdc66'/>
<id>urn:sha1:acb32ba3dee66d58704caeeb8c6ff95f60efdc66</id>
<content type='text'>
Reading /proc/net/snmp on a machine with a lot of cpus is very expensive
(can be ~88000 us).

This is because the ICMPMSG MIB uses 4096 bytes per cpu, and folding the
values for all possible cpus can mean reading 16 Mbytes of memory.

ICMP messages are not fast-path traffic on a typical server, and in
practice only a few cpus handle them anyway. We can afford an atomic
operation instead of using percpu data.

This saves 4096 bytes per cpu and per network namespace.
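
In sketch form (array size and names are illustrative):

#include &lt;linux/atomic.h&gt;

/* One shared array of atomic counters replaces the 4096-byte
 * percpu table: one atomic increment on a slow path in exchange
 * for 4 KB of memory saved per cpu. */
static atomic_long_t icmpmsg_mibs[256];

static inline void icmpmsg_count(unsigned int type)
{
        atomic_long_inc(&amp;icmpmsg_mibs[type &amp; 0xff]);
}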

Signed-off-by: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>net: Add export.h for EXPORT_SYMBOL/THIS_MODULE to non-modules</title>
<updated>2011-10-31T23:30:30Z</updated>
<author>
<name>Paul Gortmaker</name>
<email>paul.gortmaker@windriver.com</email>
</author>
<published>2011-07-15T15:47:34Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=bc3b2d7fb9b014d75ebb79ba371a763dbab5e8cf'/>
<id>urn:sha1:bc3b2d7fb9b014d75ebb79ba371a763dbab5e8cf</id>
<content type='text'>
These files are non-modular, but need to export symbols using
the macros now living in export.h -- call out the include so
that things won't break when we remove the implicit presence
of module.h from everywhere.
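
The pattern is simply (symbol name is hypothetical):

#include &lt;linux/export.h&gt;

int some_shared_state;          /* hypothetical symbol */
EXPORT_SYMBOL(some_shared_state);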

Signed-off-by: Paul Gortmaker &lt;paul.gortmaker@windriver.com&gt;
</content>
</entry>
<entry>
<title>tcp: Change possible SYN flooding messages</title>
<updated>2011-09-15T18:49:43Z</updated>
<author>
<name>Eric Dumazet</name>
<email>eric.dumazet@gmail.com</email>
</author>
<published>2011-08-30T03:21:44Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=946cedccbd7387488d2cee5da92cdfeb28d2e670'/>
<id>urn:sha1:946cedccbd7387488d2cee5da92cdfeb28d2e670</id>
<content type='text'>
"Possible SYN flooding on port xxxx " messages can fill logs on servers.

Change logic to log the message only once per listener, and add two new
SNMP counters to track :

TCPReqQFullDoCookies : number of times a SYNCOOKIE was replied to client

TCPReqQFullDrop : number of times a SYN request was dropped because
syncookies were not enabled.
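
A condensed sketch of the once-per-listener idea (structure and names
are illustrative, not the actual patch):

#include &lt;linux/printk.h&gt;
#include &lt;linux/types.h&gt;

/* Remember that the warning was already printed for this listener,
 * so a flood logs one line rather than one line per SYN. */
struct listener_stub {
        u8 synflood_warned;
};

static void syn_flood_warn_once(struct listener_stub *l, u16 port)
{
        if (!l-&gt;synflood_warned) {
                l-&gt;synflood_warned = 1;
                pr_info("Possible SYN flooding on port %u\n", port);
        }
}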

Based on a prior patch from Tom Herbert, and suggestions from David.

Signed-off-by: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
CC: Tom Herbert &lt;therbert@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>tcp: Replace time wait bucket msg by counter</title>
<updated>2010-12-08T20:16:33Z</updated>
<author>
<name>Tom Herbert</name>
<email>therbert@google.com</email>
</author>
<published>2010-12-08T20:16:33Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=67631510a318d5a930055fe927607f483716e100'/>
<id>urn:sha1:67631510a318d5a930055fe927607f483716e100</id>
<content type='text'>
Rather than printing a message to the log, use a MIB counter to keep
track of the number of occurrences of time wait bucket overflow. This
reduces log spam.
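
The shape of the change, roughly (we assume the counter is named
LINUX_MIB_TCPTIMEWAITOVERFLOW):

/* On overflow: count it instead of printing it. */
NET_INC_STATS_BH(net, LINUX_MIB_TCPTIMEWAITOVERFLOW);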

Signed-off-by: Tom Herbert &lt;therbert@google.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>net: avoid limits overflow</title>
<updated>2010-11-10T20:12:00Z</updated>
<author>
<name>Eric Dumazet</name>
<email>eric.dumazet@gmail.com</email>
</author>
<published>2010-11-09T23:24:26Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=8d987e5c75107ca7515fa19e857cfa24aab6ec8f'/>
<id>urn:sha1:8d987e5c75107ca7515fa19e857cfa24aab6ec8f</id>
<content type='text'>
Robin Holt tried to boot a 16TB machine and found that some limits were
reached: sysctl_tcp_mem[2], sysctl_udp_mem[2]

We can switch the infrastructure to use "long" instead of "int", now
that atomic_long_t primitives are available for free.
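
Illustrative of the type change only, not the full patch:

long sysctl_tcp_mem[3];                 /* was: int sysctl_tcp_mem[3] */
atomic_long_t tcp_memory_allocated;     /* was: atomic_t */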

Signed-off-by: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
Reported-by: Robin Holt &lt;holt@sgi.com&gt;
Reviewed-by: Robin Holt &lt;holt@sgi.com&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>snmp: 64bit ipstats_mib for all arches</title>
<updated>2010-06-30T20:31:19Z</updated>
<author>
<name>Eric Dumazet</name>
<email>eric.dumazet@gmail.com</email>
</author>
<published>2010-06-30T20:31:19Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=4ce3c183fcade7f4b30a33dae90cd774c3d9e094'/>
<id>urn:sha1:4ce3c183fcade7f4b30a33dae90cd774c3d9e094</id>
<content type='text'>
/proc/net/snmp and /proc/net/netstat expose SNMP counters.

The width of these counters is either 32 or 64 bits, depending on the
size of "unsigned long" in the kernel.

This means user programs parsing these files must already be prepared
to deal with 64-bit values, regardless of whether the program itself is
32-bit or 64-bit.

This patch introduces 64-bit snmp values for the IPSTAT mib, some of
whose counters can wrap pretty fast when they are only 32 bits wide.
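
For instance (our arithmetic, not part of the original patch): a 32-bit
octet counter wraps after 2^32 bytes, which a 10 Gbit/s link produces in
about 2^32 / (10^10 / 8), i.e. roughly 3.4 seconds.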

# netstat -s|egrep "InOctets|OutOctets"
    InOctets: 244068329096
    OutOctets: 244069348848

Signed-off-by: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>ipv4: add LINUX_MIB_IPRPFILTER snmp counter</title>
<updated>2010-06-03T10:18:19Z</updated>
<author>
<name>Eric Dumazet</name>
<email>eric.dumazet@gmail.com</email>
</author>
<published>2010-06-02T12:05:27Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=b5f7e7554753e2cc3ef3bef0271fdb32027df2ba'/>
<id>urn:sha1:b5f7e7554753e2cc3ef3bef0271fdb32027df2ba</id>
<content type='text'>
Christoph Lameter mentioned that packets could be dropped in the input
path because of rp_filter settings, without any SNMP counter being
incremented. System administrators can have a hard time tracking the
problem down.

This patch introduces a new counter, LINUX_MIB_IPRPFILTER, incremented
each time we drop a packet because the Reverse Path Filter triggers.

(We receive an IPv4 datagram on a given interface, and find that the
route for sending a reply would use a different interface.)
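
The reverse-path test, in sketch form (reverse_route_dev() is a made-up
helper returning the device a reply to saddr would leave through):

if (reverse_route_dev(net, saddr) != skb-&gt;dev) {
        NET_INC_STATS_BH(net, LINUX_MIB_IPRPFILTER);
        goto drop;
}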

netstat -s | grep IPReversePathFilter
    IPReversePathFilter: 21714

Reported-by: Christoph Lameter &lt;cl@linux-foundation.org&gt;
Signed-off-by: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
<entry>
<title>tcp: Add SNMP counter for DEFER_ACCEPT</title>
<updated>2010-03-22T01:31:35Z</updated>
<author>
<name>Eric Dumazet</name>
<email>eric.dumazet@gmail.com</email>
</author>
<published>2010-03-19T05:37:18Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=907cdda5205b012eec7513f66713749b293188c9'/>
<id>urn:sha1:907cdda5205b012eec7513f66713749b293188c9</id>
<content type='text'>
It is currently hard to diagnose when ACK frames are dropped because an
application set TCP_DEFER_ACCEPT on its listening socket.

See http://bugzilla.kernel.org/show_bug.cgi?id=15507

This patch adds an SNMP counter named TCPDeferAcceptDrop:

netstat -s | grep TCPDeferAcceptDrop
    TCPDeferAcceptDrop: 0

This counter is incremented every time we drop a pure ACK frame received
by a socket in SYN_RECV state because its SYNACK retrans count is lower
than the defer_accept value.
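
The triggering condition, sketched with illustrative names:

/* A pure ACK is dropped while the SYNACK has been retransmitted
 * fewer times than the configured defer_accept threshold. */
static bool defer_accept_drop(u8 synack_retrans, u8 defer_accept,
                              bool ack_has_data)
{
        return synack_retrans &lt; defer_accept &amp;&amp; !ack_has_data;
}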

Signed-off-by: Eric Dumazet &lt;eric.dumazet@gmail.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
</content>
</entry>
</feed>
