author     Eric Dumazet <eric.dumazet@gmail.com>  2011-04-04 17:04:03 +0200
committer  Patrick McHardy <kaber@trash.net>      2011-04-04 17:04:03 +0200
commit     7f5c6d4f665bb57a19a34ce1fb16cc708c04f219 (patch)
tree       e804faa506bbf9edcfd1fdadb2ab3749f58836cd /net/netfilter
parent     8f7b01a178b8e6a7b663a1bbaa1710756d67b69b (diff)
netfilter: get rid of atomic ops in fast path
We currently use a per-cpu spinlock to 'protect' the rule byte/packet
counters, after various attempts to use RCU instead.
Lately we added a seqlock so that get_counters() can run without
blocking BH or 'writers'. But we really only need the seqcount in it.
The spinlock itself is only ever taken by the current/owner cpu, so we
can remove it completely.
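
The writer side then degenerates to a bare per-cpu sequence bump, with no
lock word and no atomic read-modify-write. A minimal sketch of the idea,
assuming the xt_write_recseq_begin()/xt_write_recseq_end() helper names
from the companion include/linux/netfilter/x_tables.h change (outside
this diffstat); the low-order bit of the sequence doubles as a recursion
marker, so a nested traversal on the same cpu leaves the count untouched:

	/* Per-cpu sequence counter, written only by its owner cpu. */
	DECLARE_PER_CPU(seqcount_t, xt_recseq);

	/*
	 * Begin a 'writer' section around rule traversal. Like
	 * write_seqcount_begin(), except addend is 0 when the sequence
	 * is already odd (we are nested inside another traversal on
	 * this cpu), so readers see one odd..even transition per
	 * outermost section.
	 */
	static inline unsigned int xt_write_recseq_begin(void)
	{
		unsigned int addend;

		addend = (__this_cpu_read(xt_recseq.sequence) + 1) & 1;
		__this_cpu_add(xt_recseq.sequence, addend);
		smp_wmb();
		return addend;
	}

	static inline void xt_write_recseq_end(unsigned int addend)
	{
		smp_wmb();
		__this_cpu_add(xt_recseq.sequence, addend);
	}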
This cleans up the API, giving it proper 'writer' vs 'reader' semantics.
At replace time, the get_counters() call makes sure all cpus are done
using the old table.
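
Concretely, a 'reader' such as get_counters() can then snapshot each
cpu's 64-bit counters inside a retry loop against that cpu's sequence. A
sketch of the pattern, assuming the xt_counters field names from the
uapi header; the snapshot_counters() helper below is illustrative, not
part of this patch:

	/* Fold one rule's counters on @cpu into @sum, retrying if the
	 * owner cpu is mid-update.
	 */
	static void snapshot_counters(const struct xt_counters *src,
				      int cpu, struct xt_counters *sum)
	{
		seqcount_t *s = &per_cpu(xt_recseq, cpu);
		unsigned int start;
		u64 bcnt, pcnt;

		do {
			start = read_seqcount_begin(s);
			bcnt = src->bcnt;
			pcnt = src->pcnt;
		} while (read_seqcount_retry(s, start));

		/* consistent 64-bit snapshot, even on 32-bit hosts */
		sum->bcnt += bcnt;
		sum->pcnt += pcnt;
	}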
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Jan Engelhardt <jengelh@medozas.de>
Signed-off-by: Patrick McHardy <kaber@trash.net>
Diffstat (limited to 'net/netfilter')
-rw-r--r--  net/netfilter/x_tables.c  9
1 file changed, 3 insertions(+), 6 deletions(-)
diff --git a/net/netfilter/x_tables.c b/net/netfilter/x_tables.c
index a9adf4c6b29..52959efca85 100644
--- a/net/netfilter/x_tables.c
+++ b/net/netfilter/x_tables.c
@@ -762,8 +762,8 @@ void xt_compat_unlock(u_int8_t af)
 EXPORT_SYMBOL_GPL(xt_compat_unlock);
 #endif
 
-DEFINE_PER_CPU(struct xt_info_lock, xt_info_locks);
-EXPORT_PER_CPU_SYMBOL_GPL(xt_info_locks);
+DEFINE_PER_CPU(seqcount_t, xt_recseq);
+EXPORT_PER_CPU_SYMBOL_GPL(xt_recseq);
 
 static int xt_jumpstack_alloc(struct xt_table_info *i)
 {
@@ -1362,10 +1362,7 @@ static int __init xt_init(void)
 	int rv;
 
 	for_each_possible_cpu(i) {
-		struct xt_info_lock *lock = &per_cpu(xt_info_locks, i);
-
-		seqlock_init(&lock->lock);
-		lock->readers = 0;
+		seqcount_init(&per_cpu(xt_recseq, i));
 	}
 
 	xt = kmalloc(sizeof(struct xt_af) * NFPROTO_NUMPROTO, GFP_KERNEL);
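
For reference, the packet-path call sites (e.g. ipt_do_table(), also
outside this diffstat) would then bracket rule traversal and counter
updates with the per-cpu sequence instead of a spinlock, roughly:

	unsigned int addend;

	local_bh_disable();
	addend = xt_write_recseq_begin();

	/* ... walk the ruleset; counter updates are plain stores, e.g.
	 *	ADD_COUNTER(e->counters, skb->len, 1);
	 * since only this cpu ever writes them ...
	 */

	xt_write_recseq_end(addend);
	local_bh_enable();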