|
This patch generalises the existing rule transaction infrastructure
so it can be used to handle set, table and chain object transactions
as well. The transaction provides a data area that stores private
information depending on the transaction type.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
The new transaction infrastructure updates the family, table and chain
objects in the context structure, so let's deconstify them. While at it,
move the context structure initialization routine to the top of the
source file as it will be also used from the table and chain routines.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Add the corresponding trace if we have a full match in a non-terminal
rule. Note that the traces will look slightly different than in
x_tables, since the log message is emitted after all expressions have
been evaluated (contrary to x_tables, which emits it before the target
action). This manifests in two differences in nf_tables wrt. x_tables:
1) The rule that enables the tracing is included in the trace.
2) If the rule emits some log message, that is shown before the
trace log message.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
As suggested by several people, rename local_df to ignore_df,
since it means "ignore df bit if it is set".
Cc: Maciej Żenczykowski <maze@google.com>
Cc: Florian Westphal <fw@strlen.de>
Cc: David S. Miller <davem@davemloft.net>
Cc: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Cong Wang <xiyou.wangcong@gmail.com>
Acked-by: Maciej Żenczykowski <maze@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Conflicts:
drivers/net/ethernet/altera/altera_sgdma.c
net/netlink/af_netlink.c
net/sched/cls_api.c
net/sched/sch_api.c
The netlink conflict dealt with moving to netlink_capable() and
netlink_ns_capable() in the 'net' tree vs. supporting 'tc' operations
in non-init namespaces. These were simple transformations from
netlink_capable to netlink_ns_capable.
The Altera driver conflict was simply code removal overlapping some
void pointer cast cleanups in net-next.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Display "return" for implicit rule at the end of a non-base chain,
instead of when popping chain from the stack.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
After returning from a chain that we jumped to without any matches,
we get a bogus rule number in the trace. To fix this properly, we would
need to iterate over the list of remaining rules in the chain to update
the rule number counter.
Instead, Patrick suggested setting this to the maximum value, since the
default base chain policy is the very last action once processing of the
base chain is over.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Add missing code to trace goto actions.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
This patch fixes a crash when trying to access the counters and the
default chain policy from the non-base chain that we have reached
via a goto. Fix this by falling back to the original base chain
after returning from the custom chain.
While at it, kill the inline function that accounts chain statistics
to improve source code readability.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Otherwise we start incrementing the rule number counter from the
previous chain iteration.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
This bug manifests when calling the nft command line tool without
nf_tables kernel support.
kernel message:
[ 44.071555] Netfilter messages via NETLINK v0.30.
[ 44.072253] BUG: unable to handle kernel NULL pointer dereference at 0000000000000119
[ 44.072264] IP: [<ffffffff8171db1f>] netlink_getsockbyportid+0xf/0x70
[ 44.072272] PGD 7f2b74067 PUD 7f2b73067 PMD 0
[ 44.072277] Oops: 0000 [#1] SMP
[...]
[ 44.072369] Call Trace:
[ 44.072373] [<ffffffff8171fd81>] netlink_unicast+0x91/0x200
[ 44.072377] [<ffffffff817206c9>] netlink_ack+0x99/0x110
[ 44.072381] [<ffffffffa004b951>] nfnetlink_rcv+0x3c1/0x408 [nfnetlink]
[ 44.072385] [<ffffffff8171fde3>] netlink_unicast+0xf3/0x200
[ 44.072389] [<ffffffff817201ef>] netlink_sendmsg+0x2ff/0x740
[ 44.072394] [<ffffffff81044752>] ? __mmdrop+0x62/0x90
[ 44.072398] [<ffffffff816dafdb>] sock_sendmsg+0x8b/0xc0
[ 44.072403] [<ffffffff812f1af5>] ? copy_user_enhanced_fast_string+0x5/0x10
[ 44.072406] [<ffffffff816dbb6c>] ? move_addr_to_kernel+0x2c/0x50
[ 44.072410] [<ffffffff816db423>] ___sys_sendmsg+0x3c3/0x3d0
[ 44.072415] [<ffffffff811301ba>] ? handle_mm_fault+0xa9a/0xc60
[ 44.072420] [<ffffffff811362d6>] ? mmap_region+0x166/0x5a0
[ 44.072424] [<ffffffff817da84c>] ? __do_page_fault+0x1dc/0x510
[ 44.072428] [<ffffffff812b8b2c>] ? apparmor_capable+0x1c/0x60
[ 44.072435] [<ffffffff817d6e9a>] ? _raw_spin_unlock_bh+0x1a/0x20
[ 44.072439] [<ffffffff816dfc86>] ? release_sock+0x106/0x150
[ 44.072443] [<ffffffff816dc212>] __sys_sendmsg+0x42/0x80
[ 44.072446] [<ffffffff816dc262>] SyS_sendmsg+0x12/0x20
[ 44.072450] [<ffffffff817df616>] system_call_fastpath+0x1a/0x1f
Signed-off-by: Denys Fedoryshchenko <nuclearcat@nuclearcat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Reduce copy-paste a bit by adding a common helper.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
commit 0eba801b64cc8284d9024c7ece30415a2b981a72 tried to fix a race
where nat initialisation can happen after a ctnetlink-created conntrack
has been created.
However, it causes the nat module(s) to be loaded needlessly on
systems that are not using NAT.
Fortunately, we do not have to create null bindings in that case.
conntracks injected via ctnetlink always have the CONFIRMED bit set,
which prevents addition of the nat extension in nf_nat_ipv4/6_fn().
We only need to make sure that either no nat extension is added
or that we've created both src and dst manips.
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
nfacct objects already support accounting at the byte and packet
level. As such, it is a natural extension to add the possibility to
define a ceiling limit for both metrics.
All the support for quotas itself is added to the nfnetlink accounting
framework to stay coherent with the current accounting object management.
Quota limit checks are implemented in the xt_nfacct filter, where
statistics collection is already done.
Pablo Neira Ayuso has also contributed to this feature.
Signed-off-by: Mathieu Poirier <mathieu.poirier@linaro.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Use NLA_STRING for consistency with other string attributes in
nf_tables.
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
net/netfilter/nfnetlink.c: In function ‘nfnetlink_rcv’:
net/netfilter/nfnetlink.c:371:14: warning: unused variable ‘net’ [-Wunused-variable]
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
It is possible, by passing a netlink socket to a more privileged
executable and then fooling that executable into writing to the socket
data that happens to be a valid netlink message, to make the privileged
executable do something it did not intend to do.
To keep this from happening, replace bare capable and ns_capable calls
with netlink_capable, netlink_net_capable and netlink_ns_capable calls.
These act the same as the previous calls except that they also verify
that the opener of the socket had the desired permissions.
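As a rough sketch of the resulting pattern in a request handler (my_doit
and the CAP_NET_ADMIN check are illustrative, not taken from a particular
subsystem):

#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/netlink.h>
#include <linux/skbuff.h>

static int my_doit(struct sk_buff *skb, struct nlmsghdr *nlh)
{
	/* Old check: if (!capable(CAP_NET_ADMIN)) return -EPERM;
	 * only looked at the current task. The socket-aware helper also
	 * requires that the opener of the socket had the capability, so
	 * fooling a privileged program into writing a crafted message to
	 * an unprivileged socket is no longer sufficient.
	 */
	if (!netlink_capable(skb, CAP_NET_ADMIN))
		return -EPERM;

	return 0;
}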
Reported-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Fix string mismatch in ip_vs_proto_name()
Signed-off-by: Masanari Iida <standby24x7@gmail.com>
Signed-off-by: Simon Horman <horms@verge.net.au>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
This will be useful to create network-family-dedicated META expressions,
for instance for NFPROTO_BRIDGE.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
To ensure that a family-specific expression gets selected in preference
to family-agnostic ones.
Signed-off-by: Tomasz Bursztyka <tomasz.bursztyka@linux.intel.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Have the netlink per-protocol optional bind function return an int error code
rather than void to signal a failure.
This will enable netlink protocols to perform extra checks including
capabilities and permissions verifications when updating memberships in
multicast groups.
In netlink_bind() and netlink_setsockopt() the call to the per-protocol bind
function was moved above the multicast group update to prevent any access to
the multicast socket groups before checking with the per-protocol bind
function. This will enable the per-protocol bind function to be used to check
permissions which could be denied before making them available, and to avoid
the messy job of undoing the addition should the per-protocol bind function
fail.
The netfilter subsystem seems to be the only one currently using the
per-protocol bind function.
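A minimal sketch of what the new contract allows, assuming the post-patch
callback shape int (*bind)(int group); my_bind and the CAP_NET_ADMIN check
are illustrative only:

#include <linux/capability.h>
#include <linux/errno.h>
#include <linux/netlink.h>

/* Returning an error here now vetoes the multicast subscription before
 * the group bitmap is touched, so nothing has to be undone on failure.
 */
static int my_bind(int group)
{
	if (!capable(CAP_NET_ADMIN))
		return -EPERM;
	return 0;
}

static struct netlink_kernel_cfg my_cfg = {
	.groups	= 16,
	.bind	= my_bind,
};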
Signed-off-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Remove duplicity and simplify code flow by moving the rcu_read_unlock() above
the condition and let the flow control exit naturally at the end of the
function.
Signed-off-by: Richard Guy Briggs <rgb@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Mostly scripted conversion of the smp_mb__* barriers.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-55dhyhocezdw1dg7u19hmh1u@git.kernel.org
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: linux-arch@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
|
|
nft_cmp_fast is used for equality comparisons of size <= 4 bytes. For
comparisons of size < 4 bytes a mask is calculated that is applied to
both the data from userspace (during initialization) and the register
value (during runtime). Both values are stored using (in effect) memcpy
to a memory area that is then interpreted as u32 by nft_cmp_fast.
This works fine on little endian since smaller types have the same base
address, however on big endian this is not true and the smaller types
are interpreted as a big number with trailing zero bytes.
The mask therefore must not include the lower bytes, but the higher bytes
on big endian. Add a helper function that does a cpu_to_le32 to switch
the bytes on big endian. Since we're dealing with a mask of just consecutive
bits, this works out fine.
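A sketch of the helper (assumed shape, not necessarily the verbatim kernel
code): build a mask covering a len-bit value stored in a u32 area, and
byte-swap it on big endian so it lands on the bytes that actually hold the
data.

#include <asm/byteorder.h>
#include <linux/bitops.h>
#include <linux/types.h>

/* len is the compared length in bits (1..32). On little endian the value
 * occupies the low bytes of the u32 area, so a plain low-bit mask works
 * and cpu_to_le32() is a no-op; on big endian cpu_to_le32() flips the
 * mask onto the high bytes, where the value was stored instead.
 */
static inline u32 nft_cmp_fast_mask(unsigned int len)
{
	return cpu_to_le32(~0U >> (sizeof(u32) * BITS_PER_BYTE - len));
}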
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
[ 251.920788] INFO: trying to register non-static key.
[ 251.921386] the code is fine but needs lockdep annotation.
[ 251.921386] turning off the locking correctness validator.
[ 251.921386] CPU: 2 PID: 15715 Comm: socket_listen Not tainted 3.14.0+ #294
[ 251.921386] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 251.921386] 0000000000000000 000000009d18c210 ffff880075f039b8 ffffffff816b7ecd
[ 251.921386] ffffffff822c3b10 ffff880075f039c8 ffffffff816b36f4 ffff880075f03aa0
[ 251.921386] ffffffff810c65ff ffffffff810c4a85 00000000fffffe01 ffffffffa0075172
[ 251.921386] Call Trace:
[ 251.921386] [<ffffffff816b7ecd>] dump_stack+0x45/0x56
[ 251.921386] [<ffffffff816b36f4>] register_lock_class.part.24+0x38/0x3c
[ 251.921386] [<ffffffff810c65ff>] __lock_acquire+0x168f/0x1b40
[ 251.921386] [<ffffffff810c4a85>] ? trace_hardirqs_on_caller+0x105/0x1d0
[ 251.921386] [<ffffffffa0075172>] ? nf_nat_setup_info+0x252/0x3a0 [nf_nat]
[ 251.921386] [<ffffffff816c1215>] ? _raw_spin_unlock_bh+0x35/0x40
[ 251.921386] [<ffffffffa0075172>] ? nf_nat_setup_info+0x252/0x3a0 [nf_nat]
[ 251.921386] [<ffffffff810c7272>] lock_acquire+0xa2/0x120
[ 251.921386] [<ffffffffa008ab90>] ? ipv4_confirm+0x90/0xf0 [nf_conntrack_ipv4]
[ 251.921386] [<ffffffffa0055989>] __nf_conntrack_confirm+0x129/0x410 [nf_conntrack]
[ 251.921386] [<ffffffffa008ab90>] ? ipv4_confirm+0x90/0xf0 [nf_conntrack_ipv4]
[ 251.921386] [<ffffffffa008ab90>] ipv4_confirm+0x90/0xf0 [nf_conntrack_ipv4]
[ 251.921386] [<ffffffff815e7b00>] ? ip_fragment+0x9f0/0x9f0
[ 251.921386] [<ffffffff815d8c5a>] nf_iterate+0xaa/0xc0
[ 251.921386] [<ffffffff815e7b00>] ? ip_fragment+0x9f0/0x9f0
[ 251.921386] [<ffffffff815d8d14>] nf_hook_slow+0xa4/0x190
[ 251.921386] [<ffffffff815e7b00>] ? ip_fragment+0x9f0/0x9f0
[ 251.921386] [<ffffffff815e98f2>] ip_output+0x92/0x100
[ 251.921386] [<ffffffff815e8df9>] ip_local_out+0x29/0x90
[ 251.921386] [<ffffffff815e9240>] ip_queue_xmit+0x170/0x4c0
[ 251.921386] [<ffffffff815e90d5>] ? ip_queue_xmit+0x5/0x4c0
[ 251.921386] [<ffffffff81601208>] tcp_transmit_skb+0x498/0x960
[ 251.921386] [<ffffffff81602d82>] tcp_connect+0x812/0x960
[ 251.921386] [<ffffffff810e3dc5>] ? ktime_get_real+0x25/0x70
[ 251.921386] [<ffffffff8159ea2a>] ? secure_tcp_sequence_number+0x6a/0xc0
[ 251.921386] [<ffffffff81606f57>] tcp_v4_connect+0x317/0x470
[ 251.921386] [<ffffffff8161f645>] __inet_stream_connect+0xb5/0x330
[ 251.921386] [<ffffffff8158dfc3>] ? lock_sock_nested+0x33/0xa0
[ 251.921386] [<ffffffff810c4b5d>] ? trace_hardirqs_on+0xd/0x10
[ 251.921386] [<ffffffff81078885>] ? __local_bh_enable_ip+0x75/0xe0
[ 251.921386] [<ffffffff8161f8f8>] inet_stream_connect+0x38/0x50
[ 251.921386] [<ffffffff8158b157>] SYSC_connect+0xe7/0x120
[ 251.921386] [<ffffffff810e3789>] ? current_kernel_time+0x69/0xd0
[ 251.921386] [<ffffffff810c4a85>] ? trace_hardirqs_on_caller+0x105/0x1d0
[ 251.921386] [<ffffffff810c4b5d>] ? trace_hardirqs_on+0xd/0x10
[ 251.921386] [<ffffffff8158c36e>] SyS_connect+0xe/0x10
[ 251.921386] [<ffffffff816caf69>] system_call_fastpath+0x16/0x1b
[ 312.014104] INFO: rcu_sched detected stalls on CPUs/tasks: {} (detected by 0, t=60003 jiffies, g=42359, c=42358, q=333)
[ 312.015097] INFO: Stall ended before state dump start
Fixes: 93bb0ceb75be ("netfilter: conntrack: remove central spinlock nf_conntrack_lock")
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
We currently have a limit of 8 * PAGE_SIZE anonymous sets. Lift that limit
by continuing the scan if the entire page is exhausted.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
nf_ct_gre_keymap_flush() removes a nf_ct_gre_keymap object from
net_gre->keymap_list and frees the object, but it doesn't clear
the reference to this object from ct_pptp_info->keymap[dir].
Then nf_ct_gre_keymap_destroy() may release the same object again.
So nf_ct_gre_keymap_flush() can be called only when we are sure that
nf_ct_gre_keymap_destroy() will not be called afterwards.
nf_ct_gre_keymap is created by nf_ct_gre_keymap_add() and the right way
to destroy it is to call nf_ct_gre_keymap_destroy().
This patch marks nf_ct_gre_keymap_flush() as static, so it can break
compilation of third-party modules that use nf_ct_gre_keymap_flush().
I'm not sure this is the right way to deprecate this function.
[ 226.540793] general protection fault: 0000 [#1] SMP
[ 226.541750] Modules linked in: nf_nat_pptp nf_nat_proto_gre
nf_conntrack_pptp nf_conntrack_proto_gre ip_gre ip_tunnel gre
ppp_deflate bsd_comp ppp_async crc_ccitt ppp_generic slhc xt_nat
iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 nf_nat
nf_conntrack veth tun bridge stp llc ppdev microcode joydev pcspkr
serio_raw virtio_console virtio_balloon floppy parport_pc parport
pvpanic i2c_piix4 virtio_net drm_kms_helper ttm ata_generic virtio_pci
virtio_ring virtio drm i2c_core pata_acpi [last unloaded: ip_tunnel]
[ 226.541776] CPU: 0 PID: 49 Comm: kworker/u4:2 Not tainted 3.14.0-rc8+ #101
[ 226.541776] Hardware name: Bochs Bochs, BIOS Bochs 01/01/2011
[ 226.541776] Workqueue: netns cleanup_net
[ 226.541776] task: ffff8800371e0000 ti: ffff88003730c000 task.ti: ffff88003730c000
[ 226.541776] RIP: 0010:[<ffffffff81389ba9>] [<ffffffff81389ba9>] __list_del_entry+0x29/0xd0
[ 226.541776] RSP: 0018:ffff88003730dbd0 EFLAGS: 00010a83
[ 226.541776] RAX: 6b6b6b6b6b6b6b6b RBX: ffff8800374e6c40 RCX: dead000000200200
[ 226.541776] RDX: 6b6b6b6b6b6b6b6b RSI: ffff8800371e07d0 RDI: ffff8800374e6c40
[ 226.541776] RBP: ffff88003730dbd0 R08: 0000000000000000 R09: 0000000000000000
[ 226.541776] R10: 0000000000000001 R11: ffff88003730d92e R12: 0000000000000002
[ 226.541776] R13: ffff88007a4c42d0 R14: ffff88007aef0000 R15: ffff880036cf0018
[ 226.541776] FS: 0000000000000000(0000) GS:ffff88007fc00000(0000) knlGS:0000000000000000
[ 226.541776] CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
[ 226.541776] CR2: 00007f07f643f7d0 CR3: 0000000036fd2000 CR4: 00000000000006f0
[ 226.541776] Stack:
[ 226.541776] ffff88003730dbe8 ffffffff81389c5d ffff8800374ffbe4 ffff88003730dc28
[ 226.541776] ffffffffa0162a43 ffffffffa01627c5 ffff88007a4c42d0 ffff88007aef0000
[ 226.541776] ffffffffa01651c0 ffff88007a4c45e0 ffff88007aef0000 ffff88003730dc40
[ 226.541776] Call Trace:
[ 226.541776] [<ffffffff81389c5d>] list_del+0xd/0x30
[ 226.541776] [<ffffffffa0162a43>] nf_ct_gre_keymap_destroy+0x283/0x2d0 [nf_conntrack_proto_gre]
[ 226.541776] [<ffffffffa01627c5>] ? nf_ct_gre_keymap_destroy+0x5/0x2d0 [nf_conntrack_proto_gre]
[ 226.541776] [<ffffffffa0162ab7>] gre_destroy+0x27/0x70 [nf_conntrack_proto_gre]
[ 226.541776] [<ffffffffa0117de3>] destroy_conntrack+0x83/0x200 [nf_conntrack]
[ 226.541776] [<ffffffffa0117d87>] ? destroy_conntrack+0x27/0x200 [nf_conntrack]
[ 226.541776] [<ffffffffa0117d60>] ? nf_conntrack_hash_check_insert+0x2e0/0x2e0 [nf_conntrack]
[ 226.541776] [<ffffffff81630142>] nf_conntrack_destroy+0x72/0x180
[ 226.541776] [<ffffffff816300d5>] ? nf_conntrack_destroy+0x5/0x180
[ 226.541776] [<ffffffffa011ef80>] ? kill_l3proto+0x20/0x20 [nf_conntrack]
[ 226.541776] [<ffffffffa011847e>] nf_ct_iterate_cleanup+0x14e/0x170 [nf_conntrack]
[ 226.541776] [<ffffffffa011f74b>] nf_ct_l4proto_pernet_unregister+0x5b/0x90 [nf_conntrack]
[ 226.541776] [<ffffffffa0162409>] proto_gre_net_exit+0x19/0x30 [nf_conntrack_proto_gre]
[ 226.541776] [<ffffffff815edf89>] ops_exit_list.isra.1+0x39/0x60
[ 226.541776] [<ffffffff815eecc0>] cleanup_net+0x100/0x1d0
[ 226.541776] [<ffffffff810a608a>] process_one_work+0x1ea/0x4f0
[ 226.541776] [<ffffffff810a6028>] ? process_one_work+0x188/0x4f0
[ 226.541776] [<ffffffff810a64ab>] worker_thread+0x11b/0x3a0
[ 226.541776] [<ffffffff810a6390>] ? process_one_work+0x4f0/0x4f0
[ 226.541776] [<ffffffff810af42d>] kthread+0xed/0x110
[ 226.541776] [<ffffffff8173d4dc>] ? _raw_spin_unlock_irq+0x2c/0x40
[ 226.541776] [<ffffffff810af340>] ? kthread_create_on_node+0x200/0x200
[ 226.541776] [<ffffffff8174747c>] ret_from_fork+0x7c/0xb0
[ 226.541776] [<ffffffff810af340>] ? kthread_create_on_node+0x200/0x200
[ 226.541776] Code: 00 00 55 48 8b 17 48 b9 00 01 10 00 00 00 ad de
48 8b 47 08 48 89 e5 48 39 ca 74 29 48 b9 00 02 20 00 00 00 ad de 48
39 c8 74 7a <4c> 8b 00 4c 39 c7 75 53 4c 8b 42 08 4c 39 c7 75 2b 48 89
42 08
[ 226.541776] RIP [<ffffffff81389ba9>] __list_del_entry+0x29/0xd0
[ 226.541776] RSP <ffff88003730dbd0>
[ 226.612193] ---[ end trace 985ae23ddfcc357c ]---
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Cc: Patrick McHardy <kaber@trash.net>
Cc: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Cc: "David S. Miller" <davem@davemloft.net>
Signed-off-by: Andrey Vagin <avagin@openvz.org>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
The intended format in request_module is %.*s instead of %*.s.
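For illustration (plain userspace C, the module name is made up): "%.*s"
consumes a precision argument and prints at most that many characters of
the string, while "%*.s" consumes a field width with an empty precision,
printing none of the string at all.

#include <stdio.h>

int main(void)
{
	const char name[] = "nft-chain-ipv4-filter";

	printf("[%.*s]\n", 9, name);	/* "[nft-chain]" - first 9 chars */
	printf("[%*.s]\n", 9, name);	/* "[         ]" - padding only  */
	return 0;
}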
Reported-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Currently, nf_tables trims off the set name if it exceeds 15
bytes, so explicitly reject set names that are too large.
Reported-by: Giuseppe Longo <giuseppelng@gmail.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
These aliases do not exist, so the kernel cannot request the appropriate
match table:
$ iptables -I INPUT -p tcp -m osf --genre Windows --ttl 2 -j DROP
iptables: No chain/target/match by that name.
setsockopt() requests the ipt_osf module, which is not present. Add
the aliases.
Signed-off-by: Kirill Tkhai <ktkhai@parallels.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
This simple modification allows iptables to work with the INPUT chain
in combination with the cgroup module. It could be useful for counting
ingress traffic per cgroup with the nfacct netfilter module; counting
the egress traffic that way was already possible.
It's possible to get a classified sk_buff after PREROUTING, since the
socket lookup is done in early_demux (tcp_v4_early_demux). This works
for udp as well.
Trivial usage example, assuming we're in the same shell every step
and we have enough permissions:
1) Classic net_cls cgroup initialization:
mkdir /sys/fs/cgroup/net_cls
mount -t cgroup -o net_cls net_cls /sys/fs/cgroup/net_cls
2) Set up cgroup for interesting application:
mkdir /sys/fs/cgroup/net_cls/wget
echo 1 > /sys/fs/cgroup/net_cls/wget/net_cls.classid
echo $BASHPID > /sys/fs/cgroup/net_cls/wget/cgroup.procs
3) Create kernel counters:
nfacct add wget-cgroup-in
iptables -A INPUT -m cgroup ! --cgroup 1 -m nfacct --nfacct-name wget-cgroup-in
nfacct add wget-cgroup-out
iptables -A OUTPUT -m cgroup ! --cgroup 1 -m nfacct --nfacct-name wget-cgroup-out
4) Network usage:
wget https://www.kernel.org/pub/linux/kernel/v3.x/testing/linux-3.14-rc6.tar.xz
5) Check results:
nfacct list
The cgroup approach is being used for the DataUsage (counting & blocking
traffic) feature of Samsung's modification of the Tizen OS.
Signed-off-by: Alexey Perevalov <a.perevalov@samsung.com>
Acked-by: Daniel Borkmann <dborkman@redhat.com>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Eric points out that the locks can be global.
Moreover, both Jesper and Eric note that using only 32 locks increases
false sharing, as only two cache lines are used.
This increases the number of locks to 256 (16 cache lines, assuming
64-byte cache lines and 4 bytes per spinlock).
Suggested-by: Jesper Dangaard Brouer <brouer@redhat.com>
Suggested-by: Eric Dumazet <eric.dumazet@gmail.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
We cannot use ARRAY_SIZE() if spinlock_t is an empty struct.
Fixes: 1442e7507dd597 ("netfilter: connlimit: use keyed locks")
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
This patch adds set_elems notifications. When a set_elem is
added/deleted, all listening peers in userspace will receive the
corresponding notification.
Signed-off-by: Arturo Borrero Gonzalez <arturo.borrero.glez@gmail.com>
Acked-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@gnumonks.org>
|
|
Now that nf_tables performs global accounting of set elements, it is not
needed in the hash type anymore.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
The current set selection simply chooses the first set type that provides
the requested features, which always results in the rbtree being chosen
by virtue of being the first set in the list.
What we actually want to do is choose the implementation that can provide
the requested features and is optimal from either a performance or memory
perspective depending on the characteristics of the elements and the
preferences specified by the user.
The elements are not known when creating a set. Even if we would provide
them for anonymous (literal) sets, we'd still have standalone sets where
the elements are not known in advance. We therefore need an abstract
description of the data characteristics.
The kernel already knows the size of the key; this patch starts by
introducing a nested set description which so far contains only the maximum
amount of elements. Based on this the set implementations are changed to
provide an estimate of the required amount of memory and the lookup
complexity class.
The set ops have a new callback ->estimate() that is invoked during set
selection. It receives a structure containing the attributes known to the
kernel and is supposed to populate a struct nft_set_estimate with the
complexity class and, in case the size is known, the complete amount of
memory required, or the amount of memory required per element otherwise.
Based on the policy specified by the user (performance/memory, defaulting
to performance) the kernel will then select the best suited implementation.
Even if the set implementation would allow adding more than the specified
maximum amount of elements, that maximum is enforced, since new
implementations might not be able to add more than the maximum based on
which they were selected.
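A condensed sketch of such an ->estimate() callback for a hash-like
backend, assuming the interface described above (nft_set_desc carrying the
key length and the optional maximum element count, nft_set_estimate taking
a memory size and a complexity class); the per-element arithmetic is
purely illustrative:

static bool my_hash_estimate(const struct nft_set_desc *desc, u32 features,
			     struct nft_set_estimate *est)
{
	/* illustrative per-element overhead: bookkeeping plus the key */
	unsigned int esize = 32 + desc->klen;

	if (desc->size)
		est->size = desc->size * esize;	/* maximum count known */
	else
		est->size = esize;		/* otherwise: per-element cost */

	est->class = NFT_SET_CLASS_O_1;		/* hash lookup: constant time */
	return true;
}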
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
For values spanning multiple registers, we need to validate the length
of data loads. In order to add this to nft_ct, we need the length from
key validation. Split the nft_ct_init() function into two functions
for the get and set operations as preparation for that.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
For values spanning multiple registers, we need to validate the length
of data loads. In order to add this to nft_meta, we need the length from
key validation. Split the nft_meta_init() function into two functions
for the get and set operations as preparation for that.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: Pablo Neira Ayuso <pablo@gnumonks.org>
|
|
The set operation for ct mark is only valid if CONFIG_NF_CONNTRACK_MARK is
enabled.
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Conflicts:
drivers/net/ethernet/marvell/mvneta.c
The mvneta.c conflict is a case of overlapping changes,
a conversion to devm_ioremap_resource() vs. a conversion
to netdev_alloc_pcpu_stats.
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
skb_zerocopy can copy elements of the frags array between skbs, but it
doesn't orphan them. Also, it doesn't handle errors, so this patch takes
care of that as well and modifies the callers accordingly. skb_tx_error()
is also added to the callers so they will signal the failed delivery
towards the creator of the skb.
Signed-off-by: Zoltan Kiss <zoltan.kiss@citrix.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
Reported-by: Stephen Rothwell <sfr@canb.auug.org.au>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
ARRAY_SIZE(nf_conntrack_locks) is undefined if spinlock_t is an
empty structure. Replace it with CONNTRACK_LOCKS.
Fixes: 93bb0ceb75be ("netfilter: conntrack: remove central spinlock nf_conntrack_lock")
Reported-by: kbuild test robot <fengguang.wu@intel.com>
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Pablo Neira Ayuso <pablo@netfilter.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/pablo/nf-next
Pablo Neira Ayuso says:
====================
Netfilter/IPVS updates for net-next
The following patchset contains Netfilter/IPVS updates for net-next,
most relevantly they are:
* Cleanup to remove a double semicolon, from Stephen Hemminger.
* Calm down a sparse warning in xt_ipcomp, from Fan Du.
* nf_ct_labels support for nf_tables, from Florian Westphal.
* new macros to simplify rcu dereferences in the scope of nfnetlink
and nf_tables, from Patrick McHardy.
* Accept queue and drop (including reason for drop) to verdict
parsing in nf_tables, also from Patrick.
* Remove unused random seed initialization in nfnetlink_log, from
Florian Westphal.
* Allow attaching user-specific information to nf_tables rules, useful
to attach user comments to rules, from me.
* Return errors in ipset according to the manpage documentation, from
Jozsef Kadlecsik.
* Fix coccinelle warnings related to incorrect bool type usage for ipset,
from Fengguang Wu.
* Add hash:ip,mark set type to ipset, from Vytas Dauksa.
* Fix the message printed by ipset for each netns that is created,
from Ilia Mirkin.
* Add forceadd option to ipset, which evicts a random entry from the set
if it becomes full, from Josh Hunt.
* Minor IPVS cleanups and fixes from Andi Kleen and Tingwei Liu.
* Improve conntrack scalability by removing a central spinlock, original
work from Eric Dumazet. Jesper Dangaard Brouer took them over to address
remaining issues. Several patches to prepare this change come in first
place.
* Rework nft_hash to resolve bugs (leaking chain, missing rcu synchronization
on element removal, etc.), from Patrick McHardy.
* Restore context in the rule deletion path, as we now release rule objects
synchronously, from Patrick McHardy. This gets back event notification for
anonymous sets.
* Fix NAT family validation in nft_nat, also from Patrick.
* Improve scalability of xt_connlimit by using an array of spinlocks and
by introducing a rb-tree of hashtables for faster lookup of accounted
objects per network. This patch was preceded by several patches and
refactorings to accommodate this change, including the use of kmem_cache,
from Florian Westphal.
====================
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
With current match design every invocation of the connlimit_match
function means we have to perform (number_of_conntracks % 256) lookups
in the conntrack table [ to perform GC/delete stale entries ].
This is also the reason why ____nf_conntrack_find() in perf top has
> 20% cpu time per core.
This patch changes the storage to rbtree which cuts down the number of
ct objects that need testing.
When looking up a new tuple, we only test the connections of the host
objects we visit while searching for the wanted host/network (or
the leaf we need to insert at).
The slot count is reduced to 32. Increasing the slot count doesn't
speed things up much because of the rbtree nature.
before patch (50kpps rx, 10kpps tx):
+ 20.95% ksoftirqd/0 [nf_conntrack] [k] ____nf_conntrack_find
+ 20.50% ksoftirqd/1 [nf_conntrack] [k] ____nf_conntrack_find
+ 20.27% ksoftirqd/2 [nf_conntrack] [k] ____nf_conntrack_find
+ 5.76% ksoftirqd/1 [nf_conntrack] [k] hash_conntrack_raw
+ 5.39% ksoftirqd/2 [nf_conntrack] [k] hash_conntrack_raw
+ 5.35% ksoftirqd/0 [nf_conntrack] [k] hash_conntrack_raw
after (90kpps, 51kpps tx):
+ 17.24% swapper [nf_conntrack] [k] ____nf_conntrack_find
+ 6.60% ksoftirqd/2 [nf_conntrack] [k] ____nf_conntrack_find
+ 2.73% swapper [nf_conntrack] [k] hash_conntrack_raw
+ 2.36% swapper [xt_connlimit] [k] count_tree
Obvious disadvantages compared to the previous version are the increase
in code complexity and the increased memory cost.
Partially based on Eric Dumazet's fq scheduler.
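A simplified sketch of the storage layout (structure and function names
are illustrative, not the exact xt_connlimit types): one rbtree keyed by
the masked source address, each node holding the connection list of a
single host, so a lookup only visits and garbage-collects the nodes on its
search path.

#include <linux/list.h>
#include <linux/rbtree.h>
#include <linux/types.h>

struct host_node {
	struct rb_node		node;	/* keyed by masked source address */
	u32			addr;
	struct hlist_head	conns;	/* connections seen from this host */
};

static struct host_node *host_find(struct rb_root *root, u32 addr)
{
	struct rb_node *n = root->rb_node;

	while (n) {
		struct host_node *h = rb_entry(n, struct host_node, node);

		if (addr < h->addr)
			n = n->rb_left;
		else if (addr > h->addr)
			n = n->rb_right;
		else
			return h;	/* only this host's list gets walked */
	}
	return NULL;
}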
Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
currently returns 1 if they're the same. Make it work like memcmp/strcmp
so it can be used as an rbtree search function.
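A sketch of the new contract (IPv4-only and illustrative, not the verbatim
helper): a signed memcmp-style result lets the caller decide whether to
descend left or right in the rbtree.

#include <linux/types.h>

/* a and b are host-order addresses; mask selects the network part */
static int same_source_net(u32 a, u32 b, u32 mask)
{
	a &= mask;
	b &= mask;

	if (a < b)
		return -1;
	if (a > b)
		return 1;
	return 0;	/* same source network */
}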
Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
connlimit currently suffers from spinlock contention, example for
4-core system with rps enabled:
+ 20.84% ksoftirqd/2 [kernel.kallsyms] [k] _raw_spin_lock_bh
+ 20.76% ksoftirqd/1 [kernel.kallsyms] [k] _raw_spin_lock_bh
+ 20.42% ksoftirqd/0 [kernel.kallsyms] [k] _raw_spin_lock_bh
+ 6.07% ksoftirqd/2 [nf_conntrack] [k] ____nf_conntrack_find
+ 6.07% ksoftirqd/1 [nf_conntrack] [k] ____nf_conntrack_find
+ 5.97% ksoftirqd/0 [nf_conntrack] [k] ____nf_conntrack_find
+ 2.47% ksoftirqd/2 [nf_conntrack] [k] hash_conntrack_raw
+ 2.45% ksoftirqd/0 [nf_conntrack] [k] hash_conntrack_raw
+ 2.44% ksoftirqd/1 [nf_conntrack] [k] hash_conntrack_raw
This may allow parallel lookup/insert/delete if entries are hashed to
different slots. With patch:
+ 20.95% ksoftirqd/0 [nf_conntrack] [k] ____nf_conntrack_find
+ 20.50% ksoftirqd/1 [nf_conntrack] [k] ____nf_conntrack_find
+ 20.27% ksoftirqd/2 [nf_conntrack] [k] ____nf_conntrack_find
+ 5.76% ksoftirqd/1 [nf_conntrack] [k] hash_conntrack_raw
+ 5.39% ksoftirqd/2 [nf_conntrack] [k] hash_conntrack_raw
+ 5.35% ksoftirqd/0 [nf_conntrack] [k] hash_conntrack_raw
+ 2.00% ksoftirqd/1 [kernel.kallsyms] [k] __rcu_read_unlock
Improved rx processing rate from ~35kpps to ~50 kpps.
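A minimal sketch of the keyed-lock idea (slot count and names are
illustrative): an array of spinlocks indexed by the same hash that selects
the bucket, so work on different slots can proceed in parallel.

#include <linux/spinlock.h>
#include <linux/types.h>

#define MY_LOCK_SLOTS 32

static spinlock_t my_locks[MY_LOCK_SLOTS];

static void my_locks_init(void)
{
	unsigned int i;

	for (i = 0; i < MY_LOCK_SLOTS; i++)
		spin_lock_init(&my_locks[i]);
}

static void count_in_slot(u32 hash)
{
	spinlock_t *lock = &my_locks[hash % MY_LOCK_SLOTS];

	spin_lock_bh(lock);
	/* lookup/insert/delete entries in bucket 'hash' */
	spin_unlock_bh(lock);
}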
Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
Replace the bh safe variant with the hard irq safe variant.
We need a hard irq safe variant to deal with netpoll transmitting
packets from hard irq context, and we need it in most if not all of
the places using the bh safe variant.
Except on 32-bit uni-processor builds the code is exactly the same, so
don't bother with a bh variant; just have a hard irq safe variant that
everyone can use.
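A sketch of a reader loop using the hard-irq-safe u64_stats fetch helpers
of the kind this change describes; the stats structure here is
illustrative.

#include <linux/u64_stats_sync.h>

struct my_stats {
	u64			packets;
	struct u64_stats_sync	syncp;
};

static u64 read_packets(const struct my_stats *s)
{
	unsigned int start;
	u64 packets;

	do {
		start = u64_stats_fetch_begin_irq(&s->syncp);
		packets = s->packets;
	} while (u64_stats_fetch_retry_irq(&s->syncp, start));

	return packets;
}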
Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
|
|
The use of __constant_<foo> has been unnecessary for quite a while now.
Make these uses consistent with the rest of the kernel.
Signed-off-by: Joe Perches <joe@perches.com>
Acked-by: Jozsef Kadlecsik <kadlec@blackhole.kfki.hu>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|
|
We might allocate thousands of these (one object per connection).
Use a distinct kmem cache to permit simple tracking of how many
objects are currently used by the connlimit match via sysfs.
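A minimal sketch of the dedicated cache (struct and cache names are
illustrative): with its own kmem_cache the per-connection objects show up
as their own slab entry instead of disappearing into the generic kmalloc
caches.

#include <linux/errno.h>
#include <linux/init.h>
#include <linux/list.h>
#include <linux/slab.h>
#include <linux/types.h>

struct my_conn {
	struct hlist_node	node;
	u32			addr;
};

static struct kmem_cache *my_conn_cachep;

static int __init my_connlimit_init(void)
{
	my_conn_cachep = kmem_cache_create("my_connlimit_conn",
					   sizeof(struct my_conn), 0, 0, NULL);
	return my_conn_cachep ? 0 : -ENOMEM;
}

/* allocation/free of one tracked connection:
 *	conn = kmem_cache_zalloc(my_conn_cachep, GFP_ATOMIC);
 *	kmem_cache_free(my_conn_cachep, conn);
 */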
Reviewed-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Florian Westphal <fw@strlen.de>
Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
|