Age | Commit message | Author |
|
|
|
commit 22e8099f4f6621b8d165e238cdef2a1cf655e159 upstream.
The MODULE_LICENSE macro invocation must use either "GPL" or "GPL v2",
but not "GPLv2" in order to be detected by the module loader.
This fixes the allmodconfig build error:
FATAL: modpost: GPL-incompatible module bcm2835-rng.ko uses GPL-only symbol 'platform_driver_unregister'
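As a quick illustration (a minimal sketch, not the bcm2835-rng driver
itself), the loader-recognized license strings look like this:

#include <linux/module.h>

/* Recognized as GPL-compatible by the module loader: */
MODULE_LICENSE("GPL");		/* GPL v2 or later */
/* MODULE_LICENSE("GPL v2");	   GPL v2 only, also recognized */

/* "GPLv2" is NOT recognized, so the module is treated as non-GPL and
 * may not use GPL-only exports such as platform_driver_unregister(). */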
Signed-off-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Lubomir Rintel <lkundrak@v3.sk>
Cc: Dom Cobley <popcornmix@gmail.com>
Cc: Stephen Warren <swarren@wwwdotorg.org>
Cc: Matt Mackall <mpm@selenic.com>
Cc: linux-rpi-kernel@lists.infradead.org
Cc: Herbert Xu <herbert@gondor.apana.org.au>
Cc: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 707aee401d2467baa785a697f40a6e2d9ee79ad5 upstream.
The BT_CONFIG command that is sent to the device during
startup will enable BT coex unless the module parameter
turns it off, but on devices without Bluetooth this may
cause problems, as reported in Redhat BZ 885407.
Fix this by sending the BT_CONFIG command only when the
device has Bluetooth.
Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: Johannes Berg <johannes@sipsolutions.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit bb963c4a43eb5127eb0bbfa16c7a6a209b0af5db upstream.
Set the SSID bitmap for direct scan even on passive channels,
for the passive-to-active feature. Without this patch only
the SSID from the probe request template is sent on passive
channels after passive-to-active switching, causing us to
not find all desired networks.
Remove the unused passive scan mask constant.
Reviewed-by: Emmanuel Grumbach <emmanuel.grumbach@intel.com>
Signed-off-by: David Spinadel <david.spinadel@intel.com>
Signed-off-by: Johannes Berg <johannes.berg@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit b30513202c6c14120f70b2e9aa1e97d47bbc2313 ]
Slaves get the 64B CQE/EQE state from QUERY_HCA, not from the module parameter.
If the parameter is set to zero, the slave outputs an incorrect/irrelevant
warning message that 64B CQEs/EQEs are supported but not enabled (even if the
hypervisor has enabled 64B CQEs/EQEs).
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 0508ad646836007e6e6b62331eee7356844eac3d ]
If the user has not assigned a MAC address to a VM, then don't give it a MAC
which is based on the PF one. The current derivation scheme is wrong and leads
to VM MAC collisions when the number of cards/hypervisors becomes large enough.
Instead, just give it zeros and let them figure out what to do with that.
Signed-off-by: Or Gerlitz <ogerlitz@mellanox.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit cf3c4c03060b688cbc389ebc5065ebcce5653e96 ]
Self-explanatory dma_mapping_error addition to the 8139 driver, based on this:
https://bugzilla.redhat.com/show_bug.cgi?id=947250
It showed several backtraces arising from dma_map_* usage without checking the
return code on the mapping. Add the check and abort the rx/tx operation if it
fails. Untested as I have no hardware and the reporter has wandered off, but it
seems pretty straightforward.
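A hedged sketch of the pattern being added (generic names, not the exact
8139 driver code):

#include <linux/dma-mapping.h>
#include <linux/errno.h>
#include <linux/skbuff.h>

/* Map an rx buffer and abort the operation if the mapping failed,
 * instead of handing a bogus DMA address to the hardware. */
static int example_map_rx(struct device *dev, struct sk_buff *skb,
			  unsigned int len, dma_addr_t *mapping)
{
	*mapping = dma_map_single(dev, skb->data, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, *mapping)) {
		dev_kfree_skb_any(skb);		/* drop this slot's refill */
		return -ENOMEM;
	}
	return 0;
}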
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
CC: "David S. Miller" <davem@davemloft.net>
CC: Francois Romieu <romieu@fr.zoreil.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit d9d10a30964504af834d8d250a0c76d4ae91eb1e ]
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 8cb3b9c3642c0263d48f31d525bcee7170eedc20 ]
The "pvc" struct has a hole after pvc.sap_family which is not cleared.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Reviewed-by: Jiri Pirko <jiri@resnulli.us>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 7b70176421993866e616f1cbc4d0dd4054f1bf78 ]
We had reports ( https://bugzilla.kernel.org/show_bug.cgi?id=54021 )
that using high order pages for skb allocations is problematic for atl1c.
We do not know exactly what the problem is, but we suspect that crossing
4K pages is not well supported by this hardware.
Use a custom allocator, using the page allocator and 2K fragments for
optimal stack behavior. We might make this allocator generic
in future kernels.
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Luis Henriques <luis.henriques@canonical.com>
Cc: Neil Horman <nhorman@tuxdriver.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit ff862a4668dd6dba962b1d2d8bd344afa6375683 ]
This is inspired by a5cc68f3d6 "af_key: fix info leaks in notify
messages". There are some struct members which don't get initialized
and could disclose small amounts of private information.
Acked-by: Mathias Krause <minipli@googlemail.com>
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Acked-by: Steffen Klassert <steffen.klassert@secunet.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit a0db856a95a29efb1c23db55c02d9f0ff4f0db48 ]
Make sure the reserved fields, and padding (if any), are
fully initialized.
Based upon a patch by Dan Carpenter and feedback from
Joe Perches.
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit c74f2b2678f40b80265dd53556f1f778c8e1823f ]
Requesting an external module with cb_lock taken can result in
a deadlock like the one shown below:
[ 2458.111347] Showing all locks held in the system:
[ 2458.111347] 1 lock held by NetworkManager/582:
[ 2458.111347] #0: (cb_lock){++++++}, at: [<ffffffff8162bc79>] genl_rcv+0x19/0x40
[ 2458.111347] 1 lock held by modprobe/603:
[ 2458.111347] #0: (cb_lock){++++++}, at: [<ffffffff8162baa5>] genl_lock_all+0x15/0x30
[ 2461.579457] SysRq : Show Blocked State
[ 2461.580103] task PC stack pid father
[ 2461.580103] NetworkManager D ffff880034b84500 4040 582 1 0x00000080
[ 2461.580103] ffff8800197ff720 0000000000000046 00000000001d5340 ffff8800197fffd8
[ 2461.580103] ffff8800197fffd8 00000000001d5340 ffff880019631700 7fffffffffffffff
[ 2461.580103] ffff8800197ff880 ffff8800197ff878 ffff880019631700 ffff880019631700
[ 2461.580103] Call Trace:
[ 2461.580103] [<ffffffff817355f9>] schedule+0x29/0x70
[ 2461.580103] [<ffffffff81731ad1>] schedule_timeout+0x1c1/0x360
[ 2461.580103] [<ffffffff810e69eb>] ? mark_held_locks+0xbb/0x140
[ 2461.580103] [<ffffffff817377ac>] ? _raw_spin_unlock_irq+0x2c/0x50
[ 2461.580103] [<ffffffff810e6b6d>] ? trace_hardirqs_on_caller+0xfd/0x1c0
[ 2461.580103] [<ffffffff81736398>] wait_for_completion_killable+0xe8/0x170
[ 2461.580103] [<ffffffff810b7fa0>] ? wake_up_state+0x20/0x20
[ 2461.580103] [<ffffffff81095825>] call_usermodehelper_exec+0x1a5/0x210
[ 2461.580103] [<ffffffff817362ed>] ? wait_for_completion_killable+0x3d/0x170
[ 2461.580103] [<ffffffff81095cc3>] __request_module+0x1b3/0x370
[ 2461.580103] [<ffffffff810e6b6d>] ? trace_hardirqs_on_caller+0xfd/0x1c0
[ 2461.580103] [<ffffffff8162c5c9>] ctrl_getfamily+0x159/0x190
[ 2461.580103] [<ffffffff8162d8a4>] genl_family_rcv_msg+0x1f4/0x2e0
[ 2461.580103] [<ffffffff8162d990>] ? genl_family_rcv_msg+0x2e0/0x2e0
[ 2461.580103] [<ffffffff8162da1e>] genl_rcv_msg+0x8e/0xd0
[ 2461.580103] [<ffffffff8162b729>] netlink_rcv_skb+0xa9/0xc0
[ 2461.580103] [<ffffffff8162bc88>] genl_rcv+0x28/0x40
[ 2461.580103] [<ffffffff8162ad6d>] netlink_unicast+0xdd/0x190
[ 2461.580103] [<ffffffff8162b149>] netlink_sendmsg+0x329/0x750
[ 2461.580103] [<ffffffff815db849>] sock_sendmsg+0x99/0xd0
[ 2461.580103] [<ffffffff810bb58f>] ? local_clock+0x5f/0x70
[ 2461.580103] [<ffffffff810e96e8>] ? lock_release_non_nested+0x308/0x350
[ 2461.580103] [<ffffffff815dbc6e>] ___sys_sendmsg+0x39e/0x3b0
[ 2461.580103] [<ffffffff810565af>] ? kvm_clock_read+0x2f/0x50
[ 2461.580103] [<ffffffff810218b9>] ? sched_clock+0x9/0x10
[ 2461.580103] [<ffffffff810bb2bd>] ? sched_clock_local+0x1d/0x80
[ 2461.580103] [<ffffffff810bb448>] ? sched_clock_cpu+0xa8/0x100
[ 2461.580103] [<ffffffff810e33ad>] ? trace_hardirqs_off+0xd/0x10
[ 2461.580103] [<ffffffff810bb58f>] ? local_clock+0x5f/0x70
[ 2461.580103] [<ffffffff810e3f7f>] ? lock_release_holdtime.part.28+0xf/0x1a0
[ 2461.580103] [<ffffffff8120fec9>] ? fget_light+0xf9/0x510
[ 2461.580103] [<ffffffff8120fe0c>] ? fget_light+0x3c/0x510
[ 2461.580103] [<ffffffff815dd1d2>] __sys_sendmsg+0x42/0x80
[ 2461.580103] [<ffffffff815dd222>] SyS_sendmsg+0x12/0x20
[ 2461.580103] [<ffffffff81741ad9>] system_call_fastpath+0x16/0x1b
[ 2461.580103] modprobe D ffff88000f2c8000 4632 603 602 0x00000080
[ 2461.580103] ffff88000f04fba8 0000000000000046 00000000001d5340 ffff88000f04ffd8
[ 2461.580103] ffff88000f04ffd8 00000000001d5340 ffff8800377d4500 ffff8800377d4500
[ 2461.580103] ffffffff81d0b260 ffffffff81d0b268 ffffffff00000000 ffffffff81d0b2b0
[ 2461.580103] Call Trace:
[ 2461.580103] [<ffffffff817355f9>] schedule+0x29/0x70
[ 2461.580103] [<ffffffff81736d4d>] rwsem_down_write_failed+0xed/0x1a0
[ 2461.580103] [<ffffffff810bb200>] ? update_cpu_load_active+0x10/0xb0
[ 2461.580103] [<ffffffff8137b473>] call_rwsem_down_write_failed+0x13/0x20
[ 2461.580103] [<ffffffff8173492d>] ? down_write+0x9d/0xb2
[ 2461.580103] [<ffffffff8162baa5>] ? genl_lock_all+0x15/0x30
[ 2461.580103] [<ffffffff8162baa5>] genl_lock_all+0x15/0x30
[ 2461.580103] [<ffffffff8162cbb3>] genl_register_family+0x53/0x1f0
[ 2461.580103] [<ffffffffa01dc000>] ? 0xffffffffa01dbfff
[ 2461.580103] [<ffffffff8162d650>] genl_register_family_with_ops+0x20/0x80
[ 2461.580103] [<ffffffffa01dc000>] ? 0xffffffffa01dbfff
[ 2461.580103] [<ffffffffa017fe84>] nl80211_init+0x24/0xf0 [cfg80211]
[ 2461.580103] [<ffffffffa01dc000>] ? 0xffffffffa01dbfff
[ 2461.580103] [<ffffffffa01dc043>] cfg80211_init+0x43/0xdb [cfg80211]
[ 2461.580103] [<ffffffff810020fa>] do_one_initcall+0xfa/0x1b0
[ 2461.580103] [<ffffffff8105cb93>] ? set_memory_nx+0x43/0x50
[ 2461.580103] [<ffffffff810f75af>] load_module+0x1c6f/0x27f0
[ 2461.580103] [<ffffffff810f2c90>] ? store_uevent+0x40/0x40
[ 2461.580103] [<ffffffff810f82c6>] SyS_finit_module+0x86/0xb0
[ 2461.580103] [<ffffffff81741ad9>] system_call_fastpath+0x16/0x1b
[ 2461.580103] Sched Debug Version: v0.10, 3.11.0-0.rc1.git4.1.fc20.x86_64 #1
The problem started to happen after the commit below added the
net-pf-16-proto-16-family-nl80211 alias name to the cfg80211 module
(though that commit itself is perfectly fine):
commit fb4e156886ce6e8309e912d8b370d192330d19d3
Author: Marcel Holtmann <marcel@holtmann.org>
Date: Sun Apr 28 16:22:06 2013 -0700
nl80211: Add generic netlink module alias for cfg80211/nl80211
Reported-and-tested-by: Jeff Layton <jlayton@redhat.com>
Reported-by: Richard W.M. Jones <rjones@redhat.com>
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Reviewed-by: Pravin B Shelar <pshelar@nicira.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 20f0170377264e8449b6987041f0bcc4d746d3ed ]
usbnet doesn't support SG yet, so drivers should not advertise SG or TSO
capabilities, as they allow the TCP stack to build large TSO packets that
need to be linearized and might use order-5 pages.
This adds extra copy overhead and possible allocation failures.
Current code ignores the skb_linearize() return code, so crashes are even
possible.
Best is to not pretend SG/TSO is supported, and add this again when/if
usbnet really supports SG for devices that could get a performance gain.
Based on a prior patch from Freddy Xin <freddy@asix.com.tw>
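What "not advertising SG/TSO" amounts to is sketched below (assumed generic
netdev feature bits in a minidriver bind path, not necessarily the exact
upstream diff):

#include <linux/netdevice.h>

/* Do not advertise scatter-gather or TSO while usbnet still linearizes
 * every skb; the TCP stack then never builds huge TSO packets for us. */
static void example_drop_sg_tso(struct net_device *net)
{
	net->features &= ~(NETIF_F_SG | NETIF_F_TSO);
	net->hw_features &= ~(NETIF_F_SG | NETIF_F_TSO);
}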
Signed-off-by: Eric Dumazet <edumazet@google.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 905a6f96a1b18e490a75f810d733ced93c39b0e5 ]
Otherwise we end up dereferencing the already freed net->ipv6.mrt pointer
which leads to a panic (from Srivatsa S. Bhat):
BUG: unable to handle kernel paging request at ffff882018552020
IP: [<ffffffffa0366b02>] ip6mr_sk_done+0x32/0xb0 [ipv6]
PGD 290a067 PUD 207ffe0067 PMD 207ff1d067 PTE 8000002018552060
Oops: 0000 [#1] SMP DEBUG_PAGEALLOC
Modules linked in: ebtable_nat ebtables nfs fscache nf_conntrack_ipv4 nf_defrag_ipv4 ipt_REJECT xt_CHECKSUM iptable_mangle iptable_filter ip_tables nfsd lockd nfs_acl exportfs auth_rpcgss autofs4 sunrpc 8021q garp bridge stp llc ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 xt_state nf_conntrack ip6table_filter
+ip6_tables ipv6 vfat fat vhost_net macvtap macvlan vhost tun kvm_intel kvm uinput iTCO_wdt iTCO_vendor_support cdc_ether usbnet mii microcode i2c_i801 i2c_core lpc_ich mfd_core shpchp ioatdma dca mlx4_core be2net wmi acpi_cpufreq mperf ext4 jbd2 mbcache dm_mirror dm_region_hash dm_log dm_mod
CPU: 0 PID: 7 Comm: kworker/u33:0 Not tainted 3.11.0-rc1-ea45e-a #4
Hardware name: IBM -[8737R2A]-/00Y2738, BIOS -[B2E120RUS-1.20]- 11/30/2012
Workqueue: netns cleanup_net
task: ffff8810393641c0 ti: ffff881039366000 task.ti: ffff881039366000
RIP: 0010:[<ffffffffa0366b02>] [<ffffffffa0366b02>] ip6mr_sk_done+0x32/0xb0 [ipv6]
RSP: 0018:ffff881039367bd8 EFLAGS: 00010286
RAX: ffff881039367fd8 RBX: ffff882018552000 RCX: dead000000200200
RDX: 0000000000000000 RSI: ffff881039367b68 RDI: ffff881039367b68
RBP: ffff881039367bf8 R08: ffff881039367b68 R09: 2222222222222222
R10: 2222222222222222 R11: 2222222222222222 R12: ffff882015a7a040
R13: ffff882014eb89c0 R14: ffff8820289e2800 R15: 0000000000000000
FS: 0000000000000000(0000) GS:ffff88103fc00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: ffff882018552020 CR3: 0000000001c0b000 CR4: 00000000000407f0
Stack:
ffff881039367c18 ffff882014eb89c0 ffff882015e28c00 0000000000000000
ffff881039367c18 ffffffffa034d9d1 ffff8820289e2800 ffff882014eb89c0
ffff881039367c58 ffffffff815bdecb ffffffff815bddf2 ffff882014eb89c0
Call Trace:
[<ffffffffa034d9d1>] rawv6_close+0x21/0x40 [ipv6]
[<ffffffff815bdecb>] inet_release+0xfb/0x220
[<ffffffff815bddf2>] ? inet_release+0x22/0x220
[<ffffffffa032686f>] inet6_release+0x3f/0x50 [ipv6]
[<ffffffff8151c1d9>] sock_release+0x29/0xa0
[<ffffffff81525520>] sk_release_kernel+0x30/0x70
[<ffffffffa034f14b>] icmpv6_sk_exit+0x3b/0x80 [ipv6]
[<ffffffff8152fff9>] ops_exit_list+0x39/0x60
[<ffffffff815306fb>] cleanup_net+0xfb/0x1a0
[<ffffffff81075e3a>] process_one_work+0x1da/0x610
[<ffffffff81075dc9>] ? process_one_work+0x169/0x610
[<ffffffff81076390>] worker_thread+0x120/0x3a0
[<ffffffff81076270>] ? process_one_work+0x610/0x610
[<ffffffff8107da2e>] kthread+0xee/0x100
[<ffffffff8107d940>] ? __init_kthread_worker+0x70/0x70
[<ffffffff8162a99c>] ret_from_fork+0x7c/0xb0
[<ffffffff8107d940>] ? __init_kthread_worker+0x70/0x70
Code: 20 48 89 5d e8 4c 89 65 f0 4c 89 6d f8 66 66 66 66 90 4c 8b 67 30 49 89 fd e8 db 3c 1e e1 49 8b 9c 24 90 08 00 00 48 85 db 74 06 <4c> 39 6b 20 74 20 bb f3 ff ff ff e8 8e 3c 1e e1 89 d8 4c 8b 65
RIP [<ffffffffa0366b02>] ip6mr_sk_done+0x32/0xb0 [ipv6]
RSP <ffff881039367bd8>
CR2: ffff882018552020
Reported-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Tested-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Hannes Frederic Sowa <hannes@stressinduktion.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 7aa0076c497d6f0d5d957b431d0d80e1e9780274 ]
Received packets are only scattered if this is enabled in both the
matching filter and the receiving queue. This was not being done for
filters inserted for RFS, so any packet requiring more than a single
descriptor was dropped.
Signed-off-by: Ben Hutchings <bhutchings@solarflare.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 651e92716aaae60fc41b9652f54cb6803896e0da ]
Limit the min/max value that can be written to
/proc/sys/net/ipv4/tcp_syn_retries.
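A sketch of how such clamping is typically expressed in the sysctl table
(the bounds and the backing variable below are illustrative assumptions,
not necessarily the exact upstream values):

#include <linux/sysctl.h>

static int example_tcp_syn_retries;		/* stand-in for the real knob */
static int example_syn_retries_min = 1;		/* assumed lower bound */
static int example_syn_retries_max = 127;	/* assumed upper bound */

static struct ctl_table example_table[] = {
	{
		.procname	= "tcp_syn_retries",
		.data		= &example_tcp_syn_retries,
		.maxlen		= sizeof(int),
		.mode		= 0644,
		.proc_handler	= proc_dointvec_minmax,	/* enforces extra1/extra2 */
		.extra1		= &example_syn_retries_min,
		.extra2		= &example_syn_retries_max,
	},
	{ }
};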
Signed-off-by: Michal Tesar <mtesar@redhat.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
[ Upstream commit 087d273caf4f7d3f2159256f255f1f432bc84a5b ]
This patch doesn't change the compiled code because ARC_HDR_SIZE is 4
and sizeof(int) is 4, but the intent was to use the header size and not
the sizeof() of the header size.
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 89c66ee890af18500fa4598db300cc07c267f900 upstream.
Commit 048177ce3b3962852fd34a7e04938959271c7e70 (spi: spi-davinci:
convert to DMA engine API) introduced a regression: dma_map_single()
is called with direction DMA_FROM_DEVICE for rx and for tx.
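A hedged sketch of the corrected direction choice (generic names, not the
spi-davinci code verbatim; error checking omitted for brevity):

#include <linux/dma-mapping.h>

/* The tx buffer is read by the device, so it must be mapped DMA_TO_DEVICE;
 * only the rx buffer is mapped DMA_FROM_DEVICE. */
static void example_map_spi_buffers(struct device *dev, void *rx, void *tx,
				    size_t len, dma_addr_t *rx_dma,
				    dma_addr_t *tx_dma)
{
	*rx_dma = dma_map_single(dev, rx, len, DMA_FROM_DEVICE);
	*tx_dma = dma_map_single(dev, tx, len, DMA_TO_DEVICE);
}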
Signed-off-by: Christian Eggers <ceggers@gmx.de>
Acked-by: Matt Porter <mporter@ti.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 803075dba31c17af110e1d9a915fe7262165b213 upstream.
Recently we added an early quirk to detect 5500/5520 chipsets
with early revisions that had problems with irq draining with
interrupt remapping enabled:
commit 03bbcb2e7e292838bb0244f5a7816d194c911d62
Author: Neil Horman <nhorman@tuxdriver.com>
Date: Tue Apr 16 16:38:32 2013 -0400
iommu/vt-d: add quirk for broken interrupt remapping on 55XX chipsets
It turns out this same problem is present in the intel X58
chipset as well. See errata 69 here:
http://www.intel.com/content/www/us/en/chipsets/x58-express-specification-update.html
This patch extends the pci early quirk so that the chip
devices/revisions specified in the above update are also covered
in the same way:
Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Donald Dutile <ddutile@redhat.com>
Cc: Joerg Roedel <joro@8bytes.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Malcolm Crossley <malcolm.crossley@citrix.com>
Cc: Prarit Bhargava <prarit@redhat.com>
Cc: Don Zickus <dzickus@redhat.com>
Link: http://lkml.kernel.org/r/1374059639-8631-1-git-send-email-nhorman@tuxdriver.com
[ Small edits. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 8742f229b635bf1c1c84a3dfe5e47c814c20b5c8 upstream.
Ensure that the user_namespace->parent chain can't grow too much.
Currently we use the hardcoded 32 as the limit.
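A minimal sketch of one way such a limit can be enforced when creating a new
namespace (field, constant, and function names here are assumptions for
illustration, not the exact upstream code):

#include <linux/errno.h>

#define EXAMPLE_MAX_USER_NS_LEVEL 32	/* the hardcoded limit mentioned above */

struct example_user_ns {
	struct example_user_ns *parent;
	int level;			/* nesting depth from the initial namespace */
};

static int example_check_nesting(const struct example_user_ns *parent_ns)
{
	if (parent_ns->level >= EXAMPLE_MAX_USER_NS_LEVEL)
		return -EUSERS;		/* refuse to nest any deeper */
	return 0;
}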
Reported-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 6160968cee8b90a5dd95318d716e31d7775c4ef3 upstream.
unshare_userns(new_cred) does *new_cred = prepare_creds() before
create_user_ns() which can fail. However, the caller expects that
it doesn't need to take care of new_cred if unshare_userns() fails.
We could change the single caller, sys_unshare(), but I think it
would be more clean to avoid the side effects on failure, so with
this patch unshare_userns() does put_cred() itself and initializes
*new_cred only if create_user_ns() succeeds.
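A hedged sketch of the resulting error handling (the real helpers
prepare_creds(), create_user_ns() and put_cred() are used, but the
surrounding function is simplified):

#include <linux/cred.h>
#include <linux/errno.h>
#include <linux/sched.h>
#include <linux/user_namespace.h>

/* Drop the cred ourselves on failure; only hand *new_cred back to the
 * caller once create_user_ns() has succeeded. */
static int example_unshare_userns(unsigned long unshare_flags,
				  struct cred **new_cred)
{
	struct cred *cred;
	int err;

	if (!(unshare_flags & CLONE_NEWUSER))
		return 0;

	cred = prepare_creds();
	if (!cred)
		return -ENOMEM;

	err = create_user_ns(cred);
	if (err)
		put_cred(cred);		/* caller no longer has to clean up */
	else
		*new_cred = cred;
	return err;
}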
Signed-off-by: Oleg Nesterov <oleg@redhat.com>
Reviewed-by: Andy Lutomirski <luto@amacapital.net>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 2865a8fb44cc32420407362cbda80c10fa09c6b2 upstream.
$echo '0' > /sys/bus/workqueue/devices/xxx/numa
$cat /sys/bus/workqueue/devices/xxx/numa
I got 1. It should be 0; the reason is that copy_workqueue_attrs() called
in apply_workqueue_attrs() doesn't copy the no_numa field.
Fix it by making copy_workqueue_attrs() copy ->no_numa too. This
would also make get_unbound_pool() set a pool's ->no_numa attribute
according to the workqueue attributes used when the pool was created.
While harmless, as ->no_numa isn't a pool attribute, this is a bit
confusing. Clear it explicitly.
tj: Updated description and comments a bit.
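A hedged sketch of the copy in question (using the struct workqueue_attrs
fields of that era; the clearing in get_unbound_pool() is not shown):

#include <linux/cpumask.h>
#include <linux/workqueue.h>

/* Copy every attribute, including the no_numa flag, so what the user
 * wrote to sysfs survives apply_workqueue_attrs(). */
static void example_copy_workqueue_attrs(struct workqueue_attrs *to,
					 const struct workqueue_attrs *from)
{
	to->nice = from->nice;
	cpumask_copy(to->cpumask, from->cpumask);
	to->no_numa = from->no_numa;	/* previously missing */
}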
Signed-off-by: Shaohua Li <shli@fusionio.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 3b0040a47ad63f7147e9e7d2febb61a3b564bb90 upstream.
The find_next_bit_left function is broken if used with an offset which
is not a multiple of 64. The shift to mask the bits of a 64-bit word
not to search is in the wrong direction, the result can be either a
bit found smaller than the offset or failure to find a set bit.
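To illustrate the masking direction (a generic sketch, not the s390
implementation itself): for an MSB-first search the first offset bits of the
starting word are the leftmost ones, so they must be cleared by right-shifting
the all-ones mask.

/* Mask off the leftmost 'offset' bits of a word before an MSB-first
 * bit search; offset is assumed to be < BITS_PER_LONG. */
static unsigned long example_mask_word_left(unsigned long word,
					    unsigned int offset)
{
	if (!offset)
		return word;
	/* ~0UL >> offset keeps only bits to the right of 'offset';
	 * shifting the mask the other way would clear the wrong half. */
	return word & (~0UL >> offset);
}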
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 594712276e737961d30e11eae80d403b2b3815df upstream.
Just add the new model number where appropriate.
Signed-off-by: Heiko Carstens <heiko.carstens@de.ibm.com>
Signed-off-by: Martin Schwidefsky <schwidefsky@de.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 09ede5414f0215461c933032630bf9c3a61a8ba3 upstream.
We need to track this correctly. While at it, shovel the boolean
that tracks whether the sdvo is in tv mode into pipe_config.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=36997
Tested-by: Pierre Assal <pierre.assal@verint.com>
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=63609
Tested-by: cancan,feng <cancan.feng@intel.com>
Reviewed-by: Jani Nikula <jani.nikula@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 35f0399db6658f465b00893bdd13b992a0acfef0 upstream.
Several users reported this crash as a NULL pointer dereference or general
protection fault. The story is that we added a rbtree to speed up ulist
iteration, and we use krealloc() to address ulist growth. krealloc() uses
memcpy to copy the old data to the new memory area, which is OK for an array
as it doesn't use pointers, but not OK for a rbtree as it uses pointers.
So krealloc() will mess up our rbtree and it ends up with a crash.
Reviewed-by: Wang Shilong <wangsl-fnst@cn.fujitsu.com>
Signed-off-by: Liu Bo <bo.li.liu@oracle.com>
Signed-off-by: Josef Bacik <jbacik@fusionio.com>
Cc: BJ Quinn <bj@placs.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit eaa5a990191d204ba0f9d35dbe5505ec2cdd1460 upstream.
GCC will optimize mxcsr_feature_mask_init in arch/x86/kernel/i387.c:
memset(&fx_scratch, 0, sizeof(struct i387_fxsave_struct));
asm volatile("fxsave %0" : : "m" (fx_scratch));
mask = fx_scratch.mxcsr_mask;
if (mask == 0)
mask = 0x0000ffbf;
to
memset(&fx_scratch, 0, sizeof(struct i387_fxsave_struct));
asm volatile("fxsave %0" : : "m" (fx_scratch));
mask = 0x0000ffbf;
since the asm statement doesn't say it will update fx_scratch. As a
result, the DAZ bit will be cleared. This patch fixes it. This bug
dates back to at least kernel 2.6.12.
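The standard remedy for this class of bug is to mark the buffer as an output
of the asm; a hedged sketch mirroring mxcsr_feature_mask_init (the struct
mirrors the 512-byte FXSAVE area):

#include <linux/string.h>

struct example_fxsave {			/* mirrors i387_fxsave_struct */
	unsigned short cwd, swd, twd, fop;
	unsigned long long rip, rdp;
	unsigned int mxcsr, mxcsr_mask;
	unsigned int st_space[32];
	unsigned int xmm_space[64];
	unsigned int padding[24];
} __attribute__((aligned(16)));

static unsigned int example_read_mxcsr_mask(void)
{
	struct example_fxsave fx_scratch;
	unsigned int mask;

	memset(&fx_scratch, 0, sizeof(fx_scratch));
	/* "+m" tells GCC the asm both reads and writes fx_scratch, so the
	 * memset result cannot be assumed to survive and 'mask' cannot be
	 * folded to the 0x0000ffbf constant. */
	asm volatile("fxsave %0" : "+m" (fx_scratch));
	mask = fx_scratch.mxcsr_mask;
	if (mask == 0)
		mask = 0x0000ffbf;
	return mask;
}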
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 9cc2e0e9f13315559c85c9f99f141e420967c955 upstream.
Changing the UVD BOs offset on suspend/resume doesn't work because the VCPU
internally keeps pointers to it. Just keep it always pinned and save the
content manually.
Fixes: https://bugs.freedesktop.org/show_bug.cgi?id=66425
v2: fix compiler warning
v3: fix CIK support
v4: rebased for 3.10-stable tree
Note: a version of this patch needs to go to stable.
Signed-off-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 084457f284abf6789d90509ee11dae383842b23b upstream.
cgroup_cfts_commit() uses dget() to keep cgroup alive after cgroup_mutex
is dropped, but dget() won't prevent cgroupfs from being umounted. When
the race happens, vfs will see some dentries with non-zero refcnt while
umount is in process.
Keep running this:
mount -t cgroup -o blkio xxx /cgroup
umount /cgroup
And this:
modprobe cfq-iosched
rmmod cfq-iosched
After a while, the BUG() in shrink_dcache_for_umount_subtree() may
be triggered:
BUG: Dentry xxx{i=0,n=blkio.yyy} still in use (1) [umount of cgroup cgroup]
Signed-off-by: Li Zefan <lizefan@huawei.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit de1e0c40aceb9d5bff09c3a3b97b2f1b178af53f upstream.
The ->reserved field isn't cleared so we leak one byte of stack
information to userspace.
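A hedged sketch of the fill path with the fix applied (using the uapi
struct fanotify_event_metadata; the surrounding function is simplified):

#include <linux/fanotify.h>

/* Initialize every field, including 'reserved', of the metadata that is
 * later copied to userspace, so no stack garbage leaks out. */
static void example_fill_metadata(struct fanotify_event_metadata *md,
				  __s32 fd, __s32 pid, __u64 mask)
{
	md->event_len = FAN_EVENT_METADATA_LEN;
	md->vers = FANOTIFY_METADATA_VERSION;
	md->reserved = 0;		/* previously left uninitialized */
	md->metadata_len = FAN_EVENT_METADATA_LEN;
	md->mask = mask;
	md->fd = fd;
	md->pid = pid;
}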
Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
Cc: Eric Paris <eparis@redhat.com>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luis Henriques <luis.henriques@canonical.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit bcf53de4e60d9000b82f541d654529e2902a4c2c upstream.
Otherwise the DDI_A_4_LANES bit gets lost and we can't use > 2 lanes
on eDP. This fixes eDP on hsw with > 2 lanes.
Also s/port_reversal/saved_port_bits/ since the current name is
confusing.
Signed-off-by: Stéphane Marchesin <marcheu@chromium.org>
Reviewed-by: Paulo Zanoni <paulo.r.zanoni@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Zhouping Liu <zliu@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit b7649158a0d241f8d53d13ff7441858539e16656 upstream.
In blkif_queue_request blkfront iterates over the scatterlist in order
to set the segments of the request, and in blkif_completion blkfront
iterates over the raw request, which makes it hard to know the exact
position of the source and destination memory positions.
This can be solved by allocating a scatterlist for each request, that
will be kept until the request is finished, allowing us to copy the
data back to the original memory without having to iterate over the
raw request.
Oracle-Bug: 16660413 - LARGE ASYNCHRONOUS READS APPEAR BROKEN ON 2.6.39-400
CC: stable@vger.kernel.org
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reported-and-Tested-by: Anne Milicia <anne.milicia@oracle.com>
Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit aeea40cbf9388fc829e66fa049f64d97fd72e118 upstream.
They still seem to cause instability on some r6xx parts.
As a follow up, we can switch to using CP DMA for bo
moves on r6xx as a lighter weight alternative to using
the 3D engine.
A version of this patch should also go to stable kernels.
Tested-by: J.N. <golden.fleeced@gmail.com>
Reviewed-by: Christian König <christian.koenig@amd.com>
Signed-off-by: Alex Deucher <alexander.deucher@amd.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit aa914f5ec25e4371ba18b312971314be1b9b1076 upstream.
Ben Herrenschmidt reported the following problem:
- The bus has space for all desired MMIO resources, including optional
space for SR-IOV devices
- We attempt to allocate I/O port space, but it fails because the bus
has no I/O space
- Because of the I/O allocation failure, we retry MMIO allocation,
requesting only the required space, without the optional SR-IOV space
This means we don't allocate the optional SR-IOV space, even though we
could.
This is related to 0c5be0cb0e ("PCI: Retry on IORESOURCE_IO type
allocations").
This patch changes how we handle allocation failures. We will now retry
allocation of only the resource type that failed. If MMIO allocation
fails, we'll retry only MMIO allocation. If I/O port allocation fails,
we'll retry only I/O port allocation.
[bhelgaas: changelog]
Reference: https://lkml.kernel.org/r/1367712653.11982.19.camel@pasglop
Reported-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Tested-by: Gavin Shan <shangw@linux.vnet.ibm.com>
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 29ed1f29b68a8395d5679b3c4e38352b617b3236 upstream.
Hot-removing a device with SR-IOV enabled causes a null pointer dereference
in v3.9 and v3.10.
This is a regression caused by ba518e3c17 ("PCI: pciehp: Iterate over all
devices in slot, not functions 0-7"). When we iterate over the
bus->devices list, we first remove the PF, which also removes all the VFs
from the list. Then the list iterator blows up because more than just the
current entry was removed from the list.
ac205b7bb7 ("PCI: make sriov work with hotplug remove") works around a
similar problem in pci_stop_bus_devices() by iterating over the list in
reverse, so the VFs are stopped and removed from the list first, before the
PF.
This patch changes pciehp_unconfigure_device() to iterate over the list in
reverse, too.
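A hedged sketch of the reverse, removal-safe walk described above (the call
made per device is illustrative; the point is the iterator):

#include <linux/list.h>
#include <linux/pci.h>

/* Walk the bus device list backwards with the _safe_ variant, so removing
 * the PF (which also drops its VFs from the list) cannot invalidate the
 * iterator before the VFs have been handled. */
static void example_remove_slot_devices(struct pci_bus *bus)
{
	struct pci_dev *dev, *tmp;

	list_for_each_entry_safe_reverse(dev, tmp, &bus->devices, bus_list)
		pci_stop_and_remove_bus_device(dev);
}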
[bhelgaas: bugzilla, changelog]
Reference: https://bugzilla.kernel.org/show_bug.cgi?id=60604
Signed-off-by: Yinghai Lu <yinghai@kernel.org>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Acked-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 148519120c6d1f19ad53349683aeae9f228b0b8d upstream.
Revert commit 69a37bea (cpuidle: Quickly notice prediction failure for
repeat mode), because it has been identified as the source of a
significant performance regression in v3.8 and later as explained by
Jeremy Eder:
We believe we've identified a particular commit to the cpuidle code
that seems to be impacting performance of a variety of workloads.
The simplest way to reproduce is using netperf TCP_RR test, so
we're using that, on a pair of Sandy Bridge based servers. We also
have data from a large database setup where performance is also
measurably/positively impacted, though that test data isn't easily
shareable.
Included below are test results from 3 test kernels:
kernel reverts
-----------------------------------------------------------
1) vanilla upstream (no reverts)
2) perfteam2 reverts e11538d1f03914eb92af5a1a378375c05ae8520c
3) test reverts 69a37beabf1f0a6705c08e879bdd5d82ff6486c4
e11538d1f03914eb92af5a1a378375c05ae8520c
In summary, netperf TCP_RR numbers improve by approximately 4%
after reverting 69a37beabf1f0a6705c08e879bdd5d82ff6486c4. When
69a37beabf1f0a6705c08e879bdd5d82ff6486c4 is included, C0 residency
never seems to get above 40%. Taking that patch out gets C0 near
100% quite often, and performance increases.
The below data are histograms representing the %c0 residency @
1-second sample rates (using turbostat), while under netperf test.
- If you look at the first 4 histograms, you can see %c0 residency
almost entirely in the 30,40% bin.
- The last pair, which reverts 69a37beabf1f0a6705c08e879bdd5d82ff6486c4,
shows %c0 in the 80,90,100% bins.
Below each kernel name are netperf TCP_RR trans/s numbers for the
particular kernel that can be disclosed publicly, comparing the 3
test kernels. We ran a 4th test with the vanilla kernel where
we've also set /dev/cpu_dma_latency=0 to show overall impact
boosting single-threaded TCP_RR performance over 11% above
baseline.
3.10-rc2 vanilla RX + c0 lock (/dev/cpu_dma_latency=0):
TCP_RR trans/s 54323.78
-----------------------------------------------------------
3.10-rc2 vanilla RX (no reverts)
TCP_RR trans/s 48192.47
Receiver %c0
0.0000 - 10.0000 [ 1]: *
10.0000 - 20.0000 [ 0]:
20.0000 - 30.0000 [ 0]:
30.0000 - 40.0000 [ 59]:
***********************************************************
40.0000 - 50.0000 [ 1]: *
50.0000 - 60.0000 [ 0]:
60.0000 - 70.0000 [ 0]:
70.0000 - 80.0000 [ 0]:
80.0000 - 90.0000 [ 0]:
90.0000 - 100.0000 [ 0]:
Sender %c0
0.0000 - 10.0000 [ 1]: *
10.0000 - 20.0000 [ 0]:
20.0000 - 30.0000 [ 0]:
30.0000 - 40.0000 [ 11]: ***********
40.0000 - 50.0000 [ 49]:
*************************************************
50.0000 - 60.0000 [ 0]:
60.0000 - 70.0000 [ 0]:
70.0000 - 80.0000 [ 0]:
80.0000 - 90.0000 [ 0]:
90.0000 - 100.0000 [ 0]:
-----------------------------------------------------------
3.10-rc2 perfteam2 RX (reverts commit
e11538d1f03914eb92af5a1a378375c05ae8520c)
TCP_RR trans/s 49698.69
Receiver %c0
0.0000 - 10.0000 [ 1]: *
10.0000 - 20.0000 [ 1]: *
20.0000 - 30.0000 [ 0]:
30.0000 - 40.0000 [ 59]:
***********************************************************
40.0000 - 50.0000 [ 0]:
50.0000 - 60.0000 [ 0]:
60.0000 - 70.0000 [ 0]:
70.0000 - 80.0000 [ 0]:
80.0000 - 90.0000 [ 0]:
90.0000 - 100.0000 [ 0]:
Sender %c0
0.0000 - 10.0000 [ 1]: *
10.0000 - 20.0000 [ 0]:
20.0000 - 30.0000 [ 0]:
30.0000 - 40.0000 [ 2]: **
40.0000 - 50.0000 [ 58]:
**********************************************************
50.0000 - 60.0000 [ 0]:
60.0000 - 70.0000 [ 0]:
70.0000 - 80.0000 [ 0]:
80.0000 - 90.0000 [ 0]:
90.0000 - 100.0000 [ 0]:
-----------------------------------------------------------
3.10-rc2 test RX (reverts 69a37beabf1f0a6705c08e879bdd5d82ff6486c4
and e11538d1f03914eb92af5a1a378375c05ae8520c)
TCP_RR trans/s 47766.95
Receiver %c0
0.0000 - 10.0000 [ 1]: *
10.0000 - 20.0000 [ 1]: *
20.0000 - 30.0000 [ 0]:
30.0000 - 40.0000 [ 27]: ***************************
40.0000 - 50.0000 [ 2]: **
50.0000 - 60.0000 [ 0]:
60.0000 - 70.0000 [ 2]: **
70.0000 - 80.0000 [ 0]:
80.0000 - 90.0000 [ 0]:
90.0000 - 100.0000 [ 28]: ****************************
Sender:
0.0000 - 10.0000 [ 1]: *
10.0000 - 20.0000 [ 0]:
20.0000 - 30.0000 [ 0]:
30.0000 - 40.0000 [ 11]: ***********
40.0000 - 50.0000 [ 0]:
50.0000 - 60.0000 [ 1]: *
60.0000 - 70.0000 [ 0]:
70.0000 - 80.0000 [ 3]: ***
80.0000 - 90.0000 [ 7]: *******
90.0000 - 100.0000 [ 38]: **************************************
These results demonstrate gaining back the tendency of the CPU to
stay in more responsive, performant C-states (and thus yield
measurably better performance), by reverting commit
69a37beabf1f0a6705c08e879bdd5d82ff6486c4.
Requested-by: Jeremy Eder <jeder@redhat.com>
Tested-by: Len Brown <len.brown@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 2a99859932281ed6c2ecdd988855f8f6838f6743 upstream.
Since cpufreq_cpu_put() called by __cpufreq_remove_dev() drops the
driver module refcount, __cpufreq_remove_dev() causes that refcount
to become negative for the cpufreq driver after a suspend/resume
cycle.
This is not the only bad thing that happens there, however, because
kobject_put() should only be called for the policy kobject at this
point if the CPU is not the last one for that policy.
Namely, if the given CPU is the last one for that policy, the
policy kobject's refcount should be 1 at this point, as set by
cpufreq_add_dev_interface(), and only needs to be dropped once for
the kobject to go away. This actually happens under the cpu == 1
check, so it need not be done before by cpufreq_cpu_put().
On the other hand, if the given CPU is not the last one for that
policy, this means that cpufreq_add_policy_cpu() has been called
at least once for that policy and cpufreq_cpu_get() has been
called for it too. To balance that cpufreq_cpu_get(), we need to
call cpufreq_cpu_put() in that case.
Thus, to fix the described problem and keep the reference
counters balanced in both cases, move the cpufreq_cpu_get() call
in __cpufreq_remove_dev() to the code path executed only for
CPUs that share the policy with other CPUs.
Reported-and-tested-by: Toralf Förster <toralf.foerster@gmx.de>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Reviewed-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 228b30234f258a193317874854eee1ca7807186e upstream.
Revert commit e11538d1 (cpuidle: Quickly notice prediction failure in
general case), since it depends on commit 69a37be (cpuidle: Quickly
notice prediction failure for repeat mode) that has been identified
as the source of a significant performance regression in v3.8 and
later.
Requested-by: Jeremy Eder <jeder@redhat.com>
Tested-by: Len Brown <len.brown@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 016d5baad04269e8559332df05f89bd95b52d6ad upstream.
The _BIX method returns extended battery info as a package.
According to the ACPI spec (ACPI 5, Section 10.2.2.2), the first member
of that package should be "Revision". However, the current ACPI
battery driver treats the first member as "Power Unit" which should
be the second member. This causes the result of _BIX return data
parsing to be incorrect.
Fix this by adding a new member called 'revision' to struct
acpi_battery and adding the offsetof() information on it to
extended_info_offsets[] as the first row.
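A hedged sketch of the ordering point (illustrative structures following the
pattern described, not copied from drivers/acpi/battery.c):

#include <linux/stddef.h>

struct example_acpi_battery {
	int revision;		/* new: _BIX package element 0 */
	int power_unit;		/* element 1, previously misread as element 0 */
	int design_capacity;
	/* ... */
};

struct example_offset {
	size_t offset;
};

/* Offsets listed in the order the _BIX package returns its members,
 * so 'revision' must come first. */
static const struct example_offset example_extended_info_offsets[] = {
	{ offsetof(struct example_acpi_battery, revision) },
	{ offsetof(struct example_acpi_battery, power_unit) },
	{ offsetof(struct example_acpi_battery, design_capacity) },
};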
[rjw: Changelog]
Reported-and-tested-by: Jan Hoffmann <jan.christian.hoffmann@gmail.com>
References: http://bugzilla.kernel.org/show_bug.cgi?id=60519
Signed-off-by: Lan Tianyu <tianyu.lan@intel.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 5863e10b441e7ea4b492f930f1be180a97d026f3 upstream.
Use zram->init_lock to protect access to zram->meta, otherwise it
may cause invalid memory access if zram->meta has been freed by
zram_reset_device().
This issue may be triggered by:
Thread 1:
while true; do cat mem_used_total; done
Thread 2:
while true; do echo 8M > disksize; echo 1 > reset; done
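A minimal sketch of the read-side protection described (the struct and
helper below are simplified stand-ins for the zram driver's own types):

#include <linux/kernel.h>
#include <linux/mm.h>
#include <linux/rwsem.h>
#include <linux/types.h>

struct example_zram {
	struct rw_semaphore init_lock;
	int init_done;
	void *meta;			/* freed by zram_reset_device() */
};

u64 example_meta_used_bytes(void *meta);	/* assumed helper */

static ssize_t example_mem_used_total_show(struct example_zram *zram, char *buf)
{
	u64 val = 0;

	down_read(&zram->init_lock);	/* keep a concurrent reset from freeing meta */
	if (zram->init_done)
		val = example_meta_used_bytes(zram->meta);
	up_read(&zram->init_lock);

	return scnprintf(buf, PAGE_SIZE, "%llu\n", val);
}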
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 12a7ad3b810e77137d0caf97a6dd97591e075b30 upstream.
Function valid_io_request() should verify that the entire request is within
the zram device address range. Otherwise it may cause invalid memory
access when accessing/modifying zram->meta->table[index] because the
'index' is out of range. It may then access non-existent memory, or randomly
modify memory belonging to other subsystems, which is hard to track down.
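A hedged sketch of the kind of range check the text describes (3.10-era bio
fields; the disksize parameter is a stand-in for zram->disksize):

#include <linux/bio.h>
#include <linux/types.h>

/* Reject any I/O whose start or end falls outside the device, so
 * table[index] lookups can never run past the end of the table. */
static bool example_valid_io_request(u64 disksize, struct bio *bio)
{
	u64 start = (u64)bio->bi_sector << 9;	/* sectors -> bytes */
	u64 end = start + bio->bi_size;

	if (start >= disksize)
		return false;
	if (end < start || end > disksize)	/* overflow or past the end */
		return false;
	return true;
}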
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 65c484609a3b25c35e4edcd5f2c38f98f5226093 upstream.
When doing a partial write and the whole page is filled with zeros,
zram_bvec_write() will free uncmem twice.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 39a9b8ac9333e4268ecff7da6c9d1ab3823ff243 upstream.
On the error recovery path of zram_init(), the zram device object causing
the failure is leaked. So change create_device() to free allocated
resources on the error path.
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Acked-by: Minchan Kim <minchan@kernel.org>
Acked-by: Jerome Marchand <jmarchan@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 57ab048532c0d975538cebd4456491b5c34248f4 upstream.
zram_slot_free_notify() is free-running without any protection from
concurrent operations. So there are race conditions between
zram_bvec_read()/zram_bvec_write() and zram_slot_free_notify(),
and possible consequences include:
1) Trigger BUG_ON(!handle) on zram_bvec_write() side.
2) Access to freed pages on zram_bvec_read() side.
3) Break some fields (bad_compress, good_compress, pages_stored)
in zram->stats if the swap layer makes concurrent calls to
zram_slot_free_notify().
So enhance zram_slot_free_notify() to acquire writer lock on zram->lock
before calling zram_free_page().
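A minimal sketch of that locking change (assuming an rw_semaphore named
'lock' as in the zram driver of that era; the free helper is a stand-in):

#include <linux/rwsem.h>

struct example_zram_dev {
	struct rw_semaphore lock;	/* protects table entries and stats */
};

void example_zram_free_page(struct example_zram_dev *zram, unsigned long index);

static void example_slot_free_notify(struct example_zram_dev *zram,
				     unsigned long index)
{
	/* Writer lock: cannot race with concurrent zram_bvec_read() or
	 * zram_bvec_write() holding the lock for reading/writing. */
	down_write(&zram->lock);
	example_zram_free_page(zram, index);
	up_write(&zram->lock);
}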
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 6030ea9b35971a4200062f010341ab832e878ac9 upstream.
Memory for the zram->disk object may have already been freed after returning
from destroy_device(zram), so it's unsafe for zram_reset_device(zram)
to access zram->disk again.
We can't solve this bug by flipping the order of destroy_device(zram)
and zram_reset_device(zram); that would cause deadlock issues in the
zram sysfs handler.
So fix it by holding an extra reference to zram->disk before calling
destroy_device(zram).
Signed-off-by: Jiang Liu <jiang.liu@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 237b2ac8ac89a6b0120decdd05c7bf4637deb98a upstream.
This patch fixes an issue wherein adhoc rates were being copied
into the association request from a P2P client.
Signed-off-by: Avinash Patil <patila@marvell.com>
Signed-off-by: Stone Piao <piaoyun@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 953b3539ef9301b8ef73f4b6e2fd824b86aae65a upstream.
This patch fixes an issue wherein association would fail on P2P
interfaces. This happened because we are checking priv->mode
against NL80211_IFTYPE_STATION. While this check is correct for
infrastructure stations, it would fail P2P clients for which mode
is NL80211_IFTYPE_P2P_CLIENT.
A better check is bss_role, which has only 2 values: STA/AP.
Signed-off-by: Avinash Patil <patila@marvell.com>
Signed-off-by: Stone Piao <piaoyun@marvell.com>
Signed-off-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 83e612f632c3897be29ef02e0472f6d63e258378 upstream.
Both the type and pkt_len variables are in host endianness, but they should be
in little endian in the payload.
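A hedged sketch of the conversion this calls for (field names illustrate a
little-endian wire header; they are not the exact mwifiex structure):

#include <asm/byteorder.h>
#include <linux/types.h>

struct example_tx_header {
	__le16 pkt_len;		/* payload length, little endian on the wire */
	__le16 type;		/* packet type, little endian on the wire */
};

static void example_fill_header(struct example_tx_header *hdr,
				u16 pkt_len, u16 type)
{
	/* Convert from host endianness before placing into the payload. */
	hdr->pkt_len = cpu_to_le16(pkt_len);
	hdr->type = cpu_to_le16(type);
}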
Signed-off-by: Tomasz Moń <desowin@gmail.com>
Acked-by: Bing Zhao <bzhao@marvell.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit e2288b66fe7ff0288382b2af671b4da558b44472 upstream.
Since we clear QUEUE_STARTED in rt2x00queue_stop_queue(), the following
call to rt2x00queue_pause_queue() reduces to a noop, i.e. we do not
stop the queue in mac80211.
To fix that, introduce a rt2x00queue_pause_queue_nocheck() function,
which will stop the queue in mac80211 directly.
Note that rt2x00_start_queue() explicitly sets the QUEUE_PAUSED bit.
Note also that reordering the operations, i.e. first calling
rt2x00queue_pause_queue() and then clearing the QUEUE_STARTED bit, would race
with rt2x00queue_unpause_queue(), so calling ieee80211_stop_queue()
directly is the only available solution to fix the problem without
major rework.
Signed-off-by: Stanislaw Gruszka <stf_xl@wp.pl>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|