Age | Commit message | Author |
|
HBRV-based default selection of the backlight control strategy didn't work
well: at least the X41 defines it but doesn't use it, and I don't think
it will stop there. Switch to a blacklist, and make sure only Radeon-
based models get ECNVRAM.
Symptoms of an incorrect backlight mode selection are:
1. Non-working backlight control through sysfs;
2. Backlight gets reset to the lowest level at every shutdown, reboot
and when thinkpad-acpi gets unloaded.
This fixes a regression in 2.6.30, bugzilla #13826. This fix is
already present in 2.6.31.
This is a minimal patch for 2.6.30-stable, based on mainline
commits: 050df107c408a3df048524b3783a5fc6d4dccfdb,
7d95a3d564901e88ed42810f054e579874151999,
59fe4fe34d7afdf63208124f313be9056feaa2f4,
6da25bf51689a5cc60370d30275dbb9e6852e0cb
Signed-off-by: Henrique de Moraes Holschuh <hmh@hmh.eng.br>
Reported-by: Tobias Diedrich <ranma+kernel@tdiedrich.de>
Reported-by: Robert de Rooy <robert.de.rooy@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 0c570cdeb8fdfcb354a3e9cd81bfc6a09c19de0c upstream.
Since 2.6.29 the PCI PM core has been restoring the standard
configuration registers of PCI devices in the early phase of
resume. In particular, PCI devices without drivers have been handled
this way since commit 355a72d75b3b4f4877db4c9070c798238028ecb5
(PCI: Rework default handling of suspend and resume). Unfortunately,
this leads to post-resume problems with CardBus devices which cannot
be accessed in the early phase of resume, because the sockets they
are on have not been woken up yet at that point.
To solve this problem, move the yenta socket resume to the early
phase of resume and, analogously, move the suspend of it to the late
phase of suspend. Additionally, remove some unnecessary PCI code
from the yenta socket's resume routine.
Fixes http://bugzilla.kernel.org/show_bug.cgi?id=13092, which is a
post-2.6.28 regression.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Reported-by: Florian <fs-kernelbugzilla@spline.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 827b4649d4626bf97b203b4bcd69476bb9b4e760 upstream.
pcmcia_socket_dev_suspend() doesn't use its second argument, so it
may be dropped safely.
This change is necessary for the subsequent yenta suspend/resume fix.
Signed-off-by: Rafael J. Wysocki <rjw@sisk.pl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 31b239ad1ba7225435e13f5afc47e48eb674c0cc upstream.
Commit a5bfc4714b3f01365aef89a92673f2ceb1ccf246 dropped explicit
pci_intx() manipulation from ahci because it seemed unnecessary and
ahci doesn't seem to be the right place to be tweaking it if it were.
This was largely okay but there are exceptions. There was one on an
embedded platform which was fixed via firmware and now bko#14124
reports it on an HP DL320.
http://bugzilla.kernel.org/show_bug.cgi?id=14124
I still think this isn't something libata drivers should be caring
about (the only ones which are calling pci_intx() explicitly are
libata ones and one other driver) but for now reverting the change
seems to be the right thing to do.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Thomas Jarosch <thomas.jarosch@intra2net.com>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit f7f71173ea69d4dabf166533beffa9294090b7ef upstream.
This patch adds a new usbid for Zcomax XG-705A to the device table.
Reported-by: Jari Jaakola <jari.jaakola@gmail.com>
Signed-off-by: Christian Lamparter <chunkeey@googlemail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 7e24bc1ce669b2876ffa475ea1147f2bb9ffdc52 upstream.
Similar to commit b6adc195 (PCI hotplug: acpiphp wants a 64-bit
_SUN), pci_slot.ko reads and creates sysfs directories based on
the _SUN method.
Certain HP platforms return 64 bits in _SUN. This change to
pci_slot.ko allows us to see the correct sysfs directories.
Reported-by: Chad Smith <chad.smith@hp.com>
Signed-off-by: Alex Chiang <achiang@hp.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
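As an illustration of why the width matters, truncating a 64-bit _SUN value
to 32 bits changes the name of the sysfs directory that gets created; a small
user-space sketch (the slot value is made up):
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t sun = 0x100000002ULL;          /* hypothetical 64-bit _SUN */
	char name32[32], name64[32];

	snprintf(name32, sizeof(name32), "%u", (uint32_t)sun);               /* truncated */
	snprintf(name64, sizeof(name64), "%llu", (unsigned long long)sun);   /* full width */
	printf("32-bit name: %s, 64-bit name: %s\n", name32, name64);
	return 0;
}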
|
|
commit 6b5096e4d4496e185cd1ada5d1b8e1d941c805ed upstream.
One more form factor for Compaq Evo D510, which needs the same quirk
as the other form factors. Apparently there's no hardware monitoring
chip on that one, but SPD EEPROMs, so it's still worth unhiding the
SMBus.
Signed-off-by: Jean Delvare <khali@linux-fr.org>
Tested-by: Nuzhna Pomoshch <nuzhna_pomoshch@yahoo.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ac8672ea922bde59acf50eaa1eaa1640a6395fd2 upstream.
ata_tf_read_block() has off-by-one error when converting CHS address
to LBA. The bug isn't very visible because ata_tf_read_block() is
used only when generating sense data for a failed RW command and CHS
addressing isn't used too often these days.
This problem was spotted by Atsushi Nemoto.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Atsushi Nemoto <anemo@mba.ocn.ne.jp>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
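For reference, the conversion in question maps a CHS address to an LBA; sector
numbers start at 1, so the last term must be sect - 1. A user-space sketch with
made-up geometry values:
#include <stdio.h>

int main(void)
{
	unsigned long long heads = 16, sectors = 63;    /* device geometry       */
	unsigned long long cyl = 2, head = 5, sect = 1; /* CHS of failed command */

	unsigned long long wrong = (cyl * heads + head) * sectors + sect;      /* off by one */
	unsigned long long right = (cyl * heads + head) * sectors + sect - 1;  /* correct    */

	printf("without -1: %llu, with -1: %llu\n", wrong, right);
	return 0;
}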
|
|
commit 4eff3cae9c9809720c636e64bc72f212258e0bd5 upstream
virtio_blk: don't bounce highmem requests
By default a block driver bounces highmem requests, but virtio-blk is
perfectly fine with any request that fits into its 64-bit addressing scheme,
mapped in kernel virtual space or not.
Besides improving performance on highmem systems, this also makes the
reproducible oops in __bounce_end_io go away (though it only hides the real cause).
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
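The mechanism behind the fix is the block layer's per-queue bounce limit; a
driver that can address any page can raise it. A minimal kernel-context sketch,
assuming the 2.6.30-era API (the queue pointer name is illustrative):
	/* tell the block layer not to bounce highmem pages for this queue */
	blk_queue_bounce_limit(vblk->disk->queue, BLK_BOUNCE_ANY);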
|
|
V4L: em28xx: set up tda9887_conf in em28xx_card_setup()
(cherry picked from commit ae3340cbf59ea362c2016eea762456cc0969fd9e)
Added tda9887_conf set up into em28xx_card_setup()
Signed-off-by: Franklin Meng <fmeng2002@yahoo.com>
Signed-off-by: Douglas Schilling Landgraf <dougsland@redhat.com>
Signed-off-by: Mauro Carvalho Chehab <mchehab@redhat.com>
Tested-by: Larry Finger <Larry.Finger@lwfinger.net>
Signed-off-by: Michael Krufky <mkrufky@linuxtv.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 6dab62ee5a3bf4f71b8320c09db2e6022a19f40e upstream.
http://bugzilla.kernel.org/show_bug.cgi?id=12542 reports that with the
quirk not applied on resume, msi stops working after resuming and mcp78s
ahci fails due to IRQ mis-delivery. Apply it on resume too.
Signed-off-by: Tejun Heo <tj@kernel.org>
Cc: Peer Chen <pchen@nvidia.com>
Cc: Tj <linux@tjworld.net>
Reported-by: Nicolas Derive <kalon33@ubuntu.com>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit fa0681d2129732027355d6b7083dd8932b9b799d upstream.
The current implementation allocates a single host page for EQ context
memory, which was OK when we only allocated a few EQs. However, since
we now allocate an EQ for each CPU core, this patch removes the
hard-coded limit (which we exceed with 4 KB pages and 128 byte EQ
context entries with 32 CPUs) and uses the same ICM table code as all
other context tables, which ends up simplifying the code quite a bit
while fixing the problem.
This problem was actually hit in practice on a dual-socket Nehalem box
with 16 real hardware threads and sufficiently odd ACPI tables that it
shows on boot
SMP: Allowing 32 CPUs, 16 hotplug CPUs
so num_possible_cpus() ends up 32, and mlx4 ends up creating 33 MSI-X
interrupts and 33 EQs. This mlx4 bug means that mlx4 can't even
initialize at all on this quite mainstream system.
Reported-by: Eli Cohen <eli@mellanox.co.il>
Tested-by: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ec57935837a78f9661125b08a5d08b697568e040 upstream.
When probing the device in tpm_tis_init, the call to request_locality
uses timeout_a, which wasn't being initialized until after
request_locality. This results in request_locality falsely timing
out if the chip is still starting. Move the initialization to before
request_locality.
This probably only matters for embedded cases (i.e. mine); a BIOS likely
gets the TPM into a state where this code path isn't necessary.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Acked-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit bc00351edd5c1b84d48c3fdca740fedfce4ae6ce upstream.
A workaround for flash memory I/O errors when the PS3 internal
hard disk has not been formatted for OtherOS use.
This error condition mainly affects 'Live CD' users who have not
formatted the PS3's internal hard disk for OtherOS.
Fixes errors similar to these when using the ps3-flash-util
or ps3-boot-game-os programs:
ps3flash read failed 0x2050000
os_area_header_read: read error: os_area_header: Input/output error
main:627: os_area_read_hp error.
ERROR: can't change boot flag
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 3355443ad7601991affa5992b0d53870335af765 upstream.
"Ath5k: unify resets"
introduced a regression into 2.6.28 where the PCU registers are never
initialized, due to ath5k_reset() always passing true for change_channel.
We subsequently program a lot of these registers but several may start
in an unknown state.
Reported-by: Forrest Zhang <forrest@hifulltech.com>
Signed-off-by: Bob Copeland <me@bobcopeland.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 121264827656f5f06328b17983c796af17dc5949 upstream.
As early PCI resume has already restored the config for the host
bridge and the graphics device, we don't need to restore it again.
This removes the original ordering hack for the graphics device restore.
This fixes the resume hang issue found by Alan Stern on 845G,
caused by the extra config restore on the graphics device.
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit e71044ee2efa4792e21d243b03d49006db66aec9 upstream.
When the allocation fails in sg_build_indirect(), an oops happens in
the error path. It's caused by an obvious typo.
Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
Reported-by: Bob Tracy <rct@gherkin.frus.com>
Acked-by: Douglas Gilbert <dgilbert@interlog.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ec8b4b7085605e801a7740a2c3c33256aebe249c upstream.
The KEY_MAX change in 2.6.28 changed the values of the JSIOCSBTNMAP and
JSIOCGBTNMAP constants; software compiled with the old values no longer
works with kernels following 2.6.28, because the ioctl switch statement
no longer matches the values given by the software. This patch handles
these ioctls independently of the length of data specified, and applies the
same treatment to JSIOCSAXMAP and JSIOCGAXMAP which currently depend on
ABS_MAX.
Signed-off-by: Stephen Kitt <steve@sk2.org>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
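The underlying issue is that the ioctl number encodes the size of its argument,
and that size is derived from KEY_MAX, so binaries built against old headers
pass a command word with a different size field. A sketch of size-independent
matching, assuming the standard ioctl encoding macros (the helper name is made up):
#include <linux/ioctl.h>
#include <linux/joystick.h>
#include <linux/types.h>

/* Match the button-map ioctls regardless of the length encoded in the
 * command's size field (that length changed when KEY_MAX grew). */
static bool is_btnmap_ioctl(unsigned int cmd)
{
	unsigned int base = cmd & ~IOCSIZE_MASK;

	return base == (JSIOCGBTNMAP & ~IOCSIZE_MASK) ||
	       base == (JSIOCSBTNMAP & ~IOCSIZE_MASK);
}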
|
|
commit ae0b7448e91353ea5f821601a055aca6b58042cd upstream.
Fix some problems seen in the chunk size processing when activating a
pre-existing snapshot.
For a new snapshot, the chunk size can either be supplied by the creator
or a default value can be used. For an existing snapshot, the
chunk size in the snapshot header on disk should always be used.
If someone attempts to load an existing snapshot and has the 'default
chunk size' option set, the kernel uses its default value even when it
is incorrect for the snapshot being loaded. This patch ensures the
correct on-disk value is always used.
Secondly, when the code does use the chunk size stored on the disk it is
prudent to revalidate it, so the code can exit cleanly if it got
corrupted as happened in
https://bugzilla.redhat.com/show_bug.cgi?id=461506 .
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
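The kind of revalidation meant here is cheap: the on-disk chunk size must be
non-zero and a power of two before it is trusted. A simplified kernel-context
sketch (the helper name and messages are illustrative, not the exact dm-snapshot code):
static int validate_chunk_size(unsigned chunk_size, char **error)
{
	if (chunk_size == 0) {
		*error = "Chunk size must not be zero";
		return -EINVAL;
	}
	if (chunk_size & (chunk_size - 1)) {    /* not a power of two */
		*error = "Chunk size must be a power of 2";
		return -EINVAL;
	}
	return 0;
}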
|
|
commit 2defcc3fb4661e7351cb2ac48d843efc4c64db13 upstream.
Break the function set_chunk_size to two functions in preparation for
the fix in the following patch.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 61578dcd3fafe6babd72e8db32110cc0b630a432 upstream.
If a persistent snapshot fills up, a race can corrupt the on-disk header
which causes a crash on any future attempt to activate the snapshot
(typically while booting). This patch fixes the race.
When the snapshot overflows, __invalidate_snapshot is called, which calls
snapshot store method drop_snapshot. It goes to persistent_drop_snapshot that
calls write_header. write_header constructs the new header in the "area"
location.
Concurrently, an existing kcopyd job may finish, calling copy_callback
and the commit_exception method, which goes to persistent_commit_exception.
persistent_commit_exception doesn't do locking, relying on the fact that
callbacks are single-threaded, but it can race with snapshot invalidation and
overwrite the header that is just being written while the snapshot is being
invalidated.
The result of this race is a corrupted header being written that can
lead to a crash on further reactivation (if chunk_size is zero in the
corrupted header).
The fix is to use separate memory areas for each.
See the bug: https://bugzilla.redhat.com/show_bug.cgi?id=461506
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 02d2fd31defce6ff77146ad0fef4f19006055d86 upstream.
Refactor chunk_io to prepare for the fix in the following patch.
Pass an area pointer to chunk_io and simplify zero_disk_area to use
chunk_io. No functional change.
Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit d2b698644c97cb033261536a4f2010924a00eac9 upstream.
This patch fixes a bug which was triggering a case where the primary leg
could not be changed on failure even when the mirror was in-sync.
The case involves the failure of the primary device along with
the transient failure of the log device. The problem is that
bios can be put on the 'failures' list (due to log failure)
before 'fail_mirror' is called due to the primary device failure.
Normally, this is fine, but if the log device failure is transient,
a subsequent iteration of the work thread, 'do_mirror', will
reset 'log_failure'. The 'do_failures' function then resets
the 'in_sync' variable when processing bios on the failures list.
The 'in_sync' variable is what is used to determine if the
primary device can be switched in the event of a failure. Since
this has been reset, the primary device is incorrectly assumed
to be not switchable.
The case has been seen in the cluster mirror context, where one
machine realizes the log device is dead before the other machines.
As the responsibilities of the server migrate from one node to
another (because the mirror is being reconfigured due to the failure),
the new server may think for a moment that the log device is fine -
thus resetting the 'log_failure' variable.
In any case, it is inappropriate for us to reset the 'log_failure'
variable. The above bug simply illustrates that it can actually
hurt us.
Signed-off-by: Jonathan Brassow <jbrassow@redhat.com>
Signed-off-by: Alasdair G Kergon <agk@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 601e7638254c118fca135af9b1a9f35061420f62 upstream.
The async split-up of probing in sd.c created a potential failure case where
something goes wrong with device_add() from which we don't recover properly.
Since, in general, asynchronous error handling is hard, move the device_add()
into the synchronous path (it should be fast) and make sure all the deferred
processing cannot fail.
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 6faf17f6f1ffc586d16efc2f9fa2083a7785ee74 upstream.
An SR-IOV capable device includes an SR-IOV PCIe capability which
describes the Virtual Function (VF) BAR requirements. A typical SR-IOV
device can support multiple VFs whose BARs must be in a contiguous region,
effectively an array of VF BARs. The BAR reports the size requirement
for a single VF. We calculate the full range needed by simply multiplying
the VF BAR size with the number of possible VFs and create a resource
spanning the full range.
This all seems sane enough except it artificially inflates the alignment
requirement for the VF BAR. The VF BAR need only be aligned to the size
of a single BAR not the contiguous range of VF BARs. This can cause us
to fail to allocate resources for the BAR despite the fact that we
actually have enough space.
This patch adds a thin PCI specific layer over the generic
resource_alignment() function which is aware of the special nature of
VF BARs and does sorting and allocation based on the smaller alignment
requirement.
I recognize that while resource_alignment is generic, it's basically a
PCI helper. An alternative to this patch is to add PCI VF BAR specific
information to struct resource. I opted for the extra layer rather than
adding such PCI specific information to struct resource. This does
have the slight downside that we don't cache the BAR size and re-read
for each alignment query (happens a small handful of times during boot
for each VF BAR).
Signed-off-by: Chris Wright <chrisw@sous-sol.org>
Cc: Ivan Kokshaysky <ink@jurassic.park.msu.ru>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Matthew Wilcox <matthew@wil.cx>
Cc: Yu Zhao <yu.zhao@intel.com>
Signed-off-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
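To make the size/alignment distinction concrete, here is the arithmetic with
made-up numbers; the resource must span all VFs, but only needs single-VF-BAR
alignment:
#include <stdio.h>

int main(void)
{
	unsigned long vf_bar_size = 16 * 1024;  /* size reported for one VF BAR */
	unsigned int total_vfs = 64;            /* possible VFs                 */

	unsigned long range = vf_bar_size * total_vfs;  /* 1 MiB window needed   */
	unsigned long align = vf_bar_size;              /* only 16 KiB alignment */

	printf("range %lu bytes, alignment %lu bytes\n", range, align);
	return 0;
}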
|
|
[ Upstream commit 446e72f30eca76d6f9a1a54adf84d2c6ba2831f8 ]
Failure to call unregister_pernet_gen_device() can exhaust memory
if the module is loaded/unloaded many times.
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Acked-by: Cyrill Gorcunov <gorcunov@openvz.org>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
[ Upstream commit a53a8b56827cc429c6d9f861ad558beeb5f6103f ]
This patch fixes the corner cases where the sum of MTU of the free
channels (adjusted for fragmentation overheads) is less than the MTU
of the PPP link. There are at least 3 situations where this case might
arise:
- some of the channels are busy
- the multilink session is running in a degraded state (i.e. with less
than its full complement of active channels)
- by design, where multilink protocol is being used to artificially
increase the effective link MTU of a single link.
Without this patch, at most 1 fragment is ever sent per free channel
for a given PPP frame and any remaining part of the PPP frame that
does not fit into those fragments is silently discarded.
This patch restores the original behaviour which was broken by commit
9c705260feea6ae329bc6b6d5f6d2ef0227eda0a 'ppp:ppp_mp_explode()
redesign'. Once all 'free' channels have been given a fragment, an
additional fragment is queued to each available channel in turn, as many
times as necessary, until the entire PPP frame has been consumed.
Signed-off-by: Ben McKeegan <ben@netservers.co.uk>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
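The restored behaviour can be pictured as a simple round-robin loop over the
free channels that keeps queueing fragments until nothing of the frame is left.
A user-space sketch with made-up MTUs and frame length (fragmentation overhead ignored):
#include <stdio.h>

int main(void)
{
	int mtu[] = { 296, 296 };       /* usable MTU of each free channel */
	int nch = 2, frame = 1500, ch = 0, frag = 0;

	while (frame > 0) {
		int len = frame < mtu[ch] ? frame : mtu[ch];

		printf("fragment %d: %d bytes on channel %d\n", frag++, len, ch);
		frame -= len;
		ch = (ch + 1) % nch;    /* move on to the next channel */
	}
	return 0;
}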
|
|
[ Upstream commit 6ff9c2e7fa8ca63a575792534b63c5092099c286 ]
E100 places its RX packet descriptors inside skb->data and uses them
with bidirectional streaming DMA mapping. Data in descriptors is
accessed simultaneously by the chip (writing status and size when
a packet is received) and CPU (reading to check if the packet was
received). This isn't a valid usage of PCI DMA API, which requires use
of the coherent (consistent) memory for such purpose. Unfortunately e100
chips working in "simplified" RX mode have to store received data
directly after the descriptor. Fixing the driver to conform to the API
would require using unsupported "flexible" RX mode or receiving data
into a coherent memory and using CPU to copy it to network buffers.
This patch, while not yet making the driver conform to the PCI DMA API,
allows it to work correctly on X86 with swiotlb (while not breaking
other architectures).
Signed-off-by: Krzysztof Hałasa <khc@pm.waw.pl>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
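What makes this work on swiotlb is explicit ownership hand-off on the
bidirectional streaming mapping, so the bounce buffer gets copied in both
directions. A kernel-context sketch, not the literal e100 patch (names and
sizes are illustrative):
	dma_addr_t dma = pci_map_single(pdev, skb->data, len, PCI_DMA_BIDIRECTIONAL);

	/* before the CPU reads the status/size the chip may have written: */
	pci_dma_sync_single_for_cpu(pdev, dma, desc_len, PCI_DMA_BIDIRECTIONAL);
	/* ... inspect the descriptor ... */

	/* before handing the descriptor back to the device: */
	pci_dma_sync_single_for_device(pdev, dma, desc_len, PCI_DMA_BIDIRECTIONAL);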
|
|
commit 0a80fb10239b04c45e5e80aad8d4b2ca5ac407b2 upstream.
As soon as the framebuffer is registered, our methods may be called by the
kernel. This leads to a crash as xenfb_refresh() gets called before we have
the irq.
Connect to the backend before registering our framebuffer with the kernel.
[ Fixes bug http://bugzilla.kernel.org/show_bug.cgi?id=14059 ]
Signed-off-by: Michal Schmidt <mschmidt@redhat.com>
Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit e9d126cdfa60b575f1b5b02024c4faee27dccf07 upstream.
Backport done by Christian Lamparter <chunkeey@googlemail.com>
queue == __AR9170_NUM_TXQ would cause a bug on the next line.
found by Smatch ( http://repo.or.cz/w/smatch.git ).
Reported-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Dan Carpenter <error27@gmail.com>
Signed-off-by: Christian Lamparter <chunkeey@web.de>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 7a0f0d951273eee889c2441846842348ebc00a2a upstream.
This patch (as1273) fixes two(!) bugs introduced by the new
Clear-TT-Buffer implementation in ehci-hcd.
It is now possible for an idle QH to have some URBs on its
queue -- this will happen if a Clear-TT-Buffer is pending for
the QH's endpoint. Consequently we should not issue a warning
when someone tries to unlink an URB from an idle QH; instead
we should process the request immediately.
The refcounts for QHs could get messed up, because
submit_async() would increment the refcount when calling
qh_link_async() and qh_link_async() would then refuse to link
the QH into the schedule if a Clear-TT-Buffer was pending.
Instead we should increment the refcount only when the QH
actually is added to the schedule. The current code tries to
be clever by leaving the refcount alone if an unlink is
immediately followed by a relink; the patch changes this to an
unconditional decrement and increment (although they occur in
the opposite order).
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
CC: David Brownell <david-b@pacbell.net>
Tested-by: Manuel Lauss <manuel.lauss@gmail.com>
Tested-by: Matthijs Kooijman <matthijs@stdin.nl>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 914b701280a76f96890ad63eb0fa99bf204b961c upstream.
This patch (as1256) changes ehci-hcd and all the other drivers in the
EHCI family to make use of the new clear_tt_buffer callbacks. When a
Clear-TT-Buffer request is in progress for a QH, the QH is not allowed
to be linked into the async schedule until the request is finished.
At that time, if there are any URBs queued for the QH, it is linked
into the async schedule.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit cb88a1b887bb8908f6e00ce29e893ea52b074940 upstream.
This patch (as1255) updates the interface for calling
usb_hub_clear_tt_buffer(). Even the name of the function is changed!
When an async URB (i.e., Control or Bulk) going through a high-speed
hub to a non-high-speed device is cancelled or fails, the hub's
Transaction Translator buffer may be left busy still trying to
complete the transaction. The buffer has to be cleared; that's what
usb_hub_clear_tt_buffer() does.
It isn't safe to send any more URBs to the same endpoint until the TT
buffer is fully clear. Therefore the HCD needs to be told when the
Clear-TT-Buffer request has finished. This patch adds a callback
method to struct hc_driver for that purpose, and makes the hub driver
invoke the callback at the proper time.
The patch also changes a couple of names; "hub_tt_kevent" and
"tt.kevent" now look rather antiquated.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
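The shape of the new interface, as described above, is a completion callback in
hc_driver that the hub driver invokes once the Clear-TT-Buffer request finishes.
A simplified kernel-context sketch (not verbatim from the patch):
static void ehci_clear_tt_buffer_complete(struct usb_hcd *hcd,
					  struct usb_host_endpoint *ep)
{
	/* it is now safe to link the endpoint's QH back into the async schedule */
}

static const struct hc_driver ehci_hc_driver = {
	/* ... existing methods ... */
	.clear_tt_buffer_complete = ehci_clear_tt_buffer_complete,
};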
|
|
The scan of the image packets of the sensor ov772x was broken when
the sensor ov965x was added.
[ Based on upstream c874f3aa, modified slightly for v2.6.30.5 ]
Signed-off-by: Jim Paris <jim@jtan.com>
Acked-by: Jean-Francois Moine <moinejf@free.fr>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 2a908002c7b1b666616103e9df2419b38d7c6f1f upstream.
If the BIOS reports an invalid throttling state (which seems to be
fairly common after system boot), a reset is done to state T0.
Because of a check in acpi_processor_get_throttling_ptc(), the reset
never actually gets executed, which results in the error reoccurring
on every access of, for example, /proc/acpi/processor/CPU0/throttling.
Add a 'force' option to acpi_processor_set_throttling() to ensure
the reset really takes effect.
Addresses http://bugzilla.kernel.org/show_bug.cgi?id=13389
This patch, together with the next one, fixes a regression introduced in
2.6.30, listed on the regression list. They have been available for 2.5
months now in bugzilla, but have not been picked up, despite various
reminders and without any reason given.
Google shows that numerous people are hitting this issue. The issue is in
itself relatively minor, but the bug in the code is clear.
The patches have been in all my kernels and today testing has shown that
throttling works correctly with the patches applied when the system
overheats (http://bugzilla.kernel.org/show_bug.cgi?id=13918#c14).
Signed-off-by: Frans Pop <elendil@planet.nl>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Cc: Len Brown <lenb@kernel.org>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
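The gist of the fix is an early return that skipped programming the hardware
when the requested state appeared to match the current one; a 'force' argument
bypasses it. A simplified sketch, not the exact acpi_processor_set_throttling()
signature:
static int set_throttling_state(struct throttling *t, int state, bool force)
{
	if (state == t->current_state && !force)
		return 0;       /* old path: the reset to T0 never ran */

	/* ... program the new throttling state ... */
	t->current_state = state;
	return 0;
}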
|
|
commit 388ce4beb7135722c584b0af18f215e3ec657adf upstream.
Move the setting and clearing of the mutexes into
_config_request. There was a mutex deadlock when diag reset was called from
inside _config_request, so diag reset was moved outside the mutexes.
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: Eric Moore <Eric.moore@lsi.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit fcfe6392d18283df3c561b5ef59c330d485ff8ca upstream.
Fix another oops occurring when the system resumes. This oops was due to the
driver setting the pci drvdata to NULL on the prior hibernation. Because it was
set to NULL, upon resume we assume the pci drvdata is non-zero, and we oops.
To fix the oops, we don't set the pci drvdata to NULL at hibernation time.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit e4750c989f732555fca86dd73d488c79972362db upstream.
Fix an oops occurring at hibernation time. This oops was due to the firmware fault
watchdog timer still running after we freed resources. To fix the issue we need
to terminate the watchdog timer at hibernation time.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 6bd4e1e4d6023f4da069fd68729c502cc4e6dfd0 upstream.
This restriction is introduced just to avoid a loop of
config_request calls. Retries must be limited, so we have restricted
config requests to a maximum of 2 attempts.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
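In other words, the request is simply bounded instead of being retried forever.
A minimal sketch of such a bound (kernel context; the helper name is made up):
	int attempts = 0, rc;

	do {
		rc = issue_config_request(ioc, &request);   /* made-up helper */
	} while (rc != 0 && ++attempts < 2);    /* issue the request at most twice */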
|
|
commit be9e8cd75ce8d94ae4aab721fdcc337fa78a9090 upstream.
Inhibit 0x3117 loginfos - during cable pull, there are too many printks going
to the syslog; this has an impact on how fast the interrupt routine can keep
up with command completions, and was the root cause of the config
page timeouts.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 62727a7ba43c0abf2673e3877079c136a9721792 upstream.
When a volume is activated, the driver will receive a pair of ir config change
events to remove the foreign volume, then add the native one.
In the process of the removal event, the hidden raid component is removed from
the parent. When the disk is added back, adding the port fails because
there is no instance of the device in its parent.
To fix this issue, the driver needs to call mpt2sas_transport_update_links()
prior to calling _scsih_add_device. In addition, we added sanity checks on
volume add and removal to ignore events for foreign volumes.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 20f5895d55d9281830bfb7819c5c5b70b05297a6 upstream.
A kernel panic is seen because the driver did not tear down the port, which should
be done using mpt2sas_transport_port_remove(). Without this fix, when the expander
is added back we would oops inside sas_port_add_phy.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 15052c9e85bf0cdadcb69eb89623bf12bad8b4f8 upstream.
A kernel panic is seen because the enclosure_handle received from FW is zero.
A check is introduced before calling mpt2sas_config_get_enclosure_pg0.
Signed-off-by: Kashyap Desai <kashyap.desai@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 7831387bda72af3059be48d39846d3eb6d8ce2f6 upstream.
The OCZ Vertex SSD can't do HPA, and not in the usual way: it reports HPA,
allows unlocking, but then fails all IOs which fall in the unlocked
area. Quirk it so that HPA unlocking is not used for the device.
Reported by Daniel Perup in bnc#522414.
https://bugzilla.novell.com/show_bug.cgi?id=522414
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: Daniel Perup <probe@spray.se>
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 99f1b01562b7dcae75b043114f76163fbf84fcab upstream.
Do all key clearing except sending commands to the device when rfkill is
enabled. When rfkill is enabled the interface is brought down and will
be brought back up correctly after rfkill is disabled again.
The same change is not needed for iwl3945 as it ignores the return code when
sending the key clearing command to the device.
This fixes http://bugzilla.kernel.org/show_bug.cgi?id=13742
Signed-off-by: Reinette Chatre <reinette.chatre@intel.com>
Tested-by: Frans Pop <elendil@planet.nl>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
(Not needed upstream, due to the major rewrite in 2.6.31)
Due to the rfkill and iwlwifi mishmash of SW / HW killswitch representation,
we have race conditions which make it impossible to turn the wifi radio back
on after enabling and disabling the killswitch. I can observe this problem on
my laptop with an iwl3945 device.
In the rfkill core the HW switch and SW switch are separate 'states'. A device
can only be in one of 3 states: RFKILL_STATE_SOFT_BLOCKED, RFKILL_STATE_UNBLOCKED,
RFKILL_STATE_HARD_BLOCKED. Whereas in the iwlwifi driver we have separate bits
STATUS_RF_KILL_HW and STATUS_RF_KILL_SW for the HW and SW switches - the radio
can be turned on only if both bits are cleared.
In this particular race condition, the radio cannot be turned on if the
STATUS_RF_KILL_SW bit is set in the driver and the rfkill core is in state
RFKILL_STATE_HARD_BLOCKED, because the rfkill core is unable to call
rfkill->toggle_radio(). This situation can be entered as follows:
- the killswitch is turned on
- the rfkill core 'sees' the button change first, moves to RFKILL_STATE_SOFT_BLOCKED,
also calls ->toggle_radio(), and STATUS_RF_KILL_SW is set in the driver
- iwl3945 gets the button info from hardware, sets the STATUS_RF_KILL_HW bit and
forces rfkill to move to RFKILL_STATE_HARD_BLOCKED
- the killswitch is turned off
- the driver clears STATUS_RF_KILL_HW
- the rfkill core is unable to clear STATUS_RF_KILL_SW in the driver
Additionally, a call to rfkill_epo() when STATUS_RF_KILL_HW is set in the driver
causes a move to the same situation.
In 2.6.31 this problem is fixed by the _total_ rewrite of the rfkill subsystem.
This is a quite small fix for 2.6.30.x in the iwlwifi driver. We change the
internal rfkill state so that the relations below always hold:
STATUS_RF_KILL_HW=1 STATUS_RF_KILL_SW=1 <-> RFKILL_STATUS_SOFT_BLOCKED
STATUS_RF_KILL_HW=0 STATUS_RF_KILL_SW=1 <-> RFKILL_STATUS_SOFT_BLOCKED
STATUS_RF_KILL_HW=1 STATUS_RF_KILL_SW=0 <-> RFKILL_STATUS_HARD_BLOCKED
STATUS_RF_KILL_HW=0 STATUS_RF_KILL_SW=0 <-> RFKILL_STATUS_UNBLOCKED
Signed-off-by: Stanislaw Gruszka <sgruszka@redhat.com>
Acked-by: Reinette Chatre <reinette.chatre@intel.com>
Acked-by: John W. Linville <linville@tuxdriver.com>
|
|
commit f3d83e2415445e5b157bef404d38674e9e8de169 upstream.
Summary:
A kernel panic arises when stack protection is enabled, since strncat will
add a null terminating byte '\0'. So in functions
like this one (wmi_query_block):
char wc[4]="WC";
....
strncat(method, block->object_id, 2);
...
the length of wc should be n+1 (wc[5]), or a stack protection
fault will arise. This is not noticeable when stack protection is
disabled, but it isn't good either.
Config used: [CONFIG_CC_STACKPROTECTOR_ALL=y,
CONFIG_CC_STACKPROTECTOR=y]
Panic Trace
------------
.... stack-protector: kernel stack corrupted in : fa7b182c
2.6.30-rc8-obelisco-generic
call_trace:
[<c04a6c40>] ? panic+0x45/0xd9
[<c012925d>] ? __stack_chk_fail+0x1c/0x40
[<fa7b182c>] ? wmi_query_block+0x15a/0x162 [wmi]
[<fa7b182c>] ? wmi_query_block+0x15a/0x162 [wmi]
[<fa7e7000>] ? acer_wmi_init+0x00/0x61a [acer_wmi]
[<fa7e7135>] ? acer_wmi_init+0x135/0x61a [acer_wmi]
[<c0101159>] ? do_one_initcall+0x50+0x126
Addresses http://bugzilla.kernel.org/show_bug.cgi?id=13514
Signed-off-by: Costantino Leandro <lcostantino@gmail.com>
Signed-off-by: Carlos Corbacho <carlos@strangeworlds.co.uk>
Cc: Len Brown <len.brown@intel.com>
Cc: Bjorn Helgaas <bjorn.helgaas@hp.com>
Cc: "Rafael J. Wysocki" <rjw@sisk.pl>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
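The root cause is easy to reproduce in user space: strncat(dst, src, n) appends
up to n characters plus a terminating '\0', so the destination needs
strlen(dst) + n + 1 bytes. A runnable sketch (the object_id value is made up):
#include <stdio.h>
#include <string.h>

int main(void)
{
	char wc[5] = "WC";              /* 2 + 2 + 1 bytes: just large enough    */
	const char *object_id = "AB";   /* stands in for block->object_id        */

	strncat(wc, object_id, 2);      /* writes 'A', 'B' and the trailing '\0' */
	printf("%s uses %zu of %zu bytes\n", wc, strlen(wc) + 1, sizeof(wc));
	return 0;
}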
|
|
commit 6b26dead3ce97d016b57724b01974d5ca5c84bd5 upstream.
Change rt2x00_rf_read() and rt2x00_rf_write() to subtract 1 from the rf
register number. This is needed because the rf registers are enumerated
starting with one. The size of the rf register cache is just enough to
hold all registers, so writing to the highest register was corrupting
memory. Add a check to make sure that the rf register number is valid.
Signed-off-by: Pavel Roskin <proski@gnu.org>
Acked-by: Ivo van Doorn <IvDoorn@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
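The fix amounts to treating the register number as 1-based when indexing the
0-based cache, plus a bounds check so an out-of-range word can no longer
scribble past the array. A simplified kernel-context sketch (field and
structure names follow rt2x00 loosely and are not verbatim):
static void rt2x00_rf_write_sketch(struct rt2x00_dev *rt2x00dev,
				   unsigned int word, u32 data)
{
	unsigned int nregs = rt2x00dev->ops->rf_size / sizeof(u32);

	if (WARN_ON(word < 1 || word > nregs))
		return;

	rt2x00dev->rf[word - 1] = data; /* registers are numbered from 1 */
}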
|
|
commit 357eb46d8f275b4e8484541234ea3ba06065e258 upstream.
This patch fixes the napi list handling when an ehea interface is shut
down to avoid corruption of the napi list.
Signed-off-by: Hannes Hering <hering2@de.ibm.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit bc146d23d1358af43f03793c3ad8c9f16bbcffcb upstream.
I'm using ide on 2.6.30.1 with an xfs filesystem. I noticed a kernel memory
leak after writing lots of data: the kmalloc-96 slab cache keeps
growing. It seems the struct ide_cmd kmalloced by idedisk_prepare_flush
is never kfreed.
Commit a09485df9cda49fbde2766c86eb18a9cae585162 ("ide: move request
type specific code from ide_end_drive_cmd() to callers (v3)") and
f505d49ffd25ed062e76ffd17568d3937fcd338c ("ide: fix barriers support")
caused this regression: cmd->rq must now be set for ide_complete_cmd() to
honor the IDE_TFLAG_DYN flag.
Signed-off-by: Maxime Bizon <mbizon@freebox.fr>
Acked-by: Bartlomiej Zolnierkiewicz <bzolnier@gmail.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Cc: Simon Kirby <sim@netnation.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|