commit b0df96a0068daee4f9c2189c29b9053eb6e46b17 upstream.
The missing delay is not getting set properly. The reason is that it is not
defined in the same file in which it is used. The fix is to move the
missing delay module parameter from mpt2sas_base.c to mpt2sas_scsih.c.
Signed-off-by: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 48ba2efc382f94fae16ca8ca011e5961a81ad1ea upstream.
When a SCSI command is received with the task attribute not set, set it to
SIMPLE. Previously it was left untagged, which caused the firmware to fail
the commands.
Signed-off-by: Sreekanth Reddy <Sreekanth.Reddy@lsi.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 9edf7d75ee5f21663a0183d21f702682d0ef132f upstream.
Commit 64deb6efdc5504ce97b5c1c6f281fffbc150bd93
"[SCSI] zfcp: Use status_read_buf_num provided by FCP channel"
started using a value returned by the channel but only evaluated the value
if the fabric link is up.
Commit 8d88cf3f3b9af4713642caeb221b6d6a42019001
"[SCSI] zfcp: Update status read mempool"
introduced mempool resizings based on the above value.
On setting an FCP device online for the very first time since boot, a new
zeroed adapter object is allocated. If the link is down, the number of
status read requests remains zero. Since just the config data exchange is
incomplete, we proceed with adapter open recovery. However, we
unconditionally call mempool_resize with adapter->stat_read_buf_num == 0 in
this case.
This causes a kernel message "kernel BUG at mm/mempool.c:131!" in process
"zfcperp<FCP-device-bus-ID>" with last function mempool_resize in Krnl PSW
and zfcp_erp_thread in the Call Trace.
Don't evaluate channel values which are invalid on link down. The number of
status read requests is always valid, is evaluated, and is set to a positive
minimum. The adapter open recovery can proceed and the
channel has status read buffers to inform us on a future link up event.
While we are not aware of any other code path that could result in mempool
resize attempts of size zero, we still also initialize the number of status
read buffers to be posted to a static minimum number on adapter object
allocation.
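Roughly, the guard amounts to the following sketch (FSF_STATUS_READS_RECOM
stands in for whatever minimum constant the driver defines):

    /* keep the count at a positive floor so a later mempool_resize()
     * can never be asked to shrink the pool to zero elements */
    if (adapter->stat_read_buf_num < FSF_STATUS_READS_RECOM)
            adapter->stat_read_buf_num = FSF_STATUS_READS_RECOM;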
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 5fea4291deacd80188b996d2f555fc6a1940e5d4 upstream.
Commit 86a9668a8d29ea711613e1cb37efa68e7c4db564
"[SCSI] zfcp: support for hardware data router"
reduced the initial block queue limits in the scsi_host_template to the
absolute minimum and adjusted them later on. However, the adjustment was
too late for the BSG devices of Scsi_Host and fc_host.
Therefore, ioctl(..., SG_IO, ...) with request or response size > 4kB to a
BSG device of an fc_host or a Scsi_Host fails with EINVAL. As a result,
users of such ioctl such as HBA_SendCTPassThru() in libzfcphbaapi return
with error HBA_STATUS_ERROR.
Initialize the block queue limits in zfcp_scsi_host_template to the
greatest common denominator (GCD).
While we cannot exploit the slightly enlarged maximum request size with
data router, this should be negligible. Doing so also avoids running into
trouble after live guest relocation (LGR) / migration from a data router
FCP device to an FCP device that does not support data router. In that
case, zfcp would figure out the new limits on adapter recovery, but the
fc_host and Scsi_Host (plus in fact all sdevs) still exist with the old and
now too large queue limits.
It should also be OK not to use half the size as in the DIX case, because
fc_host and Scsi_Host do not transport FCP requests including SCSI commands
using protection data.
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Reviewed-by: Martin Peschke <mpeschke@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit f76ccaac4f82c463a037aa4a1e4ccb85c7011814 upstream.
The FCP device remains in status ERP_FAILED when the device is switched online
or adapter recovery is triggered while the link to the SAN is down.
When the Exchange Configuration Data command returns the FSF status
FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE, it aborts the exchange process.
The only retries are done during the common error recovery procedure
(i.e. max. 3 retries with an 8 second sleep in between), and the device
remains in status ERP_FAILED with QDIO down.
This commit reverts the commit 0df138476c8306478d6e726f044868b4bccf411c
(zfcp: Fix adapter activation on link down).
When the FSF status FSF_EXCHANGE_CONFIG_DATA_INCOMPLETE is received, the
adapter recovery will be finished without any retries. QDIO will then be
up and status changes such as LINK UP will be received.
Signed-off-by: Daniel Hansel <daniel.hansel@linux.vnet.ibm.com>
Signed-off-by: Steffen Maier <maier@linux.vnet.ibm.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit c5bebd829dd95602c15f8da8cc50fa938b5e0254 upstream.
A customer reported that a set of RAID logical arrays would become unavailable
(I/O offline) after long hours of an I/O stress test. The OS would not be
accessible afterwards and would require a hard reset.
This driver patch fixes a race condition between the doorbell and the
circular buffer. The driver is modified to do an extra read after clearing the
doorbell in case there had been a completion posted during the small timing
window.
With this fix, we ran the I/O stress test for ~13 days. There were no I/O failures.
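The pattern looks roughly like the sketch below; the helper names are
illustrative, not the aacraid functions:

    drain_completion_queue(dev);    /* consume all visible completions  */
    clear_doorbell(dev);            /* acknowledge the interrupt        */
    drain_completion_queue(dev);    /* extra read: pick up a completion */
                                    /* posted while we were acking      */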
Signed-off-by: Mahesh Rajashekhara <Mahesh.Rajashekhara@pmcs.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 66c28f97120e8a621afd5aa7a31c4b85c547d33d upstream.
SATA drives located behind a SAS controller would incorrectly receive
WRITE SAME commands. Tweak the heuristics so that:
- If REPORT SUPPORTED OPERATION CODES is provided we will use that to
choose between WRITE SAME(16), WRITE SAME(10) and disabled. This also
fixes an issue with the old code which would issue WRITE SAME(10)
despite the command not being whitelisted in REPORT SUPPORTED
OPERATION CODES.
- If REPORT SUPPORTED OPERATION CODES is not provided we will fall back
to WRITE SAME(10) unless the device has an ATA Information VPD page.
The assumption is that a SATL which is smart enough to implement
WRITE SAME would also provide REPORT SUPPORTED OPERATION CODES.
To facilitate the new heuristics, scsi_report_opcode() has been modified
so that we can distinguish between "operation not supported" and "RSOC not
supported".
Reported-by: H. Peter Anvin <hpa@zytor.com>
Tested-by: Bernd Schubert <bernd.schubert@itwm.fraunhofer.de>
Signed-off-by: Martin K. Petersen <martin.petersen@oracle.com>
Signed-off-by: James Bottomley <JBottomley@Parallels.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit d3bcb7b24bbf09fde8405770e676fe0c11c79662 upstream.
ah->noise is maintained globally and not per-channel. This
is updated in the reset() routine after the NF history has been
filled for the *current channel*, just before switching to
the new channel. There is no need to do it inside getnf(), since
ah->noise must contain a value for the new channel.
Signed-off-by: Sujith Manoharan <c_manoha@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 696df78509d1f81b651dd98ecdc1aecab616db6b upstream.
The commits,
"ath9k: Fix regression in channelwidth switch at the same channel"
"ath9k: Fix invalid noisefloor reading due to channel update"
attempted to fix noisefloor calibration when a channel switch
happens due to HT20/HT40 bandwidth change. This is causing invalid
readings resulting in messages like:
"ath: phy16: NF[0] (-45) > MAX (-95), correcting to MAX".
This results in an incorrect noise floor being used initially for reporting
the signal level of received packets, until NF calibration is done
and the history buffer is updated via the ANI timer, which happens
much later.
When a bandwidth change happens, it is appropriate to reset
the internal history data for the channel. Do this correctly in the
reset() routine by checking the "chanmode" variable.
Signed-off-by: Sujith Manoharan <c_manoha@qca.qualcomm.com>
Cc: Rajkumar Manoharan <rmanohar@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 30d5b709da23f4ab9836c7f66d2d2e780a69cf12 upstream.
For AR9485 boards with XLNA, the default gpio config
is not set correctly; fix this.
Signed-off-by: Sujith Manoharan <c_manoha@qca.qualcomm.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 0847beb2865f5ef1c8626ec1a37def18f3d6c41a upstream.
The code writes the default_power2 value into the TX field
of the RFCSR50 register, however the condition in the if
statement uses default_power1. Due to this, wrong TX power
value might be written into the register.
Use the correct value in the condition to fix the issue.
Compile tested only.
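In essence (variable names are illustrative, not the rt2800lib identifiers),
the condition has to test the same value that is written:

    /* before: tests default_power1 but writes default_power2 */
    if (default_power1 > power_bound)
            txpower = power_bound;
    else
            txpower = default_power2;

    /* after: condition matches the value actually written */
    if (default_power2 > power_bound)
            txpower = power_bound;
    else
            txpower = default_power2;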
Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 0a6f3a8ebaf13407523c2c7d575b4ca2debd23ba upstream.
The current code uses the same index value both
for the channel information array and for the TX
power table. The index starts from 14; however, the
index of the TX power table must start from zero.
Fix it, in order to get the correct TX power value
for a given channel.
The changes in rt61pci.c and rt73usb.c are compile
tested only.
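Schematically (field names are illustrative), the fix offsets the power-table
index by the 14 channels that precede this block:

    for (i = 14; i < spec->num_channels; i++)
            info[i].default_power1 = txpower[i - 14];   /* not txpower[i] */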
Signed-off-by: Gabor Juhos <juhosg@openwrt.org>
Acked-by: Stanislaw Gruszka <stf_xl@wp.pl>
Acked-by: Gertjan van Wingerde <gwingerde@gmail.com>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 1a33bd2be705cbb3f57d7223b60baea441039307 upstream.
irq_of_parse_and_map() returns 0 on error, while the code checks for NO_IRQ.
This breaks on platforms that have NO_IRQ != 0.
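A minimal sketch of the corrected check:

    unsigned int irq = irq_of_parse_and_map(np, 0);
    if (!irq)               /* 0 means "no mapping"; comparing against   */
            return -EINVAL; /* NO_IRQ breaks where NO_IRQ != 0           */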
Signed-off-by: Baruch Siach <baruch@tkos.co.il>
Signed-off-by: Daniel Lezcano <daniel.lezcano@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 1f73a9806bdd07a5106409bbcab3884078bd34fe upstream.
When the system switches from periodic to oneshot mode, the broadcast
logic causes a possibility that a CPU which has not yet switched to
oneshot mode puts its own clock event device into oneshot mode without
updating the state and the timer handler.
CPU0                                    CPU1
                                        per cpu tickdev is in periodic mode
                                        and switched to broadcast

Switch to oneshot mode
 tick_broadcast_switch_to_oneshot()
   cpumask_copy(tick_broadcast_oneshot_mask,
                tick_broadcast_mask);
 broadcast device mode = oneshot
                                        Timer interrupt
                                          irq_enter()
                                            tick_check_oneshot_broadcast()
                                              dev->set_mode(ONESHOT);
                                          tick_handle_periodic()
                                            if (dev->mode == ONESHOT)
                                              dev->next_event += period;
                                              FAIL.
We fail, because dev->next_event contains KTIME_MAX, if the device was
in periodic mode before the uncontrolled switch to oneshot happened.
We must copy the broadcast bits over to the oneshot mask, because
otherwise a CPU which relies on the broadcast would not be woken up
anymore after the broadcast device switched to oneshot mode.
So we need to verify in tick_check_oneshot_broadcast() whether the CPU
has already switched to oneshot mode. If not, leave the device
untouched and let the CPU switch controlled into oneshot mode.
This is a long-standing bug, which was never noticed, because the main
user of the broadcast, x86, cannot run into that scenario, AFAICT. The
nonarchitected timer mess of ARM creates a gazillion of differently
broken abominations which trigger the shortcomings of that broadcast
code, which better had never been necessary in the first place.
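The shape of the check is roughly the following (a sketch based on the
3.10-era tick internals; exact identifiers may differ):

    void tick_check_oneshot_broadcast(int cpu)
    {
            if (cpumask_test_cpu(cpu, tick_broadcast_oneshot_mask)) {
                    struct tick_device *td = &per_cpu(tick_cpu_device, cpu);

                    /* only touch the device if this CPU has itself
                     * already switched to oneshot mode */
                    if (td->mode == TICKDEV_MODE_ONESHOT)
                            clockevents_set_mode(td->evtdev,
                                                 CLOCK_EVT_MODE_ONESHOT);
            }
    }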
Reported-and-tested-by: Stehle Vincent-B46079 <B46079@freescale.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: John Stultz <john.stultz@linaro.org>,
Cc: Mark Rutland <mark.rutland@arm.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1307012153060.4013@ionos.tec.linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 07bd1172902e782f288e4d44b1fde7dec0f08b6f upstream.
The recent implementation of a generic dummy timer resulted in a
different registration order of per cpu local timers which made the
broadcast control logic go belly up.
If the dummy timer is the first clock event device which is registered
for a CPU, then it is installed, the broadcast timer is initialized
and the CPU is marked as broadcast target.
If a real clock event device is installed after that, we can fail to
take the CPU out of the broadcast mask. In the worst case we end up
with two periodic timer events firing for the same CPU. One from the
per cpu hardware device and one from the broadcast.
Now the problem is that we have no way to distinguish whether the
system is in a state which makes broadcasting necessary or the
broadcast bit was set due to the nonfunctional dummy timer
installment.
To solve this we need to keep track of the system state separately and
provide a more detailed decision logic whether we keep the CPU in
broadcast mode or not.
The old decision logic clears the broadcast mode only if the newly
installed clock event device is not affected by power states.
The new logic clears the broadcast mode if one of the following is
true:
- The new device is not affected by power states.
- The system is not in a power state affected mode
- The system has switched to oneshot mode. The oneshot broadcast is
controlled from the deep idle state. The CPU is not in idle at
this point, so it's safe to remove it from the mask.
If we clear the broadcast bit for the CPU when a new device is
installed, we also shut down the broadcast device when this was the
last CPU in the broadcast mask.
If the broadcast bit is kept, then we leave the new device in shutdown
state and rely on the broadcast to deliver the timer interrupts via
the broadcast IPIs.
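Condensed into a predicate (a hypothetical helper, not the in-tree code;
only CLOCK_EVT_FEAT_C3STOP is a real feature flag), the new decision whether
a CPU stays in broadcast mode reads:

    static bool cpu_stays_in_broadcast(struct clock_event_device *newdev,
                                       bool power_state_mode_active,
                                       bool oneshot_active)
    {
            if (!(newdev->features & CLOCK_EVT_FEAT_C3STOP))
                    return false;   /* device survives deep power states */
            if (!power_state_mode_active)
                    return false;   /* no power-state-affected mode      */
            if (oneshot_active)
                    return false;   /* oneshot broadcast runs from idle  */
            return true;
    }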
Reported-and-tested-by: Stehle Vincent-B46079 <B46079@freescale.com>
Reviewed-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: John Stultz <john.stultz@linaro.org>,
Cc: Mark Rutland <mark.rutland@arm.com>
Link: http://lkml.kernel.org/r/alpine.DEB.2.02.1307012153060.4013@ionos.tec.linutronix.de
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 7bb23c4934059c64cbee2e41d5d24ce122285176 upstream.
1/ When a difference between blocks is found, data is copied from
one bio to the other. However bv_len is used as the length to
copy and this could be zero. So use r10_bio->sectors to calculate
the length instead.
Using bv_len was probably always a bit dubious, but the introduction
of bio_advance made it much more likely to be a problem.
2/ When preparing some blocks for sync, we don't set BIO_UPTODATE
except on bios that we schedule for a read. This ensures that
missing/failed devices don't confuse the loop at the top of
sync_request_write().
Commit 8be185f2c9d54d6 "raid10: Use bio_reset()"
removed a loop which set BIO_UPTODATE on all appropriate bios.
So we need to re-add that flag.
These bugs were introduced in 3.10, so this patch is suitable for
3.10-stable, and removes a potential source of data corruption.
Reported-by: Brassow Jonathan <jbrassow@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 78eaa0d4cbcdb345992fa3dd22b3bcbb473cc064 upstream.
1/ If a RAID10 is being reshaped to a smaller number of devices
and is stopped while this is ongoing, then when the array is
reassembled the 'mirrors' array will be allocated too small.
This will lead to an access error or memory corruption.
2/ A sanity test for when a reshaping RAID10 array is restarted
is slightly incorrect.
Due to the first bug, this is suitable for any -stable
kernel since 3.5 where this code was introduced.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 1376512065b23f39d5f9a160948f313397dde972 upstream.
The recent commit:
commit 7e83ccbecd608b971f340e951c9e84cd0343002f
md/raid10: Allow skipping recovery when clean arrays are assembled
causes raid10 to skip a recovery in certain cases where it is safe to
do so. Unfortunately it also causes a reshape to be skipped, which is
never safe. The result is that an attempt to reshape a RAID10 will
appear to complete instantly, but no data will have been moved, so the
array will now contain garbage.
(If nothing has been written, you can recover by simply performing the
reverse reshape, which will also complete instantly.)
Bug was introduced in 3.10, so this is suitable for 3.10-stable.
Signed-off-by: NeilBrown <neilb@suse.de>
Cc: Martin Wilck <mwilck@arcor.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 5c78dfe87ea04b501ee000a7f03b9432ac9d008c upstream.
SGTL5000_PLL_FRAC_DIV_MASK is used to mask bits 0-10 (11 bits in total) of
register CHIP_PLL_CTRL, so fix the mask to accommodate this whole bit range.
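Bits 0-10 correspond to an 11-bit mask of 0x7ff, i.e. (illustrative
definition):

    #define SGTL5000_PLL_FRAC_DIV_MASK   0x7ff   /* bits 0..10 of CHIP_PLL_CTRL */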
Reported-by: Oskar Schirmer <oskar@scara.com>
Signed-off-by: Fabio Estevam <fabio.estevam@freescale.com>
Signed-off-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 571185717f8d7f2a088a7ac38d94a9ad5fd9da5c upstream.
snd_pcm_stop() must be called in the PCM substream lock context.
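The required calling pattern, shared by this and the similar fixes below, is
roughly the following (the state argument depends on the call site):

    snd_pcm_stream_lock_irq(substream);
    if (snd_pcm_running(substream))
            snd_pcm_stop(substream, SNDRV_PCM_STATE_SETUP);
    snd_pcm_stream_unlock_irq(substream);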
Acked-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 61be2b9a18ec70f3cbe3deef7a5f77869c71b5ae upstream.
snd_pcm_stop() must be called in the PCM substream lock context.
Acked-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit b996ac90f595dda271cbd858b136b45557fc1a57 upstream.
Add the AMD CZ SMBus controller device ID.
[bhelgaas: drop pci_ids.h update]
Signed-off-by: Shane Huang <shane.huang@amd.com>
Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
Reviewed-by: Tejun Heo <tj@kernel.org>
Reviewed-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit ddfef5de3d716f77bad32dbbba6b280158dfd721 upstream.
Increase the retry count for the hard reset function to 100 but
shorten the timeout period to 500 ms. See the comment for
ahci_highbank_hardreset for the reasons why those values were
chosen.
Signed-off-by: Mark Langsdorf <mark.langsdorf@calxeda.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit c7e8695bfa0611b39493a9dfe8bab9f63f9809bd upstream.
This patch adds the IDE-mode SATA DeviceIDs for the Intel Coleto Creek PCH.
Signed-off-by: Seth Heasley <seth.heasley@intel.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 7a87718d92760fc688628ad6a430643dafa16f1f upstream.
For some reason, a lot of port-multipliers have issues with softreset.
SIMG [34]7x series port-multipliers have been quite erratic in this
regard. I recall that it was better with some firmware revisions and
the current list of quirks worked fine for a while. I think it got
worse with later firmwares or maybe my test coverage wasn't good
enough. Anyways, HPA is reporting that his 3726 setup suffers SRST
failures and then the PMP gets confused and fails to probe the last
port.
The hope was that we try to stick to the standard as much as possible
and soonish the PMPs and their firmwares will improve in quality, so
the quirk list was kept to minimum. Well, it seems like that's never
gonna happen.
Let's set NO_SRST for all [34]7x PMPs so that whatever remaining
userbase of the device suffers the least. Maybe we should do the same
for 57xx's but unfortunately I don't have any device left to test and
I'm not even sure 57xx's have ever been made widely available, so
let's leave those alone for now.
Signed-off-by: Tejun Heo <tj@kernel.org>
Reported-by: "H. Peter Anvin" <hpa@zytor.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit d0887c43f51c308b01605346e55d906ba858a6f9 upstream.
There are some SATA controllers which have both devices 0 and 1, but this module
just zeroes out the taskfile and then sets ATA_TFLAG_DEVICE (not sure that's needed),
which could lead to the wrong device being selected just before issuing a command.
Thus we should call ata_tf_init() which sets up the device register value
properly, like all other users of ata_exec_internal() do...
Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 41fa9a944fce1d7efd5ee3d50ac85b92f42dcc3d upstream.
NCT6775 does not support alarms for fans 4 and 5. Drop the attributes.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit b1d2bff6a61140454b9d203519cc686a2e9ef32f upstream.
The driver displays wrong alarms for the temperature attributes.
It turns out that the temperature alarm bits are not fixed, but determined
by the temperature source mapping. To fix the problem, walk through
the temperature sources to determine the correct alarm bit associated
with a given attribute.
Signed-off-by: Guenter Roeck <linux@roeck-us.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 5be1efb4c2ed79c3d7c0cbcbecae768377666e84 upstream.
snd_pcm_stop() must be called in the PCM substream lock context.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 60478295d6876619f8f47f6d1a5c25eaade69ee3 upstream.
snd_pcm_stop() must be called in the PCM substream lock context.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit cc7282b8d5abbd48c81d1465925d464d9e3eaa8f upstream.
snd_pcm_stop() must be called in the PCM substream lock context.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 46f6c1aaf790be9ea3c8ddfc8f235a5f677d08e2 upstream.
snd_pcm_stop() must be called in the PCM substream lock context.
Acked-by: Mark Brown <broonie@linaro.org>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 9538aa46c2427d6782aa10036c4da4c541605e0e upstream.
snd_pcm_stop() must be called in the PCM substream lock context.
Acked-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 5b9ab3f7324a1b94a5a5a76d44cf92dfeb3b5e80 upstream.
snd_pcm_stop() must be called in the PCM substream lock context.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 256ca9c3ad5013ff8a8f165e5a82fab437628c8e upstream.
We've got bug reports that module loading got stuck on a Debian system
with a 3.10 kernel. The debugging session revealed that the initial
registration of OSS sequencer clients got stuck at module loading time,
which again involves request_module() at the init phase. This is
triggered only by special --install stuff Debian is using, but it's
still not good to have such loops.
As a workaround, call the registration part asynchronously. This is a
better approach irrespective of the hang fix, anyway.
Reported-and-tested-by: Philipp Matthias Hahn <pmhahn@pmhahn.de>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit d52392b1a80458c0510810789c7db4a39b88022a upstream.
Vendor ID 0x10de0060 is used by a yet-to-be-named GPU chip.
Reviewed-by: Andy Ritger <aritger@nvidia.com>
Signed-off-by: Aaron Plattner <aplattner@nvidia.com>
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 0c055b3413868227f2e85701c4e6938c9581f0e2 upstream.
add_control_with_pfx() in hda_generic.c assumes a shorter name string
for the control element, and this resulted in the truncation of a
long but valid string like "Headphone Surround Switch" in the middle.
This patch aligns the max size to the actual limit of snd_ctl_elem_id,
which is 44 bytes.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit d045c5dc43d829df9f067d363c3b42b14dacf434 upstream.
Some VIA codecs like VT1708S have Mic boost amps in the mic pins but
they aren't exposed in the capability bits. In the past, the driver code
overrode the pin caps and created mic boost controls forcibly.
During the transition to the generic parser, we lost the mic boost controls
although the pin caps are still overridden, because the generic parser
code checks the widget caps, too.
So this patch adds a new helper function to allow the override of the
given widget capability bits, and makes the VIA codec driver add the
missing input-amp capability bit.
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=59861
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit bddee96b5d0db869f47b195fe48c614ca824203c upstream.
When a selection to a converter MUX is changed in hdmi_pcm_open(), it
should be cached so that the given connection can be restored properly
at PM resume. We just need to replace the corresponding
snd_hda_codec_write() call with snd_hda_codec_write_cache().
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 06ec56d3c60238f27bfa50d245592fccc1b4ef0f upstream.
The refactoring by commit 9040d102 introduced the new function
snd_hda_check_power_state(). This function is supposed to return true
if the state has already reached the target state, but it actually
returns false in that case. An utterly stupid copy & paste typo.
Fortunately this didn't influence behavior much, because powering up
the AFG usually powers up the child widgets, too. But the finer power
control must have been broken by this bug.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 8f0b3b7e222383a21f7d58bd97d5552b3a5dbced upstream.
ad1884_fixup_hp_eapd() tries to set the NID for controlling the
speaker EAPD from the pin configuration. But the current code can't
work as expected since it sets spec->eapd_nid before calling the
generic parser where the autocfg pins are set up.
This patch changes the function to set spec->eapd_nid after the
generic parser call while it sets the vmaster hook unconditionally. The
spec->eapd_nid check is moved into the hook function itself instead.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit f91d1b63a4e096d3023aaaafec9d9d3aff25997f upstream.
When reading IIO_CHAN_INFO_OFFSET, the return value of iio_channel_read() on
success will be an IIO_VAL_* type code, so checking for 0 is not correct.
Without this fix the offset applied by IIO drivers will be ignored when
converting a raw value to one in appropriate base units (e.g. mV) in
IIO client drivers that use iio_convert_raw_to_processed(), including
iio-hwmon.
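A sketch of the corrected call site, assuming the internal iio_channel_read()
helper:

    ret = iio_channel_read(chan, &offset, NULL, IIO_CHAN_INFO_OFFSET);
    if (ret >= 0)   /* IIO_VAL_* codes are positive; only < 0 is an error */
            raw64 += offset;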
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 1c297a66654a3295ae87e2b7f3724d214eb2b5ec upstream.
Since the info_mask split into info_mask_separate and
info_mask_shared_by_type, iio_channel_has_info() is not working correctly:
it is no longer possible to compare the masks directly with the
iio_chan_info_enum enum. Correct that by using the BIT() macro.
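A sketch of the corrected helper (close to, though not necessarily identical
with, the in-tree version):

    static inline bool iio_channel_has_info(const struct iio_chan_spec *chan,
                                            enum iio_chan_info_enum type)
    {
            return (chan->info_mask_separate & BIT(type)) |
                   (chan->info_mask_shared_by_type & BIT(type));
    }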
Signed-off-by: Alexandre Belloni <alexandre.belloni@free-electrons.com>
Signed-off-by: Jonathan Cameron <jic23@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit db6f41063cbdb58b14846e600e6bc3f4e4c2e888 upstream.
On arm64, cache maintenance faults appear as data aborts with the CM
bit set in the ESR. The WnR bit, usually used to distinguish between
faulting loads and stores, always reads as 1 and (slightly confusingly)
the instructions are treated as reads by the architecture.
This patch fixes our fault handling code to treat cache maintenance
faults in the same way as loads.
Signed-off-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit e8d05276f236ee6435e78411f62be9714e0b9377 upstream.
commit 2f7021a8 "cpufreq: protect 'policy->cpus' from offlining
during __gov_queue_work()" caused a regression in CPU hotplug,
because it led to a deadlock between the cpufreq governor worker thread
and the CPU hotplug writer task.
Lockdep splat corresponding to this deadlock is shown below:
[ 60.277396] ======================================================
[ 60.277400] [ INFO: possible circular locking dependency detected ]
[ 60.277407] 3.10.0-rc7-dbg-01385-g241fd04-dirty #1744 Not tainted
[ 60.277411] -------------------------------------------------------
[ 60.277417] bash/2225 is trying to acquire lock:
[ 60.277422] ((&(&j_cdbs->work)->work)){+.+...}, at: [<ffffffff810621b5>] flush_work+0x5/0x280
[ 60.277444] but task is already holding lock:
[ 60.277449] (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff81042d8b>] cpu_hotplug_begin+0x2b/0x60
[ 60.277465] which lock already depends on the new lock.
[ 60.277472] the existing dependency chain (in reverse order) is:
[ 60.277477] -> #2 (cpu_hotplug.lock){+.+.+.}:
[ 60.277490] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200
[ 60.277503] [<ffffffff815b6157>] mutex_lock_nested+0x67/0x410
[ 60.277514] [<ffffffff81042cbc>] get_online_cpus+0x3c/0x60
[ 60.277522] [<ffffffff814b842a>] gov_queue_work+0x2a/0xb0
[ 60.277532] [<ffffffff814b7891>] cs_dbs_timer+0xc1/0xe0
[ 60.277543] [<ffffffff8106302d>] process_one_work+0x1cd/0x6a0
[ 60.277552] [<ffffffff81063d31>] worker_thread+0x121/0x3a0
[ 60.277560] [<ffffffff8106ae2b>] kthread+0xdb/0xe0
[ 60.277569] [<ffffffff815bb96c>] ret_from_fork+0x7c/0xb0
[ 60.277580] -> #1 (&j_cdbs->timer_mutex){+.+...}:
[ 60.277592] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200
[ 60.277600] [<ffffffff815b6157>] mutex_lock_nested+0x67/0x410
[ 60.277608] [<ffffffff814b785d>] cs_dbs_timer+0x8d/0xe0
[ 60.277616] [<ffffffff8106302d>] process_one_work+0x1cd/0x6a0
[ 60.277624] [<ffffffff81063d31>] worker_thread+0x121/0x3a0
[ 60.277633] [<ffffffff8106ae2b>] kthread+0xdb/0xe0
[ 60.277640] [<ffffffff815bb96c>] ret_from_fork+0x7c/0xb0
[ 60.277649] -> #0 ((&(&j_cdbs->work)->work)){+.+...}:
[ 60.277661] [<ffffffff810ab826>] __lock_acquire+0x1766/0x1d30
[ 60.277669] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200
[ 60.277677] [<ffffffff810621ed>] flush_work+0x3d/0x280
[ 60.277685] [<ffffffff81062d8a>] __cancel_work_timer+0x8a/0x120
[ 60.277693] [<ffffffff81062e53>] cancel_delayed_work_sync+0x13/0x20
[ 60.277701] [<ffffffff814b89d9>] cpufreq_governor_dbs+0x529/0x6f0
[ 60.277709] [<ffffffff814b76a7>] cs_cpufreq_governor_dbs+0x17/0x20
[ 60.277719] [<ffffffff814b5df8>] __cpufreq_governor+0x48/0x100
[ 60.277728] [<ffffffff814b6b80>] __cpufreq_remove_dev.isra.14+0x80/0x3c0
[ 60.277737] [<ffffffff815adc0d>] cpufreq_cpu_callback+0x38/0x4c
[ 60.277747] [<ffffffff81071a4d>] notifier_call_chain+0x5d/0x110
[ 60.277759] [<ffffffff81071b0e>] __raw_notifier_call_chain+0xe/0x10
[ 60.277768] [<ffffffff815a0a68>] _cpu_down+0x88/0x330
[ 60.277779] [<ffffffff815a0d46>] cpu_down+0x36/0x50
[ 60.277788] [<ffffffff815a2748>] store_online+0x98/0xd0
[ 60.277796] [<ffffffff81452a28>] dev_attr_store+0x18/0x30
[ 60.277806] [<ffffffff811d9edb>] sysfs_write_file+0xdb/0x150
[ 60.277818] [<ffffffff8116806d>] vfs_write+0xbd/0x1f0
[ 60.277826] [<ffffffff811686fc>] SyS_write+0x4c/0xa0
[ 60.277834] [<ffffffff815bbbbe>] tracesys+0xd0/0xd5
[ 60.277842] other info that might help us debug this:
[ 60.277848] Chain exists of:
(&(&j_cdbs->work)->work) --> &j_cdbs->timer_mutex --> cpu_hotplug.lock
[ 60.277864] Possible unsafe locking scenario:
[ 60.277869] CPU0 CPU1
[ 60.277873] ---- ----
[ 60.277877] lock(cpu_hotplug.lock);
[ 60.277885] lock(&j_cdbs->timer_mutex);
[ 60.277892] lock(cpu_hotplug.lock);
[ 60.277900] lock((&(&j_cdbs->work)->work));
[ 60.277907] *** DEADLOCK ***
[ 60.277915] 6 locks held by bash/2225:
[ 60.277919] #0: (sb_writers#6){.+.+.+}, at: [<ffffffff81168173>] vfs_write+0x1c3/0x1f0
[ 60.277937] #1: (&buffer->mutex){+.+.+.}, at: [<ffffffff811d9e3c>] sysfs_write_file+0x3c/0x150
[ 60.277954] #2: (s_active#61){.+.+.+}, at: [<ffffffff811d9ec3>] sysfs_write_file+0xc3/0x150
[ 60.277972] #3: (x86_cpu_hotplug_driver_mutex){+.+...}, at: [<ffffffff81024cf7>] cpu_hotplug_driver_lock+0x17/0x20
[ 60.277990] #4: (cpu_add_remove_lock){+.+.+.}, at: [<ffffffff815a0d32>] cpu_down+0x22/0x50
[ 60.278007] #5: (cpu_hotplug.lock){+.+.+.}, at: [<ffffffff81042d8b>] cpu_hotplug_begin+0x2b/0x60
[ 60.278023] stack backtrace:
[ 60.278031] CPU: 3 PID: 2225 Comm: bash Not tainted 3.10.0-rc7-dbg-01385-g241fd04-dirty #1744
[ 60.278037] Hardware name: Acer Aspire 5741G /Aspire 5741G , BIOS V1.20 02/08/2011
[ 60.278042] ffffffff8204e110 ffff88014df6b9f8 ffffffff815b3d90 ffff88014df6ba38
[ 60.278055] ffffffff815b0a8d ffff880150ed3f60 ffff880150ed4770 3871c4002c8980b2
[ 60.278068] ffff880150ed4748 ffff880150ed4770 ffff880150ed3f60 ffff88014df6bb00
[ 60.278081] Call Trace:
[ 60.278091] [<ffffffff815b3d90>] dump_stack+0x19/0x1b
[ 60.278101] [<ffffffff815b0a8d>] print_circular_bug+0x2b6/0x2c5
[ 60.278111] [<ffffffff810ab826>] __lock_acquire+0x1766/0x1d30
[ 60.278123] [<ffffffff81067e08>] ? __kernel_text_address+0x58/0x80
[ 60.278134] [<ffffffff810ac6d4>] lock_acquire+0xa4/0x200
[ 60.278142] [<ffffffff810621b5>] ? flush_work+0x5/0x280
[ 60.278151] [<ffffffff810621ed>] flush_work+0x3d/0x280
[ 60.278159] [<ffffffff810621b5>] ? flush_work+0x5/0x280
[ 60.278169] [<ffffffff810a9b14>] ? mark_held_locks+0x94/0x140
[ 60.278178] [<ffffffff81062d77>] ? __cancel_work_timer+0x77/0x120
[ 60.278188] [<ffffffff810a9cbd>] ? trace_hardirqs_on_caller+0xfd/0x1c0
[ 60.278196] [<ffffffff81062d8a>] __cancel_work_timer+0x8a/0x120
[ 60.278206] [<ffffffff81062e53>] cancel_delayed_work_sync+0x13/0x20
[ 60.278214] [<ffffffff814b89d9>] cpufreq_governor_dbs+0x529/0x6f0
[ 60.278225] [<ffffffff814b76a7>] cs_cpufreq_governor_dbs+0x17/0x20
[ 60.278234] [<ffffffff814b5df8>] __cpufreq_governor+0x48/0x100
[ 60.278244] [<ffffffff814b6b80>] __cpufreq_remove_dev.isra.14+0x80/0x3c0
[ 60.278255] [<ffffffff815adc0d>] cpufreq_cpu_callback+0x38/0x4c
[ 60.278265] [<ffffffff81071a4d>] notifier_call_chain+0x5d/0x110
[ 60.278275] [<ffffffff81071b0e>] __raw_notifier_call_chain+0xe/0x10
[ 60.278284] [<ffffffff815a0a68>] _cpu_down+0x88/0x330
[ 60.278292] [<ffffffff81024cf7>] ? cpu_hotplug_driver_lock+0x17/0x20
[ 60.278302] [<ffffffff815a0d46>] cpu_down+0x36/0x50
[ 60.278311] [<ffffffff815a2748>] store_online+0x98/0xd0
[ 60.278320] [<ffffffff81452a28>] dev_attr_store+0x18/0x30
[ 60.278329] [<ffffffff811d9edb>] sysfs_write_file+0xdb/0x150
[ 60.278337] [<ffffffff8116806d>] vfs_write+0xbd/0x1f0
[ 60.278347] [<ffffffff81185950>] ? fget_light+0x320/0x4b0
[ 60.278355] [<ffffffff811686fc>] SyS_write+0x4c/0xa0
[ 60.278364] [<ffffffff815bbbbe>] tracesys+0xd0/0xd5
[ 60.280582] smpboot: CPU 1 is now offline
The intention of that commit was to avoid warnings during CPU
hotplug, which indicated that offline CPUs were getting IPIs from the
cpufreq governor's work items. But the real root-cause of that
problem was commit a66b2e5 (cpufreq: Preserve sysfs files across
suspend/resume) because it totally skipped all the cpufreq callbacks
during CPU hotplug in the suspend/resume path, and hence it never
actually shut down the cpufreq governor's worker threads during CPU
offline in the suspend/resume path.
Reflecting back, the reason why we never suspected that commit as the
root cause earlier was that the original issue was reported with
just the halt command and nobody had brought in suspend/resume to the
equation.
The reason for _that_ in turn, as it turns out, is that earlier
halt/shutdown was being done by disabling non-boot CPUs while tasks
were frozen, just like suspend/resume.... but commit cf7df378a
(reboot: migrate shutdown/reboot to boot cpu) which came somewhere
along that very same time changed that logic: shutdown/halt no longer
takes CPUs offline. Thus, the test-cases for reproducing the bug
were vastly different and thus we went totally off the trail.
Overall, it was one hell of a confusion with so many commits
affecting each other and also affecting the symptoms of the problems
in subtle ways. Finally, now since the original problematic commit
(a66b2e5) has been completely reverted, revert this intermediate fix
too (2f7021a8), to fix the CPU hotplug deadlock. Phew!
Reported-by: Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
Reported-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Tested-by: Peter Wu <lekensteyn@gmail.com>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit aae760ed21cd690fe8a6db9f3a177ad55d7e12ab upstream.
commit a66b2e (cpufreq: Preserve sysfs files across suspend/resume)
has unfortunately caused several things in the cpufreq subsystem to
break subtly after a suspend/resume cycle.
The intention of that patch was to retain the file permissions of the
cpufreq related sysfs files across suspend/resume. To achieve that,
the commit completely removed the calls to cpufreq_add_dev() and
__cpufreq_remove_dev() during suspend/resume transitions. But the
problem is that those functions do 2 kinds of things:
1. Low-level initialization/tear-down that are critical to the
correct functioning of cpufreq-core.
2. Kobject and sysfs related initialization/teardown.
Ideally we should have reorganized the code to cleanly separate these
two responsibilities, and skipped only the sysfs related parts during
suspend/resume. Since we skipped the entire callbacks instead (which
also included some CPU and cpufreq-specific critical components),
cpufreq subsystem started behaving erratically after suspend/resume.
So revert the commit to fix the regression. We'll revisit and address
the original goal of that commit separately, since it involves quite a
bit of careful code reorganization and appears to be non-trivial.
(While reverting the commit, note that another commit f51e1eb
(cpufreq: Fix cpufreq regression after suspend/resume) already
reverted part of the original set of changes. So revert only the
remaining ones).
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat@linux.vnet.ibm.com>
Acked-by: Viresh Kumar <viresh.kumar@linaro.org>
Tested-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 4ea355b5368bde0574c12430df53334c4be3bdcf upstream.
In power_pmu_enable() we still enable the PMU even if we have zero
events. This should have no effect but doesn't make much sense. Instead
just return after telling the hypervisor that we are not using the PMCs.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 0a48843d6c5114cfa4a9540ee4d6af87628cec01 upstream.
In power_pmu_enable() we can use the existing out label to reduce the
number of return paths.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 7a7a41f9d5b28ac3a916b057a7d3cd3f435ee9a6 upstream.
On Power8 we can freeze PMC5 and 6 if we're not using them. Normally they
run all the time.
As noticed by Anshuman, we should unfreeze them when we disable the PMU
as there are legacy tools which expect them to run all the time.
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 378a6ee99e4a431ec84e4e61893445c041c93007 upstream.
In pmu_disable() we disable the PMU by setting the FC (Freeze Counters)
bit in MMCR0. In order to do this we have to read/modify/write MMCR0.
It's possible that we read a value from MMCR0 which has PMAO (PMU Alert
Occurred) set. When we write that value back it will cause an interrupt
to occur. We will then end up in the PMU interrupt handler even though
we are supposed to have just disabled the PMU.
We can avoid this by making sure we never write PMAO back. We should not
lose interrupts because when the PMU is re-enabled the overflowed values
will cause another interrupt.
We also reorder the clearing of SAMPLE_ENABLE so that it is done after the
PMU is frozen. Otherwise there is a small window between the clearing of
SAMPLE_ENABLE and the setting of FC where we could take an interrupt and
incorrectly see SAMPLE_ENABLE not set. This would for example change the
logic in perf_read_regs().
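The idea, in a reduced sketch using the MMCR0 bit names:

    val = mfspr(SPRN_MMCR0);
    val |= MMCR0_FC;        /* freeze the counters                        */
    val &= ~MMCR0_PMAO;     /* never write PMAO back, so this write       */
                            /* cannot itself raise a PMU interrupt        */
    mtspr(SPRN_MMCR0, val);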
Signed-off-by: Michael Ellerman <michael@ellerman.id.au>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|