commit 969830b2fedf8336c41d6195f49d250b1e166ff8 upstream
Some chips appear to have the 2D engine hang during screen redraw,
typically in a sequence of copyarea operations. This appears to be
solved by adding a flush of the engine destination pixel cache
and waiting for the engine to be idle before issuing the accel
operation. The performance impact seems to be fairly small.
Here is a trace on an RV370 (PCI device ID 0x5b64); it records the
RBBM_STATUS register, then the source x/y, destination x/y, and
width/height used for the copy:
----------------------------------------
radeonfb_prim_copyarea: STATUS[00000140] src[210:70] dst[210:60] wh[a0:10]
radeonfb_prim_copyarea: STATUS[00000140] src[2b8:70] dst[2b8:60] wh[88:10]
radeonfb_prim_copyarea: STATUS[00000140] src[348:70] dst[348:60] wh[40:10]
radeonfb_prim_copyarea: STATUS[80020140] src[390:70] dst[390:60] wh[88:10]
radeonfb_prim_copyarea: STATUS[8002613f] src[40:80] dst[40:70] wh[28:10]
radeonfb_prim_copyarea: STATUS[80026139] src[a8:80] dst[a8:70] wh[38:10]
radeonfb_prim_copyarea: STATUS[80026133] src[e8:80] dst[e8:70] wh[80:10]
radeonfb_prim_copyarea: STATUS[8002612d] src[170:80] dst[170:70] wh[30:10]
radeonfb_prim_copyarea: STATUS[80026127] src[1a8:80] dst[1a8:70] wh[8:10]
radeonfb_prim_copyarea: STATUS[80026121] src[1b8:80] dst[1b8:70] wh[88:10]
radeonfb_prim_copyarea: STATUS[8002611b] src[248:80] dst[248:70] wh[68:10]
----------------------------------------
When things are going fine the copies complete before the next ROP is
even issued, but all of a sudden the 2D unit becomes active (bit 17 in
RBBM_STATUS) and the FIFO retry (bit 13) and FIFO pipeline busy (bit
14) are set as well. The FIFO begins to back up until it becomes full.
What happens next is that radeon_fifo_wait() times out, and we access
the chip illegally, leading to a bus error which usually wedges the
box. None of this makes it to the console screen, of course :-)
radeon_fifo_wait() should be modified to reset the accelerator when
this timeout happens instead of programming the chip anyway.
----------------------------------------
radeonfb: FIFO Timeout !
ERROR(0): Cheetah error trap taken afsr[0010080005000000] afar[000007f900800e40] TL1(0)
ERROR(0): TPC[595114] TNPC[595118] O7[459788] TSTATE[11009601]
ERROR(0): TPC<radeonfb_copyarea+0xfc/0x248>
ERROR(0): M_SYND(0), E_SYND(0), Privileged
ERROR(0): Highest priority error (0000080000000000) "Bus error response from system bus"
ERROR(0): D-cache idx[0] tag[0000000000000000] utag[0000000000000000] stag[0000000000000000]
ERROR(0): D-cache data0[0000000000000000] data1[0000000000000000] data2[0000000000000000] data3[0000000000000000]
ERROR(0): I-cache idx[0] tag[0000000000000000] utag[0000000000000000] stag[0000000000000000] u[0000000000000000] l[00\
ERROR(0): I-cache INSN0[0000000000000000] INSN1[0000000000000000] INSN2[0000000000000000] INSN3[0000000000000000]
ERROR(0): I-cache INSN4[0000000000000000] INSN5[0000000000000000] INSN6[0000000000000000] INSN7[0000000000000000]
ERROR(0): E-cache idx[800e40] tag[000000000e049f4c]
ERROR(0): E-cache data0[fffff8127d300180] data1[00000000004b5384] data2[0000000000000000] data3[0000000000000000]
Kernel panic - not syncing: Irrecoverable deferred error trap.
----------------------------------------
Another quirk is that these copyarea calls will not happen until the
first drivers/char/vt.c:redraw_screen() occurs. That in turn only happens
if you 1) switch VCs, 2) run "consolechars", or 3) unblank the screen.
The reason seems to be that until a redraw_screen() the screen scrolling
method used by fbcon is not yet finalized. I've seen this with other fb
drivers too.
So if all you do is boot straight into X you will never see this bug on
the relevant chips.
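The workaround, roughly sketched below in the radeonfb driver's style
(the register and bit names DSTCACHE_CTLSTAT, RB2D_DC_FLUSH_ALL and
RBBM_STATUS are real, but the exact placement and loop bound here are
assumptions, not the literal patch):
----------------------------------------
/* Sketch: flush the 2D destination pixel cache and wait for the
 * engine to idle before issuing the next accel operation. */
static void radeon_flush_and_idle_2d(struct radeonfb_info *rinfo)
{
	int i;

	/* flush the engine destination pixel cache */
	OUTREG(DSTCACHE_CTLSTAT,
	       INREG(DSTCACHE_CTLSTAT) | RB2D_DC_FLUSH_ALL);

	/* wait for the GUI engine to go idle (bit 31 of RBBM_STATUS) */
	for (i = 0; i < 2000000; i++)
		if (!(INREG(RBBM_STATUS) & (1U << 31)))
			break;
}
----------------------------------------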
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 32194450330be327f3b25bf6b66298bd122599e9 upstream
In relay's current read implementation, if the buffer is completely full
but hasn't triggered the buffer-full condition (i.e. the last write
didn't cross the subbuffer boundary) and the last subbuffer is exactly
full, the subbuffer accounting code erroneously finds nothing available.
This patch fixes the problem.
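To picture the failure mode, here is an illustrative sketch (not
relay's actual code) of availability derived from the write offset
modulo the sub-buffer size; a write that ends exactly on the boundary
wraps the offset to zero, so a full sub-buffer looks empty:
----------------------------------------
/* Illustration only: "exactly full" looks like "nothing available". */
static size_t subbuf_bytes_avail(size_t write_pos, size_t read_pos,
				 size_t subbuf_size)
{
	size_t write_off = write_pos % subbuf_size; /* 0 when exactly full */
	size_t read_off = read_pos % subbuf_size;

	return write_off >= read_off ? write_off - read_off : 0;
}
----------------------------------------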
Signed-off-by: Tom Zanussi <tzanussi@gmail.com>
Cc: Eduard - Gabriel Munteanu <eduard.munteanu@linux360.ro>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Jens Axboe <jens.axboe@oracle.com>
Cc: Mathieu Desnoyers <compudj@krystal.dyndns.org>
Cc: Andrea Righi <righi.andrea@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ad337591f4fd20de6a0ca03d6715267a5c1d2b16 upstream
It seems cdrwtool in the udftools has been unusable on "modern" kernels
for some time. A Google search reveals many people with the same issue
but no solution (cdrwtool fails to format the disk). After spending some
time tracking down the issue, it comes down to the following:
The udftools still use the older CDROM_SEND_PACKET interface to send
things like FORMAT_UNIT through to the drive. They should really be
updated, but that's another story. Since most distros are using libata
now, the cd or dvd burner appears as a SCSI device, and we wind up in
block/scsi_ioctl.c. Here, the code tries to take the "struct
cdrom_generic_command" and translate it and stuff it into a "struct
sg_io_hdr" structure so it can pass it to the modern sg_io() routine
instead. Unfortunately, there is one error, or rather an omission, in the
translation. The timeout passed in the "struct cdrom_generic_command"
is in HZ=100 units, and it is correctly converted to jiffies by use of
clock_t_to_jiffies(). However, a little further down, this cgc.timeout
value in jiffies is simply copied into the sg_io_hdr timeout, which
should be in milliseconds.
Since most modern x86 kernels seem to be built with HZ=250, the
timeout that is passed to sg_io and eventually converted to the
timeout_per_command member of the scsi_cmnd structure is now four times
too small. Since cdrwtool tries to set the timeout to one hour for the
FORMAT_UNIT command, and it takes about 20 minutes to format a 4x CDRW,
the SCSI error-handler kicks in after the FORMAT_UNIT completes because
it took longer than the incorrectly-calculated timeout.
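The fix amounts to one line in the translation; sketched here with the
field names used in block/scsi_ioctl.c (treat it as a sketch, not the
literal diff):
----------------------------------------
/* sg_io_hdr.timeout is in milliseconds: convert from jiffies
 * instead of copying the raw jiffies value. */
hdr.timeout = jiffies_to_msecs(cgc.timeout);
----------------------------------------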
[jejb: fix up whitespace]
Signed-off-by: Tim Wright <timw@splhi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit dd07428b44944b42f699408fe31af47977f1e733 upstream
Add PCI device ID for new adapter models.
Signed-off-by: HighPoint Linux Team <linux@highpoint-tech.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e8bac9e0647dd04c83fd0bfe7cdfe2f6dfb100d0 upstream
The class_device->device conversion is causing an oops in revalidate
because it's assuming that the device_for_each_child iterator will only
return struct scsi_device children. The conversion made all former
class_devices children of the device as well, so this assumption is
broken. Fix it.
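The shape of the fix, as a sketch: scsi_is_sdev_device() and
to_scsi_device() are real SCSI-core helpers, while the callback and
revalidate helper names are invented here for illustration:
----------------------------------------
/* Sketch: the device_for_each_child() callback must skip children
 * that are not scsi_device objects. */
static int target_revalidate_one(struct device *dev, void *data)
{
	if (!scsi_is_sdev_device(dev))
		return 0;	/* former class_device child: ignore */

	return revalidate_one_sdev(to_scsi_device(dev), data); /* hypothetical */
}
----------------------------------------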
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 671a99c8eb2f1dde08ac5538d8cd912047c61ddf upstream
There are a few kerneloops.org reports like this one:
http://www.kerneloops.org/search.php?search=ses_match_to_enclosure
They seem to imply we're running off the end of the VPD inquiry data
(although at 512 bytes, it should be long enough for just about
anything). We should be using correctly sized buffers anyway, so put
those in and hope this oops goes away.
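A correctly sized buffer can be derived from the VPD page's own length
header, as in this sketch (header layout per the SCSI spec; the inquiry
helper is hypothetical):
----------------------------------------
/* Sketch: read the 4-byte VPD header first, then allocate exactly
 * what the page advertises. */
unsigned char hdr[4];
unsigned char *buf;
int len;

issue_vpd_inquiry(sdev, page, hdr, sizeof(hdr));	/* hypothetical */
len = ((hdr[2] << 8) | hdr[3]) + 4;	/* page length + 4-byte header */
buf = kmalloc(len, GFP_KERNEL);
if (buf)
	issue_vpd_inquiry(sdev, page, buf, len);
----------------------------------------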
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a00c3cadc2bf50b3c925acdb3d0e5789b1650498 upstream
This patch adds support for Luminary Micro Stellaris Evaluation/Development
Kits (FTDI 2232C based).
The PIDs were missing.
Successfully tested with a Stellaris LM3S8962 evaluation kit.
Signed-off-by: Frederik Kriewitz <frederik@kriewitz.eu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b5894a500127fce1db1309db5f9ca8b77a2ac266 upstream
USB product ID registration for the ELV HS485 USB adapter (www.elv.de)
for their home automation bus system. Applies to 2.6.26.
Signed-off-by: Andre Schenk <andre@melior.s.bawue.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 8c809681ba0289afd0ed7bbb63679a0568dd441d upstream
USB ID 4348:5523 is handled by the ch341 driver. Remove it from the
pl2303 driver.
Reverts 002e8f2c80c6be76bb312940bc278fc10b2b2487.
Signed-off-by: Tollef Fog Heen <tfheen@err.no>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0282b7f2a874e72c18fcd5a112ccf67f71ba7f5c upstream
This patch (as1121) fixes a bug in the USB serial core. When a device
is unregistered, the core will give back its minors -- even if the
device hasn't been assigned any!
The patch reserves the highest minor value (255) to mean that no minor
was assigned. It also removes some dead code and does a small style
fixup.
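The sentinel works roughly like this sketch (SERIAL_TTY_NO_MINOR
reflects the patch's intent; the surrounding teardown code is
simplified):
----------------------------------------
/* Sketch: 255 marks "no minor assigned", so teardown only returns
 * minors that were really handed out. */
#define SERIAL_TTY_NO_MINOR	255

if (serial->minor != SERIAL_TTY_NO_MINOR)
	return_serial(serial);	/* give the minor range back */
----------------------------------------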
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 368ee6469c327364ea10082a348f91c1f5ba47f7 upstream
This patch (as1115) adds unusual_devs entries with the IGNORE_RESIDUE
flag for the iRiver T10 and the Simple Tech/Datafab CF+SM card
reader. Apparently these devices provide reasonable residue values
for READ and WRITE operations, but not for others like INQUIRY or READ
CAPACITY.
This fixes the iRiver T10 problem reported in Bugzilla #11125.
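An unusual_devs.h entry of this shape carries the flag; the IDs and
version range below are assumptions, shown only to illustrate the
format:
----------------------------------------
/* Sketch of an entry with US_FL_IGNORE_RESIDUE (IDs illustrative). */
UNUSUAL_DEV(  0x4102, 0x1020, 0x0100, 0x0100,
		"iriver",
		"MP3 T10",
		US_SC_DEVICE, US_PR_DEVICE, NULL,
		US_FL_IGNORE_RESIDUE ),
----------------------------------------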
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit b9a097f26e55968cbc52e30a4a2e73d32d7604ce upstream
usb-storage: quirk around v1.11 firmware on Nikon D40
https://bugzilla.redhat.com/show_bug.cgi?id=454028
Just as with earlier firmware versions, we need to apply this
quirk to the latest version too.
Speculatively add an entry for the D80 as well, since historically
these cameras seem to have the same firmware problems.
Signed-off-by: Dave Jones <davej@redhat.com>
Cc: Johannes Berg <johannes@sipsolutions.net>
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 82e68f7ffec3800425f2391c8c86277606860442 upstream
snd_seq_oss_synth_make_info() incorrectly reports information
to userspace without first checking for the validity of the
device number, leading to possible information leak (CVE-2008-3272).
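The fix is a bounds check on the device number before any information
is copied out; roughly (field names follow sound/core/seq/oss, treat
this as a sketch):
----------------------------------------
/* Sketch: refuse invalid device numbers up front. */
if (dev < 0 || dev >= dp->max_synthdev)
	return -ENXIO;
----------------------------------------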
Reported-By: Tobias Klein <tk@trapkit.de>
Acked-and-tested-by: Takashi Iwai <tiwai@suse.de>
Cc: stable@kernel.org
Signed-off-by: Willy Tarreau <w@1wt.eu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d70b67c8bc72ee23b55381bd6a884f4796692f77 upstream
Lookup can install a child dentry for a deleted directory. This keeps
the directory dentry alive, and the inode pinned in the cache and on
disk, even after all external references have gone away.
This isn't a big problem normally, since memory pressure or umount
will clear out the directory dentry and its children, releasing the
inode. But for UBIFS this causes problems because its orphan area can
overflow.
Fix this by returning ENOENT for all lookups on an S_DEAD directory
before creating a child dentry.
Thanks to Zoltan Sogor for noticing this while testing UBIFS, and
Artem for the excellent analysis of the problem and testing.
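In the VFS the guard comes down to something of this shape
(IS_DEADDIR() is the real predicate for S_DEAD; its exact placement in
the lookup path is simplified here):
----------------------------------------
/* Sketch: bail out before a child dentry can be created under a
 * directory that has already been deleted. */
if (IS_DEADDIR(dir))
	return ERR_PTR(-ENOENT);
----------------------------------------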
Reported-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Tested-by: Artem Bityutskiy <Artem.Bityutskiy@nokia.com>
Signed-off-by: Miklos Szeredi <mszeredi@suse.cz>
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 43785eaeb1cfb8aed3cf8027f298b242f88fdc45 upstream
Don't create mixer volume elements for Headphone and Speaker if they
use the same DAC as normal line-outs on AD1988. Otherwise the amp
value gets screwed up, e.g.
https://bugzilla.novell.com/show_bug.cgi?id=398255
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 470eaf6be78424fc499a5039e5d5fe58bace2bc3 upstream
Added the missing SSID of Thinkpad Z60m for model=thinkpad with
AD1981HD.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Jaroslav Kysela <perex@perex.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
Backport fixes for usb-audio Oops at reconnection.
9eb70e6... [ALSA] usb-audio - Fix race in reconnection
f18638d... [ALSA] Clean up snd_card_free*()
73d38b1... [ALSA] Fix the race of card instance unregistration
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d2cd74b158d7214a556226e3312f9fb1de64d7ae upstream
On Audigy2 Platinum, the Analog/Digital mixer switch is inverted.
https://bugzilla.novell.com/show_bug.cgi?id=396204
The patch adds a simple workaround.
There might be another device requiring a similar fix, too (or a fix for
Audigy2 generically), but right now I fix only the known broken one.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6c4cc3a8ed15aacc06a5fd369639fef633cee2bc upstream
Added more fallbacks to the OSS PHONEOUT mixer mapping. This control
corresponds to the speaker output in general, so "Mono" and "Speaker"
are now assigned.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e48d6d97bb6bd8c008045ea0522ea8278fdccc55 upstream
The ASUS A9T laptop uses the line-out pin as the real front output,
while other devices use it as the surround.
Signed-off-by: Takashi Iwai <tiwai@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 66d715c95a39e84cd25204a665915621457d9691 upstream
It seems the VT3336 can't do MSI either, just like its sibling the VT3351.
Disable it. Reported in the following SUSE bug:
https://bugzilla.novell.com/show_bug.cgi?id=300001
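Such quirks are declared in drivers/pci/quirks.c; a sketch
(quirk_disable_all_msi() is the real helper there, while the VT3336
device-ID macro is an assumption):
----------------------------------------
/* Sketch: mark the bridge as unable to do MSI. */
DECLARE_PCI_FIXUP_FINAL(PCI_VENDOR_ID_VIA, PCI_DEVICE_ID_VIA_VT3336,
			quirk_disable_all_msi);
----------------------------------------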
Signed-off-by: Tejun Heo <tj@kernel.org>
Acked-by: Jesse Barnes <jbarnes@virtuousgeek.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit d1f114d12bb4db3147e1b1342ae31083c5a79c84 upstream
This patch (as1097) fixes a bug in the remote-wakeup handling in
ehci-hcd. The driver currently does not keep track of whether the
change-suspend feature is enabled for each port; the feature is
automatically reset the first time it is read. But recent changes to
the hub driver require that the feature be read at least twice in
order to work properly.
A bit-vector is added for storing the change-suspend feature values.
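Conceptually the bookkeeping looks like this sketch (in the real patch
the field lives in struct ehci_hcd; the names and the status-bit
encoding here are assumptions):
----------------------------------------
/* Sketch: remember the change-suspend feature per port so that one
 * read does not clear it for the next reader. */
unsigned long port_c_suspend;			/* one bit per port */

set_bit(port, &ehci->port_c_suspend);		/* wakeup detected */
if (test_bit(port, &ehci->port_c_suspend))	/* GetPortStatus */
	status |= USB_PORT_STAT_C_SUSPEND << 16;
clear_bit(port, &ehci->port_c_suspend);		/* ClearPortFeature */
----------------------------------------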
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Acked-by: David Brownell <dbrownell@users.sourceforge.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 90d95ef617a535a8832bdcb8dee07bf591e5dd82 upstream
On some boxes the touchpad needs to be reinitialized after resume to make
it function again. This fixes bugzilla #10825.
Signed-off-by: Oliver Neukum <oneukum@suse.de>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 76ecb4f2d7ea5c3aac8970b9529775316507c6d2 upstream
Fix the problem that thermal_get_temp returns a cached value, which
causes the temperature reported by the generic thermal layer to never
update.
Signed-off-by: Zhang Rui <rui.zhang@intel.com>
Acked-by: Jean Delvare <khali@linux-fr.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit a39a2d7c72b358c6253a2ec28e17b023b7f6f41c upstream
My laptop thinks that it's a good idea to report -73C as the critical
CPU temperature... which isn't the best thing, since it causes a shutdown
right at bootup.
Temperatures below freezing are clearly invalid critical thresholds,
so just reject these as such.
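ACPI reports trip points in tenths of a Kelvin, so the sanity check is
roughly the following (2732 tenths of a Kelvin is 0 degrees C; the
variable name and message are assumptions):
----------------------------------------
/* Sketch: reject critical trip points at or below freezing. */
if (crit_dk <= 2732) {
	printk(KERN_WARNING "ACPI: ignoring bogus critical threshold\n");
	return -EINVAL;
}
----------------------------------------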
Signed-off-by: Arjan van de Ven <arjan@linux.intel.com>
Acked-by: Zhang Rui <rui.zhang@intel.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e4cc58944c1e2ce41e3079d4eb60c95e7ce04b2b upstream
Current versions of gdb require a working implementation of
PTRACE_GETSIGINFO for proper watchpoint support. Since struct siginfo
contains pointers it must be converted when passed to a 32-bit debugger.
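The conversion has roughly this shape (copy_siginfo_to_user32() is the
real compat helper; the surrounding request handling and the siginfo
retrieval helper are simplified assumptions):
----------------------------------------
/* Sketch: translate the 64-bit siginfo into the compat layout
 * instead of copying it raw to the 32-bit debugger. */
case PTRACE_GETSIGINFO:
	ret = fetch_siginfo(child, &si);	/* hypothetical helper */
	if (!ret)
		ret = copy_siginfo_to_user32(
			(struct compat_siginfo __user *) compat_ptr(data),
			&si);
	break;
----------------------------------------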
Signed-off-by: Andreas Schwab <schwab@suse.de>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 6844e63a9458d15b4437aa467c99128d994b0f6c
Hardware encryption doesn't work yet, so let's use software
encryption for now.
Changes-licensed-under: 3-Clause-BSD
Signed-off-by: Luis R. Rodriguez <mcgrof@winlab.rutgers.edu>
Signed-off-by: John W. Linville <linville@tuxdriver.com>
Cc: Jiri Benc <jbenc@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 7efd52a407bed6a2b02015b8ebbff7beba155392 upstream
If acpi_install_notify_handler() for a bay device fails, the bay driver is
superfluous. Most likely, another driver (like libata) is already taking
care of this device anyway. Furthermore,
register_hotplug_dock_device(acpi_handle) from the dock driver must not be
called twice with the same handle; this would result in an endless loop
consuming 100% CPU. So clean up and exit.
Signed-off-by: Holger Macht <hmacht@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Len Brown <len.brown@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit ec8dab36e0738d3059980d144e34f16a26bbda7d upstream
When using the HIDP or BNEP kernel support, user-space needs to
know if the connection has been terminated for some reason. Wake up
the application if that happens. Otherwise kernel and user-space are
no longer on the same page and weird behavior can happen.
Signed-off-by: Marcel Holtmann <marcel@holtmann.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0607fd02587a6b4b086dc746d63123c1f284db68 upstream
I received a complaint that some FAT-formatted media (e.g. SD memory cards)
trigger an "unknown partition table" message even though there is no partition
table and they work correctly, while in general (when e.g. formatted with
mkdosfs or even Windows Vista) this message is not shown.
Currently this seems only to happen when the media get formatted with Windows
XP (and possibly Windows 2000). Then the boot indicator byte contains garbage
(part of a text message) and so do the other bytes checked by msdos_partition,
which then later triggers this message.
References: novell bug #364365
Most FAT-formatted media without a partition table contain zeros in the boot
indicator and the other tested bytes and so fall through the checks in
msdos_partition, leading it to return 1 (all is fine).
But some (e.g. WinXP-formatted) FAT-formatted media don't use boot_ind, so
the check fails and causes an "unknown partition table" warning even though
there is none and everything would be fine.
This additional check directly verifies whether there is a FAT-formatted
medium without a partition table.
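A heuristic of roughly this shape can recognize such a medium (offsets
follow the FAT boot-sector layout; this is an illustration, not the
exact patch):
----------------------------------------
/* Illustration: a FAT boot sector starts with an x86 jump opcode and
 * carries a valid media byte at offset 0x15; if both hold, treat the
 * device as an unpartitioned FAT volume instead of warning. */
static int looks_like_fat(const unsigned char *sect)
{
	return (sect[0] == 0xeb || sect[0] == 0xe9) &&
	       (sect[0x15] == 0xf0 || sect[0x15] >= 0xf8);
}
----------------------------------------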
Signed-off-by: Frank Seidel <fseidel@suse.de>
Cc: Andreas Dilger <adilger@sun.com>
Acked-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 73f20e58b1d586e9f6d3ddc3aad872829aca7743 upstream
The on-disk media specification field in FAT is only 8 bits wide, so testing for
<=0xff is pointless, and can generate a "comparison is always true due to
limited range of data type" warning.
While we're there, convert FAT_VALID_MEDIA() into a C function - the present
implementation is buggy: it generates either one or two references to its
argument.
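The C-function form can be as small as this sketch, which matches the
intent described above (argument evaluated exactly once):
----------------------------------------
/* Sketch: one evaluation of the argument, and the u8 type makes any
 * <= 0xff test unnecessary. */
static inline int fat_valid_media(u8 media)
{
	return 0xf8 <= media || media == 0xf0;
}
----------------------------------------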
Cc: Frank Seidel <fseidel@suse.de>
Acked-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0376bce7b0659fe1e80d045860087072583ab93f upstream
Acer Aspire 1360 needs to be added to nomux blacklist, otherwise its
touchpad misbehaves.
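Nomux blacklist entries are DMI matches of this shape (the match
strings below are assumptions, shown only to illustrate the format):
----------------------------------------
/* Sketch of an i8042 nomux blacklist entry. */
{
	.ident = "Acer Aspire 1360",
	.matches = {
		DMI_MATCH(DMI_SYS_VENDOR, "Acer"),
		DMI_MATCH(DMI_PRODUCT_NAME, "Aspire 1360"),
	},
},
----------------------------------------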
Reported-by: Clark Tompsett <clarkt@cnsp.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3f79b1e94002791a42837a46b5817e87a0ac0873 upstream
Reported-by: Hans Aschauer <Hans.Aschauer@web.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit efd5184646d5d400fc538d093e9a0bec22a75551 upstream
Fujitsu-Siemens Amilo Pro V2030 needs a nomux table entry, in addition to
the already existing entry for the V2010 model (note that Fujitsu-Siemens
changed the capitalization of the product name in the DMI data).
Tested-by: Jiri Mleziva <jmleziva@tiscali.cz>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5b5b43d0b32ea586036638288c31179f00de5443 upstream
Gericom Bellagio needs to be added to nomux blacklist, otherwise its
touchpad misbehaves.
Reported-by: Roland Kletzing <roland.kletzing@materna.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit c3a34f4390396a4bede3f8b7bcc5153f50b974bb upstream
This patch introduces i8042_dmi_nopnp_table to make it possible to perform
DMI matches for systems that need 'i8042.nopnp' to work correctly, and
introduces such an entry for the Intel D845PESV -- this system doesn't
detect the PS/2 mouse reliably without this option, as reported by Robert
Lewis.
[dtor@mail.ru - make it compile if CONFIG_PNP is off - reported
by Randy Dunlap]
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 2f6a77d56523c14651236bc401a99b0e2aca2fdd upstream
There are systems that fail in i8042_resume() with
i8042: Can't write CTR to resume
as i8042_command(&i8042_ctr, I8042_CMD_CTL_WCTR) fails even though the
controller claimed itself to be ready before.
One retry after the failed write fixes the problem on those systems.
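The retry is as simple as the following sketch (i8042_command() and
I8042_CMD_CTL_WCTR are the driver's real names; the delay value is an
assumption):
----------------------------------------
/* Sketch: retry the CTR write once before giving up on resume. */
if (i8042_command(&i8042_ctr, I8042_CMD_CTL_WCTR)) {
	msleep(50);
	if (i8042_command(&i8042_ctr, I8042_CMD_CTL_WCTR)) {
		printk(KERN_ERR "i8042: Can't write CTR to resume\n");
		return -EIO;
	}
}
----------------------------------------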
Reported-by: Helmut Schaa <hschaa@novell.com>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Signed-off-by: Dmitry Torokhov <dtor@mail.ru>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5b9a499d77e9dd39c9e6611ea10c56a31604f274 upstream
There are several cases where the running transaction can get buffers added to
its BJ_Metadata list which it never dirtied, which makes its t_nr_buffers
counter end up larger than its t_outstanding_credits counter.
This will cause issues when starting new transactions: as we log buffers we
decrement t_outstanding_credits, so when t_outstanding_credits goes negative
we will report that we need less space in the journal than we actually need,
so transactions will be started even though there may not be enough room for
them. In the worst-case scenario (which admittedly is almost impossible to
reproduce) this will result in the journal running out of space.
The fix is to only refile buffers from the committing transaction to the
running transaction's BJ_Metadata list when b_modified is set on the journal
head, which is the only way to be sure the running transaction has modified
that buffer.
This patch also fixes an accounting error in journal_forget: it is possible
to call journal_forget on a buffer without having modified it, only having
gotten write access to it, so instead of always freeing a credit, we only do
so if the buffer was modified. The added assert will help catch this problem
if it occurs.
Without these two patches I could hit this assert within minutes of running
postmark; with them the issue no longer arises.
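The journal_forget() side looks roughly like this (b_modified and
h_buffer_credits are jbd's real field names; the snippet is simplified
and ignores the locking around it):
----------------------------------------
/* Sketch: only give a credit back when the handle actually modified
 * the buffer; merely having had write access doesn't consume one. */
int was_modified = jh->b_modified;

if (was_modified)
	handle->h_buffer_credits++;
----------------------------------------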
Signed-off-by: Josef Bacik <jbacik@redhat.com>
Cc: <linux-ext4@vger.kernel.org>
Acked-by: Jan Kara <jack@ucw.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 3f31fddfa26b7594b44ff2b34f9a04ba409e0f91 upstream
journal_try_to_free_buffers() could race with the jbd commit transaction when
the latter is holding the buffer reference while waiting for the data
buffer to flush to disk. If the caller of journal_try_to_free_buffers()
tries hard to release the buffers, it will treat the failure as an error
and return to the caller. We have seen direct IO fail due to this race.
Some callers of releasepage() also expect the buffer to be dropped when
passed the GFP_KERNEL mask via
releasepage()->journal_try_to_free_buffers().
With this patch, if the caller passes __GFP_WAIT and __GFP_FS to indicate
the call may wait, then when try_to_free_buffers() fails we wait for
journal_commit_transaction() to finish committing the current transaction,
then try to free those buffers again.
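In code form the retry looks roughly like this
(journal_wait_for_transaction_sync_data() is the helper this patch
introduces; the surrounding logic is simplified):
----------------------------------------
/* Sketch of the retry path in journal_try_to_free_buffers(). */
ret = try_to_free_buffers(page);
if (!ret && (gfp_mask & __GFP_WAIT) && (gfp_mask & __GFP_FS)) {
	journal_wait_for_transaction_sync_data(journal);
	ret = try_to_free_buffers(page);
}
----------------------------------------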
[akpm@linux-foundation.org: coding-style fixes]
Signed-off-by: Mingming Cao <cmm@us.ibm.com>
Reviewed-by: Badari Pulavarty <pbadari@us.ibm.com>
Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 5bc833feaa8b2236265764e7e81f44937be46eda upstream
Currently at the start of a journal commit we loop through all of the buffers
on the committing transaction and clear the b_modified flag (the flag that is
set when a transaction modifies the buffer) under the j_list_lock.
The problem is that everywhere else this flag is modified only under the jbd
buffer state lock, so the commit will race with a running transaction that
could potentially set the flag, only to have it unset by the committing
transaction.
This is also a big waste: you can be clearing the modified flag on several
thousand buffers when you may not need to. This patch removes that code and
instead clears the b_modified flag upon entering
do_get_write_access/journal_get_create_access, so if a transaction does
indeed use the buffer then it will be accounted for properly, and if it does
not then we know we didn't use it.
That will be important for the next patch in this series. Tested thoroughly
by myself using postmark/iozone/bonnie++.
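The relocated clearing amounts to a few lines under the buffer state
lock; a sketch (the locking helpers are jbd's real ones, the placement
is simplified):
----------------------------------------
/* Sketch: reset b_modified when a transaction first gets access to
 * the buffer, under the same lock all other users of the flag take. */
jbd_lock_bh_state(bh);
jh->b_modified = 0;
jbd_unlock_bh_state(bh);
----------------------------------------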
Signed-off-by: Josef Bacik <jbacik@redhat.com>
Cc: <linux-ext4@vger.kernel.org>
Acked-by: Jan Kara <jack@ucw.cz>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit f41f741838480aeaa3a189cff6e210503cf9c42d upstream
...and ensure that we obey the NFS_INO_INVALID_ACL flag when retrieving the
acls.
Signed-off-by: Trond Myklebust <Trond.Myklebust@netapp.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 483d8876f75aa5707a646442377051f1b90db206 upstream
Add an #include <asm/time.h> statement for get_tb().
Signed-off-by: FUJITA Tomonori <fujita.tomonori@lab.ntt.co.jp>
Signed-off-by: Geoff Levand <geoffrey.levand@am.sony.com>
Signed-off-by: Paul Mackerras <paulus@samba.org>
Cc: Jeff Mahoney <jeffm@suse.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit e9baf6e59842285bcf9570f5094e4c27674a0f7c upstream
In cases where both EEXIST and EROFS would apply we used to
return the former in mkdir(2) and friends. Lest anyone suspect
us of being consistent, in the same situation knfsd gave clients
nfs_erofs...
The ro-bind series had switched the syscall side of things to
returning -EROFS and immediately broke an application - namely,
mkdir -p. This patch restores the original behaviour...
Signed-off-by: Al Viro <viro@zeniv.linux.org.uk>
Acked-by: Jan Blunck <jblunck@suse.de>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 69cd39e94669e2994277a29249b6ef93b088ddbb upstream
Newer Dell CERC firmware (>= 6.62) implements a random deletion handling
compatible with the legacy megaraid driver. The legacy handling shifted
the target ID by 0x80 only for I/O commands (READ/WRITE/etc), whereas
megaraid_mbox always shifts the target ID if random deletion is supported.
This resulted in megaraid_mbox sending an INQUIRY to the wrong channel and,
obviously, not finding any devices.
So we disable the random deletion support if the offending firmware is
found.
Addresses http://bugzilla.kernel.org/show_bug.cgi?id=6695
Signed-off-by: Hannes Reinecke <hare@suse.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Bo Yang <Bo.Yang@lsi.com>
Signed-off-by: James Bottomley <James.Bottomley@HansenPartnership.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 756a6c68556600aec9460346332884d891d5beb4 upstream
x86: ioremap of 64-bit resource on 32-bit kernel fix
Signed-off-by: Ingo Molnar <mingo@elte.hu>
Cc: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 0056e65f9e28d83ee1a3fb4f7d0041e838f03c34 upstream
We zero-fill them like we are supposed to, and that's all fine. It's
only an error if the 'romfs_copyfrom()' routine isn't able to fill the
data that is supposed to be there.
Most of the patch is really just re-organizing the code a bit, and using
separate variables for the error value and for how much of the page we
actually filled from the filesystem.
Reported-and-tested-by: Chris Fester <cfester@wms.com>
Cc: Alexander Viro <viro@zeniv.linux.org.uk>
Cc: Matt Waddel <matt.waddel@freescale.com>
Cc: Greg Ungerer <gerg@snapgear.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
commit 94ad374a0751f40d25e22e036c37f7263569d24c upstream
The iov_iter_advance() function would look at the iov->iov_len entry
even though it might have iterated over the whole array, and iov was
pointing past the end. This would cause DEBUG_PAGEALLOC to trigger a
kernel page fault if the allocation was at the end of a page, and the
next page was unallocated.
The quick fix is to just change the order of the tests: check that there
is any iovec data left before we check the iov entry itself.
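The safe ordering can be pictured like this sketch (names simplified
from the mm/filemap.c iterator; not the literal diff):
----------------------------------------
/* Sketch: test the remaining byte count before dereferencing the
 * iovec cursor, which may point one past the end of the array. */
while (bytes) {			/* remaining-count test comes first */
	size_t n = min(bytes, iov->iov_len - base);

	bytes -= n;
	base += n;
	if (iov->iov_len == base) {	/* entry exhausted: advance */
		iov++;
		base = 0;
	}
}
----------------------------------------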
Thanks to Alexey Dobriyan for finding this case, and testing the fix.
Reported-and-tested-by: Alexey Dobriyan <adobriyan@gmail.com>
Cc: Nick Piggin <npiggin@suse.de>
Cc: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
netfilter: nf_conntrack_tcp: fix endless loop
Upstream commit 6b69fe0:
When a conntrack entry is destroyed in process context and destruction
is interrupted by packet processing and the packet is an attempt to
reopen a closed connection, TCP conntrack tries to kill the old entry
itself and returns NF_REPEAT to pass the packet through the hook
again. This may lead to an endless loop: TCP conntrack repeatedly
finds the old entry, but cannot kill it itself since destruction
is already in progress, while destruction in process context cannot
complete since TCP conntrack is keeping the CPU busy.
Drop the packet in TCP conntrack if we can't kill the connection
ourselves, to avoid this.
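The decision point looks roughly like this (nf_ct_kill() is the real
helper; the snippet sketches the control flow rather than the literal
diff):
----------------------------------------
/* Sketch: repeat only if we were the ones who killed the old entry;
 * otherwise drop so process-context destruction can finish. */
if (nf_ct_kill(ct))
	return -NF_REPEAT;	/* old entry gone: reprocess the packet */
return -NF_DROP;		/* destruction in progress elsewhere */
----------------------------------------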
Reported by: hemao77@gmail.com [ Kernel bugzilla #11058 ]
Signed-off-by: Patrick McHardy <kaber@trash.net>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>