commit c8e33141911bf8fe87dc6c92793b9a59b2be0130 upstream.
The locking logic in this function is extremely subtle, and it broke
when we started doing potentially concurrent 'flush_to_ldisc()' calls in
commit e043e42bdb66885b3ac10d27a01ccb9972e2b0a3 ("pty: avoid forcing
'low_latency' tty flag").
The code in flush_to_ldisc() used to set 'tty->buf.head' to NULL, with
the intention that this would then cause any other concurrent calls to
not do anything (locking note: we have to drop the buf.lock over the
call to ->receive_buf that can block, which is why we can have
concurrency here at all in the first place).
It also used to set the TTY_FLUSHING bit, which would then cause any
concurrent 'tty_buffer_flush()' to not free all the tty buffers and
clear 'tty->buf.tail'. And with 'buf.head' being NULL, and 'buf.tail'
being non-NULL, new data would never touch 'buf.head'.
Does that sound a bit too subtle? It was. If another concurrent call to
'flush_to_ldisc()' were to come in, the NULL buf.head would indeed cause
it to not process the buffer list, but it would still clear TTY_FLUSHING
afterwards, making the buffer protection against 'tty_buffer_flush()' no
longer work.
So this clears it all up. We depend purely on TTY_FLUSHING for handling
re-entrancy, and stop playing games with the buffer list entirely. In
fact, the buffer list handling is now robust enough that we could
probably stop doing the whole "protect against 'tty_buffer_flush()'"
thing entirely.
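In practice that means the flush routine is guarded by a single test-and-set of the TTY_FLUSHING bit. A minimal sketch of the pattern, using the standard atomic bit helpers on tty->flags (illustrative only, not the verbatim upstream function):
static void flush_to_ldisc(struct tty_struct *tty)
{
    /* If another flush is already in progress, let it do the work. */
    if (test_and_set_bit(TTY_FLUSHING, &tty->flags))
        return;
    /* ... walk tty->buf.head and feed data to ->receive_buf(),
       dropping buf.lock around that potentially blocking call ... */
    clear_bit(TTY_FLUSHING, &tty->flags);
}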
However, Alan also points out that we would probably be better off
simplifying the locking even further, and just take the tty ldisc_mutex
around all the buffer flushing calls. That seems like a good idea, but
in the meantime this is a conceptually minimal fix (with the patch
itself being bigger than required just to clean the code up and make it
readable).
This fixes keyboard trouble under X:
http://bugzilla.kernel.org/show_bug.cgi?id=14388
Reported-and-tested-by: Frédéric Meunier <fredlwm@gmail.com>
Reported-and-tested-by: Boyan <btanastasov@yahoo.co.uk>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Cc: Paul Fulghum <paulkf@microgate.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 15d031c394e7bef9da4ec764e6b0330d701a0126 upstream.
The previously sent patch:
http://marc.info/?l=tpmdd-devel&m=125208945007834&w=2
had its first hunk cropped when merged; this resubmits only that first
hunk.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Cc: Debora Velarde <debora@linux.vnet.ibm.com>
Cc: Marcel Selhorst <m.selhorst@sirrix.com>
Cc: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Acked-by: Mimi Zohar <zohar@us.ibm.com>
Tested-by: Mimi Zohar <zohar@us.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 0afd9056f1b43c9fcbfdf933b263d72023d382fe upstream.
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Cc: Debora Velarde <debora@linux.vnet.ibm.com>
Cc: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Cc: Marcel Selhorst <m.selhorst@sirrix.com>
Cc: James Morris <jmorris@namei.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 0b5759c654e74c8dc317ea2c6b3a7476160f688a upstream.
A couple of people have hit the WARN_ON() in drivers/char/tty_io.c,
tty_open() that is unhappy about seeing the tty line discipline go away
during the tty hangup. See for example
http://bugzilla.kernel.org/show_bug.cgi?id=14255
and the reason is that we do the tty_ldisc_halt() outside the
ldisc_mutex in order to be able to flush the scheduled work without a
deadlock with vhangup_work.
However, it turns out that we can solve this particular case by
- using "cancel_delayed_work_sync()" in tty_ldisc_halt(), which waits
for just the particular work, rather than synchronizing with any
random outstanding pending work.
This won't deadlock, since the buf.work we synchronize with doesn't
care about the ldisc_mutex; it just flushes the tty ldisc buffers.
- realize that for this particular case, we don't need to wait for any
hangup work, because we are inside the hangup codepaths ourselves.
so as a result we can just drop the flush_scheduled_work() entirely, and
then move the tty_ldisc_halt() call to inside the mutex. That way we
never expose the partially torn down ldisc state to tty_open(), and hold
the ldisc_mutex over the whole sequence.
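A rough sketch of the halt helper after this change, assuming tty->buf.work is the delayed work item that flushes the ldisc buffers (names recalled from the code of that era, so treat them as assumptions):
static int tty_ldisc_halt(struct tty_struct *tty)
{
    clear_bit(TTY_LDISC, &tty->flags);
    /* Wait only for the buffer-flush work itself, not for every
       scheduled work item, so this cannot deadlock with vhangup_work. */
    return cancel_delayed_work_sync(&tty->buf.work);
}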
Reported-by: Ingo Molnar <mingo@elte.hu>
Reported-by: Heinz Diehl <htd@fancy-poultry.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 1f5c13fad4ec5617b610e12205902c06298c096a upstream.
This patch (as1282) fixes some obvious typos in the TTY core.
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
CC: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit fe1ae7fdd2ee603f2d95f04e09a68f7f79045127 upstream.
Various drivers have hacks to mangle termios structures. This stems from
the fact that there is no nice setup hook for configuring the termios
settings when the port is created.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 7ca0ff9ab3218ec443a7a9ad247e4650373ed41e upstream.
Now that we are extracting out methods for shutdown and the like, we can add a
proper tty_port_close method that knows all the innards of the tty closing
process and hides the lot from the caller.
At some point in the future this will be paired with a similar open()
helper and the drivers can stick to hardware management.
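For illustration, a driver's close() can then shrink to something like the sketch below; the foo_* names are hypothetical, only tty_port_close() is the helper this commit adds:
struct foo_port {
    struct tty_port port;
    /* hardware state ... */
};
static void foo_close(struct tty_struct *tty, struct file *filp)
{
    struct foo_port *fp = tty->driver_data;
    /* tty_port_close() handles refcounting, DTR drop, hangup and
       wait-until-sent; the driver only manages its hardware. */
    tty_port_close(&fp->port, tty, filp);
}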
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 202c4675c55ddf6b443c7e057d2dff6b42ef71aa upstream.
Commit ac89a9174 ("pty: don't limit the writes to 'pty_space()' inside
'pty_write()'") removed the pty_space() checking, in order to let the
regular tty buffer code limit the buffering itself.
That was all good, but as a subtle side effect it meant that we'd be
doing a tty_wakeup() even in the case where the buffers were all filled
up, and didn't actually make any progress on the write.
Which sounds innocuous, but it interacts very badly with the ppp_async
code, which has an infinite loop in ppp_async_push() that tries to push
out data to the tty. When we call tty_wakeup(), that loop ends up
thinking that progress was made (see the subtle interactions between
XMIT_WAKEUP and 'tty_stuffed' for details). End result: one unhappy ppp
user.
Fixed by noticing when tty_insert_flip_string() didn't actually do
anything, and then not doing any more processing (including, very much
not calling tty_wakeup()).
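A simplified sketch of the resulting pty_write() logic (not the verbatim upstream function):
static int pty_write(struct tty_struct *tty, const unsigned char *buf, int c)
{
    struct tty_struct *to = tty->link;
    if (tty->stopped)
        return 0;
    if (c > 0) {
        /* Stuff the data into the other side's input queue. */
        c = tty_insert_flip_string(to, buf, c);
        /* Only push and wake writers if we actually made progress. */
        if (c) {
            tty_flip_buffer_push(to);
            tty_wakeup(tty);
        }
    }
    return c;
}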
Bisected-and-tested-by: Peter Volkov <pva@gentoo.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit e517a5e97080bbe52857bd0d7df9b66602d53c4d upstream.
Ever since we enabled GEM, the pre-9xx chipsets (particularly 865) have had
serious stability issues. Back in May a wbinvd was added to the DRM to
work around much of the problem. Some failure remained -- easily visible
by dragging a window around on an X -retro desktop, or by looking at bugzilla.
The chipset flush was on the right track -- hitting the right amount of
memory, and it appears to be the only way to flush on these chipsets, but the
flush page was mapped uncached. As a result, the writes trying to clear the
writeback cache ended up bypassing the cache, and not flushing anything! The
wbinvd would flush out other writeback data and often cause the data we wanted
to get flushed, but not always. By removing the setting of the page to UC
and instead just clflushing the data we write to try to flush it, we get the
desired behavior with no wbinvd.
This exports clflush_cache_range(), which was lying around and happened to
basically match the code I was otherwise going to copy from the DRM.
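A hedged sketch of the flushing pattern described above, assuming a write-back (cached) mapping of the chipset flush page; the function name and the size written are illustrative:
static void intel_chipset_flush_page(void *flush_page)
{
    /* Dirty the flush range through the writeback cache so the
       chipset sees it, then clflush just that range -- no wbinvd. */
    memset(flush_page, 0, 1024);
    clflush_cache_range(flush_page, 1024);
}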
Signed-off-by: Eric Anholt <eric@anholt.net>
Signed-off-by: Brice Goglin <Brice.Goglin@ens-lyon.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit 121264827656f5f06328b17983c796af17dc5949 upstream.
As early PCI resume has already restored the config for the host
bridge and the graphics device, we don't need to restore it again.
This removes the original ordering hack for the graphics device restore.
This fixes the resume hang found by Alan Stern on 845G, which was
caused by the extra config restore on the graphics device.
Cc: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Dave Airlie <airlied@linux.ie>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
commit ec57935837a78f9661125b08a5d08b697568e040 upstream.
When probing the device in tpm_tis_init, the call to request_locality
uses timeout_a, which wasn't being initialized until after
request_locality. This results in request_locality falsely timing
out if the chip is still starting. Move the initialization to before
request_locality.
This probably only matters for embedded cases (i.e. mine); a BIOS likely
gets the TPM into a state where this code path isn't necessary.
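Roughly, the probe path needs to look like the fragment below, with the default timeouts set before the first locality request (the constant and field names are assumptions based on the tpm_tis driver, not verified against this exact version):
    /* Default timeouts -- must be set before request_locality(),
       which polls using timeout_a. */
    chip->vendor.timeout_a = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
    chip->vendor.timeout_b = msecs_to_jiffies(TIS_LONG_TIMEOUT);
    chip->vendor.timeout_c = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
    chip->vendor.timeout_d = msecs_to_jiffies(TIS_SHORT_TIMEOUT);
    if (request_locality(chip, 0) != 0) {
        rc = -ENODEV;
        goto out_err;
    }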
Signed-off-by: Jason Gunthorpe <jgunthorpe@obsidianresearch.com>
Acked-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Signed-off-by: James Morris <jmorris@namei.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/anholt/drm-intel:
agp/intel: support for new chip variant of IGDNG mobile
drm/i915: Unref old_obj on get_fence_reg() error path
drm/i915: increase default latency constant (v2 w/comment)
|
|
The whole write-room thing is something that is up to the _caller_ to
worry about, not the pty layer itself. The total buffer space will
still be limited by the buffering routines themselves, so there is no
advantage or need in having pty_write() artificially limit the size
somehow.
And what happened was that the caller (the n_tty line discipline, in
this case) may have verified that there is room for 2 bytes to be
written (for NL -> CRNL expansion), and it used to then do those writes
as two single-byte writes. And if the first byte written (CR) then
caused a new tty buffer to be allocated, pty_space() may have returned
zero when trying to write the second byte (LF), and then incorrectly
failed the write - leading to a lost newline character.
This should finally fix
http://bugzilla.kernel.org/show_bug.cgi?id=14015
Reported-by: Mikael Pettersson <mikpe@it.uu.se>
Acked-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
When translating NL to CRNL in the n_tty line discipline, we did it as
two tty_put_char() calls. Which works, but is stupid, and has caused
problems before too with bad interactions with the write_room() logic.
The generic USB serial driver had that problem, for example.
Now the pty layer had similar issues after being moved to the generic
tty buffering code (in commit d945cb9cce20ac7143c2de8d88b187f62db99bdc:
"pty: Rework the pty layer to use the normal buffering logic").
So stop doing the silly separate two writes, and do it as a single write
instead. That's what the n_tty layer already does for the space
expansion of tabs (XTABS), and it means that we'll now always have just
a single write for the CRNL to match the single 'tty_write_room()' test,
which hopefully means that the next time somebody screws up buffering,
it won't cause weeks of debugging.
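A sketch of the output path after this change, assuming the O_ONLCR handling in the n_tty output helper, where 'space' is the room reported by tty_write_room() earlier in the function (simplified from memory, not the exact upstream code):
    if (c == '\n' && O_ONLCR(tty)) {
        if (space < 2)
            return -1;
        /* Emit CR+NL as one write so it matches the caller's
           single tty_write_room() check. */
        tty->ops->write(tty, "\r\n", 2);
        return 2;
    }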
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
New variant of IGDNG mobile chip has new host bridge id.
[anholt: Note that this new PCI ID doesn't impact the DRM, which doesn't
care about the PCI ID of the bridge]
Signed-off-by: Zhenyu Wang <zhenyuw@linux.intel.com>
Signed-off-by: Eric Anholt <eric@anholt.net>
|
|
When I rewrote tty ldisc code to use proper reference counts (commits
65b770468e98 and cbe9352fa08f) in order to avoid a race with hangup, the
test-program that Eric Biederman used to trigger the original problem
seems to have exposed another long-standing bug: the hangup code did the
'tty_ldisc_halt()' to stop any buffer flushing activity, but unlike the
other call sites it never actually flushed any pending work.
As a result, if you get just the right timing, the pending work may be
just about to execute (ie the timer has already triggered and thus
cancel_delayed_work() was a no-op), when we then re-initialize the ldisc
from under it.
That, in turn, results in various random problems, usually seen as a
NULL pointer dereference in run_timer_softirq() or a BUG() in
worker_thread (but it can be almost anything).
Fix it by adding the required 'flush_scheduled_work()' after doing the
tty_ldisc_halt() (this also requires us to move the ldisc halt to before
taking the ldisc mutex in order to avoid a deadlock with the workqueue
executing do_tty_hangup, which requires the mutex).
The locking should be cleaned up one day (the requirement to do this
outside the ldisc_mutex is very annoying, and weakens the lock), but
that's a larger and separate undertaking.
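The resulting ordering in the hangup path is roughly (a sketch of the sequence described above, not the full function):
    /* Stop further buffer flushing; must happen outside ldisc_mutex
       because the workqueue running do_tty_hangup needs that mutex. */
    tty_ldisc_halt(tty);
    /* Make sure any already-running flush work has finished before
       the ldisc is torn down and re-initialized. */
    flush_scheduled_work();
    mutex_lock(&tty->ldisc_mutex);
    /* ... reinit the ldisc ... */
    mutex_unlock(&tty->ldisc_mutex);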
Reported-by: Eric W. Biederman <ebiederm@xmission.com>
Tested-by: Xiaotian Feng <xtfeng@gmail.com>
Tested-by: Yanmin Zhang <yanmin_zhang@linux.intel.com>
Tested-by: Dave Young <hidave.darkstar@gmail.com>
Cc: Frederic Weisbecker <fweisbec@gmail.com>
Cc: Greg Kroah-Hartman <gregkh@suse.de>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Commit d945cb9cc ("pty: Rework the pty layer to use the normal buffering
logic") dropped the test for 'tty->stopped' in pty_write_room(), which
then causes the n_tty line discipline thing to not throttle the data
properly when the tty is stopped.
So instead of pausing the write due to the tty being stopped, the ldisc
layer would go ahead and push it down to the pty. The pty write()
routine would then refuse to take the data (because it _did_ check
'stopped'), and the data wouldn't actually be written.
This whole stopped test should eventually be moved into the tty ldisc
layer rather than have low-level tty drivers care about these things,
but right now the fix is to just re-instate the missing pty 'stopped'
handling.
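The reinstated check is essentially this minimal sketch, where pty_space() is the existing helper mentioned above:
static int pty_write_room(struct tty_struct *tty)
{
    struct tty_struct *to = tty->link;
    /* A stopped pty must report no room, so the ldisc throttles
       instead of pushing data that pty_write() will refuse. */
    if (tty->stopped)
        return 0;
    return pty_space(to);
}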
Reported-and-tested-by: Artur Skawina <art.08.09@gmail.com>
Cc: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
* git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty-2.6:
tty-ldisc: be more careful in 'put_ldisc' locking
tty-ldisc: turn ldisc user count into a proper refcount
tty-ldisc: make refcount be atomic_t 'users' count
|
|
Use 'atomic_dec_and_lock()' to make sure that we always hold the
tty_ldisc_lock when the ldisc count goes to zero. That way we can never
race against 'tty_ldisc_try()' increasing the count again.
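A minimal sketch of the pattern, with 'users' as the atomic refcount and tty_ldisc_lock as the spinlock named above (simplified: the irq-saving lock variants and the non-final-reference path are omitted):
static void put_ldisc(struct tty_ldisc *ld)
{
    /* When this drops the count to zero we return with
       tty_ldisc_lock held, so tty_ldisc_try() cannot revive the
       ldisc between the decrement and the free. */
    if (atomic_dec_and_lock(&ld->users, &tty_ldisc_lock)) {
        module_put(ld->ops->owner);
        spin_unlock(&tty_ldisc_lock);
        kfree(ld);
    }
}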
Reported-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@mail.by>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
By using the user count for the actual lifetime rules, we can get rid of
the silly "wait_for_idle" logic, because any busy ldisc will
automatically stay around until the last user releases it. This avoids
a host of odd issues, and simplifies the code.
So now, when the last ldisc reference is dropped, we just release the
ldisc operations struct reference, and free the ldisc.
It looks obvious enough, and it does work for me, but the counting
_could_ be off. It probably isn't (bad counting in the new version would
generally imply that the old code did something really bad, like free an
ldisc with a non-zero count), but it does need some testing, and
preferably somebody looking at it.
With this change, both 'tty_ldisc_put()' and 'tty_ldisc_deref()' are
just aliases for the new ref-counting 'put_ldisc()'. Both of them
decrement the ldisc user count and free it if it goes down to zero.
They're identical functions, in other words.
But the reason they still exist as separate functions is that one of them
was exported (tty_ldisc_deref) and had a stupid name (so I don't want to
use it as the main name), and the other one was used in multiple places
(and I didn't want to make the patch larger just to rename the users).
In addition to the refcounting, I did do some minimal cleanup. For
example, now "tty_ldisc_try()" actually returns the ldisc it got under
the lock, rather than returning true/false and then the caller would
look up the ldisc again (now without the protection of the lock).
That said, there's tons of dubious use of 'tty->ldisc' without obviously
proper locking or refcounting left. I expressly did _not_ want to try to
fix it all, keeping the patch minimal. There may or may not be bugs in
that kind of code, but they wouldn't be _new_ bugs.
That said, even if the bugs aren't new, the timing and lifetime will
change. For example, some silly code may depend on the 'tty->ldisc'
pointer not changing because they hold a refcount on the 'ldisc'. And
that's no longer true - if you hold a ref on the ldisc, the 'ldisc'
itself is safe, but tty->ldisc may change.
So the proper locking (remains) to hold tty->ldisc_mutex if you expect
tty->ldisc to be stable. That's not really a _new_ rule, but it's an
example of something that the old code might have unintentionally
depended on and hidden bugs.
Whatever. The patch _looks_ sensible to me. The only users of
ldisc->users are:
- get_ldisc() - atomically increment the count
- put_ldisc() - atomically decrements the count and releases if zero
- tty_ldisc_try_get() - creates the ldisc, and sets the count to 1.
The ldisc should then either be released, or be attached to a tty.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Tested-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@mail.by>
Acked-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
This is pure preparation of changing the ldisc reference counting to be
a true refcount that defines the lifetime of the ldisc. But this is a
purely syntactic change for now to make the next steps easier.
This patch should make no semantic changes at all. But I wanted to make
the ldisc refcount be an atomic (I will be touching it without locks
soon enough), and I wanted to rename it so that there isn't quite as
much confusion between 'ldo->refcount' (ldisc operations refcount) and
'ld->refcount' (ldisc refcount itself) in the same file.
So it's now an atomic 'ld->users' count. It still starts at zero,
despite having a reference from 'tty->ldisc', but that will change once
we turn it into a _real_ refcount.
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Tested-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Tested-by: Sergey Senozhatsky <sergey.senozhatsky@mail.by>
Acked-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@suse.de>
|
|
Fix those compiler warnings, which indeed point to a bug:
drivers/char/agp/parisc-agp.c:228: warning: initialization from incompatible pointer type
drivers/char/agp/parisc-agp.c:201: warning: 'parisc_agp_page_mask_memory' defined but not used
Signed-off-by: Helge Deller <deller@gmx.de>
|
|
commit d6580a9f15238b87e618310c862231ae3f352d2d ("kexec: sysrq: simplify
sysrq-c handler") changed the behavior of sysrq-c to an unconditional
dereference of a NULL pointer. So in configurations with CONFIG_KEXEC,
where crash_kexec() was previously called directly from sysrq-c, a step
of "real oops" is now inserted before kdump starts.
However, in contrast to an oops via SysRq-c from the keyboard, which
results in a panic due to in_interrupt(), an oops via "echo c >
/proc/sysrq-trigger" will not panic unless panic_on_oops=1. This means
that even if a dump is properly configured to be taken on panic, sysrq-c
from the proc interface might not start a crashdump while sysrq-c from
the keyboard does. This confuses traditional users of kdump, i.e. people
who expect sysrq-c to behave the same from both the keyboard and the
proc interface.
This patch brings the keyboard and proc interface behavior of sysrq-c in
line, by forcing panic_on_oops=1 before oops in sysrq-c handler.
Some documentation updates are included as well, to clarify that there
is no longer a dependency on CONFIG_KEXEC, and that the system will now
simply crash on sysrq-c if no dump mechanism is configured.
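The handler change amounts to something like this sketch (assuming the sysrq handler signature of that era; illustrative, not the verbatim patch):
static void sysrq_handle_crash(int key, struct tty_struct *tty)
{
    char *killer = NULL;
    /* Make the deliberate oops below panic even from process
       context, so proc-triggered sysrq-c behaves like the keyboard. */
    panic_on_oops = 1;
    wmb();
    *killer = 1;
}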
Signed-off-by: Hidetoshi Seto <seto.hidetoshi@jp.fujitsu.com>
Cc: Lai Jiangshan <laijs@cn.fujitsu.com>
Cc: Ken'ichi Ohmichi <oomichi@mxs.nes.nec.co.jp>
Acked-by: Neil Horman <nhorman@tuxdriver.com>
Acked-by: Vivek Goyal <vgoyal@redhat.com>
Cc: Brayan Arraes <brayan@yack.com.br>
Cc: Eric W. Biederman <ebiederm@xmission.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/misc-2.6
* 'zero-length' of git://git.kernel.org/pub/scm/linux/kernel/git/jgarzik/misc-2.6:
Remove zero-length file drivers/char/vr41xx_giu.c
|
|
We really don't want to mark the pty as a low-latency device, because as
Alan points out, the ->write method can be called from an IRQ (ppp?),
and that means we can't use ->low_latency=1 as we take mutexes in the
low_latency case.
So rather than using low_latency to force the written data to be pushed
to the ldisc handling at 'write()' time, just make the reader side (or
the poll function) do the flush when it checks whether there is data to
be had.
This also fixes the problem with lost data in an emacs compile buffer
(bugzilla 13815), and we can thus revert the low_latency pty hack
(commit 3a54297478e6578f96fd54bf4daa1751130aca86: "pty: quickfix for the
pty ENXIO timing problems").
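Conceptually, the reader-side check then looks like the sketch below, with tty_flush_to_ldisc() being the helper referred to in the bracketed note that follows (field names recalled from the n_tty code of that era, so treat the details as assumptions):
static inline int input_available_p(struct tty_struct *tty, int amt)
{
    /* Pull any pending pty/driver data into the ldisc now,
       instead of relying on low_latency pushes at write() time. */
    tty_flush_to_ldisc(tty);
    if (tty->icanon) {
        if (tty->canon_data)
            return 1;
    } else if (tty->read_cnt >= (amt ? amt : 1))
        return 1;
    return 0;
}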
Signed-off-by: OGAWA Hirofumi <hirofumi@mail.parknet.co.jp>
Tested-by: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
[ Modified to do the tty_flush_to_ldisc() inside input_available_p() so
that it triggers for both read and poll() - Linus]
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: Jeff Garzik <jgarzik@redhat.com>
|
|
This also makes close stall in the normal case, which is apparently
needed to fix emacs.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This function does not have an error return and returning an error is
instead interpreted as having a lot of pending bytes.
Reported by Jeff Harris who provided a list of some of the remaining
offenders.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
If spin_lock_irqsave is called twice in a row with the same second
argument, the interrupt state at the point of the second call overwrites
the value saved by the first call. Indeed, the second call does not
need to save the interrupt state, so it is changed to a simple
spin_lock.
The semantic match that finds this problem is as follows:
(http://www.emn.fr/x-info/coccinelle/)
// <smpl>
@@
expression lock1,lock2;
expression flags;
@@
*spin_lock_irqsave(lock1,flags)
... when != flags
*spin_lock_irqsave(lock2,flags)
// </smpl>
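In C, the corrected pattern looks like this sketch on two hypothetical locks a and b:
    unsigned long flags;
    spin_lock_irqsave(&a->lock, flags);
    /* Interrupts are already disabled here; calling
       spin_lock_irqsave(&b->lock, flags) again would overwrite the
       saved state, so a plain spin_lock() is enough. */
    spin_lock(&b->lock);
    /* ... critical section ... */
    spin_unlock(&b->lock);
    spin_unlock_irqrestore(&a->lock, flags);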
Signed-off-by: Julia Lawall <julia@diku.dk>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
The buffers for the consoles are unconditionally allocated at con_init()
time, which misses the creation of the vcs(a) devices.
Since 2.6.30 (commit 4995f8ef9d3aac72745e12419d7fbaa8d01b1d81, 'vcs:
hook sysfs devices into object lifetime instead of "binding"' to be
exact) these devices are no longer created at open() and removed on
close(), but controlled by the lifetime of the buffers.
Reported-by: Gerardo Exequiel Pozzi <vmlinuz386@yahoo.com.ar>
Tested-by: Gerardo Exequiel Pozzi <vmlinuz386@yahoo.com.ar>
Cc: stable@kernel.org
Signed-off-by: Kay Sievers <kay.sievers@vrfy.org>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Whoops.. fortunately not many people use this yet.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
If a tty in N_TTY mode with echo enabled manages to get itself into a state
where
- echo characters are pending
- FASYNC is enabled
- tty_write_wakeup is called from either
  - a device write path (pty)
  - an IRQ (serial)
then it either deadlocks or explodes taking a mutex in the IRQ path.
On the serial side it is almost impossible to reproduce because you have to
go from a full serial port to a near empty one with echo characters
pending. The pty case happens to have become possible to trigger using
emacs and ptys, the pty changes having created a scenario which shows up
this bug.
The code path is
n_tty:process_echoes() (takes mutex)
tty_io:tty_put_char()
pty:pty_write (or serial paths)
tty_wakeup (from pty_write or serial IRQ)
n_tty_write_wakeup()
process_echoes()
*KABOOM*
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Don't forget to drop a tty reference on fail paths in
receive_data().
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Bootmem is not used for the vt screen buffer anymore as slab is now
available at the time the console is initialized.
Get rid of the now superfluous distinction between slab and bootmem;
it's always slab.
This also fixes a kmalloc leak which Catalin described thusly:
Commit a5f4f52e ("vt: use kzalloc() instead of the bootmem allocator")
replaced the alloc_bootmem() with kzalloc() but didn't set vc_kmalloced to
1 and the memory block is later leaked. The corresponding kmemleak trace:
unreferenced object 0xdf828000 (size 8192):
comm "swapper", pid 0, jiffies 4294937296
backtrace:
[<c006d473>] __save_stack_trace+0x17/0x1c
[<c000d869>] log_early+0x55/0x84
[<c01cfa4b>] kmemleak_alloc+0x33/0x3c
[<c006c013>] __kmalloc+0xd7/0xe4
[<c00108c7>] con_init+0xbf/0x1b8
[<c0010149>] console_init+0x11/0x20
[<c0008797>] start_kernel+0x137/0x1e4
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Reviewed-by: Pekka Enberg <penberg@cs.helsinki.fi>
Tested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alan Cox <alan@lxorguk.ukuu.org.uk>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
We can get a situation where a hangup occurs during or after a close. In
that case the ldisc gets disposed of by the close and the hangup then
explodes.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
* Remove smp_lock.h from files which don't need it (including some headers!)
* Add smp_lock.h to files which do need it
* Make smp_lock.h include conditional in hardirq.h
It's needed only for one kernel_locked() usage, which is under CONFIG_PREEMPT.
This will make hardirq.h inclusion cheaper for every PREEMPT=n config
(which includes allmodconfig/allyesconfig, BTW).
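The hardirq.h part of the change is conceptually just the conditional include below (a sketch; the lone kernel_locked() user it serves is itself compiled only under CONFIG_PREEMPT):
#ifdef CONFIG_PREEMPT
# include <linux/smp_lock.h>
#endif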
Signed-off-by: Alexey Dobriyan <adobriyan@gmail.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Commit 5fd29d6ccbc98884569d6f3105aeca70858b3e0f ("printk: clean up
handling of log-levels and newlines") changed printk semantics. printk
lines with multiple KERN_<level> prefixes are no longer emitted as they
were before the patch; each additional <level> is now included literally
in the output.
Remove all uses of multiple KERN_<level>s in formats.
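For example, a format string with a second KERN_<level> now emits that prefix literally, so such calls are split into one printk per level (illustrative code, not taken from a specific driver):
    /* Old style: the second KERN_INFO now shows up as "<6>" in the output. */
    printk(KERN_INFO "first line\n" KERN_INFO "second line\n");
    /* New style: one level prefix per printk call. */
    printk(KERN_INFO "first line\n");
    printk(KERN_INFO "second line\n");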
Signed-off-by: Joe Perches <joe@perches.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
This fixes the ppp problems and various other call-locking issues caused
by one side of a pty, called in one locking context, trying to match the
other side, which has differing rules. We also get a big slack space to
work with, which means we can bury the flow-control deadlock case for
any conceivable real-world situation.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Signed-off-by: Yoichi Yuasa <yuasa@linux-mips.org>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Signed-off-by: Yoichi Yuasa <yyuasa@linux.com>
Signed-off-by: Ralf Baechle <ralf@linux-mips.org>
|
|
Currently we reinit the ldisc on final tty close, which is what the old
code did to ensure that if the device retained its termios settings then
it had the right ldisc. tty_ldisc_reinit does that, but it also leaves us
with the reset ldisc reference, which is then leaked.
At this point we know the port will be recycled, so we can kill the ldisc
off completely rather than try to add another ldisc-free path for when
the kref count hits zero.
At this point it is safe to keep the ldisc closed, as the tty_ldisc
waiting methods are only used from the user side, and as the final close
we hold the last such reference. Interrupt/driver-side methods will
always use the non-wait version and get back a NULL.
Found with kmemleak and investigated/identified by Catalin Marinas.
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
On Mon, Nov 17, 2008 at 01:26:13AM -0600, Sonny Rao wrote:
> On Fri, Nov 07, 2008 at 04:28:29PM +1100, Paul Mackerras wrote:
> > Sonny Rao writes:
> >
> > > Fix the BSR driver to allow small BSR devices, which are limited to a
> > > single 4k space, on a 64k page kernel. Previously the driver would
> > > reject the mmap since the size was smaller than PAGESIZE (or because
> > > the size was greater than the size of the device). Now, we check for
> > > this case use remap_4k_pfn(). Also, take out code to set vm_flags,
> > > as the remap_pfn functions will do this for us.
> >
> > Thanks.
> >
> > Do we know that the BSR size will always be 4k if it's not a multiple
> > of 64k? Is it possible that we could get 8k, 16k or 32k or BSRs?
> > If it is possible, what does the user need to be able to do? Do they
> > just want to map 4k, or might then want to map the whole thing?
>
>
> Hi Paul, I took a look at changing the driver to reject a request for
> mapping more than a single 4k page, however the only indication we get
> of the requested size in the mmap function is the vma size, and this
> is always one page at minimum. So, it's not possible to determine if
> the user wants one 4k page or more. As I noted in my first response,
> there is only one case where this is even possible and I don't think
> it is a significant concern.
>
> I did notice that I left out the check to see if the user is trying to
> map more than the device length, so I fixed that. Here's the revised
> patch.
Alright, I've reworked this now so that if we get one of these cases
where there's a bsr that's > 4k and < 64k on a 64k kernel we'll only
advertise that it is a 4k BSR to userspace. I think this is the best
solution since user programs are only supposed to look at sysfs to
determine how much can be mapped, and libbsr does this as well.
Please consider for 2.6.31 as a fix, thanks.
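A hedged sketch of the mmap logic under discussion, assuming the powerpc remap_4k_pfn() helper and a bsr_dev with bsr_addr/bsr_len fields (names approximated from the thread, not verified against the final patch):
static int bsr_mmap(struct file *filp, struct vm_area_struct *vma)
{
    unsigned long size = vma->vm_end - vma->vm_start;
    struct bsr_dev *dev = filp->private_data;
    vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
    /* Small (sub-page on 64k kernels) BSR: map a single 4k HW page. */
    if (dev->bsr_len < PAGE_SIZE && size == PAGE_SIZE)
        return remap_4k_pfn(vma, vma->vm_start,
                            dev->bsr_addr >> 12, vma->vm_page_prot);
    /* Otherwise honour the request, but never beyond the device. */
    if (size > dev->bsr_len)
        return -EINVAL;
    return io_remap_pfn_range(vma, vma->vm_start,
                              dev->bsr_addr >> PAGE_SHIFT,
                              size, vma->vm_page_prot);
}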
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
Add a 4096 byte BSR size which will be used on new machines. Also, remove
the warning when we run into an unknown size, as this can spam the kernel
log excessively.
Signed-off-by: Sonny Rao <sonnyrao@us.ibm.com>
Signed-off-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
|
|
The kernel oopses if this flag is set.
[and neither driver should set it, as they call tty_flip_buffer_push()
from IRQ paths and so have always been buggy]
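For context, the safe pattern is to leave low_latency clear and simply push from the receive/IRQ path (a generic sketch; that low_latency is the flag meant here is inferred from the note above, so treat it as an assumption):
    /* In an IRQ receive path: queue the bytes and push. Do NOT set
       tty->low_latency, or the push would run the ldisc (which may
       sleep) directly from interrupt context. */
    tty_insert_flip_string(tty, buf, count);
    tty_flip_buffer_push(tty);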
Signed-off-by: Chuck Ebbert <cebbert@redhat.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Since commit 3e3b5c087799e536871c8261b05bc28e4783c8da ("tty: use
prepare/finish_wait"), tty_port_block_til_ready() is using
prepare_to_wait()/finish_wait(). Those functions require that the
wait_queue_t be initialised with .func=autoremove_wake_function, via
DEFINE_WAIT().
But the conversion from DECLARE_WAITQUEUE() to DEFINE_WAIT() was not made,
so this code will oops in finish_wait().
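The required pattern is the usual prepare_to_wait()/finish_wait() loop, sketched generically below; condition_met() is a placeholder for the real readiness test, not a function in the driver:
    DEFINE_WAIT(wait);    /* initialises .func = autoremove_wake_function */
    for (;;) {
        prepare_to_wait(&port->open_wait, &wait, TASK_INTERRUPTIBLE);
        if (condition_met(port) || signal_pending(current))
            break;
        schedule();
    }
    finish_wait(&port->open_wait, &wait);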
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Fix race condition when adding transmit data to active DMA buffer ring
that can cause transmit stall.
Update transmit timeout when adding data to active DMA buffer ring.
Base transmit timeout on amount of buffered data instead of using fixed
value.
Signed-off-by: Paul Fulghum <paulkf@microgate.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Add flush_buffer tty callback to flush rx buffers.
Add TCFLSH ioctl processing to flush tx buffers.
Increase default tx buffers from 1 to 3.
Remove unneeded flush_buffer call in open callback.
Remove vendor specific CVS version string.
Signed-off-by: Paul Fulghum <paulkf@microgate.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Don't return from switch/case directly in vt_ioctl. Set ret and break
instead so that we unlock BKL.
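The shape of the fix, sketched on a hypothetical case (SOME_CMD and the perm check are placeholders):
    int ret = 0;
    lock_kernel();
    switch (cmd) {
    case SOME_CMD:
        if (!perm) {
            ret = -EPERM;    /* was: return -EPERM; which leaked the BKL */
            break;
        }
        /* ... */
        break;
    default:
        ret = -ENOIOCTLCMD;
        break;
    }
    unlock_kernel();
    return ret;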
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
Don't return from switch/case, break instead, so that we unlock BKL.
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
A BKL unlock is omitted in r3964_read.
Centralize the return paths to one point with a single unlock.
Signed-off-by: Jiri Slaby <jirislaby@gmail.com>
Signed-off-by: Alan Cox <alan@linux.intel.com>
Cc: stable@kernel.org
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|