commit 9ba5c80b1aea8648a3efe5f22dc1f7cacdfbeeb8 upstream.
When multiple ovq operations are being performed (lots of open/close
operations on virtio_console fds), the __send_control_msg() function can
get confused without locking.
A simple recipe to cause badness is:
* create a QEMU VM with two virtio-serial ports
* in the guest, do
while true;do echo abc >/dev/vport0p1;done
while true;do echo edf >/dev/vport0p2;done
In one run, this caused a panic in __send_control_msg(). In another, I
got
virtio_console virtio0: control-o:id 0 is not a head!
This also results in repeated messages similar to these on the host:
qemu-kvm: virtio-serial-bus: Unexpected port id 478762112 for device virtio-serial-bus.0
qemu-kvm: virtio-serial-bus: Unexpected port id 478762368 for device virtio-serial-bus.0
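A minimal sketch of the shape of the fix, assuming a new spinlock_t
c_ovq_lock field in struct ports_device (the exact upstream diff may
differ):
spin_lock(&portdev->c_ovq_lock);
if (virtqueue_add_buf(vq, sg, 1, 0, &cpkt, GFP_ATOMIC) == 0) {
	virtqueue_kick(vq);
	/* Wait for the host ack before letting the next sender in. */
	while (!virtqueue_get_buf(vq, &len))
		cpu_relax();
}
spin_unlock(&portdev->c_ovq_lock);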
Reported-by: FuXiangChun <xfu@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Reviewed-by: Asias He <asias@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
[wyj: Backported to 3.4: adjust context]
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 165b1b8bbc17c9469b053bab78b11b7cbce6d161 upstream.
The cvq_lock was taken for the c_ivq. Rename the lock to make that
obvious.
We'll also add a lock around the c_ovq in the next commit, so there's no
ambiguity.
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Reviewed-by: Asias He <asias@redhat.com>
Reviewed-by: Wanlong Gao <gaowanlong@cn.fujitsu.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
[bwh: Backported to 3.2:
- Adjust context
- Drop change to virtcons_restore()]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
[wyj: Backported to 3.4:
- pick change to virtcons_restore() from upstream patch]
Signed-off-by: Yijing Wang <wangyijing@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 10b3a32d292c21ea5b3ad5ca5975e88bb20b8d68 upstream.
Commit 902c098a3663 ("random: use lockless techniques in the interrupt
path") turned IRQ path from being spinlock protected into lockless
cmpxchg-retry update.
That commit removed r->lock serialization between crediting entropy bits
from IRQ context and accounting when extracting entropy on userspace
read path, but didn't turn the r->entropy_count reads/updates in
account() to use cmpxchg as well.
It has been observed that under certain circumstances this leads to
read() on /dev/urandom returning 0 (EOF), as r->entropy_count gets
corrupted and becomes negative, which in turn results in 0 being
propagated all the way from account() to the actual read() call.
Convert the accounting code to be the proper lockless counterpart of
what has been partially done by 902c098a3663.
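A hedged sketch of the cmpxchg-retry pattern used in account() (close
in spirit to the fix; the exact bounds handling may differ):
retry:
	entropy_count = orig = ACCESS_ONCE(r->entropy_count);
	entropy_count -= nbytes * 8;
	if (entropy_count < 0)
		entropy_count = 0;
	if (cmpxchg(&r->entropy_count, orig, entropy_count) != orig)
		goto retry;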
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
Cc: Theodore Ts'o <tytso@mit.edu>
Cc: Greg KH <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Cc: Qiang Huang <h.huangqiang@huawei.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit eb6d78ec213e6938559b801421d64714dafcf4b2 upstream.
The OBF timer in KCS was not reset in one situation when error recovery
was started, resulting in an immediate timeout.
Reported-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 48e8ac2979920ffa39117e2d725afa3a749bfe8d upstream.
With recent changes it is possible for the timer handler to detect an
idle interface and not start the timer, but for the thread to start an
operation at the same time. The thread will not start the timer in that
instance, resulting in the timer not running.
Instead, move all timer operations under the lock and start the timer in
the thread if it detects a non-idle state and the timer is not already running.
Moving under locks allows the last timeout to be set in both the thread
and the timer. 'Timer is not running' means that the timer is not
pending and smi_timeout() is not running. So we need a flag to detect
this correctly.
Also fix a few other timeout bugs: setting the last timeout when the
interrupt has to be disabled and the timer started, and setting the last
timeout in check_start_timer_thread possibly racing with the timer.
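A rough sketch of the idea (field and constant names assumed from the
driver, not the literal patch):
spin_lock_irqsave(&smi_info->si_lock, flags);
if (!smi_info->timer_running && smi_result != SI_SM_IDLE) {
	/* Record "timer running" under the lock so the timer handler
	 * and the thread agree on who restarts it. */
	smi_info->timer_running = true;
	smi_info->last_timeout_jiffies = jiffies;
	mod_timer(&smi_info->si_timer, jiffies + SI_TIMEOUT_JIFFIES);
}
spin_unlock_irqrestore(&smi_info->si_lock, flags);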
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Bodo Stroesser <bstroesser@ts.fujitsu.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit a94cdd1f4d30f12904ab528152731fb13a812a16 upstream.
In read_all_bytes, we do
unsigned char i;
...
bt->read_data[0] = BMC2HOST;
bt->read_count = bt->read_data[0];
...
for (i = 1; i <= bt->read_count; i++)
bt->read_data[i] = BMC2HOST;
If bt->read_data[0] == bt->read_count == 255, we loop infinitely in the
'for' loop. Make 'i' an 'int' instead of 'char' to get rid of the
overflow and finish the loop after 255 iterations every time.
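With a wider index the comparison can no longer wrap, so the copy stops
after at most 255 iterations:
int i;	/* wide enough that i <= 255 eventually becomes false */
...
for (i = 1; i <= bt->read_count; i++)
	bt->read_data[i] = BMC2HOST;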
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Reported-and-debugged-by: Rui Hui Dian <rhdian@novell.com>
Cc: Tomas Cech <tcech@suse.cz>
Cc: Corey Minyard <minyard@acm.org>
Cc: <openipmi-developer@lists.sourceforge.net>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 5bbb2ae3d6f896f8d2082d1eceb6131c2420b7cf upstream.
bind_get() checks the device number it is called with. It uses
MAX_RAW_MINORS for the upper bound. But MAX_RAW_MINORS is set at compile
time while the actual number of raw devices can be set at runtime. This
means the test can either be too strict or too lenient. And if the test
ends up being too lenient bind_get() might try to access memory beyond
what was allocated for "raw_devices".
So check against the runtime value (max_raw_minors) in this function.
Signed-off-by: Paul Bolle <pebolle@tiscali.nl>
Acked-by: Jan Kara <jack@suse.cz>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 9aa5b0181bdf335f0b731d8502e128a862884bcd upstream.
Addresses https://bugzilla.kernel.org/show_bug.cgi?id=60772
Signed-off-by: Alan Cox <alan@linux.intel.com>
Reported-by: Leho Kraav <leho@kraav.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 47d06e532e95b71c0db3839ebdef3fe8812fca2c upstream.
Some platforms (e.g., ARM) initialize their clocks as late_initcalls
for some unknown reason. So make sure random_int_secret_init() is run
after all of the late_initcalls have run.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 96f97a83910cdb9d89d127c5ee523f8fc040a804 upstream.
If a port gets unplugged while a user is blocked on read(), -ENODEV is
returned. However, subsequent read()s returned 0, indicating there was no
host-side connection (but not indicating the device went away).
This also happened when a port was unplugged and the user didn't have
any blocking operation pending. If the user didn't monitor the SIGIO
signal, they had no chance to find out that the port went away.
Fix by returning -ENODEV on all read()s after the port gets unplugged.
write() already behaves this way.
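A hedged sketch of the check in the read path, assuming guest_connected
is cleared when the port is unplugged:
/* Port got hot-unplugged while we were waiting or between reads. */
if (!port->guest_connected)
	return -ENODEV;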
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 92d3453815fbe74d539c86b60dab39ecdf01bb99 upstream.
SIGIO should be sent when a port gets unplugged. It should only be sent
to processes that have the port opened, and have asked for SIGIO to be
delivered. We were clearing out guest_connected before calling
send_sigio_to_port(), resulting in the SIGIO not getting sent to
processes.
Fix by setting guest_connected to false after invoking the sigio
function.
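In other words, the unplug path notifies first and tears down second;
a minimal sketch:
/* Let watchers still see the port as open while the signal is sent. */
send_sigio_to_port(port);
port->guest_connected = false;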
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit ea3768b4386a8d1790f4cc9a35de4f55b92d6442 upstream.
We used to keep the port's char device structs and the /sys entries
around till the last reference to the port was dropped. This is
actually unnecessary, and resulted in buggy behaviour:
1. Open port in guest
2. Hot-unplug port
3. Hot-plug a port with the same 'name' property as the unplugged one
This resulted in hot-plug being unsuccessful, as a port with the same
name already existed (even though it was unplugged).
This behaviour resulted in a warning message like this one:
-------------------8<---------------------------------------
WARNING: at fs/sysfs/dir.c:512 sysfs_add_one+0xc9/0x130() (Not tainted)
Hardware name: KVM
sysfs: cannot create duplicate filename
'/devices/pci0000:00/0000:00:04.0/virtio0/virtio-ports/vport0p1'
Call Trace:
[<ffffffff8106b607>] ? warn_slowpath_common+0x87/0xc0
[<ffffffff8106b6f6>] ? warn_slowpath_fmt+0x46/0x50
[<ffffffff811f2319>] ? sysfs_add_one+0xc9/0x130
[<ffffffff811f23e8>] ? create_dir+0x68/0xb0
[<ffffffff811f2469>] ? sysfs_create_dir+0x39/0x50
[<ffffffff81273129>] ? kobject_add_internal+0xb9/0x260
[<ffffffff812733d8>] ? kobject_add_varg+0x38/0x60
[<ffffffff812734b4>] ? kobject_add+0x44/0x70
[<ffffffff81349de4>] ? get_device_parent+0xf4/0x1d0
[<ffffffff8134b389>] ? device_add+0xc9/0x650
-------------------8<---------------------------------------
Instead of relying on guest applications to release all references to
the ports, we should go ahead and unregister the port from all the core
layers. Any open/read calls on the port will then just return errors,
and an unplug/plug operation on the host will succeed as expected.
This also caused buggy behaviour in case of device removal (not just
a port): when the device was removed (which means all ports on that
device are removed automatically as well), the ports with active
users would clean up only when the last references were dropped -- and
it would be too late then to be referencing char device pointers,
resulting in oopses:
-------------------8<---------------------------------------
PID: 6162 TASK: ffff8801147ad500 CPU: 0 COMMAND: "cat"
#0 [ffff88011b9d5a90] machine_kexec at ffffffff8103232b
#1 [ffff88011b9d5af0] crash_kexec at ffffffff810b9322
#2 [ffff88011b9d5bc0] oops_end at ffffffff814f4a50
#3 [ffff88011b9d5bf0] die at ffffffff8100f26b
#4 [ffff88011b9d5c20] do_general_protection at ffffffff814f45e2
#5 [ffff88011b9d5c50] general_protection at ffffffff814f3db5
[exception RIP: strlen+2]
RIP: ffffffff81272ae2 RSP: ffff88011b9d5d00 RFLAGS: 00010246
RAX: 0000000000000000 RBX: ffff880118901c18 RCX: 0000000000000000
RDX: ffff88011799982c RSI: 00000000000000d0 RDI: 3a303030302f3030
RBP: ffff88011b9d5d38 R8: 0000000000000006 R9: ffffffffa0134500
R10: 0000000000001000 R11: 0000000000001000 R12: ffff880117a1cc10
R13: 00000000000000d0 R14: 0000000000000017 R15: ffffffff81aff700
ORIG_RAX: ffffffffffffffff CS: 0010 SS: 0018
#6 [ffff88011b9d5d00] kobject_get_path at ffffffff8126dc5d
#7 [ffff88011b9d5d40] kobject_uevent_env at ffffffff8126e551
#8 [ffff88011b9d5dd0] kobject_uevent at ffffffff8126e9eb
#9 [ffff88011b9d5de0] device_del at ffffffff813440c7
-------------------8<---------------------------------------
So clean up when we have all the context, and all that's left to do when
the references to the port have dropped is to free up the port struct
itself.
Reported-by: chayang <chayang@redhat.com>
Reported-by: YOGANANTH SUBRAMANIAN <anantyog@in.ibm.com>
Reported-by: FuXiangChun <xfu@redhat.com>
Reported-by: Qunfang Zhang <qzhang@redhat.com>
Reported-by: Sibiao Luo <sluo@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 671bdea2b9f210566610603ecbb6584c8a201c8c upstream.
Between open() being called and processed, the port can be unplugged.
Check if this happened, and bail out.
A simple test script to reproduce this is:
while true; do for i in $(seq 1 100); do echo $i > /dev/vport0p3; done; done;
This opens and closes the port a lot of times; unplugging the port while
this is happening triggers the bug.
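The check is along these lines (hedged sketch of the open() path; the
exact error code may differ):
port = find_port_by_devt(cdev->dev);
if (!port) {
	/* Port was unplugged before open() got to run. */
	return -ENXIO;
}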
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 057b82be3ca3d066478e43b162fc082930a746c9 upstream.
There's a window between find_port_by_devt() returning a port and us
taking a kref on the port, where the port could get unplugged. Fix it
by taking the reference in find_port_by_devt() itself.
Problem reported and analyzed by Mateusz Guzik.
Reported-by: Mateusz Guzik <mguzik@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 6368087e851e697679af059b4247aca33a69cef3 upstream.
When a 32 bit version of ipmitool is used on a 64 bit kernel, the
ipmi_devintf code fails to correctly acquire ipmi_mutex. This results in
incomplete data being retrieved in some cases, or other possible failures.
Add a wrapper around compat_ipmi_ioctl() to take ipmi_mutex to fix this.
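The wrapper is essentially this (names approximate the driver):
static long unlocked_compat_ipmi_ioctl(struct file *filep, unsigned int cmd,
					unsigned long arg)
{
	int ret;
	mutex_lock(&ipmi_mutex);
	ret = compat_ipmi_ioctl(filep, cmd, arg);
	mutex_unlock(&ipmi_mutex);
	return ret;
}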
Signed-off-by: Benjamin LaHaise <bcrl@kvack.org>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit a5f2b3d6a738e7d4180012fe7b541172f8c8dcea upstream.
When calling memcpy, read_data and write_data need 2 additional bytes.
write_data:
for checking: "if (size > IPMI_MAX_MSG_LENGTH)"
for operating: "memcpy(bt->write_data + 3, data + 1, size - 1)"
read_data:
for checking: "if (msg_len < 3 || msg_len > IPMI_MAX_MSG_LENGTH)"
for operating: "memcpy(data + 2, bt->read_data + 4, msg_len - 2)"
Signed-off-by: Chen Gang <gang.chen@asianux.com>
Signed-off-by: Corey Minyard <cminyard@mvista.com>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 2323036dfec8ce3ce6e1c86a49a31b039f3300d1 upstream.
This is my example conversion of a few existing mmap users. The HPET
case is simple, widely available, and easy to test (Clemens Ladisch sent
a trivial test-program for it).
Test-program-by: Clemens Ladisch <clemens@ladisch.de>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit e84e7a56a3aa2963db506299e29a5f3f09377f9b upstream.
The code currently only supports one virtio-rng device at a time.
Invoking guests with multiple devices causes the guest to blow up.
Check if we've already registered and initialised the driver. Also
clean up in case of registration errors or hot-unplug so that a new
device can be used.
Reported-by: Peter Krempa <pkrempa@redhat.com>
Reported-by: <yunzheng@redhat.com>
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit f7f154f1246ccc5a0a7e9ce50932627d60a0c878 upstream.
virtio_rng feeds the randomness buffer handed by the core directly
into the scatterlist, since commit bb347d98079a547e80bd4722dee1de61e4dca0e8.
However, if CONFIG_HW_RANDOM=m, the static buffer isn't a linear address
(at least on most archs). We could fix this in virtio_rng, but it's actually
far easier to just do it in the core as virtio_rng would have to allocate
a buffer every time (it doesn't know how much the core will want to read).
Reported-by: Aurelien Jarno <aurelien@aurel32.net>
Tested-by: Aurelien Jarno <aurelien@aurel32.net>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit aded024a12b32fc1ed9a80639681daae2d07ec25 upstream.
Don't access an uninitialized work queue when removing the device.
The work queue is initialized only if the device is multi-queue,
so don't call cancel_work unless this is a multi-queue device (see the
sketch after the panic trace below).
This fixes the following panic:
Kernel panic - not syncing: BUG!
Call Trace:
62031b28: [<6026085d>] panic+0x16b/0x2d3
62031b30: [<6004ef5e>] flush_work+0x0/0x1d7
62031b60: [<602606f2>] panic+0x0/0x2d3
62031b68: [<600333b0>] memcpy+0x0/0x140
62031b80: [<6002d58a>] unblock_signals+0x0/0x84
62031ba0: [<602609c5>] printk+0x0/0xa0
62031bd8: [<60264e51>] __mutex_unlock_slowpath+0x13d/0x148
62031c10: [<6004ef5e>] flush_work+0x0/0x1d7
62031c18: [<60050234>] try_to_grab_pending+0x0/0x17e
62031c38: [<6004e984>] get_work_gcwq+0x71/0x8f
62031c48: [<60050539>] __cancel_work_timer+0x5b/0x115
62031c78: [<628acc85>] unplug_port+0x0/0x191 [virtio_console]
62031c98: [<6005061c>] cancel_work_sync+0x12/0x14
62031ca8: [<628ace96>] virtcons_remove+0x80/0x15c [virtio_console]
62031ce8: [<628191de>] virtio_dev_remove+0x1e/0x7e [virtio]
62031d08: [<601cf242>] __device_release_driver+0x75/0xe4
62031d28: [<601cf2dd>] device_release_driver+0x2c/0x40
62031d48: [<601ce0dd>] driver_unbind+0x7d/0xc6
62031d88: [<601cd5d9>] drv_attr_store+0x27/0x29
62031d98: [<60115f61>] sysfs_write_file+0x100/0x14d
62031df8: [<600b737d>] vfs_write+0xcb/0x184
62031e08: [<600b58b8>] filp_close+0x88/0x94
62031e38: [<600b7686>] sys_write+0x59/0x88
62031e88: [<6001ced1>] handle_syscall+0x5d/0x80
62031ea8: [<60030a74>] userspace+0x405/0x531
62031f08: [<600d32cc>] sys_dup+0x0/0x5e
62031f28: [<601b11d6>] strcpy+0x0/0x18
62031f38: [<600be46c>] do_execve+0x10/0x12
62031f48: [<600184c7>] run_init_process+0x43/0x45
62031fd8: [<60019a91>] new_thread_handler+0xba/0xbc
Signed-off-by: Sjur Brændeland <sjur.brandeland@stericsson.com>
Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit abce9ac292e13da367bbd22c1f7669f988d931ac upstream.
tpm_write calls tpm_transmit without checking the return value and
assigns the return value unconditionally to chip->pending_data, even if
it's an error value.
This causes three bugs.
So if we write to /dev/tpm0 with a tpm_param_size bigger than
TPM_BUFSIZE=0x1000 (e.g. 0x100a)
and a bufsize also bigger than TPM_BUFSIZE (e.g. 0x100a)
tpm_transmit returns -E2BIG, which is assigned to chip->pending_data as
-7, but tpm_write returns that TPM_BUFSIZE bytes have been successfully
written to the TPM, although this is not true (bug #1).
As we did write more than TPM_BUFSIZE bytes but tpm_write reports
that only TPM_BUFSIZE bytes have been written, the vfs tries to write
the remaining bytes (in this case 10 bytes) to the tpm device driver via
tpm_write which then blocks at
/* cannot perform a write until the read has cleared
either via tpm_read or a user_read_timer timeout */
while (atomic_read(&chip->data_pending) != 0)
msleep(TPM_TIMEOUT);
for 60 seconds, since data_pending is -7 and nobody is able to
read it (since tpm_read luckily checks if data_pending is greater than
0) (bug #2).
After that the remaining bytes are written to the TPM which are
interpreted by the tpm as a normal command. (bug #3)
So if the last bytes of the command stream happen to be, e.g., a
tpm_force_clear, this gets accidentally sent to the TPM.
This patch fixes all three bugs by propagating the error code of
tpm_transmit and returning -E2BIG if the input buffer is too big,
since the response from the tpm for a truncated value is bogus anyway.
Moreover it returns -EBUSY to userspace if there is a response ready to be
read.
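A condensed, hedged sketch of the resulting tpm_write() flow (locking
and copy_from_user details elided):
/* A response is still waiting to be read: don't overwrite it. */
if (atomic_read(&chip->data_pending) != 0)
	return -EBUSY;
if (in_size > TPM_BUFSIZE)
	return -E2BIG;
...
out_size = tpm_transmit(chip, chip->data_buffer, TPM_BUFSIZE);
if (out_size < 0) {
	mutex_unlock(&chip->buffer_mutex);
	return out_size;	/* propagate the error instead of latching it */
}
atomic_set(&chip->data_pending, out_size);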
Signed-off-by: Peter Huewe <peter.huewe@infineon.com>
Signed-off-by: Kent Yoder <key@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit ee8b593affdf893012e57f4c54a21984d1b0d92e upstream.
If a user provides a buffer larger than a tty->write_buf chunk and
passes '\r' at the end of the buffer, we touch out-of-bounds memory.
Add a check there to prevent this.
Signed-off-by: Jiri Slaby <jslaby@suse.cz>
Cc: Samo Pogacnik <samo_pogacnik@t-2.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit d2e7c96af1e54b507ae2a6a7dd2baf588417a7e5 upstream.
Mix in any architectural randomness in extract_buf() instead of
xfer_secondary_buf(). This allows us to mix in more architectural
randomness, and it also makes xfer_secondary_buf() faster, moving a
tiny bit of additional CPU overhead to the process which is extracting the
randomness.
[ Commit description modified by tytso to remove an extended
advertisement for the RDRAND instruction. ]
Signed-off-by: H. Peter Anvin <hpa@linux.intel.com>
Acked-by: Ingo Molnar <mingo@kernel.org>
Cc: DJ Johnston <dj.johnston@intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit cbc96b7594b5691d61eba2db8b2ea723645be9ca upstream.
Many platforms have per-machine instance data (serial numbers,
asset tags, etc.) squirreled away in areas that are accessed
during early system bringup. Mixing this data into the random
pools has a very high value in providing better random data,
so we should allow (and even encourage) architecture code to
call add_device_randomness() from the setup_arch() paths.
However, this limits our options for internal structure of
the random driver since random_initialize() is not called
until long after setup_arch().
Add a big fat comment to rand_initialize() spelling out
this requirement.
Suggested-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Theodore Ts'o <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit c5857ccf293968348e5eb4ebedc68074de3dcda6 upstream.
With the new interrupt sampling system, we are no longer using the
timer_rand_state structure in the irq descriptor, so we can stop
initializing it now.
[ Merged in fixes from Sedat to find some last missing references to
rand_initialize_irq() ]
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Sedat Dilek <sedat.dilek@gmail.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 00ce1db1a634746040ace24c09a4e3a7949a3145 upstream.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit c2557a303ab6712bb6e09447df828c557c710ac9 upstream.
Create a new function, get_random_bytes_arch() which will use the
architecture-specific hardware random number generator if it is
present. Change get_random_bytes() to not use the HW RNG, even if it
is available.
The reason for this is that the hw random number generator is fast (if
it is present), but it requires that we trust the hardware
manufacturer to have not put in a back door. (For example, an
increasing counter encrypted by an AES key known to the NSA.)
It's unlikely that Intel (for example) was paid off by the US
Government to do this, but it's impossible for them to prove otherwise
--- especially since Bull Mountain is documented to use AES as a
whitener. Hence, the output of an evil, trojan-horse version of
RDRAND is statistically indistinguishable from an RDRAND implemented
to the specifications claimed by Intel. Short of using a tunnelling
electron microscope to reverse engineer an Ivy Bridge chip and
disassembling and analyzing the CPU microcode, there's no way for us
to tell for sure.
Since users of get_random_bytes() in the Linux kernel need to be able
to support hardware systems where the HW RNG is not present, most
time-sensitive users of this interface have already created their own
cryptographic RNG interface which uses get_random_bytes() as a seed.
So it's much better to use the HW RNG to improve the existing random
number generator, by mixing in any entropy returned by the HW RNG into
/dev/random's entropy pool, but to always _use_ /dev/random's entropy
pool.
This way we get almost all of the benefits of the HW RNG without any
potential liabilities. The only benefit we forgo is the
speed/performance enhancement --- and generic kernel code can't
depend on get_random_bytes() having the speed of a HW RNG
anyway.
For those places that really want access to the arch-specific HW RNG,
if it is available, we provide get_random_bytes_arch().
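A hedged sketch of what get_random_bytes_arch() does: fill from the
arch RNG word by word and fall back to the pool for whatever is left:
void get_random_bytes_arch(void *buf, int nbytes)
{
	char *p = buf;
	while (nbytes) {
		unsigned long v;
		int chunk = min(nbytes, (int)sizeof(unsigned long));
		if (!arch_get_random_long(&v))
			break;	/* no (more) HW randomness available */
		memcpy(p, &v, chunk);
		p += chunk;
		nbytes -= chunk;
	}
	if (nbytes)
		extract_entropy(&nonblocking_pool, p, nbytes, 0, 0);
}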
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit e6d4947b12e8ad947add1032dd754803c6004824 upstream.
If the CPU supports a hardware random number generator, use it in
xfer_secondary_pool(), where it will significantly improve things and
where we can afford it.
Also, remove the use of the arch-specific rng in
add_timer_randomness(), since the call is significantly slower than
get_cycles(), and we're much better off using it in
xfer_secondary_pool() anyway.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit a2080a67abe9e314f9e9c2cc3a4a176e8a8f8793 upstream.
Add a new interface, add_device_randomness() for adding data to the
random pool that is likely to differ between two devices (or possibly
even per boot). This would be things like MAC addresses or serial
numbers, or the read-out of the RTC. This does *not* add any actual
entropy to the pool, but it initializes the pool to different values
for devices that might otherwise be identical and have very little
entropy available to them (particularly common in the embedded world).
[ Modified by tytso to mix in a timestamp, since there may be some
variability caused by the time needed to detect/configure the hardware
in question. ]
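The interface is just a (buffer, length) pair; hypothetical call sites
would look like:
/* Mix in a NIC's MAC address when the device is registered. */
add_device_randomness(dev->dev_addr, dev->addr_len);
/* Or a board serial number read from firmware. */
add_device_randomness(serial_str, strlen(serial_str));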
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 902c098a3663de3fa18639efbb71b6080f0bcd3c upstream.
The real-time Linux folks don't like add_interrupt_randomness() taking
a spinlock since it is called in the low-level interrupt routine.
This also allows us to reduce the overhead in the fast path, for the
random driver, which is the interrupt collection path.
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 775f4b297b780601e61787b766f306ed3e1d23eb upstream.
We've been moving away from add_interrupt_randomness() for various
reasons: it's too expensive to do on every interrupt, and flooding the
CPU with interrupts could theoretically cause bogus floods of entropy
from a somewhat externally controllable source.
This solves both problems by limiting the actual randomness addition
to just once a second or after 64 interrupts, whichever comes first.
During that time, the interrupt cycle data is buffered up in a per-cpu
pool. Also, we make sure that the nonblocking pool used by urandom is
initialized before we start feeding the normal input pool. This
assures that /dev/urandom is returning unpredictable data as soon as
possible.
(Based on an original patch by Linus, but significantly modified by
tytso.)
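A hedged sketch of the rate limit in add_interrupt_randomness()
(constants per the description above):
unsigned long now = jiffies;
fast_mix(fast_pool, input, sizeof(input));
/* Only credit the input pool once per second, or every 64 interrupts,
 * whichever comes first; otherwise keep accumulating per-cpu. */
if ((fast_pool->count & 63) && !time_after(now, fast_pool->last + HZ))
	return;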
Tested-by: Eric Wustrow <ewust@umich.edu>
Reported-by: Eric Wustrow <ewust@umich.edu>
Reported-by: Nadia Heninger <nadiah@cs.ucsd.edu>
Reported-by: Zakir Durumeric <zakir@umich.edu>
Reported-by: J. Alex Halderman <jhalderm@umich.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit a119365586b0130dfea06457f584953e0ff6481d upstream.
The following build error occurred during an ia64 build with
swap-over-NFS patches applied.
net/core/sock.c:274:36: error: initializer element is not constant
net/core/sock.c:274:36: error: (near initialization for 'memalloc_socks')
net/core/sock.c:274:36: error: initializer element is not constant
This is identical to a parisc build error. Fengguang Wu, Mel Gorman
and James Bottomley did all the legwork to track the root cause of
the problem. This fix and entire commit log is shamelessly copied
from them with one extra detail to change a dubious runtime use of
ATOMIC_INIT() to atomic_set() in drivers/char/mspec.c
Dave Anglin says:
> Here is the line in sock.i:
>
> struct static_key memalloc_socks = ((struct static_key) { .enabled =
> ((atomic_t) { (0) }) });
The above line contains two compound literals. It also uses a designated
initializer to initialize the field enabled. A compound literal is not a
constant expression.
The location of the above statement isn't fully clear, but if a compound
literal occurs outside the body of a function, the initializer list must
consist of constant expressions.
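The mspec.c detail mentioned above boils down to a change of this shape
(hedged reconstruction, not the verbatim diff):
-	vdata->refcnt = ATOMIC_INIT(1);	/* compound literal, not a constant expression */
+	atomic_set(&vdata->refcnt, 1);	/* plain runtime initialization */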
Signed-off-by: Tony Luck <tony.luck@intel.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 24ebe6670de3d1f0dca11c9eb372134c7ab05503 upstream.
tpm_do_selftest() attempts to read a PCR in order to
decide if one can rely on the TPM being used or not.
The function that's used by __tpm_pcr_read() does not
expect the TPM to be disabled or deactivated, and if so,
reports an error.
It's fine if the TPM returns this error when trying to
use it for the first time after a power cycle, but it's
definitely not if it already returned success for a
previous attempt to read one of its PCRs.
tpm_do_selftest() was modified so that the driver only
reports this return code as an error when it really is one.
Reported-and-tested-by: Paul Bolle <pebolle@tiscali.nl>
Signed-off-by: Rajiv Andrade <srajiv@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit c475c06f4bb689d6ad87d7512e036d6dface3160 upstream.
Brown paper bag: Data valid is LSB of the ISR (status register), and NOT
of ODATA (current random data word)!
With this, rngtest is a lot happier. Before:
rngtest 3
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions. There is NO warr.
rngtest: starting FIPS tests...
rngtest: bits received from input: 20000032
rngtest: FIPS 140-2 successes: 3
rngtest: FIPS 140-2 failures: 997
rngtest: FIPS 140-2(2001-10-10) Monobit: 604
rngtest: FIPS 140-2(2001-10-10) Poker: 996
rngtest: FIPS 140-2(2001-10-10) Runs: 36
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 117
rngtest: input channel speed: (min=622.371; avg=23682.481; max=28224.350)Kibits/s
rngtest: FIPS tests speed: (min=12.361; avg=12.718; max=12.861)Mibits/s
rngtest: Program run time: 2331696 microseconds
After:
rngtest 3
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions. There is NO warr.
rngtest: starting FIPS tests...
rngtest: bits received from input: 20000032
rngtest: FIPS 140-2 successes: 999
rngtest: FIPS 140-2 failures: 1
rngtest: FIPS 140-2(2001-10-10) Monobit: 0
rngtest: FIPS 140-2(2001-10-10) Poker: 0
rngtest: FIPS 140-2(2001-10-10) Runs: 1
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=777.363; avg=43588.270; max=47870.711)Kibits/s
rngtest: FIPS tests speed: (min=11.943; avg=12.716; max=12.844)Mibits/s
rngtest: Program run time: 1955282 microseconds
Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
Reported-by: George Pontis <GPontis@z9.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 121daad8fd1dce63076fa55aaedd5dc3f981b334 upstream.
Data valid gets cleared by reading the ISR (status register) and NOT from
reading ODATA (data register). A new data word can become available between
checking ISR and reading ODATA, causing us to reuse the same data word next
time atmel_trng_read() gets called, if that happens before the following
data word is ready.
With this fixed, rngtest no longer complains of 'Continuous run' errors.
Before:
rngtest -c 1000 < /dev/hwrng
rngtest 3
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions. There is NO warr.
rngtest: starting FIPS tests...
rngtest: bits received from input: 20000032
rngtest: FIPS 140-2 successes: 923
rngtest: FIPS 140-2 failures: 77
rngtest: FIPS 140-2(2001-10-10) Monobit: 0
rngtest: FIPS 140-2(2001-10-10) Poker: 0
rngtest: FIPS 140-2(2001-10-10) Runs: 1
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 76
rngtest: input channel speed: (min=721.402; avg=46003.510; max=49321.338)Kibits/s
rngtest: FIPS tests speed: (min=11.442; avg=12.714; max=12.801)Mibits/s
rngtest: Program run time: 1931860 microseconds
After:
rngtest -c 1000 < /dev/hwrng
rngtest 3
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions. There is NO warr.
rngtest: starting FIPS tests...
rngtest: bits received from input: 20000032
rngtest: FIPS 140-2 successes: 1000
rngtest: FIPS 140-2 failures: 0
rngtest: FIPS 140-2(2001-10-10) Monobit: 0
rngtest: FIPS 140-2(2001-10-10) Poker: 0
rngtest: FIPS 140-2(2001-10-10) Runs: 0
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=777.518; avg=36988.482; max=43115.342)Kibits/s
rngtest: FIPS tests speed: (min=11.951; avg=12.715; max=12.887)Mibits/s
rngtest: Program run time: 2035543 microseconds
Signed-off-by: Peter Korsgaard <jacmet@sunsite.dk>
Reported-by: George Pontis <GPontis@z9.com>
Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com>
Signed-off-by: Herbert Xu <herbert@gondor.apana.org.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
commit 67384fe3fd450536342330f684ea1f7dcaef8130 upstream.
This seems to occur on the Gigabyte H55M-S2V and was discovered while
debugging https://bugs.freedesktop.org/show_bug.cgi?id=50381.
Bugzilla: https://bugs.freedesktop.org/show_bug.cgi?id=50381
Signed-off-by: Eugeni Dodonov <eugeni.dodonov@intel.com>
Signed-off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
If a port was open before going into one of the sleep states, the port
can continue normal operation after restore. However, the host has to
be told that the guest side of the connection is open to restore
pre-suspend state.
This wasn't noticed so far due to a bug in qemu that was fixed recently
(which marked the guest-side connection as always open).
CC: stable@vger.kernel.org # Only for 3.3
Signed-off-by: Amit Shah <amit.shah@redhat.com>
Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty
Pull tty and serial fixes from Greg KH:
"Here are some tty and serial fixes for 3.4-rc2.
Most important here is the pl011 fix, which has been reported by about
100 different people, which means more people use it than I expected
:)
There are also some 8250 driver reverts due to some problems reported
by them. And other minor fixes as well.
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>"
* tag 'tty-3.4-rc2' of git://git.kernel.org/pub/scm/linux/kernel/git/gregkh/tty:
pch_uart: Add Kontron COMe-mTT10 uart clock quirk
pch_uart: Fix MSI setting issue
serial/8250_pci: add a "force background timer" flag and use it for the "kt" serial port
Revert "serial/8250_pci: setup-quirk workaround for the kt serial controller"
Revert "serial/8250_pci: init-quirk msi support for kt serial controller"
tty/serial/omap: console can only be built-in
serial: samsung: fix omission initialize ulcon in reset port fn()
printk(): add KERN_CONT where needed in hpet and vt code
tty/serial: atmel_serial: fix RS485 half-duplex problem
tty: serial: altera_uart: Check for NULL platform_data in probe.
isdn/gigaset: use gig_dbg() for debugging output
omap-serial: Fix the error handling in the omap_serial probe
serial: PL011: move interrupt clearing
|
|
/proc/sys/kernel/random/boot_id can be read concurrently by userspace
processes. If two (or more) user-space processes concurrently read
boot_id when sysctl_bootid is not yet assigned, a race can occur making
boot_id differ between the reads. Because the whole point of the boot id
is to be unique across a kernel execution, fix this by protecting this
operation with a spinlock.
Given that this operation is not frequently used, hitting the spinlock
on each call should not be an issue.
Signed-off-by: Mathieu Desnoyers <mathieu.desnoyers@efficios.com>
Cc: "Theodore Ts'o" <tytso@mit.edu>
Cc: Matt Mackall <mpm@selenic.com>
Signed-off-by: Eric Dumazet <eric.dumazet@gmail.com>
Cc: Greg Kroah-Hartman <greg@kroah.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
A prototype for kmsg records instead of a byte-stream buffer revealed
a couple of missing printk(KERN_CONT ...) uses. Subsequent calls produce
one record per printk() call, while all should have ended up in a single
record.
Instead of:
ACPI: (supports S0 S5)
ACPI: PCI Interrupt Link [LNKA] (IRQs 5 *10 11)
hpet0: at MMIO 0xfed00000, IRQs 2 , 8 , 0
It prints:
ACPI: (supports S0
S5
)
ACPI: PCI Interrupt Link [LNKA] (IRQs
5
*10
11
)
hpet0: at MMIO 0xfed00000, IRQs
2
, 8
, 0
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Len Brown <lenb@kernel.org>
Signed-off-by: Kay Sievers <kay@vrfy.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile
Pull arch/tile bug fixes from Chris Metcalf:
"This includes Paul Gortmaker's change to fix the <asm/system.h>
disintegration issues on tile, a fix to unbreak the tilepro ethernet
driver, and a backlog of bugfix-only changes from internal Tilera
development over the last few months.
They have all been to LKML and on linux-next for the last few days.
The EDAC change to MAINTAINERS is an oddity but discussion on the
linux-edac list suggested I ask you to pull that change through my
tree since they don't have a tree to pull edac changes from at the
moment."
* 'stable' of git://git.kernel.org/pub/scm/linux/kernel/git/cmetcalf/linux-tile: (39 commits)
drivers/net/ethernet/tile: fix netdev_alloc_skb() bombing
MAINTAINERS: update EDAC information
tilepro ethernet driver: fix a few minor issues
tile-srom.c driver: minor code cleanup
edac: say "TILEGx" not "TILEPro" for the tilegx edac driver
arch/tile: avoid accidentally unmasking NMI-type interrupt accidentally
arch/tile: remove bogus performance optimization
arch/tile: return SIGBUS for addresses that are unaligned AND invalid
arch/tile: fix finv_buffer_remote() for tilegx
arch/tile: use atomic exchange in arch_write_unlock()
arch/tile: stop mentioning the "kvm" subdirectory
arch/tile: export the page_home() function.
arch/tile: fix pointer cast in cacheflush.c
arch/tile: fix single-stepping over swint1 instructions on tilegx
arch/tile: implement panic_smp_self_stop()
arch/tile: add "nop" after "nap" to help GX idle power draw
arch/tile: use proper memparse() for "maxmem" options
arch/tile: fix up locking in pgtable.c slightly
arch/tile: don't leak kernel memory when we unload modules
arch/tile: fix bug in delay_backoff()
...
|
|
git://git.kernel.org/pub/scm/linux/kernel/git/jikos/apm
Pull an APM fix from Jiri Kosina:
"One deadlock/race fix from Niel that got introduced when we were
moving away from freezer_*_count() to wait_event_freezable()."
* 'for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/jikos/apm:
APM: fix deadlock in APM_IOC_SUSPEND ioctl
|
|
Merge batch of fixes from Andrew Morton:
"The simple_open() cleanup was held back while I wanted for laggards to
merge things.
I still need to send a few checkpoint/restore patches. I've been
wobbly about merging them because I'm wobbly about the overall
prospects for success of the project. But after speaking with Pavel
at the LSF conference, it sounds like they're further toward
completion than I feared - apparently davem is at the "has stopped
complaining" stage regarding the net changes. So I need to go back
and re-review those patches and their (lengthy) discussion."
* emailed from Andrew Morton <akpm@linux-foundation.org>: (16 patches)
memcg swap: use mem_cgroup_uncharge_swap fix
backlight: add driver for DA9052/53 PMIC v1
C6X: use set_current_blocked() and block_sigmask()
MAINTAINERS: add entry for sparse checker
MAINTAINERS: fix REMOTEPROC F: typo
alpha: use set_current_blocked() and block_sigmask()
simple_open: automatically convert to simple_open()
scripts/coccinelle/api/simple_open.cocci: semantic patch for simple_open()
libfs: add simple_open()
hugetlbfs: remove unregister_filesystem() when initializing module
drivers/rtc/rtc-88pm860x.c: fix rtc irq enable callback
fs/xattr.c:setxattr(): improve handling of allocation failures
fs/xattr.c:listxattr(): fall back to vmalloc() if kmalloc() failed
fs/xattr.c: suppress page allocation failure warnings from sys_listxattr()
sysrq: use SEND_SIG_FORCED instead of force_sig()
proc: fix mount -t proc -o AAA
|
|
Many users of debugfs copy the implementation of default_open() when
they want to support a custom read/write function op. This leads to a
proliferation of the default_open() implementation across the entire
tree.
Now that the common implementation has been consolidated into libfs we
can replace all the users of this function with simple_open().
This replacement was done with the following semantic patch:
<smpl>
@ open @
identifier open_f != simple_open;
identifier i, f;
@@
-int open_f(struct inode *i, struct file *f)
-{
(
-if (i->i_private)
-f->private_data = i->i_private;
|
-f->private_data = i->i_private;
)
-return 0;
-}
@ has_open depends on open @
identifier fops;
identifier open.open_f;
@@
struct file_operations fops = {
...
-.open = open_f,
+.open = simple_open,
...
};
</smpl>
[akpm@linux-foundation.org: checkpatch fixes]
Signed-off-by: Stephen Boyd <sboyd@codeaurora.org>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Al Viro <viro@zeniv.linux.org.uk>
Cc: Julia Lawall <Julia.Lawall@lip6.fr>
Acked-by: Ingo Molnar <mingo@elte.hu>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
|
|
I found the Xorg server on my ARM device stuck in the 'msleep()' loop
in apm_ioctl.
I suspect it had attempted suspend immediately after resuming and lost
a race.
During that msleep(10), a new suspend cycle must have started and
changed ->suspend_state to SUSPEND_PENDING, so it was never seen to
be SUSPEND_DONE and the loop could never exit. It would have moved on
to SUSPEND_ACKTO but never been able to reach SUSPEND_DONE.
So change the loop to only run while SUSPEND_ACKED rather than until
SUSPEND_DONE. This is much safer.
Signed-off-by: NeilBrown <neilb@suse.de>
Signed-off-by: Jiri Kosina <jkosina@suse.cz>
|
|
Signed-off-by: Chris Metcalf <cmetcalf@tilera.com>
|
|
Totally unexpected that this regressed. Luckily it sounds like we just
need to disable dmar on the igfx, not on the entire system. At least
that's what a few days of testing between Tony Vroon and me indicates.
Reported-by: Tony Vroon <tony@linx.net>
Cc: Tony Vroon <tony@linx.net>
Bugzilla: https://bugzilla.kernel.org/show_bug.cgi?id=43024
Acked-by: Chris Wilson <chris@chris-wilson.co.uk>
Signed-Off-by: Daniel Vetter <daniel.vetter@ffwll.ch>
|