author    | Theodore Ts'o <tytso@mit.edu>  | 2012-07-02 07:52:16 -0400
committer | Willy Tarreau <w@1wt.eu>       | 2012-10-07 23:41:15 +0200
commit    | 8c3711e7d2a86b6ca4fd8344c18209606d4a8a21 (patch)
tree      | 20f11edc89617bf01b6ac618142241ca8897bf5f /kernel
parent    | cf3062b3d5bb6625f570abcc030ab24792aeee35 (diff)
random: make 'add_interrupt_randomness()' do something sane
commit 775f4b297b780601e61787b766f306ed3e1d23eb upstream.
We've been moving away from add_interrupt_randomness() for various
reasons: it's too expensive to do on every interrupt, and flooding the
CPU with interrupts could theoretically cause bogus floods of entropy
from a somewhat externally controllable source.
This solves both problems by limiting the actual randomness addition
to just once a second or after 64 interrupts, whichever comes first.
During that time, the interrupt cycle data is buffered up in a per-cpu
pool. Also, we make sure the nonblocking pool used by urandom is
initialized before we start feeding the normal input pool. This
ensures that /dev/urandom returns unpredictable data as soon as
possible.
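
To make the buffering policy concrete, here is a minimal, self-contained C sketch of the idea described above; it is not the drivers/char/random.c code, just a user-space model. Samples are mixed cheaply into a small per-CPU-style buffer on every interrupt, and the buffer is only flushed into a real entropy pool after 64 samples or one second, whichever comes first, preferring the nonblocking (urandom) pool until it is initialized. All identifiers here (fast_pool, flush_to_pool, add_interrupt_sample) are illustrative, not the kernel's.

/*
 * Illustrative model only -- NOT the kernel implementation.  It mimics
 * the policy from the commit message: buffer cheaply per "interrupt",
 * flush to a pool once a second or every 64 samples, whichever comes
 * first, and prefer the nonblocking pool until it is initialized.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

struct fast_pool {                /* hypothetical per-CPU buffer */
	uint32_t pool[4];         /* cheaply mixed interrupt samples */
	time_t   last_flush;      /* when we last fed a real pool */
	unsigned count;           /* samples buffered since that flush */
};

static int nonblocking_pool_initialized;  /* stand-in for the real flag */

static void flush_to_pool(const struct fast_pool *fp)
{
	/* Feed urandom's nonblocking pool first so it becomes
	 * unpredictable as early as possible, then the input pool. */
	const char *target = nonblocking_pool_initialized ?
			     "input pool" : "nonblocking pool";
	printf("flushing %u buffered samples into the %s\n", fp->count, target);
}

/* Called on every simulated interrupt: buffer cheaply, flush rarely. */
static void add_interrupt_sample(struct fast_pool *fp, uint32_t cycles)
{
	fp->pool[fp->count % 4] ^= cycles;   /* cheap per-interrupt mixing */
	fp->count++;

	if (fp->count < 64 && time(NULL) - fp->last_flush < 1)
		return;                      /* keep buffering */

	flush_to_pool(fp);                   /* once/sec or every 64 IRQs */
	fp->last_flush = time(NULL);
	fp->count = 0;
}

int main(void)
{
	struct fast_pool fp;

	memset(&fp, 0, sizeof(fp));
	fp.last_flush = time(NULL);

	for (uint32_t i = 0; i < 200; i++)   /* simulate a burst of IRQs */
		add_interrupt_sample(&fp, i * 2654435761u);
	return 0;
}

Buffering this way keeps the per-interrupt cost to a few cheap mixing operations, while the expensive pool update happens at most once per second per CPU under normal interrupt load.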
(Based on an original patch by Linus, but significantly modified by
tytso.)
Tested-by: Eric Wustrow <ewust@umich.edu>
Reported-by: Eric Wustrow <ewust@umich.edu>
Reported-by: Nadia Heninger <nadiah@cs.ucsd.edu>
Reported-by: Zakir Durumeric <zakir@umich.edu>
Reported-by: J. Alex Halderman <jhalderm@umich.edu>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: "Theodore Ts'o" <tytso@mit.edu>
[PG: minor adjustment required since .34 doesn't have f9e4989eb8
which renames "status" to "random" in kernel/irq/handle.c ]
Signed-off-by: Paul Gortmaker <paul.gortmaker@windriver.com>
Signed-off-by: Willy Tarreau <w@1wt.eu>
Diffstat (limited to 'kernel')
-rw-r--r-- | kernel/irq/handle.c | 7
1 file changed, 3 insertions(+), 4 deletions(-)
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index 17c71bb565c..27fd0a631d6 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -370,7 +370,7 @@ static void warn_no_thread(unsigned int irq, struct irqaction *action)
 irqreturn_t handle_IRQ_event(unsigned int irq, struct irqaction *action)
 {
 	irqreturn_t ret, retval = IRQ_NONE;
-	unsigned int status = 0;
+	unsigned int flags = 0;
 
 	if (!(action->flags & IRQF_DISABLED))
 		local_irq_enable_in_hardirq();
@@ -413,7 +413,7 @@ irqreturn_t handle_IRQ_event(unsigned int irq, struct irqaction *action)
 
 			/* Fall through to add to randomness */
 		case IRQ_HANDLED:
-			status |= action->flags;
+			flags |= action->flags;
 			break;
 
 		default:
@@ -424,8 +424,7 @@ irqreturn_t handle_IRQ_event(unsigned int irq, struct irqaction *action)
 		action = action->next;
 	} while (action);
 
-	if (status & IRQF_SAMPLE_RANDOM)
-		add_interrupt_randomness(irq);
+	add_interrupt_randomness(irq, flags);
 	local_irq_disable();
 
 	return retval;
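
The hunks above show only the caller side of the backport, since the diffstat is limited to 'kernel'. The upstream commit also changes the add_interrupt_randomness() declaration and its body in drivers/char/random.c so that the accumulated handler flags can be passed along; the new prototype is approximately the following (see include/linux/random.h in 775f4b297b78 for the authoritative form):

/* Approximate prototype after the upstream change; not part of this diff. */
extern void add_interrupt_randomness(int irq, int irq_flags);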