author     Paul E. McKenney <paulmck@linux.vnet.ibm.com>    2012-04-13 03:35:13 +0000
committer  Greg Kroah-Hartman <gregkh@linuxfoundation.org>  2012-04-22 15:38:58 -0700
commit     f5940f71a878f7d41541ac6350c3a72ebd5df288
tree       9454ee22496a7cacef452d72bfde8d122ac49271 /arch
parent     e74711e8dcca9e5f49479c8ef1ea89698717be1e
sparc64: Eliminate obsolete __handle_softirq() function
commit 3d3eeb2ef26112a200785e5fca58ec58dd33bf1e upstream.
The invocation of softirq is now handled by irq_exit(), so there is no
need for sparc64 to invoke it on the trap-return path. In fact, doing so
is a bug because if the trap occurred in the idle loop, this invocation
can result in lockdep-RCU failures. The problem is that RCU ignores idle
CPUs, and the sparc64 trap-return path to the softirq handlers fails to
tell RCU that the CPU must be considered non-idle while those handlers
are executing. This means that RCU is ignoring any RCU read-side critical
sections in those handlers, which in turn means that RCU-protected data
can be yanked out from under those read-side critical sections.
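For illustration, here is a minimal, self-contained C model of the difference between the two paths, assuming the standard behaviour that the irq_enter()/irq_exit() pair brackets softirq handling with RCU's idle-exit/idle-entry bookkeeping; every name in the model is a hypothetical stand-in, not a kernel API:

#include <stdbool.h>
#include <stdio.h>

/* Models "RCU considers this CPU non-idle and is watching it". */
static bool rcu_watching;
/* Models a softirq raised by the trap/interrupt and still pending. */
static bool softirq_pending = true;

/* Models a softirq handler that enters an RCU read-side critical section. */
static void softirq_handler(void)
{
	if (rcu_watching)
		printf("ok: RCU protects this read-side critical section\n");
	else
		printf("BUG: RCU is ignoring this read-side critical section\n");
}

/* Models the generic irq_exit() path: the irq_enter()/irq_exit() pair
 * marks the CPU non-idle for RCU around the softirq handlers. */
static void model_irq_exit_path(void)
{
	rcu_watching = true;		/* models rcu_irq_enter() */
	if (softirq_pending)
		softirq_handler();
	rcu_watching = false;		/* models rcu_irq_exit(): back to idle */
}

/* Models the removed sparc64 trap-return path: the handlers were
 * invoked directly, with no RCU idle-exit bookkeeping at all. */
static void model_old_rtrap_path(void)
{
	if (softirq_pending)
		softirq_handler();	/* RCU still thinks the CPU is idle */
}

int main(void)
{
	/* The trap was taken from the idle loop, so RCU is not watching. */
	rcu_watching = false;

	model_old_rtrap_path();		/* prints the BUG line */
	model_irq_exit_path();		/* prints the ok line */
	return 0;
}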
This problem was located by lockdep-RCU's shiny new ability to detect RCU
read-side critical sections that RCU is ignoring.
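For context, the kind of code those checks flag is an ordinary RCU read-side critical section executing while RCU believes the CPU is idle. A hypothetical softirq-context handler containing such a section might look like the sketch below; only rcu_read_lock(), rcu_dereference(), and rcu_read_unlock() are real kernel primitives, while the struct and variable names are invented for illustration:

#include <linux/rcupdate.h>

/* Hypothetical RCU-protected configuration, for illustration only. */
struct demo_cfg {
	int timeout;
};

static struct demo_cfg __rcu *demo_cfg_ptr;

/* Hypothetical softirq-context helper.  If this ran via the old sparc64
 * trap-return path with the CPU still marked idle, RCU would ignore the
 * read-side section below and lockdep-RCU (CONFIG_PROVE_RCU) would complain. */
static int demo_read_timeout(void)
{
	struct demo_cfg *cfg;
	int timeout = 0;

	rcu_read_lock();
	cfg = rcu_dereference(demo_cfg_ptr);
	if (cfg)
		timeout = cfg->timeout;
	rcu_read_unlock();

	return timeout;
}

Run from irq_exit(), a section like this is safe because the CPU has already been marked non-idle for RCU; run from the removed trap-return path, it is not.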
The fix is straightforward: Make sparc64 stop manually invoking the
softirq handlers.
Reported-by: Meelis Roos <mroos@linux.ee>
Suggested-by: David Miller <davem@davemloft.net>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Tested-by: Meelis Roos <mroos@linux.ee>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'arch')
-rw-r--r--  arch/sparc/kernel/rtrap_64.S | 7 -------
1 file changed, 0 insertions(+), 7 deletions(-)
diff --git a/arch/sparc/kernel/rtrap_64.S b/arch/sparc/kernel/rtrap_64.S
index 77f1b95e080..9171fc238de 100644
--- a/arch/sparc/kernel/rtrap_64.S
+++ b/arch/sparc/kernel/rtrap_64.S
@@ -20,11 +20,6 @@

 		.text
 		.align		32
-__handle_softirq:
-		call		do_softirq
-		 nop
-		ba,a,pt		%xcc, __handle_softirq_continue
-		 nop
 __handle_preemption:
 		call		schedule
 		 wrpr		%g0, RTRAP_PSTATE, %pstate
@@ -89,9 +84,7 @@ rtrap:
 		cmp		%l1, 0

 		/* mm/ultra.S:xcall_report_regs KNOWS about this load. */
-		bne,pn		%icc, __handle_softirq
 		 ldx		[%sp + PTREGS_OFF + PT_V9_TSTATE], %l1
-__handle_softirq_continue:
 rtrap_xcall:
 		sethi		%hi(0xf << 20), %l4
 		and		%l1, %l4, %l4