| author | Steven Rostedt (Red Hat) <rostedt@goodmis.org> | 2013-03-13 11:15:19 -0400 |
|---|---|---|
| committer | Greg Kroah-Hartman <gregkh@linuxfoundation.org> | 2013-03-28 12:06:02 -0700 |
| commit | 7bdb127976b88b761bdd0b2a2756b35681655ce1 (patch) | |
| tree | dc35c5e029b8f07c849131d9f94a431156775824 /kernel | |
| parent | cdeff82601556a61c22f6e27dfeefb9af823485a (diff) | |
tracing: Fix free of probe entry by calling call_rcu_sched()
commit 740466bc89ad8bd5afcc8de220f715f62b21e365 upstream.
Because function tracing is very invasive and can even trace
calls to rcu_read_lock(), RCU access in function tracing is done
with preempt_disable_notrace(). This requires updates to wait with
synchronize_sched() rather than synchronize_rcu().
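To illustrate the reader side this refers to, here is a minimal sketch using a hypothetical my_probe structure and hash walk; it is not the actual ftrace code, and details such as the hlist_for_each_entry_rcu() argument list vary across kernel versions:

```c
#include <linux/preempt.h>
#include <linux/rculist.h>

/*
 * Hypothetical probe entry, standing in for the entries in ftrace's
 * function-probe hash; not the real ftrace data structure.
 */
struct my_probe {
	struct hlist_node node;
	unsigned long ip;
	void (*func)(unsigned long ip);
};

/*
 * Reader side: the hash walk runs with preemption disabled instead of
 * under rcu_read_lock(), because rcu_read_lock() itself can be traced.
 * That makes this an RCU-sched read-side critical section, so only
 * synchronize_sched()/call_rcu_sched() on the update side is
 * guaranteed to wait for it.
 */
static void my_probe_hash_walk(struct hlist_head *hhd, unsigned long ip)
{
	struct my_probe *p;

	preempt_disable_notrace();
	hlist_for_each_entry_rcu(p, hhd, node) {
		if (p->ip == ip)
			p->func(ip);
	}
	preempt_enable_notrace();
}
```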
Function probes (traceon, traceoff, etc.) must be freed only after
a synchronize_sched(), once their entries have been removed from the
hash. But call_rcu() was used instead. Fix this by using call_rcu_sched().
Also fix the usage to use hlist_del_rcu() instead of hlist_del().
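The update side of the fix, again as a sketch with the same hypothetical my_probe structure: the entry is unlinked with hlist_del_rcu() so concurrent walkers always see a well-formed list, and the memory is freed only from an RCU-sched callback, which cannot run until every preempt-disabled reader has finished. (In much later kernels the RCU flavors were consolidated and call_rcu_sched() was folded into call_rcu(); the distinction matters for kernels of this era.)

```c
#include <linux/rculist.h>
#include <linux/rcupdate.h>
#include <linux/slab.h>

/*
 * Same hypothetical my_probe as in the sketch above, now carrying the
 * rcu_head needed for deferred freeing.
 */
struct my_probe {
	struct hlist_node node;
	struct rcu_head rcu;
	unsigned long ip;
};

static void my_probe_free_rcu(struct rcu_head *rhp)
{
	struct my_probe *p = container_of(rhp, struct my_probe, rcu);

	kfree(p);
}

static void my_probe_remove(struct my_probe *p)
{
	/*
	 * Unlink with the RCU variant so a concurrent preempt-disabled
	 * walker never sees a half-removed node.
	 */
	hlist_del_rcu(&p->node);

	/*
	 * call_rcu() only waits for rcu_read_lock() readers; the readers
	 * here rely on preempt_disable_notrace(), so the free must wait
	 * for an RCU-sched grace period instead.
	 */
	call_rcu_sched(&p->rcu, my_probe_free_rcu);
}
```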
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Paul McKenney <paulmck@linux.vnet.ibm.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Diffstat (limited to 'kernel')
| -rw-r--r-- | kernel/trace/ftrace.c | 4 |
1 file changed, 2 insertions, 2 deletions
diff --git a/kernel/trace/ftrace.c b/kernel/trace/ftrace.c
index 86fd4170d24..b2ca34aac77 100644
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -2709,8 +2709,8 @@ __unregister_ftrace_function_probe(char *glob, struct ftrace_probe_ops *ops,
 				continue;
 			}
 
-			hlist_del(&entry->node);
-			call_rcu(&entry->rcu, ftrace_free_entry_rcu);
+			hlist_del_rcu(&entry->node);
+			call_rcu_sched(&entry->rcu, ftrace_free_entry_rcu);
 		}
 	}
 	__disable_ftrace_function_probe();