<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/kernel, branch v3.4.87</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/kernel?h=v3.4.87</id>
<link rel='self' href='https://git.amat.us/linux/atom/kernel?h=v3.4.87'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2014-04-14T13:44:16Z</updated>
<entry>
<title>workqueue: cond_resched() after processing each work item</title>
<updated>2014-04-14T13:44:16Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2013-08-28T21:33:37Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=00cef7a5e0766f0f4bedc9da1c80fbe992cf68ef'/>
<id>urn:sha1:00cef7a5e0766f0f4bedc9da1c80fbe992cf68ef</id>
<content type='text'>
commit b22ce2785d97423846206cceec4efee0c4afd980 upstream.

If !PREEMPT, a kworker running work items back to back can hog CPU.
This becomes dangerous when a self-requeueing work item which is
waiting for something to happen races against stop_machine.  Such a
self-requeueing work item would requeue itself indefinitely, hogging
the kworker and the CPU it's running on, while stop_machine waits for
that CPU to enter stop_machine and prevents anything else from
happening on all other CPUs.  The two would deadlock.

Jamie Liu reports that this deadlock scenario exists around
scsi_requeue_run_queue() and libata port multiplier support, where one
port may exclude command processing from other ports.  With the right
timing, scsi_requeue_run_queue() can end up requeueing itself while
trying to execute an IO which is asked to be retried while another
device has exclusive access; the retry in turn can't make forward
progress due to stop_machine.

Fix it by invoking cond_resched() after executing each work item.
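
A minimal sketch of the shape of the fix (paraphrasing
process_one_work(); exact context differs between kernel versions):

	static void process_one_work(struct worker *worker,
				     struct work_struct *work)
	{
		...
		worker-&gt;current_func(work);	/* run the work item */

		/*
		 * On !PREEMPT kernels a kworker can otherwise run work
		 * items back to back without ever yielding; let other
		 * tasks (and stop_machine) get a turn on this CPU.
		 */
		cond_resched();
	}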

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Reported-by: Jamie Liu &lt;jamieliu@google.com&gt;
References: http://thread.gmane.org/gmane.linux.kernel/1552567
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Cc: Qiang Huang &lt;h.huangqiang@huawei.com&gt;
Cc: Li Zefan &lt;lizefan@huawei.com&gt;
Cc: Jianguo Wu &lt;wujianguo@huawei.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>printk: Fix rq-&gt;lock vs logbuf_lock unlock lock inversion</title>
<updated>2014-04-14T13:44:16Z</updated>
<author>
<name>Bu, Yitian</name>
<email>ybu@qti.qualcomm.com</email>
</author>
<published>2013-02-18T12:53:37Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=aa34e62c2f0d4a105606971a1eb666f22338993b'/>
<id>urn:sha1:aa34e62c2f0d4a105606971a1eb666f22338993b</id>
<content type='text'>
commit dbda92d16f8655044e082930e4e9d244b87fde77 upstream.

commit 07354eb1a74d1 ("locking printk: Annotate logbuf_lock as raw")
reintroduced a lock inversion problem which was fixed in commit
0b5e1c5255 ("printk: Release console_sem after logbuf_lock"). This
probably happened while fixing up patch rejects.

Restore the ordering and unlock logbuf_lock before releasing
console_sem.
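
The intended ordering at the end of console_unlock() looks roughly
like this (a sketch; the surrounding context varies between versions):

	/* drop logbuf_lock before console_sem: up() may wake a task
	 * and take rq-&gt;lock, which must never nest inside
	 * logbuf_lock */
	raw_spin_unlock(&amp;logbuf_lock);
	up(&amp;console_sem);
	local_irq_restore(flags);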

Signed-off-by: ybu &lt;ybu@qti.qualcomm.com&gt;
Cc: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Link: http://lkml.kernel.org/r/E807E903FE6CBE4D95E420FBFCC273B827413C@nasanexd01h.na.qualcomm.com
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
[bwh: Backported to 3.2: adjust context]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Cc: Qiang Huang &lt;h.huangqiang@huawei.com&gt;
Cc: Li Zefan &lt;lizefan@huawei.com&gt;
Cc: Jianguo Wu &lt;wujianguo@huawei.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>audit: wait_for_auditd() should use TASK_UNINTERRUPTIBLE</title>
<updated>2014-04-14T13:44:15Z</updated>
<author>
<name>Oleg Nesterov</name>
<email>oleg@redhat.com</email>
</author>
<published>2013-06-12T21:04:46Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=0df19efa7549d47aa906b8316665e46366b56061'/>
<id>urn:sha1:0df19efa7549d47aa906b8316665e46366b56061</id>
<content type='text'>
commit f000cfdde5de4fc15dead5ccf524359c07eadf2b upstream.

audit_log_start() does wait_for_auditd() in a loop until
audit_backlog_wait_time passes or audit_skb_queue has room.

If signal_pending() is true, this becomes a busy-wait loop: schedule()
in TASK_INTERRUPTIBLE won't block.
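
A sketch of the fixed wait in wait_for_auditd() (paraphrased; the
backport adjusts context and indentation):

	DECLARE_WAITQUEUE(wait, current);

	/* TASK_INTERRUPTIBLE would busy-loop here while a signal is
	 * pending, since schedule_timeout() would not block */
	set_current_state(TASK_UNINTERRUPTIBLE);
	add_wait_queue(&amp;audit_backlog_wait, &amp;wait);
	...
	schedule_timeout(sleep_time);
	__set_current_state(TASK_RUNNING);
	remove_wait_queue(&amp;audit_backlog_wait, &amp;wait);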

Thanks to Guy for fully investigating and explaining the problem.

(akpm: that'll cause the system to lock up on a non-preemptible
uniprocessor kernel)

(Guy: "Our customer was in fact running a uniprocessor machine, and they
reported a system hang.")

Signed-off-by: Oleg Nesterov &lt;oleg@redhat.com&gt;
Reported-by: Guy Streeter &lt;streeter@redhat.com&gt;
Cc: Eric Paris &lt;eparis@redhat.com&gt;
Cc: Al Viro &lt;viro@zeniv.linux.org.uk&gt;
Signed-off-by: Andrew Morton &lt;akpm@linux-foundation.org&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
[bwh: Backported to 3.2: adjust context, indentation]
Signed-off-by: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Cc: Qiang Huang &lt;h.huangqiang@huawei.com&gt;
Cc: Li Zefan &lt;lizefan@huawei.com&gt;
Cc: Jianguo Wu &lt;wujianguo@huawei.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>tracing: Do not add event files for modules that fail tracepoints</title>
<updated>2014-03-24T04:37:06Z</updated>
<author>
<name>Steven Rostedt (Red Hat)</name>
<email>rostedt@goodmis.org</email>
</author>
<published>2014-02-26T18:37:38Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=a299804140325db7b93173419b0724056b60f34d'/>
<id>urn:sha1:a299804140325db7b93173419b0724056b60f34d</id>
<content type='text'>
commit 45ab2813d40d88fc575e753c38478de242d03f88 upstream.

If a module fails to add its tracepoints due to module tainting, do not
create the module event infrastructure in the debugfs directory, as the
events will not work; worse yet, they will fail silently, making the
user wonder why the events they enable do not display anything.

Having a warning on module load and the events not visible to the users
will make the cause of the problem much clearer.
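
The shape of the check (a sketch using the upstream helper name):

	static void trace_module_add_events(struct module *mod)
	{
		/* tracepoints were refused due to the module's taint;
		 * don't create event files that could never work */
		if (trace_module_has_bad_taint(mod))
			return;
		...
	}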

Link: http://lkml.kernel.org/r/20140227154923.265882695@goodmis.org

Fixes: 6d723736e472 "tracing/events: add support for modules to TRACE_EVENT"
Acked-by: Mathieu Desnoyers &lt;mathieu.desnoyers@efficios.com&gt;
Cc: Rusty Russell &lt;rusty@rustcorp.com.au&gt;
Signed-off-by: Steven Rostedt &lt;rostedt@goodmis.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>cpuset: fix a race condition in __cpuset_node_allowed_softwall()</title>
<updated>2014-03-24T04:37:05Z</updated>
<author>
<name>Li Zefan</name>
<email>lizefan@huawei.com</email>
</author>
<published>2014-02-27T10:19:36Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=c5ad4fdec0ae15d197508185643c68470868121d'/>
<id>urn:sha1:c5ad4fdec0ae15d197508185643c68470868121d</id>
<content type='text'>
commit 99afb0fd5f05aac467ffa85c36778fec4396209b upstream.

It's not safe to access a task's cpuset after releasing task_lock().
Holding callback_mutex won't help.
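
A sketch of the fix in __cpuset_node_allowed_softwall() (paraphrased):

	task_lock(current);
	cs = nearest_hardwall_ancestor(task_cs(current));
	/* read mems_allowed before task_lock() is released */
	allowed = node_isset(node, cs-&gt;mems_allowed);
	task_unlock(current);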

Signed-off-by: Li Zefan &lt;lizefan@huawei.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>genirq: Remove racy waitqueue_active check</title>
<updated>2014-03-24T04:37:05Z</updated>
<author>
<name>Chuansheng Liu</name>
<email>chuansheng.liu@intel.com</email>
</author>
<published>2014-02-24T03:29:50Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=b46741f24d1c0d0d8dcfb3c63439338d2dc0337e'/>
<id>urn:sha1:b46741f24d1c0d0d8dcfb3c63439338d2dc0337e</id>
<content type='text'>
commit c685689fd24d310343ac33942e9a54a974ae9c43 upstream.

We hit one rare case below:

T1 calls disable_irq(), but hangs at synchronize_irq()
forever;
The corresponding irq thread is in the sleeping state;
And all CPUs are in the idle state;

After analysis, we found one possible scenario which
causes T1 to wait there forever:
CPU0                                       CPU1
 synchronize_irq()
  wait_event()
    spin_lock()
                                           atomic_dec_and_test(&amp;threads_active)
      insert the __wait into queue
    spin_unlock()
                                           if(waitqueue_active)
    atomic_read(&amp;threads_active)
                                             wake_up()

Here, after __wait is inserted into the queue on CPU0 and before
CPU1 tests whether the queue is empty, there is no barrier, so the
update may not be visible to CPU1 immediately even though CPU0 has
updated the queue list.
The same applies to CPU0's atomic_read() of threads_active.

So we would need an smp_mb() before waitqueue_active(), but removing
the waitqueue_active() check solves it as well and makes things
simple and clear.
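
With the check gone, wake_threads_waitq() becomes (a sketch):

	static void wake_threads_waitq(struct irq_desc *desc)
	{
		/* wake_up() takes the waitqueue lock itself and is
		 * safe on an empty queue; no racy emptiness test */
		if (atomic_dec_and_test(&amp;desc-&gt;threads_active))
			wake_up(&amp;desc-&gt;wait_for_threads);
	}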

Signed-off-by: Chuansheng Liu &lt;chuansheng.liu@intel.com&gt;
Cc: Xiaoming Wang &lt;xiaoming.wang@intel.com&gt;
Link: http://lkml.kernel.org/r/1393212590-32543-1-git-send-email-chuansheng.liu@intel.com
Signed-off-by: Thomas Gleixner &lt;tglx@linutronix.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>sched: Fix double normalization of vruntime</title>
<updated>2014-03-24T04:37:03Z</updated>
<author>
<name>George McCollister</name>
<email>george.mccollister@gmail.com</email>
</author>
<published>2014-02-18T23:56:51Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=6ba4d1d9112b3f6bf0f634870bb950b5b59fa86f'/>
<id>urn:sha1:6ba4d1d9112b3f6bf0f634870bb950b5b59fa86f</id>
<content type='text'>
commit 791c9e0292671a3bfa95286bb5c08129d8605618 upstream.

dequeue_entity() is called when p-&gt;on_rq is set and it sets
se-&gt;on_rq = 0, which appears to guarantee that the !se-&gt;on_rq
condition is met.
If the task has done set_current_state(TASK_INTERRUPTIBLE) without
calling schedule(), the second condition will be met and vruntime will
be incorrectly adjusted twice.

In certain cases this can result in the task's vruntime never increasing
past the vruntime of other tasks on the CFS' run queue, starving them of
CPU time.

This patch changes switched_from_fair() to use !p-&gt;on_rq instead of
!se-&gt;on_rq.
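
A sketch of the changed condition in switched_from_fair()
(paraphrased):

	/* use p-&gt;on_rq: a task that has set TASK_INTERRUPTIBLE but
	 * not yet scheduled would otherwise get its vruntime
	 * normalized a second time */
	if (!p-&gt;on_rq &amp;&amp; p-&gt;state != TASK_RUNNING) {
		place_entity(cfs_rq, se, 0);
		se-&gt;vruntime -= cfs_rq-&gt;min_vruntime;
	}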

I'm able to cause a task with a priority of 120 to starve all other
tasks with the same priority on an ARM platform running 3.2.51-rt72
PREEMPT RT by writing one character at a time to a serial tty (16550 UART)
in a tight loop. I'm also able to verify making this change corrects the
problem on that platform and kernel version.

Signed-off-by: George McCollister &lt;george.mccollister@gmail.com&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Link: http://lkml.kernel.org/r/1392767811-28916-1-git-send-email-george.mccollister@gmail.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>cgroup: cgroup_subsys-&gt;fork() should be called after the task is added to css_set</title>
<updated>2014-03-11T23:10:03Z</updated>
<author>
<name>Tejun Heo</name>
<email>tj@kernel.org</email>
</author>
<published>2012-10-16T22:03:14Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=30ec268be37bdb5b1614cce40af8083f1a7c27f3'/>
<id>urn:sha1:30ec268be37bdb5b1614cce40af8083f1a7c27f3</id>
<content type='text'>
commit 5edee61edeaaebafe584f8fb7074c1ef4658596b upstream.

cgroup core has a bug which violates a basic rule about event
notifications - when a new entity needs to be added, you add that to
the notification list first and then make the new entity conform to
the current state.  If done in the reverse order, an event happening
in between will be lost.

cgroup_subsys-&gt;fork() is invoked way before the new task is added to
the css_set.  Currently, cgroup_freezer is the only user of -&gt;fork()
and uses it to make new tasks conform to the current state of the
freezer.  If FROZEN state is requested while fork is in progress
between cgroup_fork_callbacks() and cgroup_post_fork(), the child
could escape freezing - the cgroup isn't frozen when -&gt;fork() is
called and the freezer couldn't see the new task on the css_set.

This patch moves cgroup_subsys-&gt;fork() invocation to
cgroup_post_fork() after the new task is added to the css_set.
cgroup_fork_callbacks() is removed.
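
A sketch of the new ordering in cgroup_post_fork() (paraphrased; per
the backport note, only the builtin subsystems are iterated):

	void cgroup_post_fork(struct task_struct *child)
	{
		int i;

		/* link @child on its css_set first ... */
		...
		/* ... then let subsystems see the fully added task */
		for (i = 0; i &lt; CGROUP_BUILTIN_SUBSYS_COUNT; i++) {
			struct cgroup_subsys *ss = subsys[i];

			if (ss-&gt;fork)
				ss-&gt;fork(child);
		}
	}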

Because a task may now be migrated during cgroup_subsys-&gt;fork(),
freezer_fork() is updated so that it adheres to the usual RCU locking
and the rather pointless comment on why locking can be different there
is removed (if it doesn't make anything simpler, why even bother?).

Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Cc: Oleg Nesterov &lt;oleg@redhat.com&gt;
Cc: Rafael J. Wysocki &lt;rjw@sisk.pl&gt;
[hq: Backported to 3.4:
 - Adjust context
 - Iterate over first CGROUP_BUILTIN_SUBSYS_COUNT elements of subsys]
Signed-off-by: Qiang Huang &lt;h.huangqiang@huawei.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>perf: Fix hotplug splat</title>
<updated>2014-03-11T23:10:02Z</updated>
<author>
<name>Peter Zijlstra</name>
<email>peterz@infradead.org</email>
</author>
<published>2014-02-24T11:06:12Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=f80747a43fc2613b9f5e1ded16f50ef28815652e'/>
<id>urn:sha1:f80747a43fc2613b9f5e1ded16f50ef28815652e</id>
<content type='text'>
commit e3703f8cdfcf39c25c4338c3ad8e68891cca3731 upstream.

Drew Richardson reported that he could make the kernel go *boom* when
hotplugging while perf events were active.

It turned out that when you have a group event, the code in
__perf_event_exit_context() fails to remove the group siblings from
the context.

We then proceed with destroying and freeing the event, and when you
re-plug the CPU and try to add another event to that CPU, things go
*boom* because you've still got dead entries there.
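
A sketch of the fix in __perf_event_exit_context() (paraphrased):

	rcu_read_lock();
	/* walk the full event list rather than just the group
	 * leaders, so group siblings leave the context too */
	list_for_each_entry_rcu(event, &amp;ctx-&gt;event_list, event_entry)
		__perf_remove_from_context(event);
	rcu_read_unlock();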

Reported-by: Drew Richardson &lt;drew.richardson@arm.com&gt;
Signed-off-by: Peter Zijlstra &lt;peterz@infradead.org&gt;
Cc: Will Deacon &lt;will.deacon@arm.com&gt;
Link: http://lkml.kernel.org/n/tip-k6v5wundvusvcseqj1si0oz0@git.kernel.org
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>workqueue: ensure @task is valid across kthread_stop()</title>
<updated>2014-03-11T23:10:02Z</updated>
<author>
<name>Lai Jiangshan</name>
<email>laijs@cn.fujitsu.com</email>
</author>
<published>2014-02-15T14:02:28Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=23f0913c2b3816b11f32dcf774758b3f63b81f60'/>
<id>urn:sha1:23f0913c2b3816b11f32dcf774758b3f63b81f60</id>
<content type='text'>
commit 5bdfff96c69a4d5ab9c49e60abf9e070ecd2acbb upstream.

When a kworker should die, the kworker is notified through the
WORKER_DIE flag instead of kthread_should_stop().  This, IIRC, is
primarily to keep the test synchronized inside the worker_pool lock.
WORKER_DIE is
first set while holding pool-&gt;lock, the lock is dropped and
kthread_stop() is called.

Unfortunately, this means that there's a slight chance that the target
kworker may see WORKER_DIE before kthread_stop() is called and exit,
freeing the target task before or during kthread_stop().

Fix it by pinning the target task before setting WORKER_DIE and
putting it after kthread_stop() is done.
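
A sketch of the fix in destroy_worker() (paraphrased):

	/* the kworker may exit and be freed any time after it sees
	 * WORKER_DIE; pin the task until kthread_stop() is done */
	get_task_struct(worker-&gt;task);
	worker-&gt;flags |= WORKER_DIE;

	spin_unlock_irq(&amp;pool-&gt;lock);
	kthread_stop(worker-&gt;task);
	put_task_struct(worker-&gt;task);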

tj: Improved patch description and comment.  Moved pinning above
    WORKER_DIE to better signify what it's protecting.

Signed-off-by: Lai Jiangshan &lt;laijs@cn.fujitsu.com&gt;
Signed-off-by: Tejun Heo &lt;tj@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
