<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/arch, branch v3.4.43</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/arch?h=v3.4.43</id>
<link rel='self' href='https://git.amat.us/linux/atom/arch?h=v3.4.43'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2013-05-01T16:41:03Z</updated>
<entry>
<title>sparc64: Fix race in TLB batch processing.</title>
<updated>2013-05-01T16:41:03Z</updated>
<author>
<name>David S. Miller</name>
<email>davem@davemloft.net</email>
</author>
<published>2013-04-19T21:26:26Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=bf6f841f7fde2731ea39698064a81492ce777fa6'/>
<id>urn:sha1:bf6f841f7fde2731ea39698064a81492ce777fa6</id>
<content type='text'>
[ Commits f36391d2790d04993f48da6a45810033a2cdf847 and
  f0af97070acbad5d6a361f485828223a4faaa0ee upstream. ]

As reported by Dave Kleikamp, when we emit cross calls to do batched
TLB flush processing, we have a race because we do not synchronize on
the sibling cpus completing the cross call.

So in the meantime the TLB batch can be reset (tb-&gt;tlb_nr set to zero,
etc.), and flushes are either missed or flush the wrong addresses.

Fix this by using generic infrastructure to synchronize on the
completion of the cross call.

This first required getting the flush_tlb_pending() call out from
switch_to() which operates with locks held and interrupts disabled.
The problem is that smp_call_function_many() cannot be invoked with
IRQs disabled and this is explicitly checked for with WARN_ON_ONCE().

We get the batch processing outside of locked IRQ disabled sections by
using some ideas from the powerpc port. Namely, we only batch inside
of arch_{enter,leave}_lazy_mmu_mode() calls.  If we're not in such a
region, we flush TLBs synchronously.

1) Get rid of xcall_flush_tlb_pending and per-cpu type
   implementations.

2) Do TLB batch cross calls instead via:

	smp_call_function_many()
		tlb_pending_func()
			__flush_tlb_pending()

3) Batch only in lazy mmu sequences:

	a) Add 'active' member to struct tlb_batch
	b) Define __HAVE_ARCH_ENTER_LAZY_MMU_MODE
	c) Set 'active' in arch_enter_lazy_mmu_mode()
	d) Run batch and clear 'active' in arch_leave_lazy_mmu_mode()
	e) Check 'active' in tlb_batch_add_one() and do a synchronous
	   flush if it's clear.

4) Add infrastructure for synchronous TLB page flushes.

	a) Implement __flush_tlb_page and per-cpu variants, patch
	   as needed.
	b) Likewise for xcall_flush_tlb_page.
	c) Implement smp_flush_tlb_page() to invoke the cross-call.
	d) Wire up global_flush_tlb_page() to the right routine based
	   upon CONFIG_SMP.

5) It turns out that singleton batches are very common, 2 out of every
   3 batch flushes have only a single entry in them.

   The batch flush waiting is very expensive, both because of the poll
   on sibling cpu completion, as well as because passing the tlb batch
   pointer to the sibling cpus invokes a shared memory dereference.

   Therefore, in flush_tlb_pending(), if there is only one entry in
   the batch perform a completely asynchronous global_flush_tlb_page()
   instead.
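
The shape of points 3 and 5, as a condensed C sketch (names follow the
description above; this illustrates the mechanism, it is not the
literal patch):

	static void tlb_batch_add_one(struct mm_struct *mm, unsigned long vaddr)
	{
		struct tlb_batch *tb = this_cpu_ptr(&amp;tlb_batch);

		if (!tb-&gt;active) {
			/* Not inside arch_{enter,leave}_lazy_mmu_mode():
			 * flush this page synchronously, do not batch. */
			global_flush_tlb_page(mm, vaddr);
			return;
		}

		tb-&gt;vaddrs[tb-&gt;tlb_nr++] = vaddr;
		if (tb-&gt;tlb_nr == TLB_BATCH_NR)
			flush_tlb_pending();
	}

	void flush_tlb_pending(void)
	{
		struct tlb_batch *tb = this_cpu_ptr(&amp;tlb_batch);

		if (tb-&gt;tlb_nr == 1) {
			/* Singleton batch (2 out of 3 cases): fire and
			 * forget, skipping the completion poll and the
			 * shared tlb_batch dereference on siblings. */
			global_flush_tlb_page(tb-&gt;mm, tb-&gt;vaddrs[0]);
		} else if (tb-&gt;tlb_nr) {
			/* Cross call; smp_call_function_many() under the
			 * hood waits for tlb_pending_func() to finish on
			 * the siblings before the batch is reset. */
			smp_flush_tlb_pending(tb-&gt;mm, tb-&gt;tlb_nr, &amp;tb-&gt;vaddrs[0]);
		}
		tb-&gt;tlb_nr = 0;
	}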

Reported-by: Dave Kleikamp &lt;dave.kleikamp@oracle.com&gt;
Signed-off-by: David S. Miller &lt;davem@davemloft.net&gt;
Acked-by: Dave Kleikamp &lt;dave.kleikamp@oracle.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>perf/x86: Fix offcore_rsp valid mask for SNB/IVB</title>
<updated>2013-04-26T04:19:55Z</updated>
<author>
<name>Stephane Eranian</name>
<email>eranian@google.com</email>
</author>
<published>2013-04-16T11:51:43Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=6b48c21afcf0f9f01bb37144a5da3274a3590404'/>
<id>urn:sha1:6b48c21afcf0f9f01bb37144a5da3274a3590404</id>
<content type='text'>
commit f1923820c447e986a9da0fc6bf60c1dccdf0408e upstream.

The valid mask for both offcore_response_0 and
offcore_response_1 was wrong for SNB/SNB-EP and
IVB/IVB-EP. It was possible to write to a
reserved bit and cause a GP fault, crashing
the kernel.

This patch fixes the problem by correctly marking the
reserved bits in the valid mask for all the processors
mentioned above.

A distinction between desktop and server parts is introduced
because bits 24-30 are only available on the server parts.
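
Illustratively, the check boils down to something like this (the macro
name and the common mask are placeholders; only the bits 24-30 split
comes from the commit):

	/* Bits 24-30 of offcore_response_{0,1} exist on server parts only. */
	#define OFFCORE_RSP_SRV_ONLY_BITS	0x7f000000ULL

	/* Placeholder sketch: reject writes that touch reserved bits. */
	static int offcore_rsp_check(u64 val, u64 common_valid, bool server_part)
	{
		u64 valid = common_valid;

		if (server_part)
			valid |= OFFCORE_RSP_SRV_ONLY_BITS;

		return (val &amp; ~valid) ? -EINVAL : 0;
	}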

This version of the patch is just a rebase onto the perf/urgent tree
and should apply to older kernels as well.

Signed-off-by: Stephane Eranian &lt;eranian@google.com&gt;
Cc: peterz@infradead.org
Cc: jolsa@redhat.com
Cc: ak@linux.intel.com
Signed-off-by: Ingo Molnar &lt;mingo@kernel.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>ARM: 7698/1: perf: fix group validation when using enable_on_exec</title>
<updated>2013-04-26T04:19:55Z</updated>
<author>
<name>Will Deacon</name>
<email>will.deacon@arm.com</email>
</author>
<published>2013-04-12T18:04:19Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=bb93ad5d30517e917aec4062c97c7712c4acaa0f'/>
<id>urn:sha1:bb93ad5d30517e917aec4062c97c7712c4acaa0f</id>
<content type='text'>
commit cb2d8b342aa084d1f3ac29966245dec9163677fb upstream.

Events may be created with attr-&gt;disabled == 1 and attr-&gt;enable_on_exec
== 1, which confuses the group validation code because events in the
PERF_EVENT_STATE_OFF state are not considered candidates for scheduling,
and that may lead to failure at group scheduling time.

This patch fixes the validation check for ARM, so that events in the
OFF state are still considered when enable_on_exec is true.
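
In validate_event() terms the change is roughly the following (a
sketch of the upstream fix, not the verbatim hunk):

	static int validate_event(struct pmu_hw_events *hw_events,
				  struct perf_event *event)
	{
		struct arm_pmu *armpmu = to_arm_pmu(event-&gt;pmu);
		struct pmu *leader_pmu = event-&gt;group_leader-&gt;pmu;

		if (event-&gt;pmu != leader_pmu ||
		    event-&gt;state &lt; PERF_EVENT_STATE_OFF)
			return 1;

		/* An OFF event still counts as schedulable when exec
		 * will enable it; only skip it otherwise. */
		if (event-&gt;state == PERF_EVENT_STATE_OFF &amp;&amp;
		    !event-&gt;attr.enable_on_exec)
			return 1;

		return armpmu-&gt;get_event_idx(hw_events, event) &gt;= 0;
	}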

Reported-by: Sudeep KarkadaNagesha &lt;Sudeep.KarkadaNagesha@arm.com&gt;
Cc: Peter Zijlstra &lt;a.p.zijlstra@chello.nl&gt;
Cc: Arnaldo Carvalho de Melo &lt;acme@ghostprotocols.net&gt;
Cc: Jiri Olsa &lt;jolsa@redhat.com&gt;
Signed-off-by: Will Deacon &lt;will.deacon@arm.com&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>ARM: 7696/1: Fix kexec by setting outer_cache.inv_all for Feroceon</title>
<updated>2013-04-26T04:19:55Z</updated>
<author>
<name>Illia Ragozin</name>
<email>illia.ragozin@grapecom.com</email>
</author>
<published>2013-04-10T18:43:34Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=9c275826b7522820c0346250bc373e32dfbec13d'/>
<id>urn:sha1:9c275826b7522820c0346250bc373e32dfbec13d</id>
<content type='text'>
commit cd272d1ea71583170e95dde02c76166c7f9017e6 upstream.

On Feroceon the L2 cache becomes non-coherent with the CPU
when the L1 caches are disabled. Thus the L2 needs to be invalidated
after both L1 caches are disabled.

On kexec, before starting the code that relocates the kernel, the L1
caches are disabled in cpu_proc_fin (cpu_v7_proc_fin for Feroceon),
but the L2 cache is never invalidated afterwards, because inv_all is
not set in cache-feroceon-l2.c.
So kernel relocation and decompression may have (and usually do have)
errors.
Setting this function pointer enables L2 invalidation and fixes the
issue.
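
Concretely, the fix is a one-line hookup at L2 init time (a sketch;
the name of the local invalidate-all helper in cache-feroceon-l2.c may
differ):

	/* in feroceon_l2_init(): give outer_inv_all() a way to reach L2 */
	outer_cache.inv_all = l2_inv_all;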

Signed-off-by: Illia Ragozin &lt;illia.ragozin@grapecom.com&gt;
Acked-by: Jason Cooper &lt;jason@lakedaemon.net&gt;
Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>KVM: Allow cross page reads and writes from cached translations.</title>
<updated>2013-04-26T04:19:55Z</updated>
<author>
<name>Andrew Honig</name>
<email>ahonig@google.com</email>
</author>
<published>2013-03-29T16:35:21Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=2a6b0247eee46f424e032fb7431cc4700ad19ea5'/>
<id>urn:sha1:2a6b0247eee46f424e032fb7431cc4700ad19ea5</id>
<content type='text'>
commit 8f964525a121f2ff2df948dac908dcc65be21b5b upstream.

This patch adds support to the kvm_gfn_to_hva_cache functions for reads
and writes that cross a page boundary.  If the range falls within
the same memslot, then this will be a fast operation.  If the range
is split between two memslots, then the slower kvm_read_guest and
kvm_write_guest are used.
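
The cached read path then has roughly this shape (a sketch of the
logic, not the exact upstream body; the write path mirrors it):

	int kvm_read_guest_cached(struct kvm *kvm, struct gfn_to_hva_cache *ghc,
				  void *data, unsigned long len)
	{
		struct kvm_memslots *slots = kvm_memslots(kvm);

		if (slots-&gt;generation != ghc-&gt;generation)
			kvm_gfn_to_hva_cache_init(kvm, ghc, ghc-&gt;gpa, ghc-&gt;len);

		/* Range split between two memslots: no single cached
		 * translation covers it, so take the slow path. */
		if (unlikely(!ghc-&gt;memslot))
			return kvm_read_guest(kvm, ghc-&gt;gpa, data, len);

		if (kvm_is_error_hva(ghc-&gt;hva))
			return -EFAULT;

		/* Fast path: one memslot, one cached hva. */
		if (__copy_from_user(data, (void __user *)ghc-&gt;hva, len))
			return -EFAULT;

		return 0;
	}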

Tested: Tested against kvm_clock unit tests.

Signed-off-by: Andrew Honig &lt;ahonig@google.com&gt;
Signed-off-by: Gleb Natapov &lt;gleb@redhat.com&gt;
Cc: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>KVM: x86: Convert MSR_KVM_SYSTEM_TIME to use gfn_to_hva_cache functions (CVE-2013-1797)</title>
<updated>2013-04-26T04:19:54Z</updated>
<author>
<name>Andy Honig</name>
<email>ahonig@google.com</email>
</author>
<published>2013-02-20T22:48:10Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=f6dfc740c1d8f6133d2f53c1074770c13d040364'/>
<id>urn:sha1:f6dfc740c1d8f6133d2f53c1074770c13d040364</id>
<content type='text'>
commit 0b79459b482e85cb7426aa7da683a9f2c97aeae1 upstream.

There is a potential use-after-free issue in the handling of
MSR_KVM_SYSTEM_TIME.  If the guest specifies a GPA in movable or removable
memory, such as a frame buffer, then KVM might continue to write to that
address even after it's removed via KVM_SET_USER_MEMORY_REGION.  KVM pins
the page in memory, so it's unlikely to cause an issue, but if the user
space component re-purposes the memory previously used for the guest, then
the guest will be able to corrupt that memory.
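
The fix therefore routes the clock updates through the gfn_to_hva_cache
helpers, which revalidate the memslot on every write. A rough sketch
(treat the field names as illustrative):

	/* MSR_KVM_SYSTEM_TIME write: remember the GPA via the cache
	 * instead of pinning the backing page indefinitely. */
	vcpu-&gt;arch.pv_time_enabled = false;
	if (!kvm_gfn_to_hva_cache_init(vcpu-&gt;kvm, &amp;vcpu-&gt;arch.pv_time,
				       data &amp; ~1ULL,
				       sizeof(struct pvclock_vcpu_time_info)))
		vcpu-&gt;arch.pv_time_enabled = true;

	/* Periodic clock update: fails safely once the region is gone. */
	kvm_write_guest_cached(vcpu-&gt;kvm, &amp;vcpu-&gt;arch.pv_time,
			       &amp;vcpu-&gt;arch.hv_clock,
			       sizeof(vcpu-&gt;arch.hv_clock));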

Tested: Tested against kvmclock unit test

Signed-off-by: Andrew Honig &lt;ahonig@google.com&gt;
Signed-off-by: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
Cc: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;
</content>
</entry>
<entry>
<title>KVM: x86: fix for buffer overflow in handling of MSR_KVM_SYSTEM_TIME (CVE-2013-1796)</title>
<updated>2013-04-26T04:19:54Z</updated>
<author>
<name>Andy Honig</name>
<email>ahonig@google.com</email>
</author>
<published>2013-03-11T16:34:52Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ce7d8662581f032101ca70bbe1a2e62cd93fd1bc'/>
<id>urn:sha1:ce7d8662581f032101ca70bbe1a2e62cd93fd1bc</id>
<content type='text'>
commit c300aa64ddf57d9c5d9c898a64b36877345dd4a9 upstream.

If the guest sets the GPA of the time_page so that the request to update
the time straddles a page, then KVM will write onto an incorrect page.
The write is done by using kmap_atomic to get a pointer to the page for
the time structure and then performing a memcpy to that page starting at
an offset that the guest controls.  Well-behaved guests always provide a
32-byte aligned address; however, a malicious guest could use this to
corrupt host kernel memory.
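
For kernels that still use the time_page/kmap_atomic scheme, the guard
is essentially an alignment check on the guest-supplied offset (a
sketch; sizeof(struct pvclock_vcpu_time_info) is the 32 bytes
mentioned above):

	vcpu-&gt;arch.time_offset = data &amp; ~(PAGE_MASK | 1);

	/* ...but do not allow unaligned accesses: an unaligned offset
	 * near the end of the page would let the memcpy of the time
	 * structure run past the single kmap'd page. */
	if (vcpu-&gt;arch.time_offset &amp;
	    (sizeof(struct pvclock_vcpu_time_info) - 1))
		break;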

Tested: Tested against kvmclock unit test.

Signed-off-by: Andrew Honig &lt;ahonig@google.com&gt;
Signed-off-by: Marcelo Tosatti &lt;mtosatti@redhat.com&gt;
Cc: Ben Hutchings &lt;ben@decadent.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>ARM: Do 15e0d9e37c (ARM: pm: let platforms select cpu_suspend support) properly</title>
<updated>2013-04-26T04:19:54Z</updated>
<author>
<name>Russell King</name>
<email>rmk+kernel@arm.linux.org.uk</email>
</author>
<published>2013-04-08T10:44:57Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=816e2bb1b20b734d2e4286c12f0b18ae9dff35ab'/>
<id>urn:sha1:816e2bb1b20b734d2e4286c12f0b18ae9dff35ab</id>
<content type='text'>
commit b6c7aabd923a17af993c5a5d5d7995f0b27c000a upstream.

Let's do the changes properly and fix the same problem everywhere, not
just for one case.

Signed-off-by: Russell King &lt;rmk+kernel@arm.linux.org.uk&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>x86, mm: Patch out arch_flush_lazy_mmu_mode() when running on bare metal</title>
<updated>2013-04-17T04:27:27Z</updated>
<author>
<name>Boris Ostrovsky</name>
<email>boris.ostrovsky@oracle.com</email>
</author>
<published>2013-03-23T13:36:36Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=7ad0908564a3554f9b13258a7927411af62bf3bc'/>
<id>urn:sha1:7ad0908564a3554f9b13258a7927411af62bf3bc</id>
<content type='text'>
commit 511ba86e1d386f671084b5d0e6f110bb30b8eeb2 upstream.

Invoking arch_flush_lazy_mmu_mode() results in calls to
preempt_enable()/disable(), which may have a performance impact.

Since lazy MMU is not used on bare metal, we can patch away
arch_flush_lazy_mmu_mode() so that it is never called in such an
environment.

[ hpa: the previous patch "Fix vmalloc_fault oops during lazy MMU
  updates" may cause a minor performance regression on
  bare metal.  This patch resolves that performance regression.  It is
  somewhat unclear to me if this is a good -stable candidate. ]
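
Mechanically, the flush becomes a paravirt operation that the native
ops table points at a nop, so the patching machinery removes the call
on bare metal. A sketch (details are illustrative):

	/* arch/x86/include/asm/paravirt.h: a patchable call site */
	static inline void arch_flush_lazy_mmu_mode(void)
	{
		PVOP_VCALL0(pv_mmu_ops.lazy_mode.flush);
	}

	/* native pv_mmu_ops: a nop here means the call, and with it the
	 * preempt_disable()/preempt_enable() pair, never runs on bare
	 * metal; Xen installs a real flush instead. */
	.lazy_mode = {
		.enter = paravirt_nop,
		.leave = paravirt_nop,
		.flush = paravirt_nop,
	},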

Signed-off-by: Boris Ostrovsky &lt;boris.ostrovsky@oracle.com&gt;
Link: http://lkml.kernel.org/r/1364045796-10720-2-git-send-email-konrad.wilk@oracle.com
Tested-by: Josh Boyer &lt;jwboyer@redhat.com&gt;
Tested-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Acked-by: Borislav Petkov &lt;bp@suse.de&gt;
Signed-off-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Signed-off-by: H. Peter Anvin &lt;hpa@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
<entry>
<title>x86, mm, paravirt: Fix vmalloc_fault oops during lazy MMU updates</title>
<updated>2013-04-17T04:27:27Z</updated>
<author>
<name>Samu Kallio</name>
<email>samu.kallio@aberdeencloud.com</email>
</author>
<published>2013-03-23T13:36:35Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=e082a177477ef0221076a201236a20c8f51c0090'/>
<id>urn:sha1:e082a177477ef0221076a201236a20c8f51c0090</id>
<content type='text'>
commit 1160c2779b826c6f5c08e5cc542de58fd1f667d5 upstream.

In paravirtualized x86_64 kernels, vmalloc_fault may cause an oops
when lazy MMU updates are enabled, because the effects of set_pgd are
being deferred.

One instance of this problem is during process mm cleanup with memory
cgroups enabled. The chain of events is as follows:

- zap_pte_range enables lazy MMU updates
- zap_pte_range eventually calls mem_cgroup_charge_statistics,
  which accesses the vmalloc'd mem_cgroup per-cpu stat area
- vmalloc_fault is triggered which tries to sync the corresponding
  PGD entry with set_pgd, but the update is deferred
- vmalloc_fault oopses due to a mismatch in the PUD entries

The oops usually looks like this:

------------[ cut here ]------------
kernel BUG at arch/x86/mm/fault.c:396!
invalid opcode: 0000 [#1] SMP
.. snip ..
CPU 1
Pid: 10866, comm: httpd Not tainted 3.6.10-4.fc18.x86_64 #1
RIP: e030:[&lt;ffffffff816271bf&gt;]  [&lt;ffffffff816271bf&gt;] vmalloc_fault+0x11f/0x208
.. snip ..
Call Trace:
 [&lt;ffffffff81627759&gt;] do_page_fault+0x399/0x4b0
 [&lt;ffffffff81004f4c&gt;] ? xen_mc_extend_args+0xec/0x110
 [&lt;ffffffff81624065&gt;] page_fault+0x25/0x30
 [&lt;ffffffff81184d03&gt;] ? mem_cgroup_charge_statistics.isra.13+0x13/0x50
 [&lt;ffffffff81186f78&gt;] __mem_cgroup_uncharge_common+0xd8/0x350
 [&lt;ffffffff8118aac7&gt;] mem_cgroup_uncharge_page+0x57/0x60
 [&lt;ffffffff8115fbc0&gt;] page_remove_rmap+0xe0/0x150
 [&lt;ffffffff8115311a&gt;] ? vm_normal_page+0x1a/0x80
 [&lt;ffffffff81153e61&gt;] unmap_single_vma+0x531/0x870
 [&lt;ffffffff81154962&gt;] unmap_vmas+0x52/0xa0
 [&lt;ffffffff81007442&gt;] ? pte_mfn_to_pfn+0x72/0x100
 [&lt;ffffffff8115c8f8&gt;] exit_mmap+0x98/0x170
 [&lt;ffffffff810050d9&gt;] ? __raw_callee_save_xen_pmd_val+0x11/0x1e
 [&lt;ffffffff81059ce3&gt;] mmput+0x83/0xf0
 [&lt;ffffffff810624c4&gt;] exit_mm+0x104/0x130
 [&lt;ffffffff8106264a&gt;] do_exit+0x15a/0x8c0
 [&lt;ffffffff810630ff&gt;] do_group_exit+0x3f/0xa0
 [&lt;ffffffff81063177&gt;] sys_exit_group+0x17/0x20
 [&lt;ffffffff8162bae9&gt;] system_call_fastpath+0x16/0x1b

Calling arch_flush_lazy_mmu_mode immediately after set_pgd makes the
changes visible to the consistency checks.
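
In the x86_64 vmalloc_fault() sync path that comes down to (a sketch
matching the post-patch flow):

	pgd = pgd_offset(current-&gt;active_mm, address);
	pgd_ref = pgd_offset_k(address);
	if (pgd_none(*pgd_ref))
		return -1;

	if (pgd_none(*pgd)) {
		set_pgd(pgd, *pgd_ref);
		/* Push the deferred (lazy MMU) update out immediately,
		 * so the PUD consistency check below sees the new PGD. */
		arch_flush_lazy_mmu_mode();
	} else {
		BUG_ON(pgd_page_vaddr(*pgd) != pgd_page_vaddr(*pgd_ref));
	}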

RedHat-Bugzilla: https://bugzilla.redhat.com/show_bug.cgi?id=914737
Tested-by: Josh Boyer &lt;jwboyer@redhat.com&gt;
Reported-and-Tested-by: Krishna Raman &lt;kraman@redhat.com&gt;
Signed-off-by: Samu Kallio &lt;samu.kallio@aberdeencloud.com&gt;
Link: http://lkml.kernel.org/r/1364045796-10720-1-git-send-email-konrad.wilk@oracle.com
Tested-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Signed-off-by: Konrad Rzeszutek Wilk &lt;konrad.wilk@oracle.com&gt;
Signed-off-by: H. Peter Anvin &lt;hpa@linux.intel.com&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@linuxfoundation.org&gt;

</content>
</entry>
</feed>
