author		Greg Edwards <gedwards@ddn.com>	2013-11-04 09:08:12 -0700
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>	2013-11-29 11:28:06 -0800
commit		6492d85c63e0e59f8c6c42ad8cc7eee2f0bee9df (patch)
tree		85bc4a716ef5bedc2b32c71d390142023cbd788a
parent		26146207c9fc024ff6752225d271ec47ccd35dfc (diff)
KVM: IOMMU: hva align mapping page size
commit 27ef63c7e97d1e5dddd85051c03f8d44cc887f34 upstream.

When determining the page size we could use to map with the IOMMU, the page size should also be aligned with the hva, not just the gfn. The gfn may not reflect the real alignment within the hugetlbfs file.

Most of the time this works fine. However, if the hugetlbfs file is backed by non-contiguous huge pages, a multi-huge-page memslot starts at an unaligned offset within the hugetlbfs file, and the gfn is aligned with respect to the huge page size, then kvm_host_page_size() will return the huge page size and we will use that to map with the IOMMU.

When we later unpin that same memslot, the IOMMU returns the unmap size as the huge page size, and we happily unpin that many pfns in monotonically increasing order, not realizing we are spanning non-contiguous huge pages, and end up partially unpinning the wrong huge page.

Ensure the IOMMU mapping page size is aligned with the hva corresponding to the gfn, which does reflect the alignment within the hugetlbfs file.

Reviewed-by: Marcelo Tosatti <mtosatti@redhat.com>
Signed-off-by: Greg Edwards <gedwards@ddn.com>
Signed-off-by: Gleb Natapov <gleb@redhat.com>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
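For illustration only, below is a minimal user-space sketch (not the kernel code itself) of the alignment rule this patch enforces: the candidate mapping size is halved until it is aligned to both the gfn and the hva. The helper name and the gfn/hva values are hypothetical, chosen to show a case where the gfn is huge-page aligned but the hva within the hugetlbfs backing is not.

#include <stdio.h>

#define PAGE_SHIFT	12
#define HPAGE_SIZE	(2ULL * 1024 * 1024)	/* hypothetical 2 MiB huge page */

/*
 * Halve the candidate mapping size until it is aligned to both the
 * guest frame number (gfn) and the host virtual address (hva).
 */
static unsigned long long iommu_map_size(unsigned long long gfn,
					 unsigned long long hva,
					 unsigned long long page_size)
{
	/* Make sure the gfn is aligned to the page size we want to map. */
	while ((gfn << PAGE_SHIFT) & (page_size - 1))
		page_size >>= 1;

	/* Make sure the hva is aligned as well; this is the check the patch adds. */
	while (hva & (page_size - 1))
		page_size >>= 1;

	return page_size;
}

int main(void)
{
	unsigned long long gfn = 0x200;			/* 2 MiB-aligned in guest physical space */
	unsigned long long hva = 0x7f0000001000ULL;	/* only 4 KiB-aligned in the hugetlbfs file */

	printf("mapping size: %llu KiB\n",
	       iommu_map_size(gfn, hva, HPAGE_SIZE) / 1024);
	return 0;
}

With these hypothetical values the sketch falls back to a 4 KiB mapping, whereas without the hva check a full 2 MiB would have been mapped and later unpinned across the wrong huge page.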
-rw-r--r--	virt/kvm/iommu.c	4
1 file changed, 4 insertions, 0 deletions
diff --git a/virt/kvm/iommu.c b/virt/kvm/iommu.c
index 72a130bc448..c329c8fc57f 100644
--- a/virt/kvm/iommu.c
+++ b/virt/kvm/iommu.c
@@ -103,6 +103,10 @@ int kvm_iommu_map_pages(struct kvm *kvm, struct kvm_memory_slot *slot)
 	while ((gfn << PAGE_SHIFT) & (page_size - 1))
 		page_size >>= 1;
 
+	/* Make sure hva is aligned to the page size we want to map */
+	while (__gfn_to_hva_memslot(slot, gfn) & (page_size - 1))
+		page_size >>= 1;
+
 	/*
 	 * Pin all pages we are about to map in memory. This is
 	 * important because we unmap and unpin in 4kb steps later.