author    | Wanpeng Li <liwanp@linux.vnet.ibm.com> | 2013-03-22 15:04:40 -0700
committer | Ben Hutchings <ben@decadent.org.uk> | 2013-03-27 02:41:23 +0000
commit    | e58c0a52f9510dee5dc715cb76954baf36f5b6e7 (patch)
tree      | f4b5dcea113e35a9fa8100ee6d6dcfd7ad4fed2e
parent    | 4c239ba63bccc9d0f2829815a31f7972e925ab7f (diff)
mm/hugetlb: fix total hugetlbfs pages count when using memory overcommit accounting
commit d00285884c0892bb1310df96bce6056e9ce9b9d9 upstream.
hugetlb_total_pages is used for overcommit calculations but the current
implementation considers only the default hugetlb page size (which is
either the first defined hugepage size or the one specified by
the default_hugepagesz kernel boot parameter).
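For illustration only (this is not kernel code; the numbers mirror the testcase below), a minimal user-space sketch of the undercount: with a default 2 MiB hstate holding no pages and a 1 GiB hstate holding one page, counting only the default hstate reports zero huge page memory.

#include <stdio.h>

/* Toy stand-in for the kernel's struct hstate; fields simplified. */
struct hstate { unsigned long nr_huge_pages; unsigned long size; };

int main(void)
{
	const unsigned long page_size = 4096;	/* 4 KiB base pages */
	struct hstate hstates[2] = {
		{ 0, 2UL << 20 },	/* default hstate: 2 MiB, 0 pages */
		{ 1, 1UL << 30 },	/* 1 GiB hstate, 1 page (hugepagesz=1G hugepages=1) */
	};

	/* Old behaviour: only the default hstate is counted. */
	unsigned long old_total = hstates[0].nr_huge_pages * (hstates[0].size / page_size);

	/* Fixed behaviour: every registered hstate contributes. */
	unsigned long new_total = 0;
	for (int i = 0; i < 2; i++)
		new_total += hstates[i].nr_huge_pages * (hstates[i].size / page_size);

	/* Prints: old=0 new=262144 (in PAGE_SIZE units, i.e. 1 GiB was missed before) */
	printf("old=%lu new=%lu\n", old_total, new_total);
	return 0;
}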
If the system is configured for more than one hugepage size, which has been
possible since commit a137e1cc6d6e ("hugetlbfs: per mount huge page
sizes"), then the overcommit estimate made by __vm_enough_memory()
(and shown by meminfo_proc_show()) is inaccurate: it gives the
impression that more memory is available/allowed than there really is.
This can lead to an unexpected ENOMEM/EFAULT, or a SIGSEGV, when the
memory is actually accounted.
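For context, the commit limit these paths rely on has roughly the following shape in kernels of this era (a sketch based on __vm_enough_memory()/meminfo_proc_show(); the function and parameter names here are illustrative, not the kernel's exact variables):

/*
 * Sketch of how CommitLimit (in kB) is derived: huge pages are excluded
 * from the overcommittable RAM up front, so an undercount in
 * hugetlb_total_pages() inflates the limit.
 */
unsigned long commit_limit_kb(unsigned long totalram_pages,
                              unsigned long hugetlb_pages,
                              unsigned long total_swap_pages,
                              unsigned long overcommit_ratio,
                              unsigned long page_size_kb)
{
	unsigned long allowed = (totalram_pages - hugetlb_pages)
	                        * overcommit_ratio / 100;
	allowed += total_swap_pages;
	return allowed * page_size_kb;	/* convert pages to kB */
}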
Testcase:
boot: hugepagesz=1G hugepages=1
the default overcommit ratio is 50
before patch:
egrep 'CommitLimit' /proc/meminfo
CommitLimit: 55434168 kB
after patch:
egrep 'CommitLimit' /proc/meminfo
CommitLimit: 54909880 kB
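A quick check of those numbers: the only change is that the single 1 GiB huge page (1048576 kB) is now excluded from the overcommittable memory, and at the default overcommit ratio of 50 that removes 1048576 kB * 50 / 100 = 524288 kB from the limit, which matches 55434168 kB - 54909880 kB = 524288 kB.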
[akpm@linux-foundation.org: coding-style tweak]
Signed-off-by: Wanpeng Li <liwanp@linux.vnet.ibm.com>
Acked-by: Michal Hocko <mhocko@suse.cz>
Cc: "Aneesh Kumar K.V" <aneesh.kumar@linux.vnet.ibm.com>
Cc: Hillf Danton <dhillf@gmail.com>
Cc: KAMEZAWA Hiroyuki <kamezawa.hiroyu@jp.fujitsu.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
-rw-r--r-- | mm/hugetlb.c | 8
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d6c0fdfd802..4c7d42af22e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2092,8 +2092,12 @@ int hugetlb_report_node_meminfo(int nid, char *buf)
 /* Return the number pages of memory we physically have, in PAGE_SIZE units. */
 unsigned long hugetlb_total_pages(void)
 {
-	struct hstate *h = &default_hstate;
-	return h->nr_huge_pages * pages_per_huge_page(h);
+	struct hstate *h;
+	unsigned long nr_total_pages = 0;
+
+	for_each_hstate(h)
+		nr_total_pages += h->nr_huge_pages * pages_per_huge_page(h);
+	return nr_total_pages;
 }
 
 static int hugetlb_acct_memory(struct hstate *h, long delta)
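For reference, for_each_hstate() in the patched function walks the global hstates[] array, i.e. every huge page size registered at boot (for example both 2 MiB and 1 GiB on x86-64), so each size's nr_huge_pages now contributes to the total in PAGE_SIZE units.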