From df9d6985be2a7e7683c46e4c6ea608fc69f02b45 Mon Sep 17 00:00:00 2001
From: Christoph Lameter
Date: Mon, 31 Oct 2011 17:09:35 -0700
Subject: mm: do not drain pagevecs for mlockall(MCL_FUTURE)

MCL_FUTURE does not move pages between LRU lists, and draining the
per-cpu LRU pagevecs is an expensive operation.  Avoid doing it
unnecessarily.

Signed-off-by: Christoph Lameter
Cc: David Rientjes
Reviewed-by: Minchan Kim
Acked-by: KOSAKI Motohiro
Cc: Mel Gorman
Acked-by: Johannes Weiner
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/mlock.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

(limited to 'mm/mlock.c')

diff --git a/mm/mlock.c b/mm/mlock.c
index 048260c4e02..7debb4fdf79 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -549,7 +549,8 @@ SYSCALL_DEFINE1(mlockall, int, flags)
 	if (!can_do_mlock())
 		goto out;
 
-	lru_add_drain_all();	/* flush pagevec */
+	if (flags & MCL_CURRENT)
+		lru_add_drain_all();	/* flush pagevec */
 
 	down_write(&current->mm->mmap_sem);
 
--
cgit v1.2.3-18-g5258

From 3d470fc385defa60d9af610f05db8e7f8b4f2f5e Mon Sep 17 00:00:00 2001
From: Hugh Dickins
Date: Mon, 31 Oct 2011 17:09:43 -0700
Subject: mm: munlock use mapcount to avoid terrible overhead

A process spent 30 minutes exiting, just munlocking the pages of a large
anonymous area that had been alternately mprotected into page-sized vmas:
for every single page there's an anon_vma walk through all the other
little vmas to find the right one.

A general fix to that would be a lot more complicated (use prio_tree on
anon_vma?), but there's one very simple thing we can do to speed up the
common case: if a page to be munlocked is mapped only once, then it is our
vma that it is mapped into, and there's no need whatever to walk through
all the others.

Okay, there is a very remote race in munlock_vma_pages_range(): if, between
its follow_page() and lock_page(), another process were to munlock the same
page, page reclaim were to remove it from our vma, and another process were
to mlock it again, then we would find it with page_mapcount 1, yet it is
still mlocked in another process.  But never mind, that's much less likely
than the down_read_trylock() failure which munlocking already tolerates (in
try_to_unmap_one()): in due course page reclaim will discover and move the
page to unevictable instead.

[akpm@linux-foundation.org: add comment]
Signed-off-by: Hugh Dickins
Cc: Michel Lespinasse
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/mlock.c | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

(limited to 'mm/mlock.c')

diff --git a/mm/mlock.c b/mm/mlock.c
index 7debb4fdf79..bd34b3a1085 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -110,7 +110,15 @@ void munlock_vma_page(struct page *page)
 	if (TestClearPageMlocked(page)) {
 		dec_zone_page_state(page, NR_MLOCK);
 		if (!isolate_lru_page(page)) {
-			int ret = try_to_munlock(page);
+			int ret = SWAP_AGAIN;
+
+			/*
+			 * Optimization: if the page was mapped just once,
+			 * that's our mapping and we don't need to check all the
+			 * other vmas.
+			 */
+			if (page_mapcount(page) > 1)
+				ret = try_to_munlock(page);
 			/*
 			 * did try_to_unlock() succeed or punt?
 			 */
--
cgit v1.2.3-18-g5258
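
The first patch above relies on the difference between the two mlockall()
flags: MCL_CURRENT has to make sure already-faulted pages still sitting in
the per-cpu LRU pagevecs reach the LRU before they can be culled to the
unevictable list, while MCL_FUTURE only marks the mm so that later mappings
are mlocked as they are created, so no drain is needed for it.  A minimal
userspace sketch of the two cases (illustration only, not part of the
patches; error handling trimmed):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	/*
	 * Future mappings only: nothing is faulted in yet, so the kernel
	 * (after the patch) skips the per-cpu pagevec drain entirely.
	 */
	if (mlockall(MCL_FUTURE) != 0)
		perror("mlockall(MCL_FUTURE)");	/* needs CAP_IPC_LOCK or RLIMIT_MEMLOCK headroom */

	/*
	 * Current mappings as well: existing pages must be moved to the
	 * unevictable list, so the drain still happens for this case.
	 */
	if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
		perror("mlockall(MCL_CURRENT | MCL_FUTURE)");

	char *buf = malloc(1 << 20);
	if (buf)
		memset(buf, 1, 1 << 20);	/* pages faulted now are locked via MCL_FUTURE */

	munlockall();
	free(buf);
	return 0;
}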
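
The workload behind the second patch can be approximated from its
changelog: one large mlocked anonymous area alternately mprotected into
page-sized vmas, so that munlocking (via munlock() or at process exit) used
to do an anon_vma walk over all the sibling vmas for every page.  A
hypothetical reproducer along those lines (an illustration of the described
scenario, not taken from the patch; the page count is kept small to stay
under the default vm.max_map_count):

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/mman.h>

int main(void)
{
	const size_t page = (size_t)sysconf(_SC_PAGESIZE);
	const size_t npages = 16384;		/* ~16k vmas, well under vm.max_map_count */
	const size_t len = npages * page;
	size_t i;

	char *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (area == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	if (mlock(area, len) != 0)		/* needs CAP_IPC_LOCK or RLIMIT_MEMLOCK headroom */
		perror("mlock");
	memset(area, 0, len);			/* fault in every page of the locked area */

	/* Alternate the protection so the single vma splits into page-sized vmas. */
	for (i = 0; i < npages; i += 2)
		if (mprotect(area + i * page, page, PROT_READ) != 0) {
			perror("mprotect");
			break;
		}

	/*
	 * Before the page_mapcount() check, munlocking here (or at exit)
	 * walked the anon_vma list of all the little vmas for every page.
	 */
	munlock(area, len);
	munmap(area, len);
	return 0;
}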