From e303297e6c3a7b847c4731eb14006ca6b435ecca Mon Sep 17 00:00:00 2001
From: Peter Zijlstra
Date: Tue, 24 May 2011 17:12:01 -0700
Subject: mm: extended batches for generic mmu_gather

Instead of using a single batch (the small on-stack, or an allocated
page), try and extend the batch every time it runs out and only flush
once either the extend fails or we're done.

Signed-off-by: Peter Zijlstra
Requested-by: Nick Piggin
Reviewed-by: KAMEZAWA Hiroyuki
Acked-by: Hugh Dickins
Cc: Benjamin Herrenschmidt
Cc: David Miller
Cc: Martin Schwidefsky
Cc: Russell King
Cc: Paul Mundt
Cc: Jeff Dike
Cc: Richard Weinberger
Cc: Tony Luck
Cc: Mel Gorman
Cc: KOSAKI Motohiro
Cc: Nick Piggin
Cc: Namhyung Kim
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
---
 mm/memory.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'mm/memory.c')

diff --git a/mm/memory.c b/mm/memory.c
index a77fd23ee68..17193d74f30 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -994,8 +994,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 	spinlock_t *ptl;
 	int rss[NR_MM_COUNTERS];
 
-	init_rss_vec(rss);
 again:
+	init_rss_vec(rss);
 	pte = pte_offset_map_lock(mm, pmd, addr, &ptl);
 	arch_enter_lazy_mmu_mode();
 	do {
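
What the changelog describes is an extend-or-flush pattern: pages queued for
freeing accumulate in a chain of batches, a further batch is allocated only
when the current one fills up, and everything is flushed either when that
allocation fails or when the unmap completes. The hunk above is the
mm/memory.c side of the change: init_rss_vec() moves below the again: label
so the per-pass RSS deltas are re-zeroed whenever zap_pte_range() retries
after a forced flush. Below is a minimal user-space sketch of the batching
pattern only, not the kernel implementation; struct gather, struct batch,
BATCH_SLOTS, gather_add() and gather_flush() are illustrative names invented
here.

/*
 * Sketch of extend-or-flush batching: keep adding to the active batch,
 * grow the chain when it fills, flush on allocation failure or at the end.
 */
#include <stdio.h>
#include <stdlib.h>

#define BATCH_SLOTS 8			/* slots per batch; the kernel uses a page */

struct batch {
	struct batch *next;		/* batches are chained, newest last */
	unsigned int nr;		/* used slots in this batch */
	void *pages[BATCH_SLOTS];
};

struct gather {
	struct batch local;		/* small "on-stack" first batch */
	struct batch *active;		/* batch currently being filled */
};

static void gather_init(struct gather *g)
{
	g->local.next = NULL;
	g->local.nr = 0;
	g->active = &g->local;
}

/* Release every accumulated page, then fall back to the on-stack batch. */
static void gather_flush(struct gather *g)
{
	struct batch *b, *next;

	for (b = &g->local; b; b = b->next)
		for (unsigned int i = 0; i < b->nr; i++)
			printf("freeing page %p\n", b->pages[i]);

	for (b = g->local.next; b; b = next) {
		next = b->next;
		free(b);
	}
	gather_init(g);
}

/*
 * Queue one page.  Returns 0 when the active batch is full and a new one
 * could not be allocated, telling the caller to flush now -- roughly how
 * __tlb_remove_page() signals the same condition to zap_pte_range().
 */
static int gather_add(struct gather *g, void *page)
{
	struct batch *b = g->active;

	if (b->nr == BATCH_SLOTS) {
		struct batch *n = malloc(sizeof(*n));

		if (!n)
			return 0;	/* cannot extend: force a flush */
		n->next = NULL;
		n->nr = 0;
		b->next = n;
		g->active = n;
		b = n;
	}
	b->pages[b->nr++] = page;
	return 1;
}

int main(void)
{
	static char fake_pages[100];
	struct gather g;

	gather_init(&g);
	for (int i = 0; i < 100; i++) {
		if (!gather_add(&g, &fake_pages[i])) {
			gather_flush(&g);
			/* cannot fail: the on-stack batch is empty again */
			gather_add(&g, &fake_pages[i]);
		}
	}
	gather_flush(&g);		/* final flush once we're done */
	return 0;
}

The retry in main() after a failed gather_add() roughly mirrors the kernel's
goto again path: flush the gathered pages, then re-queue the page that did
not fit and continue where the walk stopped.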