author | Hong zhi guo <honkiko@gmail.com> | 2012-07-31 16:41:35 -0700 |
---|---|---|
committer | Linus Torvalds <torvalds@linux-foundation.org> | 2012-07-31 18:42:39 -0700 |
commit | 92ca922f0a19145f2dcc99d84fe656fa55b52c2e (patch) | |
tree | 3d7e616ce15d79d44c1c21f16958978a211fc738 | |
parent | c2cddf991974a00aa7b40a21e829bc034b8199b6 (diff) | |
vmalloc: walk vmap_areas by sorted list instead of rb_next()
There is a walk that repeatedly calls rb_next() to find a suitable hole. It
can simply be replaced by a walk over the sorted vmap_area_list, which is
both simpler and more efficient.
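To make the idea concrete, here is a minimal, self-contained userspace sketch of the same hole-finding walk: when occupied areas are kept in a list sorted by start address, the search just steps to the next list element rather than calling rb_next(). The names here (struct area, find_hole) are illustrative stand-ins, not the kernel's own structures.

```c
/*
 * Illustrative userspace sketch, not kernel code: occupied areas are kept
 * in a singly linked list sorted by start address, and the walk advances
 * to the next list element to find the first hole large enough.
 */
#include <stdio.h>

struct area {
	unsigned long start, end;   /* occupied range [start, end) */
	struct area *next;          /* list kept sorted by start address */
};

/* Return the lowest address >= hint where a hole of 'size' bytes fits. */
static unsigned long find_hole(struct area *head, unsigned long hint,
			       unsigned long size)
{
	unsigned long addr = hint;
	struct area *a;

	for (a = head; a; a = a->next) {
		if (a->end <= addr)
			continue;            /* area lies entirely below addr */
		if (addr + size <= a->start)
			return addr;         /* hole before this area is big enough */
		addr = a->end;               /* otherwise skip past this area */
	}
	return addr;                         /* hole after the last area */
}

int main(void)
{
	struct area a2 = { 0x3000, 0x5000, NULL };
	struct area a1 = { 0x1000, 0x2000, &a2 };

	/* 0x1800 bytes do not fit between the two areas, so the hole is at 0x5000. */
	printf("hole at 0x%lx\n", find_hole(&a1, 0x1000, 0x1800));
	return 0;
}
```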
The list and the tree are only ever mutated together, within
__insert_vmap_area and __free_vmap_area, under the protection of
vmap_area_lock. The patched code also runs under vmap_area_lock, so the list
walk is safe and consistent with the tree walk.
Tested on SMP by repeating batches of vmalloc and vfree with random sizes and
rounds for hours.
Signed-off-by: Hong Zhiguo <honkiko@gmail.com>
Cc: Nick Piggin <npiggin@kernel.dk>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
-rw-r--r-- | mm/vmalloc.c | 8 |
1 file changed, 4 insertions(+), 4 deletions(-)
```diff
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e03f4c7307a..7e25ee3ce6e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -413,11 +413,11 @@ nocache:
 		if (addr + size - 1 < addr)
 			goto overflow;
 
-		n = rb_next(&first->rb_node);
-		if (n)
-			first = rb_entry(n, struct vmap_area, rb_node);
-		else
+		if (list_is_last(&first->list, &vmap_area_list))
 			goto found;
+
+		first = list_entry(first->list.next,
+				struct vmap_area, list);
 	}
 
 found:
```