From 10fad5e46f6c7bdfb01b1a012380a38e3c6ab346 Mon Sep 17 00:00:00 2001 From: Tejun Heo Date: Wed, 10 Mar 2010 18:57:54 +0900 Subject: percpu, module: implement and use is_kernel/module_percpu_address() lockdep has custom code to check whether a pointer belongs to static percpu area which is somewhat broken. Implement proper is_kernel/module_percpu_address() and replace the custom code. On UP, percpu variables are regular static variables and can't be distinguished from them. Always return %false on UP. Signed-off-by: Tejun Heo Acked-by: Peter Zijlstra Cc: Rusty Russell Cc: Ingo Molnar --- mm/percpu.c | 26 ++++++++++++++++++++++++++ 1 file changed, 26 insertions(+) (limited to 'mm') diff --git a/mm/percpu.c b/mm/percpu.c index 768419d44ad..6e09741ddc6 100644 --- a/mm/percpu.c +++ b/mm/percpu.c @@ -1303,6 +1303,32 @@ void free_percpu(void __percpu *ptr) } EXPORT_SYMBOL_GPL(free_percpu); +/** + * is_kernel_percpu_address - test whether address is from static percpu area + * @addr: address to test + * + * Test whether @addr belongs to in-kernel static percpu area. Module + * static percpu areas are not considered. For those, use + * is_module_percpu_address(). + * + * RETURNS: + * %true if @addr is from in-kernel static percpu area, %false otherwise. + */ +bool is_kernel_percpu_address(unsigned long addr) +{ + const size_t static_size = __per_cpu_end - __per_cpu_start; + void __percpu *base = __addr_to_pcpu_ptr(pcpu_base_addr); + unsigned int cpu; + + for_each_possible_cpu(cpu) { + void *start = per_cpu_ptr(base, cpu); + + if ((void *)addr >= start && (void *)addr < start + static_size) + return true; + } + return false; +} + /** * per_cpu_ptr_to_phys - convert translated percpu address to physical address * @addr: the address to be converted to physical address -- cgit v1.2.3-18-g5258 From 5a0e3ad6af8660be21ca98a971cd00f331318c05 Mon Sep 17 00:00:00 2001 From: Tejun Heo Date: Wed, 24 Mar 2010 17:04:11 +0900 Subject: include cleanup: Update gfp.h and slab.h includes to prepare for breaking implicit slab.h inclusion from percpu.h percpu.h is included by sched.h and module.h and thus ends up being included when building most .c files. percpu.h includes slab.h which in turn includes gfp.h making everything defined by the two files universally available and complicating inclusion dependencies. percpu.h -> slab.h dependency is about to be removed. Prepare for this change by updating users of gfp and slab facilities to include those headers directly instead of assuming availability. As this conversion needs to touch a large number of source files, the following script is used as the basis of conversion. http://userweb.kernel.org/~tj/misc/slabh-sweep.py The script does the following. * Scan files for gfp and slab usages and update includes such that only the necessary includes are there. i.e. if only gfp is used, gfp.h; if slab is used, slab.h. * When the script inserts a new include, it looks at the include blocks and tries to put the new include such that its order conforms to its surroundings. It's put in the include block which contains core kernel includes, in the same order that the rest are ordered - alphabetical, Christmas tree, rev-Xmas-tree or at the end if there doesn't seem to be any matching order. * If the script can't find a place to put a new include (mostly because the file doesn't have a fitting include block), it prints out an error message indicating which .h file needs to be added to the file. The conversion was done in the following steps. 1. 
The initial automatic conversion of all .c files updated slightly over 4000 files, deleting around 700 includes and adding ~480 gfp.h and ~3000 slab.h inclusions. The script emitted errors for ~400 files. 2. Each error was manually checked. Some didn't need the inclusion, some needed manual addition, and for others adding it to an implementation .h or embedding .c file was more appropriate. This step added inclusions to around 150 files. 3. The script was run again and the output was compared to the edits from #2 to make sure no file was left behind. 4. Several build tests were done and a couple of problems were fixed. e.g. lib/decompress_*.c used malloc/free() wrappers around slab APIs requiring slab.h to be added manually. 5. The script was run on all .h files but without automatically editing them as sprinkling gfp.h and slab.h inclusions around .h files could easily lead to inclusion dependency hell. Most gfp.h inclusion directives were ignored as stuff from gfp.h was usually widely available and often used in preprocessor macros. Each slab.h inclusion directive was examined and added manually as necessary. 6. percpu.h was updated not to include slab.h. 7. Build tests were done on the following configurations and failures were fixed. CONFIG_GCOV_KERNEL was turned off for all tests (as my distributed build env didn't work with gcov compiles) and a few more options had to be turned off depending on archs to make things build (like ipr on powerpc/64 which failed due to missing writeq). * x86 and x86_64 UP and SMP allmodconfig and a custom test config. * powerpc and powerpc64 SMP allmodconfig * sparc and sparc64 SMP allmodconfig * ia64 SMP allmodconfig * s390 SMP allmodconfig * alpha SMP allmodconfig * um on x86_64 SMP allmodconfig 8. percpu.h modifications were reverted so that it could be applied as a separate patch and serve as a bisection point. Given the fact that I had only a couple of failures from tests on step 6, I'm fairly confident about the coverage of this conversion patch. If there is a breakage, it's likely to be something in one of the arch headers which should be easily discoverable on most builds of the specific arch. 
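To make the effect of the sweep concrete, here is a purely illustrative sketch (the file, struct and function names below are hypothetical and not taken from this series): a .c file that uses the slab and gfp APIs now carries the relevant includes itself rather than inheriting them through percpu.h.

	#include <linux/gfp.h>		/* GFP_KERNEL */
	#include <linux/slab.h>		/* kzalloc(), kfree() */

	struct foo_ctx {
		int id;
	};

	static struct foo_ctx *foo_ctx_alloc(void)
	{
		/* kzalloc() is declared in slab.h, which is no longer pulled in
		 * implicitly once the percpu.h -> slab.h dependency is removed. */
		return kzalloc(sizeof(struct foo_ctx), GFP_KERNEL);
	}

	static void foo_ctx_free(struct foo_ctx *ctx)
	{
		kfree(ctx);
	}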
Signed-off-by: Tejun Heo Guess-its-ok-by: Christoph Lameter Cc: Ingo Molnar Cc: Lee Schermerhorn --- mm/bootmem.c | 1 + mm/bounce.c | 1 + mm/failslab.c | 1 - mm/filemap.c | 2 +- mm/filemap_xip.c | 1 + mm/hugetlb.c | 2 +- mm/kmemcheck.c | 1 - mm/kmemleak.c | 1 - mm/memory-failure.c | 1 + mm/memory.c | 1 + mm/mempolicy.c | 1 - mm/migrate.c | 1 + mm/mincore.c | 2 +- mm/mmu_notifier.c | 1 + mm/mprotect.c | 1 - mm/mremap.c | 1 - mm/oom_kill.c | 1 + mm/page_io.c | 1 + mm/quicklist.c | 1 + mm/readahead.c | 1 + mm/sparse-vmemmap.c | 1 + mm/sparse.c | 1 + mm/swap.c | 1 + mm/swap_state.c | 1 + mm/truncate.c | 1 + mm/vmscan.c | 2 +- mm/vmstat.c | 1 + 27 files changed, 21 insertions(+), 10 deletions(-) (limited to 'mm') diff --git a/mm/bootmem.c b/mm/bootmem.c index 9b134460b01..eff22422057 100644 --- a/mm/bootmem.c +++ b/mm/bootmem.c @@ -10,6 +10,7 @@ */ #include #include +#include #include #include #include diff --git a/mm/bounce.c b/mm/bounce.c index a2b76a588e3..13b6dad1eed 100644 --- a/mm/bounce.c +++ b/mm/bounce.c @@ -6,6 +6,7 @@ #include #include #include +#include #include #include #include diff --git a/mm/failslab.c b/mm/failslab.c index bb41f98dd8b..c5f88f240dd 100644 --- a/mm/failslab.c +++ b/mm/failslab.c @@ -1,5 +1,4 @@ #include -#include #include static struct { diff --git a/mm/filemap.c b/mm/filemap.c index 045b31c3765..140ebda9640 100644 --- a/mm/filemap.c +++ b/mm/filemap.c @@ -10,13 +10,13 @@ * the NFS filesystem used to do this differently, for example) */ #include -#include #include #include #include #include #include #include +#include #include #include #include diff --git a/mm/filemap_xip.c b/mm/filemap_xip.c index 78b94f0b6d5..83364df74a3 100644 --- a/mm/filemap_xip.c +++ b/mm/filemap_xip.c @@ -17,6 +17,7 @@ #include #include #include +#include #include #include diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 3a5aeb37c11..6034dc9e979 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -2,7 +2,6 @@ * Generic hugetlb support. 
* (C) William Irwin, April 2004 */ -#include #include #include #include @@ -18,6 +17,7 @@ #include #include #include +#include #include #include diff --git a/mm/kmemcheck.c b/mm/kmemcheck.c index fd814fd6131..8f8e48acf7d 100644 --- a/mm/kmemcheck.c +++ b/mm/kmemcheck.c @@ -1,7 +1,6 @@ #include #include #include -#include #include void kmemcheck_alloc_shadow(struct page *page, int order, gfp_t flags, int node) diff --git a/mm/kmemleak.c b/mm/kmemleak.c index 5b069e4f5e4..2c0d032ac89 100644 --- a/mm/kmemleak.c +++ b/mm/kmemleak.c @@ -72,7 +72,6 @@ #include #include #include -#include #include #include #include diff --git a/mm/memory-failure.c b/mm/memory-failure.c index d1f33516297..620b0b46159 100644 --- a/mm/memory-failure.c +++ b/mm/memory-failure.c @@ -44,6 +44,7 @@ #include #include #include +#include #include "internal.h" int sysctl_memory_failure_early_kill __read_mostly = 0; diff --git a/mm/memory.c b/mm/memory.c index bc9ba5a1f5b..1d2ea39260e 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -56,6 +56,7 @@ #include #include #include +#include #include #include diff --git a/mm/mempolicy.c b/mm/mempolicy.c index 8034abd3a13..08f40a2f3fe 100644 --- a/mm/mempolicy.c +++ b/mm/mempolicy.c @@ -73,7 +73,6 @@ #include #include #include -#include #include #include #include diff --git a/mm/migrate.c b/mm/migrate.c index 88000b89fc9..d3f3f7f8107 100644 --- a/mm/migrate.c +++ b/mm/migrate.c @@ -32,6 +32,7 @@ #include #include #include +#include #include "internal.h" diff --git a/mm/mincore.c b/mm/mincore.c index 7a3436ef39e..f77433c2027 100644 --- a/mm/mincore.c +++ b/mm/mincore.c @@ -7,8 +7,8 @@ /* * The mincore() system call. */ -#include #include +#include #include #include #include diff --git a/mm/mmu_notifier.c b/mm/mmu_notifier.c index 7e33f2cb3c7..438951d366f 100644 --- a/mm/mmu_notifier.c +++ b/mm/mmu_notifier.c @@ -16,6 +16,7 @@ #include #include #include +#include /* * This function can't run concurrently against mmu_notifier_register diff --git a/mm/mprotect.c b/mm/mprotect.c index 8bc969d8112..2d1bf7cf885 100644 --- a/mm/mprotect.c +++ b/mm/mprotect.c @@ -10,7 +10,6 @@ #include #include -#include #include #include #include diff --git a/mm/mremap.c b/mm/mremap.c index e9c75efce60..cde56ee51ef 100644 --- a/mm/mremap.c +++ b/mm/mremap.c @@ -9,7 +9,6 @@ #include #include -#include #include #include #include diff --git a/mm/oom_kill.c b/mm/oom_kill.c index 9b223af6a14..b68e802a7a7 100644 --- a/mm/oom_kill.c +++ b/mm/oom_kill.c @@ -18,6 +18,7 @@ #include #include #include +#include #include #include #include diff --git a/mm/page_io.c b/mm/page_io.c index a19af956ee1..31a3b962230 100644 --- a/mm/page_io.c +++ b/mm/page_io.c @@ -12,6 +12,7 @@ #include #include +#include #include #include #include diff --git a/mm/quicklist.c b/mm/quicklist.c index 6633965bb27..2876349339a 100644 --- a/mm/quicklist.c +++ b/mm/quicklist.c @@ -14,6 +14,7 @@ */ #include +#include #include #include #include diff --git a/mm/readahead.c b/mm/readahead.c index 337b20e946f..999b54bb462 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -9,6 +9,7 @@ #include #include +#include #include #include #include diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c index 392b9bb5bc0..aa33fd67fa4 100644 --- a/mm/sparse-vmemmap.c +++ b/mm/sparse-vmemmap.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include #include diff --git a/mm/sparse.c b/mm/sparse.c index 22896d58913..dc0cc4d43ff 100644 --- a/mm/sparse.c +++ b/mm/sparse.c @@ -2,6 +2,7 @@ * sparse memory mappings. 
*/ #include +#include #include #include #include diff --git a/mm/swap.c b/mm/swap.c index 9036b89813a..7cd60bf0a97 100644 --- a/mm/swap.c +++ b/mm/swap.c @@ -30,6 +30,7 @@ #include #include #include +#include #include "internal.h" diff --git a/mm/swap_state.c b/mm/swap_state.c index 6d1daeb1cb4..e10f5833167 100644 --- a/mm/swap_state.c +++ b/mm/swap_state.c @@ -8,6 +8,7 @@ */ #include #include +#include #include #include #include diff --git a/mm/truncate.c b/mm/truncate.c index e87e3724482..f42675a3615 100644 --- a/mm/truncate.c +++ b/mm/truncate.c @@ -9,6 +9,7 @@ #include #include +#include #include #include #include diff --git a/mm/vmscan.c b/mm/vmscan.c index 79c809895fb..e0e5f15bb72 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -13,7 +13,7 @@ #include #include -#include +#include #include #include #include diff --git a/mm/vmstat.c b/mm/vmstat.c index 7f760cbc73f..fa12ea3051f 100644 --- a/mm/vmstat.c +++ b/mm/vmstat.c @@ -12,6 +12,7 @@ #include #include #include +#include #include #include #include -- cgit v1.2.3-18-g5258 From ea5a9f0c3447889abceb7482c391bb977472eab9 Mon Sep 17 00:00:00 2001 From: Randy Dunlap Date: Tue, 30 Mar 2010 03:01:14 +0900 Subject: kmemcheck: Fix build errors due to missing slab.h mm/kmemcheck.c:69: error: dereferencing pointer to incomplete type mm/kmemcheck.c:69: error: 'SLAB_NOTRACK' undeclared (first use in this function) mm/kmemcheck.c:82: error: dereferencing pointer to incomplete type mm/kmemcheck.c:94: error: dereferencing pointer to incomplete type mm/kmemcheck.c:94: error: dereferencing pointer to incomplete type mm/kmemcheck.c:94: error: 'SLAB_DESTROY_BY_RCU' undeclared (first use in this function) Signed-off-by: Randy Dunlap Signed-off-by: Tejun Heo --- mm/kmemcheck.c | 1 + 1 file changed, 1 insertion(+) (limited to 'mm') diff --git a/mm/kmemcheck.c b/mm/kmemcheck.c index 8f8e48acf7d..fd814fd6131 100644 --- a/mm/kmemcheck.c +++ b/mm/kmemcheck.c @@ -1,6 +1,7 @@ #include #include #include +#include #include void kmemcheck_alloc_shadow(struct page *page, int order, gfp_t flags, int node) -- cgit v1.2.3-18-g5258 From de380b55f92986c1a84198149cb71b7228d15fbd Mon Sep 17 00:00:00 2001 From: Tejun Heo Date: Wed, 24 Mar 2010 17:06:43 +0900 Subject: percpu: don't implicitly include slab.h from percpu.h percpu.h has always been including slab.h to get k[mz]alloc/free() for UP inline implementation. percpu.h being used by very low level headers including module.h and sched.h, this meant that a lot files unintentionally got slab.h inclusion. Lee Schermerhorn was trying to make topology.h use percpu.h and got bitten by this implicit inclusion. The right thing to do is break this ultimately unnecessary dependency. The previous patch added explicit inclusion of either gfp.h or slab.h to the source files using them. This patch updates percpu.h such that slab.h is no longer included from percpu.h. 
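As an illustration of the dependency being broken (the module below is hypothetical and only shows the failure mode; it is not part of this series): code like this used to compile only because module.h pulled in percpu.h, which pulled in slab.h. With this change, the explicit slab.h include added by the previous sweep patch becomes mandatory.

	#include <linux/module.h>
	#include <linux/slab.h>		/* now required explicitly for kmalloc()/kfree() */

	static void *scratch;

	static int __init demo_init(void)
	{
		scratch = kmalloc(64, GFP_KERNEL);
		return scratch ? 0 : -ENOMEM;
	}

	static void __exit demo_exit(void)
	{
		kfree(scratch);
	}

	module_init(demo_init);
	module_exit(demo_exit);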
Signed-off-by: Tejun Heo Reviewed-by: Christoph Lameter Cc: Ingo Molnar Cc: Lee Schermerhorn --- mm/Makefile | 6 +++++- mm/percpu_up.c | 30 ++++++++++++++++++++++++++++++ 2 files changed, 35 insertions(+), 1 deletion(-) create mode 100644 mm/percpu_up.c (limited to 'mm') diff --git a/mm/Makefile b/mm/Makefile index 7a68d2ab556..6c2a73a54a4 100644 --- a/mm/Makefile +++ b/mm/Makefile @@ -33,7 +33,11 @@ obj-$(CONFIG_FAILSLAB) += failslab.o obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o obj-$(CONFIG_FS_XIP) += filemap_xip.o obj-$(CONFIG_MIGRATION) += migrate.o -obj-$(CONFIG_SMP) += percpu.o +ifdef CONFIG_SMP +obj-y += percpu.o +else +obj-y += percpu_up.o +endif obj-$(CONFIG_QUICKLIST) += quicklist.o obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o obj-$(CONFIG_MEMORY_FAILURE) += memory-failure.o diff --git a/mm/percpu_up.c b/mm/percpu_up.c new file mode 100644 index 00000000000..c4351c7f57d --- /dev/null +++ b/mm/percpu_up.c @@ -0,0 +1,30 @@ +/* + * mm/percpu_up.c - dummy percpu memory allocator implementation for UP + */ + +#include +#include +#include + +void __percpu *__alloc_percpu(size_t size, size_t align) +{ + /* + * Can't easily make larger alignment work with kmalloc. WARN + * on it. Larger alignment should only be used for module + * percpu sections on SMP for which this path isn't used. + */ + WARN_ON_ONCE(align > SMP_CACHE_BYTES); + return kzalloc(size, GFP_KERNEL); +} +EXPORT_SYMBOL_GPL(__alloc_percpu); + +void free_percpu(void __percpu *p) +{ + kfree(p); +} +EXPORT_SYMBOL_GPL(free_percpu); + +phys_addr_t per_cpu_ptr_to_phys(void *addr) +{ + return __pa(addr); +} -- cgit v1.2.3-18-g5258 From 337998587f802535896e9ed16d19f97915ccd368 Mon Sep 17 00:00:00 2001 From: Yinghai Lu Date: Wed, 31 Mar 2010 20:44:09 -0700 Subject: nobootmem, x86: Fix 32bit numa system without RAM on node 0 On one system without RAM on node0, got following boot dump with a 32 bit NUMA kernel: early_node_map[4] active PFN ranges 1: 0x00000010 -> 0x00000099 1: 0x00000100 -> 0x0007da00 1: 0x0007e800 -> 0x0007ffa0 1: 0x0007ffae -> 0x0007ffb0 ... Subtract (29 early reservations) #000 [0000001000 - 0000002000] #001 [0000089000 - 000008f000] #002 [0000091000 - 0000093500] ... #027 [007cbfef40 - 007e800000] #028 [007e9ca000 - 007ff95000] (0 free memory ranges) Initializing HighMem for node 0 (00000000:00000000) Initializing HighMem for node 1 (00000000:00000000) Memory: 0k/2096832k available (6662k kernel code, 2096300k reserved, 4829k data, 484k init, 0k highmem) ... Checking if this processor honours the WP bit even in supervisor mode...Ok. swapper: page allocation failure. order:0, mode:0x0 Pid: 0, comm: swapper Not tainted 2.6.34-rc3-tip-03818-g4b1ea6c-dirty #35 Call Trace: [<4087a5dc>] ? printk+0xf/0x11 [<40286728>] __alloc_pages_nodemask+0x417/0x487 [<402a9ce1>] new_slab+0xe2/0x1fe [<402aa5b2>] kmem_cache_open+0x185/0x358 [<402abbc0>] T.954+0x1c/0x60 [<40d52a29>] kmem_cache_init+0x24/0x113 [<40d39738>] start_kernel+0x166/0x2e4 [<40d3940e>] ? unknown_bootoption+0x0/0x18e [<40d390ce>] i386_start_kernel+0xce/0xd5 Mem-Info: Node 1 DMA per-cpu: CPU 0: hi: 0, btch: 1 usd: 0 Node 1 Normal per-cpu: CPU 0: hi: 0, btch: 1 usd: 0 active_anon:0 inactive_anon:0 isolated_anon:0 active_file:0 inactive_file:0 isolated_file:0 unevictable:0 dirty:0 writeback:0 unstable:0 free:0 slab_reclaimable:0 slab_unreclaimable:0 mapped:0 shmem:0 pagetables:0 bounce:0 When 32bit NUMA is used, free_all_bootmem() will still only go over with node id 0. 
If node 0 doesn't have RAM installed, we need to go with node 1 because early_node_map still uses 1 for all ranges, and RAM from node 1 becomes low RAM. Use MAX_NUMNODES like 64-bit NUMA does. Note: the BOOTMEM path has the same problem; this bug existed before we had NO_BOOTMEM support. -v3: add more comments, and fix bootmem path too. -v4: separate bootmem path fix Signed-off-by: Yinghai Lu LKML-Reference: <4BB41689.9090502@kernel.org> Signed-off-by: H. Peter Anvin --- mm/bootmem.c | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) (limited to 'mm') diff --git a/mm/bootmem.c b/mm/bootmem.c index 9b134460b01..2058cb7595f 100644 --- a/mm/bootmem.c +++ b/mm/bootmem.c @@ -303,7 +303,14 @@ unsigned long __init free_all_bootmem_node(pg_data_t *pgdat) unsigned long __init free_all_bootmem(void) { #ifdef CONFIG_NO_BOOTMEM - return free_all_memory_core_early(NODE_DATA(0)->node_id); + /* + * We need to use MAX_NUMNODES instead of NODE_DATA(0)->node_id + * because in some case like Node0 doesnt have RAM installed + * low ram will be on Node1 + * Use MAX_NUMNODES will make sure all ranges in early_node_map[] + * will be used instead of only Node0 related + */ + return free_all_memory_core_early(MAX_NUMNODES); #else return free_all_bootmem_core(NODE_DATA(0)->bdata); #endif -- cgit v1.2.3-18-g5258 From aa235fc712f379d4194cff9217f07026c452c141 Mon Sep 17 00:00:00 2001 From: Yinghai Lu Date: Wed, 31 Mar 2010 20:45:27 -0700 Subject: bootmem, x86: Fix 32bit numa system without RAM on node 0 When 32bit NUMA is used, free_all_bootmem() will still only go over with node id 0. If node 0 doesn't have RAM installed, the lowest populated node becomes low RAM. This one fixes the BOOTMEM path by iterating over the bdata_list. -v3: add more comments, and fix bootmem path too. -v4: separate from one big patch Signed-off-by: Yinghai Lu LKML-Reference: <4BB416D7.6090203@kernel.org> Signed-off-by: H. Peter Anvin --- mm/bootmem.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) (limited to 'mm') diff --git a/mm/bootmem.c b/mm/bootmem.c index 2058cb7595f..ba37d62b684 100644 --- a/mm/bootmem.c +++ b/mm/bootmem.c @@ -312,7 +312,13 @@ unsigned long __init free_all_bootmem(void) */ return free_all_memory_core_early(MAX_NUMNODES); #else - return free_all_bootmem_core(NODE_DATA(0)->bdata); + unsigned long total_pages = 0; + bootmem_data_t *bdata; + + list_for_each_entry(bdata, &bdata_list, list) + total_pages += free_all_bootmem_core(bdata); + + return total_pages; #endif } -- cgit v1.2.3-18-g5258 From 144214537370b4f133a735446ebe86e90cfb2501 Mon Sep 17 00:00:00 2001 From: Anton Blanchard Date: Fri, 2 Apr 2010 09:46:55 +0200 Subject: backing-dev: Handle class_create() failure I hit this when we had a bug in IDR for a few days. Basically sysfs would fail to create new inodes since it uses an IDR and therefore class_create would fail. While we are unlikely to see this fail, we may as well handle it instead of oopsing. 
Signed-off-by: Anton Blanchard Signed-off-by: Jens Axboe --- mm/backing-dev.c | 3 +++ 1 file changed, 3 insertions(+) (limited to 'mm') diff --git a/mm/backing-dev.c b/mm/backing-dev.c index 0e8ca034770..f13e067e146 100644 --- a/mm/backing-dev.c +++ b/mm/backing-dev.c @@ -227,6 +227,9 @@ static struct device_attribute bdi_dev_attrs[] = { static __init int bdi_class_init(void) { bdi_class = class_create(THIS_MODULE, "bdi"); + if (IS_ERR(bdi_class)) + return PTR_ERR(bdi_class); + bdi_class->dev_attrs = bdi_dev_attrs; bdi_debug_init(); return 0; -- cgit v1.2.3-18-g5258 From 4946d54cb55e86a156216fcfeed5568514b0830f Mon Sep 17 00:00:00 2001 From: Rik van Riel Date: Mon, 5 Apr 2010 12:13:33 -0400 Subject: rmap: fix anon_vma_fork() memory leak Fix a memory leak in anon_vma_fork(), where we fail to tear down the anon_vmas attached to the new VMA in case setting up the new anon_vma fails. This bug also has the potential to leave behind anon_vma_chain structs with pointers to invalid memory. Reported-by: Minchan Kim Signed-off-by: Rik van Riel Signed-off-by: Linus Torvalds --- mm/rmap.c | 1 + 1 file changed, 1 insertion(+) (limited to 'mm') diff --git a/mm/rmap.c b/mm/rmap.c index fcd593c9c99..eaa7a09eb72 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -232,6 +232,7 @@ int anon_vma_fork(struct vm_area_struct *vma, struct vm_area_struct *pvma) out_error_free_anon_vma: anon_vma_free(anon_vma); out_error: + unlink_anon_vmas(vma); return -ENOMEM; } -- cgit v1.2.3-18-g5258 From a3a2e76c77fa22b114e421ac11dec0c56c3503fb Mon Sep 17 00:00:00 2001 From: KAMEZAWA Hiroyuki Date: Tue, 6 Apr 2010 14:34:42 -0700 Subject: mm: avoid null-pointer deref in sync_mm_rss() - We weren't zeroing p->rss_stat[] at fork() - Consequently sync_mm_rss() was dereferencing tsk->mm for kernel threads and was oopsing. - Make __sync_task_rss_stat() static, too. Addresses https://bugzilla.kernel.org/show_bug.cgi?id=15648 [akpm@linux-foundation.org: remove the BUG_ON(!mm->rss)] Reported-by: Troels Liebe Bentsen Signed-off-by: KAMEZAWA Hiroyuki "Michael S. Tsirkin" Cc: Andrea Arcangeli Cc: Rik van Riel Cc: Minchan Kim Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/memory.c | 3 +-- 1 file changed, 1 insertion(+), 2 deletions(-) (limited to 'mm') diff --git a/mm/memory.c b/mm/memory.c index 1d2ea39260e..833952d8b74 100644 --- a/mm/memory.c +++ b/mm/memory.c @@ -125,13 +125,12 @@ core_initcall(init_zero_pfn); #if defined(SPLIT_RSS_COUNTING) -void __sync_task_rss_stat(struct task_struct *task, struct mm_struct *mm) +static void __sync_task_rss_stat(struct task_struct *task, struct mm_struct *mm) { int i; for (i = 0; i < NR_MM_COUNTERS; i++) { if (task->rss_stat.count[i]) { - BUG_ON(!mm); add_mm_counter(mm, i, task->rss_stat.count[i]); task->rss_stat.count[i] = 0; } -- cgit v1.2.3-18-g5258 From 70655c06bd3f25111312d63985888112aed15ac5 Mon Sep 17 00:00:00 2001 From: Wu Fengguang Date: Tue, 6 Apr 2010 14:34:53 -0700 Subject: readahead: fix NULL filp dereference btrfs relocate_file_extent_cluster() calls us with NULL filp: [ 4005.426805] BUG: unable to handle kernel NULL pointer dereference at 00000021 [ 4005.426818] IP: [] page_cache_sync_readahead+0x18/0x3e Signed-off-by: Wu Fengguang Cc: Yan Zheng Reported-by: Kirill A. Shutemov Tested-by: Kirill A. 
Shutemov Cc: Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/readahead.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'mm') diff --git a/mm/readahead.c b/mm/readahead.c index 999b54bb462..dfa9a1a03a1 100644 --- a/mm/readahead.c +++ b/mm/readahead.c @@ -503,7 +503,7 @@ void page_cache_sync_readahead(struct address_space *mapping, return; /* be dumb */ - if (filp->f_mode & FMODE_RANDOM) { + if (filp && (filp->f_mode & FMODE_RANDOM)) { force_page_cache_readahead(mapping, filp, offset, req_size); return; } -- cgit v1.2.3-18-g5258 From d6da1a5abc2bf3a06a5bda08e0f6833409234666 Mon Sep 17 00:00:00 2001 From: KOSAKI Motohiro Date: Tue, 6 Apr 2010 14:34:56 -0700 Subject: mm: revert "vmscan: get_scan_ratio() cleanup" Shaohua Li reported that his tmpfs streaming I/O test can lead to OOM. The test uses a 6G tmpfs in a system with 3G memory. In the tmpfs, there are 6 copies of kernel source and the test does kbuild for each copy. His investigation shows the test has a lot of rotated anon pages and very few file pages, so get_scan_ratio calculates percent[0] (i.e. scanning percent for anon) to be zero. Actually percent[0] should be a big value, but our calculation rounds it to zero. We had the same problem even before commit 84b18490 ("vmscan: get_scan_ratio() cleanup"), but the old logic could rescue the percent[0]==0 case, though only when priority==0, which hid the real issue. I didn't think mere streaming I/O could create the percent[0]==0 && priority==0 situation, but I was wrong. So we definitely have to fix this tmpfs streaming I/O issue, but anyway revert the regression commit first. This reverts commit 84b18490d1f1bc7ed5095c929f78bc002eb70f26. Signed-off-by: KOSAKI Motohiro Reported-by: Shaohua Li Cc: Rik van Riel Cc: KAMEZAWA Hiroyuki Cc: Minchan Kim Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/vmscan.c | 23 +++++++++-------------- 1 file changed, 9 insertions(+), 14 deletions(-) (limited to 'mm') diff --git a/mm/vmscan.c b/mm/vmscan.c index e0e5f15bb72..3ff3311447f 100644 --- a/mm/vmscan.c +++ b/mm/vmscan.c @@ -1535,13 +1535,6 @@ static void get_scan_ratio(struct zone *zone, struct scan_control *sc, unsigned long ap, fp; struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc); - /* If we have no swap space, do not bother scanning anon pages. */ - if (!sc->may_swap || (nr_swap_pages <= 0)) { - percent[0] = 0; - percent[1] = 100; - return; - } - anon = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_ANON) + zone_nr_lru_pages(zone, sc, LRU_INACTIVE_ANON); file = zone_nr_lru_pages(zone, sc, LRU_ACTIVE_FILE) + @@ -1639,20 +1632,22 @@ static void shrink_zone(int priority, struct zone *zone, unsigned long nr_reclaimed = sc->nr_reclaimed; unsigned long nr_to_reclaim = sc->nr_to_reclaim; struct zone_reclaim_stat *reclaim_stat = get_reclaim_stat(zone, sc); + int noswap = 0; - get_scan_ratio(zone, sc, percent); + /* If we have no swap space, do not bother scanning anon pages. 
*/ + if (!sc->may_swap || (nr_swap_pages <= 0)) { + noswap = 1; + percent[0] = 0; + percent[1] = 100; + } else + get_scan_ratio(zone, sc, percent); for_each_evictable_lru(l) { int file = is_file_lru(l); unsigned long scan; - if (percent[file] == 0) { - nr[l] = 0; - continue; - } - scan = zone_nr_lru_pages(zone, sc, l); - if (priority) { + if (priority || noswap) { scan >>= priority; scan = (scan * percent[file]) / 100; } -- cgit v1.2.3-18-g5258 From 116354d177ba2da37e91cf884e3d11e67f825efd Mon Sep 17 00:00:00 2001 From: Naoya Horiguchi Date: Tue, 6 Apr 2010 14:35:04 -0700 Subject: pagemap: fix pfn calculation for hugepage When we look into pagemap using page-types with option -p, the value of pfn for hugepages looks wrong (see below.) This is because pte was evaluated only once for one vma although it should be updated for each hugepage. This patch fixes it. $ page-types -p 3277 -Nl -b huge voffset offset len flags 7f21e8a00 11e400 1 ___U___________H_G________________ 7f21e8a01 11e401 1ff ________________TG________________ ^^^ 7f21e8c00 11e400 1 ___U___________H_G________________ 7f21e8c01 11e401 1ff ________________TG________________ ^^^ One hugepage contains 1 head page and 511 tail pages in x86_64 and each two lines represent each hugepage. Voffset and offset mean virtual address and physical address in the page unit, respectively. The different hugepages should not have the same offset value. With this patch applied: $ page-types -p 3386 -Nl -b huge voffset offset len flags 7fec7a600 112c00 1 ___UD__________H_G________________ 7fec7a601 112c01 1ff ________________TG________________ ^^^ 7fec7a800 113200 1 ___UD__________H_G________________ 7fec7a801 113201 1ff ________________TG________________ ^^^ OK More info: - This patch modifies walk_page_range()'s hugepage walker. But the change only affects pagemap_read(), which is the only caller of hugepage callback. - Without this patch, hugetlb_entry() callback is called per vma, that doesn't match the natural expectation from its name. - With this patch, hugetlb_entry() is called per hugepte entry and the callback can become much simpler. Signed-off-by: Naoya Horiguchi Signed-off-by: KAMEZAWA Hiroyuki Acked-by: Matt Mackall Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/pagewalk.c | 47 +++++++++++++++++++++++++++++++++++++---------- 1 file changed, 37 insertions(+), 10 deletions(-) (limited to 'mm') diff --git a/mm/pagewalk.c b/mm/pagewalk.c index 7b47a57b664..8b1a2ce21ee 100644 --- a/mm/pagewalk.c +++ b/mm/pagewalk.c @@ -80,6 +80,37 @@ static int walk_pud_range(pgd_t *pgd, unsigned long addr, unsigned long end, return err; } +#ifdef CONFIG_HUGETLB_PAGE +static unsigned long hugetlb_entry_end(struct hstate *h, unsigned long addr, + unsigned long end) +{ + unsigned long boundary = (addr & huge_page_mask(h)) + huge_page_size(h); + return boundary < end ? 
boundary : end; +} + +static int walk_hugetlb_range(struct vm_area_struct *vma, + unsigned long addr, unsigned long end, + struct mm_walk *walk) +{ + struct hstate *h = hstate_vma(vma); + unsigned long next; + unsigned long hmask = huge_page_mask(h); + pte_t *pte; + int err = 0; + + do { + next = hugetlb_entry_end(h, addr, end); + pte = huge_pte_offset(walk->mm, addr & hmask); + if (pte && walk->hugetlb_entry) + err = walk->hugetlb_entry(pte, hmask, addr, next, walk); + if (err) + return err; + } while (addr = next, addr != end); + + return 0; +} +#endif + /** * walk_page_range - walk a memory map's page tables with a callback * @mm: memory map to walk @@ -128,20 +159,16 @@ int walk_page_range(unsigned long addr, unsigned long end, vma = find_vma(walk->mm, addr); #ifdef CONFIG_HUGETLB_PAGE if (vma && is_vm_hugetlb_page(vma)) { - pte_t *pte; - struct hstate *hs; - if (vma->vm_end < next) next = vma->vm_end; - hs = hstate_vma(vma); - pte = huge_pte_offset(walk->mm, - addr & huge_page_mask(hs)); - if (pte && !huge_pte_none(huge_ptep_get(pte)) - && walk->hugetlb_entry) - err = walk->hugetlb_entry(pte, addr, - next, walk); + /* + * Hugepage is very tightly coupled with vma, so + * walk through hugetlb entries within a given vma. + */ + err = walk_hugetlb_range(vma, addr, next, walk); if (err) break; + pgd = pgd_offset(walk->mm, next); continue; } #endif -- cgit v1.2.3-18-g5258 From 8725d5416213a145ccc9c236dbd26830ba409e00 Mon Sep 17 00:00:00 2001 From: KAMEZAWA Hiroyuki Date: Tue, 6 Apr 2010 14:35:05 -0700 Subject: memcg: fix race in file_mapped accounting Presently, memcg's FILE_MAPPED accounting has following race with move_account (happens at rmdir()). increment page->mapcount (rmap.c) mem_cgroup_update_file_mapped() move_account() lock_page_cgroup() check page_mapped() if page_mapped(page)>1 { FILE_MAPPED -1 from old memcg FILE_MAPPED +1 to old memcg } ..... overwrite pc->mem_cgroup unlock_page_cgroup() lock_page_cgroup() FILE_MAPPED + 1 to pc->mem_cgroup unlock_page_cgroup() Then, old memcg (-1 file mapped) new memcg (+2 file mapped) This happens because move_account see page_mapped() which is not guarded by lock_page_cgroup(). This patch adds FILE_MAPPED flag to page_cgroup and move account information based on it. Now, all checks are synchronous with lock_page_cgroup(). Signed-off-by: KAMEZAWA Hiroyuki Reviewed-by: Balbir Singh Reviewed-by: Daisuke Nishimura Cc: Andrea Righi Cc: Andrea Arcangeli Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/memcontrol.c | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) (limited to 'mm') diff --git a/mm/memcontrol.c b/mm/memcontrol.c index 9ed760dc744..f4ede99c8b9 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -1359,16 +1359,19 @@ void mem_cgroup_update_file_mapped(struct page *page, int val) lock_page_cgroup(pc); mem = pc->mem_cgroup; - if (!mem) - goto done; - - if (!PageCgroupUsed(pc)) + if (!mem || !PageCgroupUsed(pc)) goto done; /* * Preemption is already disabled. 
We can use __this_cpu_xxx */ - __this_cpu_add(mem->stat->count[MEM_CGROUP_STAT_FILE_MAPPED], val); + if (val > 0) { + __this_cpu_inc(mem->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]); + SetPageCgroupFileMapped(pc); + } else { + __this_cpu_dec(mem->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]); + ClearPageCgroupFileMapped(pc); + } done: unlock_page_cgroup(pc); @@ -1801,16 +1804,13 @@ static void __mem_cgroup_commit_charge(struct mem_cgroup *mem, static void __mem_cgroup_move_account(struct page_cgroup *pc, struct mem_cgroup *from, struct mem_cgroup *to, bool uncharge) { - struct page *page; - VM_BUG_ON(from == to); VM_BUG_ON(PageLRU(pc->page)); VM_BUG_ON(!PageCgroupLocked(pc)); VM_BUG_ON(!PageCgroupUsed(pc)); VM_BUG_ON(pc->mem_cgroup != from); - page = pc->page; - if (page_mapped(page) && !PageAnon(page)) { + if (PageCgroupFileMapped(pc)) { /* Update mapped_file data for mem_cgroup */ preempt_disable(); __this_cpu_dec(from->stat->count[MEM_CGROUP_STAT_FILE_MAPPED]); -- cgit v1.2.3-18-g5258 From fc1c183353a113c71675fecd0485e5aa0fe68d72 Mon Sep 17 00:00:00 2001 From: Pekka Enberg Date: Wed, 7 Apr 2010 19:23:40 +0300 Subject: slab: Generify kernel pointer validation As suggested by Linus, introduce a kern_ptr_validate() helper that does some sanity checks to make sure a pointer is a valid kernel pointer. This is a preparational step for fixing SLUB kmem_ptr_validate(). Cc: Andrew Morton Cc: Christoph Lameter Cc: David Rientjes Cc: Ingo Molnar Cc: Matt Mackall Cc: Nick Piggin Signed-off-by: Pekka Enberg Signed-off-by: Linus Torvalds --- mm/slab.c | 13 +------------ mm/util.c | 21 +++++++++++++++++++++ 2 files changed, 22 insertions(+), 12 deletions(-) (limited to 'mm') diff --git a/mm/slab.c b/mm/slab.c index a9f325b28be..bac0f4fcc21 100644 --- a/mm/slab.c +++ b/mm/slab.c @@ -3602,21 +3602,10 @@ EXPORT_SYMBOL(kmem_cache_alloc_notrace); */ int kmem_ptr_validate(struct kmem_cache *cachep, const void *ptr) { - unsigned long addr = (unsigned long)ptr; - unsigned long min_addr = PAGE_OFFSET; - unsigned long align_mask = BYTES_PER_WORD - 1; unsigned long size = cachep->buffer_size; struct page *page; - if (unlikely(addr < min_addr)) - goto out; - if (unlikely(addr > (unsigned long)high_memory - size)) - goto out; - if (unlikely(addr & align_mask)) - goto out; - if (unlikely(!kern_addr_valid(addr))) - goto out; - if (unlikely(!kern_addr_valid(addr + size - 1))) + if (unlikely(!kern_ptr_validate(ptr, size))) goto out; page = virt_to_page(ptr); if (unlikely(!PageSlab(page))) diff --git a/mm/util.c b/mm/util.c index 834db7be240..f5712e8964b 100644 --- a/mm/util.c +++ b/mm/util.c @@ -186,6 +186,27 @@ void kzfree(const void *p) } EXPORT_SYMBOL(kzfree); +int kern_ptr_validate(const void *ptr, unsigned long size) +{ + unsigned long addr = (unsigned long)ptr; + unsigned long min_addr = PAGE_OFFSET; + unsigned long align_mask = sizeof(void *) - 1; + + if (unlikely(addr < min_addr)) + goto out; + if (unlikely(addr > (unsigned long)high_memory - size)) + goto out; + if (unlikely(addr & align_mask)) + goto out; + if (unlikely(!kern_addr_valid(addr))) + goto out; + if (unlikely(!kern_addr_valid(addr + size - 1))) + goto out; + return 1; +out: + return 0; +} + /* * strndup_user - duplicate an existing string from user space * @s: The string to duplicate -- cgit v1.2.3-18-g5258 From d3e06e2b15590b70ea73733fc4612e4741ff46e0 Mon Sep 17 00:00:00 2001 From: Pekka Enberg Date: Wed, 7 Apr 2010 19:23:41 +0300 Subject: slub: Fix kmem_ptr_validate() for non-kernel pointers As suggested by Linus, fix up kmem_ptr_validate() to handle 
non-kernel pointers more graciously. The patch changes kmem_ptr_validate() to use the newly introduced kern_ptr_validate() helper to check that a pointer is a valid kernel pointer before we attempt to convert it into a 'struct page'. Cc: Andrew Morton Cc: Ingo Molnar Cc: Matt Mackall Cc: Nick Piggin Signed-off-by: Pekka Enberg Acked-by: Christoph Lameter Acked-by: David Rientjes Signed-off-by: Linus Torvalds --- mm/slub.c | 3 +++ 1 file changed, 3 insertions(+) (limited to 'mm') diff --git a/mm/slub.c b/mm/slub.c index b364844a106..7d6c8b1ccf6 100644 --- a/mm/slub.c +++ b/mm/slub.c @@ -2386,6 +2386,9 @@ int kmem_ptr_validate(struct kmem_cache *s, const void *object) { struct page *page; + if (!kern_ptr_validate(object, s->size)) + return 0; + page = get_object_page(object); if (!page || s != page->slab) -- cgit v1.2.3-18-g5258 From d0e9fe1758f222f13ec893f856552d81a10d266d Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Sat, 10 Apr 2010 10:36:19 -0700 Subject: Simplify and comment on anon_vma re-use for anon_vma_prepare() This changes the anon_vma reuse case to require that we only reuse simple anon_vma's - ie the case when the vma only has a single anon_vma associated with it. This means that a reuse of an anon_vma from an adjacent vma will always guarantee that both vma's are associated not only with the same anon_vma, they will also have the same anon_vma chain (of just a single entry in this case). And since anon_vma re-use was the only case where the same anon_vma might be associated with different chains of anon_vma's, we now have the case that every vma that shares the same anon_vma will always also have the same chain. That makes it much easier to think about merging vma's that share the same anon_vma's: you can always just drop the other anon_vma chain in anon_vma_merge() since you know that they are always identical. This also splits up the function to validate the anon_vma re-use, and adds a lot of commentary about the possible races. Reviewed-by: Rik van Riel Acked-by: Johannes Weiner Tested-by: Borislav Petkov [ "That didn't fix it" ] Signed-off-by: Linus Torvalds --- mm/mmap.c | 86 +++++++++++++++++++++++++++++++++++++++++++++------------------ 1 file changed, 62 insertions(+), 24 deletions(-) (limited to 'mm') diff --git a/mm/mmap.c b/mm/mmap.c index 75557c639ad..acb023e2d35 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -824,6 +824,61 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm, return NULL; } +/* + * Rough compatbility check to quickly see if it's even worth looking + * at sharing an anon_vma. + * + * They need to have the same vm_file, and the flags can only differ + * in things that mprotect may change. + * + * NOTE! The fact that we share an anon_vma doesn't _have_ to mean that + * we can merge the two vma's. For example, we refuse to merge a vma if + * there is a vm_ops->close() function, because that indicates that the + * driver is doing some kind of reference counting. But that doesn't + * really matter for the anon_vma sharing case. + */ +static int anon_vma_compatible(struct vm_area_struct *a, struct vm_area_struct *b) +{ + return a->vm_end == b->vm_start && + mpol_equal(vma_policy(a), vma_policy(b)) && + a->vm_file == b->vm_file && + !((a->vm_flags ^ b->vm_flags) & ~(VM_READ|VM_WRITE|VM_EXEC)) && + b->vm_pgoff == a->vm_pgoff + ((b->vm_start - a->vm_start) >> PAGE_SHIFT); +} + +/* + * Do some basic sanity checking to see if we can re-use the anon_vma + * from 'old'. 
The 'a'/'b' vma's are in VM order - one of them will be + * the same as 'old', the other will be the new one that is trying + * to share the anon_vma. + * + * NOTE! This runs with mm_sem held for reading, so it is possible that + * the anon_vma of 'old' is concurrently in the process of being set up + * by another page fault trying to merge _that_. But that's ok: if it + * is being set up, that automatically means that it will be a singleton + * acceptable for merging, so we can do all of this optimistically. But + * we do that ACCESS_ONCE() to make sure that we never re-load the pointer. + * + * IOW: that the "list_is_singular()" test on the anon_vma_chain only + * matters for the 'stable anon_vma' case (ie the thing we want to avoid + * is to return an anon_vma that is "complex" due to having gone through + * a fork). + * + * We also make sure that the two vma's are compatible (adjacent, + * and with the same memory policies). That's all stable, even with just + * a read lock on the mm_sem. + */ +static struct anon_vma *reusable_anon_vma(struct vm_area_struct *old, struct vm_area_struct *a, struct vm_area_struct *b) +{ + if (anon_vma_compatible(a, b)) { + struct anon_vma *anon_vma = ACCESS_ONCE(old->anon_vma); + + if (anon_vma && list_is_singular(&old->anon_vma_chain)) + return anon_vma; + } + return NULL; +} + /* * find_mergeable_anon_vma is used by anon_vma_prepare, to check * neighbouring vmas for a suitable anon_vma, before it goes off @@ -834,28 +889,16 @@ struct vm_area_struct *vma_merge(struct mm_struct *mm, */ struct anon_vma *find_mergeable_anon_vma(struct vm_area_struct *vma) { + struct anon_vma *anon_vma; struct vm_area_struct *near; - unsigned long vm_flags; near = vma->vm_next; if (!near) goto try_prev; - /* - * Since only mprotect tries to remerge vmas, match flags - * which might be mprotected into each other later on. - * Neither mlock nor madvise tries to remerge at present, - * so leave their flags as obstructing a merge. - */ - vm_flags = vma->vm_flags & ~(VM_READ|VM_WRITE|VM_EXEC); - vm_flags |= near->vm_flags & (VM_READ|VM_WRITE|VM_EXEC); - - if (near->anon_vma && vma->vm_end == near->vm_start && - mpol_equal(vma_policy(vma), vma_policy(near)) && - can_vma_merge_before(near, vm_flags, - NULL, vma->vm_file, vma->vm_pgoff + - ((vma->vm_end - vma->vm_start) >> PAGE_SHIFT))) - return near->anon_vma; + anon_vma = reusable_anon_vma(near, vma, near); + if (anon_vma) + return anon_vma; try_prev: /* * It is potentially slow to have to call find_vma_prev here. @@ -868,14 +911,9 @@ try_prev: if (!near) goto none; - vm_flags = vma->vm_flags & ~(VM_READ|VM_WRITE|VM_EXEC); - vm_flags |= near->vm_flags & (VM_READ|VM_WRITE|VM_EXEC); - - if (near->anon_vma && near->vm_end == vma->vm_start && - mpol_equal(vma_policy(near), vma_policy(vma)) && - can_vma_merge_after(near, vm_flags, - NULL, vma->vm_file, vma->vm_pgoff)) - return near->anon_vma; + anon_vma = reusable_anon_vma(near, near, vma); + if (anon_vma) + return anon_vma; none: /* * There's no absolute need to look only at touching neighbours: -- cgit v1.2.3-18-g5258 From 287d97ac032136724143cde8d5964b414d562ee3 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Sat, 10 Apr 2010 15:22:30 -0700 Subject: vma_adjust: fix the copying of anon_vma chains When we move the boundaries between two vma's due to things like mprotect, we need to make sure that the anon_vma of the pages that got moved from one vma to another gets properly copied around. And that was not always the case, in this rather hard-to-follow code sequence. 
Clarify the code, and fix it so that it copies the anon_vma from the right source. Reviewed-by: Rik van Riel Acked-by: Johannes Weiner Tested-by: Borislav Petkov [ "Yeah, not so much this one either" ] Signed-off-by: Linus Torvalds --- mm/mmap.c | 24 ++++++++---------------- 1 file changed, 8 insertions(+), 16 deletions(-) (limited to 'mm') diff --git a/mm/mmap.c b/mm/mmap.c index acb023e2d35..f90ea92f755 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -507,11 +507,12 @@ int vma_adjust(struct vm_area_struct *vma, unsigned long start, struct address_space *mapping = NULL; struct prio_tree_root *root = NULL; struct file *file = vma->vm_file; - struct anon_vma *anon_vma = NULL; long adjust_next = 0; int remove_next = 0; if (next && !insert) { + struct vm_area_struct *exporter = NULL; + if (end >= next->vm_end) { /* * vma expands, overlapping all the next, and @@ -519,7 +520,7 @@ int vma_adjust(struct vm_area_struct *vma, unsigned long start, */ again: remove_next = 1 + (end > next->vm_end); end = next->vm_end; - anon_vma = next->anon_vma; + exporter = next; importer = vma; } else if (end > next->vm_start) { /* @@ -527,7 +528,7 @@ again: remove_next = 1 + (end > next->vm_end); * mprotect case 5 shifting the boundary up. */ adjust_next = (end - next->vm_start) >> PAGE_SHIFT; - anon_vma = next->anon_vma; + exporter = next; importer = vma; } else if (end < vma->vm_end) { /* @@ -536,28 +537,19 @@ again: remove_next = 1 + (end > next->vm_end); * mprotect case 4 shifting the boundary down. */ adjust_next = - ((vma->vm_end - end) >> PAGE_SHIFT); - anon_vma = next->anon_vma; + exporter = vma; importer = next; } - } - /* - * When changing only vma->vm_end, we don't really need anon_vma lock. - */ - if (vma->anon_vma && (insert || importer || start != vma->vm_start)) - anon_vma = vma->anon_vma; - if (anon_vma) { /* * Easily overlooked: when mprotect shifts the boundary, * make sure the expanding vma has anon_vma set if the * shrinking vma had, to cover any anon pages imported. */ - if (importer && !importer->anon_vma) { - /* Block reverse map lookups until things are set up. */ - if (anon_vma_clone(importer, vma)) { + if (exporter && exporter->anon_vma && !importer->anon_vma) { + if (anon_vma_clone(importer, exporter)) return -ENOMEM; - } - importer->anon_vma = anon_vma; + importer->anon_vma = exporter->anon_vma; } } -- cgit v1.2.3-18-g5258 From 646d87b481dab4ba8301716600dfd276605b0ab0 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Sun, 11 Apr 2010 17:15:03 -0700 Subject: anon_vma: clone the anon_vma chain in the right order We want to walk the chain in reverse order when cloning it, so that the order of the result chain will be the same as the order in the source chain. When we add entries to the chain, they go at the head of the chain, so we want to add the source head last. 
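A minimal sketch of the ordering argument, using the same src/dst naming as anon_vma_clone() (clone_avc() here is a hypothetical stand-in for the allocate-and-link step, not a real helper in mm/rmap.c):

	struct anon_vma_chain *pavc;

	/*
	 * list_add() inserts at the head of the destination chain, so walking
	 * a source chain A -> B -> C forwards would build C -> B -> A.
	 * Walking it in reverse adds A last, so A stays at the head and the
	 * destination comes out as A -> B -> C, matching the source.
	 */
	list_for_each_entry_reverse(pavc, &src->anon_vma_chain, same_vma)
		list_add(&clone_avc(pavc)->same_vma, &dst->anon_vma_chain);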
Reviewed-by: Rik van Riel Acked-by: Johannes Weiner Tested-by: Borislav Petkov [ "No, it still oopses" ] Signed-off-by: Linus Torvalds --- mm/rmap.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) (limited to 'mm') diff --git a/mm/rmap.c b/mm/rmap.c index eaa7a09eb72..ee97d38ed7d 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -182,7 +182,7 @@ int anon_vma_clone(struct vm_area_struct *dst, struct vm_area_struct *src) { struct anon_vma_chain *avc, *pavc; - list_for_each_entry(pavc, &src->anon_vma_chain, same_vma) { + list_for_each_entry_reverse(pavc, &src->anon_vma_chain, same_vma) { avc = anon_vma_chain_alloc(); if (!avc) goto enomem_failure; -- cgit v1.2.3-18-g5258 From ea90002b0fa7bdee86ec22eba1d951f30bf043a6 Mon Sep 17 00:00:00 2001 From: Linus Torvalds Date: Mon, 12 Apr 2010 12:44:29 -0700 Subject: anonvma: when setting up page->mapping, we need to pick the _oldest_ anonvma Otherwise we might be mapping in a page in a new mapping, but that page (through the swapcache) would later be mapped into an old mapping too. The page->mapping must be the case that works for everybody, not just the mapping that happened to page it in first. Here's the scenario: - page gets allocated/mapped by process A. Let's call the anon_vma we associate the page with 'A' to keep it easy to track. - Process A forks, creating process B. The anon_vma in B is 'B', and has a chain that looks like 'B' -> 'A'. Everything is fine. - Swapping happens. The page (with mapping pointing to 'A') gets swapped out (perhaps not to disk - it's enough to assume that it's just not mapped any more, and lives entirely in the swap-cache) - Process B pages it in, which goes like this: do_swap_page -> page = lookup_swap_cache(entry); ... set_pte_at(mm, address, page_table, pte); page_add_anon_rmap(page, vma, address); And think about what happens here! In particular, what happens is that this will now be the "first" mapping of that page, so page_add_anon_rmap() used to do if (first) __page_set_anon_rmap(page, vma, address); and notice what anon_vma it will use? It will use the anon_vma for process B! What happens then? Trivial: process 'A' also pages it in (nothing happens, it's not the first mapping), and then process 'B' execve's or exits or unmaps, making anon_vma B go away. End result: process A has a page that points to anon_vma B, but anon_vma B does not exist any more. This can go on forever. Forget about RCU grace periods, forget about locking, forget anything like that. The bug is simply that page->mapping points to an anon_vma that was correct at one point, but was _not_ the one that was shared by all users of that possible mapping. Changing it to always use the deepest anon_vma in the anonvma chain gets us to the safest model. This can be improved in certain cases: if we know the page is private to just this particular mapping (for example, it's a new page, or it is the only swapcache entry), we could pick the top (most specific) anon_vma. But that's a future optimization. Make it _work_ reliably first. Reviewed-by: Rik van Riel Acked-by: Johannes Weiner Tested-by: Borislav Petkov [ "What do you know, I think you fixed it!" 
] Signed-off-by: Linus Torvalds --- mm/rmap.c | 15 +++++++++++++-- 1 file changed, 13 insertions(+), 2 deletions(-) (limited to 'mm') diff --git a/mm/rmap.c b/mm/rmap.c index ee97d38ed7d..4bad3267537 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -734,9 +734,20 @@ void page_move_anon_rmap(struct page *page, static void __page_set_anon_rmap(struct page *page, struct vm_area_struct *vma, unsigned long address) { - struct anon_vma *anon_vma = vma->anon_vma; + struct anon_vma_chain *avc; + struct anon_vma *anon_vma; + + BUG_ON(!vma->anon_vma); + + /* + * We must use the _oldest_ possible anon_vma for the page mapping! + * + * So take the last AVC chain entry in the vma, which is the deepest + * ancestor, and use the anon_vma from that. + */ + avc = list_entry(vma->anon_vma_chain.prev, struct anon_vma_chain, same_vma); + anon_vma = avc->anon_vma; - BUG_ON(!anon_vma); anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON; page->mapping = (struct address_space *) anon_vma; page->index = linear_page_index(vma, address); -- cgit v1.2.3-18-g5258 From e8a03feb54ca7f1768bbdc2b491f9ef654e6d01d Mon Sep 17 00:00:00 2001 From: Rik van Riel Date: Wed, 14 Apr 2010 17:59:28 -0400 Subject: rmap: add exclusively owned pages to the newest anon_vma The recent anon_vma fixes cause many anonymous pages to end up in the parent process anon_vma, even when the page is exclusively owned by the current process. Adding exclusively owned anonymous pages to the top anon_vma reduces rmap scanning overhead, especially in workloads with forking servers. This patch adds a parameter to __page_set_anon_rmap that can be used to indicate whether or not the added page is exclusively owned by the current process. Pages added through page_add_new_anon_rmap are exclusively owned by the current process, and can be added to the top anon_vma. Pages added through page_add_anon_rmap can be either shared or exclusively owned, so we do the conservative thing and add it to the oldest anon_vma. A next step would be to add the exclusive parameter to page_add_anon_rmap, to be used from functions where we do know for sure whether a page is exclusively owned. Signed-off-by: Rik van Riel Reviewed-by: Johannes Weiner Lightly-tested-by: Borislav Petkov Reviewed-by: Minchan Kim [ Edited to look nicer - Linus ] Signed-off-by: Linus Torvalds --- mm/rmap.c | 27 ++++++++++++++++----------- 1 file changed, 16 insertions(+), 11 deletions(-) (limited to 'mm') diff --git a/mm/rmap.c b/mm/rmap.c index 4bad3267537..526704e8215 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -730,23 +730,28 @@ void page_move_anon_rmap(struct page *page, * @page: the page to add the mapping to * @vma: the vm area in which the mapping is added * @address: the user virtual address mapped + * @exclusive: the page is exclusively owned by the current process */ static void __page_set_anon_rmap(struct page *page, - struct vm_area_struct *vma, unsigned long address) + struct vm_area_struct *vma, unsigned long address, int exclusive) { - struct anon_vma_chain *avc; - struct anon_vma *anon_vma; + struct anon_vma *anon_vma = vma->anon_vma; - BUG_ON(!vma->anon_vma); + BUG_ON(!anon_vma); /* - * We must use the _oldest_ possible anon_vma for the page mapping! + * If the page isn't exclusively mapped into this vma, + * we must use the _oldest_ possible anon_vma for the + * page mapping! * - * So take the last AVC chain entry in the vma, which is the deepest - * ancestor, and use the anon_vma from that. 
+ * So take the last AVC chain entry in the vma, which is + * the deepest ancestor, and use the anon_vma from that. */ - avc = list_entry(vma->anon_vma_chain.prev, struct anon_vma_chain, same_vma); - anon_vma = avc->anon_vma; + if (!exclusive) { + struct anon_vma_chain *avc; + avc = list_entry(vma->anon_vma_chain.prev, struct anon_vma_chain, same_vma); + anon_vma = avc->anon_vma; + } anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON; page->mapping = (struct address_space *) anon_vma; @@ -802,7 +807,7 @@ void page_add_anon_rmap(struct page *page, VM_BUG_ON(!PageLocked(page)); VM_BUG_ON(address < vma->vm_start || address >= vma->vm_end); if (first) - __page_set_anon_rmap(page, vma, address); + __page_set_anon_rmap(page, vma, address, 0); else __page_check_anon_rmap(page, vma, address); } @@ -824,7 +829,7 @@ void page_add_new_anon_rmap(struct page *page, SetPageSwapBacked(page); atomic_set(&page->_mapcount, 0); /* increment count (starts at -1) */ __inc_zone_page_state(page, NR_ANON_PAGES); - __page_set_anon_rmap(page, vma, address); + __page_set_anon_rmap(page, vma, address, 1); if (page_evictable(page, vma)) lru_cache_add_lru(page, LRU_ACTIVE_ANON); else -- cgit v1.2.3-18-g5258 From c3c532061e46156e8aab1268f38d66cfb63aeb2d Mon Sep 17 00:00:00 2001 From: Jens Axboe Date: Thu, 22 Apr 2010 11:37:01 +0200 Subject: bdi: add helper function for doing init and register of a bdi for a file system Pretty trivial helper, just sets up the bdi and registers it. An atomic sequence count is used to ensure that the registered sysfs names are unique. Signed-off-by: Jens Axboe --- mm/backing-dev.c | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) (limited to 'mm') diff --git a/mm/backing-dev.c b/mm/backing-dev.c index f13e067e146..dbda4707f59 100644 --- a/mm/backing-dev.c +++ b/mm/backing-dev.c @@ -11,6 +11,8 @@ #include #include +static atomic_long_t bdi_seq = ATOMIC_LONG_INIT(0); + void default_unplug_io_fn(struct backing_dev_info *bdi, struct page *page) { } @@ -715,6 +717,33 @@ void bdi_destroy(struct backing_dev_info *bdi) } EXPORT_SYMBOL(bdi_destroy); +/* + * For use from filesystems to quickly init and register a bdi associated + * with dirty writeback + */ +int bdi_setup_and_register(struct backing_dev_info *bdi, char *name, + unsigned int cap) +{ + char tmp[32]; + int err; + + bdi->name = name; + bdi->capabilities = cap; + err = bdi_init(bdi); + if (err) + return err; + + sprintf(tmp, "%.28s%s", name, "-%d"); + err = bdi_register(bdi, NULL, tmp, atomic_long_inc_return(&bdi_seq)); + if (err) { + bdi_destroy(bdi); + return err; + } + + return 0; +} +EXPORT_SYMBOL(bdi_setup_and_register); + static wait_queue_head_t congestion_wqh[2] = { __WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[0]), __WAIT_QUEUE_HEAD_INITIALIZER(congestion_wqh[1]) -- cgit v1.2.3-18-g5258 From 93d5c9be1ddd57d4063ce463c9ac2be1e5ee14f1 Mon Sep 17 00:00:00 2001 From: Andrea Arcangeli Date: Fri, 23 Apr 2010 13:17:39 -0400 Subject: memcg: fix prepare migration If a signal is pending (task being killed by sigkill) __mem_cgroup_try_charge will write NULL into &mem, and css_put will oops on null pointer dereference. 
BUG: unable to handle kernel NULL pointer dereference at 0000000000000010 IP: [] mem_cgroup_prepare_migration+0x7c/0xc0 PGD a5d89067 PUD a5d8a067 PMD 0 Oops: 0000 [#1] SMP last sysfs file: /sys/devices/platform/microcode/firmware/microcode/loading CPU 0 Modules linked in: nfs lockd nfs_acl auth_rpcgss sunrpc acpi_cpufreq pcspkr sg [last unloaded: microcode] Pid: 5299, comm: largepages Tainted: G W 2.6.34-rc3 #3 Penryn1600SLI-110dB/To Be Filled By O.E.M. RIP: 0010:[] [] mem_cgroup_prepare_migration+0x7c/0xc0 [nishimura@mxp.nes.nec.co.jp: fix merge issues] Signed-off-by: Andrea Arcangeli Acked-by: KAMEZAWA Hiroyuki Cc: Balbir Singh Signed-off-by: Daisuke Nishimura Cc: Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/memcontrol.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) (limited to 'mm') diff --git a/mm/memcontrol.c b/mm/memcontrol.c index f4ede99c8b9..6c755de385f 100644 --- a/mm/memcontrol.c +++ b/mm/memcontrol.c @@ -2429,11 +2429,11 @@ int mem_cgroup_prepare_migration(struct page *page, struct mem_cgroup **ptr) } unlock_page_cgroup(pc); + *ptr = mem; if (mem) { - ret = __mem_cgroup_try_charge(NULL, GFP_KERNEL, &mem, false); + ret = __mem_cgroup_try_charge(NULL, GFP_KERNEL, ptr, false); css_put(&mem->css); } - *ptr = mem; return ret; } -- cgit v1.2.3-18-g5258 From 23be7468e8802a2ac1de6ee3eecb3ec7f14dc703 Mon Sep 17 00:00:00 2001 From: Mel Gorman Date: Fri, 23 Apr 2010 13:17:56 -0400 Subject: hugetlb: fix infinite loop in get_futex_key() when backed by huge pages If a futex key happens to be located within a huge page mapped MAP_PRIVATE, get_futex_key() can go into an infinite loop waiting for a page->mapping that will never exist. See https://bugzilla.redhat.com/show_bug.cgi?id=552257 for more details about the problem. This patch makes page->mapping a poisoned value that includes PAGE_MAPPING_ANON mapped MAP_PRIVATE. This is enough for futex to continue but because of PAGE_MAPPING_ANON, the poisoned value is not dereferenced or used by futex. No other part of the VM should be dereferencing the page->mapping of a hugetlbfs page as its page cache is not on the LRU. This patch fixes the problem with the test case described in the bugzilla. [akpm@linux-foundation.org: mel cant spel] Signed-off-by: Mel Gorman Acked-by: Peter Zijlstra Acked-by: Darren Hart Cc: Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/hugetlb.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) (limited to 'mm') diff --git a/mm/hugetlb.c b/mm/hugetlb.c index 6034dc9e979..ffbdfc86aed 100644 --- a/mm/hugetlb.c +++ b/mm/hugetlb.c @@ -546,6 +546,7 @@ static void free_huge_page(struct page *page) mapping = (struct address_space *) page_private(page); set_page_private(page, 0); + page->mapping = NULL; BUG_ON(page_count(page)); INIT_LIST_HEAD(&page->lru); @@ -2447,8 +2448,10 @@ retry: spin_lock(&inode->i_lock); inode->i_blocks += blocks_per_huge_page(h); spin_unlock(&inode->i_lock); - } else + } else { lock_page(page); + page->mapping = HUGETLB_POISON; + } } /* -- cgit v1.2.3-18-g5258 From 31f2b0ebc01fd332cb0997f7ce9f9cde29af9e20 Mon Sep 17 00:00:00 2001 From: Oleg Nesterov Date: Fri, 23 Apr 2010 13:18:01 -0400 Subject: rmap: anon_vma_prepare() can leak anon_vma_chain If find_mergeable_anon_vma() succeeds but another thread installs ->anon_vma before we take ptl, then allocated == NULL but avc should be freed. Change the code to check avc != NULL to detect this case. Also, a couple of whitespace changes to make the critical section more visible. 
Signed-off-by: Oleg Nesterov Reviewed-by: Rik van Riel Cc: Hugh Dickins Cc: Pete Zaitcev Cc: Borislav Petkov Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/rmap.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) (limited to 'mm') diff --git a/mm/rmap.c b/mm/rmap.c index 526704e8215..07fc9475879 100644 --- a/mm/rmap.c +++ b/mm/rmap.c @@ -133,8 +133,8 @@ int anon_vma_prepare(struct vm_area_struct *vma) goto out_enomem_free_avc; allocated = anon_vma; } - spin_lock(&anon_vma->lock); + spin_lock(&anon_vma->lock); /* page_table_lock to protect against threads */ spin_lock(&mm->page_table_lock); if (likely(!vma->anon_vma)) { @@ -144,14 +144,15 @@ int anon_vma_prepare(struct vm_area_struct *vma) list_add(&avc->same_vma, &vma->anon_vma_chain); list_add(&avc->same_anon_vma, &anon_vma->head); allocated = NULL; + avc = NULL; } spin_unlock(&mm->page_table_lock); - spin_unlock(&anon_vma->lock); - if (unlikely(allocated)) { + + if (unlikely(allocated)) anon_vma_free(allocated); + if (unlikely(avc)) anon_vma_chain_free(avc); - } } return 0; -- cgit v1.2.3-18-g5258 From 22eccdd7d2d94be48ae9b01fef5f52ccbb81dcd5 Mon Sep 17 00:00:00 2001 From: Dan Carpenter Date: Fri, 23 Apr 2010 13:18:10 -0400 Subject: ksm: check for ERR_PTR from follow_page() The follow_page() function can potentially return -EFAULT so I added checks for this. Also I silenced an uninitialized variable warning on my version of gcc (version 4.3.2). Signed-off-by: Dan Carpenter Acked-by: Rik van Riel Acked-by: Izik Eidus Cc: Andrea Arcangeli Cc: Johannes Weiner Signed-off-by: Andrew Morton Signed-off-by: Linus Torvalds --- mm/ksm.c | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) (limited to 'mm') diff --git a/mm/ksm.c b/mm/ksm.c index 8cdfc2a1e8b..956880f2ff4 100644 --- a/mm/ksm.c +++ b/mm/ksm.c @@ -365,7 +365,7 @@ static int break_ksm(struct vm_area_struct *vma, unsigned long addr) do { cond_resched(); page = follow_page(vma, addr, FOLL_GET); - if (!page) + if (IS_ERR_OR_NULL(page)) break; if (PageKsm(page)) ret = handle_mm_fault(vma->vm_mm, vma, addr, @@ -447,7 +447,7 @@ static struct page *get_mergeable_page(struct rmap_item *rmap_item) goto out; page = follow_page(vma, addr, FOLL_GET); - if (!page) + if (IS_ERR_OR_NULL(page)) goto out; if (PageAnon(page)) { flush_anon_page(vma, page, addr); @@ -1086,7 +1086,7 @@ struct rmap_item *unstable_tree_search_insert(struct rmap_item *rmap_item, cond_resched(); tree_rmap_item = rb_entry(*new, struct rmap_item, node); tree_page = get_mergeable_page(tree_rmap_item); - if (!tree_page) + if (IS_ERR_OR_NULL(tree_page)) return NULL; /* @@ -1294,7 +1294,7 @@ next_mm: if (ksm_test_exit(mm)) break; *page = follow_page(vma, ksm_scan.address, FOLL_GET); - if (*page && PageAnon(*page)) { + if (!IS_ERR_OR_NULL(*page) && PageAnon(*page)) { flush_anon_page(vma, *page, ksm_scan.address); flush_dcache_page(*page); rmap_item = get_next_rmap_item(slot, @@ -1308,7 +1308,7 @@ next_mm: up_read(&mm->mmap_sem); return rmap_item; } - if (*page) + if (!IS_ERR_OR_NULL(*page)) put_page(*page); ksm_scan.address += PAGE_SIZE; cond_resched(); @@ -1367,7 +1367,7 @@ next_mm: static void ksm_do_scan(unsigned int scan_npages) { struct rmap_item *rmap_item; - struct page *page; + struct page *uninitialized_var(page); while (scan_npages--) { cond_resched(); -- cgit v1.2.3-18-g5258 From 5129a469a91a91427334c40e29e64c6d0ab68caf Mon Sep 17 00:00:00 2001 From: Jörn Engel Date: Sun, 25 Apr 2010 08:54:42 +0200 Subject: Catch filesystems lacking s_bdi noop_backing_dev_info is used only as a 
flag to mark filesystems that don't have any backing store, like tmpfs, procfs, spufs, etc. Signed-off-by: Joern Engel Changed the BUG_ON() to a WARN_ON(). Note that adding dirty inodes to the noop_backing_dev_info is not legal and will not result in them being flushed, but we already catch this condition in __mark_inode_dirty() when checking for a registered bdi. Signed-off-by: Jens Axboe --- mm/backing-dev.c | 5 +++++ 1 file changed, 5 insertions(+) (limited to 'mm') diff --git a/mm/backing-dev.c b/mm/backing-dev.c index dbda4707f59..707d0dc6da0 100644 --- a/mm/backing-dev.c +++ b/mm/backing-dev.c @@ -27,6 +27,11 @@ struct backing_dev_info default_backing_dev_info = { }; EXPORT_SYMBOL_GPL(default_backing_dev_info); +struct backing_dev_info noop_backing_dev_info = { + .name = "noop", +}; +EXPORT_SYMBOL_GPL(noop_backing_dev_info); + static struct class *bdi_class; /* -- cgit v1.2.3-18-g5258 From 5892753383090a3eddf0e1b043c95e3b2c7feda5 Mon Sep 17 00:00:00 2001 From: Rik van Riel Date: Mon, 26 Apr 2010 12:33:03 -0400 Subject: mmap: check ->vm_ops before dereferencing Check whether the VMA has a vm_ops before calling close, just like we check vm_ops before calling open a few dozen lines higher up in the function. Signed-off-by: Rik van Riel Reported-by: Dan Carpenter Signed-off-by: Linus Torvalds --- mm/mmap.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) (limited to 'mm') diff --git a/mm/mmap.c b/mm/mmap.c index f90ea92f755..456ec6f2788 100644 --- a/mm/mmap.c +++ b/mm/mmap.c @@ -1977,7 +1977,8 @@ static int __split_vma(struct mm_struct * mm, struct vm_area_struct * vma, return 0; /* Clean everything up if vma_adjust failed. */ - new->vm_ops->close(new); + if (new->vm_ops && new->vm_ops->close) + new->vm_ops->close(new); if (new->vm_file) { if (vma->vm_flags & VM_EXECUTABLE) removed_exe_file_vma(mm); -- cgit v1.2.3-18-g5258
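As a closing illustration of the pattern the last patch applies, here is a minimal self-contained C sketch (userspace, hypothetical names, not the kernel code) of calling through an optional operations table: both the table pointer and the individual hook may be NULL, so both are checked before the indirect call, mirroring the guard added in __split_vma().

```c
#include <stdio.h>

struct vm_ops_like {
	void (*open)(void *obj);
	void (*close)(void *obj);
};

struct vma_like {
	const struct vm_ops_like *vm_ops;	/* anonymous mappings leave this NULL */
};

static void split_cleanup(struct vma_like *v)
{
	/* Check the table first, then the hook, before dereferencing either. */
	if (v->vm_ops && v->vm_ops->close)
		v->vm_ops->close(v);
}

int main(void)
{
	struct vma_like anon = { .vm_ops = NULL };

	split_cleanup(&anon);	/* safe: no NULL pointer dereference */
	printf("cleanup done\n");
	return 0;
}
```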