Diffstat (limited to 'Documentation/vm')
 Documentation/vm/cleancache.txt      |    2 +-
 Documentation/vm/page-types.c        |    2 ++
 Documentation/vm/pagemap.txt         |    4 ++++
 Documentation/vm/unevictable-lru.txt |    8 ++++----
 4 files changed, 11 insertions(+), 5 deletions(-)
diff --git a/Documentation/vm/cleancache.txt b/Documentation/vm/cleancache.txt
index f726717ace6..142fbb0f325 100644
--- a/Documentation/vm/cleancache.txt
+++ b/Documentation/vm/cleancache.txt
@@ -93,7 +93,7 @@ failed_gets - number of gets that failed
 puts - number of puts attempted (all "succeed")
 invalidates - number of invalidates attempted
 
-A backend implementatation may provide additional metrics.
+A backend implementation may provide additional metrics.
 
 FAQ
 
diff --git a/Documentation/vm/page-types.c b/Documentation/vm/page-types.c
index 7445caa26d0..0b13f02d405 100644
--- a/Documentation/vm/page-types.c
+++ b/Documentation/vm/page-types.c
@@ -98,6 +98,7 @@
 #define KPF_HWPOISON		19
 #define KPF_NOPAGE		20
 #define KPF_KSM			21
+#define KPF_THP			22
 
 /* [32-] kernel hacking assistances */
 #define KPF_RESERVED		32
@@ -147,6 +148,7 @@ static const char *page_flag_names[] = {
 	[KPF_HWPOISON]		= "X:hwpoison",
 	[KPF_NOPAGE]		= "n:nopage",
 	[KPF_KSM]		= "x:ksm",
+	[KPF_THP]		= "t:thp",
 
 	[KPF_RESERVED]		= "r:reserved",
 	[KPF_MLOCKED]		= "m:mlocked",
diff --git a/Documentation/vm/pagemap.txt b/Documentation/vm/pagemap.txt
index df09b9650a8..4600cbe3d6b 100644
--- a/Documentation/vm/pagemap.txt
+++ b/Documentation/vm/pagemap.txt
@@ -60,6 +60,7 @@ There are three components to pagemap:
     19. HWPOISON
     20. NOPAGE
     21. KSM
+    22. THP
 
 Short descriptions to the page flags:
 
@@ -97,6 +98,9 @@ Short descriptions to the page flags:
 21. KSM
     identical memory pages dynamically shared between one or more processes
 
+22. THP
+    contiguous pages which construct transparent hugepages
+
     [IO related page flags]
  1. ERROR     IO error occurred
  3. UPTODATE  page has up-to-date data
diff --git a/Documentation/vm/unevictable-lru.txt b/Documentation/vm/unevictable-lru.txt
index 97bae3c576c..fa206cccf89 100644
--- a/Documentation/vm/unevictable-lru.txt
+++ b/Documentation/vm/unevictable-lru.txt
@@ -538,7 +538,7 @@ different reverse map mechanisms.
      process because mlocked pages are migratable.  However, for reclaim, if
      the page is mapped into a VM_LOCKED VMA, the scan stops.
 
-     try_to_unmap_anon() attempts to acquire in read mode the mmap semphore of
+     try_to_unmap_anon() attempts to acquire in read mode the mmap semaphore of
      the mm_struct to which the VMA belongs.  If this is successful, it will
      mlock the page via mlock_vma_page() - we wouldn't have gotten to
      try_to_unmap_anon() if the page were already mlocked - and will return
@@ -619,11 +619,11 @@ all PTEs from the page.  For this purpose, the unevictable/mlock infrastructure
 introduced a variant of try_to_unmap() called try_to_munlock().
 
 try_to_munlock() calls the same functions as try_to_unmap() for anonymous and
-mapped file pages with an additional argument specifing unlock versus unmap
+mapped file pages with an additional argument specifying unlock versus unmap
 processing.  Again, these functions walk the respective reverse maps looking
 for VM_LOCKED VMAs.  When such a VMA is found for anonymous pages and file
 pages mapped in linear VMAs, as in the try_to_unmap() case, the functions
-attempt to acquire the associated mmap semphore, mlock the page via
+attempt to acquire the associated mmap semaphore, mlock the page via
 mlock_vma_page() and return SWAP_MLOCK.  This effectively undoes the
 pre-clearing of the page's PG_mlocked done by munlock_vma_page.
 
@@ -641,7 +641,7 @@ with it - the usual fallback position.
 Note that try_to_munlock()'s reverse map walk must visit every VMA in a page's
 reverse map to determine that a page is NOT mapped into any VM_LOCKED VMA.
 However, the scan can terminate when it encounters a VM_LOCKED VMA and can
-successfully acquire the VMA's mmap semphore for read and mlock the page.
+successfully acquire the VMA's mmap semaphore for read and mlock the page.
 Although try_to_munlock() might be called a great many times when munlocking a
 large region or tearing down a large address space that has been mlocked via
 mlockall(), overall this is a fairly rare event.
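As an aside (not part of the patch above): the new flag is exported through
/proc/kpageflags, where, as pagemap.txt describes, each page frame number has
one 64-bit flags word. The program below is a minimal, hypothetical sketch of
how user space might test bit 22 (KPF_THP) for a given PFN; it is far simpler
than page-types.c, assumes root privileges, and assumes a kernel that already
exports the flag.

/*
 * Illustrative sketch only: read the 64-bit flags word for one PFN from
 * /proc/kpageflags and report whether the THP bit (22, per this patch) is set.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <unistd.h>
#include <fcntl.h>

#define KPF_THP 22	/* value added by this patch */

int main(int argc, char **argv)
{
	if (argc != 2) {
		fprintf(stderr, "usage: %s <pfn>\n", argv[0]);
		return 1;
	}

	uint64_t pfn = strtoull(argv[1], NULL, 0);

	int fd = open("/proc/kpageflags", O_RDONLY);
	if (fd < 0) {
		perror("open /proc/kpageflags");
		return 1;
	}

	/* one 64-bit flags word per PFN, so the entry lives at pfn * 8 */
	uint64_t flags;
	if (pread(fd, &flags, sizeof(flags), (off_t)(pfn * sizeof(flags)))
	    != (ssize_t)sizeof(flags)) {
		perror("pread");
		close(fd);
		return 1;
	}
	close(fd);

	printf("pfn %llu: flags 0x%llx, thp=%d\n",
	       (unsigned long long)pfn,
	       (unsigned long long)flags,
	       (int)((flags >> KPF_THP) & 1));
	return 0;
}

For real use, page-types.c (extended by this patch to print "t:thp") already
decodes all of the kpageflags bits, so the sketch is only meant to show where
the new bit lands in the interface.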