author     Johannes Weiner <jweiner@redhat.com>    2012-01-12 17:18:06 -0800
committer  Ben Hutchings <ben@decadent.org.uk>     2012-08-02 14:37:35 +0100
commit     01d90f1200a300dfa8d0fb58d98c18ca51c7363b (patch)
tree       4a330401b6d6615881d674be4242f1a2d7c1756e /mm
parent     33e037b9f75061e882e81106be10a2a4de42850d (diff)
mm: vmscan: convert global reclaim to per-memcg LRU lists
commit b95a2f2d486d0d768a92879c023a03757b9c7e58 upstream - WARNING: this is a substitute patch.

Stable note: Not tracked in Bugzilla. This is a partial backport of an upstream commit addressing a completely different issue that accidentally contained an important fix. The workload this patch helps was memcached when IO is started in the background. memcached should stay resident, but without this patch it gets swapped. Sometimes this manifests as a drop in throughput, but mostly it was observed through /proc/vmstat.

Commit [246e87a9: memcg: fix get_scan_count() for small targets] was meant to fix a problem whereby small scan targets on memcg were ignored, causing priority to rise too sharply. It forced scanning to take place if the target was small, memcg or kswapd.

From the time it was introduced it caused excessive reclaim by kswapd, with workloads being pushed to swap that previously would have stayed resident. This was accidentally fixed in commit [b95a2f2d: mm: vmscan: convert global reclaim to per-memcg LRU lists] by making it harder for kswapd to force scan small targets, but that patchset is not suitable for backporting. It was later changed again by commit [90126375: mm/vmscan: push lruvec pointer into get_scan_count()] into a format that looks like it would be a straightforward backport, but there is a subtle difference due to the use of lruvecs.

The impact of the accidental fix is to make it harder for kswapd to force scan small targets by taking zone->all_unreclaimable into account. This patch is the closest equivalent available based on what is backported.

Signed-off-by: Mel Gorman <mgorman@suse.de>
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
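For context, below is a minimal standalone sketch of how the force_scan decision in get_scan_count() differs before and after this backport. It is illustrative only and assumes the semantics described in the changelog: the boolean parameters are hypothetical stand-ins for scanning_global_lru(sc), current_is_kswapd() and zone->all_unreclaimable, not kernel interfaces. The actual one-line change follows in the diff further down.

/*
 * Illustrative sketch only, assuming the behaviour described in the
 * changelog above; the helpers are stand-ins, not kernel code.
 */
#include <stdbool.h>
#include <stdio.h>

/* Old check: kswapd always forces a minimum scan of small global targets. */
static bool force_scan_before(bool global_lru, bool is_kswapd)
{
	bool force_scan = false;

	if (global_lru && is_kswapd)
		force_scan = true;
	if (!global_lru)
		force_scan = true;
	return force_scan;
}

/*
 * Backported check: kswapd only forces the scan once the zone has been
 * marked all_unreclaimable, so small resident targets (e.g. the memcached
 * pages mentioned above) are no longer pushed to swap prematurely.
 */
static bool force_scan_after(bool global_lru, bool is_kswapd,
			     bool all_unreclaimable)
{
	bool force_scan = false;

	if (global_lru && is_kswapd && all_unreclaimable)
		force_scan = true;
	if (!global_lru)
		force_scan = true;
	return force_scan;
}

int main(void)
{
	/*
	 * kswapd scanning the global LRU while the zone is still
	 * reclaimable: the old code forced the scan, the new code does not.
	 */
	printf("before: %d  after: %d\n",
	       force_scan_before(true, true),
	       force_scan_after(true, true, false));
	return 0;
}

With kswapd on the global LRU and the zone still reclaimable, only the old check forces the minimum scan, which matches the memcached behaviour described in the stable note.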
Diffstat (limited to 'mm')
-rw-r--r--  mm/vmscan.c | 3
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/vmscan.c b/mm/vmscan.c
index b9eaa06c6e6..ded18577909 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -1911,7 +1911,8 @@ static void get_scan_count(struct zone *zone, struct scan_control *sc,
* latencies, so it's better to scan a minimum amount there as
* well.
*/
- if (scanning_global_lru(sc) && current_is_kswapd())
+ if (scanning_global_lru(sc) && current_is_kswapd() &&
+ zone->all_unreclaimable)
force_scan = true;
if (!scanning_global_lru(sc))
force_scan = true;