| author | Tejun Heo <tj@kernel.org> | 2009-04-15 22:10:25 +0900 |
|---|---|---|
| committer | Jens Axboe <jens.axboe@oracle.com> | 2009-04-22 08:35:09 +0200 |
| commit | cd0aca2d550f238d80ba58e7dcade4ea3d0a3aa7 | |
| tree | 9581a77ce54247a18963c9d827063923a667add7 | |
| parent | 25636e282fe95508cae96bb27f86407aef935817 | |
block: fix queue bounce limit setting
Impact: don't set GFP_DMA in q->bounce_gfp unnecessarily
All DMA address limits are expressed in terms of the last addressable
unit (byte or page) rather than one past it.  However, when
determining bounce_gfp for 64-bit machines in blk_queue_bounce_limit(),
the specified limit is compared against 0x100000000UL to determine
whether it's below 4G.  A full 32-bit mask of 0xffffffff thus compares
as below the boundary, falsely setting GFP_DMA in q->bounce_gfp.
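To see the off-by-one concretely, here is a minimal standalone sketch
(plain userspace C, assuming 4KB pages so PAGE_SHIFT is 12; not kernel
code) comparing the old and fixed checks for a device with a full
32-bit DMA mask:

```c
#include <stdio.h>

#define PAGE_SHIFT 12	/* assume 4KB pages */

int main(void)
{
	/* Full 32-bit DMA mask: the last addressable byte. */
	unsigned long long dma_mask = 0xffffffffULL;
	unsigned long b_pfn = dma_mask >> PAGE_SHIFT;	/* 0xfffff */

	/* Old check: compares against one past the last byte. */
	int dma_old = b_pfn < (0x100000000ULL >> PAGE_SHIFT);	/* 0xfffff < 0x100000 -> 1 */

	/* Fixed check: compares against the last addressable byte. */
	int dma_new = b_pfn < (0xffffffffULL >> PAGE_SHIFT);	/* 0xfffff < 0xfffff -> 0 */

	printf("old: dma=%d (GFP_DMA falsely set), fixed: dma=%d\n",
	       dma_old, dma_new);
	return 0;
}
```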
As the DMA zone is very small on x86_64, this makes larger SG_IO
transfers very eager to trigger the OOM killer.  Fix it by comparing
against 0xffffffffUL instead.  While at it, rename the parameter to
@dma_mask for clarity and convert the comment to proper winged style.
Signed-off-by: Tejun Heo <tj@kernel.org>
Signed-off-by: Jens Axboe <jens.axboe@oracle.com>
| -rw-r--r-- | block/blk-settings.c | 20 |
1 file changed, 11 insertions(+), 9 deletions(-)
```diff
diff --git a/block/blk-settings.c b/block/blk-settings.c
index 69c42adde52..57af728d94b 100644
--- a/block/blk-settings.c
+++ b/block/blk-settings.c
@@ -156,26 +156,28 @@ EXPORT_SYMBOL(blk_queue_make_request);
 
 /**
  * blk_queue_bounce_limit - set bounce buffer limit for queue
- * @q:  the request queue for the device
- * @dma_addr:   bus address limit
+ * @q: the request queue for the device
+ * @dma_mask: the maximum address the device can handle
  *
  * Description:
  *    Different hardware can have different requirements as to what pages
  *    it can do I/O directly to. A low level driver can call
  *    blk_queue_bounce_limit to have lower memory pages allocated as bounce
- *    buffers for doing I/O to pages residing above @dma_addr.
+ *    buffers for doing I/O to pages residing above @dma_mask.
  **/
-void blk_queue_bounce_limit(struct request_queue *q, u64 dma_addr)
+void blk_queue_bounce_limit(struct request_queue *q, u64 dma_mask)
 {
-	unsigned long b_pfn = dma_addr >> PAGE_SHIFT;
+	unsigned long b_pfn = dma_mask >> PAGE_SHIFT;
 	int dma = 0;
 
 	q->bounce_gfp = GFP_NOIO;
 #if BITS_PER_LONG == 64
-	/* Assume anything <= 4GB can be handled by IOMMU.
-	   Actually some IOMMUs can handle everything, but I don't
-	   know of a way to test this here. */
-	if (b_pfn < (min_t(u64, 0x100000000UL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
+	/*
+	 * Assume anything <= 4GB can be handled by IOMMU.  Actually
+	 * some IOMMUs can handle everything, but I don't know of a
+	 * way to test this here.
+	 */
+	if (b_pfn < (min_t(u64, 0xffffffffUL, BLK_BOUNCE_HIGH) >> PAGE_SHIFT))
 		dma = 1;
 	q->bounce_pfn = max_low_pfn;
 #else
```
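For context, a driver passes its device's addressing limit straight
through to this function.  A minimal sketch of such a call site (the
function name example_setup_queue is hypothetical; DMA_BIT_MASK and
blk_queue_bounce_limit are real kernel interfaces of this era):

```c
#include <linux/blkdev.h>
#include <linux/dma-mapping.h>

/* Hypothetical driver init: tell the block layer this device can only
 * DMA to the low 4GB, so pages above that must go through bounce
 * buffers. */
static void example_setup_queue(struct request_queue *q)
{
	/*
	 * DMA_BIT_MASK(32) == 0xffffffffULL, i.e. the last addressable
	 * byte.  With this fix, a full 32-bit mask on a 64-bit kernel
	 * no longer falsely sets GFP_DMA in q->bounce_gfp.
	 */
	blk_queue_bounce_limit(q, DMA_BIT_MASK(32));
}
```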
