path: root/drivers/dma
Age | Commit message | Author
2014-03-06 | ioat: fix tasklet tear down | Dan Williams
commit da87ca4d4ca101f177fffd84f1f0a5e4c0343557 upstream. Since commit 77873803363c "net_dma: mark broken" we no longer pin dma engines active for the network-receive-offload use case. As a result the ->free_chan_resources() that occurs after the driver self test no longer has a NET_DMA induced ->alloc_chan_resources() to back it up. A late firing irq can lead to ksoftirqd spinning indefinitely due to the tasklet_disable() performed by ->free_chan_resources(). Only ->alloc_chan_resources() can clear this condition in affected kernels. This problem has been present since commit 3e037454bcfa "I/OAT: Add support for MSI and MSI-X" in 2.6.24, but is now exposed. Given the NET_DMA use case is deprecated we can revisit moving the driver to use threaded irqs. For now, just tear down the irq and tasklet properly by: 1/ Disable the irq from triggering the tasklet 2/ Disable the irq from re-arming 3/ Flush inflight interrupts 4/ Flush the timer 5/ Flush inflight tasklets References: https://lkml.org/lkml/2014/1/27/282 https://lkml.org/lkml/2014/2/19/672 Cc: Ingo Molnar <mingo@elte.hu> Cc: Steven Rostedt <rostedt@goodmis.org> Reported-by: Mike Galbraith <bitbucket@online.de> Reported-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Tested-by: Mike Galbraith <bitbucket@online.de> Tested-by: Stanislav Fomichev <stfomichev@yandex-team.ru> Reviewed-by: Thomas Gleixner <tglx@linutronix.de> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
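The five steps read naturally as a small quiesce helper. A minimal sketch of that ordering, assuming a hypothetical channel structure (struct, field and flag names here are illustrative, not the driver's):

    #include <linux/interrupt.h>
    #include <linux/timer.h>
    #include <linux/bitops.h>

    struct my_ioat_chan {
        unsigned long state;                /* RUN bit gates tasklet scheduling */
        int irq;                            /* vector servicing this channel */
        struct timer_list timer;
        struct tasklet_struct cleanup_task;
    };

    #define MY_CHAN_RUN 0

    static void my_chan_quiesce(struct my_ioat_chan *chan)
    {
        /* 1/ + 2/ stop the irq handler from scheduling or re-arming the tasklet */
        clear_bit(MY_CHAN_RUN, &chan->state);

        /* 3/ wait for any in-flight interrupt handler to observe the flag */
        synchronize_irq(chan->irq);

        /* 4/ flush the timer so it cannot kick the tasklet again */
        del_timer_sync(&chan->timer);

        /* 5/ flush a tasklet that is already queued or running */
        tasklet_kill(&chan->cleanup_task);
    }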
2014-03-06 | dma: ste_dma40: don't dereference free:d descriptor | Linus Walleij
commit e9baa9d9d520fb0e24cca671e430689de2d4a4b2 upstream. It appears that in the DMA40 driver the DMA tasklet will very often dereference memory for a descriptor that has just been freed back to the DMA40 slab. Nothing bad happens, because no other part of the driver has yet had a chance to claim this memory, but it is really nasty to dereference freed memory, so let's check the flag before the descriptor is freed and store it in a bool variable. Reported-by: Dan Carpenter <dan.carpenter@oracle.com> Signed-off-by: Linus Walleij <linus.walleij@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
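The pattern behind the fix is to snapshot everything the tasklet needs while the descriptor is still valid. A sketch of the idea as a fragment of the tasklet, with the release helper and field layout assumed rather than taken from the driver:

    /* capture what we need *before* the descriptor goes back to the slab */
    bool callback_active = !!(desc->txd.flags & DMA_PREP_INTERRUPT);
    dma_async_tx_callback callback = desc->txd.callback;
    void *callback_param = desc->txd.callback_param;

    release_desc(chan, desc);        /* hypothetical helper; frees desc */

    /* from here on only the saved copies are used, never desc itself */
    if (callback_active && callback)
        callback(callback_param);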
2014-01-09 | net_dma: mark broken | Dan Williams
commit 77873803363c9e831fc1d1e6895c084279090c22 upstream. net_dma can cause data to be copied to a stale mapping if a copy-on-write fault occurs during dma. The application sees missing data. The following trace is triggered by modifying the kernel to WARN if it ever triggers copy-on-write on a page that is undergoing dma: WARNING: CPU: 24 PID: 2529 at lib/dma-debug.c:485 debug_dma_assert_idle+0xd2/0x120() ioatdma 0000:00:04.0: DMA-API: cpu touching an active dma mapped page [pfn=0x16bcd9] Modules linked in: iTCO_wdt iTCO_vendor_support ioatdma lpc_ich pcspkr dca CPU: 24 PID: 2529 Comm: linbug Tainted: G W 3.13.0-rc1+ #353 00000000000001e5 ffff88016f45f688 ffffffff81751041 ffff88017ab0ef70 ffff88016f45f6d8 ffff88016f45f6c8 ffffffff8104ed9c ffffffff810f3646 ffff8801768f4840 0000000000000282 ffff88016f6cca10 00007fa2bb699349 Call Trace: [<ffffffff81751041>] dump_stack+0x46/0x58 [<ffffffff8104ed9c>] warn_slowpath_common+0x8c/0xc0 [<ffffffff810f3646>] ? ftrace_pid_func+0x26/0x30 [<ffffffff8104ee86>] warn_slowpath_fmt+0x46/0x50 [<ffffffff8139c062>] debug_dma_assert_idle+0xd2/0x120 [<ffffffff81154a40>] do_wp_page+0xd0/0x790 [<ffffffff811582ac>] handle_mm_fault+0x51c/0xde0 [<ffffffff813830b9>] ? copy_user_enhanced_fast_string+0x9/0x20 [<ffffffff8175fc2c>] __do_page_fault+0x19c/0x530 [<ffffffff8175c196>] ? _raw_spin_lock_bh+0x16/0x40 [<ffffffff810f3539>] ? trace_clock_local+0x9/0x10 [<ffffffff810fa1f4>] ? rb_reserve_next_event+0x64/0x310 [<ffffffffa0014c00>] ? ioat2_dma_prep_memcpy_lock+0x60/0x130 [ioatdma] [<ffffffff8175ffce>] do_page_fault+0xe/0x10 [<ffffffff8175c862>] page_fault+0x22/0x30 [<ffffffff81643991>] ? __kfree_skb+0x51/0xd0 [<ffffffff813830b9>] ? copy_user_enhanced_fast_string+0x9/0x20 [<ffffffff81388ea2>] ? memcpy_toiovec+0x52/0xa0 [<ffffffff8164770f>] skb_copy_datagram_iovec+0x5f/0x2a0 [<ffffffff8169d0f4>] tcp_rcv_established+0x674/0x7f0 [<ffffffff816a68c5>] tcp_v4_do_rcv+0x2e5/0x4a0 [..] ---[ end trace e30e3b01191b7617 ]--- Mapped at: [<ffffffff8139c169>] debug_dma_map_page+0xb9/0x160 [<ffffffff8142bf47>] dma_async_memcpy_pg_to_pg+0x127/0x210 [<ffffffff8142cce9>] dma_memcpy_pg_to_iovec+0x119/0x1f0 [<ffffffff81669d3c>] dma_skb_copy_datagram_iovec+0x11c/0x2b0 [<ffffffff8169d1ca>] tcp_rcv_established+0x74a/0x7f0: ...the problem is that the receive path falls back to cpu-copy in several locations and this trace is just one of the areas. A few options were considered to fix this: 1/ sync all dma whenever a cpu copy branch is taken 2/ modify the page fault handler to hold off while dma is in-flight Option 1 adds yet more cpu overhead to an "offload" that struggles to compete with cpu-copy. Option 2 adds checks for behavior that is already documented as broken when using get_user_pages(). At a minimum a debug mode is warranted to catch and flag these violations of the dma-api vs get_user_pages(). Thanks to David for his reproducer. Cc: Dave Jiang <dave.jiang@intel.com> Cc: Vinod Koul <vinod.koul@intel.com> Cc: Alexander Duyck <alexander.h.duyck@intel.com> Reported-by: David Whipple <whipple@securedatainnovations.ch> Acked-by: David S. Miller <davem@davemloft.net> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
2013-12-04 | ioatdma: fix selection of 16 vs 8 source path | Dan Williams
commit 21e96c7313486390c694919522a76dfea0a86c59 upstream. When performing continuations there are implied sources that need to be added to the source count. Quoting dma_set_maxpq: /* dma_maxpq - reduce maxpq in the face of continued operations * @dma - dma device with PQ capability * @flags - to check if DMA_PREP_CONTINUE and DMA_PREP_PQ_DISABLE_P are set * * When an engine does not support native continuation we need 3 extra * source slots to reuse P and Q with the following coefficients: * 1/ {00} * P : remove P from Q', but use it as a source for P' * 2/ {01} * Q : use Q to continue Q' calculation * 3/ {00} * Q : subtract Q from P' to cancel (2) * * In the case where P is disabled we only need 1 extra source: * 1/ {01} * Q : use Q to continue Q' calculation */ ...fix the selection of the 16 source path to take these implied sources into account. Note this also kills the BUG_ON(src_cnt < 9) check in __ioat3_prep_pq16_lock(). Besides not accounting for implied sources the check is redundant given we already made the path selection. Cc: Dave Jiang <dave.jiang@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
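A sketch of the corrected path selection, derived directly from the dma_maxpq comment quoted above (the helper names are made up for illustration):

    #include <linux/dmaengine.h>

    /* effective source count once a continuation's implied sources are added */
    static int pq_src_cnt(int src_cnt, unsigned long flags)
    {
        if (flags & DMA_PREP_CONTINUE)
            return src_cnt + ((flags & DMA_PREP_PQ_DISABLE_P) ? 1 : 3);
        return src_cnt;
    }

    /* decide on the 16-source descriptor path from the real total,
     * not from the caller-visible src_cnt alone */
    static bool use_pq16_path(int src_cnt, unsigned long flags)
    {
        return pq_src_cnt(src_cnt, flags) > 8;
    }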
2013-12-04 | ioatdma: fix sed pool selection | Dan Williams
commit 5d48b9b5d80e3aa38a5161565398b1e48a650573 upstream. The array to lookup the sed pool based on the number of sources (pq16_idx_to_sedi) is 16 entries and expects a max source index. However, we pass the total source count which runs off the end of the array when src_cnt == 16. The minimal fix is to just pass src_cnt-1, but given we know the source count is > 8 we can just calculate the sed pool by (src_cnt - 2) >> 3. Cc: Dave Jiang <dave.jiang@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Dan Williams <dan.j.williams@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
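The arithmetic of the fix as a fragment (the lookup-table name is the one the changelog mentions):

    /* before: pq16_idx_to_sedi[src_cnt] reads past the end of the 16-entry
     * table when src_cnt == 16, because it expects a maximum source *index* */

    /* after: this path is only taken for src_cnt > 8, so derive the pool
     * directly; with this formula 9 sources select pool 0 and 10..16
     * sources select pool 1 */
    int sed_pool = (src_cnt - 2) >> 3;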
2013-10-13 | dmaengine: imx-dma: fix slow path issue in prep_dma_cyclic | Michael Grzeschik
commit edc530fe7ee5a562680615d2e7cd205879c751a7 upstream. When perparing cyclic_dma buffers by the sound layer, it will dump the following lockdep trace. The leading snd_pcm_action_single get called with read_lock_irq called. To fix this, we change the kcalloc call from GFP_KERNEL to GFP_ATOMIC. WARNING: at kernel/lockdep.c:2740 lockdep_trace_alloc+0xcc/0x114() DEBUG_LOCKS_WARN_ON(irqs_disabled_flags(flags)) Modules linked in: CPU: 0 PID: 832 Comm: aplay Not tainted 3.11.0-20130823+ #903 Backtrace: [<c000b98c>] (dump_backtrace+0x0/0x10c) from [<c000bb28>] (show_stack+0x18/0x1c) r6:c004c090 r5:00000009 r4:c2e0bd18 r3:00404000 [<c000bb10>] (show_stack+0x0/0x1c) from [<c02f397c>] (dump_stack+0x20/0x28) [<c02f395c>] (dump_stack+0x0/0x28) from [<c001531c>] (warn_slowpath_common+0x54/0x70) [<c00152c8>] (warn_slowpath_common+0x0/0x70) from [<c00153dc>] (warn_slowpath_fmt+0x38/0x40) r8:00004000 r7:a3b90000 r6:000080d0 r5:60000093 r4:c2e0a000 r3:00000009 [<c00153a4>] (warn_slowpath_fmt+0x0/0x40) from [<c004c090>] (lockdep_trace_alloc+0xcc/0x114) r3:c03955d8 r2:c03907db [<c004bfc4>] (lockdep_trace_alloc+0x0/0x114) from [<c008f16c>] (__kmalloc+0x34/0x118) r6:000080d0 r5:c3800120 r4:000080d0 r3:c040a0f8 [<c008f138>] (__kmalloc+0x0/0x118) from [<c019c95c>] (imxdma_prep_dma_cyclic+0x64/0x168) r7:a3b90000 r6:00000004 r5:c39d8420 r4:c3847150 [<c019c8f8>] (imxdma_prep_dma_cyclic+0x0/0x168) from [<c024618c>] (snd_dmaengine_pcm_trigger+0xa8/0x160) [<c02460e4>] (snd_dmaengine_pcm_trigger+0x0/0x160) from [<c0241fa8>] (soc_pcm_trigger+0x90/0xb4) r8:c058c7b0 r7:c3b8140c r6:c39da560 r5:00000001 r4:c3b81000 [<c0241f18>] (soc_pcm_trigger+0x0/0xb4) from [<c022ece4>] (snd_pcm_do_start+0x2c/0x38) r7:00000000 r6:00000003 r5:c058c7b0 r4:c3b81000 [<c022ecb8>] (snd_pcm_do_start+0x0/0x38) from [<c022e958>] (snd_pcm_action_single+0x40/0x6c) [<c022e918>] (snd_pcm_action_single+0x0/0x6c) from [<c022ea64>] (snd_pcm_action_lock_irq+0x7c/0x9c) r7:00000003 r6:c3b810f0 r5:c3b810f0 r4:c3b81000 [<c022e9e8>] (snd_pcm_action_lock_irq+0x0/0x9c) from [<c023009c>] (snd_pcm_common_ioctl1+0x7f8/0xfd0) r8:c3b7f888 r7:005407b8 r6:c2c991c0 r5:c3b81000 r4:c3b81000 r3:00004142 [<c022f8a4>] (snd_pcm_common_ioctl1+0x0/0xfd0) from [<c023117c>] (snd_pcm_playback_ioctl1+0x464/0x488) [<c0230d18>] (snd_pcm_playback_ioctl1+0x0/0x488) from [<c02311d4>] (snd_pcm_playback_ioctl+0x34/0x40) r8:c3b7f888 r7:00004142 r6:00000004 r5:c2c991c0 r4:005407b8 [<c02311a0>] (snd_pcm_playback_ioctl+0x0/0x40) from [<c00a14a4>] (vfs_ioctl+0x30/0x44) [<c00a1474>] (vfs_ioctl+0x0/0x44) from [<c00a1fe8>] (do_vfs_ioctl+0x55c/0x5c0) [<c00a1a8c>] (do_vfs_ioctl+0x0/0x5c0) from [<c00a208c>] (SyS_ioctl+0x40/0x68) [<c00a204c>] (SyS_ioctl+0x0/0x68) from [<c0009380>] (ret_fast_syscall+0x0/0x44) r8:c0009544 r7:00000036 r6:bedeaa58 r5:00000000 r4:000000c0 Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Cc: Jonghwan Choi <jhbird.choi@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
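The shape of the fix, as a fragment of the cyclic prep callback (variable names are assumed from the call chain shown in the trace):

    struct scatterlist *sg_list;

    /* snd_pcm_action_lock_irq() reaches this prep callback with a spinlock
     * held and interrupts off, so the allocation must not sleep */
    sg_list = kcalloc(periods + 1, sizeof(*sg_list), GFP_ATOMIC);
    if (!sg_list)
        return NULL;        /* prep callbacks report failure by returning NULL */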
2013-10-13 | dmaengine: imx-dma: fix callback path in tasklet | Michael Grzeschik
commit fcaaba6c7136fe47e5a13352f99a64b019b6d2c5 upstream. We need to free the ld_active list head before jumping into the callback routine. Otherwise the callback could run into issue_pending and change the ld_active list head we are just about to free. This would leave the channel list in a corrupted and undefined state. Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Cc: Jonghwan Choi <jhbird.choi@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
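A sketch of the reordering inside the tasklet (list-head and descriptor field names are assumptions for illustration):

    /* detach the finished descriptor from ld_active *before* invoking the
     * callback: the callback may call issue_pending(), which would otherwise
     * rearrange the very list head we are about to put back on ld_free */
    list_move_tail(imxdmac->ld_active.next, &imxdmac->ld_free);

    if (desc->callback)
        desc->callback(desc->callback_param);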
2013-10-13 | dmaengine: imx-dma: fix lockdep issue between irqhandler and tasklet | Michael Grzeschik
commit 5a276fa6bdf82fd442046969603968c83626ce0b upstream. The tasklet and irqhandler are using spin_lock while other routines are using spin_lock_irqsave/restore. This leads to lockdep issues as described bellow. This patch is changing the code to use spinlock_irq_save/restore in both code pathes. As imxdma_xfer_desc always gets called with spin_lock_irqsave lock held, this patch also removes the spare call inside the routine to avoid double locking. [ 403.358162] ================================= [ 403.362549] [ INFO: inconsistent lock state ] [ 403.366945] 3.10.0-20130823+ #904 Not tainted [ 403.371331] --------------------------------- [ 403.375721] inconsistent {IN-HARDIRQ-W} -> {HARDIRQ-ON-W} usage. [ 403.381769] swapper/0 [HC0[0]:SC1[1]:HE1:SE0] takes: [ 403.386762] (&(&imxdma->lock)->rlock){?.-...}, at: [<c019d77c>] imxdma_tasklet+0x20/0x134 [ 403.395201] {IN-HARDIRQ-W} state was registered at: [ 403.400108] [<c004b264>] mark_lock+0x2a0/0x6b4 [ 403.404798] [<c004d7c8>] __lock_acquire+0x650/0x1a64 [ 403.410004] [<c004f15c>] lock_acquire+0x94/0xa8 [ 403.414773] [<c02f74e4>] _raw_spin_lock+0x54/0x8c [ 403.419720] [<c019d094>] dma_irq_handler+0x78/0x254 [ 403.424845] [<c0061124>] handle_irq_event_percpu+0x38/0x1b4 [ 403.430670] [<c00612e4>] handle_irq_event+0x44/0x64 [ 403.435789] [<c0063a70>] handle_level_irq+0xd8/0xf0 [ 403.440903] [<c0060a20>] generic_handle_irq+0x28/0x38 [ 403.446194] [<c0009cc4>] handle_IRQ+0x68/0x8c [ 403.450789] [<c0008714>] avic_handle_irq+0x3c/0x48 [ 403.455811] [<c0008f84>] __irq_svc+0x44/0x74 [ 403.460314] [<c0040b04>] cpu_startup_entry+0x88/0xf4 [ 403.465525] [<c02f00d0>] rest_init+0xb8/0xe0 [ 403.470045] [<c03e07dc>] start_kernel+0x28c/0x2d4 [ 403.474986] [<a0008040>] 0xa0008040 [ 403.478709] irq event stamp: 50854 [ 403.482140] hardirqs last enabled at (50854): [<c001c6b8>] tasklet_action+0x38/0xdc [ 403.489954] hardirqs last disabled at (50853): [<c001c6a0>] tasklet_action+0x20/0xdc [ 403.497761] softirqs last enabled at (50850): [<c001bc64>] _local_bh_enable+0x14/0x18 [ 403.505741] softirqs last disabled at (50851): [<c001c268>] irq_exit+0x88/0xdc [ 403.513026] [ 403.513026] other info that might help us debug this: [ 403.519593] Possible unsafe locking scenario: [ 403.519593] [ 403.525548] CPU0 [ 403.528020] ---- [ 403.530491] lock(&(&imxdma->lock)->rlock); [ 403.534828] <Interrupt> [ 403.537474] lock(&(&imxdma->lock)->rlock); [ 403.541983] [ 403.541983] *** DEADLOCK *** [ 403.541983] [ 403.547951] no locks held by swapper/0. 
[ 403.551813] [ 403.551813] stack backtrace: [ 403.556222] CPU: 0 PID: 0 Comm: swapper Not tainted 3.10.0-20130823+ #904 [ 403.563039] Backtrace: [ 403.565581] [<c000b98c>] (dump_backtrace+0x0/0x10c) from [<c000bb28>] (show_stack+0x18/0x1c) [ 403.574054] r6:00000000 r5:c05c51d8 r4:c040bd58 r3:00200000 [ 403.579872] [<c000bb10>] (show_stack+0x0/0x1c) from [<c02f398c>] (dump_stack+0x20/0x28) [ 403.587955] [<c02f396c>] (dump_stack+0x0/0x28) from [<c02f29c8>] (print_usage_bug.part.28+0x224/0x28c) [ 403.597340] [<c02f27a4>] (print_usage_bug.part.28+0x0/0x28c) from [<c004b404>] (mark_lock+0x440/0x6b4) [ 403.606682] r8:c004a41c r7:00000000 r6:c040bd58 r5:c040c040 r4:00000002 [ 403.613566] [<c004afc4>] (mark_lock+0x0/0x6b4) from [<c004d844>] (__lock_acquire+0x6cc/0x1a64) [ 403.622244] [<c004d178>] (__lock_acquire+0x0/0x1a64) from [<c004f15c>] (lock_acquire+0x94/0xa8) [ 403.631010] [<c004f0c8>] (lock_acquire+0x0/0xa8) from [<c02f74e4>] (_raw_spin_lock+0x54/0x8c) [ 403.639614] [<c02f7490>] (_raw_spin_lock+0x0/0x8c) from [<c019d77c>] (imxdma_tasklet+0x20/0x134) [ 403.648434] r6:c3847010 r5:c040e890 r4:c38470d4 [ 403.653194] [<c019d75c>] (imxdma_tasklet+0x0/0x134) from [<c001c70c>] (tasklet_action+0x8c/0xdc) [ 403.662013] r8:c0599160 r7:00000000 r6:00000000 r5:c040e890 r4:c3847114 r3:c019d75c [ 403.670042] [<c001c680>] (tasklet_action+0x0/0xdc) from [<c001bd4c>] (__do_softirq+0xe4/0x1f0) [ 403.678687] r7:00000101 r6:c0402000 r5:c059919c r4:00000001 [ 403.684498] [<c001bc68>] (__do_softirq+0x0/0x1f0) from [<c001c268>] (irq_exit+0x88/0xdc) [ 403.692652] [<c001c1e0>] (irq_exit+0x0/0xdc) from [<c0009cc8>] (handle_IRQ+0x6c/0x8c) [ 403.700514] r4:00000030 r3:00000110 [ 403.704192] [<c0009c5c>] (handle_IRQ+0x0/0x8c) from [<c0008714>] (avic_handle_irq+0x3c/0x48) [ 403.712664] r5:c0403f28 r4:c0593ebc [ 403.716343] [<c00086d8>] (avic_handle_irq+0x0/0x48) from [<c0008f84>] (__irq_svc+0x44/0x74) [ 403.724733] Exception stack(0xc0403f28 to 0xc0403f70) [ 403.729841] 3f20: 00000001 00000004 00000000 20000013 c0402000 c04104a8 [ 403.738078] 3f40: 00000002 c0b69620 a0004000 41069264 a03fb5f4 c0403f7c c0403f40 c0403f70 [ 403.746301] 3f60: c004b92c c0009e74 20000013 ffffffff [ 403.751383] r6:ffffffff r5:20000013 r4:c0009e74 r3:c004b92c [ 403.757210] [<c0009e30>] (arch_cpu_idle+0x0/0x4c) from [<c0040b04>] (cpu_startup_entry+0x88/0xf4) [ 403.766161] [<c0040a7c>] (cpu_startup_entry+0x0/0xf4) from [<c02f00d0>] (rest_init+0xb8/0xe0) [ 403.774753] [<c02f0018>] (rest_init+0x0/0xe0) from [<c03e07dc>] (start_kernel+0x28c/0x2d4) [ 403.783051] r6:c03fc484 r5:ffffffff r4:c040a0e0 [ 403.787797] [<c03e0550>] (start_kernel+0x0/0x2d4) from [<a0008040>] (0xa0008040) Signed-off-by: Michael Grzeschik <m.grzeschik@pengutronix.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Cc: Jonghwan Choi <jhbird.choi@samsung.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
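The locking change itself is small; a generic sketch (structure and lock names are placeholders, not the driver's):

    #include <linux/spinlock.h>
    #include <linux/interrupt.h>

    struct my_engine {
        spinlock_t lock;
        /* ... channel and descriptor state ... */
    };

    static void my_dma_tasklet(unsigned long data)
    {
        struct my_engine *engine = (struct my_engine *)data;
        unsigned long flags;

        /* the same lock is taken from the hard-irq handler, so the tasklet
         * must disable interrupts while holding it; a plain spin_lock()
         * here is exactly the IN-HARDIRQ-W vs HARDIRQ-ON-W inconsistency
         * lockdep reports above */
        spin_lock_irqsave(&engine->lock, flags);
        /* ... complete finished descriptors, start the next transfer ... */
        spin_unlock_irqrestore(&engine->lock, flags);
    }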
2013-08-11 | dma: pl330: Fix cyclic transfers | Lars-Peter Clausen
commit fc51446021f42aca8906e701fc2292965aafcb15 upstream. Allocate a descriptor for each period of a cyclic transfer, not just the first. Also since the callback needs to be called for each finished period make sure to initialize the callback and callback_param fields of each descriptor in a cyclic transfer. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
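Conceptually the prep routine becomes a loop over the periods, and the submit path copies the client's callback to each of them. A rough sketch under those assumptions (helper and field names are invented for illustration):

    /* prep: allocate and fill one descriptor per period, not just the first */
    for (i = 0; i < len / period_len; i++) {
        desc = my_get_desc(pch);
        if (!desc)
            goto unwind;
        my_fill_period(desc, dma_addr + i * period_len, period_len, direction);
        if (!first)
            first = desc;
        else
            list_add_tail(&desc->node, &first->node);
    }

    /* submit: the client only set callback/callback_param on the returned
     * descriptor ("last"), so propagate them to every period's descriptor */
    list_for_each_entry(desc, &last->node, node) {
        desc->txd.callback = last->txd.callback;
        desc->txd.callback_param = last->txd.callback_param;
    }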
2013-07-21 | drivers/dma/pl330.c: fix locking in pl330_free_chan_resources() | Bartlomiej Zolnierkiewicz
commit da331ba8e9c5de72a27e50f71105395bba6eebe0 upstream. tasklet_kill() may sleep so call it before taking pch->lock. Fixes following lockup: BUG: scheduling while atomic: cat/2383/0x00000002 Modules linked in: unwind_backtrace+0x0/0xfc __schedule_bug+0x4c/0x58 __schedule+0x690/0x6e0 sys_sched_yield+0x70/0x78 tasklet_kill+0x34/0x8c pl330_free_chan_resources+0x24/0x88 dma_chan_put+0x4c/0x50 [...] BUG: spinlock lockup suspected on CPU#0, swapper/0/0 lock: 0xe52aa04c, .magic: dead4ead, .owner: cat/2383, .owner_cpu: 1 unwind_backtrace+0x0/0xfc do_raw_spin_lock+0x194/0x204 _raw_spin_lock_irqsave+0x20/0x28 pl330_tasklet+0x2c/0x5a8 tasklet_action+0xfc/0x114 __do_softirq+0xe4/0x19c irq_exit+0x98/0x9c handle_IPI+0x124/0x16c gic_handle_irq+0x64/0x68 __irq_svc+0x40/0x70 cpuidle_wrap_enter+0x4c/0xa0 cpuidle_enter_state+0x18/0x68 cpuidle_idle_call+0xac/0xe0 cpu_idle+0xac/0xf0 Signed-off-by: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com> Signed-off-by: Kyungmin Park <kyungmin.park@samsung.com> Acked-by: Jassi Brar <jassisinghbrar@gmail.com> Cc: Vinod Koul <vinod.koul@linux.intel.com> Cc: Tomasz Figa <t.figa@samsung.com> Signed-off-by: Andrew Morton <akpm@linux-foundation.org> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org> Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
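The reordering is straightforward; a sketch close to what the changelog describes (the channel structure and its fields are approximated, not copied from the driver):

    static void pl330_free_chan_resources(struct dma_chan *chan)
    {
        struct dma_pl330_chan *pch = to_pchan(chan);
        unsigned long flags;

        /* tasklet_kill() may sleep, so it must run before pch->lock is
         * taken with interrupts disabled */
        tasklet_kill(&pch->task);

        spin_lock_irqsave(&pch->lock, flags);
        /* ... release the DMA thread and free the descriptor list ... */
        spin_unlock_irqrestore(&pch->lock, flags);
    }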
2013-06-08 | dmatest: do not allow to interrupt ongoing tests | Andy Shevchenko
When the user interrupts ongoing transfers, dmatest may end up with a console lockup, an oops, or a data mismatch. This patch prevents the user from aborting any ongoing test. Documentation is updated accordingly. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reported-by: Will Deacon <will.deacon@arm.com> Tested-by: Will Deacon <will.deacon@arm.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-05-27 | dmaengine: ste_dma40: fix pm runtime ref counting | Rabin Vincent
The pm runtime reference counting of the driver is broken for the case when there is more than one transfer queued, leading to the device being runtime suspend while active. Fix it. Signed-off-by: Rabin Vincent <rabin.vincent@stericsson.com> Acked-by: Linus Walleij <linus.walleij@linaro.org> Cc: stable@vger.kernel.org Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-05-25 | Merge branch 'fixes' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds
Pull slave-dma fixes from Vinod Koul: "We have two patches from Andy & Rafael fixing the Lynxpoint dma" * 'fixes' of git://git.infradead.org/users/vkoul/slave-dma: ACPI / LPSS: register clock device for Lynxpoint DMA properly dma: acpi-dma: parse CSRT to extract additional resources
2013-05-18 | drivers/dma: don't check resource with devm_ioremap_resource | Wolfram Sang
devm_ioremap_resource does sanity checks on the given resource. No need to duplicate this in the driver. Signed-off-by: Wolfram Sang <wsa@the-dreams.de> Acked-by: Stephen Warren <swarren@nvidia.com>
2013-05-14 | dma: acpi-dma: parse CSRT to extract additional resources | Andy Shevchenko
Since we have CSRT only to get additional DMA controller resources, let's get rid of drivers/acpi/csrt.c and move its logic inside ACPI DMA helpers code. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-05-09 | Merge branch 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma | Linus Torvalds
Pull slave-dmaengine updates from Vinod Koul: "This time we have dmatest improvements from Andy along with dw_dmac fixes. He has also done support for acpi for dmanegine. Also we have bunch of fixes going in DT support for dmanegine for various folks. Then Haswell and other ioat changes from Dave and SUDMAC support from Shimoda." * 'for-linus' of git://git.infradead.org/users/vkoul/slave-dma: (53 commits) dma: tegra: implement suspend/resume callbacks dma:of: Use a mutex to protect the of_dma_list dma: of: Fix of_node reference leak dmaengine: sirf: move driver init from module_init to subsys_initcall sudmac: add support for SUDMAC dma: sh: add Kconfig at_hdmac: move to generic DMA binding ioatdma: ioat3_alloc_sed can be static ioatdma: Adding write back descriptor error status support for ioatdma 3.3 ioatdma: S1200 platforms ioatdma channel 2 and 3 falsely advertise RAID cap ioatdma: Adding support for 16 src PQ ops and super extended descriptors ioatdma: Removing hw bug workaround for CB3.x .2 and earlier dw_dmac: add ACPI support dmaengine: call acpi_dma_request_slave_channel as well dma: acpi-dma: introduce ACPI DMA helpers dma: of: Remove unnecessary list_empty check DMA: OF: Check properties value before running be32_to_cpup() on it DMA: of: Constant names ioatdma: skip silicon bug workaround for pq_align for cb3.3 ioatdma: Removing PQ val disable for cb3.3 ...
2013-05-07 | Merge tag 'dt-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc | Linus Torvalds
git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc Pull ARM SoC device tree updates (part 2) from Arnd Bergmann: "These are mostly new device tree bindings for existing drivers, as well as changes to the device tree source files to add support for those devices, and a couple of new boards, most notably Samsung's Exynos5 based Chromebook. The changes depend on earlier platform specific updates and touch the usual platforms: omap, exynos, tegra, mxs, mvebu and davinci." * tag 'dt-for-linus-2' of git://git.kernel.org/pub/scm/linux/kernel/git/arm/arm-soc: (169 commits) ARM: exynos: dts: cros5250: add EC device ARM: dts: Add sbs-battery for exynos5250-snow ARM: dts: Add i2c-arbitrator bus for exynos5250-snow ARM: dts: add mshc controller node for Exynos4x12 SoCs ARM: dts: Add chip-id controller node on Exynos4/5 SoC ARM: EXYNOS: Create virtual I/O mapping for Chip-ID controller using device tree ARM: davinci: da850-evm: add SPI flash support ARM: davinci: da850: override SPI DT node device name ARM: davinci: da850: add SPI1 DT node spi/davinci: add DT binding documentation spi/davinci: no wildcards in DT compatible property ARM: dts: mvebu: Convert mvebu device tree files to 64 bits ARM: dts: mvebu: introduce internal-regs node ARM: dts: mvebu: Convert all the mvebu files to use the range property ARM: dts: mvebu: move all peripherals inside soc ARM: dts: mvebu: fix cpus section indentation ARM: davinci: da850: add EHRPWM & ECAP DT node ARM/dts: OMAP3: fix pinctrl-single configuration ARM: dts: Add OMAP3430 SDP NOR flash memory binding ARM: dts: Add NOR flash bindings for OMAP2420 H4 ...
2013-05-06 | Merge branch 'late/dt' into next/dt2 | Arnd Bergmann
This is support for the ARM Chromebook, originally scheduled as a "late" pull request. Since it's already late now, we can combine this into the existing next/dt2 branch. * late/dt: ARM: exynos: dts: cros5250: add EC device ARM: dts: Add sbs-battery for exynos5250-snow ARM: dts: Add i2c-arbitrator bus for exynos5250-snow ARM: dts: Add chip-id controller node on Exynos4/5 SoC ARM: EXYNOS: Create virtual I/O mapping for Chip-ID controller using device tree
2013-05-02 | dma: tegra: implement suspend/resume callbacks | Laxman Dewangan
Implement suspend/resume callbacks to store APB DMA channel's register on suspend and restore APB DMA channel's register on resume. Signed-off-by: Laxman Dewangan <ldewangan@nvidia.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-05-02 | Merge branch 'topic/of' into for-linus | Vinod Koul
Conflicts: include/linux/dmaengine.h Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-05-02 | dma:of: Use a mutex to protect the of_dma_list | Lars-Peter Clausen
Currently the OF DMA code uses a spin lock to protect the of_dma_list from concurrent access and a per-controller reference count to protect the controller from being freed while a request operation is in progress. If of_dma_controller_free() is called for a controller whose reference count is not zero it will return -EBUSY and not remove the controller. This is fine up to a point, but it leaves the question of what the caller of of_dma_controller_free() is supposed to do if the controller couldn't be freed. The only viable solution for the caller is to spin on of_dma_controller_free() until it returns success, e.g. do { ret = of_dma_controller_free(dev->of_node); } while (ret == -EBUSY); This is rather ugly and unnecessary, and none of the current users of of_dma_controller_free() check its return value anyway. Instead protect the list with a mutex. The mutex will be held as long as a request operation is in progress. So if of_dma_controller_free() is called while a request operation is in progress it will be put to sleep and only wake up once the request operation has finished. This means that it is no longer possible to register or unregister OF DMA controllers from a context where it's not possible to sleep. But I doubt that we'll ever need this. Also rename of_dma_get_controller back to of_dma_find_controller. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-05-02 | dma: of: Fix of_node reference leak | Lars-Peter Clausen
of_dma_request_slave_channel() currently does not drop the reference to the dma_spec of_node if no DMA controller matching the of_node could be found. This patch fixes it by always calling of_node_put(). Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Acked-by: Arnd Bergmann <arnd@arndb.de> Reviewed-by: Jon Hunter <jon-hunter@ti.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
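The essence of the fix is an unconditional of_node_put() on the node referenced by the dma_spec. A sketch of the request path under that assumption:

    struct of_phandle_args dma_spec;
    struct of_dma *ofdma;
    struct dma_chan *chan = NULL;

    /* ... of_parse_phandle_with_args() filled dma_spec and took a
     * reference on dma_spec.np ... */

    ofdma = of_dma_find_controller(&dma_spec);
    if (ofdma)
        chan = ofdma->of_dma_xlate(&dma_spec, ofdma);

    /* drop the of_node reference whether or not a controller matched */
    of_node_put(dma_spec.np);

    return chan;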
2013-05-02 | dmaengine: sirf: move driver init from module_init to subsys_initcall | Barry Song
If we initialize the DMA driver via module_init, many devices are initialized earlier than the DMA controller and will fail to get a DMA channel. This moves the dmaengine init earlier than device_initcall and makes DMA available to all devices. Reported-by: Renwei Wu <Renwei.Wu@csr.com> Signed-off-by: Barry Song <Baohua.Song@csr.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
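The change itself is just the initcall level; sketched here with an assumed driver symbol name:

    static int __init sirfsoc_dma_init(void)
    {
        return platform_driver_register(&sirfsoc_dma_driver);
    }

    /* register at subsys_initcall time, before device_initcall-level client
     * drivers probe, so their dma_request_channel() calls can succeed */
    subsys_initcall(sirfsoc_dma_init);

    static void __exit sirfsoc_dma_exit(void)
    {
        platform_driver_unregister(&sirfsoc_dma_driver);
    }
    module_exit(sirfsoc_dma_exit);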
2013-04-30 | sudmac: add support for SUDMAC | Shimoda, Yoshihiro
Some Renesas USB modules have SUDMAC. This patch supports it using the shdma-base driver. Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Reviewed-by: Guennadi Liakhovetski <g.liakhovetski@gmx.de> Acked-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-30 | dma: sh: add Kconfig | Shimoda, Yoshihiro
This patch adds Kconfig in the drivers/dma/sh. This patch also adds a new config "SH_DMAE_BASE" and the "config SH_DMAE" depends on it. Since some drivers (e.g. sh_mmcif.c) depends on shdma-base.c if CONFIG_DMA_ENGINE=y, the "config SH_DMAE_BASE" is set as "bool". Signed-off-by: Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com> Acked-by: Kuninori Morimoto <kuninori.morimoto.gx@renesas.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-30 | at_hdmac: move to generic DMA binding | Ludovic Desroches
Update at_hdmac driver to support generic DMA device tree binding. Devices can still request channel with dma_request_channel() then it doesn't break DMA for non DT boards. Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com> Acked-by: Nicolas Ferre <nicolas.ferre@atmel.com> Acked-by: Jean-Christophe PLAGNIOL-VILLARD <plagnioj@jcrosoft.com> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-22 | Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net | David S. Miller
Conflicts: drivers/net/ethernet/emulex/benet/be_main.c drivers/net/ethernet/intel/igb/igb_main.c drivers/net/wireless/brcm80211/brcmsmac/mac80211_if.c include/net/scm.h net/batman-adv/routing.c net/ipv4/tcp_input.c The e{uid,gid} --> {uid,gid} credentials fix conflicted with the cleanup in net-next to now pass cred structs around. The be2net driver had a bug fix in 'net' that overlapped with the VLAN interface changes by Patrick McHardy in net-next. An IGB conflict existed because in 'net' the build_skb() support was reverted, and in 'net-next' there was a comment style fix within that code. Several batman-adv conflicts were resolved by making sure that all calls to batadv_is_my_mac() are changed to have a new bat_priv first argument. Eric Dumazet's TS ECR fix in TCP in 'net' conflicted with the F-RTO rewrite in 'net-next', mostly overlapping changes. Thanks to Stephen Rothwell and Antonio Quartulli for help with several of these merge resolutions. Signed-off-by: David S. Miller <davem@davemloft.net>
2013-04-18 | dmaengine: at_hdmac: fix race condition in atc_advance_work() | Ludovic Desroches
The BUG_ON() directive is triggered probably due to a latency modification following inclusion of commit c10d73671ad3 ("softirq: reduce latencies"). This condition has not been met before 3.9-rc1 and doesn't trigger without this patch. We now make sure that DMA channel is idle before calling atc_complete_all() which makes the BUG_ON() "protection" useless. Signed-off-by: Ludovic Desroches <ludovic.desroches@atmel.com> Signed-off-by: Nicolas Ferre <nicolas.ferre@atmel.com> Acked-by: Vinod Koul <vinod.koul@intel.com> Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
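A sketch of the guard that replaces trusting the BUG_ON() (helper names follow the changelog's naming; the body is abridged):

    static void atc_advance_work(struct at_dma_chan *atchan)
    {
        /* bail out while the channel is still busy: completing everything
         * here would race with the transfer that is still in flight */
        if (atc_chan_is_enabled(atchan))
            return;

        if (list_empty(&atchan->active_list) ||
            list_is_singular(&atchan->active_list))
            atc_complete_all(atchan);
        /* ... otherwise complete the first descriptor and start the next ... */
    }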
2013-04-16 | ioatdma: ioat3_alloc_sed can be static | Fengguang Wu
Reported-by: Fengguang Wu <fengguang.wu@intel.com> Signed-off-by: Fengguang Wu <fengguang.wu@intel.com> Acked-by: Dave Jiang <dave.jiang@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Adding write back descriptor error status support for ioatdma 3.3 | Dave Jiang
v3.3 provides support for write back descriptor error status. This allows reporting of errors in a descriptor field. In supporting this, certain errors such as P/Q validation errors no longer halt the channel. The DMA engine can continue to execute until the end of the chain and allow software to report the "errors" up the stack. We are also going to mask those error interrupts and handle them when the "chain" has completed at the end. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: S1200 platforms ioatdma channel 2 and 3 falsely advertise RAID cap | Dave Jiang
This workaround checks for channels 2 and 3 and removes the RAID cap. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Adding support for 16 src PQ ops and super extended descriptors | Dave Jiang
v3.3 introduced 16-source PQ operations, along with super extended descriptors (SEDs) to support them. This patch adds support for the 16-source ops and in turn adds the super extended descriptors for those ops. Five SED pools are created depending on the descriptor sizes. An SED is a descriptor of 64 bytes or larger and must be physically contiguous. A kmem cache pool is created for allocating the software descriptor that manages the hardware descriptor. The super extended descriptor takes the place of the extended descriptor for certain operations and is "attached" to the op descriptor during operation. This is a new feature for ioatdma v3.3. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Removing hw bug workaround for CB3.x .2 and earlier | Dave Jiang
CB3.2 and earlier hardware has silicon bugs whose workarounds are no longer needed on the new hardware. We no longer have to use a NULL op to signal an interrupt for RAID ops. This code makes sure the legacy workarounds only happen on legacy hardware. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | dw_dmac: add ACPI support | Andy Shevchenko
Since we have proper ACPI DMA helpers implemented, the driver may use it. This patch introduces custom filter function together with acpi_device_id table. Signed-off-by: Mika Westerberg <mika.westerberg@linux.intel.com> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | dmaengine: call acpi_dma_request_slave_channel as well | Andy Shevchenko
The slave device could be enumerated by ACPI. In that case the dma_request_slave_channel should use the acpi_dma_request_slave_channel() helper. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | dma: acpi-dma: introduce ACPI DMA helpers | Andy Shevchenko
There is a new generic API to get a DMA channel for a slave device (commit 9a6cecc8 "dmaengine: add helper function to request a slave DMA channel"). In similar fashion to the DT case (commit aa3da644 "of: Add generic device tree DMA helpers") we introduce helpers to the DMAC drivers which are enumerated by ACPI. The proposed extension provides the following API calls: acpi_dma_controller_register(), devm_acpi_dma_controller_register() acpi_dma_controller_free(), devm_acpi_dma_controller_free() acpi_dma_simple_xlate() acpi_dma_request_slave_chan_by_index() acpi_dma_request_slave_chan_by_name() The first two should be used, for example, at probe() and remove() of the corresponding DMAC driver. At the register stage the DMAC driver supplies a custom xlate() function to translate a struct dma_spec into struct dma_chan. Accordingly to the ACPI Fixed DMA resource specification the only two pieces of information the slave device has are the channel id and the request line (slave id). Those two are represented by struct dma_spec. The acpi_dma_request_slave_chan_by_index() provides access to the specifix FixedDMA resource by its index. Whereas dma_request_slave_channel() takes a string parameter to identify the DMA resources required by the slave device. To make a slave device driver work with both DeviceTree and ACPI enumeration a simple convention is established: "tx" corresponds to the index 0 and "rx" to the index 1. In case of robust configuration the slave device driver unfortunately needs to call acpi_dma_request_slave_chan_by_index() directly. Additionally the patch provides "managed" version of the register/free pair i.e. devm_acpi_dma_controller_register() and devm_acpi_dma_controller_free(). Usually, the driver uses only devm_acpi_dma_controller_register(). Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Reviewed-by: Mika Westerberg <mika.westerberg@linux.intel.com> Acked-by: Rafael J. Wysocki <rafael.j.wysocki@intel.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
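From a DMAC driver's point of view, registration at probe time then looks roughly like the sketch below (the xlate body is only a placeholder and the driver names are invented):

    #include <linux/acpi_dma.h>
    #include <linux/platform_device.h>

    /* translate a FixedDMA resource (channel id + request line) into a channel */
    static struct dma_chan *my_acpi_xlate(struct acpi_dma_spec *dma_spec,
                                          struct acpi_dma *adma)
    {
        /* match dma_spec->chan_id and dma_spec->slave_id against the
         * controller state stashed in adma->data, return the dma_chan */
        return NULL;        /* placeholder */
    }

    static int my_dmac_probe(struct platform_device *pdev)
    {
        /* ... set up the controller hardware ... */
        return devm_acpi_dma_controller_register(&pdev->dev, my_acpi_xlate,
                                                 NULL);
    }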
2013-04-15 | dma: of: Remove unnecessary list_empty check | Lars-Peter Clausen
list_for_each_entry is able to handle empty lists just fine, there is no need to make sure that the list is non empty. Signed-off-by: Lars-Peter Clausen <lars@metafoo.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | DMA: OF: Check properties value before running be32_to_cpup() on it | Viresh Kumar
In the of_dma_controller_register() routine we pass the result of of_get_property() directly as a parameter to be32_to_cpup(). If the property doesn't exist we will get a crash. This patch changes the code to first check that we got a valid property and only then run be32_to_cpup() on it. Signed-off-by: Viresh Kumar <viresh.kumar@linaro.org> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
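The defensive pattern, assuming the "#dma-cells" property that the OF DMA registration code reads:

    const __be32 *prop;
    int nbcells = 0;

    /* never feed a NULL pointer into be32_to_cpup(); the property may be
     * missing from the controller node */
    prop = of_get_property(np, "#dma-cells", NULL);
    if (prop)
        nbcells = be32_to_cpup(prop);

    if (!nbcells) {
        pr_err("%s: #dma-cells property missing or invalid\n", __func__);
        return -EINVAL;
    }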
2013-04-15 | DMA: of: Constant names | Markus Pargmann
No DMA of-function alters the name, so this patch changes the name arguments to be constant. Most drivers will probably request DMA channels using a constant name. Signed-off-by: Markus Pargmann <mpa@pengutronix.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: skip silicon bug workaround for pq_align for cb3.3 | Dave Jiang
The alignment workaround is only necessary for cb3.2 or earlier platforms. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Removing PQ val disable for cb3.3 | Dave Jiang
The PQ Val ops work on the newer hardware so we should actually provide support for it and remove the disabling bits. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: skip legacy reset bits since v3.3 platform doesn't need it | Dave Jiang
Make it so that only 3.2 and earlier platforms need the PCI config register clearing, since this implementation does not have those registers. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: channel reset scheme fixup on Intel Atom S1200 platforms | Dave Jiang
The Intel Atom S1200 family ioatdma changed the channel reset behavior. It does a reset similar to PCI FLR by resetting all the MSIX registers. We have to re-init msix interrupts because of this. This workaround is only specific to this platform and is not expected to carry over to the later generations. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Add 64bit chansts register read for ioat v3.3 | Dave Jiang
The channel status register for v3.3 is now 64bit. Use readq if available on v3.3 platforms. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Adding PCI IDs for Intel Atom S1200 product family ioatdma devices | Dave Jiang
These should be good for the IOAT DMA devices on the Intel Atom S1269, S1279, and S1289 platforms. We are also adding IOAT v3.3 definition for the new DMA engine. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | ioatdma: Adding Haswell devid for ioatdma | Dave Jiang
Add Haswell PCI device IDs for ioatdma and simplify the detection of certain Xeon CPUs that have alignment bugs, so that modifications can be made in a single place going forward. Signed-off-by: Dave Jiang <dave.jiang@intel.com> Acked-by: Dan Williams <djbw@fb.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | dmaengine: OMAP: Register SDMA controller with Device Tree DMA driver | Jon Hunter
If the device-tree blob is present during boot, then register the SDMA controller with the device-tree DMA driver so that we can use device-tree to look-up DMA client information. Signed-off-by: Jon Hunter <jon-hunter@ti.com> Reviewed-by: Felipe Balbi <balbi@ti.com> Acked-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com> Acked-by: Tony Lindgren <tony@atomide.com> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | dw_dmac: remove unnecessary ENODEV check | Andy Shevchenko
If CONFIG_OF is not set the of_node of the device will always be NULL. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | dmaengine: dw_dmac: simplify master selection | Arnd Bergmann
The patch to add the common DMA binding added a dummy dw_dma_slave structure into the dw_dma_chan structure in order to configure the masters correctly. It turns out that this can be simplified if we pick the DMA masters in the dwc_alloc_chan_resources function instead and save them in the dw_dma_chan structure directly. This could be simplified further once all users that today use dw_dma_slave for configuration get converted to device tree based setup instead. Signed-off-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Cc: linux-arm-kernel@lists.infradead.org Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>
2013-04-15 | dw_dmac: rename DT related methods to reflect their belonging | Andy Shevchenko
Since we will have not only DT cases in future let's rename DT related methods to reflect their belonging. The rename was done as follows: struct dw_dma_filter_args -> struct dw_dma_of_filter_args dw_dma_generic_filter() -> dw_dma_of_filter() dw_dma_xlate() -> dw_dma_of_xlate() dw_dma_id_table -> dw_dma_of_id_table There is no functional change. Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com> Acked-by: Viresh Kumar <viresh.kumar@linaro.org> Acked-by: Arnd Bergmann <arnd@arndb.de> Signed-off-by: Vinod Koul <vinod.koul@intel.com>