<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/drivers/gpu/drm/ttm, branch v3.8.1</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/drivers/gpu/drm/ttm?h=v3.8.1</id>
<link rel='self' href='https://git.amat.us/linux/atom/drivers/gpu/drm/ttm?h=v3.8.1'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2013-02-08T00:44:31Z</updated>
<entry>
<title>drm/ttm: fix fence locking in ttm_buffer_object_transfer, 2nd try</title>
<updated>2013-02-08T00:44:31Z</updated>
<author>
<name>Daniel Vetter</name>
<email>daniel.vetter@ffwll.ch</email>
</author>
<published>2013-01-14T14:08:14Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=ff7c60c580d9722f820d85c9c58ca55ecc1ee7c4'/>
<id>urn:sha1:ff7c60c580d9722f820d85c9c58ca55ecc1ee7c4</id>
<content type='text'>
This fixes up

commit e8e89622ed361c46bf90ba4828e685a8b603f7e5
Author: Daniel Vetter &lt;daniel.vetter@ffwll.ch&gt;
Date:   Tue Dec 18 22:25:11 2012 +0100

    drm/ttm: fix fence locking in ttm_buffer_object_transfer

which leaves behind a might_sleep in atomic context, since the
fence_lock spinlock is held over a kmalloc(GFP_KERNEL) call. The fix
is to revert the above commit and only take the lock where we need it,
around the call to -&gt;sync_obj_ref.

v2: Fix up things noticed by Maarten Lankhorst:
- Brown paper bag locking bug.
- No need for kzalloc if we clear the entire thing on the next line.
- Check for bo-&gt;sync_obj (a totally unlikely race, but someone else
  could still have snuck in) and clear fbo-&gt;sync_obj if it is
  already cleared.

Reported-by: Dave Airlie &lt;airlied@gmail.com&gt;
Cc: Jerome Glisse &lt;jglisse@redhat.com&gt;
Cc: Maarten Lankhorst &lt;maarten.lankhorst@canonical.com&gt;
Signed-off-by: Daniel Vetter &lt;daniel.vetter@ffwll.ch&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
<entry>
<title>ttm: on move memory failure don't leave a node dangling</title>
<updated>2013-01-21T03:45:23Z</updated>
<author>
<name>Dave Airlie</name>
<email>airlied@gmail.com</email>
</author>
<published>2013-01-16T05:58:34Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=014b34409fb2015f63663b6cafdf557fdf289628'/>
<id>urn:sha1:014b34409fb2015f63663b6cafdf557fdf289628</id>
<content type='text'>
If we have a move notify callback and moving fails, we call move notify
the opposite way around; however, this ends up with *mem containing the
mm_node from the bo, which means we free it twice. This is a follow-on
to the previous fix.

Reviewed-by: Jerome Glisse &lt;jglisse@redhat.com&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
<entry>
<title>ttm: don't destroy old mm_node on memcpy failure</title>
<updated>2013-01-21T03:45:02Z</updated>
<author>
<name>Dave Airlie</name>
<email>airlied@gmail.com</email>
</author>
<published>2013-01-16T04:25:44Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=630541863b29f88c7ab34e647758344e4cd1eafd'/>
<id>urn:sha1:630541863b29f88c7ab34e647758344e4cd1eafd</id>
<content type='text'>
When we are using memcpy to move objects around and the memcpy fails,
due to lack of memory to populate or a failure to finish the copy, we
don't want to destroy the mm_node that has been copied into old_copy.

While working on a new kms driver that uses memcpy, if I overallocated
bo's up to the memory limits and eviction failed, the machine would oops
soon afterwards because an active bo still had an already freed drm_mm
node embedded in it; freeing it a second time didn't end well.

Reviewed-by: Jerome Glisse &lt;jglisse@redhat.com&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
<entry>
<title>drm/ttm: fix fence locking in ttm_buffer_object_transfer</title>
<updated>2013-01-08T08:35:31Z</updated>
<author>
<name>Daniel Vetter</name>
<email>daniel.vetter@ffwll.ch</email>
</author>
<published>2012-12-18T21:25:11Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=e8e89622ed361c46bf90ba4828e685a8b603f7e5'/>
<id>urn:sha1:e8e89622ed361c46bf90ba4828e685a8b603f7e5</id>
<content type='text'>
Noticed while reviewing the fence locking in the radeon pageflip
handler.

v2: Instead of grabbing the bdev-&gt;fence_lock in object_transfer, just
move the single callsite of that function a few lines so that it is
protected by the fence_lock. Suggested by Jerome Glisse.

v3: Fix typo in commit message.

Reviewed-by: Jerome Glisse &lt;jglisse@redhat.com&gt;
Signed-off-by: Daniel Vetter &lt;daniel.vetter@ffwll.ch&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
<entry>
<title>drm/ttm: fix ttm_bo_cleanup_refs_and_unlock delayed handling</title>
<updated>2012-12-19T21:46:20Z</updated>
<author>
<name>Maarten Lankhorst</name>
<email>maarten.lankhorst@canonical.com</email>
</author>
<published>2012-12-19T17:21:10Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=0953e76e91f4b6206cef50bd680696dc6bf1ef99'/>
<id>urn:sha1:0953e76e91f4b6206cef50bd680696dc6bf1ef99</id>
<content type='text'>
Fix regression introduced by 85b144f860176
"drm/ttm: call ttm_bo_cleanup_refs with reservation and lru lock held, v3"

The slowpath ttm_bo_cleanup_refs_and_unlock accidentally tried to
increase the refcount on &amp;bo-&gt;sync_obj instead of bo-&gt;sync_obj.

The compiler didn't complain, since sync_obj_ref takes a void pointer,
so it was still valid C.

This could result in lockups, memory corruption, and warnings like the
following when graphics card VRAM usage is high:

------------[ cut here ]------------
WARNING: at include/linux/kref.h:42 radeon_fence_ref+0x2c/0x40()
Hardware name: System Product Name
Pid: 157, comm: X Not tainted 3.7.0-rc7-00520-g85b144f-dirty #174
Call Trace:
[&lt;ffffffff81058c84&gt;] ? warn_slowpath_common+0x74/0xb0
[&lt;ffffffff8129273c&gt;] ? radeon_fence_ref+0x2c/0x40
[&lt;ffffffff8125e95c&gt;] ? ttm_bo_cleanup_refs_and_unlock+0x18c/0x2d0
[&lt;ffffffff8125f17c&gt;] ? ttm_mem_evict_first+0x1dc/0x2a0
[&lt;ffffffff81264452&gt;] ? ttm_bo_man_get_node+0x62/0xb0
[&lt;ffffffff8125f4ce&gt;] ? ttm_bo_mem_space+0x28e/0x340
[&lt;ffffffff8125fb0c&gt;] ? ttm_bo_move_buffer+0xfc/0x170
[&lt;ffffffff810de172&gt;] ? kmem_cache_alloc+0xb2/0xc0
[&lt;ffffffff8125fc15&gt;] ? ttm_bo_validate+0x95/0x110
[&lt;ffffffff8125ff7c&gt;] ? ttm_bo_init+0x2ec/0x3b0
[&lt;ffffffff8129419a&gt;] ? radeon_bo_create+0x18a/0x200
[&lt;ffffffff81293e80&gt;] ? radeon_bo_clear_va+0x40/0x40
[&lt;ffffffff812a5342&gt;] ? radeon_gem_object_create+0x92/0x160
[&lt;ffffffff812a575c&gt;] ? radeon_gem_create_ioctl+0x6c/0x150
[&lt;ffffffff812a529f&gt;] ? radeon_gem_object_free+0x2f/0x40
[&lt;ffffffff81246b60&gt;] ? drm_ioctl+0x420/0x4f0
[&lt;ffffffff812a56f0&gt;] ? radeon_gem_pwrite_ioctl+0x20/0x20
[&lt;ffffffff810f53a4&gt;] ? do_vfs_ioctl+0x2e4/0x4e0
[&lt;ffffffff810e5588&gt;] ? vfs_read+0x118/0x160
[&lt;ffffffff810f55ec&gt;] ? sys_ioctl+0x4c/0xa0
[&lt;ffffffff810e5851&gt;] ? sys_read+0x51/0xa0
[&lt;ffffffff814b0612&gt;] ? system_call_fastpath+0x16/0x1b

Signed-off-by: Maarten Lankhorst &lt;maarten.lankhorst@canonical.com&gt;
Reported-by: Markus Trippelsdorf &lt;markus@trippelsdorf.de&gt;
Acked-by: Paul Menzel &lt;paulepanter@users.sourceforge.net&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
<entry>
<title>drm/ttm: remove no_wait_reserve, v3</title>
<updated>2012-12-10T10:21:30Z</updated>
<author>
<name>Maarten Lankhorst</name>
<email>m.b.lankhorst@gmail.com</email>
</author>
<published>2012-11-28T11:25:44Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=97a875cbdf89a4638eea57c2b456c7cc4e3e8b21'/>
<id>urn:sha1:97a875cbdf89a4638eea57c2b456c7cc4e3e8b21</id>
<content type='text'>
All items on the lru list are always reservable, so this is a stupid
thing to keep. Not only that, it is used in a way which would
guarantee deadlocks if it were ever set to block on reserve.

This is a lot of churn, but mostly because of the removal of the
argument, which can be nested arbitrarily deeply in many places.

No change of code in this patch except the removal of the
no_wait_reserve argument; the previous patch removed its use.

v2:
 - Warn if -EBUSY is returned on reservation, all objects on the list
   should be reservable. Adjusted patch slightly due to conflicts.
v3:
 - Focus on no_wait_reserve removal only.

Signed-off-by: Maarten Lankhorst &lt;maarten.lankhorst@canonical.com&gt;
Reviewed-by: Thomas Hellstrom &lt;thellstrom@vmware.com&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
<entry>
<title>drm/ttm: cope with reserved buffers on lru list in ttm_mem_evict_first, v2</title>
<updated>2012-12-10T10:21:22Z</updated>
<author>
<name>Maarten Lankhorst</name>
<email>m.b.lankhorst@gmail.com</email>
</author>
<published>2012-11-28T11:25:43Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=e7ab20197be3ee5fd75441e1cff0c7cdfea5bf1a'/>
<id>urn:sha1:e7ab20197be3ee5fd75441e1cff0c7cdfea5bf1a</id>
<content type='text'>
Replace the goto loop with a simple for each loop, and only run the
delayed destroy cleanup if we can reserve the buffer first.

No race occurs, since the lru lock is never dropped any more. An empty
list and a list full of unreservable buffers both cause -EBUSY to be
returned, which is identical to the previous situation, because
previously buffers on the lru list were always guaranteed to be
reservable.

This should work, since ttm currently guarantees that items on the lru
are always reservable, and blocking on a reservation while holding some
bo is enough to run into a deadlock.

Currently this is not a concern, since removal from the lru list and
reservation are always done atomically, but when this guarantee no
longer holds we have to handle this situation or end up with possible
deadlocks.

Signed-off-by: Maarten Lankhorst &lt;maarten.lankhorst@canonical.com&gt;
Reviewed-by: Thomas Hellstrom &lt;thellstrom@vmware.com&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
<entry>
<title>drm/ttm: cope with reserved buffers on swap list in ttm_bo_swapout, v2</title>
<updated>2012-12-10T10:21:06Z</updated>
<author>
<name>Maarten Lankhorst</name>
<email>m.b.lankhorst@gmail.com</email>
</author>
<published>2012-11-28T11:25:42Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=2b7b3ad2fb8f904ae9ba7ca71323bc11c0978d91'/>
<id>urn:sha1:2b7b3ad2fb8f904ae9ba7ca71323bc11c0978d91</id>
<content type='text'>
Replace the while loop with a simple for each loop, and only run the
delayed destroy cleanup if we can reserve the buffer first.

No race occurs, since the lru lock is never dropped any more. An empty list
and a list full of unreservable buffers both cause -EBUSY to be returned,
which is identical to the previous situation.

Signed-off-by: Maarten Lankhorst &lt;maarten.lankhorst@canonical.com&gt;
Reviewed-by: Thomas Hellstrom &lt;thellstrom@vmware.com&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
<entry>
<title>drm/ttm: call ttm_bo_cleanup_refs with reservation and lru lock held, v3</title>
<updated>2012-12-10T10:21:03Z</updated>
<author>
<name>Maarten Lankhorst</name>
<email>maarten.lankhorst@canonical.com</email>
</author>
<published>2012-11-29T11:36:54Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=85b144f860176ec18db927d6d9ecdfb24d9c6483'/>
<id>urn:sha1:85b144f860176ec18db927d6d9ecdfb24d9c6483</id>
<content type='text'>
By removing the unlocking of the lru lock and retaking it immediately,
a race is removed where the bo is taken off the swap list or the lru
list between the unlock and relock. As such the cleanup_refs code can
be simplified: it will attempt to call ttm_bo_wait non-blockingly, and
if that fails it will drop the locks and perform a blocking wait, or
return an error if no_wait_gpu was set.

The need for looping is also eliminated, since swapout and
evict_mem_first will always follow the destruction path, where no new
fence is allowed to be attached. As far as I can see this may already
have been the case, but the unlocking/relocking required a complicated
loop to deal with re-reservation.

Changes since v1:
 - Simplify no_wait_gpu case by folding it in with empty ddestroy.
 - Hold a reservation while calling ttm_bo_cleanup_memtype_use again.
Changes since v2:
 - Do not remove bo from lru list while waiting

Signed-off-by: Maarten Lankhorst &lt;maarten.lankhorst@canonical.com&gt;
Reviewed-by: Thomas Hellstrom &lt;thellstrom@vmware.com&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
<entry>
<title>drm/ttm: change fence_lock to inner lock</title>
<updated>2012-12-10T10:09:58Z</updated>
<author>
<name>Maarten Lankhorst</name>
<email>m.b.lankhorst@gmail.com</email>
</author>
<published>2012-11-28T11:25:39Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=4154f051e74e6a5db174c8f4fc8a2f9c8a6b2541'/>
<id>urn:sha1:4154f051e74e6a5db174c8f4fc8a2f9c8a6b2541</id>
<content type='text'>
This requires changing the order in ttm_bo_cleanup_refs_or_queue to
take the reservation first, as there is otherwise no race-free way to
take the lru lock before the fence_lock.

Signed-off-by: Maarten Lankhorst &lt;maarten.lankhorst@canonical.com&gt;
Reviewed-by: Thomas Hellstrom &lt;thellstrom@vmware.com&gt;
Signed-off-by: Dave Airlie &lt;airlied@redhat.com&gt;
</content>
</entry>
</feed>
