<feed xmlns='http://www.w3.org/2005/Atom'>
<title>linux/drivers/md, branch v3.2.2</title>
<subtitle>Linux kernel source tree</subtitle>
<id>https://git.amat.us/linux/atom/drivers/md?h=v3.2.2</id>
<link rel='self' href='https://git.amat.us/linux/atom/drivers/md?h=v3.2.2'/>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/'/>
<updated>2012-01-26T00:13:42Z</updated>
<entry>
<title>dm: do not forward ioctls from logical volumes to the underlying device</title>
<updated>2012-01-26T00:13:42Z</updated>
<author>
<name>Paolo Bonzini</name>
<email>pbonzini@redhat.com</email>
</author>
<published>2012-01-12T15:01:29Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=10f176d299c6598b3a94d1257f39410da5ba08e5'/>
<id>urn:sha1:10f176d299c6598b3a94d1257f39410da5ba08e5</id>
<content type='text'>
commit ec8013beddd717d1740cfefb1a9b900deef85462 upstream.

A logical volume can map to just part of the underlying physical volume.
In this case, it must be treated like a partition.

Based on a patch from Alasdair G Kergon.

Cc: Alasdair G Kergon &lt;agk@redhat.com&gt;
Cc: dm-devel@redhat.com
Signed-off-by: Paolo Bonzini &lt;pbonzini@redhat.com&gt;
Signed-off-by: Linus Torvalds &lt;torvalds@linux-foundation.org&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>md/raid1: perform bad-block tests for WriteMostly devices too.</title>
<updated>2012-01-26T00:13:20Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2012-01-08T14:41:51Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=45c5b2b95f829018321325129d824a6154d5f955'/>
<id>urn:sha1:45c5b2b95f829018321325129d824a6154d5f955</id>
<content type='text'>
commit 307729c8bc5b5a41361af8af95906eee7552acb1 upstream.

We normally try to avoid reading from write-mostly devices, but when
we do we really have to check for bad blocks and be sure not to
try reading them.

With the current code, best_good_sectors might not get set and that
causes zero-length read requests to be sent down, which is very
confusing.

This bug was introduced in commit d2eb35acfdccbe2 and so the patch
is suitable for 3.1.x and 3.2.x.

Reported-and-tested-by: Michał Mirosław &lt;mirq-linux@rere.qmqm.pl&gt;
Reported-and-tested-by: Art -kwaak- van Breemen &lt;ard@telegraafnet.nl&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
Signed-off-by: Greg Kroah-Hartman &lt;gregkh@suse.de&gt;

</content>
</entry>
<entry>
<title>md/bitmap: It is OK to clear bits during recovery.</title>
<updated>2011-12-22T22:57:48Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2011-12-22T22:57:48Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=961902c0f8240175729274cd14198872f42072b7'/>
<id>urn:sha1:961902c0f8240175729274cd14198872f42072b7</id>
<content type='text'>
commit d0a4bb492772ce5c4bdfba3744a99ed6f6fb238f introduced a
regression which is annoying but fairly harmless.

When writing to an array that is undergoing recovery (a spare
is being integrated into the array), the write will set bits
in the bitmap, but they will not be cleared when the
write completes.

For bits covering areas that have not been recovered yet this is not a
problem as the recovery will clear the bits.  However, bits set in an
already-recovered region will stay set and never be cleared.
This doesn't risk data integrity.  The only negatives are:
 - next time there is a crash, more resyncing than necessary will
   be done.
 - the bitmap doesn't look clean, which is confusing.

While an array is recovering we don't want to update the
'events_cleared' setting in the bitmap but we do still want to clear
bits that have very recently been set - providing they were written to
the recovering device.

So split those two needs - which previously both depended on 'success'
- and always clear the bit if the write went to all devices.

Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
</content>
</entry>
<entry>
<title>md: don't give up looking for spares on first failure-to-add</title>
<updated>2011-12-22T22:57:19Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2011-12-22T22:57:19Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=60fc13702a1b35118c1548e9c257fa038cecb658'/>
<id>urn:sha1:60fc13702a1b35118c1548e9c257fa038cecb658</id>
<content type='text'>
Before performing a recovery we try to remove any spares that
might not be working, then add any that might have become relevant.

Currently we abort on the first spare that cannot be added.
This is a false optimisation.
It is conceivable that - depending on rules in the personality - a
subsequent spare might be accepted.
Also the loop does other things like count the available spares and
reset the 'recovery_offset' value.

If we abort early these might not happen properly.

So remove the early abort.

In particular, if you have an array that is undergoing recovery and
which has extra spares, then the recovery may not restart after a
reboot, as the count of 'spares' might end up as zero.

Reported-by: Anssi Hannula &lt;anssi.hannula@iki.fi&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
</content>
</entry>
<entry>
<title>md/raid5: ensure correct assessment of drives during degraded reshape.</title>
<updated>2011-12-22T22:57:00Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2011-12-22T22:57:00Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=30d7a4836847bdb10b32c78a4879d4aebe0f193b'/>
<id>urn:sha1:30d7a4836847bdb10b32c78a4879d4aebe0f193b</id>
<content type='text'>
While reshaping a degraded array (as when reshaping a RAID0 by first
converting it to a degraded RAID4) we currently get confused about
which devices are in_sync.  In most cases we get it right, but in the
region that is being reshaped we need to treat non-failed devices as
in-sync when we have the data but haven't actually written it out yet.

Reported-by: Adam Kwolek &lt;adam.kwolek@intel.com&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
</content>
</entry>
<entry>
<title>md/linear: fix hot-add of devices to linear arrays.</title>
<updated>2011-12-22T22:56:55Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2011-12-22T22:56:55Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=09cd9270ea52e0f9851528e8ed028073f96b3c34'/>
<id>urn:sha1:09cd9270ea52e0f9851528e8ed028073f96b3c34</id>
<content type='text'>
commit d70ed2e4fafdbef0800e73942482bb075c21578b
broke hot-add to a linear array.
After that commit, metadata is not written to devices until they
have been fully integrated into the array as determined by
saved_raid_disk.  That patch arranged to clear that field after
a recovery completed.

However for linear arrays, there is no recovery - the integration is
instantaneous.  So we need to explicitly clear the saved_raid_disk
field.

Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
</content>
</entry>
<entry>
<title>md: raid5 crash during degradation</title>
<updated>2011-12-09T03:26:11Z</updated>
<author>
<name>Adam Kwolek</name>
<email>adam.kwolek@intel.com</email>
</author>
<published>2011-12-09T03:26:11Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=5d8c71f9e5fbdd95650be00294d238e27a363b5c'/>
<id>urn:sha1:5d8c71f9e5fbdd95650be00294d238e27a363b5c</id>
<content type='text'>
NULL pointer access causes crash in raid5 module.

Signed-off-by: Adam Kwolek &lt;adam.kwolek@intel.com&gt;
Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
</content>
</entry>
<entry>
<title>md/raid5: never wait for bad-block acks on failed device.</title>
<updated>2011-12-08T05:27:57Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2011-12-08T05:27:57Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=9283d8c5af4cdcb809e655acdf4be368afec8b58'/>
<id>urn:sha1:9283d8c5af4cdcb809e655acdf4be368afec8b58</id>
<content type='text'>
Once a device is failed we really want to completely ignore it.
It should go away soon anyway.

In particular the presence of bad blocks on it should not cause us to
block as we won't be trying to write there anyway.

So as soon as we can check whether a device is Faulty, do so, and if
it is, pretend that it is already gone.

Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
</content>
</entry>
<entry>
<title>md: ensure new badblocks are handled promptly.</title>
<updated>2011-12-08T05:26:08Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2011-12-08T05:26:08Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=8bd2f0a05b361e07d48bb34398593f5f523946b3'/>
<id>urn:sha1:8bd2f0a05b361e07d48bb34398593f5f523946b3</id>
<content type='text'>
When we mark blocks as bad we need them to be acknowledged by the
metadata handler promptly.

For an in-kernel metadata handler that was already being done.  But
for an external metadata handler we need to alert it of the change by
sending a notification through the sysfs file.  This adds that
notification.

Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
</content>
</entry>
<entry>
<title>md: bad blocks shouldn't cause a Blocked status on a Faulty device.</title>
<updated>2011-12-08T05:22:48Z</updated>
<author>
<name>NeilBrown</name>
<email>neilb@suse.de</email>
</author>
<published>2011-12-08T05:22:48Z</published>
<link rel='alternate' type='text/html' href='https://git.amat.us/linux/commit/?id=52c64152a935e63d9ff73ce823730c9a23dedbff'/>
<id>urn:sha1:52c64152a935e63d9ff73ce823730c9a23dedbff</id>
<content type='text'>
Once a device is marked Faulty the badblocks - whether acknowledged or
not - become irrelevant.  So they shouldn't cause the device to be
marked as Blocked.

Without this patch, a process might write "-blocked" to clear the
Blocked status, but while that will correctly fail the device, it
won't remove the apparent 'blocked' status.

Signed-off-by: NeilBrown &lt;neilb@suse.de&gt;
</content>
</entry>
</feed>
