| author | NeilBrown <neilb@suse.de> | 2012-07-09 11:34:13 +1000 |
|---|---|---|
| committer | Ben Hutchings <ben@decadent.org.uk> | 2012-07-25 04:11:13 +0100 |
| commit | 33c050f877ba0d95c43a2bc81f3e6870fa8b0a6b (patch) | |
| tree | 01b778795e78c99bf01e90902af8eacd0115033f /drivers/md | |
| parent | cb480c94c8e89014cdb54201a413bf9d36f6cd41 (diff) | |
md/raid1: fix use-after-free bug in RAID1 data-check code.
commit 2d4f4f3384d4ef4f7c571448e803a1ce721113d5 upstream.
This bug has been present ever since data-check was introduced
in 2.6.16. However, it would only fire if a data-check were
done on a degraded array, which was only possible if the array
had 3 or more devices. This is certainly possible, but is quite
uncommon.
Since hot-replace was added in 3.3, it can happen more often, as
the same condition can arise if not all possible replacements are
present.
The problem is that as soon as we submit the last read request, the
'r1_bio' structure could be freed at any time, so we really should
stop looking at it. If the last device is being read from, we will
stop looking at it. However, if the last device is not due to be read
from, we will still check the bio pointer in the r1_bio, but the
r1_bio might already have been freed.
So use the read_targets counter to make sure we stop looking for bios
to submit as soon as we have submitted them all.
This fix is suitable for any -stable kernel since 2.6.16.
Reported-by: Arnold Schulz <arnysch@gmx.net>
Signed-off-by: NeilBrown <neilb@suse.de>
[bwh: Backported to 3.2: no doubling of conf->raid_disks; we don't have
hot-replace support]
Signed-off-by: Ben Hutchings <ben@decadent.org.uk>
Diffstat (limited to 'drivers/md')
| -rw-r--r-- | drivers/md/raid1.c | 3 |

1 file changed, 2 insertions(+), 1 deletion(-)
```diff
diff --git a/drivers/md/raid1.c b/drivers/md/raid1.c
index 7af60ec98c6..58f00553995 100644
--- a/drivers/md/raid1.c
+++ b/drivers/md/raid1.c
@@ -2378,9 +2378,10 @@ static sector_t sync_request(struct mddev *mddev, sector_t sector_nr, int *skipp
 	 */
 	if (test_bit(MD_RECOVERY_REQUESTED, &mddev->recovery)) {
 		atomic_set(&r1_bio->remaining, read_targets);
-		for (i=0; i<conf->raid_disks; i++) {
+		for (i = 0; i < conf->raid_disks && read_targets; i++) {
 			bio = r1_bio->bios[i];
 			if (bio->bi_end_io == end_sync_read) {
+				read_targets--;
 				md_sync_acct(bio->bi_bdev, nr_sectors);
 				generic_make_request(bio);
 			}
```
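To make the failure mode concrete, below is a minimal user-space C sketch of the pattern the patch fixes. It is not the kernel code: `fake_r1bio`, `submit_read` and `end_sync_read_sim` are hypothetical stand-ins, and submission completes synchronously to stand in for a device that finishes before the loop resumes. It models a degraded three-device array with two read targets, where the completion of the last submitted read frees the shared structure.

```c
/*
 * Hypothetical sketch of the use-after-free fixed above; the names
 * are illustrative, not the kernel's.  "Submitting" a read completes
 * it immediately, so the last submission frees the shared structure.
 */
#include <stdio.h>
#include <stdlib.h>

#define NDISKS 3

struct fake_r1bio {
	int remaining;               /* reads still in flight */
	int is_read_target[NDISKS];  /* slots that hold a sync read */
};

/* Completion handler: frees the structure when the last read finishes. */
static void end_sync_read_sim(struct fake_r1bio *r1)
{
	if (--r1->remaining == 0) {
		printf("last read completed, freeing r1_bio\n");
		free(r1);
	}
}

/* Stand-in for generic_make_request(): completes immediately here. */
static void submit_read(struct fake_r1bio *r1)
{
	end_sync_read_sim(r1);
}

int main(void)
{
	struct fake_r1bio *r1 = calloc(1, sizeof(*r1));
	int i, read_targets = 2;

	/* Degraded layout: slots 0 and 1 are read, slot 2 has no device. */
	r1->is_read_target[0] = 1;
	r1->is_read_target[1] = 1;
	r1->remaining = read_targets;

	/*
	 * Fixed loop: "&& read_targets" stops the scan once every read
	 * has been submitted.  Without it, the i == 2 iteration would
	 * read r1->is_read_target[2] after the i == 1 submission had
	 * already freed r1 -- the use-after-free described above.
	 */
	for (i = 0; i < NDISKS && read_targets; i++) {
		if (r1->is_read_target[i]) {
			read_targets--;
			submit_read(r1);  /* may free r1 on the last read */
		}
	}
	return 0;
}
```

The design point mirrors the patch: `read_targets` is a local counter, so the loop condition short-circuits on it before ever dereferencing the possibly-freed structure.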