Since I have no spare DS to test my assumption on, I set up a similar configuration on one of my Raspberries:
3 devices with 3 partitions each: partition 1 used for DSM (/dev/md0), partition 2 used for SWAP (/dev/md1), and partition 3 (/dev/md2) used for an SP.
Then I unmounted and stopped /dev/md2 and removed /dev/sda. /dev/md0 and /dev/md1 degraded as expected, while /dev/md2 was no longer listed in /proc/mdstat and was never notified of the disk removal.
Then I reinserted /dev/sda and added /dev/sda1, /dev/sda2 and /dev/sda3 back. /dev/md0 and /dev/md1 were rebuilt because they were active when I removed /dev/sda, but /dev/md2 was immediately available again. This confirms my assumption: if a RAID is stopped, mdadm does not detect the removal of a disk, and reinserting the disk does not trigger a rebuild.
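For reference, the core of that degrade/re-add test boils down to roughly the following commands (a minimal sketch: the sysfs delete/rescan stands in for physically pulling and reinserting the disk, and the SCSI host number is an assumption):
Code:
# Cleanly stop the RAID5 so mdadm no longer monitors it
sudo umount /dev/md2                      # assumes md2 was mounted
sudo mdadm --stop /dev/md2

# Stand-in for physically pulling /dev/sda
echo 1 | sudo tee /sys/block/sda/device/delete

cat /proc/mdstat                          # md0/md1 degraded, md2 gone

# Stand-in for reinserting the disk; host0 is an assumption
echo "- - -" | sudo tee /sys/class/scsi_host/host0/scan

# The arrays that were active need the partitions re-added and rebuild
sudo mdadm --manage /dev/md0 --add /dev/sda1
sudo mdadm --manage /dev/md1 --add /dev/sda2

# The stopped RAID5 simply reassembles from its superblocks, no rebuild
sudo mdadm --assemble /dev/md2 /dev/sda3 /dev/sdb3 /dev/sdc3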
I created a small script in order to document all the steps.
Code:
pi@raspberrypi-bookworm64-lite-regression-gpt:~ $ sudo ./setupADM.sh 1
/bin/bash: warning: setlocale: LC_ALL: cannot change locale (en_US.UTF-8)
*** Removing mdadm config
rm: cannot remove '/etc/mdadm/mdadm.conf': No such file or directory
*** Umounting RAIDs
umount: /dev/md0: not mounted.
umount: /dev/md1: not mounted.
umount: /dev/md2: not mounted.
*** Wipe RAID partition
umount: /dev/md0: not mounted.
/dev/md0: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
umount: /dev/md1: not mounted.
/dev/md1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
umount: /dev/md2: not mounted.
/dev/md2: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef
*** Stopping RAIDs
mdadm: stopped /dev/md0
mdadm: stopped /dev/md1
mdadm: stopped /dev/md2
*** Wipe partitions
umount: /dev/sda1: not mounted.
umount: /dev/sda2: not mounted.
umount: /dev/sda3: not mounted.
/dev/sda1: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/sda2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/sda3: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 8 bytes were erased at offset 0x1d9bffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
umount: /dev/sdb1: not mounted.
umount: /dev/sdb2: not mounted.
umount: /dev/sdb3: not mounted.
/dev/sdb1: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/sdb2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/sdb3: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/sdb: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 8 bytes were erased at offset 0x3a3dffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdb: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
umount: /dev/sdc1: not mounted.
umount: /dev/sdc2: not mounted.
umount: /dev/sdc3: not mounted.
/dev/sdc1: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/sdc2: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/sdc3: 4 bytes were erased at offset 0x00001000 (linux_raid_member): fc 4e 2b a9
/dev/sdc: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdc: 8 bytes were erased at offset 0xede9ffe00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdc: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
*** Zero superblock
mdadm: Couldn't open /dev/sda1 for write - not zeroing
mdadm: Couldn't open /dev/sdb1 for write - not zeroing
mdadm: Unrecognised md component device - /dev/sdc1
mdadm: Couldn't open /dev/sda2 for write - not zeroing
mdadm: Couldn't open /dev/sdb2 for write - not zeroing
mdadm: Couldn't open /dev/sdc2 for write - not zeroing
mdadm: Couldn't open /dev/sda3 for write - not zeroing
mdadm: Couldn't open /dev/sdb3 for write - not zeroing
mdadm: Couldn't open /dev/sdc3 for write - not zeroing
*** Allocate partitions
Creating new GPT entries in memory.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
Creating new GPT entries in memory.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
Creating new GPT entries in memory.
The operation has completed successfully.
The operation has completed successfully.
The operation has completed successfully.
*** Creating RAIDs
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 1046528K
Continue creating array? mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 1046528K
Continue creating array? mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
mdadm: layout defaults to left-symmetric
mdadm: layout defaults to left-symmetric
mdadm: chunk size defaults to 512K
mdadm: size set to 1046528K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
*** Formating RAID
mke2fs 1.47.0 (5-Feb-2023)
Creating filesystem with 261632 4k blocks and 65408 inodes
Filesystem UUID: a1d3c91c-c74e-40c7-84c6-d244196e4631
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
mke2fs 1.47.0 (5-Feb-2023)
Creating filesystem with 261632 4k blocks and 65408 inodes
Filesystem UUID: b9cc8b81-9ecd-47f6-ab79-18bda14ad2cb
Superblock backups stored on blocks:
32768, 98304, 163840, 229376
Allocating group tables: done
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done
mke2fs 1.47.0 (5-Feb-2023)
Creating filesystem with 523264 4k blocks and 130816 inodes
Filesystem UUID: 62e9477d-a929-4d02-bccb-b54dcae91fd1
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 sdc3[3] sdb3[1] sda3[0]
2093056 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
resync=DELAYED
md1 : active raid1 sdc2[2] sdb2[1] sda2[0]
1046528 blocks super 1.2 [3/3] [UUU]
resync=DELAYED
md0 : active raid1 sdc1[2] sdb1[1] sda1[0]
1046528 blocks super 1.2 [3/3] [UUU]
[=>...................] resync = 7.5% (79296/1046528) finish=7.1min speed=2265K/sec
unused devices: <none>
... waiting for RAID1 and RAID5 builds to finish with watch /proc/mdstat ...
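Why the stopped RAID5 comes back without a rebuild: cleanly stopping an array leaves identical event counters in all member superblocks, so mdadm finds nothing to reconcile on reassembly. A quick way to check this on the members (a sketch, filtered to the relevant fields):
Code:
# Identical 'Events' counters on all members mean a clean assembly
# with no resync; a diverged counter would mark that member as stale
for dev in /dev/sda3 /dev/sdb3 /dev/sdc3; do
    echo "== $dev =="
    sudo mdadm --examine "$dev" | grep -E 'Events|Array State'
done

# After reassembly the array should report a clean state
sudo mdadm --detail /dev/md2 | grep -E 'State :|Events :'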

I have given the script on GitHub a somewhat more descriptive name. Whether this works on a Syno I don't know; they have fiddled with Linux quite a bit. But you are welcome to donate a spare Syno to me so I can verify it on a Syno as well.