Lost SHR volume - Progress, but need help

Status
Closed for further replies.

Diverge

Member
Joined
20 Jul 2013
Posts
6
Reaction score
0
Points
0
Hello, I made an account here because the English Synology forums don't have a data recovery section, so I'm hoping someone here understands English and can point me in the right direction.

To make a long story short, I was adding a new disk to a 3-disk SHR array, and at the start of adding the disk it gave me an error. I figured I'd reboot and try again, but it got stuck shutting down, so after waiting a while I hard powered off. When I powered back on, the data volume was gone. I just made some great progress today with the help of someone experienced in data recovery (Remy), but I could use some help from people familiar with Synology systems.

DiskStation> fdisk -l
Rich (BBCode):
Disk /dev/sda: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sda1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sda2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary
/dev/sda3             588      243201  1948788912   f Win95 Ext'd (LBA)

Disk /dev/sdc: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdc1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sdc2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary

Disk /dev/sdb: 2000.3 GB, 2000398934016 bytes
255 heads, 63 sectors/track, 243201 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

   Device Boot      Start         End      Blocks  Id System
/dev/sdb1               1         311     2490240  fd Linux raid autodetect
Partition 1 does not end on cylinder boundary
/dev/sdb2             311         572     2097152  fd Linux raid autodetect
Partition 2 does not end on cylinder boundary

I'm not sure where that sda3 partition (type f, Win95 Ext'd LBA) came from or what it is.
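
Side note for anyone reading along: type f (Win95 Ext'd LBA) is an extended-partition container rather than FAT, and Synology normally keeps the data partition inside it as a logical partition (sd?5). A quick read-only check for whether any logical entry survived, assuming the busybox fdisk on DSM supports -u (most builds do):
Code:
cat /proc/partitions   # sda5 would show up here if the kernel still sees a logical partition
fdisk -l -u /dev/sda   # -u prints start/end in sectors and sidesteps the cylinder-boundary warnings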

Using testdisk (partition table type: none, no alignment), I was able to find my data partition (DiskStation:2):
Rich (BBCode):
Results
   P ext4                     0   4  5   310   7 17    4980352 [1.42.6-3202]
     ext4 blocksize=4096 Large file Sparse superblock, 2549 MB / 2431 MiB
   P Linux md 0.9 RAID        0   4  5   310   9 19    4980480 [md0]
     md 0.90.0 B.Endian Raid 1: devices 0(8,1)* 1(8,17) 2(8,33), 2550 MB / 2431 MiB
   P Linux SWAP 2           310   9 20   571  28 17    4194160
     SWAP2 version 1, pagesize=4096, 2147 MB / 2047 MiB
   P Linux md 0.9 RAID      310   9 20   571  30 35    4194304 [md1]
     md 0.90.0 B.Endian Raid 1: devices 0(8,2)* 1(8,18) 2(8,34), 2147 MB / 2048 MiB
   P Linux md 1.x RAID      588 112  5 243200  72 48 3897559304 [DiskStation:2]
     md 1.x L.Endian Raid 5 - Array Slot : 0 (0, 1, 2, failed, failed, failed), 1995 GB / 1858 GiB

interface_write()
   P ext4                     0   4  5   310   7 17    4980352 [1.42.6-3202]
   P Linux md 0.9 RAID        0   4  5   310   9 19    4980480 [md0]
   P Linux SWAP 2           310   9 20   571  28 17    4194160
   P Linux md 0.9 RAID      310   9 20   571  30 35    4194304 [md1]
   P Linux md 1.x RAID      588 112  5 243200  72 48 3897559304 [DiskStation:2]

I then calculated the byte offset of the DiskStation:2 partition and attached loop devices at that offset with losetup:
Code:
CHS to LBA: (C*255*63) + (H*63) + (S-1) = (588*255*63) + (112*63) + (5-1) = 9453280
Offset in bytes = 9453280 * 512 = 4840079360
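
The formula behind this is LBA = (C*heads + H)*sectors_per_track + (S-1), with heads=255 and sectors=63 from the fdisk output above. A read-only sanity check of the result, assuming an md 1.2 member (its superblock sits 8 sectors past the partition start and begins with the magic a92b4efc, stored little-endian) and that hexdump is available on the box:
Code:
dd if=/dev/sda bs=512 skip=$((9453280 + 8)) count=1 2>/dev/null | hexdump -C | head -n 1
# the first four bytes should be: fc 4e 2b a9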

Rich (BBCode):
DiskStation> losetup -o 4840079360 /dev/loop1 /dev/sda
DiskStation> losetup -o 4840079360 /dev/loop2 /dev/sdb
DiskStation> losetup -o 4840079360 /dev/loop3 /dev/sdc
DiskStation> losetup
/dev/loop1: 545112064 /dev/sda
/dev/loop2: 545112064 /dev/sdb
/dev/loop3: 545112064 /dev/sdc
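
One thing worth noting: the 545112064 that losetup prints looks wrong at first glance, but it appears to be just the byte offset truncated to 32 bits (presumably a 32-bit printf in busybox), so the loop devices are most likely set up as intended. Quick check, assuming the shell does 64-bit arithmetic:
Code:
echo $(( 4840079360 % 4294967296 ))   # prints 545112064, i.e. the offset modulo 2^32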

Examining the new loop devices:
Rich (BBCode):
DiskStation> mdadm --examine /dev/loop[1-3]
/dev/loop1:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c7ac0e55:5cb2e968:b09f83b2:85b6356e
           Name : DiskStation:2  (local to host DiskStation)
  Creation Time : Sun May 19 18:10:54 2013
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 3897559680 (1858.50 GiB 1995.55 GB)
     Array Size : 7795118592 (3717.00 GiB 3991.10 GB)
  Used Dev Size : 3897559296 (1858.50 GiB 1995.55 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 26c354ea:9679fee7:b4337841:8d291201

    Update Time : Tue Jun 11 12:00:06 2013
       Checksum : 2a2f9fe9 - correct
         Events : 18

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 0
   Array State : AAA ('A' == active, '.' == missing)
/dev/loop2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c7ac0e55:5cb2e968:b09f83b2:85b6356e
           Name : DiskStation:2  (local to host DiskStation)
  Creation Time : Sun May 19 18:10:54 2013
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 3897559680 (1858.50 GiB 1995.55 GB)
     Array Size : 7795118592 (3717.00 GiB 3991.10 GB)
  Used Dev Size : 3897559296 (1858.50 GiB 1995.55 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : e48bfdb6:7f6d61e8:3ba638d8:2f633966

    Update Time : Tue Jun 11 12:00:06 2013
       Checksum : f32308ba - correct
         Events : 18

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 1
   Array State : AAA ('A' == active, '.' == missing)
/dev/loop3:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x0
     Array UUID : c7ac0e55:5cb2e968:b09f83b2:85b6356e
           Name : DiskStation:2  (local to host DiskStation)
  Creation Time : Sun May 19 18:10:54 2013
     Raid Level : raid5
   Raid Devices : 3

 Avail Dev Size : 3897559680 (1858.50 GiB 1995.55 GB)
     Array Size : 7795118592 (3717.00 GiB 3991.10 GB)
  Used Dev Size : 3897559296 (1858.50 GiB 1995.55 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : dd8e1640:c64f0593:bbf07fd2:e4bca4ff

    Update Time : Tue Jun 11 12:00:06 2013
       Checksum : ba929230 - correct
         Events : 18

         Layout : left-symmetric
     Chunk Size : 64K

   Device Role : Active device 2
   Array State : AAA ('A' == active, '.' == missing)

Assembled the new array:

Rich (BBCode):
DiskStation> mdadm -A /dev/md2 /dev/loop1 /dev/loop2 /dev/loop3
mdadm: /dev/md2 has been started with 3 drives.

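For anyone repeating this: if the mdadm build supports it, assembling read-only keeps the members untouched until the data has been verified; a hedged variant of the command above:
Code:
mdadm -A -o /dev/md2 /dev/loop1 /dev/loop2 /dev/loop3   # -o / --readonly, if supported
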
Rich (BBCode):
DiskStation> cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid10] [raid6] [raid5] [raid4]
md2 : active raid5 loop1[0] loop3[2] loop2[1]
      3897559296 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]

md1 : active raid1 sda2[0] sdb2[1] sdc2[2]
      2097088 blocks [12/3] [UUU_________]

md0 : active raid1 sda1[0] sdb1[1] sdc1[2]
      2490176 blocks [12/3] [UUU_________]

unused devices: <none>
DiskStation>

Rich (BBCode):
DiskStation> vgdisplay
  --- Volume group ---
  VG Name               vg1000
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  2
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                1
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               3.63 TB
  PE Size               4.00 MB
  Total PE              951552
  Alloc PE / Size       951552 / 3.63 TB
  Free  PE / Size       0 / 0
  VG UUID               6Lf9E0-Uruw-vS94-vaQn-TyxN-35a4-awlTUI

DiskStation> pvdisplay
  --- Physical volume ---
  PV Name               /dev/md2
  VG Name               vg1000
  PV Size               3.63 TB / not usable 2.25 MB
  Allocatable           yes (but full)
  PE Size (KByte)       4096
  Total PE              951552
  Free PE               0
  Allocated PE          951552
  PV UUID               n5I8mM-3o67-f411-xnzi-lmr5-uh09-bnhpVr

DiskStation> lvdisplay
  --- Logical volume ---
  LV Name                /dev/vg1000/lv
  VG Name                vg1000
  LV UUID                YGWx0P-vbWP-wHmD-4Cqs-h8R5-sKQy-Ol2GBp
  LV Write Access        read/write
  LV Status              NOT available
  LV Size                3.63 TB
  Current LE             951552
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto

I don't know what I need to do next. Does this all look okay? Can anyone point me in the right direction from here?

Thanks :)
 
Last edited:

mediamemphis

Member
Joined
8 Mar 2013
Posts
21
Reaction score
0
Points
0
I think you should contact the Synology Support Team and tell them what you have already done.

SHR / LVM is a tough nut to crack.
 

Diverge

Member
Joined
20 Jul 2013
Posts
6
Reaction score
0
Points
0
I think you should contact the Synology Support Team and tell them what you have already done.

SHR / LVM is a tough nut to crack.

I had already spoken with a guy from support. My NAS runs in a virtual machine, so it's outside the scope of official Synology support. He was nice enough to take a look remotely on his own time, but didn't find anything; I assume he didn't look too hard at it. That was long before Remy was helping me and before any of the above was done.

I've thought about contacting him again, but the only way I know of reaching him is via a support ticket, and I don't want to bother them again with a system they don't have to support. So here I am. This is also a fun learning process :)
 
Last edited:

mediamemphis

Member
Joined
8 Mar 2013
Posts
21
Reaction score
0
Points
0
Ah okay ;)

I don't have a VM like this, but I think you first have to sort out the RAID:

syno_poweroff_task
mdadm -S /dev/md0
mdadm -AfR /dev/md2 /dev/sd[abc]5

But I think there is no RAID information on sdc, so it won't work. You can check this with mdadm -E /dev/sd?5.

If the assemble command fails, I would try this (I don't know which RAID level is used on md2):
mdadm -C /dev/md2 -R -l[1|5|6|linear|0] -e1.2 -n3 /dev/sda5 /dev/sdb5 /dev/sdc5 --assume-clean

But this is only an idea, and I think it may be a bad one. What about the tool synopartition?

Really, I think I have no idea ;)
 

Diverge

Member
Joined
20 Jul 2013
Posts
6
Reaction score
0
Points
0
Ah okay ;)

I don't have a VM like this, but I think you first have to sort out the RAID:

syno_poweroff_task
mdadm -S /dev/md0
mdadm -AfR /dev/md2 /dev/sd[abc]5

But I think there is no RAID information on sdc, so it won't work. You can check this with mdadm -E /dev/sd?5.

If the assemble command fails, I would try this (I don't know which RAID level is used on md2):
mdadm -C /dev/md2 -R -l[1|5|6|linear|0] -e1.2 -n3 /dev/sda5 /dev/sdb5 /dev/sdc5 --assume-clean

But this is only an idea, and I think it may be a bad one. What about the tool synopartition?

Really, I think I have no idea ;)

I'm not sure I understand what you're telling me to try :p But I don't think it will work, because there are no sda5, sdb5, or sdc5 partitions on the disks. The only Synology partitions present are sd[abc][1-2], which are the DSM system partition and the swap partition. The volume partition was wiped from the partition tables, which is why I used testdisk to try to find it. I'm pretty sure I did find the data partitions, which are now mapped to loop1, loop2 and loop3, and assembled into md2.

I'm just learning about this stuff as I go, so I could be all wrong :)
 

mediamemphis

Member
Joined
8 Mar 2013
Posts
21
Reaction score
0
Points
0
As far as I know, Synology uses sd?3 for RAID 0/1/5/6 and sd?5 for SHR / LVM, which gets assembled into md2.
I've never heard of (or seen) loop(1,2,3) being used by Syno before, but I'm still learning, too ;)

Maybe someone else knows what to do.
 

Diverge

Member
Joined
20 Jul 2013
Posts
6
Reaction score
0
Points
0
I just did the following:
Rich (BBCode):
DiskStation> vgchange -ay
  1 logical volume(s) in volume group "vg1000" now active

Now lvdisplay reports the LV status as available. Now to figure out what to do next.

edit:

Just did the following, but I'm stuck again:

Rich (BBCode):
DiskStation> lvdisplay /dev/vg1000
  --- Logical volume ---
  LV Name                /dev/vg1000/lv
  VG Name                vg1000
  LV UUID                YGWx0P-vbWP-wHmD-4Cqs-h8R5-sKQy-Ol2GBp
  LV Write Access        read/write
  LV Status              available
  # open                 0
  LV Size                3.63 TB
  Current LE             951552
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     384
  Block device           253:0

DiskStation> mount -ro /dev/vg1000/lv /mnt
mount: can't find /mnt in /etc/fstab
DiskStation> cat /etc/fstab
/dev/root / ext4 defaults 1 1
none /proc proc defaults 0 0

Off to read about fstab.
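
For the record, that failure looks like option parsing rather than a real fstab problem: mount reads -ro as -r -o, swallows /dev/vg1000/lv as the argument to -o, and is left with the single argument /mnt, which it then tries to look up in /etc/fstab. Spelling the option out should work:
Code:
mount -o ro /dev/vg1000/lv /mnt   # read-only mount; mount -r ... is equivalent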
 
Last edited:

Diverge

Member
Joined
20 Jul 2013
Posts
6
Reaction score
0
Points
0
Follow-up: I think I did it!! I see my data :)

Rich (BBCode):
DiskStation> mount /dev/vg1000/lv /mnt -o ro
DiskStation> cd /mnt
DiskStation> ls
@appstore     @database     @download     @iSCSITrg     @spool        @tmp          Plex          aquota.group  aquota.user   downloads     lost+found    music         photo         software      video
DiskStation> cd downloads/
DiskStation> ls
KerbalSpaceProgram  nzb
DiskStation> cd nzb
DiskStation> ls
complete    incomplete
DiskStation>


edit:

If someone can tell me how to repair my partition tables and restore my volume in DSM, it would be much appreciated. Otherwise I'll need to buy a big new external hard drive, copy off all the data, and start over by creating a new volume on the disks in DSM.
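
In the meantime, with the volume mounted read-only, the data can already be copied off. A minimal sketch, assuming an external disk that shows up as /dev/sdq1 (the device name and mountpoint are placeholders, not from this system):
Code:
mkdir -p /tmp/usb
mount /dev/sdq1 /tmp/usb                 # hypothetical USB disk; adjust device and filesystem as needed
cp -a /mnt/photo /mnt/video /tmp/usb/    # repeat for each top-level share
sync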
 