Recovering a file by mounting a Synology RAID1 disk in Ubuntu (mount, mdadm, and losetup issues)
I'm working with a Synology RAID1 system and deleted a file that wasn't yet backed up. Since undeleting it directly doesn't seem to be possible, my plan is to recover the file by mounting a single disk of the RAID1 pair in Ubuntu 20.04 LTS and searching for it there, but I'm running into trouble. An internet search turned up two related Q&As here, but they are fairly old and didn't work in my case.
From lsblk:
sdb      8:16   0  5.5T  0 disk
├─sdb1   8:17   0  2.4G  0 part
├─sdb2   8:18   0    2G  0 part
├─sdb5   8:21   0  2.7T  0 part
└─sdb6   8:22   0  2.7T  0 part
Attempt 1, usual mount attempt:
# mount /dev/sdb1 /mnt/test/
mount: /mnt/test: unknown filesystem type 'linux_raid_member'.
Attempt 2, using mdadm (Ref):
# mdadm --assemble --run /dev/md0 /dev/sdb1
mdadm: no recogniseable superblock on /dev/sdb1
mdadm: /dev/sdb1 has no superblock - assembly aborted
Attempt 3, using losetup (Ref):
# losetup /dev/loop18 /dev/sdb1 -o 1048576
# mount /dev/loop18 /mnt/test/
mount: /mnt/test: wrong fs type, bad option, bad superblock on /dev/loop18, missing codepage or helper program, or other error.
Altogether, it seems I could use some help. Can anyone provide a working approach? Please note that the main goal is to recover the file, not to do it in any particular way.
Edit
# file -s /dev/sdb?
/dev/sdb1: Linux rev 1.0 ext4 filesystem data, UUID=ceb6a1e0-2bde-441f-97dc-db231fc51d41, volume name "1.41.12-1963" (extents) (large files) (huge files)
/dev/sdb2: Linux/i386 swap file (new style), version 1 (4K pages), size 524271 pages, no label, UUID=abbd2e2f-a7a4-4e5d-bd79-55908f8ff79d
/dev/sdb5: Linux Software RAID version 1.2 (1) UUID=a7c85951:8b8b7689:d4ad5498:e14c55d1 name=DiskStation:2 level=1 disks=2
/dev/sdb6: Linux Software RAID version 1.2 (1) UUID=69b042ac:84e2b185:501c0c3e:c12533 name=WOTAN:3 level=1 disks=2
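For what it's worth, the file -s output suggests sdb5 and sdb6 are RAID 1 data members (two different arrays, judging by the names), while sdb1 is the small DSM system partition, so the deleted file most likely lives behind sdb5 or sdb6. A minimal sketch of the usual read-only recovery path, assuming the data is on sdb5; the md device name, the mount point, and the Synology LVM names (vg1000/lv) are assumptions, not from the thread:

# Assemble the data partition read-only as a degraded one-disk array
sudo mdadm --assemble --run --readonly /dev/md2 /dev/sdb5

# If the array holds a plain ext4 filesystem, mount it directly
sudo mount -o ro /dev/md2 /mnt/test

# On DSM versions that put LVM on top of the array, activate it first
sudo vgchange -ay
sudo lvs                                      # list logical volumes
sudo mount -o ro /dev/vg1000/lv /mnt/test     # VG/LV names are assumptions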
Arch Linux forums: mount: unknown filesystem type 'linux_raid_member'
I am trying to recover data from partitions on a RAID 1 array.
However, when I try to mount either of the partitions (/dev/sdc1 or /dev/sdc2), I get the error "mount: unknown filesystem type 'linux_raid_member'".
The mount command I used was (I did the same for sdc2):
I also ran 'sudo smartctl' and got "SMART Health Status: OK" for both partitions. So I think I can rule out disk health issues.
I tried 'mdadm --assemble --scan' and got:
mdadm: failed to add /dev/sdc1 to /dev/md/1: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md/1: Invalid argument
mdadm: failed to add /dev/sdc2 to /dev/md/2: Invalid argument
mdadm: failed to RUN_ARRAY /dev/md/2: Invalid argument
This is the output of ‘sudo fdisk -l /dev/sdc’:
Disk /dev/sdc: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 0A7D7BB3-7070-2141-83F2-69FDC85C9E69
Device Start End Sectors Size Type
/dev/sdc1 2000896 11990919 9990024 4.8G Linux RAID
/dev/sdc2 12000002 13998857 1998856 976M Linux RAID
The output of 'sudo mdadm --examine /dev/sdc1':
/dev/sdc1:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 3d41c60c:405653fd:39c88afe:9fe03176
Name : LS421DE-EM4F5:1
Creation Time : Wed Oct 31 01:03:49 2007
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 9990144 (4.76 GiB 5.11 GB)
Array Size : 4995008 (4.76 GiB 5.11 GB)
Used Dev Size : 9990016 (4.76 GiB 5.11 GB)
Data Offset : 8192 sectors
Super Offset : 8 sectors
Unused Space : before=8112 sectors, after=18446744073709543432 sectors
State : clean
Device UUID : e2450711:ceb78514:cad2b878:87dabe05
Update Time : Tue Mar 24 18:34:56 2015
Checksum : f1310b3 - correct
Events : 239666
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
And the output of 'sudo mdadm --examine /dev/sdc2':
/dev/sdc2:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : b2c77f77:9d9ae7ac:313e5f4a:a378df2a
Name : LS421DE-EM4F5:2
Creation Time : Wed Oct 31 01:03:49 2007
Raid Level : raid1
Raid Devices : 2
Avail Dev Size : 1998975 (976.23 MiB 1023.48 MB)
Array Size : 999424 (976.16 MiB 1023.41 MB)
Used Dev Size : 1998848 (976.16 MiB 1023.41 MB)
Data Offset : 1024 sectors
Super Offset : 8 sectors
Unused Space : before=944 sectors, after=18446744073709550600 sectors
State : clean
Device UUID : fd4ca8dc:5994b788:4d6537ff:d4dbe0a9
Update Time : Tue Mar 24 18:10:31 2015
Checksum : 935fba7c - correct
Events : 624
Device Role : Active device 0
Array State : A. ('A' == active, '.' == missing, 'R' == replacing)
Please help me. Anything is appreciated. Thanks in advance.
EDIT: To be clear, there should be more data on the disk than what is shown in the partitions. I've already run 'testdisk' on this disk.
#2 2015-03-29 14:00:55
Re: mount: unknown filesystem type 'linux_raid_member'
However, when I try to mount either of the partitions (/dev/sdc1 or /dev/sdc2), I get the error "mount: unknown filesystem type 'linux_raid_member'".
This message simply means that the device is a member of a RAID array. You are supposed to start the RAID and then mount the RAID device, not the underlying member device.
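A minimal sketch of that, using the member device from the question (the md name /dev/md1 and the mount point are assumptions):

sudo mdadm --assemble --run --readonly /dev/md1 /dev/sdc1   # --run starts it even though the second disk is missing
cat /proc/mdstat                                            # confirm the array actually came up
sudo mount -o ro /dev/md1 /mnt/sdc1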
If for some reason the RAID cannot be started (check /proc/mdstat as well; you might have to --stop before you --assemble --scan), then you can circumvent this with a loop mount:
mount -o ro,loop,offset=$((8192*512)) /dev/sdc1 /mnt/sdc1
The offset in question is what's shown in the mdadm --examine output as "Data Offset", which is given in 512-byte sectors (hence the *512 above).
Of course this requires a filesystem to be on the RAID; if it’s something else, like LVM or LUKS, you have to go through losetup and then enable the other layers individually.
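For the LVM/LUKS case, a rough sketch of that layered approach, reusing the Data Offset from the --examine output above (the mapper/VG/LV names here are assumptions, not from the thread):

# Expose the member's data area read-only as a loop device
LOOP=$(sudo losetup --find --show --read-only --offset $((8192*512)) /dev/sdc1)

# If the payload is LVM: scan, activate, then mount a logical volume
sudo vgscan
sudo vgchange -ay
sudo mount -o ro /dev/mapper/vg0-lv0 /mnt/sdc1   # VG/LV names are assumptions

# If the payload is LUKS: open it first, then mount the mapped device
sudo cryptsetup open --readonly "$LOOP" rescued
sudo mount -o ro /dev/mapper/rescued /mnt/sdc1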
I’m not sure what to do about wrong partition sizes; you might need TestDisk to check if other partitions can be found.
#3 2015-03-30 03:10:13
Re: mount: unknown filesystem type 'linux_raid_member'
Thanks for the reply!!
I have already run testdisk. It attempted to restore the partitions, but I'm not 100% sure how effective it was.
In any case, I tried 'sudo mount -o ro,loop,offset=$((8192*512)) /dev/sdb1 /mnt/backup/' and I got the following error:
mount: wrong fs type, bad option, bad superblock on /dev/loop0,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
I also ran ‘sudo mount -o ro,loop,offset=$((1024*512)) /dev/sdb2 /mnt/backup/’ and got this error:
mount: unknown filesystem type 'swap'
Which seems odd because I don’t remember ever setting up a swap space on this disk.
Here’s the output from ‘dmesg | tail -25’:
[ 4356.952269] scsi host6: usb-storage 7-1:1.0
[ 4357.955511] scsi 6:0:0:0: Direct-Access ST4000DM 000-1F2168 CC52 PQ: 0 ANSI: 5
[ 4357.956939] sd 6:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[ 4357.957263] sd 6:0:0:0: [sdb] 7814037168 512-byte logical blocks: (4.00 TB/3.63 TiB)
[ 4357.959435] sd 6:0:0:0: [sdb] Write Protect is off
[ 4357.959451] sd 6:0:0:0: [sdb] Mode Sense: 28 00 00 00
[ 4357.962322] sd 6:0:0:0: [sdb] No Caching mode page found
[ 4357.962335] sd 6:0:0:0: [sdb] Assuming drive cache: write through
[ 4357.963301] sd 6:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[ 4358.000908] sdb: sdb1 sdb2
[ 4358.002117] sd 6:0:0:0: [sdb] Very big device. Trying to use READ CAPACITY(16).
[ 4358.003882] sd 6:0:0:0: [sdb] Attached SCSI disk
[ 4358.084885] md: sdb1 does not have a valid v1.2 superblock, not importing!
[ 4358.084904] md: md_import_device returned -22
[ 4358.084975] md: md1 stopped.
[ 4358.108807] md: sdb2 does not have a valid v1.2 superblock, not importing!
[ 4358.108878] md: md_import_device returned -22
[ 4358.108998] md: md2 stopped.
[ 4538.771028] EXT4-fs (loop0): mounting ext3 file system using the ext4 subsystem
[ 4538.771275] EXT4-fs (loop0): bad geometry: block count 1248752 exceeds size of device (1247729 blocks)
Output from ‘cat /proc/mdstat’
Personalities :
unused devices: <none>
To my knowledge, this is not an LVM/LUKS setup. I bought a Buffalo NAS and it used the default RAID configuration (the Buffalo support guy, higher than Level 1 support for what it's worth, told me the default is RAID 1, which makes sense since that's what 'mdadm --examine' says).
I apologize for my lack of knowledge about all this. I’m fresh meat to NAS/RAID/etc.
How do I mount a RAID disk in Linux
I have an Ubuntu server that I had to restart in rescue mode, and I am trying to mount a partition to reset the root password. I followed the hosting company's instructions but got stuck, and I haven't heard back from them. When I try to do the mount, I get:
mount: unknown filesystem type 'linux_raid_member'
Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0009307f

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *        2048    20973568    10485760+  fd  Linux raid autodetect
/dev/sda2        20973569  1952468992   965747712   fd  Linux raid autodetect
/dev/sda3      1952468993  1953520064      525536   82  Linux swap / Solaris

Disk /dev/md2: 988.9 GB, 988925591552 bytes
2 heads, 4 sectors/track, 241436912 cylinders, total 1931495296 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md2 doesn't contain a valid partition table

Disk /dev/md1: 10.7 GB, 10737352704 bytes
2 heads, 4 sectors/track, 2621424 cylinders, total 20971392 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

Disk /dev/md1 doesn't contain a valid partition table
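Since fdisk already lists assembled arrays (/dev/md1 and /dev/md2), the usual route in rescue mode is to mount the md device rather than the raw /dev/sda partitions, then chroot in and reset the password. A rough sketch, assuming the root filesystem is on /dev/md1 (the ~10 GB array built from the bootable sda1; adjust to /dev/md2 if root turns out to live there) and that /mnt is a free mount point:

sudo mount /dev/md1 /mnt                 # mount the array, not /dev/sda1
sudo mount --bind /dev /mnt/dev          # make the chroot usable
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt passwd root             # set the new root password
sudo umount /mnt/dev /mnt/proc /mnt/sys /mnt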