RAID array replaced by ZFS pool, but still appears as RAID #698
Comments
@MrAlucardDante ... can you update me on what you would expect or what seems to be wrong? I am a bit confused, and a little more explanation would give me a clearer view of what might be wrong ;-) Thank you in advance ... Generally: what output you would expect (what is too much / wrong / should be added), and based on which of the underlying commands, would be very helpful.
Sure @sebhildebrandt: sda1 and sdc1 should have the same label and fsType as sde1 and sdd1 (data-pool and zfs-member), as shown by the fdisk command.
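For reference, a quick way to compare what the kernel and libblkid report for the four partitions mentioned here (device names taken from this comment; adjust if they differ on the host):

```sh
# Filesystem type and labels as lsblk sees them for the old and new drives
lsblk -o NAME,SIZE,FSTYPE,LABEL,PARTLABEL /dev/sda /dev/sdc /dev/sdd /dev/sde

# Signatures read directly from disk by libblkid
sudo blkid /dev/sda1 /dev/sdc1 /dev/sdd1 /dev/sde1
```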
@MrAlucardDante ok, let me check this tonight ... if I have further questions, I will contact you again ...
Ok, still a little confused ... sorry for asking again ;-) systeminformation runs three underlying commands to determine the file / disk structure, plus a fallback command if one of them gives an error (see also https://systeminformation.io/filesystem.html). Can you provide the output of those three commands and the corresponding output of the three systeminformation functions? It would be fine to run them the way you chose there. Can you then mark what you think is wrong in which of the three systeminformation outputs? Thank you for all your help here!!
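A minimal sketch of how those function calls could be run directly with Node, assuming the three functions in question are fsSize(), blockDevices() and diskLayout() (the original comment listed the exact functions and shell commands, which are not reproduced here); the systeminformation package must be installed wherever this runs:

```sh
# Print the raw output of three systeminformation calls
node -e "
const si = require('systeminformation');
(async () => {
  console.log('fsSize:',       JSON.stringify(await si.fsSize(), null, 2));
  console.log('blockDevices:', JSON.stringify(await si.blockDevices(), null, 2));
  console.log('diskLayout:',   JSON.stringify(await si.diskLayout(), null, 2));
})();
"
```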
So it looks like when I switched from Linux RAID to ZFS, some labels and fstypes were kept, even though I deleted every partition on the disks, removed all the mdadm configuration and used whole disks when creating the pool. So everything is working as intended on your side; I'll have to figure out how to change the fstype and label on the GPT side of things on my end. If anyone knows how to do that, let me know. I'm currently looking at exporting and re-importing the pool, but I don't want to mess up my array and have to copy 3 TB of backups 😄 Thank you for your time, the issue can be closed.
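Not something I have verified against this exact layout, but a stale md superblock is a common reason for blkid to keep reporting linux_raid_member and the old label. A sketch of how one might check for, and carefully remove, such a leftover signature (device names taken from the thread):

```sh
# Read-only: list every signature libblkid still finds on the former RAID members
sudo wipefs /dev/sda1 /dev/sdc1

# If an old md superblock shows up, mdadm can remove just that signature.
# DESTRUCTIVE: export the pool and verify backups first, since these partitions
# now belong to ZFS.
# sudo mdadm --zero-superblock /dev/sda1
# sudo mdadm --zero-superblock /dev/sdc1
```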
@MrAlucardDante thank you for testing. Closing it for now. Feel free to reopen it if you see any problems on my side ...
I used to have a 2x2TB RAID 1 array created with mdadm (called SERVEUR-BB).
I then added 2 extra drives, deleted every partition and mount point on the older drives, and created a ZFS pool with 4x2TB HDDs (2 mirrored vdevs), which results in a 4 TB array (called data-pool).
I have removed the old RAID array from /etc/fstab; /proc/mdstat returns nothing, as does mdadm --monitor --scan.
For some reason, the 2 drives that were in the RAID are not updated in the output and show up as RAID members with their former label, instead of as ZFS members like the 2 new drives.
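For reference, the pool layout described above corresponds roughly to the following commands; the device names are placeholders, since the actual creation command was not posted:

```sh
# Two mirrored vdevs of 2 TB drives, striped together: roughly 4 TB usable
zpool create data-pool \
  mirror /dev/disk/by-id/DISK1 /dev/disk/by-id/DISK2 \
  mirror /dev/disk/by-id/DISK3 /dev/disk/by-id/DISK4

zpool list data-pool    # reports pool size and capacity
zpool status data-pool  # shows both mirror vdevs and their member disks
```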
df -kPT:
fdisk -l:
zpool status:
docker exec dashdot yarn cli raw-data --storage --custom blockDevices: