
[Feature] RAID Support #40

Closed · velmirslac opened this issue May 20, 2022 · 60 comments

Comments

@velmirslac

Is your feature request related to a problem? Please describe.
The reported disk usage is simply a sum of all disks on the system, not real storage.

Describe the solution you'd like
Storage should monitor a configurable list of volumes, not just block devices.

Additional context
For example, I have a test server that has two 240 GB SSDs and two 480 GB HDDs. Dashdot reports this as 1.4 TB of storage with some tiny sliver shown as "used." However, those two HDDs and one of the SSDs are in a ZFS pool together, so the actual state of storage on the server is one 240 GB volume with 5% used and one 480 GB volume with 33% used.

root@test:~# lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda      8:0    0 465.8G  0 disk 
├─sda1   8:1    0 465.8G  0 part 
└─sda9   8:9    0     8M  0 part 
sdb      8:16   0 238.5G  0 disk 
└─sdb1   8:17   0 238.5G  0 part /
sdc      8:32   0 238.5G  0 disk 
├─sdc1   8:33   0 238.5G  0 part 
└─sdc9   8:41   0     8M  0 part 
sdd      8:48   0 465.8G  0 disk 
├─sdd1   8:49   0 465.8G  0 part 
└─sdd9   8:57   0     8M  0 part 

root@test:~# df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdb1       234G  9.7G  213G   5% /

root@test:~# zpool status
  pool: tank
 state: ONLINE
  scan: scrub repaired 0B in 00:42:24 with 0 errors on Sun May  8 01:06:25 2022
config:

        NAME                                              STATE     READ WRITE CKSUM
        tank                                              ONLINE       0     0     0
          mirror-0                                        ONLINE       0     0     0
            ata-WDC_WD5002ABYS-02B1B0_WD-WCASYA237797     ONLINE       0     0     0
            ata-WDC_WD5003ABYX-01WERA2_WD-WMAYP6798572    ONLINE       0     0     0
        cache
          ata-Samsung_SSD_840_PRO_Series_S12RNEACC87965T  ONLINE       0     0     0

errors: No known data errors

root@test:~# zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
tank   464G   156G   308G        -         -    17%    33%  1.00x    ONLINE  -

@vangyyy

vangyyy commented May 21, 2022

Info about RAID setups is obtainable via https://github.com/sebhildebrandt/systeminformation#9-file-system. Some new GUI options would need to be implemented for this to work, though.

For example, you should be able to choose whether to show the information for raw disk space (like lsblk would), or to see a (let's call it) RAID mode, where the disks are grouped together and free disk space is calculated accordingly.

I have a very similar setup to @velmirslac, in that I have 1x main SSD and 2x 4TB HDDs in a ZFS pool, mirroring each other.

λ zpool list
NAME   SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
nas   3.62T   605G  3.03T        -         -     0%    16%  1.00x    ONLINE  -
λ lsblk
NAME   MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
------ OMITTED LOOP INTERFACES ------
sda      8:0    0 238.5G  0 disk 
├─sda1   8:1    0   512M  0 part /boot/efi
└─sda2   8:2    0   238G  0 part /
sdb      8:16   0   3.7T  0 disk 
├─sdb1   8:17   0   3.7T  0 part 
└─sdb9   8:25   0     8M  0 part 
sdc      8:32   0   3.7T  0 disk 
├─sdc1   8:33   0   3.7T  0 part 
└─sdc9   8:41   0     8M  0 part 

Data sample:
This is the output when calling si.blockDevices(), after filtering out loop interfaces and everything that is not of a zfs_member type:

[
  {
    name: 'sdb1',
    type: 'part',
    fsType: 'zfs_member',
    mount: '',
    size: 4000776716288,
    physical: '',
    uuid: '5279486443599641042',
    label: 'nas',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdc1',
    type: 'part',
    fsType: 'zfs_member',
    mount: '',
    size: 4000776716288,
    physical: '',
    uuid: '5279486443599641042',
    label: 'nas',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  }
]

From this you can group them by label, but I'm not entirely sure how you could get the information about the pool type. Maybe by calling zpool directly? Or there might be a different Node.js library for obtaining this information.
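
To illustrate the grouping idea, here is a minimal sketch (TypeScript, not dashdot's actual code) that assumes only the systeminformation package and groups RAID/ZFS member partitions by their label, falling back to the uuid:

import * as si from 'systeminformation';

type RaidGroup = {
  label: string;
  members: si.Systeminformation.BlockDevicesData[];
};

// Collect ZFS/mdadm member partitions and group them by pool/array label.
async function groupRaidMembers(): Promise<RaidGroup[]> {
  const devices = await si.blockDevices();
  const members = devices.filter(
    (d) => d.fsType === 'zfs_member' || d.fsType === 'linux_raid_member'
  );

  const byLabel = new Map<string, si.Systeminformation.BlockDevicesData[]>();
  for (const d of members) {
    const key = d.label || d.uuid; // fall back to uuid if no label is reported
    byLabel.set(key, [...(byLabel.get(key) ?? []), d]);
  }

  return [...byLabel.entries()].map(([label, members]) => ({ label, members }));
}

The pool type (mirror, raidz, ...) is not part of this output, so it would still have to come from somewhere else, e.g. by shelling out to zpool directly as mentioned above.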

@MauriceNino
Owner

Thank you for providing me with all this information! I think I will leave it how it is right now as the default configuration and add a configuration option for "RAID Mode", just like you proposed. The only problem would be testing it, as I don't have a RAID running anywhere, so I hope I can get back to you @vangyyy and @velmirslac with any testing requests in the following days.

@MauriceNino MauriceNino changed the title RAID Support [Feature] RAID Support May 21, 2022
@vangyyy

vangyyy commented May 22, 2022

I wanted to attempt it myself, but I'm kind of short on time. I would certainly help you with testing, though - just tag me when you need help.

@MauriceNino
Owner

Ok, aside from not being able to test it and therefore not being sure what I am doing - I am not sure how I would display it on the frontend in the end.

Let's say you have one disk; right now it will look like this:

Brand: Test
Size: 123 GB
Type: HD

If you have multiple drives, it would look like this:

Drive 1: Test HD (123 GB)
Drive 2: Another SSD (321 GB)

I guess if there was a RAID, I could just add that to the type (e.g. Type: HD (RAID)). But what happens when you have a RAID with drives from different brands? Or what happens when you have a RAID of multiple drives, but then have other single drives as well?

As I don't really know how RAIDs work and what is important when working with them, I don't know what I should display.

@MauriceNino
Owner

Also, I am not entirely sure, but part of this issue is essentially blocked by this issue: #59

As long as I can't get the correct load for all drives, it will be hard to display the RAID information as well.

@MauriceNino
Owner

I will close this due to inactivity, but if someone wants to help me resume work by providing more info, I will reopen.

@MauriceNino MauriceNino closed this as not planned Jun 10, 2022
@MauriceNino
Owner

@mjefferys Can you please post the output of the following command here:

docker exec CONTAINER yarn cli raw-data --storage --custom blockDevices

@velmirslac @vangyyy are you running the same type of RAID? I have never done any RAID config, so please help me understand what the different types are and how they could be shown.

@MauriceNino MauriceNino reopened this Jun 14, 2022
@mjefferys

mjefferys commented Jun 14, 2022

@MauriceNino thanks for looking into it, here's my output.

Output
docker exec dash yarn cli raw-data --storage --custom blockDevices
yarn run v1.22.19
$ node dist/apps/cli/main.js raw-data --storage --custom blockDevices
Disk Layout: [
  {
    device: '/dev/sda',
    type: 'SSD',
    name: 'SAMSUNG MZ7LN1T0',
    vendor: 'Samsung',
    size: 1024209543168,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '300Q',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdb',
    type: 'SSD',
    name: 'SAMSUNG MZ7LN1T0',
    vendor: 'Samsung',
    size: 1024209543168,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '300Q',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/nvme0n1',
    type: 'NVMe',
    name: 'SAMSUNG MZVLB1T0HBLR-00000              ',
    vendor: 'Samsung',
    size: 1024209543168,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '',
    serialNum: 'S4GJNX0R521846',
    interfaceType: 'PCIe',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/nvme1n1',
    type: 'NVMe',
    name: 'SAMSUNG MZVLB1T0HBLR-00000              ',
    vendor: 'Samsung',
    size: 1024209543168,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '',
    serialNum: 'S4GJNX0R521864',
    interfaceType: 'PCIe',
    smartStatus: 'unknown',
    temperature: null
  }
]
FS Size [
  {
    fs: 'overlay',
    type: 'overlay',
    size: 972496621568,
    used: 631807569920,
    available: 291213500416,
    use: 68.45,
    mount: '/'
  },
  {
    fs: '/dev/md2',
    type: 'ext4',
    size: 972496621568,
    used: 631807569920,
    available: 291213500416,
    use: 68.45,
    mount: '/etc/os-release'
  },
  {
    fs: '/dev/md3',
    type: 'ext4',
    size: 1006847627264,
    used: 807492980736,
    available: 148134158336,
    use: 84.5,
    mount: '/mnt/host_mnt/data'
  }
]
Custom [blockDevices] [
  {
    name: 'nvme0n1',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 1024209543168,
    physical: 'SSD',
    uuid: '',
    label: '',
    model: 'SAMSUNG MZVLB1T0HBLR-00000              ',
    serial: 'S4GJNX0R521846      ',
    removable: false,
    protocol: 'nvme',
    group: undefined
  },
  {
    name: 'nvme1n1',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 1024209543168,
    physical: 'SSD',
    uuid: '',
    label: '',
    model: 'SAMSUNG MZVLB1T0HBLR-00000              ',
    serial: 'S4GJNX0R521864      ',
    removable: false,
    protocol: 'nvme',
    group: undefined
  },
  {
    name: 'sda',
    type: 'disk',
    fsType: 'linux_raid_member',
    mount: '',
    size: 1024209543168,
    physical: 'SSD',
    uuid: 'db4320eb-cd96-ea17-fc1a-49a0cdc3ea89',
    label: 'saturn:3',
    model: 'SAMSUNG MZ7LN1T0',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdb',
    type: 'disk',
    fsType: 'linux_raid_member',
    mount: '',
    size: 1024209543168,
    physical: 'SSD',
    uuid: 'db4320eb-cd96-ea17-fc1a-49a0cdc3ea89',
    label: 'saturn:3',
    model: 'SAMSUNG MZ7LN1T0',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'nvme0n1p1',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 34359738368,
    physical: '',
    uuid: '2a1e4e27-38c9-961b-9221-f24aa916e81e',
    label: 'rescue:0',
    model: '',
    serial: '',
    removable: false,
    protocol: 'nvme',
    group: undefined
  },
  {
    name: 'nvme0n1p2',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 536870912,
    physical: '',
    uuid: 'fa01e769-fc2b-efc7-e75e-2d9760c5a491',
    label: 'rescue:1',
    model: '',
    serial: '',
    removable: false,
    protocol: 'nvme',
    group: undefined
  },
  {
    name: 'nvme0n1p3',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 989310836736,
    physical: '',
    uuid: 'fd6b8d07-ec1f-8935-2103-38f3452e14d1',
    label: 'rescue:2',
    model: '',
    serial: '',
    removable: false,
    protocol: 'nvme',
    group: undefined
  },
  {
    name: 'nvme1n1p1',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 34359738368,
    physical: '',
    uuid: '2a1e4e27-38c9-961b-9221-f24aa916e81e',
    label: 'rescue:0',
    model: '',
    serial: '',
    removable: false,
    protocol: 'nvme',
    group: undefined
  },
  {
    name: 'nvme1n1p2',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 536870912,
    physical: '',
    uuid: 'fa01e769-fc2b-efc7-e75e-2d9760c5a491',
    label: 'rescue:1',
    model: '',
    serial: '',
    removable: false,
    protocol: 'nvme',
    group: undefined
  },
  {
    name: 'nvme1n1p3',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 989310836736,
    physical: '',
    uuid: 'fd6b8d07-ec1f-8935-2103-38f3452e14d1',
    label: 'rescue:2',
    model: '',
    serial: '',
    removable: false,
    protocol: 'nvme',
    group: undefined
  },
  {
    name: 'md0',
    type: 'raid1',
    fsType: 'swap',
    mount: '[SWAP]',
    size: 34325135360,
    physical: '',
    uuid: '8d77f09a-1daf-4685-94c3-82730da58f62',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'md1',
    type: 'raid1',
    fsType: 'ext3',
    mount: '',
    size: 535822336,
    physical: '',
    uuid: 'e5cfd092-bf43-4e36-9f58-1c919e600551',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'md2',
    type: 'raid1',
    fsType: 'ext4',
    mount: '/etc/hosts',
    size: 989175545856,
    physical: '',
    uuid: '0fd69e19-1cbb-4876-8bd4-1b1ec1d94c78',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'md3',
    type: 'raid1',
    fsType: 'ext4',
    mount: '/mnt/host_mnt/data',
    size: 1024074252288,
    physical: '',
    uuid: '64b80fff-9799-45aa-af08-b45ae48a6da4',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  }
]
Done in 0.21s.

No idea why GitHub is messing up the formatting on this. Grr.

@MauriceNino
Owner

Because you need triple backticks for code blocks :)

Thanks for the output, I will look into it in the evening.
@velmirslac @vangyyy if you could provide the same output, that might help as well.

@mjefferys

Aha, I clicked the insert-code-block button in the GitHub editor, but it didn't take. Appreciate you looking into it.

@velmirslac
Author

velmirslac commented Jun 14, 2022

No, I am not running the same type of RAID. @mjefferys appears to be running Linux software RAID using [mdadm](https://raid.wiki.kernel.org/index.php/A_guide_to_mdadm). This type of RAID creates block devices that you can then put a filesystem on and use as normal. The RAID volumes are the /dev/md# devices. You can get info about mdadm RAID from /proc/mdstat, such as which partitions are part of which arrays and the health of each array.

I'm using a different kind of software RAID called [ZFS](https://en.wikipedia.org/wiki/ZFS). ZFS creates virtual devices, called pools, and manages the filesystems, called datasets, inside the pool. The ZFS system handles mounting, partitioning, metadata, etc.
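
To make the /proc/mdstat part concrete, here is a rough sketch (assuming Node.js/TypeScript and the typical mdstat layout; not something dashdot necessarily does) that pulls out the array names, levels and member partitions:

import { readFileSync } from 'fs';

type MdArray = { array: string; level: string; members: string[] };

// Very simplified parser for lines like "md2 : active raid1 sda3[0] sdb3[1]".
function parseMdstat(path = '/proc/mdstat'): MdArray[] {
  return readFileSync(path, 'utf8')
    .split('\n')
    .filter((line) => /^md\d+\s*:/.test(line))
    .map((line) => {
      const [name, rest] = line.split(':');
      const parts = rest.trim().split(/\s+/);
      const level = parts.find((p) => p.startsWith('raid')) ?? 'unknown';
      const members = parts
        .filter((p) => /\[\d+\]/.test(p))
        .map((p) => p.replace(/\[.*$/, ''));
      return { array: name.trim(), level, members };
    });
}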

Output
root@# docker exec dash_dash_1 yarn cli raw-data --storage --custom blockDevices
yarn run v1.22.19
$ node dist/apps/cli/main.js raw-data --storage --custom blockDevices
Disk Layout: [
  {
    device: '/dev/sda',
    type: 'HD',
    name: 'WDC WD5002ABYS-0',
    vendor: 'Western Digital',
    size: 500107862016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '3B03',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdb',
    type: 'SSD',
    name: 'Samsung SSD 840 ',
    vendor: 'Samsung',
    size: 256060514304,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '4B0Q',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdc',
    type: 'SSD',
    name: 'Samsung SSD 840 ',
    vendor: 'Samsung',
    size: 256060514304,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '4B0Q',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdd',
    type: 'HD',
    name: 'WDC WD5003ABYX-0',
    vendor: 'Western Digital',
    size: 500107862016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '1S03',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  }
]
FS Size [
  {
    fs: 'tank/var/77fc308448f99cd0bb33d825631454fa2385eead208fd54df504843acc2905cc',
    type: 'zfs',
    size: 263471497216,
    used: 90177536,
    available: 263381319680,
    use: 0.03,
    mount: '/'
  },
  {
    fs: '/dev/sdb1',
    type: 'ext4',
    size: 250903556096,
    used: 11111383040,
    available: 226972442624,
    use: 4.67,
    mount: '/mnt/host_media'
  },
  {
    fs: 'tank/var',
    type: 'zfs',
    size: 308565508096,
    used: 45184188416,
    available: 263381319680,
    use: 14.64,
    mount: '/etc/resolv.conf'
  }
]
Custom [blockDevices] [
  {
    name: 'sda',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 500107862016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'WDC WD5002ABYS-0',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdb',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 256060514304,
    physical: 'SSD',
    uuid: '',
    label: '',
    model: 'Samsung SSD 840 ',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdc',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 256060514304,
    physical: 'SSD',
    uuid: '',
    label: '',
    model: 'Samsung SSD 840 ',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdd',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 500107862016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'WDC WD5003ABYX-0',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'zram0',
    type: 'disk',
    fsType: 'swap',
    mount: '[SWAP]',
    size: 1670815744,
    physical: 'SSD',
    uuid: '51c1aa2f-ecb0-4a85-8bc8-3660a1779241',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sda1',
    type: 'part',
    fsType: 'zfs_member',
    mount: '',
    size: 500098400256,
    physical: '',
    uuid: '11106343725164843224',
    label: 'tank',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sda9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdb1',
    type: 'part',
    fsType: 'ext4',
    mount: '/mnt/host_mnt',
    size: 256059113472,
    physical: '',
    uuid: '668ffb6e-79cf-4d96-b907-5a4d55ae6fc1',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdc1',
    type: 'part',
    fsType: '',
    mount: '',
    size: 256050724864,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdc9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdd1',
    type: 'part',
    fsType: 'zfs_member',
    mount: '',
    size: 500098400256,
    physical: '',
    uuid: '11106343725164843224',
    label: 'tank',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdd9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  }
]
Done in 0.58s.

@MauriceNino
Copy link
Owner

Ok, I reviewed the outputs and am unsure how to go about this.

  1. I think I can group drives which are running in a RAID config through the uuid, but there is one thing that seems to be off in your output @velmirslac - the device with the uuid 668ffb6e-79cf-4d96-b907-5a4d55ae6fc1 has no counterpart, and it is fsType: ext4. Are you sure its counterpart is mounted inside the container? When I look at the output from @mjefferys, for every raid member there is a second one with the same uuid.

  2. Also in your output @velmirslac, the root fs mountpoint "/" is named tank/var/77fc308448f99cd0bb33d825631454fa2385eead208fd54df504843acc2905cc. Do you know what might cause this? It seems to be only half the size as well.

@velmirslac
Author

velmirslac commented Jun 14, 2022

  1. The device with UUID 668ffb6e... is the root filesystem for the server (mounted to /). It's a standalone, single disk, not a RAID member. I didn't bother setting up any RAID for the OS when I installed the system.
  2. tank/var/77fc308448f... is the dataset Docker created in ZFS for the container. Docker will default to using ZFS datasets instead of overlay if it detects it's being run in a ZFS dataset. tank is my main pool, tank/var is a dataset I'm mounting to /var. Since Docker is using /var/lib/docker as its root directory, it creates datasets within tank/var/ for use as volumes. Every container I'm running has one or more of these.

@MauriceNino
Owner

@velmirslac

  1. Ok I just re-read the first comment and am even more confused now - you are saying that two HDDs and ONE SSD are in a single RAID? So essentially 3 disks of different sizes and brands in one raid? How is the SSD used? Just as a cache?

  2. Ok I think I get that - but that also means I have a bug, because currently I am checking for the overlay drive as the root.

@velmirslac
Author

  1. Yes, that's the case. One SSD is / while the two HDDs and the remaining SSD are a ZFS pool. The SSD in the ZFS pool is used as a read cache. The HDDs are the same brand (WD) and size (480GB), and the SSDs are the same brand (Samsung) and size (240GB). The two HDDs are in a mirror array, so the effective capacity is that of one HDD. The cache SSD's capacity is not part of the RAID capacity.
  2. zfs and overlay are two different storage drivers in Docker, and that difference may be presented differently by the Docker system to the container (sketched below).
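
For point 2, a minimal sketch of a driver-agnostic root check (assuming systeminformation's fsSize() output as shown in this thread; this is not dashdot's actual implementation):

import * as si from 'systeminformation';

// Pick whatever is mounted at '/' instead of assuming the root filesystem is
// always the 'overlay' entry. This also covers Docker's zfs storage driver,
// where the root shows up as e.g. 'tank/var/<container-id>' with type 'zfs'.
async function getRootFs(): Promise<si.Systeminformation.FsSizeData | undefined> {
  const sizes = await si.fsSize();
  return sizes.find((entry) => entry.mount === '/');
}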

@MauriceNino
Owner

That makes sense, thanks for clearing that up.

And about 2) that is definitely a bug in the current system then. Will need to fix this as well.

The only problem now is that I can't really map the usage stats to single drives or RAIDs consistently, so while I might get the total size right, the used part will be off. What do you think about that?

@mjefferys

mjefferys commented Jun 15, 2022

Could you use the output contained within the FS Size node if you discover a RAID array? I guess you would then end up showing the mounted filesystems rather than the individual disks, but potentially that's actually more relevant anyway in a RAID setup?

FS Size [
  {
    fs: 'overlay',
    type: 'overlay',
    size: 972496621568,
    used: 631807569920,
    available: 291213500416,
    use: 68.45,
    mount: '/'
  },
  {
    fs: '/dev/md2',
    type: 'ext4',
    size: 972496621568,
    used: 631807569920,
    available: 291213500416,
    use: 68.45,
    mount: '/etc/os-release'
  },
  {
    fs: '/dev/md3',
    type: 'ext4',
    size: 1006847627264,
    used: 807492980736,
    available: 148134158336,
    use: 84.5,
    mount: '/mnt/host_mnt/data'
  }
]
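
A rough sketch of that idea (assumptions: systeminformation's fsSize() output as above, and that the interesting host filesystems are either the root mount or bound into the container under /mnt/host_*; not dashdot's actual code):

import * as si from 'systeminformation';

// Sum up the mounted filesystems instead of the raw per-disk capacities,
// so a RAID/ZFS pool is counted once at its usable size.
async function usableStorage() {
  const sizes = await si.fsSize();
  const relevant = sizes.filter(
    (entry) => entry.mount === '/' || entry.mount.startsWith('/mnt/host_')
  );
  const size = relevant.reduce((sum, entry) => sum + entry.size, 0);
  const used = relevant.reduce((sum, entry) => sum + entry.used, 0);
  return { size, used, usePercent: size > 0 ? (used / size) * 100 : 0 };
}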

@MauriceNino
Owner

That's what I will be doing (or actually am already doing right now as well). But if I take @velmirslac's output, I only see 2 drives around the 250 GB mark, but if I understood it correctly, it should be 1x 250 GB drive and 1x 500 GB drive, and the 500 GB drive should be the main one mounted at /.

@velmirslac
Author

velmirslac commented Jun 15, 2022

In my setup, one 240GB SSD is mounted as / on the host machine, and the 480GB RAID is mounted to /home and /var.

@MauriceNino
Owner

Did you mount /home and /var inside the container? Also, where does /mnt/host_media come from then? @velmirslac

The 480 GB must be /mnt/host_media, but why is it the wrong size?

@MauriceNino
Owner

Assuming you are both running on an amd64 device, can you please pull the latest dev image (mauricenino/dashdot:dev) and see what it shows in the frontend for you, as well as check what it logs for the static data (storage section)?

@velmirslac @mjefferys

@mjefferys

I love where this is going.

[screenshot: chrome_qp3TJtCUXP]

Looks very close!

  os: {
    arch: 'x64',
    distro: 'Debian GNU/Linux',
    kernel: '5.10.0-9-amd64',
    platform: 'linux',
    release: '11',
    uptime: 15613258.46
  },
  cpu: {
    brand: 'AMD',
    model: 'Ryzen 7 3700X',
    cores: 8,
    threads: 16,
    frequency: 3.6
  },
  ram: {
    size: 67361349632,
    layout: [
      { brand: undefined, type: 'Empty', frequency: 0 },
      { brand: 'Samsung', type: 'DDR4', frequency: 2667 },
      { brand: undefined, type: 'Empty', frequency: 0 },
      { brand: 'Samsung', type: 'DDR4', frequency: 2667 }
    ]
  },
  storage: {
    layout: [
      {
        brand: 'Samsung',
        size: 1024209543168,
        type: 'NVMe',
        raidGroup: 'rescue'
      },
      {
        brand: 'Samsung',
        size: 1024209543168,
        type: 'NVMe',
        raidGroup: 'rescue'
      },
      {
        brand: 'Samsung',
        size: 1024209543168,
        type: 'SSD',
        raidGroup: 'saturn'
      },
      {
        brand: 'Samsung',
        size: 1024209543168,
        type: 'SSD',
        raidGroup: 'saturn'
      }
    ]
  },
  network: {
    interfaceSpeed: 1000,
    speedDown: 0,
    speedUp: 0,
    type: 'Wired',
    publicIp: ''
  },
  config: {
    port: 3001,
    widget_list: [ 'os', 'cpu', 'storage', 'ram', 'network' ],
    accept_ookla_eula: false,
    use_imperial: false,
    disable_host: false,
    os_label_list: [ 'os', 'arch', 'up_since' ],
    os_widget_grow: 1.5,
    os_widget_min_width: 300,
    enable_cpu_temps: false,
    cpu_label_list: [ 'brand', 'model', 'cores', 'threads', 'frequency' ],
    cpu_widget_grow: 4,
    cpu_widget_min_width: 500,
    cpu_shown_datapoints: 20,
    cpu_poll_interval: 1000,
    storage_label_list: [ 'brand', 'size', 'type', 'raid' ],
    storage_widget_grow: 3.5,
    storage_widget_min_width: 500,
    storage_poll_interval: 60000,
    ram_label_list: [ 'brand', 'size', 'type', 'frequency' ],
    ram_widget_grow: 4,
    ram_widget_min_width: 500,
    ram_shown_datapoints: 20,
    ram_poll_interval: 1000,
    use_network_interface: '',
    speed_test_interval: 60,
    network_label_list: [ 'type', 'speed_up', 'speed_down', 'interface_speed' ],
    network_widget_grow: 6,
    network_widget_min_width: 500,
    network_shown_datapoints: 20,
    network_poll_interval: 1000,
    override: {
      os: undefined,
      arch: undefined,
      cpu_brand: undefined,
      cpu_model: undefined,
      cpu_cores: undefined,
      cpu_threads: undefined,
      cpu_frequency: undefined,
      ram_brand: undefined,
      ram_size: undefined,
      ram_type: undefined,
      ram_frequency: undefined,
      network_type: undefined,
      network_speed_up: undefined,
      network_speed_down: undefined,
      network_interface_speed: undefined,
      network_public_ip: undefined,
      storage_brands: [],
      storage_sizes: [],
      storage_types: []
    }
  }
}

@MauriceNino
Owner

🎉 This issue has been resolved in version 3.4.0

Please check the changelog for more details.

@MrAlucardDante

MrAlucardDante commented Jun 16, 2022

Hi, I've read the thread and tried different things, but the RAID support is acting weird in 3.5.1.

I have 1 SSD for my OS and 4x 2TB HDDs in a ZFS pool (2 mirrored vdevs), which results in a 4TB array.

Previously, I had a 2x2TB RAID 1 array (using mdadm).

Here is what I get in dashdot, whether or not I mount my array inside the container:
[screenshot: firefox_jrmBS3BnZ2]

The first RAID, "server-bb", is the old RAID array; the second disk is the SSD; and the second RAID, "data-pool", is the ZFS pool.

Here is the df -h on the host:

Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              1.6G   14M  1.6G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   98G   19G   75G  20% /
tmpfs                              7.7G     0  7.7G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
/dev/sdb2                          2.0G  243M  1.6G  14% /boot
data-pool                          3.6T  1.5T  2.1T  43% /mnt/data

And the df -h in the container:

Filesystem                Size      Used Available Use% Mounted on
overlay                  97.9G     18.5G     74.3G  20% /
tmpfs                    64.0M         0     64.0M   0% /dev
shm                      64.0M         0     64.0M   0% /dev/shm
/dev/mapper/ubuntu--vg-ubuntu--lv
                         97.9G     18.5G     74.3G  20% /etc/os-release
data-pool                 3.5T      1.5T      2.0T  43% /mnt/data
/dev/mapper/ubuntu--vg-ubuntu--lv
                         97.9G     18.5G     74.3G  20% /etc/resolv.conf
/dev/mapper/ubuntu--vg-ubuntu--lv
                         97.9G     18.5G     74.3G  20% /etc/hostname
/dev/mapper/ubuntu--vg-ubuntu--lv
                         97.9G     18.5G     74.3G  20% /etc/hosts

@MauriceNino
Owner

@MrAlucardDante It is important that you mount your volumes exactly in this format: /mnt/host_*, otherwise dashdot can't tell which mounts are intentional and which are incidental (e.g. /etc/os-release is the exact same size as your host drive, but your host drive is already mounted on / as well).

So long story short, remove -v /mnt/data:/mnt/data and add -v /mnt/data:/mnt/host_data instead.

Maybe I should remove the need for the host_* part - it was needed a while back, but now I think it can work without it. I will think about including that in the next release.

@MrAlucardDante

MrAlucardDante commented Jun 16, 2022

@MrAlucardDante It is important that you mount your volumes exactly in this format: /mnt/host_*, otherwise dashdot can't tell which mounts are intentional and which are incidental (e.g. /etc/os-release is the exact same size as your host drive, but your host drive is already mounted on / as well).

So long story short, remove -v /mnt/data:/mnt/data and add -v /mnt/data:/mnt/host_data instead.

Maybe I should remove the need for the host_* part - it was needed a while back, but now I think it can work without it. I will think about including that in the next release.

I tried that too; like I said, I went through the thread to figure it out, but the result is the same.

Here is the df -h of the updated volume:

Filesystem                Size      Used Available Use% Mounted on
overlay                  97.9G     18.6G     74.3G  20% /
tmpfs                    64.0M         0     64.0M   0% /dev
shm                      64.0M         0     64.0M   0% /dev/shm
/dev/mapper/ubuntu--vg-ubuntu--lv
                         97.9G     18.6G     74.3G  20% /etc/os-release
data-pool                 3.5T      1.5T      2.0T  44% /mnt/host_data
/dev/mapper/ubuntu--vg-ubuntu--lv
                         97.9G     18.6G     74.3G  20% /etc/resolv.conf
/dev/mapper/ubuntu--vg-ubuntu--lv
                         97.9G     18.6G     74.3G  20% /etc/hostname
/dev/mapper/ubuntu--vg-ubuntu--lv
                         97.9G     18.6G     74.3G  20% /etc/hosts

And here is a screenshot after recreating the container, emptying the cache and refreshing the page:
[screenshot]

@MauriceNino
Owner

Hm, that is really weird. Can you please send me the output of the following command?

docker exec CONTAINER yarn cli raw-data --storage --custom blockDevices

@MrAlucardDante

MrAlucardDante commented Jun 16, 2022

Sure.

Here it is:

Output
yarn run v1.22.19
$ node dist/apps/cli/main.js raw-data --storage --custom blockDevices
Disk Layout: [
  {
    device: '/dev/sda',
    type: 'HD',
    name: 'ST2000DM001-1CH1',
    vendor: 'Seagate',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: 'CC27',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdb',
    type: 'SSD',
    name: 'KINGSTON SA400S3',
    vendor: 'Kingston Technology',
    size: 240057409536,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: 'B1E1',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdc',
    type: 'HD',
    name: 'ST2000DM008-2FR1',
    vendor: 'Seagate',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0001',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdd',
    type: 'HD',
    name: 'ST2000DM008-2FR1',
    vendor: 'Seagate',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0001',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sde',
    type: 'HD',
    name: 'ST2000DM008-2FR1',
    vendor: 'Seagate',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0001',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  }
]
FS Size [
  {
    fs: 'overlay',
    type: 'overlay',
    size: 105089261568,
    used: 19943407616,
    available: 79760367616,
    use: 20,
    mount: '/'
  },
  {
    fs: '/dev/mapper/ubuntu--vg-ubuntu--lv',
    type: 'ext4',
    size: 105089261568,
    used: 19943407616,
    available: 79760367616,
    use: 20,
    mount: '/etc/os-release'
  }
]
Custom [blockDevices] [
  {
    name: 'sda',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 2000398934016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'ST2000DM001-1CH1',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdb',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 240057409536,
    physical: 'SSD',
    uuid: '',
    label: '',
    model: 'KINGSTON SA400S3',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdc',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 2000398934016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'ST2000DM008-2FR1',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdd',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 2000398934016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'ST2000DM008-2FR1',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sde',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 2000398934016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'ST2000DM008-2FR1',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'loop0',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 64925696,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'loop1',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 64933888,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'loop2',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 83832832,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'loop4',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 46870528,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'loop5',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 49233920,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sda1',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 2000389406720,
    physical: '',
    uuid: '100776c9-bc80-9e31-c6d1-93f8c418c2d1',
    label: 'SERVEUR-BB:0',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sda9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdb1',
    type: 'part',
    fsType: '',
    mount: '',
    size: 1048576,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdb2',
    type: 'part',
    fsType: 'ext4',
    mount: '',
    size: 2147483648,
    physical: '',
    uuid: 'f4fb588e-039a-40e7-9e7c-be1565cab7ac',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdb3',
    type: 'part',
    fsType: 'LVM2_member',
    mount: '',
    size: 237906165760,
    physical: '',
    uuid: '1yMIdP-6v1A-fXEE-Q8La-3JP4-yGXD-N3tG4Z',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdc1',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 2000389406720,
    physical: '',
    uuid: '100776c9-bc80-9e31-c6d1-93f8c418c2d1',
    label: 'SERVEUR-BB:0',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdc9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdd1',
    type: 'part',
    fsType: 'zfs_member',
    mount: '',
    size: 2000389406720,
    physical: '',
    uuid: '15332071847395975336',
    label: 'data-pool',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdd9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sde1',
    type: 'part',
    fsType: 'zfs_member',
    mount: '',
    size: 2000389406720,
    physical: '',
    uuid: '15332071847395975336',
    label: 'data-pool',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sde9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  }
]
Done in 1.21s.

@MrAlucardDante

What is weird here is that dashdot still sees my old RAID on sda1 and sdc1 even though it's gone.

Maybe it is caching something somewhere.

Here is the fdisk -l for comparison:

Disk /dev/loop0: 61,92 MiB, 64925696 bytes, 126808 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 61,93 MiB, 64933888 bytes, 126824 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 79,95 MiB, 83832832 bytes, 163736 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop4: 44,7 MiB, 46870528 bytes, 91544 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop5: 46,95 MiB, 49233920 bytes, 96160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 1,82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM001-1CH1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 11C78F98-B4BC-A54F-80B6-11108B0DD7F3

Device          Start        End    Sectors  Size Type
/dev/sda1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sda9  3907012608 3907028991      16384    8M Solaris reserved 1


Disk /dev/sdc: 1,82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM008-2FR1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: F3DC9634-4448-0741-B01D-ECB016788FCC

Device          Start        End    Sectors  Size Type
/dev/sdc1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sdc9  3907012608 3907028991      16384    8M Solaris reserved 1


Disk /dev/sdd: 1,82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM008-2FR1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 23781841-21CF-D34D-9341-4C6B99D04770

Device          Start        End    Sectors  Size Type
/dev/sdd1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sdd9  3907012608 3907028991      16384    8M Solaris reserved 1


Disk /dev/sdb: 223,57 GiB, 240057409536 bytes, 468862128 sectors
Disk model: KINGSTON SA400S3
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D11432FB-2775-4F71-A25B-859D9AB38630

Device       Start       End   Sectors   Size Type
/dev/sdb1     2048      4095      2048     1M BIOS boot
/dev/sdb2     4096   4198399   4194304     2G Linux filesystem
/dev/sdb3  4198400 468858879 464660480 221,6G Linux filesystem


Disk /dev/sde: 1,82 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: ST2000DM008-2FR1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 546D804D-22AA-5148-8FD8-0DBD24B07745

Device          Start        End    Sectors  Size Type
/dev/sde1        2048 3907012607 3907010560  1,8T Solaris /usr & Apple ZFS
/dev/sde9  3907012608 3907028991      16384    8M Solaris reserved 1


Disk /dev/mapper/ubuntu--vg-ubuntu--lv: 100 GiB, 107374182400 bytes, 209715200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

@MauriceNino
Owner

Thanks for the outputs - I don't know why the mount is not listed there. Can you please also run:

docker exec CONTAINER df -kPT

@MauriceNino
Owner

What is weird here is that dashdot still sees my old RAID on sda1 and sdc1 even though it's gone.

Maybe it is caching something somewhere.

About that - I really don't know what I can do there. If the information is still saved somewhere and it is read by systeminformation, I will have to use it. I don't know enough about RAID configurations to know what is happening there. You might be better off creating an issue about that at https://github.com/sebhildebrandt/systeminformation

@MrAlucardDante

MrAlucardDante commented Jun 16, 2022

Thanks for the outputs - I don't know why the mount is not listed there. Can you please also run:

docker exec CONTAINER df -kPT

There you go:

Filesystem           Type       1024-blocks    Used Available Capacity Mounted on
overlay              overlay    102626232  19481216  77885752  20% /
tmpfs                tmpfs          65536         0     65536   0% /dev
shm                  tmpfs          65536         0     65536   0% /dev/shm
/dev/mapper/ubuntu--vg-ubuntu--lv              ext4       102626232  19481216  77885752  20% /etc/os-release
data-pool            zfs        3771705088 1642311552 2129393536  44% /mnt/host_data
/dev/mapper/ubuntu--vg-ubuntu--lv              ext4       102626232  19481216  77885752  20% /etc/resolv.conf
/dev/mapper/ubuntu--vg-ubuntu--lv              ext4       102626232  19481216  77885752  20% /etc/hostname
/dev/mapper/ubuntu--vg-ubuntu--lv              ext4       102626232  19481216  77885752  20% /etc/hosts

@MauriceNino
Owner

I have tagged you in an issue for your fsSize output problem, but for the other problem you are having with your configuration, I would suggest you open up your own issue, because I do not have enough info to provide, and I don't know what the maintainer (@sebhildebrandt) needs exactly.

It would probably be a good idea to add the output of the following to the issue you create:

docker exec CONTAINER yarn cli raw-data --storage --custom blockDevices

And then explain what your configuration is, and that you suspect that something is not quite right in the output. But I cannot comment on that, because as I said I know next to nothing about RAIDs.

@MrAlucardDante

I have tagged you in an issue for your fsSize output problem, but for the other problem you are having with your configuration, I would suggest you open up your own issue, because I do not have enough info to provide, and I don't know what the maintainer (@sebhildebrandt) needs exactly.

It would probably be a good idea to add the output of the following to the issue you create:

docker exec CONTAINER yarn cli raw-data --storage --custom blockDevices

And then explain what your configuration is, and that you suspect that something is not quite right in the output. But I cannot comment on that, because as I said I know next to nothing about RAIDs.

I appreciate the help.

I really think it's a "caching" issue (or data being hard-written somewhere), because I reran the yarn cli command after removing the volume and I can still see the ZFS and RAID.

I will add all the info to the issue you've opened

@MauriceNino
Owner

I will add all the info to the issue you've opened

My issue has all the necessary info, I think - you need to create a second one for your other problem (with the RAIDs).

@MauriceNino
Owner

I reran the yarn cli command after removing the volume and I can still see the ZFS and RAID

The volume mount is only for reading the disk sizes (df) - the drive names and sizes are gathered through the --privileged flag, which mounts all devices into the container, where they are then read through lsblk. No volume mounts are needed for that.

@MrAlucardDante

Thank you for the clarification, I am not a Linux or Docker expert by any means.

@MauriceNino
Owner

No worries - I ain't either :)

@MauriceNino
Owner

@MrAlucardDante Can you please pull the latest dev image (mauricenino/dashdot:dev) and see if it changes anything for your storage graph?

@MrAlucardDante

@MauriceNino unfortunately, it is still the same
[screenshot: firefox_dppgsuUpfT]

@MrAlucardDante

Here is my compose, just for reference:

services:
  dashdot:
    image: mauricenino/dashdot:dev
    container_name: dashdot
    restart: unless-stopped
    privileged: true
    environment:
      - DASHDOT_WIDGET_LIST=os,network,ram,cpu,storage
      - DASHDOT_OS_LABEL_LIST=os,up_since
      - DASHDOT_ENABLE_CPU_TEMPS=true
      - DASHDOT_RAM_LABEL_LIST=size,type,frequency
      - DASHDOT_NETWORK_LABEL_LIST=speed_up,speed_down,interface_speed
      - DASHDOT_SPEED_TEST_INTERVAL=1440
    volumes:
      - /etc/os-release:/etc/os-release:ro
      - /proc/1/ns/net:/mnt/host_ns_net:ro
      - /mnt/data:/mnt/host_data:ro

@MauriceNino
Owner

Are you sure you did upgrade? Please use the main branch (mauricenino/dashdot:latest) in your compose file and update the image using docker pull mauricenino/dashdot:latest.

Does that change something? @MrAlucardDante

If not, please run this command and paste the output: docker exec CONTAINER yarn cli raw-data --storage

@MrAlucardDante

MrAlucardDante commented Jun 20, 2022

@MrAlucardDante Can you please pull the latest dev image (mauricenino/dashdot:dev) and see if it changes anything for your storage graph?

@MauriceNino you asked me to test with the dev tag, which I did.

I might not be a Linux/Docker expert, but I know how to update a docker image 😉

I just tried with the latest tag as well; still the same output:

Output
yarn run v1.22.19
$ node dist/apps/cli/main.js raw-data --storage
Disk Layout: [
  {
    device: '/dev/sda',
    type: 'HD',
    name: 'ST2000DM001-1CH1',
    vendor: 'Seagate',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: 'CC27',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdb',
    type: 'SSD',
    name: 'KINGSTON SA400S3',
    vendor: 'Kingston Technology',
    size: 240057409536,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: 'B1E1',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdc',
    type: 'HD',
    name: 'ST2000DM008-2FR1',
    vendor: 'Seagate',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0001',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdd',
    type: 'HD',
    name: 'ST2000DM008-2FR1',
    vendor: 'Seagate',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0001',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sde',
    type: 'HD',
    name: 'ST2000DM008-2FR1',
    vendor: 'Seagate',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0001',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  }
]
FS Size: [
  {
    fs: 'overlay',
    type: 'overlay',
    size: 105089261568,
    used: 20519108608,
    available: 79184666624,
    use: 20.58,
    mount: '/'
  },
  {
    fs: 'data-pool',
    type: 'zfs',
    size: 3862225354752,
    used: 1684411252736,
    available: 2177814102016,
    use: 43.61,
    mount: '/mnt/host_data'
  },
  {
    fs: '/dev/mapper/ubuntu--vg-ubuntu--lv',
    type: 'ext4',
    size: 105089261568,
    used: 20519108608,
    available: 79184666624,
    use: 20.58,
    mount: '/etc/os-release'
  }
]
BLock Devices: [
  {
    name: 'sda',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 2000398934016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'ST2000DM001-1CH1',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdb',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 240057409536,
    physical: 'SSD',
    uuid: '',
    label: '',
    model: 'KINGSTON SA400S3',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdc',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 2000398934016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'ST2000DM008-2FR1',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sdd',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 2000398934016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'ST2000DM008-2FR1',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'sde',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 2000398934016,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: 'ST2000DM008-2FR1',
    serial: '',
    removable: false,
    protocol: 'sata',
    group: undefined
  },
  {
    name: 'loop0',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 64925696,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'loop1',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 64933888,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'loop2',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 83832832,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'loop3',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 46870528,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'loop4',
    type: 'loop',
    fsType: 'squashfs',
    mount: '',
    size: 49233920,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sda1',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 2000389406720,
    physical: '',
    uuid: '100776c9-bc80-9e31-c6d1-93f8c418c2d1',
    label: 'SERVEUR-BB:0',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sda9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdb1',
    type: 'part',
    fsType: '',
    mount: '',
    size: 1048576,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdb2',
    type: 'part',
    fsType: 'ext4',
    mount: '',
    size: 2147483648,
    physical: '',
    uuid: 'f4fb588e-039a-40e7-9e7c-be1565cab7ac',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdb3',
    type: 'part',
    fsType: 'LVM2_member',
    mount: '',
    size: 237906165760,
    physical: '',
    uuid: '1yMIdP-6v1A-fXEE-Q8La-3JP4-yGXD-N3tG4Z',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdc1',
    type: 'part',
    fsType: 'linux_raid_member',
    mount: '',
    size: 2000389406720,
    physical: '',
    uuid: '100776c9-bc80-9e31-c6d1-93f8c418c2d1',
    label: 'SERVEUR-BB:0',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdc9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdd1',
    type: 'part',
    fsType: 'zfs_member',
    mount: '',
    size: 2000389406720,
    physical: '',
    uuid: '15332071847395975336',
    label: 'data-pool',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sdd9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sde1',
    type: 'part',
    fsType: 'zfs_member',
    mount: '',
    size: 2000389406720,
    physical: '',
    uuid: '15332071847395975336',
    label: 'data-pool',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'sde9',
    type: 'part',
    fsType: '',
    mount: '',
    size: 8388608,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  }
]

@MauriceNino
Copy link
Owner

@MrAlucardDante Yeah, I know, but that was a few days ago and I have made some other changes in the storage area in the meantime, so I was unsure which version you were on (it is harder to check that for dev builds). That's why I suggested the latest build.

And I didn't want to come off as rude by telling you how to update images; I was just trying to make sure. In the self-hosted space there are also a lot of non-technical people around, so I always try to be as specific as possible :)

I will look into that log output tomorrow and see what I can do.

@MauriceNino
Copy link
Owner

@MrAlucardDante The latest main release should fix your problems for the most part.

@MrAlucardDante
Copy link

@MauriceNino thanks for your hard work; we are close, but not there yet 😉
The graph now displays the correct used/free space, but the text in the description is still wrong.

It should be:

Drive

Kingston Technology SSD
=> 223.6 GiB

RAID
=> data-pool

Seagate HD, Seagate HD, Seagate HD, Seagate HD
=> 3726.0 GiB

image
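
For reference, here is a rough TypeScript sketch (my own illustration, not the actual dashdot code) of how a grouping like the one above could be derived from the blockDevices dump earlier in this thread: disks whose partitions carry a linux_raid_member or zfs_member signature are grouped under the partition label (e.g. data-pool), everything else stays a standalone drive. Note that with the stale linux_raid_member labels discussed further down, sda and sdc would land in a separate group from sdd and sde.

// Sketch only - assumes block devices shaped like the debug dump above
// (name, type, fsType, label, model, size); not the real dashdot implementation.
type BlockDevice = {
  name: string;
  type: string;
  fsType: string;
  label: string;
  model: string;
  size: number;
};

const RAID_MEMBER_TYPES = ['linux_raid_member', 'zfs_member'];

function groupDrives(devices: BlockDevice[]) {
  const disks = devices.filter((d) => d.type === 'disk');
  const parts = devices.filter((d) => d.type === 'part');

  const raidGroups = new Map<string, BlockDevice[]>();
  const standalone: BlockDevice[] = [];

  for (const disk of disks) {
    // A disk joins a RAID/pool group if one of its partitions is a
    // raid/zfs member; the partition label (e.g. "data-pool") names the group.
    const member = parts.find(
      (p) => p.name.startsWith(disk.name) && RAID_MEMBER_TYPES.includes(p.fsType)
    );

    if (member) {
      const key = member.label || member.fsType;
      raidGroups.set(key, [...(raidGroups.get(key) ?? []), disk]);
    } else {
      standalone.push(disk);
    }
  }

  return { raidGroups, standalone };
}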

@MauriceNino
Copy link
Owner

Great! Unfortunately, I can't do anything about your other issue - it is either caused by leftovers from your previous configuration or by an issue in systeminformation (sebhildebrandt/systeminformation#698).

@MrAlucardDante
Copy link

MrAlucardDante commented Jun 23, 2022

Thank you, I am investigating with sebhildebrandt.

Would it be possible to have a graph for each storage device? With the current setup, I have no way to know whether my SSD is almost full, since it is aggregated with my big array.

@MauriceNino
Copy link
Owner

Technically there is an option for that, but I don't think it will work for your setup as of right now.

Normally, every disk in blockDevices lists its partitions and mount points, but your mount point /mnt/host_data is not claimed by any partition, so it would get assigned to your overlay filesystem, which would result in one big graph again.
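
For illustration, here is a minimal sketch of that assignment logic (an assumption of mine, not the actual dashdot source): each mounted filesystem reported by systeminformation's fsSize() is matched to the partition that claims its mount point, and anything left unclaimed (like /mnt/host_data here, since the ZFS dataset is not tied to a partition) can only fall back to one catch-all bucket, which is why everything ends up in a single graph.

// Sketch under the assumption described above; not the real dashdot code.
import { blockDevices, fsSize } from 'systeminformation';

async function assignMountsToDisks() {
  const [devices, filesystems] = await Promise.all([blockDevices(), fsSize()]);
  const parts = devices.filter((d) => d.type === 'part');

  const assigned: { mount: string; disk: string }[] = [];
  const unclaimed: string[] = [];

  for (const fs of filesystems) {
    // Find the partition that is mounted at this filesystem's mount point.
    const part = parts.find((p) => p.mount === fs.mount);
    if (part) {
      // Derive the parent disk from the partition name (e.g. sdb2 -> sdb);
      // good enough for sdX names, would need more care for nvme devices.
      assigned.push({ mount: fs.mount, disk: part.name.replace(/\d+$/, '') });
    } else {
      // No partition claims this mount (e.g. a ZFS dataset or an overlay),
      // so it can only be lumped into a generic catch-all bucket.
      unclaimed.push(fs.mount);
    }
  }

  return { assigned, unclaimed };
}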

@MrAlucardDante
Copy link

Indeed, ZFS doesn't set a mount point for the blockDevices.

NAME="loop0" TYPE="loop" SIZE="64925696" FSTYPE="squashfs" MOUNTPOINT="/snap/core20/1494" UUID="" ROTA="0" RO="1" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="loop1" TYPE="loop" SIZE="64933888" FSTYPE="squashfs" MOUNTPOINT="/snap/core20/1518" UUID="" ROTA="0" RO="1" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="loop2" TYPE="loop" SIZE="83832832" FSTYPE="squashfs" MOUNTPOINT="/snap/lxd/22923" UUID="" ROTA="0" RO="1" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="loop3" TYPE="loop" SIZE="46870528" FSTYPE="squashfs" MOUNTPOINT="/snap/snapd/15904" UUID="" ROTA="0" RO="1" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="loop4" TYPE="loop" SIZE="49233920" FSTYPE="squashfs" MOUNTPOINT="/snap/snapd/16010" UUID="" ROTA="0" RO="1" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="sda" TYPE="disk" SIZE="2000398934016" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="1" RO="0" RM="0" TRAN="sata" SERIAL="Z1E5FT5W" LABEL="" MODEL="ST2000DM001-1CH1" OWNER="root"
NAME="sda1" TYPE="part" SIZE="2000389406720" FSTYPE="linux_raid_member" MOUNTPOINT="" UUID="100776c9-bc80-9e31-c6d1-93f8c418c2d1" ROTA="1" RO="0" RM="0" TRAN="" SERIAL="" LABEL="SERVEUR-BB:0" MODEL="" OWNER="root"
NAME="sda9" TYPE="part" SIZE="8388608" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="1" RO="0" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="sdb" TYPE="disk" SIZE="240057409536" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="0" RO="0" RM="0" TRAN="sata" SERIAL="50026B73807757D5" LABEL="" MODEL="KINGSTON SA400S3" OWNER="root"
NAME="sdb1" TYPE="part" SIZE="1048576" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="0" RO="0" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="sdb2" TYPE="part" SIZE="2147483648" FSTYPE="ext4" MOUNTPOINT="/boot" UUID="f4fb588e-039a-40e7-9e7c-be1565cab7ac" ROTA="0" RO="0" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="sdb3" TYPE="part" SIZE="237906165760" FSTYPE="LVM2_member" MOUNTPOINT="" UUID="1yMIdP-6v1A-fXEE-Q8La-3JP4-yGXD-N3tG4Z" ROTA="0" RO="0" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="ubuntu--vg-ubuntu--lv" TYPE="lvm" SIZE="107374182400" FSTYPE="ext4" MOUNTPOINT="/" UUID="e1a4370f-8b50-408a-bf6c-41be77910cb1" ROTA="0" RO="0" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="sdc" TYPE="disk" SIZE="2000398934016" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="1" RO="0" RM="0" TRAN="sata" SERIAL="WFL3ANGQ" LABEL="" MODEL="ST2000DM008-2FR1" OWNER="root"
NAME="sdc1" TYPE="part" SIZE="2000389406720" FSTYPE="linux_raid_member" MOUNTPOINT="" UUID="100776c9-bc80-9e31-c6d1-93f8c418c2d1" ROTA="1" RO="0" RM="0" TRAN="" SERIAL="" LABEL="SERVEUR-BB:0" MODEL="" OWNER="root"
NAME="sdc9" TYPE="part" SIZE="8388608" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="1" RO="0" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="sdd" TYPE="disk" SIZE="2000398934016" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="1" RO="0" RM="0" TRAN="sata" SERIAL="ZK306AYF" LABEL="" MODEL="ST2000DM008-2FR1" OWNER="root"
NAME="sdd1" TYPE="part" SIZE="2000389406720" FSTYPE="zfs_member" MOUNTPOINT="" UUID="15332071847395975336" ROTA="1" RO="0" RM="0" TRAN="" SERIAL="" LABEL="data-pool" MODEL="" OWNER="root"
NAME="sdd9" TYPE="part" SIZE="8388608" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="1" RO="0" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"
NAME="sde" TYPE="disk" SIZE="2000398934016" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="1" RO="0" RM="0" TRAN="sata" SERIAL="ZFL5QNFC" LABEL="" MODEL="ST2000DM008-2FR1" OWNER="root"
NAME="sde1" TYPE="part" SIZE="2000389406720" FSTYPE="zfs_member" MOUNTPOINT="" UUID="15332071847395975336" ROTA="1" RO="0" RM="0" TRAN="" SERIAL="" LABEL="data-pool" MODEL="" OWNER="root"
NAME="sde9" TYPE="part" SIZE="8388608" FSTYPE="" MOUNTPOINT="" UUID="" ROTA="1" RO="0" RM="0" TRAN="" SERIAL="" LABEL="" MODEL="" OWNER="root"

After investigating with sebhildebrandt, the issue turned out to be on my end: the labels and filesystem types weren't fully erased and reset when I switched from linux-raid to ZFS.
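
For anyone hitting the same thing, here is a small hypothetical helper (names and approach are mine, not part of dashdot) that lists partitions still carrying a stale linux_raid_member signature, like sda1/sdc1 above. Clearing those signatures (e.g. with wipefs or mdadm --zero-superblock) is destructive, so double-check the target device first.

// Hypothetical helper, not part of dashdot: report partitions whose FSTYPE is
// still linux_raid_member so stale signatures can be cleaned up deliberately.
import { execSync } from 'child_process';

function findStaleRaidMembers(): string[] {
  const out = execSync('lsblk -P -o NAME,FSTYPE', { encoding: 'utf8' });

  return out
    .trim()
    .split('\n')
    .filter((line) => line.includes('FSTYPE="linux_raid_member"'))
    .map((line) => /NAME="([^"]+)"/.exec(line)?.[1] ?? '')
    .filter((name) => name.length > 0);
}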

Thank you for your time and help.

alan-caio added a commit to alan-caio/healthcare-dshboard-react-node that referenced this issue Jul 28, 2022
# [3.4.0](MauriceNino/dashdot@v3.3.3...v3.4.0) (2022-06-15)

### Bug Fixes

* **api:** error on multiple default network interfaces ([3cf8774](MauriceNino/dashdot@3cf8774)), closes [#118](MauriceNino/dashdot#118)

### Features

* **api, view:** add raid information to storage widget ([ba84d34](MauriceNino/dashdot@ba84d34)), closes [#40](MauriceNino/dashdot#40)
* **api:** add option to select the used network interface ([8b6a78d](MauriceNino/dashdot@8b6a78d)), closes [#117](MauriceNino/dashdot#117)
* **view:** add option to show in imperial units ([d4b1f69](MauriceNino/dashdot@d4b1f69))