[Bug] Digital Ocean Droplet Disk Space Is Duplicated #196

Closed

kaimoe opened this issue Jul 10, 2022 · 6 comments

Comments

kaimoe commented Jul 10, 2022

Description of the bug

When using dashdot on a Digital Ocean droplet (Ubuntu 20.04, 50 GB), the disk space shown is twice what is expected. Disk /dev/vda is 50 GiB and /dev/vdb is only 474 KiB, but dashdot reports vdb with the same size as vda, incorrectly doubling the total disk space.

[screenshot: dashdot storage widget showing the doubled disk space]
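
For illustration, a hypothetical reconstruction of the failure mode (a guess, not dashdot's actual code): if each detected block device is paired with a native disk-layout entry, and a missing match silently falls back to an existing entry, the tiny vdb inherits vda's 50 GiB. The device names and sizes below are taken from the logs in this report; the pairing logic itself is assumed.

// Hypothetical pairing logic; only the data shapes come from the report.
const diskLayout = [{ device: '/dev/vda', size: 53_687_091_200 }];
const blockDevices = [
  { name: 'vda', size: 53_687_091_200 },
  { name: 'vdb', size: 485_376 }, // 474 KiB cloud-init config ISO
];

const storage = blockDevices.map((block) => {
  const native = diskLayout.find((d) => d.device.endsWith(block.name));
  // Bug: when no native entry matches (vdb), another disk's size leaks in.
  return { device: block.name, size: (native ?? diskLayout[0]).size };
});
// => [{ device: 'vda', size: 53687091200 }, { device: 'vdb', size: 53687091200 }]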

How to reproduce

No response

Relevant log output

user@host:~ > docker logs dashdot
...
listening on *:3001
Using internally mounted network interface "eth0"
Using host os version from "/mnt/host/etc/os-release"
Static Server Info: {
  os: {
    arch: 'x64',
    distro: 'Ubuntu',
    kernel: '5.4.0-121-generic',
    platform: 'linux',
    release: '20.04.4 LTS',
    uptime: 181373.98
  },
  cpu: {
    brand: 'DO-Regular',
    model: '',
    cores: 1,
    threads: 1,
    frequency: 2
  },
  ram: {
    size: 2079617024,
    layout: [ { brand: 'QEMU', type: 'RAM', frequency: NaN } ]
  },
  storage: {
    layout: [
      { device: 'vda', brand: '0x1af4', size: 53687091200, type: 'HD' },
      { device: 'vdb', brand: '0x1af4', size: 53687091200, type: 'HD' }
    ]
  }
...


user@host:~ > sudo fdisk -l
...
Disk /dev/vda: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 4C5E9B9F-981B-48F7-BC74-8F5BED739FAA

Device      Start       End   Sectors  Size Type
/dev/vda1  227328 104857566 104630239 49.9G Linux filesystem
/dev/vda14   2048     10239      8192    4M BIOS boot
/dev/vda15  10240    227327    217088  106M EFI System

Partition table entries are not in disk order.

Disk /dev/vdb: 474 KiB, 485376 bytes, 948 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Info output of dashdot cli

yarn run v1.22.19
$ node dist/apps/cli/main.js info
INFO
=========
Yarn: 1.22.19
Node: v18.5.0
Dash: 4.3.1

Cwd: /app
Hash: 87449ffde7e1f62cfc2eb44e8c46dfa72c7d1805
In Docker: true
In Podman: false
In Docker (env): true
Done in 0.60s.

What browsers are you seeing the problem on?

No response

Where is your instance running?

Linux Server

Additional context

No response

@MauriceNino (Owner)

Hey, thank you for your issue!

Can you maybe provide me with the output of the following commands?

docker exec CONTAINER yarn cli raw-data --storage
docker exec CONTAINER df 


kaimoe commented Jul 10, 2022

Here you go! (I've trimmed the irrelevant read-only loop devices, which I assume come from Docker.)

yarn run v1.22.19
$ node dist/apps/cli/main.js raw-data --storage
Disk Layout: [
  {
    device: '/dev/vda',
    type: 'HD',
    name: '',
    vendor: '0x1af4',
    size: 53687091200,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '',
    serialNum: '',
    interfaceType: '',
    smartStatus: 'unknown',
    temperature: null
  }
]
FS Size: [
  {
    fs: 'overlay',
    type: 'overlay',
    size: 51848359936,
    used: 17542520832,
    available: 34289061888,
    use: 33.85,
    mount: '/'
  },
  {
    fs: '/dev/vda1',
    type: 'ext4',
    size: 51848359936,
    used: 17542520832,
    available: 34289061888,
    use: 33.85,
    mount: '/etc/resolv.conf'
  }
]
Block Devices: [
  {
    name: 'vda',
    type: 'disk',
    fsType: '',
    mount: '',
    size: 53687091200,
    physical: 'HDD',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'vdb',
    type: 'disk',
    fsType: 'iso9660',
    mount: '',
    size: 485376,
    physical: 'HDD',
    uuid: '2022-07-08-15-17-52-00',
    label: 'config-2',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'vda1',
    type: 'part',
    fsType: 'ext4',
    mount: '/mnt/host/etc/os-release',
    size: 53570682368,
    physical: '',
    uuid: '52ab51ba-2cb8-41d9-898e-dacf353b6e3a',
    label: 'cloudimg-rootfs',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'vda14',
    type: 'part',
    fsType: '',
    mount: '',
    size: 4194304,
    physical: '',
    uuid: '',
    label: '',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  },
  {
    name: 'vda15',
    type: 'part',
    fsType: 'vfat',
    mount: '',
    size: 111149056,
    physical: '',
    uuid: '0C7A-D5F0',
    label: 'UEFI',
    model: '',
    serial: '',
    removable: false,
    protocol: '',
    group: undefined
  }
]
Filesystem           1K-blocks      Used Available Use% Mounted on
overlay               50633164  17131376  33485404  34% /
tmpfs                    65536         0     65536   0% /dev
tmpfs                  1015436         0   1015436   0% /sys/fs/cgroup
shm                      65536         0     65536   0% /dev/shm
/dev/vda1             50633164  17131376  33485404  34% /etc/resolv.conf
/dev/vda1             50633164  17131376  33485404  34% /etc/hostname
/dev/vda1             50633164  17131376  33485404  34% /etc/hosts
/dev/vda1             50633164  17131376  33485404  34% /mnt/host/etc/os-release
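
As a side note, the field names in these dumps match the systeminformation npm package (diskLayout(), fsSize(), blockDevices()), which dashdot appears to use, so the mismatch can be reproduced outside of dashdot. A minimal sketch, assuming that package:

import si from 'systeminformation';

// On this droplet, diskLayout() returns only vda, while blockDevices()
// also lists the 474 KiB cloud-init ISO vdb as a second 'disk'.
const disks = await si.diskLayout();
const blocks = (await si.blockDevices()).filter((b) => b.type === 'disk');

console.log(disks.map((d) => [d.device, d.size]));  // [['/dev/vda', 53687091200]]
console.log(blocks.map((b) => [b.name, b.size]));   // [['vda', 53687091200], ['vdb', 485376]]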

@MauriceNino (Owner)

Thank you, this will be fixed shortly!

MauriceNino added a commit that referenced this issue Jul 10, 2022
## [4.3.2](v4.3.1...v4.3.2) (2022-07-10)

### Bug Fixes

* **api:** fallback to disk block info when no native disk found ([ca180c0](ca180c0)), closes [#196](#196)
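
A minimal sketch of what this fix describes, falling back to the block device's own info when no native disk entry matches (the function name is hypothetical and this is not the actual patch):

import si from 'systeminformation';

// Fallback to disk block info when no native disk is found: a device
// that is missing from diskLayout() keeps its own blockDevices() size
// instead of borrowing another disk's.
async function getStorageLayout() {
  const disks = await si.diskLayout();
  const blocks = (await si.blockDevices()).filter((b) => b.type === 'disk');

  return blocks.map((block) => {
    const native = disks.find((d) => d.device.endsWith(block.name));
    return {
      device: block.name,
      size: native ? native.size : block.size, // vdb stays at 485376 bytes
      brand: native ? native.vendor : '',
      type: 'HD',
    };
  });
}
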
@MauriceNino (Owner)

🎉 This issue has been resolved in version 4.3.2

Please check the changelog for more details.


kaimoe commented Jul 10, 2022

Thank you very much for the super fast fix!

[screenshot: dashdot storage widget showing the correct disk size after the fix]

@MauriceNino (Owner)

No problem :)
