ZFS Pool IO stats are always 0% #200

Open
kernelkaribou opened this issue Oct 3, 2024 · 5 comments
Labels
enhancement (New feature or request) · question (Further information is requested)

Comments

@kernelkaribou

After adding my ZFS pools to the disks to monitor, the storage usage is reported, but I/O stats are not, unlike "normal" disks. I presume this is because the ZFS pool is represented as a group rather than a single disk.

I'm no expert, but I believe this is because I/O stats are captured separately on ZFS. The only tool I know of is zpool iostat, which produces metrics showing operations and bandwidth.

$ zpool iostat pool02
              capacity     operations     bandwidth 
pool        alloc   free   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
pool02      9.03T  7.33T     26      1  8.86M   331K

My understanding is that these metrics are an average since the machine booted, but you can pass -y plus an interval and count to sample over a given time frame a set number of times (otherwise it repeats indefinitely). Additionally, I can suppress the headers with -H and then use awk to pull out just the bandwidth metrics:

$ zpool iostat pool02 -H -y 1 1 | awk '{print $6, $7}'
0 0

One unfortunate behavior I have seen is that the interval marks the start of the averaging window, so a 60-second average requires the command to run for 60 seconds (which makes sense), and there is no way to get a snapshot otherwise. That means a 1-second sample that happens to land on a quiet moment, like the one above, may not be representative.
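As a sketch of what a longer window would look like (assuming the -p flag for parseable exact byte values is available in your zpool version), a one-shot 60-second average would be something like this, though it blocks for the full minute:

$ zpool iostat -H -y -p pool02 60 1 | awk '{print $6, $7}'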

It would be nice to have ZFS stats if possible; seeing persistently flat 0 graphs for multiple pools bothers me. Alternatively, a way to filter those graphs out would also be great. I'm not sure whether there is a way to tell the agent which stats to capture.

Love the project, thank you.

@henrygd
Owner

henrygd commented Oct 3, 2024

I don't use ZFS so I probably won't be able to add any direct integration with ZFS utilities.

Do the devices that make up the pool show up if you run cat /proc/diskstats?

Maybe as a workaround I can add a way for you to map the sum total I/O of multiple devices to the pool. Something like pool02_IO=sdc,sdd,sde.
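As a rough sketch of that summation (device names here are just from my hypothetical mapping above; /proc/diskstats reports sectors read and written in fields 6 and 10, always in 512-byte units):

$ awk '$3 ~ /^(sdc|sdd|sde)$/ { r += $6; w += $10 }
    END { printf "pool02: %.1f GiB read, %.1f GiB written\n", r*512/2^30, w*512/2^30 }' /proc/diskstats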

henrygd added the enhancement and question labels on Oct 3, 2024
@kernelkaribou
Author

Thanks. Yep, looks like I can get the individual disk stats with `cat /proc/diskstats`:

cat /proc/diskstats
 259       0 nvme1n1 1506586 0 228868910 418044 129801417 0 1382366017 44571380 0 42303316 60623102 103422 0 1851406079 107140 5609452 15526537
 259       2 nvme1n1p1 1506442 0 228864038 418022 129801417 0 1382366017 44571380 0 42303284 45096543 103422 0 1851406079 107140 0 0
 259       3 nvme1n1p9 38 0 304 2 0 0 0 0 0 16 2 0 0 0 0 0 0
 259       1 nvme2n1 1511239 0 229581638 420189 129809248 0 1382366017 38301785 0 39336024 51438886 103419 0 1851408090 115364 5609491 12601546
 259       4 nvme2n1p1 1511095 0 229576766 420169 129809248 0 1382366017 38301785 0 39335996 38837320 103419 0 1851408090 115364 0 0
 259       5 nvme2n1p9 38 0 304 0 0 0 0 0 0 8 0 0 0 0 0 0 0
 259       6 nvme0n1 610788 106246 28280576 371884 22665350 15793242 526448866 63327031 0 33930024 66396994 101187 0 2062731552 66803 1090594 2631275
 259       7 nvme0n1p1 195 6327 10638 477 2 0 2 8 0 164 509 8 0 8276416 23 0 0
 259       8 nvme0n1p2 609881 99899 28244114 371327 22664640 15792368 526436240 63321656 0 33924352 63759764 101179 0 2054455136 66780 0 0
 259       9 nvme0n1p3 285 20 6472 40 704 874 12624 5363 0 6652 5403 0 0 0 0 0 0
   8       0 sda 50991523 2801 41070020368 302941532 2848924 54384 1835946992 27799594 0 123105296 331867050 0 0 0 0 43246 1125923
   8       1 sda1 50991369 2801 41070015608 302941193 2848924 54384 1835946992 27799594 0 123104976 330740788 0 0 0 0 0 0
   8       9 sda9 38 0 304 61 0 0 0 0 0 72 61 0 0 0 0 0 0
   8      48 sdd 71421488 3047 41663747592 250393782 2826984 66182 1835946992 26877846 0 122065348 278372007 0 0 0 0 43246 1100378
   8      49 sdd1 71421334 3047 41663742832 250393448 2826984 66182 1835946992 26877846 0 122065048 277271295 0 0 0 0 0 0
   8      57 sdd9 38 0 304 60 0 0 0 0 0 72 60 0 0 0 0 0 0
   8      32 sdc 2859360 84436 145041618 21616561 27455244 4334787 1788230400 720119264 0 115697556 831925478 0 0 0 0 2574871 90189652
   8      33 sdc1 2859246 84436 145037058 21616342 27455244 4334787 1788230400 720119264 0 115697356 741735606 0 0 0 0 0 0
   8      16 sdb 790462 10614 202385242 892026 33839326 4888077 1187954368 22663024 0 10797204 24158455 22012 0 2685252744 157152 783074 446252
   8      17 sdb1 790370 10614 202380906 892003 33839326 4888077 1187954368 22663024 0 10797164 23712180 22012 0 2685252744 157152 0 0

Disks in the ZFS pools (from the output above):

lsblk -o NAME,FSTYPE -dsn | grep -A 1 'zfs'
sda1      zfs_member
└─sda     
--
sdd1      zfs_member
└─sdd     
--
nvme1n1p1 zfs_member
└─nvme1n1 
--
nvme2n1p1 zfs_member
└─nvme2n1 
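If it helps, the pools' parent disks could probably be derived automatically, something like this (assuming lsblk's PKNAME column, which should exist in recent util-linux; on this box it would print the four parent disks):

$ lsblk -rno NAME,FSTYPE,PKNAME | awk '$2=="zfs_member" {print $3}' | sort -u
nvme1n1
nvme2n1
sda
sdd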

The summation is not a bad idea, if it's not too much effort.

@henrygd
Owner

henrygd commented Oct 5, 2024

Great, thanks for confirming. I'll add this to the list.

I do need to work on a different project for a bit to pay the bills, so it will take some time.

@kernelkaribou
Author

Awesome, and totally understood. Even as it stands today, this has been one of my favorite additions to my homelab environment, so thanks again.

@theAlevan

theAlevan commented Oct 17, 2024

If I may ask @kernelkaribou, how did you reference your ZFS pool in your docker compose? By directory or by device name (and if so, which device)?

I get 0 data when using the directory method.
