Rebooting Petabyte Control Node
am rebooting one of the control nodes for a petabyte+ storage array after 504 days of system uptime..
watching kernel loglevel 6 (KERN_INFO) messages scroll by on the SoL terminal via iDRAC..
logs scrolling, the array of SAS3 DE3-24C double-redundant SFF-linked Oracle/Sun drive enclosures spins up and begins talking to multipathd...
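sanity-checking the dual paths once multipathd settles, a rough sketch assuming the stock multipath-tools CLI and two paths per drive..
> # multipath -ll                            # each LUN should list both of its paths
> # multipath -ll | grep -c 'active ready'   # total active paths, expect 2x the drive count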
waiting for zpool cache file import..
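the boot scripts handle the actual import, but it boils down to roughly this, assuming the stock cachefile location..
> # zpool import -c /etc/zfs/zpool.cache -aN   # import every pool in the cachefile, without mounting datasets yet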
waiting.. 131 / 132 drives online across all enclosures.. hmm.. what's this now...
> transport_port_remove: removed: sas_addr(0x500c04f2cfe10620)
well ffs
> 12:0:10:0: SATA: handle(0x0017), sas_addr(0x500c04f2cfe10620), phy(32),
oh, that's a SATA drive in the system's local enclosure bay used for scratch data; it's not part of the ZFS pool..
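for next time: mapping a sas_addr from the kernel log back to a device node, either of these works (the first assumes lsscsi is installed, the second just greps sysfs)..
> # lsscsi -t | grep 0x500c04f2cfe10620        # shows the [H:C:T:L] tuple and /dev node
> # grep -l 0x500c04f2cfe10620 /sys/class/sas_device/*/sas_address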
next step, not today: move the control nodes to a higher-performance + lower-wattage pair of FreeBSD servers