2019-10-09
12:37 <arturo> drain tools-worker-1038 to rebalance load in the k8s cluster [tools]
12:35 <arturo> uncordon tools-worker-1029 (was disabled for unknown reasons) [tools]
12:33 <arturo> drain tools-worker-1010 to rebalance load [tools]
10:33 <arturo> several sgewebgrid-lighttpd nodes (9) not available because cloudvirt1013 is rebooting [tools]
10:21 <arturo> several worker nodes (7) not available because cloudvirt1012 is rebooting [tools]
10:08 <arturo> several worker nodes (6) not available because cloudvirt1009 is rebooting [tools]
09:59 <arturo> several worker nodes (5) not available because cloudvirt1008 is rebooting [tools]
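The drain/uncordon entries above correspond to the standard `kubectl` rebalancing cycle. A minimal sketch, using a node name from the log; the commands are echoed rather than executed, since running them assumes admin access to the tools Kubernetes cluster:

```shell
#!/bin/sh
# Sketch of the drain/uncordon cycle used to rebalance a k8s cluster.
# Commands are echoed only, so this is safe to run anywhere.
NODE=tools-worker-1038

# Evict pods and mark the node unschedulable. DaemonSet-managed pods
# are skipped and emptyDir data is discarded, hence the two flags.
echo "kubectl drain $NODE --ignore-daemonsets --delete-local-data"

# After the evicted pods have rescheduled on other workers,
# make the node schedulable again.
echo "kubectl uncordon $NODE"
```

Draining before a hypervisor reboot (as with the cloudvirt reboots above) moves workloads off the affected workers gracefully instead of letting pods die with the VM.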
2019-10-08
19:39 <bstorm_> drained tools-worker-1007/8 to rebalance the cluster [tools]
19:34 <bstorm_> drained tools-worker-1009 and then 1014 for rebalancing [tools]
19:27 <bstorm_> drained tools-worker-1005 for rebalancing (and put these back in service as I went) [tools]
19:24 <bstorm_> drained tools-worker-1003 and 1009 for rebalancing [tools]
15:41 <arturo> deleted VM instance tools-sgebastion-0test. No longer in use. [tools]
2019-10-07
20:17 <bd808> Dropped backlog of messages for delivery to tools.usrd-tools [tools]
20:16 <bd808> Dropped backlog of messages for delivery to tools.mix-n-match [tools]
20:13 <bd808> Dropped backlog of frozen messages for delivery (240 dropped) [tools]
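Tool mail is delivered through Exim, so dropping a backlog of frozen messages like the entries above is typically a matter of `exiqgrep` piped into `exim -Mrm`. A sketch (the command line is echoed, not executed, since it assumes a live Exim spool to operate on):

```shell
#!/bin/sh
# Sketch: remove all frozen messages from an Exim queue.
# exiqgrep -z -i prints only the message IDs of frozen queue entries;
# exim -Mrm removes each message by ID. Echoed here because the
# commands assume root access to a host running Exim.
CMD='exiqgrep -z -i | xargs --no-run-if-empty exim -Mrm'
echo "$CMD"
```

Restricting the pipeline to a single destination (e.g. one tool's address) would be done by adding an `exiqgrep -r` recipient filter before removal.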
19:25 <bstorm_> deleted tools-puppetmaster-02 [tools]
19:20 <Krenair> reboot tools-k8s-master-01 due to nfs stale issue [tools]
19:18 <Krenair> reboot tools-paws-worker-1006 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1040 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1039 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1038 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1037 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1036 due to nfs stale issue [tools]
19:16 <phamhi> reboot tools-worker-1035 due to nfs stale issue [tools]
19:15 <phamhi> reboot tools-worker-1034 due to nfs stale issue [tools]
19:15 <phamhi> reboot tools-worker-1033 due to nfs stale issue [tools]
19:15 <phamhi> reboot tools-worker-1032 due to nfs stale issue [tools]
19:15 <phamhi> reboot tools-worker-1031 due to nfs stale issue [tools]
19:15 <phamhi> reboot tools-worker-1030 due to nfs stale issue [tools]
19:10 <Krenair> reboot tools-puppetmaster-02 due to nfs stale issue [tools]
19:09 <Krenair> reboot tools-sgebastion-0test due to nfs stale issue [tools]
19:08 <Krenair> reboot tools-sgebastion-09 due to nfs stale issue [tools]
19:08 <Krenair> reboot tools-sge-services-04 due to nfs stale issue [tools]
19:07 <Krenair> reboot tools-paws-worker-1002 due to nfs stale issue [tools]
19:06 <Krenair> reboot tools-mail-02 due to nfs stale issue [tools]
19:06 <Krenair> reboot tools-docker-registry-03 due to nfs stale issue [tools]
19:04 <Krenair> reboot tools-worker-1029 due to nfs stale issue [tools]
19:00 <Krenair> reboot tools-static-12 tools-docker-registry-04 and tools-clushmaster-02 due to NFS stale issue [tools]
18:55 <phamhi> reboot tools-worker-1028 due to nfs stale issue [tools]
18:55 <phamhi> reboot tools-worker-1027 due to nfs stale issue [tools]
18:55 <phamhi> reboot tools-worker-1026 due to nfs stale issue [tools]
18:55 <phamhi> reboot tools-worker-1025 due to nfs stale issue [tools]
18:47 <phamhi> reboot tools-worker-1023 due to nfs stale issue [tools]
18:47 <phamhi> reboot tools-worker-1022 due to nfs stale issue [tools]
18:46 <phamhi> reboot tools-worker-1021 due to nfs stale issue [tools]
18:46 <phamhi> reboot tools-worker-1020 due to nfs stale issue [tools]
18:46 <phamhi> reboot tools-worker-1019 due to nfs stale issue [tools]
18:46 <phamhi> reboot tools-worker-1018 due to nfs stale issue [tools]
18:34 <phamhi> reboot tools-worker-1017 due to nfs stale issue [tools]
18:34 <phamhi> reboot tools-worker-1016 due to nfs stale issue [tools]
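The long run of "nfs stale issue" reboots above follows a common pattern: once an NFS file handle goes stale, processes touching the mount hang in uninterruptible I/O, and a reboot is the reliable fix. One way to probe a mount for staleness without hanging the shell is to bound the check with `timeout(1)`; a sketch, where the mount point is a placeholder assumption rather than a path from the log:

```shell
#!/bin/sh
# Sketch: probe an NFS mount for staleness without blocking forever.
# stat() on a stale mount can hang in D state, so bound it with timeout.
# The mount point is a placeholder; a nonexistent or stale path both
# land in the failure branch.
MOUNT=/mnt/nfs/example
if timeout 5 stat "$MOUNT" >/dev/null 2>&1; then
    echo "$MOUNT ok"
else
    echo "$MOUNT stale or unreachable"
fi
```

Running a check like this across the fleet (e.g. via clush from a clushmaster host, as the hostnames above suggest) would identify which workers need the reboot.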