2019-10-07
19:06 <Krenair> reboot tools-mail-02 due to nfs stale issue [tools]
19:06 <Krenair> reboot tools-docker-registry-03 due to nfs stale issue [tools]
19:04 <Krenair> reboot tools-worker-1029 due to nfs stale issue [tools]
19:00 <Krenair> reboot tools-static-12 tools-docker-registry-04 and tools-clushmaster-02 due to NFS stale issue [tools]
18:55 <phamhi> reboot tools-worker-1028 due to nfs stale issue [tools]
18:55 <phamhi> reboot tools-worker-1027 due to nfs stale issue [tools]
18:55 <phamhi> reboot tools-worker-1026 due to nfs stale issue [tools]
18:55 <phamhi> reboot tools-worker-1025 due to nfs stale issue [tools]
18:47 <phamhi> reboot tools-worker-1023 due to nfs stale issue [tools]
18:47 <phamhi> reboot tools-worker-1022 due to nfs stale issue [tools]
18:46 <phamhi> reboot tools-worker-1021 due to nfs stale issue [tools]
18:46 <phamhi> reboot tools-worker-1020 due to nfs stale issue [tools]
18:46 <phamhi> reboot tools-worker-1019 due to nfs stale issue [tools]
18:46 <phamhi> reboot tools-worker-1018 due to nfs stale issue [tools]
18:34 <phamhi> reboot tools-worker-1017 due to nfs stale issue [tools]
18:34 <phamhi> reboot tools-worker-1016 due to nfs stale issue [tools]
18:32 <phamhi> reboot tools-worker-1015 due to nfs stale issue [tools]
18:32 <phamhi> reboot tools-worker-1014 due to nfs stale issue [tools]
18:23 <phamhi> reboot tools-worker-1013 due to nfs stale issue [tools]
18:21 <phamhi> reboot tools-worker-1012 due to nfs stale issue [tools]
18:12 <phamhi> reboot tools-worker-1011 due to nfs stale issue [tools]
18:12 <phamhi> reboot tools-worker-1010 due to nfs stale issue [tools]
18:08 <phamhi> reboot tools-worker-1009 due to nfs stale issue [tools]
18:07 <phamhi> reboot tools-worker-1008 due to nfs stale issue [tools]
17:58 <phamhi> reboot tools-worker-1007 due to nfs stale issue [tools]
17:57 <phamhi> reboot tools-worker-1006 due to nfs stale issue [tools]
17:47 <phamhi> reboot tools-worker-1005 due to nfs stale issue [tools]
17:47 <phamhi> reboot tools-worker-1004 due to nfs stale issue [tools]
17:43 <phamhi> reboot tools-worker-1002.tools.eqiad.wmflabs due to nfs stale issue [tools]
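(The long run of reboots above all traces back to stale NFS file handles: once the server invalidates a handle, every I/O on the mount returns ESTALE and only a remount or a reboot clears it. A minimal triage sketch; the mount path is an assumption about a typical Toolforge NFS mount, not taken from the log:

```
# Symptom check: any access to a stale mount fails with "Stale file handle".
stat /mnt/nfs/labstore-secondary-tools-project
# A lazy unmount followed by a remount sometimes recovers it:
sudo umount -l /mnt/nfs/labstore-secondary-tools-project
sudo mount -a
# When that fails, a reboot is the reliable fix, which is what was done here:
sudo reboot
```
)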
17:35 <phamhi> drained and uncordoned tools-worker-100[1-5] [tools]
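(For the Kubernetes workers, the usual cycle is to drain a node, evicting its pods, before rebooting it, and to uncordon it afterwards so the scheduler will place pods on it again. A sketch of that cycle for one node, using the kubectl flag names current in 2019:

```
kubectl drain tools-worker-1001.tools.eqiad.wmflabs \
    --ignore-daemonsets --delete-local-data      # evict pods, skip daemonsets
# ... reboot the node and wait for it to come back ...
kubectl uncordon tools-worker-1001.tools.eqiad.wmflabs  # mark schedulable again
```
)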
17:32 <bstorm_> reboot tools-sgewebgrid-lighttpd-0912 [tools]
17:30 <bstorm_> reboot tools-sgewebgrid-lighttpd-0923/24/08 [tools]
17:01 <bstorm_> rebooting tools-sgegrid-master and tools-sgegrid-shadow 😭 [tools]
16:58 <bstorm_> rebooting tools-sgewebgrid-lighttpd-0902/4/6/7/8/19 [tools]
16:53 <bstorm_> rebooting tools-sgewebgrid-generic-0902/4 [tools]
16:50 <bstorm_> rebooting tools-sgeexec-0915/18/19/23/26 [tools]
16:49 <bstorm_> rebooting tools-sgeexec-0901 and tools-sgeexec-0909/10/11 [tools]
16:46 <bd808> `sudo shutdown -r now` for tools-sgebastion-08 [tools]
16:41 <bstorm_> reboot tools-sgebastion-07 [tools]
16:39 <bd808> `sudo service nslcd restart` on tools-sgebastion-08 [tools]
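(nslcd is the local daemon that answers user and group lookups from LDAP; when it wedges, name resolution, and with it SSH logins, stalls on the bastion. A restart plus a quick verification, where the account name is a placeholder:

```
sudo service nslcd restart
# Verify LDAP lookups resolve again (user name is hypothetical):
id some-ldap-user
getent passwd some-ldap-user
```
)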
2019-10-04
21:43 <bd808> `sudo exec-manage repool tools-sgeexec-0923.tools.eqiad.wmflabs` [tools]
21:26 <bd808> Rebooting tools-sgeexec-0923 after lots of messing about with a broken update-initramfs build [tools]
20:35 <bd808> Manually running `/usr/bin/python3 /usr/bin/unattended-upgrade` on tools-sgeexec-0923 [tools]
20:33 <bd808> Killed 2 /usr/bin/unattended-upgrade procs on tools-sgeexec-0923 that seemed stuck [tools]
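(Read bottom-up, the four 2019-10-04 entries trace one recovery: two wedged unattended-upgrade processes were killed, the upgrade was re-run by hand, the broken initramfs build was sorted out, and the host was rebooted and repooled. A sketch of the sequence in chronological order; the depool step and the update-initramfs invocation are assumptions about what the "messing about" involved:

```
sudo exec-manage depool tools-sgeexec-0923.tools.eqiad.wmflabs  # assumed first step
sudo pkill -f unattended-upgrade                    # kill the stuck processes
sudo /usr/bin/python3 /usr/bin/unattended-upgrade   # re-run the upgrade by hand
sudo update-initramfs -u -k all                     # rebuild the initramfs (assumed)
sudo shutdown -r now
sudo exec-manage repool tools-sgeexec-0923.tools.eqiad.wmflabs  # from the log
```
)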
13:33 <arturo> remove /etc/init.d/rsyslog on tools-worker-XXXX nodes so the rsyslog deb prerm script doesn't prevent the package from being updated [tools]
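(The rsyslog fix has to fan out across every tools-worker node; tools-clushmaster-02, seen above, suggests clush as the natural fan-out tool. A sketch under that assumption, where the @tools-worker node group alias is also assumed:

```
# Run from the clushmaster host; the @tools-worker group name is an assumption.
clush -w @tools-worker 'sudo rm -f /etc/init.d/rsyslog'
clush -w @tools-worker 'sudo apt-get -y install rsyslog'  # prerm no longer blocks the upgrade
```
)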
2019-10-03
13:05 <arturo> delete servers tools-sssd-sgeexec-test-[1,2], no longer required [tools]
2019-09-27
16:59 <bd808> Set "profile::rsyslog::kafka_shipper::kafka_brokers: []" in tools-elastic prefix puppet [tools]
00:40 <bstorm_> depooled and rebooted tools-sgewebgrid-lighttpd-0927 [tools]
2019-09-25
19:08 <andrewbogott> moving tools-sgewebgrid-lighttpd-0903 to cloudvirt1021 [tools]
2019-09-23
16:58 <bstorm_> deployed tools-manifest 0.20 and restarted webservicemonitor [tools]
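(webservicemonitor, shipped by the tools-manifest package, watches each tool's service manifest and restarts webservices that have stopped. A sketch of the deploy step, assuming an apt-based deploy and a systemd unit, neither of which the log states:

```
sudo apt-get update
sudo apt-get install tools-manifest             # pulls the 0.20 build (assumed mechanism)
sudo systemctl restart webservicemonitor
systemctl status webservicemonitor --no-pager   # confirm it came back up
```
)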