2019-08-05
09:30 <arturo> `root@tools-checker-03:~# toolscheckerctl restart` (T229787) [tools]
2019-08-02
14:00 <andrewbogott_> rebooting tools-worker-1022 as it is unresponsive [tools]
2019-07-31
18:07 <bstorm_> drained tools-worker-1015/05/03/17 to rebalance load [tools]
17:41 <bstorm_> drained tools-worker-1025 and 1026 to rebalance load [tools]
17:32 <bstorm_> drained tools-worker-1028 to rebalance load [tools]
17:29 <bstorm_> drained tools-worker-1008 to rebalance load [tools]
17:23 <bstorm_> drained tools-worker-1021 to rebalance load [tools]
17:17 <bstorm_> drained tools-worker-1007 to rebalance load [tools]
17:07 <bstorm_> drained tools-worker-1004 to rebalance load [tools]
16:27 <andrewbogott> moving tools-static-12 to cloudvirt1018 [tools]
15:33 <bstorm_> T228573 spinning up 5 worker nodes for kubernetes cluster (tools-worker-1035 through 1039) [tools]
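Rebalancing drains like the ones logged above are normally done with kubectl. A minimal sketch, assuming admin access to the cluster; the exact flags are an assumption for this era of Kubernetes, and the commands are echoed rather than executed because they need live cluster credentials:

```shell
# Sketch only: these kubectl invocations are an assumption about how the
# drains above were performed; echoed, not run, since they need a live cluster.
NODE="tools-worker-1015.tools.eqiad.wmflabs"   # node name taken from the log

# Evict pods and mark the node unschedulable (flag names assumed for this era):
echo "kubectl drain ${NODE} --ignore-daemonsets --delete-local-data"

# Once load has rebalanced onto other workers, allow scheduling again:
echo "kubectl uncordon ${NODE}"
```

Draining evicts the node's pods so the scheduler re-places them on less loaded workers, which is the "rebalance" effect logged above.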
2019-07-27
23:00 <zhuyifei1999_> a past probably related ticket: T194859 [tools]
22:57 <zhuyifei1999_> maintain-kubeusers seems stuck. Traceback: https://phabricator.wikimedia.org/P8812, core dump: /root/core.17898. Restarting [tools]
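For a stuck maintain-kubeusers as above, the usual triage is to check the unit, pull a backtrace from the core file, then restart. A hedged sketch: the systemd unit name and the python3 binary passed to gdb are assumptions (the traceback paste suggests a Python daemon); only the core path comes from the log. Commands are echoed, not executed:

```shell
CORE="/root/core.17898"   # core dump path from the log

echo "systemctl status maintain-kubeusers"              # unit name is an assumption
echo "gdb --batch -ex bt python3 -c ${CORE}"            # backtrace; binary is an assumption
echo "systemctl restart maintain-kubeusers"             # restart once the core is captured
```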
2019-07-26
17:39 <bstorm_> restarted maintain-kubeusers because it was suspiciously tardy and quiet [tools]
17:14 <bstorm_> drained tools-worker-1013.tools.eqiad.wmflabs to rebalance load [tools]
17:09 <bstorm_> draining tools-worker-1020.tools.eqiad.wmflabs to rebalance load [tools]
16:32 <bstorm_> created tools-worker-1034 - T228573 [tools]
15:57 <bstorm_> created tools-worker-1032 and 1033 - T228573 [tools]
15:54 <bstorm_> created tools-worker-1031 - T228573 [tools]
2019-07-25
22:01 <bstorm_> T228573 created tools-worker-1030 [tools]
21:22 <jeh> rebooting tools-worker-1016 unresponsive [tools]
2019-07-24
10:14 <arturo> reallocating tools-puppetmaster-01 from cloudvirt1027 to cloudvirt1028 (T227539) [tools]
10:12 <arturo> reallocating tools-docker-registry-04 from cloudvirt1027 to cloudvirt1028 (T227539) [tools]
2019-07-22
18:39 <bstorm_> repooled tools-sgeexec-0905 after reboot [tools]
18:33 <bstorm_> depooled tools-sgeexec-0905 because it's acting kind of weird and not responding to prometheus [tools]
18:32 <bstorm_> repooled tools-sgewebgrid-lighttpd-0902 after restarting the grid-exec service [tools]
18:28 <bstorm_> depooled tools-sgewebgrid-lighttpd-0902 to find out why it is behaving weird [tools]
17:55 <bstorm_> draining tools-worker-1023 since it is having issues [tools]
17:38 <bstorm_> Adding the prometheus servers to the ferm rules via wikitech hiera for kubelet stats T228573 [tools]
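The depool/repool steps above map onto disabling and re-enabling a host's queue instances in (Son of) Grid Engine. A sketch, assuming the standard qmod tool; commands are echoed since they need a live grid master, and the `'*@host'` queue pattern is an assumption about the queue layout:

```shell
HOST="tools-sgeexec-0905"   # host name from the log

# Depool: disable every queue instance on the host so no new jobs land there.
echo "qmod -d '*@${HOST}'"

# Repool after the reboot: re-enable the queue instances.
echo "qmod -e '*@${HOST}'"
```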
2019-07-20
19:52 <andrewbogott> rebooting tools-worker-1023 [tools]
2019-07-17
20:23 <andrewbogott> migrating tools-sgegrid-shadow to cloudvirt1014 [tools]
2019-07-15
14:50 <bstorm_> cleared error state from tools-sgeexec-0911 which went offline after error from job 5190035 [tools]
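Clearing an exec node's error state, as in the entry above, is typically done with qmod's `-c` flag, which clears the E(rror) state a queue instance enters after a failed job. A sketch under the same assumptions as the other grid examples (command echoed; queue pattern assumed):

```shell
HOST="tools-sgeexec-0911"   # host from the log; job 5190035 triggered the error state

# Clear the error state on all queue instances of the host:
echo "qmod -c '*@${HOST}'"
```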
2019-06-25
09:30 <arturo> detected puppet issue in all VMs: T226480 [tools]
2019-06-24
17:42 <andrewbogott> moving tools-sgeexec-0905 to cloudvirt1015 [tools]
2019-06-17
14:07 <andrewbogott> moving tools-sgewebgrid-lighttpd-0903 to cloudvirt1015 [tools]
13:59 <andrewbogott> moving tools-sgewebgrid-generic-0902 and tools-sgewebgrid-lighttpd-0902 to cloudvirt1015 (optimistic re: T220853 ) [tools]
2019-06-11
18:03 <bstorm_> deleted anomalous kubernetes node tools-worker-1019.eqiad.wmflabs [tools]
2019-06-05
18:33 <andrewbogott> repooled tools-sgeexec-0921 and tools-sgeexec-0929 [tools]
18:16 <andrewbogott> depooling and moving tools-sgeexec-0921 and tools-sgeexec-0929 [tools]
2019-05-30
13:01 <arturo> uncordon/repool tools-worker-1001/2/3. They should be fine now. I'm only leaving 1029 cordoned for testing purposes [tools]
13:01 <arturo> reboot tools-worker-1003 to clean up sssd config and let nslcd/nscd start freshly [tools]
12:47 <arturo> reboot tools-worker-1002 to clean up sssd config and let nslcd/nscd start freshly [tools]
12:42 <arturo> reboot tools-worker-1001 to clean up sssd config and let nslcd/nscd start freshly [tools]
12:35 <arturo> enable puppet in tools-worker nodes [tools]
12:29 <arturo> switch hiera setting back to classic/sudoldap for tools-worker because T224651 (T224558) [tools]
12:25 <arturo> cordon/drain tools-worker-1002 because T224651 [tools]
12:23 <arturo> cordon/drain tools-worker-1001 because T224651 [tools]
12:22 <arturo> cordon/drain tools-worker-1029 because T224651 [tools]
12:20 <arturo> cordon/drain tools-worker-1003 because T224651 [tools]
11:59 <arturo> T224558 repool tools-worker-1003 (using sssd/sudo now!) [tools]
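The entries above show the full maintenance cycle on a worker: cordon/drain, reboot to reset the sssd/nslcd configuration, then uncordon once healthy. A sketch of that cycle, with commands echoed rather than run (the flags and the ssh step are assumptions):

```shell
NODE="tools-worker-1001"   # node name from the log

echo "kubectl cordon ${NODE}"                      # stop new pods scheduling here
echo "kubectl drain ${NODE} --ignore-daemonsets"   # evict the running pods
echo "ssh ${NODE} sudo reboot"                     # reboot to pick up the config change
echo "kubectl uncordon ${NODE}"                    # repool once the node is back
```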