2019-10-28
14:43 <arturo> adding `role::wmcs::toolforge::proxy` to the `tools-proxy` puppet prefix (T235627) [tools]
14:42 <arturo> deleted `role::toollabs::proxy` from the `tools-proxy` puppet prefix (T235627) [tools]
14:34 <arturo> icinga downtime toolschecker for 1h (T235627) [tools]
12:24 <arturo> upload image `coredns` v1.3.1 (eb516548c180) to docker registry (T236249) [tools]
12:23 <arturo> upload image `kube-apiserver` v1.15.1 (68c3eb07bfc3) to docker registry (T236249) [tools]
12:22 <arturo> upload image `kube-controller-manager` v1.15.1 (d75082f1d121) to docker registry (T236249) [tools]
12:20 <arturo> upload image `kube-proxy` v1.15.1 (89a062da739d) to docker registry (T236249) [tools]
12:18 <arturo> upload image `kube-scheduler` v1.15.1 (b0b3c4c404da) to docker registry (T236249) [tools]
12:04 <arturo> upload image `calico/node` v3.8.0 (cd3efa20ff37) to docker registry (T236249) [tools]
12:03 <arturo> upload image `calico/pod2daemon-flexvol` v3.8.0 (f68c8f870a03) to docker registry (T236249) [tools]
12:01 <arturo> upload image `calico/cni` v3.8.0 (539ca36a4c13) to docker registry (T236249) [tools]
11:58 <arturo> upload image `calico/kube-controllers` v3.8.0 (df5ff96cd966) to docker registry (T236249) [tools]
11:47 <arturo> upload image `nginx-ingress-controller` v0.25.1 (0439eb3e11f1) to docker registry (T236249) [tools]
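The image uploads above follow the usual mirroring pattern: pull the upstream image, re-tag it for the internal registry, and push. A minimal sketch, assuming a placeholder registry hostname (`docker-registry.example.org` stands in for the real Toolforge registry):

```shell
# Hypothetical registry host; the real Toolforge registry name differs.
REGISTRY="docker-registry.example.org"

# Pull an upstream image, re-tag it for the internal registry, and push it.
mirror_image() {
    image="$1" version="$2"
    docker pull "${image}:${version}"
    docker tag "${image}:${version}" "${REGISTRY}/${image}:${version}"
    docker push "${REGISTRY}/${image}:${version}"
}

# Example: mirror_image kube-apiserver v1.15.1
```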
2019-10-24
16:32 <bstorm_> set the prod rsyslog config for kubernetes to false for Toolforge [tools]
2019-10-23
20:00 <phamhi> Rebuilding all jessie and stretch docker images to pick up toollabs-webservice 0.47 (T233347) [tools]
12:09 <phamhi> Deployed toollabs-webservice 0.47 to buster-tools and stretch-tools (T233347) [tools]
09:13 <arturo> 9 tools-sgeexec nodes and 6 other related VMs are down because hypervisor is rebooting [tools]
09:03 <arturo> tools-sgebastion-08 is down because hypervisor is rebooting [tools]
2019-10-22
16:56 <bstorm_> drained tools-worker-1025.tools.eqiad.wmflabs which was malfunctioning [tools]
09:25 <arturo> created the `tools.eqiad1.wikimedia.cloud.` DNS zone [tools]
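Zone creation like the entry above is typically done through the OpenStack Designate CLI; a hedged sketch (the contact email is a placeholder, and project/credential setup is assumed to be in the environment):

```shell
# Create a DNS zone in OpenStack Designate.
# The --email value is a placeholder assumption; Designate requires one.
create_zone() {
    openstack zone create --email root@example.org "$1"
}

# Example: create_zone tools.eqiad1.wikimedia.cloud.
```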
2019-10-21
17:32 <phamhi> Rebuilding all jessie and stretch docker images to pick up toollabs-webservice 0.46 [tools]
2019-10-18
22:15 <bd808> Rescheduled continuous jobs away from tools-sgeexec-0904 because of high system load [tools]
22:09 <bd808> Cleared error state of webgrid-generic@tools-sgewebgrid-generic-0901, webgrid-lighttpd@tools-sgewebgrid-lighttpd-09{12,15,19,20,26} [tools]
21:29 <bd808> Rescheduled all grid engine webservice jobs (T217815) [tools]
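Clearing queue error states and rescheduling jobs, as in the entries above, maps onto a couple of gridengine `qmod` invocations; a sketch wrapped in helper functions (queue instance names in the examples are taken from the log; exact option spellings vary slightly across gridengine versions):

```shell
# Clear the error ('E') state on a gridengine queue instance.
clear_queue_error() {
    qmod -cq "$1"
}

# Reschedule the jobs running in a queue instance so they restart elsewhere.
reschedule_queue() {
    qmod -rq "$1"
}

# Examples from the log entries above:
# clear_queue_error 'webgrid-generic@tools-sgewebgrid-generic-0901'
# reschedule_queue 'webgrid-lighttpd@tools-sgewebgrid-lighttpd-0912'
```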
2019-10-16
16:21 <phamhi> Deployed toollabs-webservice 0.46 to buster-tools and stretch-tools (T218461) [tools]
09:29 <arturo> toolforge is recovered from the reboot of cloudvirt1029 [tools]
09:17 <arturo> due to the reboot of cloudvirt1029, 8 sgeexec nodes, 8 sgewebgrid-lighttpd nodes, 3 tools-worker nodes, and the main Toolforge proxy (tools-proxy-03) are offline [tools]
2019-10-15
17:10 <phamhi> restart tools-worker-1035 because it is no longer responding [tools]
2019-10-14
09:26 <arturo> cleaned up updatetools from tools-sge-services nodes (T229261) [tools]
2019-10-11
19:52 <bstorm_> restarted docker on tools-docker-builder after phamhi noticed the daemon had a routing issue (blank iptables) [tools]
11:55 <arturo> create tools-test-proxy-01 VM for testing T235059 and a puppet prefix for it [tools]
10:53 <arturo> added kubernetes-node_1.4.6-7_amd64.deb to buster-tools and buster-toolsbeta (aptly) for T235059 [tools]
10:51 <arturo> added docker-engine_1.12.6-0~debian-jessie_amd64.deb to buster-tools and buster-toolsbeta (aptly) for T235059 [tools]
10:46 <arturo> added logster_0.0.10-2~jessie1_all.deb to buster-tools and buster-toolsbeta (aptly) for T235059 [tools]
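The aptly entries above add a .deb to a repository and republish it; a hedged sketch (repo names come from the log, but passing the repo name as the distribution to `aptly publish update` is an assumption about how these repos are published locally):

```shell
# Add a package file to an aptly repo, then republish the repo.
# Assumes the published distribution shares the repo's name.
add_deb() {
    repo="$1" deb="$2"
    aptly repo add "$repo" "$deb"
    aptly publish update "$repo"
}

# Example: add_deb buster-tools logster_0.0.10-2~jessie1_all.deb
```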
2019-10-10
02:33 <bd808> Rebooting tools-sgewebgrid-lighttpd-0903. Instance hung. [tools]
2019-10-09
22:52 <jeh> removing test instances tools-sssd-sgeexec-test-[12] from SGE [tools]
15:32 <phamhi> drained tools-worker-1020/23/33/35/36/40 to rebalance the cluster [tools]
14:46 <phamhi> drained and cordoned tools-worker-1029 after status reset on reboot [tools]
12:37 <arturo> drain tools-worker-1038 to rebalance load in the k8s cluster [tools]
12:35 <arturo> uncordon tools-worker-1029 (was disabled for unknown reasons) [tools]
12:33 <arturo> drain tools-worker-1010 to rebalance load [tools]
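The drain/cordon/uncordon entries here and below correspond to standard kubectl node operations; a sketch (flags match Kubernetes of that era, where `--delete-local-data` was the spelling later renamed to `--delete-emptydir-data`):

```shell
# Evict pods from a node and mark it unschedulable.
drain_node() {
    kubectl drain "$1" --ignore-daemonsets --delete-local-data
}

# Mark a drained node schedulable again.
uncordon_node() {
    kubectl uncordon "$1"
}

# Example: drain_node tools-worker-1038.tools.eqiad.wmflabs
```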
10:33 <arturo> several sgewebgrid-lighttpd nodes (9) not available because cloudvirt1013 is rebooting [tools]
10:21 <arturo> several worker nodes (7) not available because cloudvirt1012 is rebooting [tools]
10:08 <arturo> several worker nodes (6) not available because cloudvirt1009 is rebooting [tools]
09:59 <arturo> several worker nodes (5) not available because cloudvirt1008 is rebooting [tools]
2019-10-08
19:39 <bstorm_> drained tools-worker-1007/8 to rebalance the cluster [tools]
19:34 <bstorm_> drained tools-worker-1009 and then 1014 for rebalancing [tools]
19:27 <bstorm_> drained tools-worker-1005 for rebalancing (and put these back in service as I went) [tools]
19:24 <bstorm_> drained tools-worker-1003 and 1009 for rebalancing [tools]
15:41 <arturo> deleted VM instance tools-sgebastion-0test. No longer in use. [tools]