2019-10-29 §
10:49 <arturo> deleting VM tools-test-proxy-01, no longer in use [tools]
10:07 <arturo> deleting old jessie VMs tools-proxy-03/04 T235627 [tools]
2019-10-28 §
16:06 <arturo> delete VM instance `tools-test-proxy-01` and the puppet prefix `tools-test-proxy` [tools]
15:54 <arturo> tools-proxy-05 now has the 185.15.56.11 floating IP as the active proxy; the old one, 185.15.56.6, has been freed T235627 [tools]
15:54 <arturo> shutting down tools-proxy-03 T235627 [tools]
15:26 <bd808> Killed all processes owned by jem on tools-sgebastion-08 [tools]
15:16 <arturo> tools-proxy-05 now has the 185.15.56.5 floating IP as the active proxy T235627 [tools]
15:14 <arturo> refresh hiera to use tools-proxy-05 as active proxy T235627 [tools]
15:11 <bd808> Killed ircbot.php processes started by jem on tools-sgebastion-08 per request on IRC [tools]
14:58 <arturo> added `webproxy` security group to tools-proxy-05 and tools-proxy-06 (T235627) [tools]
14:57 <phamhi> drained tools-worker-1031.tools.eqiad.wmflabs to clean up disk space [tools]
14:45 <arturo> created VMs tools-proxy-05 and tools-proxy-06 (T235627) [tools]
14:43 <arturo> adding `role::wmcs::toolforge::proxy` to the `tools-proxy` puppet prefix (T235627) [tools]
14:42 <arturo> deleted `role::toollabs::proxy` from the `tools-proxy` puppet prefix (T235627) [tools]
14:34 <arturo> icinga downtime toolschecker for 1h (T235627) [tools]
12:24 <arturo> upload image `coredns` v1.3.1 (eb516548c180) to docker registry (T236249) [tools]
12:23 <arturo> upload image `kube-apiserver` v1.15.1 (68c3eb07bfc3) to docker registry (T236249) [tools]
12:22 <arturo> upload image `kube-controller-manager` v1.15.1 (d75082f1d121) to docker registry (T236249) [tools]
12:20 <arturo> upload image `kube-proxy` v1.15.1 (89a062da739d) to docker registry (T236249) [tools]
12:18 <arturo> upload image `kube-scheduler` v1.15.1 (b0b3c4c404da) to docker registry (T236249) [tools]
12:04 <arturo> upload image `calico/node` v3.8.0 (cd3efa20ff37) to docker registry (T236249) [tools]
12:03 <arturo> upload image `calico/pod2daemon-flexvol` v3.8.0 (f68c8f870a03) to docker registry (T236249) [tools]
12:01 <arturo> upload image `calico/cni` v3.8.0 (539ca36a4c13) to docker registry (T236249) [tools]
11:58 <arturo> upload image `calico/kube-controllers` v3.8.0 (df5ff96cd966) to docker registry (T236249) [tools]
11:47 <arturo> upload image `nginx-ingress-controller` v0.25.1 (0439eb3e11f1) to docker registry (T236249) [tools]
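The image uploads above (T236249) follow the usual pull/retag/push pattern for mirroring upstream images into a local Docker registry. A dry-run sketch that only prints the commands; the registry hostname and upstream image paths are assumptions, not taken from the log:

```shell
#!/bin/sh
# Dry-run sketch: print the pull/retag/push sequence for mirroring
# upstream images into a local registry. The registry hostname below
# is hypothetical; substitute the real Toolforge registry.
REGISTRY="docker-registry.example.org"

for IMG in k8s.gcr.io/kube-apiserver:v1.15.1 k8s.gcr.io/kube-proxy:v1.15.1; do
    NAME="${IMG#*/}"    # drop the upstream registry prefix
    echo "docker pull ${IMG}"
    echo "docker tag ${IMG} ${REGISTRY}/${NAME}"
    echo "docker push ${REGISTRY}/${NAME}"
done
```

After the push, clients pull the mirrored copy as e.g. `docker-registry.example.org/kube-apiserver:v1.15.1`.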
2019-10-24 §
16:32 <bstorm_> set the prod rsyslog config for kubernetes to false for Toolforge [tools]
2019-10-23 §
20:00 <phamhi> Rebuilding all jessie and stretch docker images to pick up toollabs-webservice 0.47 (T233347) [tools]
12:09 <phamhi> Deployed toollabs-webservice 0.47 to buster-tools and stretch-tools (T233347) [tools]
09:13 <arturo> 9 tools-sgeexec nodes and 6 other related VMs are down because their hypervisor is rebooting [tools]
09:03 <arturo> tools-sgebastion-08 is down because its hypervisor is rebooting [tools]
2019-10-22 §
16:56 <bstorm_> drained tools-worker-1025.tools.eqiad.wmflabs which was malfunctioning [tools]
09:25 <arturo> created the `tools.eqiad1.wikimedia.cloud.` DNS zone [tools]
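Creating a DNS zone like the one above is normally done through OpenStack Designate. A dry-run sketch that only prints the command; the contact email is a placeholder, not from the log:

```shell
#!/bin/sh
# Dry-run sketch: print an OpenStack Designate zone-creation command.
# The --email value is a hypothetical placeholder.
ZONE="tools.eqiad1.wikimedia.cloud."

echo "openstack zone create --email admin@example.org ${ZONE}"
```

Note the trailing dot: Designate expects a fully qualified zone name.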
2019-10-21 §
17:32 <phamhi> Rebuilding all jessie and stretch docker images to pick up toollabs-webservice 0.46 [tools]
2019-10-18 §
22:15 <bd808> Rescheduled continuous jobs away from tools-sgeexec-0904 because of high system load [tools]
22:09 <bd808> Cleared error state of webgrid-generic@tools-sgewebgrid-generic-0901, webgrid-lighttpd@tools-sgewebgrid-lighttpd-09{12,15,19,20,26} [tools]
21:29 <bd808> Rescheduled all grid engine webservice jobs (T217815) [tools]
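The grid engine maintenance in the entries above maps to standard gridengine commands: `qstat -explain E` shows why a queue instance is in the E(rror) state, `qmod -cq` clears it, and `qmod -rj` force-reschedules a job. A dry-run sketch that only prints the commands; the queue instance and job id are illustrative:

```shell
#!/bin/sh
# Illustrative queue instance (queue@host form, as in the log) and job id.
QUEUE="webgrid-lighttpd@tools-sgewebgrid-lighttpd-0912"
JOB=12345

# Dry-run: print the commands rather than run them against the grid.
echo "qstat -explain E -q ${QUEUE}"   # show the reason for the error state
echo "qmod -cq ${QUEUE}"              # clear the error state
echo "qmod -rj ${JOB}"                # reschedule the job onto another node
```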
2019-10-16 §
16:21 <phamhi> Deployed toollabs-webservice 0.46 to buster-tools and stretch-tools (T218461) [tools]
09:29 <arturo> Toolforge has recovered from the reboot of cloudvirt1029 [tools]
09:17 <arturo> due to the reboot of cloudvirt1029, 8 sgeexec nodes, 8 sgewebgrid-lighttpd nodes, and 3 tools-worker nodes are offline, as is the main Toolforge proxy (tools-proxy-03) [tools]
2019-10-15 §
17:10 <phamhi> restarted tools-worker-1035 because it was no longer responding [tools]
2019-10-14 §
09:26 <arturo> cleaned up updatetools from tools-sge-services nodes (T229261) [tools]
2019-10-11 §
19:52 <bstorm_> restarted docker on tools-docker-builder after phamhi noticed the daemon had a routing issue (blank iptables) [tools]
11:55 <arturo> create tools-test-proxy-01 VM for testing T235059 and a puppet prefix for it [tools]
10:53 <arturo> added kubernetes-node_1.4.6-7_amd64.deb to buster-tools and buster-toolsbeta (aptly) for T235059 [tools]
10:51 <arturo> added docker-engine_1.12.6-0~debian-jessie_amd64.deb to buster-tools and buster-toolsbeta (aptly) for T235059 [tools]
10:46 <arturo> added logster_0.0.10-2~jessie1_all.deb to buster-tools and buster-toolsbeta (aptly) for T235059 [tools]
2019-10-10 §
02:33 <bd808> Rebooting tools-sgewebgrid-lighttpd-0903. Instance hung. [tools]
2019-10-09 §
22:52 <jeh> removing test instances tools-sssd-sgeexec-test-[12] from SGE [tools]
15:32 <phamhi> drained tools-worker-1020/23/33/35/36/40 to rebalance the cluster [tools]
14:46 <phamhi> drained and cordoned tools-worker-1029 after its status was reset on reboot [tools]
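Draining and cordoning workers, as in the entries above, are plain kubectl operations. A dry-run sketch that only prints the typical sequence; the flags shown are the standard drain options for Kubernetes of this era and may need adjusting:

```shell
#!/bin/sh
# Dry-run sketch: print the usual cordon/drain/uncordon sequence for a
# malfunctioning worker. Node name taken from the log.
NODE="tools-worker-1029"

echo "kubectl cordon ${NODE}"    # mark the node unschedulable
echo "kubectl drain ${NODE} --ignore-daemonsets --delete-local-data"  # evict pods
echo "kubectl uncordon ${NODE}"  # allow scheduling again once healthy
```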