2019-11-01
21:00 <Krenair> Removed tools-checker.wmflabs.org A record to 208.80.155.229 as that target IP is in the old pre-neutron range that is no longer routed [tools]
20:57 <Krenair> Removed trusty.tools.wmflabs.org CNAME to login-trusty.tools.wmflabs.org as that target record does not exist, presumably deleted ages ago [tools]
20:56 <Krenair> Removed tools-trusty.wmflabs.org CNAME to login-trusty.tools.wmflabs.org as that target record does not exist, presumably deleted ages ago [tools]
20:38 <Krenair> Updated A record for tools-static.wmflabs.org to point towards project-proxy T236952 [tools]
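The record cleanup above maps onto Designate's recordset commands. A minimal sketch, assuming the OpenStack CLI with project-admin credentials, that the records live in the `wmflabs.org.` zone, and with the project-proxy address as a placeholder:

```
# list recordsets to find the stale entries
openstack recordset list wmflabs.org.
# drop the stale A record and the dangling CNAME (by name or ID)
openstack recordset delete wmflabs.org. tools-checker.wmflabs.org.
openstack recordset delete wmflabs.org. tools-trusty.wmflabs.org.
# repoint tools-static at the project-proxy (placeholder address)
openstack recordset set wmflabs.org. tools-static.wmflabs.org. --record <project-proxy-IP>
```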
2019-10-31
18:46 <andrewbogott> deleted and/or truncated a bunch of logfiles on tools-worker-1001. Runaway logfiles had filled up the drive, which prevented puppet from running; if puppet had been able to run, it would have prevented the runaway logfiles. [tools]
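For context, a typical way to find and stop runaway logs like the ones on tools-worker-1001; the file path is illustrative:

```
# find the largest files under /var/log
sudo du -ah /var/log | sort -rh | head -n 20
# truncate in place rather than deleting, so the writing process keeps a
# valid file descriptor and the disk space is actually reclaimed
sudo truncate -s 0 /var/log/runaway.log
# with the disk freed, puppet can run again
sudo puppet agent --test
```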
13:59 <arturo> update puppet prefix `tools-k8s-etcd-` to use the `role::wmcs::toolforge::k8s::etcd` T236826 [tools]
13:41 <arturo> disabling puppet in tools-k8s-etcd- nodes to test https://gerrit.wikimedia.org/r/c/operations/puppet/+/546995 [tools]
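The disable/re-enable cycle here is the stock puppet agent workflow; a sketch, assuming shell access to each tools-k8s-etcd-* node:

```
# pause puppet with a reason so other admins know why
sudo puppet agent --disable 'testing https://gerrit.wikimedia.org/r/546995'
# ... merge or verify the change, then resume and run it
sudo puppet agent --enable
sudo puppet agent --test
```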
10:15 <arturo> SSL cert replacement for tools-docker-registry and tools-k8s-master apparently went fine (T236962) [tools]
10:02 <arturo> icinga downtime toolschecker for 1h for replacing SSL certs in tools-docker-registry and tools-k8s-master (T236962) [tools]
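A quick way to confirm a cert replacement like this took effect (host name as logged; the check itself is generic):

```
# show the served certificate's subject and validity window
echo | openssl s_client -connect tools-docker-registry.tools.wmflabs.org:443 \
    -servername tools-docker-registry.tools.wmflabs.org 2>/dev/null \
  | openssl x509 -noout -subject -dates
```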
2019-10-30
13:53 <arturo> replacing SSL cert in tools-proxy-x server apparently OK (merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/545679) T235252 [tools]
13:48 <arturo> replacing SSL cert in tools-proxy-x server (live-hacking https://gerrit.wikimedia.org/r/c/operations/puppet/+/545679 first for testing) T235252 [tools]
13:39 <arturo> icinga downtime toolschecker for 1h for replacing SSL cert T235252 [tools]
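Live-hacking an unmerged Gerrit change usually means cherry-picking it onto the local puppet checkout; a sketch, with the checkout path and patchset number as assumptions:

```
cd /var/lib/git/operations/puppet   # assumed puppetmaster checkout path
# fetch and apply patchset 1 of change 545679 (patchset number assumed)
sudo git fetch https://gerrit.wikimedia.org/r/operations/puppet refs/changes/79/545679/1
sudo git cherry-pick FETCH_HEAD
# after the change merges upstream, drop the local commit
sudo git reset --hard origin/production
```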
2019-10-29
10:49 <arturo> deleting VM tools-test-proxy-01, no longer in use [tools]
10:07 <arturo> deleting old jessie VMs tools-proxy-03/04 T235627 [tools]
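The deletions are plain Nova operations; a sketch, assuming project-admin credentials:

```
# double-check which instances match before deleting
openstack server list --name 'tools-proxy-0[34]'
openstack server delete tools-proxy-03 tools-proxy-04
```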
2019-10-28
16:06 <arturo> delete VM instance `tools-test-proxy-01` and the puppet prefix `tools-test-proxy` [tools]
15:54 <arturo> tools-proxy-05 now has the 185.15.56.11 floating IP as the active proxy; the old one, 185.15.56.6, has been freed T235627 [tools]
15:54 <arturo> shutting down tools-proxy-03 T235627 [tools]
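Moving the floating IP is two Nova calls plus a release; which instance the address was detached from is an assumption here:

```
# detach from the old proxy (source instance assumed) and attach to the new one
openstack server remove floating ip tools-proxy-03 185.15.56.11
openstack server add floating ip tools-proxy-05 185.15.56.11
# release the address that is no longer needed
openstack floating ip delete 185.15.56.6
```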
15:26 <bd808> Killed all processes owned by jem on tools-sgebastion-08 [tools]
15:16 <arturo> tools-proxy-05 now has the 185.15.56.5 floating IP as active proxy T235627 [tools]
15:14 <arturo> refresh hiera to use tools-proxy-05 as active proxy T235627 [tools]
15:11 <bd808> Killed ircbot.php processes started by jem on tools-sgebastion-08 per request on irc [tools]
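Both kill operations above are standard pkill usage; for example:

```
# inspect what the user is running first
pgrep -u jem -a
# kill only the ircbot.php processes (the 15:11 entry) ...
sudo pkill -u jem -f ircbot.php
# ... or everything owned by the user (the 15:26 entry)
sudo pkill -u jem
```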
14:58 <arturo> added `webproxy` security group to tools-proxy-05 and tools-proxy-06 (T235627) [tools]
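The proxy build-out (VM creation logged at 14:45 below, security group here) corresponds to calls like the following; the flavor, image, and network names are assumptions:

```
# create a new proxy (flavor/image/network are placeholders)
openstack server create --flavor m1.medium --image debian-10.0-buster \
    --network lan-flat-cloudinstances2b tools-proxy-05
# let web traffic reach the new pair
openstack server add security group tools-proxy-05 webproxy
openstack server add security group tools-proxy-06 webproxy
```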
14:57 <phamhi> drained tools-worker-1031.tools.eqiad.wmflabs to clean up disk space [tools]
14:45 <arturo> created VMs tools-proxy-05 and tools-proxy-06 (T235627) [tools]
14:43 <arturo> adding `role::wmcs::toolforge::proxy` to the `tools-proxy` puppet prefix (T235627) [tools]
14:42 <arturo> deleted `role::toollabs::proxy` from the `tools-proxy` puppet profile (T235627) [tools]
14:34 <arturo> icinga downtime toolschecker for 1h (T235627) [tools]
12:24 <arturo> upload image `coredns` v1.3.1 (eb516548c180) to docker registry (T236249) [tools]
12:23 <arturo> upload image `kube-apiserver` v1.15.1 (68c3eb07bfc3) to docker registry (T236249) [tools]
12:22 <arturo> upload image `kube-controller-manager` v1.15.1 (d75082f1d121) to docker registry (T236249) [tools]
12:20 <arturo> upload image `kube-proxy` v1.15.1 (89a062da739d) to docker registry (T236249) [tools]
12:18 <arturo> upload image `kube-scheduler` v1.15.1 (b0b3c4c404da) to docker registry (T236249) [tools]
12:04 <arturo> upload image `calico/node` v3.8.0 (cd3efa20ff37) to docker registry (T236249) [tools]
12:03 <arturo> upload image `calico/pod2daemon-flexvol` v3.8.0 (f68c8f870a03) to docker registry (T236249) [tools]
12:01 <arturo> upload image `calico/cni` v3.8.0 (539ca36a4c13) to docker registry (T236249) [tools]
11:58 <arturo> upload image `calico/kube-controllers` v3.8.0 (df5ff96cd966) to docker registry (T236249) [tools]
11:47 <arturo> upload image `nginx-ingress-controller` v0.25.1 (0439eb3e11f1) to docker registry (T236249) [tools]
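Each of the uploads above is the usual pull/tag/push cycle against the project registry; one image as an example, with the upstream source registry as an assumption:

```
# pull the upstream image (source registry assumed)
docker pull k8s.gcr.io/kube-apiserver:v1.15.1
# tag the image ID from the log for the Toolforge registry and push it
docker tag 68c3eb07bfc3 docker-registry.tools.wmflabs.org/kube-apiserver:v1.15.1
docker push docker-registry.tools.wmflabs.org/kube-apiserver:v1.15.1
```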
2019-10-24
16:32 <bstorm_> set the prod rsyslog config for kubernetes to false for Toolforge [tools]
2019-10-23
20:00 <phamhi> Rebuilding all jessie and stretch docker images to pick up toollabs-webservice 0.47 (T233347) [tools]
12:09 <phamhi> Deployed toollabs-webservice 0.47 to buster-tools and stretch-tools (T233347) [tools]
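A quick check that a bastion actually picked up the new package version:

```
# confirm the installed version of the package
dpkg -s toollabs-webservice | grep '^Version'
# or see which candidate apt would install
apt-cache policy toollabs-webservice
```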
09:13 <arturo> 9 tools-sgeexec nodes and 6 other related VMs are down because their hypervisor is rebooting [tools]
09:03 <arturo> tools-sgebastion-08 is down because its hypervisor is rebooting [tools]
2019-10-22
16:56 <bstorm_> drained tools-worker-1025.tools.eqiad.wmflabs which was malfunctioning [tools]
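Draining a misbehaving worker like this is a standard kubectl operation; a sketch:

```
# evict pods so they reschedule on healthy workers
kubectl drain tools-worker-1025.tools.eqiad.wmflabs --ignore-daemonsets --delete-local-data
# once the node is fixed, allow scheduling again
kubectl uncordon tools-worker-1025.tools.eqiad.wmflabs
```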
09:25 <arturo> created the `tools.eqiad1.wikimedia.cloud.` DNS zone [tools]
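Creating the zone is one Designate call; the contact email is a placeholder:

```
openstack zone create --email <admin-email> tools.eqiad1.wikimedia.cloud.
# confirm the SOA/NS recordsets were created
openstack recordset list tools.eqiad1.wikimedia.cloud.
```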
2019-10-21
17:32 <phamhi> Rebuilding all jessie and stretch docker images to pick up toollabs-webservice 0.46 [tools]
2019-10-18
22:15 <bd808> Rescheduled continuous jobs away from tools-sgeexec-0904 because of high system load [tools]
22:09 <bd808> Cleared error state of webgrid-generic@tools-sgewebgrid-generic-0901, webgrid-lighttpd@tools-sgewebgrid-lighttpd-09{12,15,19,20,26} [tools]
21:29 <bd808> Rescheduled all grid engine webservice jobs (T217815) [tools]
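The grid engine maintenance above maps onto qmod; the queue instance names follow the 22:09 entry, and the queue on tools-sgeexec-0904 is an assumption:

```
# clear the error state on the web grid queue instances
qmod -cq webgrid-generic@tools-sgewebgrid-generic-0901
qmod -cq webgrid-lighttpd@tools-sgewebgrid-lighttpd-0912
# reschedule jobs away from the overloaded node (queue name assumed)
qmod -rq 'continuous@tools-sgeexec-0904'
```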
2019-10-16
16:21 <phamhi> Deployed toollabs-webservice 0.46 to buster-tools and stretch-tools (T218461) [tools]
09:29 <arturo> Toolforge has recovered from the reboot of cloudvirt1029 [tools]