2020-12-10
17:16 <arturo> k8s control nodes were all upgraded to 1.17, now upgrading worker nodes (T263284) [tools]
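The control-plane upgrade above was driven by kubeadm; a minimal sketch of the usual per-node sequence, assuming the thirdparty/kubeadm-k8s-1-17 apt component (set at 12:41 below) is already available and using a placeholder for the exact 1.17 patch release:

    sudo apt-get update && sudo apt-get install -y kubeadm=1.17.*   # pull the 1.17 kubeadm package
    sudo kubeadm upgrade plan                                       # check upgrade targets and preflight state
    sudo kubeadm upgrade apply v1.17.x                              # first control node only; x = chosen patch release
    sudo kubeadm upgrade node                                       # remaining control and worker nodes
    sudo apt-get install -y kubelet=1.17.* kubectl=1.17.* && sudo systemctl restart kubelet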
15:49 <dcaro> puppet upgraded to 5.5.10 on the hosts, ping me if you see anything weird (clinic duty) [tools]
15:41 <arturo> icinga-downtime toolschecker for 2h (T263284) [tools]
15:35 <dcaro> Puppet 5 on tools-sgebastion-09 ran well and without issues, upgrading the other sge nodes (clinic duty) [tools]
15:32 <dcaro> Upgrading puppet from 4 to 5 on tools-sgebastion-09 (clinic duty) [tools]
12:41 <arturo> set hiera `profile::wmcs::kubeadm::component: thirdparty/kubeadm-k8s-1-17` in project & tools-k8s-control prefix (T263284) [tools]
11:50 <arturo> disabled puppet in all k8s nodes in preparation for version upgrade (T263284) [tools]
11:44 <arturo> disabled puppet in all k8s nodes in preparation for version upgrade (T263284) [tools]
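A sketch of how puppet can be disabled across the whole k8s fleet from a clush host (the @k8s node-group alias is an assumption; clush itself is used the same way in the 2020-11-24 entries further down):

    # disable the agent with a reason so other admins see why
    clush -w @k8s -b 'sudo puppet agent --disable "k8s 1.17 upgrade (T263284)"'
    # after the upgrade, re-enable and run a catalog to converge
    clush -w @k8s -b 'sudo puppet agent --enable && sudo puppet agent -t'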
09:58 <dcaro> successful tesseract upgrade on tools-sgewebgrid-lighttpd-0914, upgrading the rest of the nodes (clinic duty) [tools]
09:49 <dcaro> upgrading tesseract on tools-sgewebgrid-lighttpd-0914 (clinic duty) [tools]
2020-12-08
19:01 <bstorm> pushed updated calico node image (v3.14.0) to internal docker registry as well T269016 [tools]
2020-12-07
22:56 <bstorm> pushed updated local copies of the typha, calico-cni and calico-pod2daemon-flexvol images to the tools internal registry T269016 [tools]
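A minimal sketch of mirroring an upstream image into the internal registry, as in this entry and the 2020-12-08 entry above; the upstream tag and the internal image path are assumptions, while the registry name appears elsewhere in this log:

    docker pull calico/typha:v3.14.0
    docker tag calico/typha:v3.14.0 docker-registry.tools.wmflabs.org/typha:v3.14.0
    docker push docker-registry.tools.wmflabs.org/typha:v3.14.0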
2020-12-03
09:18 <arturo> restarted kubelet systemd service on tools-k8s-worker-38. Node was NotReady, complaining about 'use of closed network connection' [tools]
09:16 <arturo> restarted kubelet systemd service on tools-k8s-worker-59. Node was NotReady, complaining about 'use of closed network connection' [tools]
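A sketch of the diagnose-and-restart sequence behind these two entries, using standard kubectl and systemd commands (the exact journal query is an assumption):

    kubectl get nodes | grep NotReady                                  # from a control node
    # on the affected worker:
    sudo journalctl -u kubelet --since "1 hour ago" | grep 'use of closed network connection'
    sudo systemctl restart kubelet
    kubectl get node tools-k8s-worker-59                               # confirm it returns to Ready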
2020-11-28
23:35 <Krenair> Re-scheduled 4 continuous jobs from tools-sgeexec-0908 as it appears to be broken, at about 23:20 UTC [tools]
04:35 <Krenair> Ran `sudo -i kubectl -n tool-mdbot delete cm maintain-kubeusers` on tools-k8s-control-1 for T268904, seems to have regenerated ~tools.mdbot/.kube/config [tools]
2020-11-24
17:44 <arturo> rebased labs/private.git. 2 patches had merge conflicts [tools]
16:36 <bd808> clush -w @all -b 'sudo -i apt-get purge nscd' [tools]
16:31 <bd808> Ran `sudo -i apt-get purge nscd` on tools-sgeexec-0932 to try and fix apt state for puppet [tools]
2020-11-10
19:45 <andrewbogott> rebooting tools-sgeexec-0950; OOM [tools]
2020-11-02
13:35 <arturo> (typo: dcaro) [tools]
13:35 <arturo> added dcar as projectadmin & user (T266068) [tools]
2020-10-29
21:33 <legoktm> published docker-registry.tools.wmflabs.org/toolbeta-test image (T265681) [tools]
21:10 <bstorm> Added another ingress node to k8s cluster in case the load spikes are the problem T266506 [tools]
17:33 <bstorm> hard rebooting tools-sgeexec-0905 and tools-sgeexec-0916 to get the grid back to full capacity [tools]
04:03 <legoktm> published docker-registry.tools.wmflabs.org/toolforge-buster0-builder:latest image (T265686) [tools]
2020-10-28
23:42 <bstorm> dramatically elevated the egress cap on tools-k8s-ingress nodes that were affected by the NFS settings T266506 [tools]
22:10 <bstorm> launching tools-k8s-ingress-3 to try and get an NFS-free node T266506 [tools]
21:58 <bstorm> set 'mount_nfs: false' on the tools-k8s-ingress prefix T266506 [tools]
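If the egress cap is applied as traffic shaping on the instance itself (an assumption; the limit is managed through puppet/hiera), it can be inspected read-only with tc, e.g.:

    tc qdisc show dev eth0      # shaping discipline currently applied (interface name assumed)
    tc class show dev eth0      # per-class rate/ceil values, i.e. the effective cap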
2020-10-23
22:22 <legoktm> imported pack_0.14.2-1_amd64.deb into buster-tools (T266270) [tools]
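A sketch of what importing a .deb into the buster-tools repository might look like, assuming the repo is managed with aptly (the repo name doubling as the published distribution is also an assumption):

    aptly repo add buster-tools pack_0.14.2-1_amd64.deb   # add the package to the local repo
    aptly publish update buster-tools                     # re-publish so apt clients pick it up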
2020-10-21
17:58 <legoktm> pushed toolforge-buster0-{build,run}:latest images to docker registry [tools]
2020-10-15
22:00 <bstorm> manually removing nscd from tools-sgebastion-08 and running puppet [tools]
18:23 <andrewbogott> uncordoning tools-k8s-worker-53, 54, 55, 59 [tools]
17:28 <andrewbogott> depooling tools-k8s-worker-53, 54, 55, 59 [tools]
17:27 <andrewbogott> uncordoning tools-k8s-worker-35, 37, 45 [tools]
16:44 <andrewbogott> depooling tools-k8s-worker-35, 37, 45 [tools]
2020-10-14
21:00 <andrewbogott> repooling tools-sgewebgrid-generic-0901 and tools-sgewebgrid-lighttpd-0915 [tools]
20:37 <andrewbogott> depooling tools-sgewebgrid-generic-0901 and tools-sgewebgrid-lighttpd-0915 [tools]
20:35 <andrewbogott> repooling tools-sgewebgrid-lighttpd-0911, 12, 13, 16 [tools]
20:31 <bd808> Deployed toollabs-webservice v0.74 [tools]
19:53 <andrewbogott> depooling tools-sgewebgrid-lighttpd-0911, 12, 13, 16 and moving to Ceph [tools]
19:47 <andrewbogott> repooling tools-sgeexec-0932, 33, 34 and moving to Ceph [tools]
19:07 <andrewbogott> depooling tools-sgeexec-0932, 33, 34 and moving to Ceph [tools]
19:06 <andrewbogott> repooling tools-sgeexec-0935, 36, 38, 40 and moving to Ceph [tools]
16:56 <andrewbogott> depooling tools-sgeexec-0935, 36, 38, 40 and moving to Ceph [tools]
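Toolforge normally drives grid (de)pooling through its own wrapper scripts, but in plain gridengine terms the depool/repool entries above amount to disabling and re-enabling the queue instances on a host; a sketch using one of the hosts named above:

    sudo qmod -d '*@tools-sgeexec-0932'   # disable all queue instances on the host (depool)
    qstat -f -q '*@tools-sgeexec-0932'    # watch running jobs drain off the node
    sudo qmod -e '*@tools-sgeexec-0932'   # re-enable once the Ceph migration is done (repool)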
2020-10-10
17:07 <bstorm> cleared errors on tools-sgeexec-0912.tools.eqiad.wmflabs to get the queue moving again [tools]
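A sketch of what clearing queue errors on an exec node usually involves, using standard gridengine commands:

    qstat -f -explain E -q '*@tools-sgeexec-0912.tools.eqiad.wmflabs'   # show why queue instances are in error state
    sudo qmod -cq '*@tools-sgeexec-0912.tools.eqiad.wmflabs'            # clear the error state so jobs schedule again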
2020-10-08
17:07 <bstorm> rebuilding docker images with locales-all T263339 [tools]
2020-10-06
19:04 <andrewbogott> uncordoned tools-k8s-worker-38 [tools]
18:51 <andrewbogott> uncordoned tools-k8s-worker-52 [tools]
18:40 <andrewbogott> draining and cordoning tools-k8s-worker-52 and tools-k8s-worker-38 for ceph migration [tools]
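A sketch of the drain/uncordon cycle around a Ceph migration for one of the workers above, using standard kubectl (flag names match kubectl of that era; the storage migration itself happens outside Kubernetes):

    kubectl drain tools-k8s-worker-52 --ignore-daemonsets --delete-local-data   # cordon the node and evict pods
    # ... migrate the VM's storage to Ceph and reboot it ...
    kubectl uncordon tools-k8s-worker-52                                        # let the scheduler place pods on it again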