2020-12-17
12:15 <arturo> created VM `tools-k8s-etcd-7` (T267966) [tools]
12:13 <arturo> created `tools-k8s-etcd` anti-affinity server group [tools]
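The two entries above pair a new anti-affinity server group with a new etcd VM. A dry-run sketch of roughly what that looks like with the OpenStack CLI (the `run` helper only echoes each command instead of executing it; flavor and image names are hypothetical, not taken from the log):

```shell
# Dry-run helper: print the command instead of talking to the cloud.
run() { echo "+ $*"; }

# Create an anti-affinity server group so etcd VMs land on distinct hypervisors.
run openstack server group create --policy anti-affinity tools-k8s-etcd

# Boot a new etcd member inside that group (the group hint takes the group's UUID
# in a real invocation; flavor/image are placeholders).
run openstack server create --flavor g2.cores2.ram4.disk40 --image debian-10-buster \
    --hint group=tools-k8s-etcd tools-k8s-etcd-7
```

Anti-affinity matters here because etcd only tolerates minority failure: co-locating members on one hypervisor would let a single host outage take out quorum.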
2020-12-11
18:29 <bstorm> certificatesigningrequest.certificates.k8s.io "tool-production-error-tasks-metrics" deleted to stop maintain-kubeusers issues [tools]
12:14 <dcaro> upgrading stable/main (clinic duty) [tools]
12:12 <dcaro> upgrading buster-wikimedia/main (clinic duty) [tools]
12:03 <dcaro> upgrading stable-updates/main, mainly ca-certificates (clinic duty) [tools]
12:01 <dcaro> upgrading stretch-backports/main, mainly libuv (clinic duty) [tools]
11:58 <dcaro> disabled all the repos blocking upgrades on tools-package-builder-02 (duplicated, other releases...) [tools]
11:35 <arturo> uncordon tools-k8s-worker-71 and tools-k8s-worker-55, they weren't uncordoned yesterday for whatever reason (T263284) [tools]
11:27 <dcaro> upgrading stretch-wikimedia/main (clinic duty) [tools]
11:20 <dcaro> upgrading stretch-wikimedia/thirdparty/mono-project-stretch (clinic duty) [tools]
11:08 <dcaro> upgrade stretch-wikimedia/component/php72 (minor upgrades) (clinic duty) [tools]
11:04 <dcaro> upgrade oldstable/main packages (clinic duty) [tools]
10:58 <dcaro> upgrade kubectl done (clinic duty) [tools]
10:53 <dcaro> upgrade kubectl (clinic duty) [tools]
10:16 <dcaro> upgrading oldstable/main packages (clinic duty) [tools]
2020-12-10
17:35 <bstorm> k8s-control nodes upgraded to 1.17.13 T263284 [tools]
17:16 <arturo> k8s control nodes were all upgraded to 1.17, now upgrading worker nodes (T263284) [tools]
15:49 <dcaro> puppet upgraded to 5.5.10 on the hosts, ping me if you see anything weird (clinic duty) [tools]
15:41 <arturo> icinga-downtime toolschecker for 2h (T263284) [tools]
15:35 <dcaro> Puppet 5 on tools-sgebastion-09 ran well and without issues, upgrading the other sge nodes (clinic duty) [tools]
15:32 <dcaro> Upgrading puppet from 4 to 5 on tools-sgebastion-09 (clinic duty) [tools]
12:41 <arturo> set hiera `profile::wmcs::kubeadm::component: thirdparty/kubeadm-k8s-1-17` in project & tools-k8s-control prefix (T263284) [tools]
11:50 <arturo> disabled puppet in all k8s nodes in preparation for version upgrade (T263284) [tools]
11:44 <arturo> disabled puppet in all k8s nodes in preparation for version upgrade (T263284) [tools]
09:58 <dcaro> successful tesseract upgrade on tools-sgewebgrid-lighttpd-0914, upgrading the rest of nodes (clinic duty) [tools]
09:49 <dcaro> upgrading tesseract on tools-sgewebgrid-lighttpd-0914 (clinic duty) [tools]
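The T263284 entries above describe upgrading control nodes and then workers to Kubernetes 1.17. A dry-run sketch of the usual per-worker cycle (cordon-and-drain, upgrade packages from the kubeadm component repo, uncordon); the `run` helper echoes instead of executing, and the node name is just an example:

```shell
# Dry-run helper: print each command instead of running it.
run() { echo "+ $*"; }
node=tools-k8s-worker-55

# Drain implies cordon; evict pods so the node can be upgraded safely.
run kubectl drain "$node" --ignore-daemonsets --delete-local-data

# Pull 1.17.x kubeadm/kubelet/kubectl (from the thirdparty/kubeadm-k8s-1-17
# component configured via hiera above), then upgrade the node config.
run ssh "$node" sudo apt-get install -y kubeadm kubelet kubectl
run ssh "$node" sudo kubeadm upgrade node
run ssh "$node" sudo systemctl restart kubelet

# Put the node back into scheduling rotation.
run kubectl uncordon "$node"
```

The 11:35 entry on 2020-12-11 (workers 71 and 55 left cordoned) is the failure mode of forgetting the last step.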
2020-12-08
19:01 <bstorm> pushed updated calico node image (v3.14.0) to internal docker registry as well T269016 [tools]
2020-12-07
22:56 <bstorm> pushed updated local copies of the typha, calico-cni and calico-pod2daemon-flexvol images to the tools internal registry T269016 [tools]
2020-12-03
09:18 <arturo> restarted kubelet systemd service on tools-k8s-worker-38. Node was NotReady, complaining about 'use of closed network connection' [tools]
09:16 <arturo> restarted kubelet systemd service on tools-k8s-worker-59. Node was NotReady, complaining about 'use of closed network connection' [tools]
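The two restarts above follow the same remediation pattern for NotReady nodes. A dry-run sketch (the `run` helper echoes instead of executing; in practice the node list would come from `kubectl get nodes`, here it is hard-coded to the two nodes from the log):

```shell
# Dry-run helper: print each command instead of running it.
run() { echo "+ $*"; }

# For each NotReady node: restart kubelet, then wait for it to report Ready.
for node in tools-k8s-worker-38 tools-k8s-worker-59; do
    run ssh "$node" sudo systemctl restart kubelet
    run kubectl wait --for=condition=Ready "node/$node" --timeout=120s
done
```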
2020-11-28
23:35 <Krenair> Re-scheduled 4 continuous jobs from tools-sgeexec-0908 as it appears to be broken, at about 23:20 UTC [tools]
04:35 <Krenair> Ran `sudo -i kubectl -n tool-mdbot delete cm maintain-kubeusers` on tools-k8s-control-1 for T268904, seems to have regenerated ~tools.mdbot/.kube/config [tools]
2020-11-24
17:44 <arturo> rebased labs/private.git. 2 patches had merge conflicts [tools]
16:36 <bd808> clush -w @all -b 'sudo -i apt-get purge nscd' [tools]
16:31 <bd808> Ran `sudo -i apt-get purge nscd` on tools-sgeexec-0932 to try and fix apt state for puppet [tools]
2020-11-10
19:45 <andrewbogott> rebooting tools-sgeexec-0950; OOM [tools]
2020-11-02
13:35 <arturo> (typo: dcaro) [tools]
13:35 <arturo> added dcar as projectadmin & user (T266068) [tools]
2020-10-29
21:33 <legoktm> published docker-registry.tools.wmflabs.org/toolbeta-test image (T265681) [tools]
21:10 <bstorm> Added another ingress node to k8s cluster in case the load spikes are the problem T266506 [tools]
17:33 <bstorm> hard rebooting tools-sgeexec-0905 and tools-sgeexec-0916 to get the grid back to full capacity [tools]
04:03 <legoktm> published docker-registry.tools.wmflabs.org/toolforge-buster0-builder:latest image (T265686) [tools]
2020-10-28
23:42 <bstorm> dramatically elevated the egress cap on tools-k8s-ingress nodes that were affected by the NFS settings T266506 [tools]
22:10 <bstorm> launching tools-k8s-ingress-3 to try and get an NFS-free node T266506 [tools]
21:58 <bstorm> set 'mount_nfs: false' on the tools-k8s-ingress prefix T266506 [tools]
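The 21:58 entry sets a key on the `tools-k8s-ingress` instance prefix; the hiera data probably looks something like this (a sketch; the key name is taken from the log entry, everything else is assumed):

```yaml
# Hiera data on the tools-k8s-ingress prefix: skip the NFS mounts so that
# newly launched ingress nodes (e.g. tools-k8s-ingress-3 above) come up NFS-free.
mount_nfs: false
```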
2020-10-23
22:22 <legoktm> imported pack_0.14.2-1_amd64.deb into buster-tools (T266270) [tools]
2020-10-21
17:58 <legoktm> pushed toolforge-buster0-{build,run}:latest images to docker registry [tools]
2020-10-15
22:00 <bstorm> manually removing nscd from tools-sgebastion-08 and running puppet [tools]
18:23 <andrewbogott> uncordoning tools-k8s-worker-53, 54, 55, 59 [tools]
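The abbreviated node list in the entry above ("53, 54, 55, 59") expands to one `kubectl uncordon` per worker; a dry-run sketch (the `run` helper echoes instead of executing):

```shell
# Dry-run helper: print each command instead of running it.
run() { echo "+ $*"; }

# Uncordon each worker named in the log entry.
for n in 53 54 55 59; do
    run kubectl uncordon "tools-k8s-worker-$n"
done
```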