2020-12-17
21:42 <bstorm> repeating the same procedure to increase the timeouts further T267966 [tools]
19:56 <bstorm> puppet enabled one at a time, letting things catch up. Timeouts are now adjusted to something closer to fsync values T267966 [tools]
19:44 <bstorm> set the etcd timeout seed value to 20 instead of the default 10 (profile::wmcs::kubeadm::etcd_latency_ms) T267966 [tools]
18:58 <bstorm> disabling puppet on k8s-etcd servers to alter the timeouts T267966 [tools]
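A minimal sketch of the per-node flow behind the four entries above, assuming WMF's standard puppet wrapper scripts; the hiera key and seed value are the ones quoted in the log:

    sudo disable-puppet "raising etcd timeouts T267966"
    # in Horizon, on the tools-k8s-etcd puppet prefix, set:
    #   profile::wmcs::kubeadm::etcd_latency_ms: 20
    sudo enable-puppet "raising etcd timeouts T267966"
    sudo run-puppet-agent   # re-enable and converge one node at a time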
14:23 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-4 (T267966) [tools]
14:21 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-5 (T267966) [tools]
14:18 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-6 (T267966) [tools]
14:17 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-7 (T267966) [tools]
14:15 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-8 (T267966) [tools]
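A hedged sketch of one cert regeneration, assuming the Puppet 5-era CLI; the FQDN and ssldir path are illustrative:

    # on the puppetmaster, revoke and remove the old cert:
    sudo puppet cert clean tools-k8s-etcd-8.tools.eqiad1.wikimedia.cloud
    # on the agent, drop the local cert and request a new one that
    # includes the dns_alt_names from the hiera change logged below:
    sudo rm -rf /var/lib/puppet/ssl
    sudo run-puppet-agent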
14:12 <arturo> updated kube-apiserver manifest with new etcd nodes (T267966) [tools]
13:56 <arturo> adding etcd dns_alt_names hiera keys to the puppet prefix https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/beb27b45a74765a64552f2d4f70a40b217b4f4e9%5E%21/ [tools]
13:12 <arturo> making k8s api server aware of the new etcd nodes via hiera update https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/3761c4c4dab1c3ed0ab0a1133d2ccf3df6c28baf%5E%21/ (T267966) [tools]
12:54 <arturo> joining new etcd nodes in the k8s etcd cluster (T267966) [tools]
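A hedged sketch of joining one new member, assuming the etcd v3 API; the endpoint, FQDNs, and TLS paths are illustrative:

    sudo ETCDCTL_API=3 etcdctl \
        --endpoints https://tools-k8s-etcd-4.tools.eqiad1.wikimedia.cloud:2379 \
        --cacert /etc/etcd/ssl/ca.pem \
        --cert /etc/etcd/ssl/client.pem \
        --key /etc/etcd/ssl/client-key.pem \
        member add tools-k8s-etcd-7 \
        --peer-urls=https://tools-k8s-etcd-7.tools.eqiad1.wikimedia.cloud:2380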
12:52 <arturo> adding more etcd nodes in the hiera key in tools-k8s-etcd puppet prefix https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/b4f60768078eccdabdfab4cd99c7c57076de51b2 [tools]
12:50 <arturo> dropping more unused hiera keys in the tools-k8s-etcd puppet prefix https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/e9e66a6787d9b91c08cf4742a27b90b3e6d05aac [tools]
12:49 <arturo> dropping unused hiera keys in the tools-k8s-etcd puppet prefix https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/2b4cb4a41756e602fb0996e7d0210e9102172424 [tools]
12:16 <arturo> created VM `tools-k8s-etcd-8` (T267966) [tools]
12:15 <arturo> created VM `tools-k8s-etcd-7` (T267966) [tools]
12:13 <arturo> created `tools-k8s-etcd` anti-affinity server group [tools]
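The server group maps to a single OpenStack CLI call (project credentials assumed); etcd VMs booted into the group are kept on separate hypervisors by the scheduler:

    openstack server group create --policy anti-affinity tools-k8s-etcd
    # boot members into the group via a scheduler hint (UUID from the command above):
    openstack server create --hint group=<group-uuid> ... tools-k8s-etcd-7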
2020-12-11
18:29 <bstorm> certificatesigningrequest.certificates.k8s.io "tool-production-error-tasks-metrics" deleted to stop maintain-kubeusers issues [tools]
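The fix as a reproducible pair of commands, assuming kubectl with cluster-admin on a control node:

    kubectl get csr                                          # spot the stuck request
    kubectl delete csr tool-production-error-tasks-metrics   # clear it so maintain-kubeusers can retry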
12:14 <dcaro> upgrading stable/main (clinic duty) [tools]
12:12 <dcaro> upgrading buster-wikimedia/main (clinic duty) [tools]
12:03 <dcaro> upgrading stable-updates/main, mainly ca-certificates (clinic duty) [tools]
12:01 <dcaro> upgrading stretch-backports/main, mainly libuv (clinic duty) [tools]
11:58 <dcaro> disabled all the repos blocking upgrades on tools-package-builder-02 (duplicated entries, repos for other releases, ...) (clinic duty) [tools]
11:35 <arturo> uncordoned tools-k8s-worker-71 and tools-k8s-worker-55; they weren't uncordoned yesterday for whatever reason (T263284) [tools]
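A minimal sketch of the uncordon, assuming kubectl on a control node:

    kubectl uncordon tools-k8s-worker-71
    kubectl uncordon tools-k8s-worker-55
    kubectl get nodes   # both should now report Ready with no SchedulingDisabled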
11:27 <dcaro> upgrading stretch-wikimedia/main (clinic duty) [tools]
11:20 <dcaro> upgrading stretch-wikimedia/thirdparty/mono-project-stretch (clinic duty) [tools]
11:08 <dcaro> upgrade stretch-wikimedia/component/php72 (minor upgrades) (clinic duty) [tools]
11:04 <dcaro> upgrade oldstable/main packages (clinic duty) [tools]
10:58 <dcaro> upgrade kubectl done (clinic duty) [tools]
10:53 <dcaro> upgrade kubectl (clinic duty) [tools]
10:16 <dcaro> upgrading oldstable/main packages (clinic duty) [tools]
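The clinic-duty upgrades above all follow one pattern; a hedged sketch, with the suite and package names taken from the entries as examples:

    sudo apt-get update
    apt list --upgradable                              # review what each suite wants to pull in
    sudo apt-get install -t stretch-backports libuv1   # take upgrades from one suite at a time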
2020-12-10
17:35 <bstorm> k8s-control nodes upgraded to 1.17.13 T263284 [tools]
17:16 <arturo> k8s control nodes were all upgraded to 1.17, now upgrading worker nodes (T263284) [tools]
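A hedged sketch of the kubeadm flow behind these entries, following the upstream upgrade procedure; the package version is taken from the log:

    # on the first control node:
    sudo apt-get install kubeadm=1.17.13-00
    sudo kubeadm upgrade plan
    sudo kubeadm upgrade apply v1.17.13
    # remaining control nodes use `kubeadm upgrade node`; then, per worker:
    kubectl drain tools-k8s-worker-55 --ignore-daemonsets
    sudo apt-get install kubelet=1.17.13-00   # on the worker itself
    kubectl uncordon tools-k8s-worker-55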
15:49 <dcaro> puppet upgraded to 5.5.10 on the hosts, ping me if you see anything weird (clinic duty) [tools]
15:41 <arturo> icinga-downtime toolschecker for 2h (T263284) [tools]
15:35 <dcaro> Puppet 5 on tools-sgebastion-09 ran well and without issues, upgrading the other sge nodes (clinic duty) [tools]
15:32 <dcaro> Upgrading puppet from 4 to 5 on tools-sgebastion-09 (clinic duty) [tools]
12:41 <arturo> set hiera `profile::wmcs::kubeadm::component: thirdparty/kubeadm-k8s-1-17` in project & tools-k8s-control prefix (T263284) [tools]
11:44 <arturo> disabled puppet in all k8s nodes in preparation for version upgrade (T263284) [tools]
09:58 <dcaro> successful tesseract upgrade on tools-sgewebgrid-lighttpd-0914, upgrading the rest of the nodes (clinic duty) [tools]
09:49 <dcaro> upgrading tesseract on tools-sgewebgrid-lighttpd-0914 (clinic duty) [tools]
2020-12-08
19:01 <bstorm> pushed updated calico node image (v3.14.0) to internal docker registry as well T269016 [tools]
2020-12-07
22:56 <bstorm> pushed updated local copies of the typha, calico-cni and calico-pod2daemon-flexvol images to the tools internal registry T269016 [tools]
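A hedged sketch of mirroring one of the images, assuming the docker CLI; the registry hostname is an assumption, not quoted from the log:

    docker pull calico/node:v3.14.0
    docker tag calico/node:v3.14.0 docker-registry.tools.wmflabs.org/calico/node:v3.14.0
    docker push docker-registry.tools.wmflabs.org/calico/node:v3.14.0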
2020-12-03
09:18 <arturo> restarted kubelet systemd service on tools-k8s-worker-38. Node was NotReady, complaining about 'use of closed network connection' [tools]
09:16 <arturo> restarted kubelet systemd service on tools-k8s-worker-59. Node was NotReady, complaining about 'use of closed network connection' [tools]
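The per-node remedy logged above, assuming a systemd-managed kubelet:

    sudo systemctl restart kubelet
    kubectl get nodes | grep NotReady   # should come back empty once the node recovers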
2020-11-28
23:35 <Krenair> At about 23:20 UTC, re-scheduled 4 continuous jobs off tools-sgeexec-0908 as it appears to be broken [tools]
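A hedged sketch of the rescheduling, assuming (Son of) Grid Engine tooling on a bastion; the job ID is a placeholder:

    qstat -u '*' -s r | grep tools-sgeexec-0908   # list jobs still running on the broken node
    sudo qmod -rj <jobid>                         # force-reschedule each continuous job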
04:35 <Krenair> Ran `sudo -i kubectl -n tool-mdbot delete cm maintain-kubeusers` on tools-k8s-control-1 for T268904, seems to have regenerated ~tools.mdbot/.kube/config [tools]