2021-02-22
18:56 <bstorm> deleted job 1962508 from the grid to clear it up T275301 [tools]
16:58 <bstorm> cleared error state on several grid queues [tools]
2021-02-19
12:31 <arturo> deploying new version of toolforge ingress admission controller [tools]
2021-02-17
21:26 <bstorm> deleted tools-puppetdb-01 since it is unused at this time (and undersized anyway) [tools]
2021-02-04
16:27 <bstorm> rebooting tools-package-builder-02 [tools]
2021-01-26
16:27 <bd808> Hard reboot of tools-sgeexec-0906 via Horizon for T272978 [tools]
2021-01-22
09:59 <dcaro> added the record redis.svc.tools.eqiad1.wikimedia.cloud pointing to tools-redis1003 (T272679) [tools]
2021-01-21
23:58 <bstorm> deployed new maintain-kubeusers to tools T271847 [tools]
2021-01-19
22:57 <bstorm> truncated 75GB error log /data/project/robokobot/virgule.err T272247 [tools]
22:48 <bstorm> truncated 100GB error log /data/project/magnus-toolserver/error.log T272247 [tools]
22:43 <bstorm> truncated 107GB log '/data/project/meetbot/logs/messages.log' T272247 [tools]
22:34 <bstorm> truncating 194 GB error log '/data/project/mix-n-match/mnm-microsync.err' T272247 [tools]
16:37 <bd808> Added Jhernandez to root sudoers group [tools]
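The 75–194 GB log truncations above can be done with `truncate`, which frees the space immediately without deleting the file a running tool still holds open. A minimal sketch, assuming GNU coreutils; the path here is hypothetical, not one of the tool paths from the log:

```shell
# LOGFILE is a hypothetical path for illustration only.
LOGFILE=/tmp/huge-example.err
head -c 1048576 /dev/zero > "$LOGFILE"   # simulate a 1 MiB "huge" log
truncate -s 0 "$LOGFILE"                 # zero its length, freeing the space
# Note: a writer holding the file open without O_APPEND keeps writing at
# its old offset, leaving a sparse hole; appending writers (the common
# case for .err/.log files) continue cleanly from offset 0.
```

Truncating rather than deleting matters here because a deleted-but-open file keeps consuming NFS space until the writing process exits.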
2021-01-14
20:56 <bstorm> setting bastions to have mostly-uncapped egress network and 40MBps nfs_read for better shared use [tools]
20:43 <bstorm> running tc-setup across the k8s workers [tools]
20:40 <bstorm> running tc-setup across the grid fleet [tools]
17:58 <bstorm> hard rebooting tools-sgecron-01 following network issues during upgrade to stein T261134 [tools]
2021-01-13
10:02 <arturo> delete floating IP allocation 185.15.56.245 (T271867) [tools]
2021-01-12
18:16 <bstorm> deleted wedged CSR tool-adhs-wde to get maintain-kubeusers working again T271842 [tools]
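Clearing a wedged CertificateSigningRequest like the one above is a standard `kubectl` operation; once the stuck CSR is gone, maintain-kubeusers can submit a fresh one. A sketch, assuming cluster-admin credentials (the CSR name is taken from the log entry):

```shell
# List CertificateSigningRequests and find the stuck one
kubectl get csr
# Inspect its state (e.g. stuck Pending with no signer response)
kubectl describe csr tool-adhs-wde
# Delete it so maintain-kubeusers can re-issue a new request
kubectl delete csr tool-adhs-wde
```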
2021-01-05
18:49 <bstorm> changing the limits on k8s etcd nodes again, so disabling puppet on them T267966 [tools]
2021-01-04
18:21 <bstorm> ran 'sudo systemctl stop getty@ttyS1.service && sudo systemctl disable getty@ttyS1.service' on tools-k8s-etcd-5 I have no idea why that keeps coming back. [tools]
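When a unit keeps "coming back" (re-enabled by puppet, a package postinst, or a preset), masking is stronger than disabling: `disable` only removes the [Install] symlinks, while `mask` links the unit to /dev/null so it cannot be started at all. A sketch of the standard systemd approach, not necessarily what was done here:

```shell
sudo systemctl stop getty@ttyS1.service
sudo systemctl mask getty@ttyS1.service     # symlinks the unit to /dev/null
systemctl is-enabled getty@ttyS1.service    # reports "masked"
```

Whatever is re-enabling the unit will then fail loudly instead of silently resurrecting it, which also helps find the culprit.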
2020-12-22
18:22 <bstorm> rebooting the grid master because it is misbehaving following the NFS outage [tools]
10:53 <arturo> rebase & resolve ugly git merge conflict in labs/private.git [tools]
2020-12-18
18:37 <bstorm> set profile::wmcs::kubeadm::etcd_latency_ms: 15 T267966 [tools]
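The hiera key above lives in the instance-puppet prefix data for the etcd hosts. As a sketch, the YAML fragment would look like:

```yaml
# Hiera data for the tools-k8s-etcd puppet prefix (sketch).
# Scales etcd's heartbeat/election timeouts to tolerate slow disk fsync.
profile::wmcs::kubeadm::etcd_latency_ms: 15
```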
2020-12-17
21:42 <bstorm> doing the same procedure to increase the timeouts more T267966 [tools]
19:56 <bstorm> puppet enabled one at a time, letting things catch up. Timeouts are now adjusted to something closer to fsync values T267966 [tools]
19:44 <bstorm> set etcd timeouts seed value to 20 instead of the default 10 (profile::wmcs::kubeadm::etcd_latency_ms) T267966 [tools]
18:58 <bstorm> disabling puppet on k8s-etcd servers to alter the timeouts T267966 [tools]
14:23 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-4 (T267966) [tools]
14:21 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-5 (T267966) [tools]
14:18 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-6 (T267966) [tools]
14:17 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-7 (T267966) [tools]
14:15 <arturo> regenerating puppet cert with proper alt names in tools-k8s-etcd-8 (T267966) [tools]
14:12 <arturo> updated kube-apiserver manifest with new etcd nodes (T267966) [tools]
13:56 <arturo> adding etcd dns_alt_names hiera keys to the puppet prefix https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/beb27b45a74765a64552f2d4f70a40b217b4f4e9%5E%21/ [tools]
13:12 <arturo> making k8s api server aware of the new etcd nodes via hiera update https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/3761c4c4dab1c3ed0ab0a1133d2ccf3df6c28baf%5E%21/ (T267966) [tools]
12:54 <arturo> joining new etcd nodes in the k8s etcd cluster (T267966) [tools]
12:52 <arturo> adding more etcd nodes in the hiera key in tools-k8s-etcd puppet prefix https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/b4f60768078eccdabdfab4cd99c7c57076de51b2 [tools]
12:50 <arturo> dropping more unused hiera keys in the tools-k8s-etcd puppet prefix https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/e9e66a6787d9b91c08cf4742a27b90b3e6d05aac [tools]
12:49 <arturo> dropping unused hiera keys in the tools-k8s-etcd puppet prefix https://gerrit.wikimedia.org/r/plugins/gitiles/cloud/instance-puppet/+/2b4cb4a41756e602fb0996e7d0210e9102172424 [tools]
12:16 <arturo> created VM `tools-k8s-etcd-8` (T267966) [tools]
12:15 <arturo> created VM `tools-k8s-etcd-7` (T267966) [tools]
12:13 <arturo> created `tools-k8s-etcd` anti-affinity server group [tools]
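Joining the new etcd nodes (the 12:54 entry above) is done member-by-member with `etcdctl`: register the member on an existing node, then start etcd on the new node with the cluster state set to existing. A hedged sketch using the etcd v3 API; the endpoint and peer URLs are hypothetical:

```shell
# On an existing member, register the new node.
export ETCDCTL_API=3
etcdctl --endpoints=https://tools-k8s-etcd-4.example:2379 \
  member add tools-k8s-etcd-7 \
  --peer-urls=https://tools-k8s-etcd-7.example:2380
# Then start etcd on tools-k8s-etcd-7 with
# --initial-cluster-state=existing so it joins rather than bootstraps.
```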
2020-12-11
18:29 <bstorm> certificatesigningrequest.certificates.k8s.io "tool-production-error-tasks-metrics" deleted to stop maintain-kubeusers issues [tools]
12:14 <dcaro> upgrading stable/main (clinic duty) [tools]
12:12 <dcaro> upgrading buster-wikimedia/main (clinic duty) [tools]
12:03 <dcaro> upgrading stable-updates/main, mainly ca-certificates (clinic duty) [tools]
12:01 <dcaro> upgrading stretch-backports/main, mainly libuv (clinic duty) [tools]
11:58 <dcaro> disabled all the repos blocking upgrades on tools-package-builder-02 (duplicated, other releases...) [tools]
11:35 <arturo> uncordon tools-k8s-worker-71 and tools-k8s-worker-55, they weren't uncordoned yesterday for whatever reason (T263284) [tools]