2019-12-17 §
01:26 <bstorm_> running the first run of maintain-kubeusers 2.0 for the new cluster T214513 (more successfully this time) [tools]
01:25 <bstorm_> unset the immutable bit from 1704 tool kubeconfigs T214513 [tools]
01:05 <bstorm_> beginning the first run of the new maintain-kubeusers in gentle-mode -- but it was just killed by some files having the immutable bit set T214513 [tools]
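For reference, clearing the immutable attribute so maintain-kubeusers can rewrite the kubeconfigs looks roughly like this (the /data/project/<tool>/.kube/config layout is an assumption; the actual run covered 1704 files):

```
# hypothetical sketch: drop the immutable bit on every tool kubeconfig
for cfg in /data/project/*/.kube/config; do
    sudo chattr -i "$cfg"
done
sudo lsattr /data/project/*/.kube/config | head   # spot-check that the 'i' flag is gone
```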
00:45 <bstorm_> enabled encryption at rest on the new k8s cluster [tools]
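Encryption at rest on a kubeadm cluster generally means an EncryptionConfiguration plus a kube-apiserver flag; a minimal sketch, with a placeholder key rather than anything actually used here:

```
# write the encryption provider config (the secret below is a placeholder)
sudo tee /etc/kubernetes/encryption-conf.yaml >/dev/null <<'EOF'
apiVersion: apiserver.config.k8s.io/v1
kind: EncryptionConfiguration
resources:
  - resources: ["secrets"]
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: <base64-encoded 32-byte key>
      - identity: {}
EOF
# then add --encryption-provider-config=/etc/kubernetes/encryption-conf.yaml to the
# kube-apiserver static pod manifest so the apiserver restarts with encryption enabled
```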
2019-12-16 §
22:04 <bd808> Added 'ALLOW IPv4 25/tcp from 0.0.0.0/0' to "MTA" security group applied to tools-mail-02 [tools]
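The equivalent rule via the OpenStack CLI would look something like this (group name per the entry above; project scoping omitted):

```
# allow inbound SMTP from anywhere on the "MTA" security group
openstack security group rule create --ingress --protocol tcp \
    --dst-port 25 --remote-ip 0.0.0.0/0 MTA
```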
19:05 <bstorm_> deployed the maintain-kubeusers operations pod to the new cluster [tools]
2019-12-14 §
10:48 <valhallasw`cloud> re-enabling puppet on tools-sgeexec-0912; it was likely left disabled after NFS maintenance (no reason was specified). [tools]
2019-12-13 §
18:46 <bstorm_> updated tools-k8s-control-2 and 3 to the new config as well [tools]
17:56 <bstorm_> updated tools-k8s-control-1 to the new control plane configuration [tools]
17:47 <bstorm_> edited kubeadm-config configMap object to match the new init config [tools]
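The kubeadm-config ConfigMap lives in kube-system and can be edited in place; roughly:

```
# open the ClusterConfiguration kubeadm stored at init time and adjust it to the new config
kubectl -n kube-system edit configmap kubeadm-config
# the saved values are what later kubeadm joins/upgrades will use; running control-plane
# pods only pick them up once their static pod manifests are regenerated
```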
17:32 <bstorm_> rebooting tools-k8s-control-2 to correct mount issue [tools]
00:44 <bstorm_> rebooting tools-static-13 [tools]
00:28 <bstorm_> rebooting the k8s master to clear NFS errors [tools]
00:15 <bstorm_> switch tools-acme-chief config to match the new authdns_servers format upstream [tools]
2019-12-12 §
23:36 <bstorm_> rebooting toolschecker after downtiming the services [tools]
22:58 <bstorm_> rebooting tools-acme-chief-01 [tools]
22:53 <bstorm_> rebooting the cron server, tools-sgecron-01, as it hadn't recovered from last night's maintenance [tools]
11:20 <arturo> rolling reboot for all grid & k8s worker nodes due to NFS staleness [tools]
09:22 <arturo> reboot tools-sgeexec-0911 to try fixing weird NFS state [tools]
08:46 <arturo> doing `run-puppet-agent` in all VMs to see state of NFS [tools]
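Project-wide puppet runs like this are usually driven from a cumin host; a sketch, assuming the OpenStack project selector:

```
# from a cloud cumin host: run the agent on every VM in the tools project
sudo cumin --force 'O{project:tools}' 'run-puppet-agent'
```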
08:34 <arturo> reboot tools-worker-1033/1034 and tools-sgebastion-08 to try to correct NFS mount issues [tools]
2019-12-11 §
18:13 <bd808> Restarted maintain-dbusers on labstore1004. Process had not logged any account creations since 2019-12-01T22:45:45. [tools]
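The restart itself is a plain systemd operation; a sketch, assuming the unit is named maintain-dbusers:

```
# restart the service and watch its log to confirm account creations resume
sudo systemctl restart maintain-dbusers
sudo journalctl -u maintain-dbusers --since '2019-12-01' -n 50
```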
17:24 <andrewbogott> deleted and/or truncated a bunch of logfiles on tools-worker-1031 [tools]
2019-12-10 §
13:59 <arturo> set pod replicas to 3 in the new k8s cluster (T239405) [tools]
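Scaling the replica count is a one-liner; the namespace and deployment name below are assumptions, not necessarily what T239405 targeted:

```
# bump the (assumed) ingress deployment to 3 replicas and check where the pods land
kubectl -n ingress-nginx scale deployment nginx-ingress --replicas=3
kubectl -n ingress-nginx get pods -o wide
```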
2019-12-09 §
11:06 <andrewbogott> deleting unused security groups: catgraph, devpi, MTA, mysql, syslog, test T91619 [tools]
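With the OpenStack CLI the cleanup looks roughly like this; deletion fails if a group is still attached to an instance, which is the safety net for "unused":

```
# remove each unused security group by name
for sg in catgraph devpi MTA mysql syslog test; do
    openstack security group delete "$sg"
done
```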
2019-12-04 §
13:45 <arturo> drop puppet prefix `tools-cron`, deprecated and no longer in use [tools]
2019-11-29 §
11:45 <arturo> created 3 new VMs `tools-k8s-worker-[3,4,5]` (T239403) [tools]
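VM creation is a single openstack call per instance; the image, flavor and network names below are placeholders, not the values actually used:

```
# create the three new worker VMs (image/flavor/network are placeholders)
for i in 3 4 5; do
    openstack server create "tools-k8s-worker-${i}" \
        --image debian-buster --flavor m1.large --network lan-flat-cloudinstances2b --wait
done
```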
10:28 <arturo> re-arm keyholder in tools-acme-chief-01 (password in labs/private.git @ tools-puppetmaster-01) [tools]
10:27 <arturo> re-arm keyholder in tools-acme-chief-02 (password in labs/private.git @ tools-puppetmaster-01) [tools]
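Re-arming keyholder is interactive: it prompts for the passphrase (kept in labs/private.git as noted above) and then holds the decrypted key in the agent:

```
# on the acme-chief host: arm the agent, then confirm the key shows as armed
sudo keyholder arm
sudo keyholder status
```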
2019-11-26 §
23:25 <bstorm_> rebuilding docker images to include the new webservice 0.52 in all versions instead of just the stretch ones T236202 [tools]
22:57 <bstorm_> push upgraded webservice 0.52 to the buster and jessie repos for container rebuilds T236202 [tools]
19:55 <phamhi> drained tools-worker-1002,8,15,32 to rebalance the cluster [tools]
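A drain/uncordon cycle is the usual way to rebalance; node names may need their full .tools.eqiad.wmflabs form, and the flags below match kubectl of that era:

```
# evict each worker's pods so they reschedule elsewhere, then allow new work again
for n in tools-worker-1002 tools-worker-1008 tools-worker-1015 tools-worker-1032; do
    kubectl drain "$n" --ignore-daemonsets --delete-local-data
    kubectl uncordon "$n"
done
```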
19:45 <phamhi> cleaned up a container that was taking up 16G of disk space on tools-worker-1020 in order to re-run the puppet client [tools]
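The cleanup on the worker is plain Docker housekeeping; the container id below is a placeholder:

```
# see which containers are holding disk, remove the offender, then clear leftovers
sudo docker ps -as
sudo docker rm <container-id>
sudo docker system prune -f
```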
14:01 <arturo> drop hiera references to `tools-test-proxy-01.tools.eqiad.wmflabs`; that VM no longer exists [tools]
14:00 <arturo> introduce the `profile::toolforge::proxies` hiera key in the global puppet config [tools]
2019-11-25 §
10:35 <arturo> refresh puppet certs for tools-k8s-etcd-[4-6] nodes (T238655) [tools]
10:35 <arturo> add puppet cert SANs via instance hiera to tools-k8s-etcd-[4-6] nodes (T238655) [tools]
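Getting new SANs onto an agent cert means setting the alt-names hiera key for the instance and then reissuing the cert; the key name, paths and example CN below are assumptions:

```
# instance hiera (per node): profile::base::puppet::dns_alt_names: [ ...extra SANs... ]
# on the etcd node: discard the old cert material
sudo rm -rf /var/lib/puppet/ssl
# on tools-puppetmaster-01: clean the old signed cert so a new CSR (with SANs) can be signed
sudo puppet cert clean tools-k8s-etcd-4.tools.eqiad.wmflabs
# back on the node: request, fetch and use the new cert
sudo run-puppet-agent
```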
2019-11-22 §
13:32 <arturo> created security group `tools-new-k8s-full-connectivity` and add new k8s VMs to it (T238654) [tools]
05:55 <jeh> add Riley Huntley `riley` to base tools project [tools]
2019-11-21 §
12:48 <arturo> reboot the new k8s cluster after the upgrade [tools]
11:49 <arturo> upgrading new k8s kubectl version to 1.15.6 (T238654) [tools]
11:44 <arturo> upgrading new k8s kubelet version to 1.15.6 (T238654) [tools]
10:29 <arturo> upgrading new k8s cluster version to 1.15.6 using kubeadm (T238654) [tools]
10:28 <arturo> install kubeadm 1.15.6 on worker/control nodes in the new k8s cluster (T238654) [tools]
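The entries above map onto the standard kubeadm upgrade flow; a condensed sketch for one control node, using the upstream apt package pins:

```
# install the matching kubeadm, then let it drive the control-plane upgrade
sudo apt-get update && sudo apt-get install -y kubeadm=1.15.6-00
sudo kubeadm upgrade plan
sudo kubeadm upgrade apply v1.15.6
# then move kubelet and kubectl to the same version and restart the kubelet
sudo apt-get install -y kubelet=1.15.6-00 kubectl=1.15.6-00
sudo systemctl daemon-reload && sudo systemctl restart kubelet
```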
2019-11-19 §
13:49 <arturo> re-create nginx-ingress pod due to deployment template refresh (T237643) [tools]
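Re-creating the pod is just a delete; the Deployment controller brings up a replacement from the refreshed template. Namespace and label below are assumptions:

```
# delete the running ingress pod and watch its replacement come up
kubectl -n ingress-nginx delete pod -l app=nginx-ingress
kubectl -n ingress-nginx get pods -w
```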
12:46 <arturo> deploy changes to tools-prometheus to account for the new k8s cluster (T237643) [tools]
2019-11-15 §
14:44 <arturo> stop live-hacks on tools-prometheus-01 T237643 [tools]
2019-11-13 §
17:20 <arturo> live-hacking tools-prometheus-01 to test some experimental configs for the new k8s cluster (T237643) [tools]
2019-11-12 §
12:52 <arturo> reboot tools-proxy-06 to reset iptables setup T238058 [tools]
2019-11-10 §
02:17 <bd808> Building new Docker images for T237836 (retrying after cleaning out old images on tools-docker-builder-06) [tools]
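Reclaiming the builder's disk before the rebuild is ordinary Docker housekeeping; the rebuild entry point itself is repo-specific, so it is only noted here:

```
# free space held by old, unreferenced images on the builder host
sudo docker image prune -a -f
# then re-run the image build/push for the images tracked in T237836
```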