2020-04-09 §
04:29 <bd808> Running rebuild_all for Docker images to pick up toollabs-webservice v0.66 [try #2] (T154504, T234617) [tools]
04:18 <bd808> python3 build.py --image-prefix toolforge --tag latest --no-cache --push --single jessie-sssd [tools]
00:20 <bd808> Docker rebuild failed in toolforge-python2-sssd-base: "zlib1g-dev : Depends: zlib1g (= 1:1.2.8.dfsg-2+b1) but 1:1.2.8.dfsg-2+deb8u1 is to be installed" [tools]
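The 00:20 failure is Debian's classic exact-version skew: zlib1g-dev in the jessie-based image depends on one build of zlib1g while apt resolves a different one. The fix actually used for the successful 04:29 retry is not recorded here; a hedged sketch of the usual workaround inside the affected image build would be:

```
# Sketch of a typical workaround, not necessarily what fixed the 04:29 retry:
# refresh package lists, force the runtime library to the exact build that
# zlib1g-dev declares, then install the -dev package.
apt-get update
apt-get install -y --allow-downgrades 'zlib1g=1:1.2.8.dfsg-2+b1' zlib1g-dev
```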
2020-04-08 §
23:49 <bd808> Running rebuild_all for Docker images to pick up toollabs-webservice v0.66 (T154504, T234617) [tools]
23:35 <bstorm_> deploy toollabs-webservice v0.66 T154504 T234617 [tools]
2020-04-07 §
20:06 <andrewbogott> sss_cache -E on tools-sgebastion-08 and tools-sgebastion-09 [tools]
20:00 <andrewbogott> sss_cache -E on tools-sgebastion-07 [tools]
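Both bastion entries above record a full flush of the SSSD cache; a minimal sketch of that operation (the follow-up restart is an assumption for cases where stale data survives the flush):

```
# Invalidate every cached SSSD entry (users, groups, netgroups) on the bastion.
sudo sss_cache -E
# Assumed optional step: restart the daemon if stale entries persist after the flush.
sudo systemctl restart sssd
```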
2020-04-06 §
19:16 <bstorm_> deleted tools-redis-1001/2 T248929 [tools]
2020-04-03 §
22:40 <bstorm_> shut down tools-redis-1001/2 T248929 [tools]
22:32 <bstorm_> switched tools-redis-1003 to be the active redis server T248929 [tools]
20:41 <bstorm_> deleting tools-redis-1003/4 so they can be re-created in an anti-affinity group T248929 [tools]
18:53 <bstorm_> spin up tools-redis-1004 on stretch and connect to cluster T248929 [tools]
18:23 <bstorm_> spin up tools-redis-1003 on stretch and connect to the cluster T248929 [tools]
16:50 <bstorm_> launching tools-redis-03 (Buster) to see what happens [tools]
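The 2020-04-03 entries replace the old Redis pair with new Stretch instances; the 22:32 switch of tools-redis-1003 to active would look roughly like the sketch below, assuming the new host had been replicating from the old master and that client/proxy configuration is repointed separately via Puppet:

```
# Check that the new instance has caught up with the old master.
redis-cli -h tools-redis-1003 INFO replication
# Promote it to a standalone master (SLAVEOF NO ONE is accepted by all Redis versions).
redis-cli -h tools-redis-1003 SLAVEOF NO ONE
```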
2020-03-30 §
18:28 <bstorm_> Beginning rolling depool, remount, repool of k8s workers for T248702 [tools]
18:22 <bstorm_> disabled puppet across tools-k8s-worker-[1-55].tools.eqiad.wmflabs T248702 [tools]
16:56 <arturo> dropping `_psl.toolforge.org` TXT record (T168677) [tools]
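The 18:28 entry starts a rolling depool, remount, repool of the Kubernetes workers for T248702; one iteration of that loop, sketched with era-appropriate kubectl flags (the NFS mount point and the exact remount step are assumptions):

```
# One worker's pass through the rolling maintenance; repeat for tools-k8s-worker-1..55.
NODE=tools-k8s-worker-1
kubectl drain "$NODE" --ignore-daemonsets --delete-local-data          # depool
ssh "$NODE" 'sudo umount /data/project && sudo mount /data/project'    # remount NFS share (mount point assumed)
kubectl uncordon "$NODE"                                               # repool
```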
2020-03-27 §
21:22 <bstorm_> removed puppet prefix tools-docker-builder T248703 [tools]
21:15 <bstorm_> deleted tools-docker-builder-06 T248703 [tools]
18:54 <bstorm_> launching tools-docker-imagebuilder-01 T248703 [tools]
12:52 <arturo> install python3-pykube on tools-k8s-control-3 for some test interactions with the API from Python [tools]
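The 12:52 entry installs python3-pykube on a control node for ad-hoc API tests from Python; a minimal sketch of the kind of check that enables (the kubeconfig path is an assumption):

```
# Quick pykube sanity check against the cluster API after installing python3-pykube.
python3 -c '
import pykube
# Admin kubeconfig location on the control node is assumed, not taken from the log.
api = pykube.HTTPClient(pykube.KubeConfig.from_file("/etc/kubernetes/admin.conf"))
print([pod.name for pod in pykube.Pod.objects(api).filter(namespace="kube-system")])
'
```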
2020-03-24 §
11:44 <arturo> trying to solve a rebase/merge conflict in labs/private.git in tools-puppetmaster-02 [tools]
11:33 <arturo> merging tools-proxy patch https://gerrit.wikimedia.org/r/c/operations/puppet/+/579952/ (T234617) (second try with some additional bits in LUA) [tools]
10:16 <arturo> merging tools-proxy patch https://gerrit.wikimedia.org/r/c/operations/puppet/+/579952/ (T234617) [tools]
2020-03-18 §
19:07 <bstorm_> removed role::toollabs::logging::sender from project puppet (it wouldn't work anyway) [tools]
18:04 <bstorm_> removed puppet prefix tools-flannel-etcd T246689 [tools]
17:58 <bstorm_> removed puppet prefix tools-worker T246689 [tools]
17:57 <bstorm_> removed puppet prefix tools-k8s-master T246689 [tools]
17:36 <bstorm_> removed lots of deprecated hiera keys from horizon for the old cluster T246689 [tools]
16:59 <bstorm_> deleting "tools-worker-1002", "tools-worker-1001", "tools-k8s-master-01", "tools-flannel-etcd-03", "tools-k8s-etcd-03", "tools-flannel-etcd-02", "tools-k8s-etcd-02", "tools-flannel-etcd-01", "tools-k8s-etcd-01" T246689 [tools]
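Deleting the nine legacy-cluster instances listed at 16:59 could be scripted with the OpenStack CLI along these lines (a sketch; the deletions may equally have been done through Horizon):

```
# Remove the remaining legacy Kubernetes cluster VMs named in the 16:59 entry.
for vm in tools-worker-1001 tools-worker-1002 tools-k8s-master-01 \
          tools-flannel-etcd-01 tools-flannel-etcd-02 tools-flannel-etcd-03 \
          tools-k8s-etcd-01 tools-k8s-etcd-02 tools-k8s-etcd-03; do
    openstack server delete "$vm"
done
```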
2020-03-17 §
13:29 <arturo> set `profile::toolforge::bastion::nproc: 200` for tools-sgebastion-08 (T219070) [tools]
00:08 <bstorm_> shut off tools-flannel-etcd-01/02/03 T246689 [tools]
2020-03-16 §
22:01 <bstorm_> shut off tools-k8s-etcd-01/02/03 T246689 [tools]
22:00 <bstorm_> shut off tools-k8s-master-01 T246689 [tools]
21:59 <bstorm_> shut down tools-worker-1001 and tools-worker-1002 T246689 [tools]
2020-03-11 §
17:00 <jeh> clean up apt cache on tools-sgebastion-07 [tools]
2020-03-06 §
16:25 <bstorm_> updating maintain-kubeusers image to filter invalid tool names [tools]
2020-03-03 §
18:16 <jeh> create OpenStack DNS record for elasticsearch.svc.tools.eqiad1.wikimedia.cloud (eqiad1 subdomain change) T236606 [tools]
18:02 <jeh> create OpenStack DNS record for elasticsearch.svc.tools.eqiad.wikimedia.cloud T236606 [tools]
17:31 <jeh> create an OpenStack virtual IP address for the new elasticsearch cluster T236606 [tools]
10:54 <arturo> deleted VMs `tools-worker-[1003-1020]` (legacy k8s cluster) (T246689) [tools]
10:51 <arturo> cordoned/drained all legacy k8s worker nodes except 1001/1002 (T246689) [tools]
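The 17:31–18:16 entries allocate a virtual IP for the new Elasticsearch cluster and publish service records for it in Designate; a hedged sketch of the equivalent CLI call (the zone name is read off the log entry, the VIP shown is a placeholder):

```
# Create the A record for the Elasticsearch service VIP in the project's service zone.
# 172.16.0.10 is a placeholder address, not the real VIP.
openstack recordset create svc.tools.eqiad1.wikimedia.cloud. elasticsearch \
    --type A --record 172.16.0.10
```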
2020-03-02 §
22:26 <jeh> starting first pass of elasticsearch data migration to new cluster T236606 [tools]
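The log does not say how the first migration pass was performed; one common way to copy indices between clusters is Elasticsearch's reindex-from-remote API, shown here purely as an illustration (both host names and the index name are placeholders):

```
# Illustrative reindex-from-remote call; hosts and index are placeholders only.
curl -s -H 'Content-Type: application/json' -XPOST \
    'http://tools-elastic-new:9200/_reindex' -d '{
  "source": {
    "remote": { "host": "http://tools-elastic-old:9200" },
    "index": "example_tool_index"
  },
  "dest": { "index": "example_tool_index" }
}'
```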
2020-03-01 §
01:48 <bstorm_> old version of kubectl removed. Anyone who needs it can download it with `curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.4.12/bin/linux/amd64/kubectl` [tools]
01:27 <bstorm_> running the force-migrate command to make sure any new kubernetes deployments are on the new cluster. [tools]
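For anyone following the 01:48 note, the downloaded binary still has to be made executable and put on the PATH; the usual follow-up (installing into ~/bin is only a suggestion, since Toolforge users cannot write to system paths):

```
curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.4.12/bin/linux/amd64/kubectl
chmod +x kubectl
mkdir -p ~/bin && mv kubectl ~/bin/   # any directory on $PATH works; ~/bin is a suggestion
```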
2020-02-28 §
22:14 <bstorm_> shutting down the old maintain-kubeusers and taking the gloves off the new one (removing --gentle-mode) [tools]
16:51 <bstorm_> node/tools-k8s-worker-15 uncordoned [tools]
16:44 <bstorm_> drained tools-k8s-worker-15 and hard rebooted it because it wasn't happy [tools]
16:36 <bstorm_> rebooting k8s workers 1-35 on the 2020 cluster to clear a strange nologin condition that has been there since the NFS maintenance [tools]
16:14 <bstorm_> rebooted tools-k8s-worker-7 to clear some puppet issues [tools]
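The 16:44 drain plus hard reboot of tools-k8s-worker-15 (uncordoned again at 16:51) follows the standard depool/reboot/repool pattern; a sketch with era-appropriate kubectl flags (the reboot mechanism is an assumption, the log only says "hard rebooting"):

```
# Depool the misbehaving worker, hard-reboot it, then repool it (16:44 and 16:51 entries).
kubectl drain tools-k8s-worker-15 --ignore-daemonsets --delete-local-data
openstack server reboot --hard tools-k8s-worker-15   # reboot mechanism assumed
kubectl uncordon tools-k8s-worker-15
```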