2020-01-06 §
18:49 <bstorm_> edited /etc/fstab to remove NFS and rebooted to clear stale mounts on tools-k8s-haproxy-2 T241908 [tools]
18:47 <bstorm_> added mount_nfs=false to tools-k8s-haproxy puppet prefix T241908 [tools]
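That cleanup boils down to something like the following (the exact fstab entries and mount points on tools-k8s-haproxy-2 are assumptions, not shown in the log):
  sudo sed -i.bak -E '/\snfs4?\s/d' /etc/fstab   # drop NFS mount lines, keeping a backup copy
  sudo umount -a -l -t nfs,nfs4                  # lazy-unmount anything still stale
  sudo reboot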
18:24 <bd808> Deleted shutdown instance tools-worker-1029 (was an SSSD testing instance) [tools]
16:42 <bstorm_> failed sge-shadow-master back to the main grid master [tools]
16:42 <bstorm_> Removed files for old S1tty that wasn't working on sge-grid-master [tools]
2020-01-04 §
18:11 <bd808> Shutdown tools-worker-1029 [tools]
18:10 <bd808> kubectl delete node tools-worker-1029.tools.eqiad.wmflabs [tools]
18:06 <bd808> Removed tools-worker-1029.tools.eqiad.wmflabs from k8s::worker_hosts hiera in preparation for decom [tools]
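The decom sequence for that node is roughly as follows (drain flags assumed; the k8s::worker_hosts hiera change itself is made in Horizon):
  kubectl drain tools-worker-1029.tools.eqiad.wmflabs --ignore-daemonsets --delete-local-data
  kubectl delete node tools-worker-1029.tools.eqiad.wmflabs
  openstack server stop tools-worker-1029   # deleted later once confirmed unused (see 2020-01-06)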
16:54 <bstorm_> moving VMs tools-worker-1012/1028/1005 from cloudvirt1024 to cloudvirt1003 due to hardware errors T241884 [tools]
16:47 <bstorm_> moving VM tools-flannel-etcd-02 from cloudvirt1024 to cloudvirt1003 due to hardware errors T241884 [tools]
16:16 <bd808> Draining tools-worker-10{05,12,28} due to hardware errors (T241884) [tools]
16:13 <arturo> moving VM tools-sgewebgrid-lighttpd-0927 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:11 <arturo> moving VM tools-sgewebgrid-lighttpd-0926 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:09 <arturo> moving VM tools-sgewebgrid-lighttpd-0925 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:08 <arturo> moving VM tools-sgewebgrid-lighttpd-0924 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:07 <arturo> moving VM tools-sgewebgrid-lighttpd-0923 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:06 <arturo> moving VM tools-sgewebgrid-lighttpd-0909 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:04 <arturo> moving VM tools-sgeexec-0923 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:02 <arturo> moving VM tools-sgeexec-0910 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241873) [tools]
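Evacuating the failing hypervisor combines a Kubernetes drain with OpenStack migrations; roughly (the actual WMCS migration tooling isn't named in the log, and client flags vary by version):
  # drain the affected k8s workers so their pods reschedule elsewhere
  kubectl drain tools-worker-1005.tools.eqiad.wmflabs --ignore-daemonsets --delete-local-data
  # then, as a cloud admin, live-migrate each VM to a healthy hypervisor
  openstack server migrate --live cloudvirt1009 <server-uuid>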
2020-01-03 §
16:48 <bstorm_> updated the ValidatingWebhookConfiguration for the ingress admission controller to the working settings [tools]
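Inspecting and adjusting that webhook is along these lines (the configuration name is an assumption, and the "working settings" themselves are not in the log):
  kubectl get validatingwebhookconfigurations
  kubectl edit validatingwebhookconfiguration ingress-admission   # name assumed; e.g. adjust failurePolicy / namespaceSelector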
11:51 <arturo> [new k8s] deploy cadvisor as in https://gerrit.wikimedia.org/r/c/operations/puppet/+/561654 (T237643) [tools]
11:21 <arturo> upload k8s.gcr.io/cadvisor:v0.30.2 docker image to the docker registry as docker-registry.tools.wmflabs.org/cadvisor:0.30.2 for T237643 [tools]
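Mirroring the upstream image into the project registry is roughly:
  docker pull k8s.gcr.io/cadvisor:v0.30.2
  docker tag k8s.gcr.io/cadvisor:v0.30.2 docker-registry.tools.wmflabs.org/cadvisor:0.30.2
  docker push docker-registry.tools.wmflabs.org/cadvisor:0.30.2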
03:04 <bd808> Really rebuilding all {jessie,stretch,buster}-sssd images. Last time I forgot to actually update the git clone. [tools]
00:11 <bd808> Rebuilding all stretch-sssd Docker images to pick up busybox [tools]
2020-01-02 §
23:54 <bd808> Rebuilding all buster-sssd Docker images to pick up busybox [tools]
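The Toolforge image build tooling itself isn't shown in these entries; generically, a rebuild after refreshing the clone amounts to something like (paths and image names hypothetical):
  cd ~/toollabs-images && git pull   # the step missed on the first attempt (see 2020-01-03 03:04)
  docker build -t docker-registry.tools.wmflabs.org/toolforge-buster-sssd:latest buster-sssd/
  docker push docker-registry.tools.wmflabs.org/toolforge-buster-sssd:latest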
2019-12-30 §
05:02 <andrewbogott> moving tools-worker-1012 to cloudvirt1024 for T241523 [tools]
04:49 <andrewbogott> draining and rebooting tools-worker-1031, its drive is full [tools]
2019-12-29 §
01:38 <Krenair> Cordoned tools-worker-1012 and deleted pods associated with dplbot and dewikigreetbot as well as my own testing one; the host seems to be under heavy load - T241523 [tools]
2019-12-27 §
15:06 <Krenair> Killed a "python parse_page.py outreachy" process by aikochou that was hogging IO on tools-sgebastion-07 [tools]
2019-12-25 §
16:07 <zhuyifei1999_> pkilled 5 `python pwb.py` processes belonging to `tools.kaleem-bot` on tools-sgebastion-07 [tools]
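Both of these cleanups come down to killing a tool account's runaway processes on the bastion, e.g. for the entry above (account and pattern taken from the log):
  sudo pkill -u tools.kaleem-bot -f 'python pwb.py'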
2019-12-22 §
20:13 <bd808> Enabled Puppet on tools-proxy-06.tools.eqiad.wmflabs after nginx config test (T241310) [tools]
18:52 <bd808> Disabled Puppet on tools-proxy-06.tools.eqiad.wmflabs to test nginx config change (T241310) [tools]
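The disable/test/enable cycle is roughly:
  sudo puppet agent --disable 'testing nginx config change (T241310)'
  sudo nginx -t                   # validate the hand-edited config before puppetizing it
  sudo puppet agent --enable
  sudo puppet agent -t            # converge back to the puppetized state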
2019-12-20 §
22:28 <bd808> Re-enabled Puppet on tools-sgebastion-09. Reason for disable was "arturo raising systemd limits" [tools]
11:33 <arturo> reboot tools-k8s-control-3 to fix some stale NFS mount issues [tools]
2019-12-18 §
17:33 <bstorm_> updated package in aptly for toollabs-webservice to 0.53 [tools]
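Publishing the new package version through aptly is along these lines (repo and distribution names are assumptions):
  aptly repo add stretch-tools toollabs-webservice_0.53_all.deb
  aptly publish update --skip-signing stretch-tools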
11:49 <arturo> introduce placeholder DNS records for the toolforge.org domain. No services are provided under this domain yet for end users; this is just us testing (SSL, proxy setup, etc.). This may be reverted at any time. [tools]
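With the designate-backed DNS service, a placeholder record is created roughly like this (the record data here is a documentation IP, not the real value):
  openstack recordset create --type A --record 192.0.2.1 toolforge.org. www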
2019-12-17 §
20:25 <bd808> Fixed https://tools.wmflabs.org/ to redirect to https://tools.wmflabs.org/admin/ [tools]
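A quick external check of that redirect:
  curl -sI https://tools.wmflabs.org/ | grep -iE '^(HTTP|location)'
  # expect a 30x status and Location: https://tools.wmflabs.org/admin/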
19:20 <bstorm_> deployed the changes to the live proxy to enable the new kubernetes cluster T234037 [tools]
16:53 <bstorm_> maintain-kubeusers app deployed fully in tools for new kubernetes cluster T214513 T228499 [tools]
16:50 <bstorm_> updated the maintain-kubeusers docker image for beta and tools [tools]
04:48 <bstorm_> completed first run of maintain-kubeusers 2 in the new cluster T214513 [tools]
01:26 <bstorm_> running the first run of maintain-kubeusers 2.0 for the new cluster T214513 (more successfully this time) [tools]
01:25 <bstorm_> unset the immutable bit from 1704 tool kubeconfigs T214513 [tools]
01:05 <bstorm_> beginning the first run of the new maintain-kubeusers in gentle-mode -- but it was just killed by some files having the immutable bit set T214513 [tools]
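Clearing that immutable flag (the 01:25 entry) is something like the following, run where chattr can reach the underlying filesystem, i.e. on the NFS server; the path follows the standard Toolforge layout:
  sudo find /data/project -maxdepth 3 -path '*/.kube/config' -exec chattr -i {} +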
00:45 <bstorm_> enabled encryption at rest on the new k8s cluster [tools]
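Encryption at rest for Secrets is driven by a kube-apiserver EncryptionConfiguration; the cluster's actual provider choice and key handling aren't in the log, but a typical file is shaped like this (path and key name hypothetical):
  # /etc/kubernetes/encryption-conf.yaml, passed to kube-apiserver via --encryption-provider-config
  apiVersion: apiserver.config.k8s.io/v1
  kind: EncryptionConfiguration
  resources:
    - resources: ["secrets"]
      providers:
        - aescbc:
            keys:
              - name: key1
                secret: <base64-encoded 32-byte key>
        - identity: {}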
2019-12-16 §
22:04 <bd808> Added 'ALLOW IPv4 25/tcp from 0.0.0.0/0' to "MTA" security group applied to tools-mail-02 [tools]
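The CLI equivalent of that rule (the change may just as well have been made in Horizon):
  openstack security group rule create --ingress --protocol tcp --dst-port 25 --remote-ip 0.0.0.0/0 MTA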
19:05 <bstorm_> deployed the maintain-kubeusers operations pod to the new cluster [tools]
2019-12-14 §
10:48 <valhallasw`cloud> re-enabling puppet on tools-sgeexec-0912, likely left-over from NFS maintenance (no reason was specified). [tools]
2019-12-13 §
18:46 <bstorm_> updated tools-k8s-control-2 and 3 to the new config as well [tools]
17:56 <bstorm_> updated tools-k8s-control-1 to the new control plane configuration [tools]