2020-01-07 §
05:02 <bd808> Creating tools-k8s-worker-[6-14] [tools]
00:26 <bstorm_> repooled tools-sgewebgrid-lighttpd-0919 [tools]
00:17 <bstorm_> repooled tools-sgewebgrid-lighttpd-0918 [tools]
00:15 <bstorm_> moving tools-sgewebgrid-lighttpd-0918 and -0919 to cloudvirt1004 from cloudvirt1029 to rebalance load [tools]
00:02 <bstorm_> depooled tools-sgewebgrid-lighttpd-0918 and 0919 to move to cloudvirt1004 to improve spread [tools]
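Note: depooling and repooling a Son of Grid Engine web grid node, as in the entries above, generally amounts to disabling and re-enabling its queue instances from the grid master. A minimal sketch with standard SGE commands; the queue name and Toolforge wrapper scripts are assumptions and may differ in practice:
    # On the grid master: disable the queue instance so no new jobs are
    # scheduled on the node (already-running jobs keep running).
    qmod -d 'webgrid-lighttpd@tools-sgewebgrid-lighttpd-0918.tools.eqiad.wmflabs'
    # Check the queue state; a "d" flag appears in the state column.
    qstat -f -q 'webgrid-lighttpd@tools-sgewebgrid-lighttpd-0918.*'
    # After the move, re-enable (repool) the queue instance.
    qmod -e 'webgrid-lighttpd@tools-sgewebgrid-lighttpd-0918.tools.eqiad.wmflabs'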
2020-01-06 §
23:40 <bd808> Deleted tools-sgewebgrid-lighttpd-09{0[1-9],10} [tools]
23:36 <bd808> Shutdown tools-sgewebgrid-lighttpd-09{0[1-9],10} [tools]
23:34 <bd808> Decommissioned tools-sgewebgrid-lighttpd-09{0[1-9],10} [tools]
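Note: the 09{0[1-9],10} shorthand above mixes brace expansion and a glob; when scripting against the same ten hosts, a plain numeric loop is unambiguous. A minimal bash sketch:
    # Enumerate tools-sgewebgrid-lighttpd-0901 through -0910.
    for n in $(seq -w 1 10); do
      echo "tools-sgewebgrid-lighttpd-09${n}"
    done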
23:13 <bstorm_> Repooled tools-sgeexec-0922 because I don't know why it was depooled [tools]
23:01 <bd808> Depooled tools-sgewebgrid-lighttpd-0910.tools.eqiad.wmflabs [tools]
22:58 <bd808> Depooling tools-sgewebgrid-lighttpd-090[2-9] [tools]
22:57 <bd808> Disabling queues on tools-sgewebgrid-lighttpd-090[2-9] [tools]
21:07 <bd808> Restarted kube2proxy on tools-proxy-05 to try to refresh the admin tool's routes [tools]
18:54 <bstorm_> edited /etc/fstab to remove NFS and unmounted the NFS volumes on tools-k8s-haproxy-1 T241908 [tools]
18:49 <bstorm_> edited /etc/fstab to remove NFS and rebooted to clear stale mounts on tools-k8s-haproxy-2 T241908 [tools]
18:47 <bstorm_> added mount_nfs=false to tools-k8s-haproxy puppet prefix T241908 [tools]
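Note: the three entries above follow the usual pattern for taking NFS off a Cloud VPS instance: set the hiera flag on the puppet prefix so Puppet stops managing the mounts, then clean up /etc/fstab and the live mounts by hand. A minimal sketch; the example mount point is an assumption:
    # In Horizon, on the tools-k8s-haproxy puppet prefix hiera:
    #   mount_nfs: false
    # Then, on each instance, drop the NFS lines from fstab (backup kept
    # as fstab.bak) and unmount whatever is still mounted.
    sudo sed -i.bak '/nfs/d' /etc/fstab
    sudo umount -a -t nfs,nfs4
    # A stale mount that refuses to release needs a lazy unmount (or a
    # reboot, as was done on tools-k8s-haproxy-2).
    sudo umount -l /mnt/nfs/labstore-secondary-tools-home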
18:24 <bd808> Deleted shutdown instance tools-worker-1029 (was an SSSD testing instance) [tools]
16:42 <bstorm_> failed sge-shadow-master back to the main grid master [tools]
16:42 <bstorm_> Removed files for old S1tty that wasn't working on sge-grid-master [tools]
2020-01-04 §
18:11 <bd808> Shutdown tools-worker-1029 [tools]
18:10 <bd808> kubectl delete node tools-worker-1029.tools.eqiad.wmflabs [tools]
18:06 <bd808> Removed tools-worker-1029.tools.eqiad.wmflabs from k8s::worker_hosts hiera in preparation for decom [tools]
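Note: removing a Kubernetes worker follows the usual cordon/drain/delete sequence; a minimal sketch, with flag names that vary slightly across kubectl versions of that era:
    # Stop new pods landing on the node, then evict what is there.
    kubectl cordon tools-worker-1029.tools.eqiad.wmflabs
    kubectl drain tools-worker-1029.tools.eqiad.wmflabs --ignore-daemonsets --delete-local-data
    # Remove the node object once it is empty.
    kubectl delete node tools-worker-1029.tools.eqiad.wmflabs
    # Finally shut down / delete the VM itself (Horizon or the OpenStack CLI).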
16:54 <bstorm_> moving VMs tools-worker-1012/1028/1005 from cloudvirt1024 to cloudvirt1003 due to hardware errors T241884 [tools]
16:47 <bstorm_> moving VM tools-flannel-etcd-02 from cloudvirt1024 to cloudvirt1003 due to hardware errors T241884 [tools]
16:16 <bd808> Draining tools-worker-10{05,12,28} due to hardware errors (T241884) [tools]
16:13 <arturo> moving VM tools-sgewebgrid-lighttpd-0927 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:11 <arturo> moving VM tools-sgewebgrid-lighttpd-0926 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:09 <arturo> moving VM tools-sgewebgrid-lighttpd-0925 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:08 <arturo> moving VM tools-sgewebgrid-lighttpd-0924 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:07 <arturo> moving VM tools-sgewebgrid-lighttpd-0923 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:06 <arturo> moving VM tools-sgewebgrid-lighttpd-0909 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:04 <arturo> moving VM tools-sgeexec-0923 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241884) [tools]
16:02 <arturo> moving VM tools-sgeexec-0910 from cloudvirt1024 to cloudvirt1009 due to hardware errors (T241873) [tools]
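Note: the migrations above are driven from the OpenStack side. A rough sketch using the stock OpenStack CLI; WMCS has its own wrapper scripts, so the exact commands and the placeholder UUID are assumptions:
    # Find the instance in the tools project and ask nova to move it
    # off the failing hypervisor.
    openstack server list --project tools --name tools-sgeexec-0910
    openstack server migrate --live cloudvirt1009 <instance-uuid>
    # Watch until the VM reports ACTIVE on the new host.
    openstack server show <instance-uuid> -f value -c status -c OS-EXT-SRV-ATTR:host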
2020-01-03 §
16:48 <bstorm_> updated the ValidatingWebhookConfiguration for the ingress admission controller to the working settings [tools]
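Note: the admission webhook is registered through a ValidatingWebhookConfiguration object, so "updating it to the working settings" is an edit or patch of that object. A generic sketch; the object name and the field being changed are assumptions, not taken from the log:
    # Inspect the currently registered validating webhooks.
    kubectl get validatingwebhookconfigurations
    # Edit the ingress admission controller's configuration in place...
    kubectl edit validatingwebhookconfiguration ingress-admission
    # ...or apply a targeted patch, e.g. to the failurePolicy field.
    kubectl patch validatingwebhookconfiguration ingress-admission \
      --type=json -p='[{"op": "replace", "path": "/webhooks/0/failurePolicy", "value": "Ignore"}]'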
11:51 <arturo> [new k8s] deploy cadvisor as in https://gerrit.wikimedia.org/r/c/operations/puppet/+/561654 (T237643) [tools]
11:21 <arturo> upload k8s.gcr.io/cadvisor:v0.30.2 docker image to the docker registry as docker-registry.tools.wmflabs.org/cadvisor:0.30.2 for T237643 [tools]
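Note: mirroring an upstream image into the Toolforge registry is a pull/tag/push cycle; a minimal sketch of the cadvisor upload described above:
    # Pull the upstream image, retag it for the local registry, and push.
    docker pull k8s.gcr.io/cadvisor:v0.30.2
    docker tag k8s.gcr.io/cadvisor:v0.30.2 docker-registry.tools.wmflabs.org/cadvisor:0.30.2
    docker push docker-registry.tools.wmflabs.org/cadvisor:0.30.2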
03:04 <bd808> Really rebuilding all {jessie,stretch,buster}-sssd images. Last time I forgot to actually update the git clone. [tools]
00:11 <bd808> Rebuilding all stretch-sssd Docker images to pick up busybox [tools]
2020-01-02 §
23:54 <bd808> Rebuilding all buster-sssd Docker images to pick up busybox [tools]
2019-12-30 §
05:02 <andrewbogott> moving tools-worker-1012 to cloudvirt1024 for T241523 [tools]
04:49 <andrewbogott> draining and rebooting tools-worker-1031, its drive is full [tools]
2019-12-29 §
01:38 <Krenair> Cordoned tools-worker-1012 and deleted the pods associated with dplbot and dewikigreetbot as well as my own testing one; the host seems to be under heavy load - T241523 [tools]
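Note: on an overloaded worker, the usual move is to cordon the node and delete the heaviest pods so they reschedule elsewhere. A minimal sketch; the tool namespaces shown are illustrative:
    # Keep new pods off the overloaded node.
    kubectl cordon tools-worker-1012.tools.eqiad.wmflabs
    # See what is running there (works on old clusters too).
    kubectl get pods -o wide --all-namespaces | grep tools-worker-1012
    # Delete the offending pods so they land on other workers.
    kubectl -n dplbot delete pod <pod-name>
    kubectl -n dewikigreetbot delete pod <pod-name>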
2019-12-27 §
15:06 <Krenair> Killed a "python parse_page.py outreachy" process by aikochou that was hogging IO on tools-sgebastion-07 [tools]
2019-12-25 §
16:07 <zhuyifei1999_> pkilled 5 `python pwb.py` processes belonging to `tools.kaleem-bot` on tools-sgebastion-07 [tools]
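Note: tracking down and stopping a runaway tool process on a bastion, as in this entry and the parse_page.py one above, is mostly a pgrep/pkill exercise, with iotop useful when the problem is IO rather than CPU. A minimal sketch:
    # See which processes are actually doing IO right now.
    sudo iotop -o -b -n 1
    # List the matching processes for the tool account, then kill them.
    pgrep -a -u tools.kaleem-bot -f 'python pwb.py'
    sudo pkill -u tools.kaleem-bot -f 'python pwb.py'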
2019-12-22 §
20:13 <bd808> Enabled Puppet on tools-proxy-06.tools.eqiad.wmflabs after nginx config test (T241310) [tools]
18:52 <bd808> Disabled Puppet on tools-proxy-06.tools.eqiad.wmflabs to test nginx config change (T241310) [tools]
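Note: the disable/enable pair above is the standard way to keep Puppet from reverting a hand-applied change while it is being tested; a minimal sketch:
    # Stop Puppet from overwriting the local nginx config while testing.
    sudo puppet agent --disable "bd808: testing nginx config change (T241310)"
    # Apply the candidate config and verify it parses before reloading.
    sudo nginx -t
    sudo systemctl reload nginx
    # When done (or after the change is merged), let Puppet run again.
    sudo puppet agent --enable
    sudo puppet agent --test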
2019-12-20 §
22:28 <bd808> Re-enabled Puppet on tools-sgebastion-09. Reason for disable was "arturo raising systemd limits" [tools]
11:33 <arturo> reboot tools-k8s-control-3 to fix some stale NFS mount issues [tools]
2019-12-18 §
17:33 <bstorm_> updated package in aptly for toollabs-webservice to 0.53 [tools]
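Note: publishing a new toollabs-webservice build to the project's aptly repository is roughly an add-then-republish; a sketch with assumed repo and distribution names:
    # On the aptly host: add the new .deb and republish the distribution.
    aptly repo add stretch-tools toollabs-webservice_0.53_all.deb
    aptly publish update stretch-tools
    # Clients then pick it up with a normal apt upgrade of the package.
    sudo apt-get update && sudo apt-get install toollabs-webservice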
11:49 <arturo> introduce placeholder DNS records for the toolforge.org domain. No services are provided under this domain for end users yet; this is just us testing (SSL, proxy stuff, etc.). This may be reverted at any time. [tools]
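Note: whether the placeholder records resolve can be checked from anywhere with dig; a minimal sketch, with record names beyond the zone apex being illustrative:
    dig +short toolforge.org NS
    dig +short toolforge.org A
    dig +short www.toolforge.org CNAME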