2020-07-22
23:32 <bstorm> setting the default NFS version to 4.2 while excepting the two stretch servers T257945 [paws]
23:24 <bstorm> created server group 'tools-k8s-worker' to place any new worker nodes in, so that openstack is unlikely to schedule them together unless necessary T258663 [tools]
23:22 <bstorm> running puppet and NFS 4.2 remount on tools-k8s-worker-[56-60] T257945 [tools]
23:17 <bstorm> running puppet and NFS 4.2 remount on tools-k8s-worker-[41-55] T257945 [tools]
23:14 <bstorm> running puppet and NFS 4.2 remount on tools-k8s-worker-[21-40] T257945 [tools]
23:11 <bstorm> running puppet and NFS remount on tools-k8s-worker-[1-15] T257945 [tools]
23:07 <bstorm> disabling puppet on k8s workers to reduce the effect of changing the NFS mount version all at once T257945 [tools]
22:28 <bstorm> setting tools-k8s-control prefix to mount NFS v4.2 T257945 [tools]
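The 22:28–23:32 entries above roll the Toolforge NFS mounts to v4.2 in batches: puppet is disabled fleet-wide first, then re-enabled and run per worker group together with a remount. A minimal sketch of what one worker sees, assuming a placeholder mount point and NFS export (the real paths and mount options are puppet-managed):

    # disable puppet so the version bump can be applied batch by batch (23:07 entry)
    sudo puppet agent --disable "NFS v4.2 rollout T257945"
    # per batch: re-enable puppet to pick up the new mount options, then remount;
    # changing the NFS protocol version needs a full umount/mount
    sudo puppet agent --enable && sudo puppet agent -t
    sudo umount /data/project
    sudo mount -t nfs4 -o vers=4.2 nfs-server.example:/project/tools /data/project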
22:26 <wm-bot> <lucaswerkmeister> deployed 9eb2aa216d (no edit region without regions) [tools.wd-image-positions]
22:15 <bstorm> set the tools-k8s-control nodes to also use the 800 MB/s egress limit, to prevent issues with the toolforge ingress and api system [tools]
22:07 <cdanis> remove downtime on api.svc.codfw.wmnet T258614 [production]
22:07 <bstorm> set tools-k8s-haproxy-1 (the main load balancer for toolforge) to an egress limit of 800 MB per sec, rather than the same limit as all the other servers [tools]
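The two egress-limit entries at 22:07 and 22:15 describe per-instance traffic shaping. A minimal sketch of one way to cap egress at roughly 800 MB/s with Linux tc (interface name, burst and latency values are assumptions, and the actual limit on these nodes may be managed by other means, e.g. puppet):

    # token bucket filter on the primary interface; note that tc's "mbps" unit
    # means megabytes per second, so 800mbps is ~6.4 Gbit/s
    sudo tc qdisc replace dev eth0 root tbf rate 800mbps burst 10mb latency 50ms
    tc -s qdisc show dev eth0   # verify the qdisc and watch drop counters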
22:06 <wm-bot> <lucaswerkmeister> deployed aa97ea1589 (Esc for editing regions) [tools.wd-image-positions]
21:36 <wm-bot> <lucaswerkmeister> deployed 6b34c5bb7b (editing regions) [tools.wd-image-positions]
20:48 <brennen> restarted php7.2-fpm on deployment-mediawiki-{07,09} for T258628 [releng]
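A minimal sketch of the 20:48 php-fpm restart on the two deployment-prep hosts, assuming it was run with systemctl directly on each host (it may equally have gone through cumin):

    # on deployment-mediawiki-07 and deployment-mediawiki-09
    sudo systemctl restart php7.2-fpm.service
    systemctl status php7.2-fpm.service --no-pager   # confirm it came back up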
20:10 <Urbanecm> tools.stewardbots@tools-sgebastion-07:~$ restart_stewardbot.sh [tools.stewardbots]
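The prompt in the 20:10 entry shows the script being run as the tool account on a Toolforge bastion. A hedged sketch of how that state is reached:

    ssh login.toolforge.org     # or the bastion shown in the prompt
    become stewardbots          # switch to the tools.stewardbots tool account
    restart_stewardbot.sh       # assuming the script is on the tool's PATH, as the prompt suggests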
19:26 <jhuneidi@deploy1001> Synchronized php: group1 wikis to 1.36.0-wmf.1 (duration: 01m 03s) [production]
19:25 <jhuneidi@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.36.0-wmf.1 [production]
19:15 <urbanecm@deploy1001> Finished scap: 9529cf8d2570bbf6dd1e919c966f5954e39dbd67: b66ec9143bd96cbf3a20b70f6aa3f2d6d7963bb5: OOUI backport; 93755a6a92923ae390e3a04b19421c8562568d2a: i18n changes for OAuth, removal of spam messages (duration: 42m 26s) [production]
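The 19:15–19:26 entries are the standard MediaWiki train/backport flow on the deployment host; the Started/Finished scap pair at 18:33/19:15 is a full scap run of the staging tree. A rough, hedged sketch of the commands behind the two train log lines (the exact subcommands and messages are assumptions):

    # on deploy1001, from /srv/mediawiki-staging
    scap sync-wikiversions "group1 wikis to 1.36.0-wmf.1"   # rebuild and sync wikiversions files (19:25)
    scap sync-file php "group1 wikis to 1.36.0-wmf.1"       # sync the php entry point, logged as "Synchronized php:" (19:26)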
19:14 <ejegg> updated payments-wiki from bf91f8adff to 31a3de1130 [production]
19:11 <mutante> mw2335 - mw2339 - scap pull [production]
18:39 <dzahn@cumin1001> conftool action : set/weight=15; selector: name=mw233[5-9].codfw.wmnet [production]
18:38 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw233[6-9].codfw.wmnet [production]
18:36 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw233[6-9].codfw.wmnet [production]
18:33 <urbanecm@deploy1001> Started scap: 9529cf8d2570bbf6dd1e919c966f5954e39dbd67: b66ec9143bd96cbf3a20b70f6aa3f2d6d7963bb5: OOUI backport; 93755a6a92923ae390e3a04b19421c8562568d2a: i18n changes for OAuth, removal of spam messages [production]
18:33 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw2335.codfw.wmnet [production]
18:28 <dzahn@cumin1001> conftool action : set/pooled=inactive; selector: name=mw233[5-9].codfw.wmnet [production]
18:16 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2339.codfw.wmnet [production]
17:58 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2338.codfw.wmnet [production]
17:58 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2337.codfw.wmnet [production]
17:58 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2336.codfw.wmnet [production]
17:26 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw2335.codfw.wmnet [production]
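The block of `conftool action` lines between 17:26 and 18:39 is the usual pattern for bringing freshly provisioned appservers (mw2335–mw2339) into service: depool, pull the current MediaWiki code, repool, then set a weight. The messages themselves are emitted automatically by confctl; a hedged sketch of the equivalent commands on the cumin host (the cumin host-range syntax and scap pull invocation are assumptions):

    sudo confctl select 'name=mw2335.codfw.wmnet' set/pooled=no          # depool while preparing
    sudo cumin 'mw23[35-39].codfw.wmnet' 'scap pull'                     # fetch MediaWiki code (manual equivalent of the 19:11 entry)
    sudo confctl select 'name=mw233[6-9].codfw.wmnet' set/pooled=yes     # repool
    sudo confctl select 'name=mw233[5-9].codfw.wmnet' set/weight=15      # set LVS weight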
15:31 <moritzm> updated stretch installer image to Stretch 9.13 release T258407 [production]
15:27 <jayme@deploy1001> helmfile [EQIAD] Ran 'sync' command on namespace 'eventstreams' for release 'production' . [production]
15:27 <jayme@deploy1001> helmfile [EQIAD] Ran 'sync' command on namespace 'eventstreams' for release 'canary' . [production]
15:05 <joal> manually drop /user/analytics/.Trash/200714000000/wmf/data/wmf/pageview/actor to free some space [analytics]
15:03 <joal> Manually drop /wmf/data/wmf/mediawiki/wikitext/history/snapshot=2020-03 to free some space [analytics]
15:01 <elukey> hdfs dfs -rm -r -skipTrash /var/log/hadoop-yarn/apps/analytics-privatedata/logs [analytics]
14:52 <XioNoX> add accept-data and remove bogus v6 IP from ulsfo sandbox vlan [production]
14:49 <elukey> hdfs dfs -rm -r -skipTrash /var/log/hadoop-yarn/apps/analytics/logs/* [analytics]
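The analytics entries at 14:49–15:05 free HDFS space by deleting data while bypassing the trash. A short sketch of the same pattern for the 15:03 "manually drop" entry, with a size check before and after (path taken from the log; the exact steps are an assumption):

    hdfs dfs -du -s -h /wmf/data/wmf/mediawiki/wikitext/history/snapshot=2020-03   # how much will be freed
    hdfs dfs -rm -r -skipTrash /wmf/data/wmf/mediawiki/wikitext/history/snapshot=2020-03
    hdfs dfs -df -h /                                                              # confirm cluster free space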
14:43 <akosiaris@cumin1001> conftool action : set/pooled=no; selector: dc=codfw,service=mobileapps,name=scb.* [production]
14:43 <jayme@deploy1001> helmfile [CODFW] Ran 'sync' command on namespace 'eventstreams' for release 'canary' . [production]
14:43 <jayme@deploy1001> helmfile [CODFW] Ran 'sync' command on namespace 'eventstreams' for release 'production' . [production]
14:35 <jayme@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'eventstreams' for release 'canary' . [production]
14:35 <jayme@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'eventstreams' for release 'production' . [production]
14:12 <jayme@deploy1001> helmfile [EQIAD] Ran 'sync' command on namespace 'eventgate-main' for release 'production' . [production]
14:12 <jayme@deploy1001> helmfile [EQIAD] Ran 'sync' command on namespace 'eventgate-main' for release 'canary' . [production]
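The helmfile entries (14:12–15:27) show the standard Kubernetes service rollout order: staging, then codfw, then eqiad, with both the canary and production releases synced in each environment. A hedged sketch of the underlying commands on the deployment host, assuming the usual deployment-charts checkout path:

    cd /srv/deployment-charts/helmfile.d/services/eventstreams
    helmfile -e staging sync   # 14:35
    helmfile -e codfw sync     # 14:43
    helmfile -e eqiad sync     # 15:27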
14:06 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
14:04 <filippo@cumin1001> START - Cookbook sre.hosts.downtime [production]
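START / END (PASS) pairs like the 14:04/14:06 entries are logged automatically by spicerack cookbooks run from the cumin host. A hedged sketch of the invocation (the duration flag, reason, and target host are assumptions; they are not recorded in this log):

    sudo cookbook sre.hosts.downtime --hours 2 -r "maintenance" 'somehost1001.eqiad.wmnet'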
14:03 <Amir1> restart codesearch to pick up new config (adding mediawiki/vagrant) [codesearch]