2021-02-09
17:43 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 20:00:00 on maps1005.eqiad.wmnet with reason: Resyncing database, still [production]
17:37 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1300.eqiad.wmnet with reason: REIMAGE [production]
17:35 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw2220.codfw.wmnet with reason: REIMAGE [production]
17:35 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1300.eqiad.wmnet with reason: REIMAGE [production]
17:33 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw2220.codfw.wmnet with reason: REIMAGE [production]
17:13 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host mc-gp1001.eqiad.wmnet [production]
17:07 <jiji@cumin1001> START - Cookbook sre.hosts.reboot-single for host mc-gp1001.eqiad.wmnet [production]
17:01 <gehel@cumin1001> END (PASS) - Cookbook sre.wdqs.reboot (exit_code=0) [production]
16:47 <hashar@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.36.0-wmf.29 [production]
16:21 <moritzm> installing wireshark security updates [production]
16:20 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:14 <godog> swift eqiad-prod: decrease weight for SSDs on ms-be[1019-1026] - T272836 [production]
16:11 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
15:59 <volker-e@deploy1001> Finished deploy [design/style-guide@b9b7ee6]: Deploy design/style-guide: b9b7ee6 “Components”: Fix components overview SVG rendering glitch (#439) (duration: 00m 07s) [production]
15:59 <volker-e@deploy1001> Started deploy [design/style-guide@b9b7ee6]: Deploy design/style-guide: b9b7ee6 “Components”: Fix components overview SVG rendering glitch (#439) [production]
15:32 <papaul> power down logstash2035 for relocation [production]
15:23 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:22 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:22 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:22 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
15:22 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
15:21 <andrew@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:15 <papaul> power down mw2220 for maintenance [production]
15:11 <hashar@deploy1001> Synchronized php: group1 wikis to 1.36.0-wmf.29 (duration: 01m 11s) [production]
15:10 <moritzm> readding ganeti5002 to the eqsin Ganeti cluster following mainboard replacement/reinstall T261130 [production]
15:10 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.36.0-wmf.29 [production]
15:06 <hashar@deploy1001> Synchronized php-1.36.0-wmf.29/extensions/FeaturedFeeds: Revert "Caching fixes" T264391 (duration: 01m 25s) [production]
14:57 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
14:57 <andrew@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
14:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 100%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14270 and previous config saved to /var/cache/conftool/dbconfig/20210209-145206-root.json [production]
14:50 <jmm@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host pybal-test2001.codfw.wmnet [production]
14:48 <jmm@cumin1001> START - Cookbook sre.hosts.reboot-single for host pybal-test2001.codfw.wmnet [production]
14:43 <gehel> rebooting wdqs1009 / 1010 for kernel upgrade [production]
14:37 <hashar@deploy1001> rebuilt and synchronized wikiversions files: Revert "group1 wikis to 1.36.0-wmf.29" [production]
14:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 85%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14269 and previous config saved to /var/cache/conftool/dbconfig/20210209-143703-root.json [production]
14:29 <hashar@deploy1001> Synchronized php: group1 wikis to 1.36.0-wmf.29 (duration: 01m 06s) [production]
14:28 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.36.0-wmf.29 [production]
14:26 <volans> cd /srv/external-monitoring; git fetch/status/pull on wikitech-static - T273951 [production]
14:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 75%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14268 and previous config saved to /var/cache/conftool/dbconfig/20210209-142159-root.json [production]
14:21 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group0 wikis to 1.36.0-wmf.29 [production]
14:14 <gehel> depooling wdqs1005, catching up on lag [production]
14:10 <hashar@deploy1001> Synchronized php-1.36.0-wmf.29/includes/libs/objectcache/wancache/WANObjectCache.php: WANObjectCache: throw on Closure - T273242 (duration: 01m 08s) [production]
14:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 60%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14267 and previous config saved to /var/cache/conftool/dbconfig/20210209-140655-root.json [production]
13:52 <Urbanecm> Deploy security patch (T274152) [production]
13:51 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 50%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14266 and previous config saved to /var/cache/conftool/dbconfig/20210209-135152-root.json [production]
13:36 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 40%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14265 and previous config saved to /var/cache/conftool/dbconfig/20210209-133648-root.json [production]
13:25 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
13:21 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 30%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14264 and previous config saved to /var/cache/conftool/dbconfig/20210209-132145-root.json [production]
13:08 <twentyafterfour> restart phabricator daemons to free 3.5gb of ram (memory leak?) [production]
13:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 25%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14263 and previous config saved to /var/cache/conftool/dbconfig/20210209-130641-root.json [production]