2021-02-09
15:22 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:22 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
15:22 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
15:21 <andrew@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 95 hosts with reason: upgrading openstack [production]
15:15 <papaul> power down mw2220 for maintenance [production]
15:11 <hashar@deploy1001> Synchronized php: group1 wikis to 1.36.0-wmf.29 (duration: 01m 11s) [production]
15:10 <moritzm> readding ganeti5002 to the eqsin Ganeti cluster following mainboard replacement/reinstall T261130 [production]
15:10 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.36.0-wmf.29 [production]
15:06 <hashar@deploy1001> Synchronized php-1.36.0-wmf.29/extensions/FeaturedFeeds: Revert "Caching fixes" T264391 (duration: 01m 25s) [production]
14:57 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
14:57 <andrew@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 10 hosts with reason: upgrading openstack [production]
14:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 100%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14270 and previous config saved to /var/cache/conftool/dbconfig/20210209-145206-root.json [production]
14:50 <jmm@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host pybal-test2001.codfw.wmnet [production]
14:48 <jmm@cumin1001> START - Cookbook sre.hosts.reboot-single for host pybal-test2001.codfw.wmnet [production]
14:43 <gehel> rebooting wdqs1009 / 1010 for kernel upgrade [production]
14:37 <hashar@deploy1001> rebuilt and synchronized wikiversions files: Revert "group1 wikis to 1.36.0-wmf.29" [production]
14:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 85%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14269 and previous config saved to /var/cache/conftool/dbconfig/20210209-143703-root.json [production]
14:29 <hashar@deploy1001> Synchronized php: group1 wikis to 1.36.0-wmf.29 (duration: 01m 06s) [production]
14:28 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group1 wikis to 1.36.0-wmf.29 [production]
14:26 <volans> cd /srv/external-monitoring; git fetch/status/pull on wikitech-static - T273951 [production]
14:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 75%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14268 and previous config saved to /var/cache/conftool/dbconfig/20210209-142159-root.json [production]
14:21 <hashar@deploy1001> rebuilt and synchronized wikiversions files: group0 wikis to 1.36.0-wmf.29 [production]
14:14 <gehel> depooling wdqs1005, catching up on lag [production]
14:10 <hashar@deploy1001> Synchronized php-1.36.0-wmf.29/includes/libs/objectcache/wancache/WANObjectCache.php: WANObjectCache: throw on Closure - T273242 (duration: 01m 08s) [production]
14:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 60%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14267 and previous config saved to /var/cache/conftool/dbconfig/20210209-140655-root.json [production]
13:52 <Urbanecm> Deploy security patch (T274152) [production]
13:51 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 50%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14266 and previous config saved to /var/cache/conftool/dbconfig/20210209-135152-root.json [production]
13:36 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 40%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14265 and previous config saved to /var/cache/conftool/dbconfig/20210209-133648-root.json [production]
13:25 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.change-distro-from-cdh (exit_code=0) for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
13:21 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 30%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14264 and previous config saved to /var/cache/conftool/dbconfig/20210209-132145-root.json [production]
13:08 <twentyafterfour> restart phabricator daemons to free 3.5gb of ram (memory leak?) [production]
13:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 25%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14263 and previous config saved to /var/cache/conftool/dbconfig/20210209-130641-root.json [production]
12:51 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 20%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14262 and previous config saved to /var/cache/conftool/dbconfig/20210209-125138-root.json [production]
12:36 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 15%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14261 and previous config saved to /var/cache/conftool/dbconfig/20210209-123634-root.json [production]
12:21 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 13%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14260 and previous config saved to /var/cache/conftool/dbconfig/20210209-122131-root.json [production]
12:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 10%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14259 and previous config saved to /var/cache/conftool/dbconfig/20210209-120627-root.json [production]
12:05 <elukey@cumin1001> START - Cookbook sre.hadoop.change-distro-from-cdh for Hadoop analytics cluster: Change Hadoop distribution - elukey@cumin1001 [production]
12:02 <elukey@cumin1001> END (PASS) - Cookbook sre.hadoop.stop-cluster (exit_code=0) for Hadoop analytics cluster: Stop the Hadoop cluster before maintenance. - elukey@cumin1001 [production]
11:58 <hnowlan@puppetmaster1001> conftool action : set/weight=10; selector: name=maps2010.codfw.wmnet [production]
11:58 <hnowlan@puppetmaster1001> conftool action : set/weight=10; selector: name=maps2009.codfw.wmnet [production]
11:58 <hnowlan@puppetmaster1001> conftool action : set/weight=10; selector: name=maps2008.codfw.wmnet [production]
11:58 <hnowlan@puppetmaster1001> conftool action : set/weight=10; selector: name=maps2006.codfw.wmnet [production]
11:57 <hnowlan@puppetmaster1001> conftool action : set/weight=10; selector: name=maps2005.codfw.wmnet [production]
11:55 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs1013.eqiad.wmnet [production]
11:52 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: name=maps1010.eqiad.wmnet [production]
11:52 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: name=maps1008.eqiad.wmnet [production]
11:51 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: name=maps1007.eqiad.wmnet [production]
11:51 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: name=maps1006.eqiad.wmnet [production]
11:51 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 8%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14258 and previous config saved to /var/cache/conftool/dbconfig/20210209-115124-root.json [production]
11:51 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs1013.eqiad.wmnet [production]