2020-04-28
ยง
|
10:48 <liw@deploy1001> Pruned MediaWiki: 1.35.0-wmf.27 (duration: 12m 37s) [production]
10:48 <_joe_> running heavy_page test on mw1407,9 [production]
10:46 <kormat@cumin1001> dbctl commit (dc=all): 'Repooling after reimaging to buster T250666', diff saved to https://phabricator.wikimedia.org/P11064 and previous config saved to /var/cache/conftool/dbconfig/20200428-104650-kormat.json [production]
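
The dbctl entries in this log follow conftool's two-step workflow: stage a pooling change for a database instance, then commit it, which records the diff as a Phabricator paste and snapshots the previous config under /var/cache/conftool/dbconfig/. A minimal sketch of the repool above (the percentage and exact flags are illustrative and may differ by dbctl version):

  # Stage a gradual repool of the reimaged instance, then commit
  sudo dbctl instance db2124 pool -p 25
  sudo dbctl config commit -m 'Repooling after reimaging to buster T250666'
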
10:43 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: dc=codfw,cluster=restbase,service=restbase-ssl,name=restbase2014.codfw.wmnet [production]
10:43 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: dc=codfw,cluster=restbase,service=restbase-backend,name=restbase2014.codfw.wmnet [production]
10:43 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: dc=codfw,cluster=restbase,service=restbase,name=restbase2014.codfw.wmnet [production]
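
The three conftool actions above pool restbase2014 back into each of its services at weight 10. As an illustrative sketch, one of them expressed as a confctl command (conftool's select/set CLI; treat the exact invocation as an assumption):

  sudo confctl select 'dc=codfw,cluster=restbase,service=restbase,name=restbase2014.codfw.wmnet' \
      set/pooled=yes:weight=10
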
10:40 |
<XioNoX> |
remove unused policy-statements from routers |
[production] |
10:39 |
<ema> |
cp-text: upgrade purged to 0.9 and restart |
[production] |
10:38 |
<_joe_> |
running load.php test on mw1407,9 |
[production] |
10:34 |
<_joe_> |
running main_page test on mw1407,9 |
[production] |
10:28 |
<liw@deploy1001> |
Pruned MediaWiki: 1.35.0-wmf.30 (duration: 01m 27s) |
[production] |
10:28 |
<addshore> |
repool wdqs1007 (lag caught up) |
[production] |
10:10 |
<_joe_> |
starting benchmarks for light page on mw140{7,9} |
[production] |
10:08 |
<ema> |
upload purged 0.9 to buster-wikimedia |
[production] |
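
The purged 0.9 upload above targets the buster-wikimedia suite of Wikimedia's apt repository, which is managed with reprepro. A minimal sketch, assuming a built .changes file (the filename is hypothetical):

  reprepro include buster-wikimedia purged_0.9_amd64.changes
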
10:05 <liw> 1.35.0-wmf.30 was branched at ffc8e887573d7b288067b263c5b6047b2b2db081 for T249962 [production]
09:57 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:55 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
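
The START/END pairs in this log are spicerack cookbook runs; sre.hosts.downtime schedules an Icinga downtime for a host ahead of disruptive work (here, the db2124 reimage logged below). A sketch of a typical invocation (duration, reason, and exact flags are assumptions):

  sudo cookbook sre.hosts.downtime --hours 2 -r 'Reimage to buster T250666' db2124.codfw.wmnet
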
09:52 <liw> starting branch cut for train [production]
09:35 <addshore> depool wdqs1007 to catch up on lag a bit [production]
09:32 <mutante> running puppet on cp-ats for backend config change [production]
09:22 <elukey@cumin1001> END (PASS) - Cookbook sre.presto.roll-restart-workers (exit_code=0) [production]
09:20 <kormat@cumin1001> dbctl commit (dc=all): 'Depool db2124 T250666', diff saved to https://phabricator.wikimedia.org/P11063 and previous config saved to /var/cache/conftool/dbconfig/20200428-092052-kormat.json [production]
09:12 <elukey@cumin1001> START - Cookbook sre.presto.roll-restart-workers [production]
09:12 <elukey@cumin1001> END (FAIL) - Cookbook sre.presto.roll-restart-workers (exit_code=99) [production]
09:12 <elukey@cumin1001> START - Cookbook sre.presto.roll-restart-workers [production]
08:55 <XioNoX> re-set lost licenses on asw2-a/b-eqiad [production]
08:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1105:3311 and 3312 after reimage', diff saved to https://phabricator.wikimedia.org/P11060 and previous config saved to /var/cache/conftool/dbconfig/20200428-084041-marostegui.json [production]
08:36 <dcausse> deleting wikidatawiki_content_1587076410 from cloudelastic [production]
08:30 <_joe_> restarting php-fpm on mw1407 and mw1409 again, then running traffic on them for 1 hour [production]
08:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1105:3311 and 3312 after reimage', diff saved to https://phabricator.wikimedia.org/P11059 and previous config saved to /var/cache/conftool/dbconfig/20200428-082420-marostegui.json [production]
08:21 <dcausse> restarting blazegraph on wdqs1007 (T242453) [production]
08:20 <jynus@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:17 <jynus@cumin2001> START - Cookbook sre.hosts.downtime [production]
08:13 <kormat> reimaging db2124 to buster T250666 [production]
08:13 <mutante> rsyncing transparency-report-private files from bromine to miscweb1002/2002; git-cloning was removed about a year ago but the site still exists. Need to figure out whether it should be deleted (T188362 T247650) [production]
08:09 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1105:3311 and 3312 after reimage', diff saved to https://phabricator.wikimedia.org/P11058 and previous config saved to /var/cache/conftool/dbconfig/20200428-080920-marostegui.json [production]
08:06 <moritzm> installing qemu security updates [production]
07:52 <_joe_> running benchmarks on mw1407 (LCStoreStaticArray) and mw1409 (LCStoreCDB) for T99740: restart php-fpm, pool for 5 minutes to warm up caches, then depool both servers [production]
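
The benchmark above compares two MediaWiki localisation-cache backends (LCStoreStaticArray vs LCStoreCDB, T99740) on otherwise identical appservers: restart php-fpm, pool briefly so production traffic warms the caches, then depool and replay requests. A sketch of the kind of request load involved, using ApacheBench (URL, Host header, and request counts are illustrative, not the actual test harness):

  ab -n 1000 -c 10 -H 'Host: en.wikipedia.org' http://mw1407.eqiad.wmnet/wiki/Main_Page
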
07:49 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
07:44 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime [production]
07:26 <marostegui> Reimage db1105 [production]
07:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1105:3311 and 3312 for reimage', diff saved to https://phabricator.wikimedia.org/P11057 and previous config saved to /var/cache/conftool/dbconfig/20200428-072416-marostegui.json [production]
06:35 <marostegui> Deploy schema change on s3 master with replication for the wikis at T250071#6051598 - T250071 [production]
06:06 <marostegui> Deploy schema change on s4 codfw, this will generate lag on codfw - T250055 [production]
05:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Repool db1112', diff saved to https://phabricator.wikimedia.org/P11056 and previous config saved to /var/cache/conftool/dbconfig/20200428-055719-marostegui.json [production]
05:52 <marostegui> Reclone labsdb1011 from labsdb1012 - T249188 [production]
05:42 <marostegui> Restart labsdb1011 with innodb_purge_threads set to 10 - T249188 [production]
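
innodb_purge_threads sets how many background threads InnoDB dedicates to purging old row versions; on the MariaDB versions of this era it is not changeable at runtime, hence the restart above. A minimal sketch of the change (config path and service name are assumptions about the host):

  # Hypothetical /etc/mysql/my.cnf fragment, applied via a full restart:
  #   [mysqld]
  #   innodb_purge_threads = 10
  sudo systemctl restart mariadb
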
05:35 <marostegui> Deploy schema change on db1112 [production]
05:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1112 for schema change', diff saved to https://phabricator.wikimedia.org/P11054 and previous config saved to /var/cache/conftool/dbconfig/20200428-053453-marostegui.json [production]
04:59 <vgutierrez> depool and powercycle cp5012 [production]
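
Depool-then-powercycle is the usual pattern for bouncing a cache host without serving errors: take it out of the load balancer first, then power-cycle it out-of-band. A sketch (the depool wrapper and the management hostname are assumptions about the local tooling):

  sudo depool                      # conftool wrapper, run on cp5012 itself
  # then, from a management host, power-cycle via the BMC:
  ipmitool -I lanplus -H cp5012.mgmt.eqsin.wmnet -U root -E chassis power cycle
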