2021-08-16
11:22 <Lucas_WMDE> lucaswerkmeister-wmde@mwmaint2002:~$ mwscript namespaceDupes.php hrwiki --fix --add-prefix=T287024/ | tee T287024.out # T287024 [production]
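Aside: namespaceDupes.php only reports conflicting titles when run without --fix, so a dry run of the same shape is a reasonable first pass before the logged run; a sketch (the output filename below is hypothetical):
    # hypothetical dry run: reports namespace conflicts on hrwiki, changes nothing
    mwscript namespaceDupes.php hrwiki | tee T287024.dry-run.out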
11:12 |
<lucaswerkmeister-wmde@deploy1002> |
Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:710564|Add namespace aliases for hr.wiki (T287024)]] (duration: 00m 59s) |
[production] |
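Aside: "Synchronized ..." lines like the one above are scap's own log output; a minimal sketch of the kind of invocation behind them (the staging path is an assumption; the file and commit message come from the entry):
    # on the deploy host, from the assumed /srv/mediawiki-staging checkout
    cd /srv/mediawiki-staging
    scap sync-file wmf-config/InitialiseSettings.php 'Config: [[gerrit:710564|Add namespace aliases for hr.wiki (T287024)]]'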
11:11 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
11:07 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
10:40 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
10:33 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
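Aside: the mwdebug lines above are emitted by the mwdebug deployment tooling running helmfile; a rough manual sketch under assumptions (the deployment-charts path and the selector usage are assumptions; only the environments, namespace and release name come from the entries):
    # assumed checkout location of deployment-charts on deploy1002
    cd /srv/deployment-charts/helmfile.d/services/mwdebug
    helmfile -e eqiad --selector name=pinkunicorn sync
    helmfile -e codfw --selector name=pinkunicorn sync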
10:32 <ladsgroup@deploy1002> Synchronized wmf-config/Wikibase.php: Config: [[gerrit:713225|Add tags for wikidata edits (T236893)]] (duration: 00m 58s) [production]
09:16 <gehel> depooling wdqs codfw to allow catching up on lag [production]
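Aside: the depool logged above would mirror the confctl command quoted verbatim in the 08:28 entry further down, with the datacenter and pooled value swapped; a sketch:
    confctl --quiet --object-type discovery select 'dnsdisc=wdqs,name=codfw' set/pooled=false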
08:49 <jynus> replacing s2 with s4 on db2097 T287230 [production]
08:28 <gehel> repool wdqs eqiad (`confctl --quiet --object-type discovery select 'dnsdisc=wdqs,name=eqiad' set/pooled=true`) - codfw currently overloaded [production]
07:47 <marostegui> Rename aft_feedback tables on db2115, db2131 - T250715 [production]
06:41 <TimStarling> on votewiki, set voter-privacy option to 1 on all prior elections T288924 [production]
05:54 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3312 (re)pooling @ 100%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17031 and previous config saved to /var/cache/conftool/dbconfig/20210816-055445-root.json [production]
05:54 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3311 (re)pooling @ 100%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17030 and previous config saved to /var/cache/conftool/dbconfig/20210816-055427-root.json [production]
05:39 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3312 (re)pooling @ 75%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17029 and previous config saved to /var/cache/conftool/dbconfig/20210816-053941-root.json [production]
05:39 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3311 (re)pooling @ 75%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17028 and previous config saved to /var/cache/conftool/dbconfig/20210816-053924-root.json [production]
05:24 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3312 (re)pooling @ 50%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17027 and previous config saved to /var/cache/conftool/dbconfig/20210816-052437-root.json [production]
05:24 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3311 (re)pooling @ 50%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17026 and previous config saved to /var/cache/conftool/dbconfig/20210816-052420-root.json [production]
05:09 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3312 (re)pooling @ 25%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17025 and previous config saved to /var/cache/conftool/dbconfig/20210816-050934-root.json [production]
05:09 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3311 (re)pooling @ 25%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17024 and previous config saved to /var/cache/conftool/dbconfig/20210816-050916-root.json [production]
04:54 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3312 (re)pooling @ 10%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17023 and previous config saved to /var/cache/conftool/dbconfig/20210816-045430-root.json [production]
04:54 <marostegui@cumin1001> dbctl commit (dc=all): 'db2088:3311 (re)pooling @ 10%: After upgrade', diff saved to https://phabricator.wikimedia.org/P17022 and previous config saved to /var/cache/conftool/dbconfig/20210816-045413-root.json [production]
04:49 <marostegui> Upgrade db2088 (s1 and s2) to 10.4.21 [production]
04:49 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2088 (s1 and s2) to upgrade', diff saved to https://phabricator.wikimedia.org/P17021 and previous config saved to /var/cache/conftool/dbconfig/20210816-044906-marostegui.json [production]
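Aside: the 04:49-05:54 sequence above is the usual depool, upgrade, then staged-repool pattern; the "(re)pooling @ N%" messages are normally produced by a repooling wrapper on the cumin host, but the underlying dbctl operations look roughly like this sketch (flags per dbctl's documented usage; treat the exact invocations as assumptions):
    # depool both instances of the multi-instance host, then commit
    dbctl instance db2088:3311 depool
    dbctl instance db2088:3312 depool
    dbctl config commit -m 'Depool db2088 (s1 and s2) to upgrade'
    # ... upgrade MariaDB to 10.4.21 and restart ...
    # repool in stages, committing after each step (10%, 25%, 50%, 75%, 100%)
    dbctl instance db2088:3311 pool -p 10
    dbctl instance db2088:3312 pool -p 10
    dbctl config commit -m 'db2088 (re)pooling @ 10%: After upgrade'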
2021-08-13
18:43 <bblack> reprepro: uploaded gdnsd-3.8.0-1~wmf1 to buster-wikimedia - T252132 [production]
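Aside: an upload like the one logged above is normally imported with reprepro's include command on the apt host; a sketch (the .changes filename and running it with the repository's own configuration are assumptions; the distribution and version come from the entry):
    # import the signed upload into the buster-wikimedia distribution
    sudo -i reprepro include buster-wikimedia gdnsd_3.8.0-1~wmf1_amd64.changes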
17:32 <jelto@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on mw[1451-1452,1454-1455].eqiad.wmnet with reason: setup new mediawiki servers in eqiad https://phabricator.wikimedia.org/T279309 [production]
17:32 <jelto@cumin1001> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on mw[1451-1452,1454-1455].eqiad.wmnet with reason: setup new mediawiki servers in eqiad https://phabricator.wikimedia.org/T279309 [production]
17:06 <jelto@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw[1451-1452,1454-1455].eqiad.wmnet with reason: setup new mediawiki servers in eqiad https://phabricator.wikimedia.org/T279309 [production]
17:05 <jelto@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw[1451-1452,1454-1455].eqiad.wmnet with reason: setup new mediawiki servers in eqiad https://phabricator.wikimedia.org/T279309 [production]
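Aside: the START/END pairs above are the automatic log output of a spicerack cookbook run; a sketch of the kind of invocation behind them (the host expression, reason and task come from the entries; the option names are an assumption about the cookbook's interface):
    # on a cumin host, silence alerts for the new appservers for 3 days
    sudo cookbook sre.hosts.downtime --days 3 \
        --reason "setup new mediawiki servers in eqiad" --task-id T279309 \
        'mw[1451-1452,1454-1455].eqiad.wmnet'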
15:39 <mutante> mw1451, mw1452, mw1454 - rebooting after reimage, memcached needs one [production]
15:30 <mutante> mw1453 - racadm serveraction powercycle (down; was working until right before the switch issue) [production]
15:18 <godog> restart pybal on lvs2009, to clear CRITICAL - thanos-swift_443: Servers thanos-fe2002.codfw.wmnet are marked down but pooled [production]
15:14 <godog> restart pybal on lvs2010, to clear CRITICAL - thanos-swift_443: Servers thanos-fe2002.codfw.wmnet are marked down but pooled [production]
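Aside: clearing that alert is a plain service restart on the LVS host; a sketch (the journalctl check is just one way to confirm pybal came back cleanly, not a prescribed procedure):
    sudo systemctl restart pybal
    sudo journalctl -u pybal --since '5 min ago' | tail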
15:02 <mutante> etherpad1002 - started failed ferm [production]
15:00 <mutante> an-worker1117, an-worker1118 - started failed ferm (why are these slowly trickling in?) [production]
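Aside: several entries in this block are the same manual fix for a failed ferm unit; a sketch of that fix on one host, plus a cumin variant for doing it in bulk (the cumin host expression is an assumption):
    # on the affected host
    sudo systemctl start ferm
    systemctl is-active ferm
    # or from a cumin host, for several hosts at once
    sudo cumin 'an-worker11[17-18].eqiad.wmnet' 'systemctl start ferm'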
14:57 <jelto@cumin1001> conftool action : set/pooled=no; selector: name=mw1450.eqiad.wmnet [production]
14:57 <jelto@cumin1001> conftool action : set/pooled=no; selector: name=mw144[7-9].eqiad.wmnet [production]
14:54 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw[1451-1452,1454-1455].eqiad.wmnet with reason: new setup [production]
14:54 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw[1451-1452,1454-1455].eqiad.wmnet with reason: new setup [production]
14:50 <mutante> an-worker1079 - started failed ferm [production]
14:47 <jelto@cumin1001> conftool action : set/weight=25; selector: name=mw1450.eqiad.wmnet [production]
14:46 <jelto@cumin1001> conftool action : set/weight=25; selector: name=mw144[7-9].eqiad.wmnet [production]
14:45 <mutante> an-worker1095 - started ferm (service had failed) [production]
14:44 <mutante> an-worker1082 - started ferm (was failed due to DNS hiccup) [production]
14:44 <jelto@cumin1001> conftool action : set/pooled=inactive; selector: name=mw1450.eqiad.wmnet [production]
14:43 <jelto@cumin1001> conftool action : set/pooled=inactive; selector: name=mw144[7-9].eqiad.wmnet [production]
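Aside: the "conftool action" lines in this block are confctl's automatic log output; a sketch of the corresponding commands for the 14:43-14:57 sequence, reading it bottom-up (the selectors and values come from the log; the invocation shape follows the confctl command quoted in the 2021-08-16 08:28 entry):
    # mark the new appservers inactive, give them a weight, then set pooled=no
    sudo confctl select 'name=mw144[7-9].eqiad.wmnet' set/pooled=inactive
    sudo confctl select 'name=mw144[7-9].eqiad.wmnet' set/weight=25
    sudo confctl select 'name=mw144[7-9].eqiad.wmnet' set/pooled=no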