2021-04-27
07:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 80%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15565 and previous config saved to /var/cache/conftool/dbconfig/20210427-074839-root.json [production]
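Note: the db1124 "(re)pooling @ N%" entries in this log trace a staged repool into s7, stepping the pooled percentage up at regular intervals (5% → 10% → ... → 80%). A minimal sketch of what one such step could look like with dbctl is below; the subcommand and flag names are assumptions, not taken from this log:

    dbctl instance db1124 pool -p 80    # assumed syntax: raise db1124's pooled percentage in its section
    dbctl config commit -m 'db1124 (re)pooling @ 80%: Slowly pool into s7 db1124'    # diff and commit the new config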
07:43 <marostegui@cumin1001> dbctl commit (dc=all): 'db1087 (re)pooling @ 50%: Repool db1087', diff saved to https://phabricator.wikimedia.org/P15564 and previous config saved to /var/cache/conftool/dbconfig/20210427-074318-root.json [production]
07:42 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 75%: Repool db1179', diff saved to https://phabricator.wikimedia.org/P15563 and previous config saved to /var/cache/conftool/dbconfig/20210427-074234-root.json [production]
07:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 75%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15562 and previous config saved to /var/cache/conftool/dbconfig/20210427-073335-root.json [production]
07:28 <marostegui@cumin1001> dbctl commit (dc=all): 'db1087 (re)pooling @ 25%: Repool db1087', diff saved to https://phabricator.wikimedia.org/P15561 and previous config saved to /var/cache/conftool/dbconfig/20210427-072814-root.json [production]
07:27 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 50%: Repool db1179', diff saved to https://phabricator.wikimedia.org/P15560 and previous config saved to /var/cache/conftool/dbconfig/20210427-072731-root.json [production]
07:26 <godog> swift eqiad-prod: less weight for ms-be[1019-1026] / more weight to ms-be106[0-3] - T272836 [production]
07:24 <liw@deploy1002> Finished scap: testwikis wikis to 1.37.0-wmf.3 (duration: 30m 54s) [production]
07:21 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on conf[2004-2006].codfw.wmnet with reason: for zookeeper migration [production]
07:21 <jayme@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on conf[2004-2006].codfw.wmnet with reason: for zookeeper migration [production]
07:19 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on conf[2002-2003].codfw.wmnet with reason: for zookeeper migration [production]
07:19 <jayme@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on conf[2002-2003].codfw.wmnet with reason: for zookeeper migration [production]
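Note: the conf200x downtime windows above were set with the sre.hosts.downtime Spicerack cookbook against a Cumin host query. A hedged sketch of the two invocations follows; the duration and reason flag names are assumptions, only the cookbook name and targets come from this log:

    sudo cookbook sre.hosts.downtime --hours 2 -r "for zookeeper migration" 'conf[2004-2006].codfw.wmnet'   # 2:00:00 window
    sudo cookbook sre.hosts.downtime --days 1 -r "for zookeeper migration" 'conf[2002-2003].codfw.wmnet'    # 1 day, 0:00:00 window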
07:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 60%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15559 and previous config saved to /var/cache/conftool/dbconfig/20210427-071831-root.json [production]
07:12 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 25%: Repool db1179', diff saved to https://phabricator.wikimedia.org/P15558 and previous config saved to /var/cache/conftool/dbconfig/20210427-071227-root.json [production]
07:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 50%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15557 and previous config saved to /var/cache/conftool/dbconfig/20210427-070328-root.json [production]
06:56 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1179 for schema change', diff saved to https://phabricator.wikimedia.org/P15556 and previous config saved to /var/cache/conftool/dbconfig/20210427-065628-marostegui.json [production]
06:55 <elukey> upgrade mariadb to 10.4.18-1 + reboot on db1108 - T279281 [production]
06:54 <liw@deploy1002> Started scap: testwikis wikis to 1.37.0-wmf.3 [production]
06:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 40%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15555 and previous config saved to /var/cache/conftool/dbconfig/20210427-064824-root.json [production]
06:37 <liw> version 1.37.0-wmf.3 was branched at 20ab303fd1d883592b4d2ec2468dfaccad7a9e10 for T278347 [production]
06:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 30%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15554 and previous config saved to /var/cache/conftool/dbconfig/20210427-063320-root.json [production]
06:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 25%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15553 and previous config saved to /var/cache/conftool/dbconfig/20210427-061817-root.json [production]
06:11 <elukey> powercycle elastic2043 - no ssh, no tty remote console available [production]
06:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 20%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15552 and previous config saved to /var/cache/conftool/dbconfig/20210427-060313-root.json [production]
05:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 15%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15551 and previous config saved to /var/cache/conftool/dbconfig/20210427-054809-root.json [production]
05:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 10%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15550 and previous config saved to /var/cache/conftool/dbconfig/20210427-053306-root.json [production]
05:30 <XioNoX> push pfw fw policies - T281137 [production]
05:27 <legoktm> imported hyperkitty_1.3.4-2~bpo10+2 to apt.wm.o (T281213) [production]
05:22 <marostegui@cumin1001> dbctl commit (dc=all): 'db1158 (re)pooling @ 100%: Repool db1158', diff saved to https://phabricator.wikimedia.org/P15549 and previous config saved to /var/cache/conftool/dbconfig/20210427-052236-root.json [production]
05:21 <marostegui> Stop mysql on db1087 to clone db1167 (lag will appear on wikidata on wikireplicas) T258361 [production]
05:20 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1114 temporarily as db1087 will be depooled', diff saved to https://phabricator.wikimedia.org/P15547 and previous config saved to /var/cache/conftool/dbconfig/20210427-052026-marostegui.json [production]
05:18 <marostegui@cumin1001> dbctl commit (dc=all): 'db1124 (re)pooling @ 5%: Slowly pool into s7 db1124', diff saved to https://phabricator.wikimedia.org/P15546 and previous config saved to /var/cache/conftool/dbconfig/20210427-051802-root.json [production]
05:08 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1124 with minimal weight for the first time in s7 T258361', diff saved to https://phabricator.wikimedia.org/P15545 and previous config saved to /var/cache/conftool/dbconfig/20210427-050826-marostegui.json [production]
05:07 <marostegui@cumin1001> dbctl commit (dc=all): 'db1158 (re)pooling @ 75%: Repool db1158', diff saved to https://phabricator.wikimedia.org/P15544 and previous config saved to /var/cache/conftool/dbconfig/20210427-050732-root.json [production]
05:03 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts db1077.eqiad.wmnet [production]
04:53 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission for hosts db1077.eqiad.wmnet [production]
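Note: the db1077 retirement above was run through the sre.hosts.decommission cookbook. A rough sketch of the invocation, assuming the cookbook takes the host query plus a Phabricator task flag; the task id below is a hypothetical placeholder, not one named in this log:

    sudo cookbook sre.hosts.decommission db1077.eqiad.wmnet -t T123456   # T123456 is a placeholder task id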
04:52 <marostegui@cumin1001> dbctl commit (dc=all): 'db1158 (re)pooling @ 50%: Repool db1158', diff saved to https://phabricator.wikimedia.org/P15543 and previous config saved to /var/cache/conftool/dbconfig/20210427-045229-root.json [production]
04:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1124 with minimal weight for the first time in s7 T258361', diff saved to https://phabricator.wikimedia.org/P15541 and previous config saved to /var/cache/conftool/dbconfig/20210427-044609-marostegui.json [production]
04:45 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1124 to dbctl, depooled, T258361', diff saved to https://phabricator.wikimedia.org/P15540 and previous config saved to /var/cache/conftool/dbconfig/20210427-044520-marostegui.json [production]
04:37 <marostegui@cumin1001> dbctl commit (dc=all): 'db1158 (re)pooling @ 25%: Repool db1158', diff saved to https://phabricator.wikimedia.org/P15539 and previous config saved to /var/cache/conftool/dbconfig/20210427-043725-root.json [production]
04:25 <legoktm> upgrading lists-next.wikimedia.org to mailman3-from-bullseye (T280887) [production]
04:19 <marostegui> Set phabricator on read only T279625 [production]
03:37 <ryankemper> [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
03:37 <ryankemper> [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
03:37 <ryankemper> [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
03:36 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@08ad17a]: 0.3.70 (duration: 08m 18s) [production]
03:28 <ryankemper> [WDQS Deploy] Tests passing following deploy of `0.3.70` on canary `wdqs1003`; proceeding to rest of fleet [production]
03:28 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@08ad17a]: 0.3.70 [production]
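Note: the WDQS `0.3.70` entries here record a scap3 deploy run from deploy1002, with checks on canary `wdqs1003` before the rest of the fleet. A minimal sketch of the deploy step, assuming the conventional /srv/deployment checkout path for wdqs/wdqs (the path is not stated in this log):

    cd /srv/deployment/wdqs/wdqs    # assumed scap3 checkout on the deployment host
    scap deploy '0.3.70'            # deploy message matching the log line above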
03:27 <ryankemper> [WDQS Deploy] Gearing up for deploy of wdqs `0.3.70`. Pre-deploy tests passing on canary `wdqs1003` [production]
03:17 <ryankemper> T280382 `wdqs1006` has been re-imaged and had the appropriate wikidata/categories journal files transferred. `df -h` shows disk space is no longer an issue following the switch to raid0: `/dev/md2 2.6T 998G 1.5T 40% /srv` [production]