2021-11-23
ยง
|
11:25 <godog> powercycle ms-be2058 - down and nothing on console [production]
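For context, a power-cycle of a host whose OS and console are unresponsive is typically done out-of-band through its management controller. A rough sketch with ipmitool; the management hostname and credential handling here are illustrative assumptions, not the exact command that was run:

  # illustrative only: power-cycle via the BMC when the host is dead on console
  # (hypothetical mgmt hostname; password read from the IPMI_PASSWORD environment variable)
  ipmitool -I lanplus -H ms-be2058.mgmt.codfw.wmnet -U root -E chassis power cycle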
11:17 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp5012.eqsin.wmnet with OS buster [production]
11:15 <vgutierrez> pool cp5012 (text) using HAProxy as TLS terminator - T290005 [production]
11:08 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
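The recurring mwdebug syncs in this log are routine automated deploys. For reference, a manual sync of a single helmfile release looks roughly like this; the chart path is an assumption, only the environment/release names come from the entries:

  # illustrative sketch: apply the desired state of one release in one environment
  cd /srv/deployment-charts/helmfile.d/services/mwdebug   # path is an assumption
  helmfile -e codfw sync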
11:08 <Amir1> start of mwscript migrateRevisionActorTemp.php --wiki=testwiki --sleep=5 (T275246) [production]
11:05 <jayme> cordoned kubestage1003.eqiad.wmnet kubestage1004.eqiad.wmnet (we have issues with POD IP prefix allocation) - T293729 [production]
11:05 <jayme> uncordoned kubestage1001.eqiad.wmnet kubestage1002.eqiad.wmnet (we have issues with POD IP prefix allocation) - T293729 [production]
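Cordoning/uncordoning here swaps which staging nodes accept new pods while the IP prefix allocation issue is investigated. A minimal sketch of the equivalent kubectl operations (node names taken from the entries above):

  # mark nodes unschedulable so no new pods land on them (running pods are untouched)
  kubectl cordon kubestage1003.eqiad.wmnet kubestage1004.eqiad.wmnet
  # allow scheduling again on the previously cordoned nodes
  kubectl uncordon kubestage1001.eqiad.wmnet kubestage1002.eqiad.wmnet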
11:04 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
11:02 <ladsgroup@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:740807|Set test wikis to write both for actor temp table migration (T275246)]] (duration: 00m 56s) [production]
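A "Synchronized wmf-config/..." line like the one above is what scap prints after syncing a single config file to the app servers. Roughly, assuming the usual sync-file workflow (the exact invocation and message are assumptions; the gerrit change is the real one):

  # illustrative: push one config file with a log message, after the change is merged and pulled
  scap sync-file wmf-config/InitialiseSettings.php \
    'Config: Set test wikis to write both for actor temp table migration (T275246)'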
10:38 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
10:31 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
10:30 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance T296143 [production]
10:30 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on clouddb[1015,1019,1021].eqiad.wmnet,db1155.eqiad.wmnet with reason: Maintenance T296143 [production]
10:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1155.eqiad.wmnet with reason: Maintenance T296143 [production]
10:29 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on db1155.eqiad.wmnet with reason: Maintenance T296143 [production]
10:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on clouddb[1015,1019,1021].eqiad.wmnet with reason: Maintenance T296143 [production]
10:28 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on clouddb[1015,1019,1021].eqiad.wmnet with reason: Maintenance T296143 [production]
10:22 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1121 (T296143)', diff saved to https://phabricator.wikimedia.org/P17800 and previous config saved to /var/cache/conftool/dbconfig/20211123-102234-ladsgroup.json [production]
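Depooling a replica like db1121 ahead of maintenance is done with dbctl. A minimal sketch of the usual two-step flow; instance name and message come from the entry above, the exact subcommands/flags are assumptions:

  # illustrative: take the replica out of rotation, then commit the config change
  sudo dbctl instance db1121 depool
  sudo dbctl config commit -m "Depooling db1121 (T296143)"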
10:22 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db1121.eqiad.wmnet with reason: Maintenance T296143 [production]
10:22 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 4:00:00 on db1121.eqiad.wmnet with reason: Maintenance T296143 [production]
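The 4:00:00 downtimes in this log come from the sre.hosts.downtime cookbook, which silences monitoring alerts for the hosts before maintenance. An illustrative invocation, with flag names assumed from the log output rather than verified:

  # illustrative: downtime one host for 4 hours with a reason, before taking it down for maintenance
  sudo cookbook sre.hosts.downtime --hours 4 --reason "Maintenance T296143" 'db1121.eqiad.wmnet'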
10:19 <urbanecm@deploy1002> Finished scap: c98acaa2ab27e630c0a1b55a64fb81b131c087f9: Backport localisation updates (duration: 11m 06s) [production]
10:19 <elukey@deploy1002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
10:18 <elukey@deploy1002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
10:08 <urbanecm@deploy1002> Started scap: c98acaa2ab27e630c0a1b55a64fb81b131c087f9: Backport localisation updates [production]
10:08 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reimage for host cp5012.eqsin.wmnet with OS buster [production]
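The reimage entries here are driven by the sre.hosts.reimage cookbook, which handles the full reinstall (PXE boot, OS install, puppet run, downtime handling). A rough example of kicking one off; the flags and short hostname form are assumptions inferred from the log lines:

  # illustrative: reinstall a cache host with Debian buster, tagged with the tracking task
  sudo cookbook sre.hosts.reimage --os buster -t T290005 cp5012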
10:01 <vgutierrez> depool cp5012 to be reimaged as cache::text_haproxy - T290005 [production]
09:57 <jayme> cordoned kubestage1001.eqiad.wmnet kubestage1002.eqiad.wmnet - T293729 [production]
09:52 <kharlan@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'linkrecommendation' for release 'staging' . [production]
09:37 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1124.eqiad.wmnet with OS bullseye [production]
09:27 <Amir1> dropping useless GRANTs on s6 eqiad replicas without replication (T296274) [production]
09:16 <Amir1> dropping useless GRANTs on s6 eqiad master without replication (T296274) [production]
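"Without replication" means the GRANT changes are applied directly on each host with binary logging disabled for the session, so nothing propagates to downstream replicas. A hedged sketch of the pattern; the database, user, and privilege shown are hypothetical since the log does not say which grants were dropped:

  # illustrative only: drop a hypothetical grant locally, with binlogging off for this session
  # so the statement is not replicated to downstream hosts
  sudo mysql -e "SET SESSION sql_log_bin = 0; REVOKE ALL PRIVILEGES ON somedb.* FROM 'someuser'@'%';"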
09:09 <marostegui@cumin1001> START - Cookbook sre.hosts.reimage for host db1124.eqiad.wmnet with OS bullseye [production]
09:05 <Amir1> fixing incorrect grants of wikiadmin on localhost in s6 master in codfw with replication [production]
07:52 <topranks> Adjusting BGP on cr1-eqiad and cr2-eqiad to keep MED unchanged in iBGP. [production]
07:08 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1125.eqiad.wmnet with OS bullseye [production]
06:41 <marostegui@cumin1001> START - Cookbook sre.hosts.reimage for host db1125.eqiad.wmnet with OS bullseye [production]
05:28 <ryankemper> T295705 Downtimed `elastic2044` for one hour and doing a full reboot for good measure. Already ran the plugin upgrade: `DEBIAN_FRONTEND=noninteractive sudo apt-get -y -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" install elasticsearch-oss wmf-elasticsearch-search-plugins` [production]
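For readers unfamiliar with those apt/dpkg options: they keep the upgrade fully non-interactive on an unattended host. The same command as above, annotated (comments are explanatory additions only):

  # DEBIAN_FRONTEND=noninteractive   -> suppress interactive debconf prompts
  # --force-confdef / --force-confold -> on conffile conflicts, take the default action,
  #                                      otherwise keep the currently installed config file
  DEBIAN_FRONTEND=noninteractive sudo apt-get -y \
    -o Dpkg::Options::="--force-confdef" -o Dpkg::Options::="--force-confold" \
    install elasticsearch-oss wmf-elasticsearch-search-plugins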
05:26 <ryankemper> T295705 Rolling restart of `codfw` complete. `elastic2044` was manually restarted earlier today, so the cookbook skipped it (because we pass in a datetime cutoff threshold); I'm manually upgrading and restarting that host. [production]
05:10 <ryankemper@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) restart with plugin upgrade (2 nodes at a time) for ElasticSearch cluster search_codfw: codfw plugin upgrade + restart - ryankemper@cumin1001 - T295705 [production]
04:17 <ryankemper> T295705 Properly disabled the sane-itizer; we don't want it running until after we (a) complete rolling restarts and (b) restore the missing `commonswiki_file` index (which is blocked on the restarts) [production]
03:41 <Amir1> ladsgroup@mwmaint1002:~$ cat broken_imgs | xargs -I {} mwscript refreshImageMetadata.php --wiki=commonswiki --mediatype=OFFICE --verbose --mime 'image/*' --force --batch-size 1 --sleep 1 --start={} --end={} (T296001) [production]
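That xargs pipeline runs the maintenance script once per line of broken_imgs, passing the same value to --start and --end so each run targets a single image. An equivalent, more explicit loop (file name, wiki, and options taken from the entry above):

  # illustrative: one refreshImageMetadata run per affected image listed in broken_imgs
  while read -r img; do
    mwscript refreshImageMetadata.php --wiki=commonswiki --mediatype=OFFICE --verbose \
      --mime 'image/*' --force --batch-size 1 --sleep 1 --start="$img" --end="$img"
  done < broken_imgs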
03:37 <Amir1> rebuilding metadata of all djvu files outside of commons (T296001) [production]
03:06 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation restart with plugin upgrade (2 nodes at a time) for ElasticSearch cluster search_codfw: codfw plugin upgrade + restart - ryankemper@cumin1001 - T295705 [production]
02:58 <ryankemper> T295705 `elasticsearch.exceptions.ConnectionTimeout: ConnectionTimeout caused by - ReadTimeoutError(HTTPSConnectionPool(host='search.svc.codfw.wmnet', port=9243): Read timed out. (read timeout=60))` Probably a transient failure; will wait 10 mins and try again [production]
02:57 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) restart with plugin upgrade (2 nodes at a time) for ElasticSearch cluster search_codfw: codfw plugin upgrade + restart - ryankemper@cumin1001 - T295705 [production]
02:55 <ryankemper> T295705 `ryankemper@cumin1001:~$ sudo cookbook sre.elasticsearch.rolling-operation codfw "codfw plugin upgrade + restart" --upgrade --nodes-per-run 2 --start-datetime 2021-11-18T18:55:54 --task-id T295705` on tmux `rolling_restarts_codfw` [production]
02:55 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation restart with plugin upgrade (2 nodes at a time) for ElasticSearch cluster search_codfw: codfw plugin upgrade + restart - ryankemper@cumin1001 - T295705 [production]
02:41 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
02:37 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
02:12 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]