2022-04-12
23:48 <bking@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: Upgrading Elasticsearch to 6.8 in CODFW - bking@cumin1001 - T301958 [production]
23:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134', diff saved to https://phabricator.wikimedia.org/P24538 and previous config saved to /var/cache/conftool/dbconfig/20220412-234753-ladsgroup.json [production]
23:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134 (T298565)', diff saved to https://phabricator.wikimedia.org/P24537 and previous config saved to /var/cache/conftool/dbconfig/20220412-233248-ladsgroup.json [production]
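(The dbctl entries above and below follow the usual core-database maintenance cycle: depool the replica, commit, do the work, then repool and commit again; production repooling is normally done gradually, which is why several repool commits appear per host. A minimal sketch of the commands behind these log lines, run on a cumin host:)

    # Sketch only: take the replica out of rotation, then commit to all datacenters.
    sudo dbctl instance db1134 depool
    sudo dbctl config commit -m 'Depooling db1134 (T298565)'
    # ...maintenance happens here...
    # Put it back and commit; dbctl saves the diff and the previous config,
    # producing the Phabricator paste and /var/cache/conftool paths logged above.
    sudo dbctl instance db1134 pool
    sudo dbctl config commit -m 'Repooling after maintenance db1134 (T298565)'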
23:23 <razzi@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host clouddb1014.eqiad.wmnet with OS bullseye [production]
23:03 <razzi@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on clouddb1014.eqiad.wmnet with reason: host reimage [production]
22:59 <razzi@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on clouddb1014.eqiad.wmnet with reason: host reimage [production]
22:48 <razzi@cumin1001> START - Cookbook sre.hosts.reimage for host clouddb1014.eqiad.wmnet with OS bullseye [production]
22:46 <razzi@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on clouddb1014.eqiad.wmnet with reason: Upgrade to bullseye [production]
22:46 <razzi@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on clouddb1014.eqiad.wmnet with reason: Upgrade to bullseye [production]
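(The clouddb1014 entries record the standard Spicerack cookbook flow for an OS reimage: a day-long downtime, then the reimage cookbook, which opens its own two-hour "host reimage" downtime while it works. A rough sketch of the invocations from a cumin host; the flag names are assumptions, so check each cookbook's --help:)

    # Assumed flags; verify with `sudo cookbook sre.hosts.downtime --help`.
    sudo cookbook sre.hosts.downtime --days 1 --reason "Upgrade to bullseye" clouddb1014.eqiad.wmnet
    # Reimage to Debian bullseye; the cookbook handles the shorter
    # "host reimage" downtime seen above on its own.
    sudo cookbook sre.hosts.reimage --os bullseye clouddb1014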
22:39 <ryankemper> T305646 Re-enabling puppet on `elastic2033`; still need to unban from elasticsearch cluster tomorrow [production]
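(Re-enabling Puppet is a single agent command on the host itself; "unbanning" a node from an Elasticsearch cluster generally means removing a shard-allocation exclusion, which WMF normally does through its own tooling. A hedged sketch of the underlying operations; the exclusion attribute is an assumption:)

    # On elastic2033: let Puppet manage the host again.
    sudo puppet agent --enable
    # On any node of the cluster: clear the allocation exclusion so shards
    # may be assigned to the node again (attribute name is an assumption).
    curl -XPUT 'http://localhost:9200/_cluster/settings' \
      -H 'Content-Type: application/json' \
      -d '{"persistent": {"cluster.routing.allocation.exclude._name": null}}'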
22:34 <bking@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: Upgrading Elasticsearch to 6.8 in CODFW - bking@cumin1001 - T301958 [production]
22:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1134 (T298565)', diff saved to https://phabricator.wikimedia.org/P24536 and previous config saved to /var/cache/conftool/dbconfig/20220412-223206-ladsgroup.json [production]
22:32 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1134.eqiad.wmnet with reason: Maintenance [production]
22:32 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1134.eqiad.wmnet with reason: Maintenance [production]
22:31 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184 (T298565)', diff saved to https://phabricator.wikimedia.org/P24535 and previous config saved to /var/cache/conftool/dbconfig/20220412-223158-ladsgroup.json [production]
22:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184', diff saved to https://phabricator.wikimedia.org/P24534 and previous config saved to /var/cache/conftool/dbconfig/20220412-221652-ladsgroup.json [production]
22:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184', diff saved to https://phabricator.wikimedia.org/P24533 and previous config saved to /var/cache/conftool/dbconfig/20220412-220147-ladsgroup.json [production]
21:59 <bking@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: Upgrading Elasticsearch to 6.8 in CODFW - bking@cumin1001 - T301958 [production]
21:46 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1184 (T298565)', diff saved to https://phabricator.wikimedia.org/P24531 and previous config saved to /var/cache/conftool/dbconfig/20220412-214642-ladsgroup.json [production]
21:37 <milimetric@deploy1002> Finished deploy [analytics/refinery@34be9f3]: Regular analytics weekly train [analytics/refinery@34be9f3] (duration: 21m 24s) [production]
21:16 <milimetric@deploy1002> Started deploy [analytics/refinery@34be9f3]: Regular analytics weekly train [analytics/refinery@34be9f3] [production]
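(The "Started/Finished deploy [analytics/refinery@34be9f3]" lines are the messages scap deploy logs automatically. A minimal sketch, assuming the usual repository checkout path on deploy1002:)

    # Assumed checkout path; scap reads the target hosts from the repo's scap config.
    cd /srv/deployment/analytics/refinery
    scap deploy 'Regular analytics weekly train'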
21:13 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
21:13 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
21:13 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
21:13 <razzi> razzi@clouddb1013:~$ sudo systemctl reset-failed wmf-pt-kill.service - the wmf-pt-kill@<section>.service units are running fine [production]
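(The reset-failed above clears systemd's remembered failed state for the plain wmf-pt-kill.service unit, presumably left over from an earlier failure, while the templated per-section instances keep running. For example:)

    # Clear the stale "failed" state of the non-templated unit.
    sudo systemctl reset-failed wmf-pt-kill.service
    # Confirm the templated per-section instances are still healthy.
    systemctl list-units 'wmf-pt-kill@*.service'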
21:13 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
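(The mwdebug entries are emitted by the helmfile-based deployment flow for Kubernetes services, one apply per datacenter. A minimal sketch, assuming the deployment-charts checkout path on the deploy host:)

    # Assumed path on deploy1002; one apply per environment/datacenter.
    cd /srv/deployment-charts/helmfile.d/services/mwdebug
    helmfile -e eqiad apply
    helmfile -e codfw apply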
20:54 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1184 (T298565)', diff saved to https://phabricator.wikimedia.org/P24530 and previous config saved to /var/cache/conftool/dbconfig/20220412-205414-ladsgroup.json [production]
20:54 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1184.eqiad.wmnet with reason: Maintenance [production]
20:54 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1184.eqiad.wmnet with reason: Maintenance [production]
20:54 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1164 (T298565)', diff saved to https://phabricator.wikimedia.org/P24529 and previous config saved to /var/cache/conftool/dbconfig/20220412-205406-ladsgroup.json [production]
20:41 <sbassett> re-deploy security patch for T226212 to wmf.6 - part 2 [production]
20:39 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1164', diff saved to https://phabricator.wikimedia.org/P24528 and previous config saved to /var/cache/conftool/dbconfig/20220412-203900-ladsgroup.json [production]
20:38 <sbassett> re-deploy security patch for T226212 to wmf.6 - part 1 [production]
20:37 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
20:37 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
20:37 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
20:37 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
20:32 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
20:32 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
20:32 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
20:32 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
20:32 <cjming> end of UTC late backport & config window [production]
20:27 <cjming@deploy1002> Synchronized wmf-config: Config: [[gerrit:779545|Stop setting $wgMultiContentRevisionSchemaMigrationStage (T231674)]] (duration: 01m 33s) [production]
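(The closing "Synchronized wmf-config:" entry is the message scap prints after syncing a MediaWiki configuration change to production during the backport window. A hedged sketch of that step on the deploy host; the file name below is illustrative only, the actual change is gerrit:779545:)

    # Illustrative file name; the real change is gerrit:779545 (T231674).
    cd /srv/mediawiki-staging
    scap sync-file wmf-config/InitialiseSettings.php 'Config: [[gerrit:779545|Stop setting $wgMultiContentRevisionSchemaMigrationStage (T231674)]]'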