2022-03-21
ยง
|
18:54 <ebernhardson> T303548 start commonswiki reindexing on eqiad codfw and cloudelastic cirrus clusters [production]
18:50 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3312 (T298557)', diff saved to https://phabricator.wikimedia.org/P22906 and previous config saved to /var/cache/conftool/dbconfig/20220321-185042-marostegui.json [production]
18:35 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3312', diff saved to https://phabricator.wikimedia.org/P22905 and previous config saved to /var/cache/conftool/dbconfig/20220321-183537-marostegui.json [production]
18:22 <brennen@deploy1002> Started scap: testwikis wikis to 1.39.0-wmf.2 refs T300203 [production]
18:20 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3312', diff saved to https://phabricator.wikimedia.org/P22904 and previous config saved to /var/cache/conftool/dbconfig/20220321-182032-marostegui.json [production]
18:19 <otto@deploy1002> Finished deploy [analytics/refinery@2175d63]: gobblin prometheus metrics for all jobs - T294420 (duration: 04m 41s) [production]
18:19 <brennen> trainsperiment (T300203): 1.39.0-wmf.1 on all wikis; starting prep of wmf.2, will abort if needed [production]
18:15 <otto@deploy1002> Started deploy [analytics/refinery@2175d63]: gobblin prometheus metrics for all jobs - T294420 [production]
18:05 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3312 (T298557)', diff saved to https://phabricator.wikimedia.org/P22903 and previous config saved to /var/cache/conftool/dbconfig/20220321-180526-marostegui.json [production]
18:04 <brennen@deploy1002> rebuilt and synchronized wikiversions files: all wikis to 1.39.0-wmf.1 refs T300203 [production]
18:03 <otto@deploy1002> Finished deploy [analytics/refinery@2175d63] (hadoop-test): gobblin prometheus metrics for all jobs - T294420 (duration: 07m 19s) [production]
17:59 <razzi> `sudo maintain-views --all-databases --replace-all --table flaggedrevs` on clouddb1021 for T302233 [production]
17:59 <razzi> `sudo maintain-views --all-databases --replace-all --table flaggedrevs` on clouddb1020 for T302233 [production]
17:57 <razzi> `sudo maintain-views --all-databases --replace-all --table flaggedrevs` on clouddb1016 for T302233 [production]
17:55 <otto@deploy1002> Started deploy [analytics/refinery@2175d63] (hadoop-test): gobblin prometheus metrics for all jobs - T294420 [production]
17:53 <pt1979@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cloudvirt1025.eqiad.wmnet with OS bullseye [production]
17:51 <razzi> `sudo maintain-views --all-databases --replace-all --table flaggedrevs` on clouddb1017 for T302233 [production]
17:49 <razzi> `sudo maintain-views --all-databases --replace-all --table flaggedrevs` on clouddb1013 for T302233 [production]
17:49 <razzi> `sudo maintain-views --all-databases --replace-all --table flaggedrevs` on clouddb1018 for T302233 [production]
17:46 <razzi> `sudo maintain-views --all-databases --replace-all --table flaggedrevs` on clouddb1014 [production]
17:41 <ryankemper> [WCQS Deploy] Test query passed on commons-query.wikimedia.org; WCQS deploy complete [production]
17:40 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@2b67de7] (wcqs): Deploy 0.3.107 to WCQS (duration: 02m 12s) [production]
17:38 <ryankemper> [WCQS Deploy] Tests look good following deploy of `0.3.107` to canary `wcqs1002.eqiad.wmnet`, proceeding to rest of fleet [production]
17:37 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@2b67de7] (wcqs): Deploy 0.3.107 to WCQS [production]
17:35 <razzi> `sudo maintain-views --all-databases --replace-all --table flaggedrevs` on clouddb1018 after same command without `--table` argument timed out waiting for `zhwiki_p.page` [production]
17:32 <ryankemper> [Maps] Running puppet agent on rest of `maps*`: `ryankemper@cumin1001:~$ sudo -E cumin -b 4 'maps*' 'run-puppet-agent'` [production]
17:31 <ryankemper> [Maps] Ran puppet agent on maps master `maps1009` to verify puppet patch works; looks like osm import was disabled as intended `Notice: /Stage[main]/Osm::Imposm3/Systemd::Service[imposm]/Service[imposm]/ensure: ensure changed 'running' to 'stopped'` [production]
17:26 <ryankemper> [WDQS Deploy] Restarting `wdqs-categories` across lvs-managed hosts, one node at a time: `sudo -E cumin -b 1 'A:wdqs-all and not A:wdqs-test' 'depool && sleep 45 && systemctl restart wdqs-categories && sleep 45 && pool'` [production]
17:25 <ryankemper> [WDQS Deploy] Restarted `wdqs-categories` across all test hosts simultaneously: `sudo -E cumin 'A:wdqs-test' 'systemctl restart wdqs-categories'` [production]
17:25 <ryankemper> [WDQS Deploy] Restarted `wdqs-updater` across all hosts, 4 hosts at a time: `sudo -E cumin -b 4 'A:wdqs-all' 'systemctl restart wdqs-updater'` [production]
17:22 <ryankemper@deploy1002> Finished deploy [wdqs/wdqs@2b67de7]: 0.3.107 (duration: 08m 26s) [production]
17:15 <ryankemper> [WDQS Deploy] Tests passing following deploy of `0.3.107` on canary `wdqs1003`; proceeding to rest of fleet [production]
17:14 <ryankemper@deploy1002> Started deploy [wdqs/wdqs@2b67de7]: 0.3.107 [production]
17:13 <ryankemper> [WDQS Deploy] Gearing up for deploy of wdqs `0.3.107`. Pre-deploy tests passing on canary `wdqs1003` [production]
17:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1146:3312 (T298557)', diff saved to https://phabricator.wikimedia.org/P22902 and previous config saved to /var/cache/conftool/dbconfig/20220321-170731-marostegui.json [production]
17:07 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1146.eqiad.wmnet with reason: Maintenance [production]
17:07 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1146.eqiad.wmnet with reason: Maintenance [production]
16:58 <brennen> trainsperiment (T300203): blockers currently cleared, will hold wmf.1 -> group2 until 18:00 UTC, per deployment calendar [production]
16:55 <taavi@deploy1002> Synchronized php-1.39.0-wmf.1/extensions/WikimediaEvents/includes/PageSplitter/PageSplitterHooks.php: Backport: [[gerrit:772364|PageSplitter: check for OutputPage::getTitle() returning null (T304331)]] (duration: 00m 50s) [production]
16:53 <taavi@deploy1002> Synchronized php-1.38.0-wmf.26/extensions/WikimediaEvents/includes/PageSplitter/PageSplitterHooks.php: Backport: [[gerrit:772363|PageSplitter: check for OutputPage::getTitle() returning null (T304331)]] (duration: 00m 51s) [production]
16:51 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on 8 hosts with reason: Maintenance [production]
16:51 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on 8 hosts with reason: Maintenance [production]
16:51 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2104.codfw.wmnet with reason: Maintenance [production]
16:51 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db2104.codfw.wmnet with reason: Maintenance [production]
16:46 <razzi@cumin1001> END (FAIL) - Cookbook sre.wikireplicas.update-views (exit_code=99) [production]
16:44 <lucaswerkmeister-wmde@deploy1002> Synchronized php-1.39.0-wmf.1/extensions/Wikibase/repo/: Backport: [[gerrit:772361|Add display to wbsearchentities response even if empty (T104344)]] (duration: 00m 53s) [production]
16:15 <pt1979@cumin1001> START - Cookbook sre.hosts.reimage for host cloudvirt1025.eqiad.wmnet with OS bullseye [production]
16:14 <rzl@deploy1002> helmfile [codfw] DONE helmfile.d/services/zotero: apply [production]
16:13 <rzl@deploy1002> helmfile [codfw] START helmfile.d/services/zotero: apply [production]
16:13 <rzl@deploy1002> helmfile [codfw] DONE helmfile.d/services/toolhub: apply [production]