2021-03-03
06:41 |
<marostegui> |
Testing log |
[production] |
06:27 |
<ryankemper> |
T275345 T274555 `sudo confctl select 'name=elastic2054.codfw.wmnet' set/pooled=yes` on `ryankemper@puppetmaster1001` |
[production] |
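A quick way to confirm a repool took effect is to read the object back; a minimal sketch (a hypothetical follow-up, not recorded in this log), assuming confctl's standard `get` action:

```
# Verify the pooled state after repooling
sudo confctl select 'name=elastic2054.codfw.wmnet' get
```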
06:26 |
<ryankemper> |
T275345 T274555 `sudo confctl select 'name=elastic2045.codfw.wmnet' set/pooled=yes` on `ryankemper@puppetmaster1001` |
[production] |
06:21 |
<ryankemper> |
T275345 T274555 Re-pooling `elastic2045` and `elastic2054` (commands follow) |
[production] |
06:20 |
<ryankemper> |
T275345 T274555 `curl -H 'Content-Type: application/json' -XPUT http://localhost:9400/_cluster/settings -d '{"transient":{"cluster.routing.allocation.exclude":{"_name": null,"_ip": null}}}'` => `{"acknowledged":true,"persistent":{},"transient":{}}` |
[production] |
06:18 |
<ryankemper> |
T275345 T274555 `curl -H 'Content-Type: application/json' -XPUT http://localhost:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.exclude":{"_name": null,"_ip": null}}}'` => `{"acknowledged":true,"persistent":{},"transient":{}}` |
[production] |
06:17 |
<ryankemper> |
T275345 T274555 Unbanning `elastic2045` and `elastic2054` from our cluster now that both hosts have been re-imaged and are running without errors (commands follow) |
[production] |
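The ban being reverted here isn't captured in this log; a minimal sketch of what it plausibly looked like, assuming the standard Elasticsearch allocation-filtering API (node name patterns are illustrative):

```
# Hypothetical inverse of the unban above: exclude nodes by name so
# shards drain off them before maintenance
curl -H 'Content-Type: application/json' -XPUT http://localhost:9200/_cluster/settings \
  -d '{"transient":{"cluster.routing.allocation.exclude._name":"elastic2045*,elastic2054*"}}'
```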
06:15 |
<ryankemper> |
T274555 Removed downtime for `elastic2054` |
[production] |
05:32 |
<ryankemper> |
T274555 `sudo -i wmf-auto-reimage-host --conftool -p T274555 elastic2054.codfw.wmnet` on `ryankemper@cumin2001` tmux session `elastic_reimage_elastic2054` |
[production] |
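Running the reimage inside tmux keeps it alive if the SSH session drops; a minimal sketch of that pattern, using the session name from the entry above:

```
# On cumin2001: start a detachable session, then launch the reimage
tmux new-session -s elastic_reimage_elastic2054
sudo -i wmf-auto-reimage-host --conftool -p T274555 elastic2054.codfw.wmnet
# Detach with Ctrl-b d; reattach later with:
tmux attach -t elastic_reimage_elastic2054
```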
05:31 |
<ryankemper> |
T274555 `sudo -i wmf-auto-reimage-host --conftool -p T274555 elastic2054.codfw.wmnet` |
[production] |
05:27 |
<ryankemper> |
Downtime `wdqs1012` until `2021-03-03 19:25:40` (~14 hours from now). Its `wdqs-updater` is failing; ultimately its Blazegraph journal is probably in a bad state, meaning we'd have to copy one over from a healthy node, but I'm not kicking that off right now so that we can investigate a bit first |
[production] |
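The downtime itself was likely set with the `sre.hosts.downtime` cookbook that appears later in this log; a hedged sketch, with flags and reason text assumed rather than recorded:

```
# Hypothetical invocation; exact flags may differ
sudo cookbook sre.hosts.downtime --hours 14 \
  -r 'wdqs-updater failing, investigating before copying a journal' \
  'wdqs1012.eqiad.wmnet'
```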
05:16 |
<ryankemper> |
T275345 `ryankemper@elastic2045:~$ sudo apt-get upgrade wmf-elasticsearch-search-plugins` |
[production] |
03:50 |
<ryankemper> |
Depooled `wdqs1012` until I've got its updater back online |
[production] |
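A minimal sketch of the depool, assuming the pool/depool wrapper scripts that WMF production hosts provide around conftool:

```
# On the host itself: remove it from load-balancer rotation
ryankemper@wdqs1012:~$ sudo depool
# Later, once the updater is healthy again:
ryankemper@wdqs1012:~$ sudo pool
```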
03:24 |
<ryankemper> |
`ryankemper@wdqs1012:~$ sudo systemctl restart wdqs-blazegraph` ~2 mins ago |
[production] |
02:45 |
<ejegg> |
updated fundraising CiviCRM from e1dacbe348 to b13e70d968 |
[production] |
02:09 |
<ejegg> |
updated payments-wiki from 365bf54393 to 65dbf0ed9d |
[production] |
00:42 |
<Urbanecm> |
Finished deployment in Evening B&C window; logmsgbot is currently down, and a simple restart did not bring it back up |
[production] |
00:41 |
<Urbanecm> |
00:40:16 Synchronized wmf-config/config/idwiki.yaml: 80edca8a385870a0e60a98198c99c9839fc01d80: Enable Growth features in idwiki in stealth mode (T259024; 3/3) (duration: 01m 09s) |
[production] |
00:38 |
<Urbanecm> |
00:38:12 Synchronized dblists/growthexperiments.dblist: 80edca8a385870a0e60a98198c99c9839fc01d80: Enable Growth features in idwiki in stealth mode (T259024; 2/3) (duration: 01m 10s) |
[production] |
00:31 |
<Urbanecm> |
00:31:26 Synchronized wmf-config/InitialiseSettings.php: 80edca8a385870a0e60a98198c99c9839fc01d80: Enable Growth features in idwiki in stealth mode (T259024; 1/3) (duration: 01m 11s) |
[production] |
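Each "Synchronized" line above is the output of a `scap sync-file` run on the deployment host; a minimal sketch of the 1/3 step, with the commit message reconstructed from the log:

```
# On deploy1002: sync one config file to all app servers
scap sync-file wmf-config/InitialiseSettings.php \
    'Enable Growth features in idwiki in stealth mode (T259024; 1/3)'
```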
00:21 |
<dwisehaupt> |
replication restarted on frdb2001 after utf8mb4 conversion completed. |
[production] |
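A minimal sketch of restarting replication after the conversion, assuming standard MariaDB replication commands (the actual procedure on frdb2001 isn't recorded here):

```
# Resume replication and confirm the replica is catching up
sudo mysql -e 'START SLAVE'
sudo mysql -e 'SHOW SLAVE STATUS\G' | grep -i 'Seconds_Behind'
```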
00:21 |
<mutante> |
alert1001 systemctl restart tcpircbot-logmsgbot |
[production] |
00:08 |
<urbanecm@deploy1002> |
sync-file aborted: 80edca8a385870a0e60a98198c99c9839fc01d80: Enable Growth features in idwiki in stealth mode (T259024; 1/3) (duration: 06m 45s) |
[production] |
2021-03-02
23:52 |
<mutante> |
mwmaint2002 - find /home -nouser -delete |
[production] |
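`find -nouser` matches files whose owner no longer exists in the user database; a cautious variant previews the matches before deleting:

```
# Dry run first: list orphaned files, then delete them
sudo find /home -nouser
sudo find /home -nouser -delete
```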
23:42 |
<shdubsh> |
restart kibana to finalize phatality 7.10 deployment |
[production] |
23:38 |
<twentyafterfour@deploy1002> |
Finished deploy [releng/phatality@4d0f053]: sudoer rules fixed, trying again: deploy phatality (duration: 00m 06s) |
[production] |
23:38 |
<twentyafterfour@deploy1002> |
Started deploy [releng/phatality@4d0f053]: sudoer rules fixed, trying again: deploy phatality |
[production] |
23:27 |
<twentyafterfour@deploy1002> |
Finished deploy [releng/phatality@4d0f053]: trying again: deploy phatality 7.10 (duration: 00m 37s) |
[production] |
23:27 |
<twentyafterfour@deploy1002> |
Started deploy [releng/phatality@4d0f053]: trying again: deploy phatality 7.10 |
[production] |
23:22 |
<twentyafterfour@deploy1002> |
Finished deploy [releng/phatality@4d0f053]: deploy phatality 7.10 (duration: 00m 05s) |
[production] |
23:22 |
<twentyafterfour@deploy1002> |
Started deploy [releng/phatality@4d0f053]: deploy phatality 7.10 |
[production] |
23:20 |
<twentyafterfour@deploy1002> |
Finished deploy [releng/phatality@4d0f053]: deploy phatality 7.10 (duration: 01m 01s) |
[production] |
23:19 |
<twentyafterfour@deploy1002> |
Started deploy [releng/phatality@4d0f053]: deploy phatality 7.10 |
[production] |
23:11 |
<mutante> |
mwmaint2002 - rsyncing home dirs from mwmaint1002 (T275905) |
[production] |
23:09 |
<ebernhardson> |
restart wedged prometheus-wmf-elasticsearch-exporter-9200 on elastic2042 |
[production] |
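A minimal sketch of that restart, using the unit name from the entry above:

```
# On elastic2042: bounce the wedged exporter and check it came back
sudo systemctl restart prometheus-wmf-elasticsearch-exporter-9200
sudo systemctl status prometheus-wmf-elasticsearch-exporter-9200
```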
23:03 |
<mforns@deploy1002> |
Finished deploy [analytics/refinery@3bd0858] (hadoop-test): Regular analytics weekly train TEST- forgot version bump [analytics/refinery@3bd0858d0c3b524e6d170099d1e2f3d12fad495d] (duration: 04m 56s) |
[production] |
22:58 |
<mforns@deploy1002> |
Started deploy [analytics/refinery@3bd0858] (hadoop-test): Regular analytics weekly train TEST- forgot version bump [analytics/refinery@3bd0858d0c3b524e6d170099d1e2f3d12fad495d] |
[production] |
22:53 |
<mforns@deploy1002> |
Finished deploy [analytics/refinery@3bd0858] (thin): Regular analytics weekly train THIN- forgot version bump [analytics/refinery@3bd0858d0c3b524e6d170099d1e2f3d12fad495d] (duration: 00m 06s) |
[production] |
22:53 |
<mforns@deploy1002> |
Started deploy [analytics/refinery@3bd0858] (thin): Regular analytics weekly train THIN- forgot version bump [analytics/refinery@3bd0858d0c3b524e6d170099d1e2f3d12fad495d] |
[production] |
22:53 |
<mforns@deploy1002> |
Finished deploy [analytics/refinery@3bd0858]: Regular analytics weekly train- forgot version bump [analytics/refinery@3bd0858d0c3b524e6d170099d1e2f3d12fad495d] (duration: 18m 41s) |
[production] |
22:34 |
<mforns@deploy1002> |
Started deploy [analytics/refinery@3bd0858]: Regular analytics weekly train- forgot version bump [analytics/refinery@3bd0858d0c3b524e6d170099d1e2f3d12fad495d] |
[production] |
22:23 |
<mforns@deploy1002> |
Finished deploy [analytics/refinery@af99602] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@af99602101018664670a76d28cd755caf07dcde7] (duration: 07m 30s) |
[production] |
22:16 |
<mforns@deploy1002> |
Started deploy [analytics/refinery@af99602] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@af99602101018664670a76d28cd755caf07dcde7] |
[production] |
22:14 |
<mforns@deploy1002> |
Finished deploy [analytics/refinery@af99602] (thin): Regular analytics weekly train THIN [analytics/refinery@af99602101018664670a76d28cd755caf07dcde7] (duration: 00m 07s) |
[production] |
22:14 |
<mforns@deploy1002> |
Started deploy [analytics/refinery@af99602] (thin): Regular analytics weekly train THIN [analytics/refinery@af99602101018664670a76d28cd755caf07dcde7] |
[production] |
22:12 |
<mforns@deploy1002> |
Finished deploy [analytics/refinery@af99602]: Regular analytics weekly train [analytics/refinery@af99602101018664670a76d28cd755caf07dcde7] (duration: 13m 09s) |
[production] |
21:59 |
<mforns@deploy1002> |
Started deploy [analytics/refinery@af99602]: Regular analytics weekly train [analytics/refinery@af99602101018664670a76d28cd755caf07dcde7] |
[production] |
21:58 |
<mforns@deploy1002> |
deploy aborted: Regular analytics weekly train [analytics/refinery@COMMIT_HASH] (duration: 00m 01s) |
[production] |
21:57 |
<mforns@deploy1002> |
Started deploy [analytics/refinery@af99602]: Regular analytics weekly train [analytics/refinery@COMMIT_HASH] |
[production] |
21:51 |
<dzahn@cumin1001> |
END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on mwmaint2001.codfw.wmnet with reason: decom |
[production] |