2021-06-03
22:39 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1008.eqiad.wmnet --dest wdqs1005.eqiad.wmnet --reason "transferring fresh wikidata journal following reimage" --blazegraph_instance blazegraph` on `ryankemper@cumin1001` tmux session `wdqs_reimage` [production]
22:39 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
22:36 <ryankemper> T280382 Cancelled transfer to `wdqs1005`; the source host `wdqs1013` has a `wikidata.jnl` that is 80% too big; will transfer from a different node -> `wdqs1005` and then fix the journal on `wdqs1013` afterwards [production]
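(A quick size sanity-check before picking a transfer source, as a sketch — this assumes the Blazegraph journal lives at `/srv/wdqs/wikidata.jnl` on WDQS hosts: `for h in wdqs1008.eqiad.wmnet wdqs1013.eqiad.wmnet; do ssh "$h" ls -lh /srv/wdqs/wikidata.jnl; done`)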
22:36 <ryankemper@cumin1001> END (ERROR) - Cookbook sre.wdqs.data-transfer (exit_code=97) [production]
22:35 <ryankemper> T280382 `wdqs2005.codfw.wmnet` has been re-imaged and had the appropriate wikidata/categories journal files transferred. `df -h` shows disk space is no longer an issue following the switch to `raid0`: `/dev/md2 2.6T 998G 1.5T 40% /srv` [production]
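(To verify the array layout after such a reimage, the standard mdadm checks apply; `md2` here is taken from the `df` output above: `cat /proc/mdstat && sudo mdadm --detail /dev/md2`)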
22:28 <robh@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
22:15 <robh@cumin1001> START - Cookbook sre.dns.netbox [production]
21:55 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
20:54 <shdubsh> restart kafka on kafka-logging to take new retention config [production]
20:47 <sbassett> Deployed security patch for T282932 [production]
20:37 <ebernhardson> restart mjolnir-kafka-bulk-daemon on search-loader[12]001 [production]
20:35 <ebernhardson@deploy1002> Finished deploy [search/mjolnir/deploy@1c40c83]: bulk daemon: accept events for search_updates swift container (duration: 01m 00s) [production]
20:34 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1013.eqiad.wmnet --dest wdqs1005.eqiad.wmnet --reason "transferring fresh wikidata journal following reimage" --blazegraph_instance blazegraph` on `ryankemper@cumin1001` tmux session `wdqs_reimage` [production]
20:34 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
20:34 <ebernhardson@deploy1002> Started deploy [search/mjolnir/deploy@1c40c83]: bulk daemon: accept events for search_updates swift container [production]
20:34 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs2001.codfw.wmnet --dest wdqs2005.codfw.wmnet --reason "transferring fresh wikidata journal following reimage" --blazegraph_instance blazegraph` on `ryankemper@cumin2002` tmux session `wdqs_reimage` [production]
20:34 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer [production]
19:58 <mutante> [mwmaint1002:~] $ /usr/local/bin/systemd-timer-mail-wrapper -T root@mwmaint1002.eqiad.wmnet --only-on-error /usr/local/bin/cross-validate-accounts [production]
19:56 <mutante> [mwmaint1002:~] $ sudo systemctl start daily_account_consistency_check.service [production]
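(A sketch of confirming the manual run completed cleanly, using standard systemd tooling: `sudo systemctl status daily_account_consistency_check.service; sudo journalctl -u daily_account_consistency_check.service -n 50`)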
19:41 <dzahn@cumin1001> END (FAIL) - Cookbook sre.ganeti.makevm (exit_code=99) for new host doh5002.wikimedia.org [production]
19:41 <dzahn@cumin1001> START - Cookbook sre.ganeti.makevm for new host doh5002.wikimedia.org [production]
19:39 <ebernhardson@deploy1002> Finished deploy [wikimedia/discovery/analytics@339d402]: ship pip and wheel packages for virtualenvs (duration: 04m 27s) [production]
19:37 <dzahn@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host doh5001.wikimedia.org [production]
19:34 <ebernhardson@deploy1002> Started deploy [wikimedia/discovery/analytics@339d402]: ship pip and wheel packages for virtualenvs [production]
19:33 <mutante> [deneb:~] $ sudo systemctl start docker-reporter-releng-images - T251918 - icinga-wm> RECOVERY - Check systemd state on deneb is OK [production]
19:33 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
19:32 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
19:32 <mutante> [deneb:~] $ sudo systemctl start docker-reporter-releng-images [production]
19:28 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs2001.codfw.wmnet --dest wdqs2005.codfw.wmnet --reason "transferring fresh categories journal following reimage" --blazegraph_instance categories` on `ryankemper@cumin2002` tmux session `wdqs_reimage` [production]
19:27 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer [production]
19:27 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1013.eqiad.wmnet --dest wdqs1005.eqiad.wmnet --reason "transferring fresh categories journal following reimage" --blazegraph_instance categories` on `ryankemper@cumin1001` tmux session `wdqs_reimage` [production]
19:27 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
19:23 <dzahn@cumin1001> START - Cookbook sre.ganeti.makevm for new host doh5001.wikimedia.org [production]
19:14 <mutante> install1003 - restarting nginx after we switched from nginx-full to nginx-light package, same on other install servers T164456 [production]
19:05 <ryankemper@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs2005.codfw.wmnet with reason: REIMAGE [production]
19:03 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs1005.eqiad.wmnet with reason: REIMAGE [production]
19:03 <ryankemper@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs2005.codfw.wmnet with reason: REIMAGE [production]
19:01 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs1005.eqiad.wmnet with reason: REIMAGE [production]
18:52 <ebernhardson@deploy1002> Finished deploy [wikimedia/discovery/analytics@f40d41a]: resolve npe in datawriter (duration: 00m 31s) [production]
18:51 <ebernhardson@deploy1002> Started deploy [wikimedia/discovery/analytics@f40d41a]: resolve npe in datawriter [production]
18:46 <ryankemper> T280382 `sudo -i wmf-auto-reimage-host -p T280382 wdqs2005.codfw.wmnet` on `ryankemper@cumin2002` tmux session `wdqs_reimage` [production]
18:46 <ryankemper> T280382 `sudo -i wmf-auto-reimage-host -p T280382 wdqs1005.eqiad.wmnet` on `ryankemper@cumin1001` tmux session `wdqs_reimage` [production]
18:39 <ryankemper> [WDQS] depooled `wdqs1012` (has ~15 hours of lag to catch up on) [production]
18:37 <ryankemper> [WDQS] `ryankemper@wdqs1012:~$ sudo systemctl restart wdqs-blazegraph` (blazegraph on the host has been locked up for ~16 hours based off of https://grafana.wikimedia.org/d/000000489/wikidata-query-service?orgId=1&var-cluster_name=wdqs&from=1622683465757&to=1622745461547) [production]
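(The recovery pattern for a locked-up Blazegraph, sketched with the conftool `depool`/`pool` wrapper scripts available on production hosts, matching the depool entry above: `sudo depool && sudo systemctl restart wdqs-blazegraph`, then `sudo pool` once the updater has caught up on lag)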
18:37 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on cp1087.eqiad.wmnet with reason: replaced DIMM https://phabricator.wikimedia.org/T278729 [production]
18:37 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on cp1087.eqiad.wmnet with reason: replaced DIMM https://phabricator.wikimedia.org/T278729 [production]
18:28 <mutante> temp. disabling puppet on install* servers. switching nginx to light variant (T164456) [production]
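(The swap itself is a single package install, sketched here on the assumption that `nginx-light` conflicts with and replaces `nginx-full` in Debian packaging: `sudo apt-get install -y nginx-light && sudo systemctl restart nginx`)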
18:16 <ebernhardson@deploy1002> Finished deploy [wikimedia/discovery/analytics@659a8e4]: resolve npe in datawriter (duration: 00m 15s) [production]
18:16 <ebernhardson@deploy1002> Started deploy [wikimedia/discovery/analytics@659a8e4]: resolve npe in datawriter [production]
17:49 <robh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on moss-be1002.eqiad.wmnet with reason: REIMAGE [production]