2021-06-03
10:41 <godog> test librenms/AM paging [production]
10:40 <jiji@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
10:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 50%: Repool db1179', diff saved to https://phabricator.wikimedia.org/P16263 and previous config saved to /var/cache/conftool/dbconfig/20210603-103858-root.json [production]
10:28 <jiji@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
10:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db1179 (re)pooling @ 25%: Repool db1179', diff saved to https://phabricator.wikimedia.org/P16262 and previous config saved to /var/cache/conftool/dbconfig/20210603-102354-root.json [production]
10:21 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5 days, 0:00:00 on pc2008.codfw.wmnet,pc1008.eqiad.wmnet with reason: Purging parsercache T282761 [production]
10:21 <kormat@cumin1001> START - Cookbook sre.hosts.downtime for 5 days, 0:00:00 on pc2008.codfw.wmnet,pc1008.eqiad.wmnet with reason: Purging parsercache T282761 [production]
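(The START/END pair above records a single run of the spicerack downtime cookbook on the cumin host. A minimal sketch of the likely invocation; the exact flag names are an assumption, only the reason and host list are taken from the log:

    sudo cookbook sre.hosts.downtime --days 5 --reason "Purging parsercache T282761" 'pc2008.codfw.wmnet,pc1008.eqiad.wmnet'
)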
10:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1179', diff saved to https://phabricator.wikimedia.org/P16261 and previous config saved to /var/cache/conftool/dbconfig/20210603-101950-marostegui.json [production]
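(The depool-then-gradual-repool cycle for db1179 above — depool at 10:19, 25% at 10:23, 50% at 10:38 — maps onto dbctl's instance/config subcommands. A minimal sketch, assuming the standard dbctl CLI; the commit messages mirror the log entries:

    dbctl instance db1179 depool
    dbctl config commit -m "Depool db1179"
    # ... maintenance on db1179 ...
    dbctl instance db1179 pool -p 25
    dbctl config commit -m "db1179 (re)pooling @ 25%: Repool db1179"
    # repeated at 50%, 75%, 100% as the host warms up
)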
10:13 <kormat@deploy1002> Synchronized wmf-config/db-eqiad.php: Set pc1010 as pc2 primary T282761 (duration: 00m 58s) [production]
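("Synchronized wmf-config/..." entries are emitted by scap when a single file is deployed; the command on the deploy host would be roughly, assuming scap's sync-file subcommand:

    scap sync-file wmf-config/db-eqiad.php 'Set pc1010 as pc2 primary T282761'
)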
09:38 <marostegui> Deploy schema change on s3 codfw master (with replication) - T282373 T282372 T282371 [production]
09:37 <moritzm> upgrading eqiad to debmonitor-client 0.3.0 (along with deleting/recreating system user within 100-499 range) T235162 [production]
08:55 <moritzm> uploading gitlab-ce 13.11.5-ce to apt.wikimedia.org thirdparty/gitlab [production]
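(Uploads to apt.wikimedia.org are handled with reprepro on the apt host. A hedged sketch; the distribution name and .deb filename here are assumptions, only the component and version come from the log:

    sudo reprepro -C thirdparty/gitlab includedeb buster-wikimedia gitlab-ce_13.11.5-ce.0_amd64.deb
)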
08:43 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
08:37 <moritzm> upgrading codfw to debmonitor-client 0.3.0 (along with deleting/recreating system user within 100-499 range) T235162 [production]
08:23 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
08:19 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
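(The repeated "helmfile [staging] Ran 'sync'" entries record test deploys of the pinkunicorn release to the mwdebug namespace. On the deploy host this corresponds roughly to the following; the chart directory path and selector label are assumptions:

    cd /srv/deployment-charts/helmfile.d/services/mwdebug   # assumed path
    helmfile -e staging --selector name=pinkunicorn sync
)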
08:09 <moritzm> upgrading esams/eqsin to debmonitor-client 0.3.0 (along with deleting/recreating system user within 100-499 range) [production]
07:52 <ryankemper> [WDQS] Pooled `wdqs1008` and `wdqs2006` (all caught up on lag) [production]
07:48 <moritzm> uploaded debmonitor-client 0.3.0-1+deb10u2 to apt.wikimedia.org [production]
06:24 <ryankemper> [WDQS] De-pooled `wdqs1008` and `wdqs2006` (~1 hour of lag to catch up on) [production]
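(Pooling and depooling of wdqs hosts, as at 07:52 and 06:24 above, goes through conftool. A minimal sketch, assuming direct confctl usage rather than the host-local pool/depool wrappers:

    confctl select 'name=wdqs1008.eqiad.wmnet' set/pooled=no     # depool while lag catches up
    confctl select 'name=wdqs1008.eqiad.wmnet' set/pooled=yes    # repool once caught up
)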
06:23 <ryankemper> T280382 `wdqs2006.codfw.wmnet` has been re-imaged and had the appropriate wikidata/categories journal files transferred. `df -h` shows disk space is no longer an issue following the switch to `raid0`: `/dev/md2 2.6T 998G 1.5T 40% /srv` [production]
06:23 <ryankemper> T280382 `wdqs1008.eqiad.wmnet` has been re-imaged and had the appropriate wikidata/categories journal files transferred. `df -h` shows disk space is no longer an issue following the switch to `raid0`: `/dev/md2 2.6T 998G 1.5T 40% /srv` [production]
06:07 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
06:05 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
05:20 <marostegui> Deploy schema change on db1121, lag will appear on s4 (commonswiki) wiki replicas - T266486 T268392 T273360 [production]
05:18 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1121', diff saved to https://phabricator.wikimedia.org/P16259 and previous config saved to /var/cache/conftool/dbconfig/20210603-051853-marostegui.json [production]
05:14 <marostegui@cumin1001> dbctl commit (dc=all): 'db1144:3314 (re)pooling @ 100%: Repool db1144:3314', diff saved to https://phabricator.wikimedia.org/P16258 and previous config saved to /var/cache/conftool/dbconfig/20210603-051402-root.json [production]
04:58 <marostegui@cumin1001> dbctl commit (dc=all): 'db1144:3314 (re)pooling @ 75%: Repool db1144:3314', diff saved to https://phabricator.wikimedia.org/P16257 and previous config saved to /var/cache/conftool/dbconfig/20210603-045859-root.json [production]
04:43 <marostegui@cumin1001> dbctl commit (dc=all): 'db1144:3314 (re)pooling @ 50%: Repool db1144:3314', diff saved to https://phabricator.wikimedia.org/P16256 and previous config saved to /var/cache/conftool/dbconfig/20210603-044355-root.json [production]
04:37 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1005.eqiad.wmnet --dest wdqs1008.eqiad.wmnet --reason "transferring fresh wikidata journal following reimage" --blazegraph_instance blazegraph` on `ryankemper@cumin1001` tmux session `wdqs_reimage` [production]
04:36 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
04:36 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs2004.codfw.wmnet --dest wdqs2006.codfw.wmnet --reason "transferring fresh wikidata journal following reimage" --blazegraph_instance blazegraph` on `ryankemper@cumin2002` tmux session `wdqs_reimage` [production]
04:36 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer [production]
04:35 <ryankemper@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
04:34 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
04:29 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs2004.codfw.wmnet --dest wdqs2006.codfw.wmnet --reason "transferring fresh categories journal following reimage" --blazegraph_instance categories` on `ryankemper@cumin2002` tmux session `wdqs_reimage` [production]
04:29 <ryankemper@cumin2002> START - Cookbook sre.wdqs.data-transfer [production]
04:29 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1005.eqiad.wmnet --dest wdqs1008.eqiad.wmnet --reason "transferring fresh categories journal following reimage" --blazegraph_instance categories` on `ryankemper@cumin1001` tmux session `wdqs_reimage` [production]
04:29 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
04:28 <marostegui@cumin1001> dbctl commit (dc=all): 'db1144:3314 (re)pooling @ 25%: Repool db1144:3314', diff saved to https://phabricator.wikimedia.org/P16255 and previous config saved to /var/cache/conftool/dbconfig/20210603-042851-root.json [production]
02:22 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs1008.eqiad.wmnet with reason: REIMAGE [production]
02:20 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs1008.eqiad.wmnet with reason: REIMAGE [production]
02:09 <ryankemper@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs2006.codfw.wmnet with reason: REIMAGE [production]
02:07 <ryankemper> T280382 `sudo -i wmf-auto-reimage-host -p T280382 wdqs1008.eqiad.wmnet` on `ryankemper@cumin1001` tmux session `wdqs_reimage` [production]
02:07 <ryankemper@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs2006.codfw.wmnet with reason: REIMAGE [production]
02:05 <ryankemper> T280382 `wdqs1003.eqiad.wmnet` has been re-imaged and had the appropriate wikidata/categories journal files transferred. `df -h` shows disk space is no longer an issue following the switch to `raid0`: `/dev/md2 2.9T 998G 1.8T 36% /srv` [production]
02:04 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
01:51 <ryankemper> T280382 `sudo -i wmf-auto-reimage-host -p T280382 wdqs2006.codfw.wmnet` on `ryankemper@cumin2002` tmux session `wdqs_reimage` [production]
01:47 <ryankemper> T280382 `wdqs2003.codfw.wmnet` has been re-imaged and had the appropriate wikidata/categories journal files transferred. `df -h` shows disk space is no longer an issue following the switch to `raid0`: `/dev/md2 2.9T 998G 1.8T 36% /srv` [production]
01:43 <ryankemper> [WDQS] Pooled `wdqs1004` (caught up on lag) [production]