2021-06-08
ยง
|
22:14 <ryankemper> T284479 Merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/698850, running puppet on `cp3052.esams.wmnet`
22:10 <ryankemper> T284479 Yup, more than enough evidence of a strong upward spike now. Proceeding to revert
22:10 <ryankemper> T284479 Already starting to see a large upward spike in requests. Doing a quick sanity check to make sure this is out of the ordinary, but I'll likely be putting the block back in place shortly
22:09 <ryankemper> T284479 Puppet run complete across all of `cp-text`. Monitoring https://grafana.wikimedia.org/d/000000455/elasticsearch-percentiles?viewPanel=47&orgId=1&from=now-1h&to=now over the next few minutes to see if we see a large spike in `full_text` and `entity_full_text` queries
22:03 <ryankemper> T284479 Successful puppet run on `cp3052`, proceeding to rest of `A:cp-text`: `sudo cumin -b 15 'A:cp-text' 'run-puppet-agent -q'`
22:01 <ryankemper> T284479 Merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/698849, running puppet on `cp3052.esams.wmnet`
21:59 <ryankemper> T284479 Prior context: we put a block on a range of Google App Engine IPs yesterday to protect CirrusSearch from a bad actor; now we're going to try lifting the block and see if we're still getting slammed with traffic
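The block-lift experiment above (read bottom-up, 21:59 through 22:14) follows the usual canary-then-fleet puppet rollout. A condensed sketch of that pattern, reconstructed only from the commands logged here:

```
# 1. Merge the change, then canary it on a single cache host
sudo run-puppet-agent                               # run on cp3052.esams.wmnet

# 2. Roll out to the rest of the text caches in batches of 15
sudo cumin -b 15 'A:cp-text' 'run-puppet-agent -q'

# 3. Watch the full_text / entity_full_text query rates in Grafana;
#    if traffic spikes, merge the revert and repeat steps 1-2
```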
21:44 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs1009.eqiad.wmnet with reason: REIMAGE
21:42 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs1009.eqiad.wmnet with reason: REIMAGE
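These two downtime entries are emitted automatically by the reimage tooling. Invoked by hand, the cookbook call would look roughly like the sketch below; the flag names are from memory, not from this log, so treat them as assumptions:

```
# hypothetical manual equivalent of the automated 2-hour downtime above
sudo cookbook sre.hosts.downtime --hours 2 -r "REIMAGE" wdqs1009.eqiad.wmnet
```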
21:29 <ryankemper> T280382 `sudo -i wmf-auto-reimage-host -p T280382 wdqs1009.eqiad.wmnet` on `ryankemper@cumin1001` tmux session `wdqs_1009`
21:27 <ryankemper> T280382 Disabled puppet on `wdqs1010` out of an abundance of caution; will re-enable after wdqs1009 is reimaged and the transfer back is complete
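Disabling puppet on a single host is normally done with the WMF wrapper scripts; a sketch of what was likely run (the exact message is an assumption):

```
# on wdqs1010: disable puppet with a message explaining why (message assumed)
sudo disable-puppet "wdqs1009 reimage in progress - T280382"
# later, once the transfer back is complete, re-enable with the same message:
sudo enable-puppet "wdqs1009 reimage in progress - T280382"
```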
21:12 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0)
20:38 <bblack> authdns1001: update gdnsd to 3.7.0-2~wmf1
20:18 <bblack> authdns2001: update gdnsd to 3.7.0-2~wmf1
19:55 <bblack> dns[1235]002: update gdnsd to 3.7.0-2~wmf1
19:53 <jhuneidi@deploy1002> rebuilt and synchronized wikiversions files: group0 wikis to 1.37.0-wmf.9 refs T281150
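"rebuilt and synchronized wikiversions files" is the message scap logs for a wikiversions sync during the deployment train. A hedged sketch of the step behind it; the subcommand and argument order are from memory, so treat them as assumptions:

```
# hypothetical: point group0 wikis at the new branch, then sync the result
scap update-wikiversions group0 1.37.0-wmf.9   # subcommand/arg order assumed
scap sync-wikiversions "group0 wikis to 1.37.0-wmf.9 refs T281150"
```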
19:46 <bblack> dns[1235]001: update gdnsd to 3.7.0-2~wmf1
19:43 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer
19:36 <ryankemper@cumin1001> END (ERROR) - Cookbook sre.wdqs.data-transfer (exit_code=97)
19:36 <ryankemper> T280382 Cancelling the data-transfer run to restart it; realized that the cookbook will start up the `wdqs-updater` again, so will locally hack the cookbook on `cumin1001` to prevent that
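The local hack mentioned here keeps the cookbook from restarting `wdqs-updater` when it finishes. Without the hack, the same end state could be reached manually after the transfer; a sketch of that alternative, not what was actually run:

```
# on the destination host after the transfer finishes (hypothetical alternative)
sudo systemctl stop wdqs-updater
sudo systemctl mask wdqs-updater   # optional: block accidental restarts until ready
```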
19:32 <ladsgroup@deploy1002> Synchronized php-1.37.0-wmf.9/extensions/Echo/modules/nojs/mw.echo.alert.monobook.less: Backport: [[gerrit:698848|Fix MonoBook orange banner hover styles (T284496)]] (duration: 01m 08s)
19:26 <bblack> dns400[12]: update gdnsd to 3.7.0-2~wmf1
19:25 <bblack> apt: update gdnsd package to gdnsd-3.7.0-2~wmf1 (fix systemd reload issues)
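apt.wikimedia.org is managed with reprepro, so importing the rebuilt package typically looks like the sketch below; the distribution codename and file path are assumptions:

```
# hypothetical import of the fixed package into the apt repo
sudo -i reprepro includedeb buster-wikimedia ~/gdnsd_3.7.0-2~wmf1_amd64.deb
```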
19:20 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1009.eqiad.wmnet --dest wdqs1010.eqiad.wmnet --reason "transferring skolemized wikidata.jnl so we can reimage wdqs1009" --blazegraph_instance blazegraph --without-lvs` on `ryankemper@cumin1001` tmux session `wdqs_1009`
19:20 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer
19:19 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99)
19:19 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer
19:18 <ryankemper> T280382 `sudo systemctl stop wdqs-updater wdqs-blazegraph` on `wdqs1010` in preparation for transfer
19:08 <ryankemper> [WDQS] `ryankemper@wdqs1005:~$ sudo pool` (all caught up on lag)
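`pool` and `depool` on WMF hosts are thin wrappers around conftool for the host's own services. The call above is roughly equivalent to the following; the selector syntax is assumed to match the conftool entries elsewhere in this log:

```
# hypothetical confctl equivalent of `sudo pool` run on wdqs1005
sudo confctl select "name=wdqs1005.eqiad.wmnet" set/pooled=yes
```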
18:47 <bblack> dns4001: update gdnsd to 3.7.0-1~wmf1
18:43 <bblack> apt: update gdnsd package to gdnsd-3.7.0-1~wmf1
17:49 <jgiannelos@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'proton' for release 'production'.
17:36 <jgiannelos@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'proton' for release 'production'.
17:25 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'proton' for release 'production'.
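The three proton entries (staging, then codfw, then eqiad, read bottom-up) are the standard staged helmfile rollout from the deploy host. A sketch of one step; the chart path is an assumption:

```
# hypothetical: sync the proton release in staging from the deployment-charts tree
cd /srv/deployment-charts/helmfile.d/services/proton   # path assumed
helmfile -e staging -i apply   # logged as a 'sync' on namespace 'proton'
```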
17:10 <elukey> fix dbstore1007's IP address in analytics-in4 on cr{1,2}-eqiad
17:06 <jhuneidi@deploy1002> Finished scap: testwikis wikis to 1.37.0-wmf.9 refs T281150 (duration: 34m 12s)
16:32 <jhuneidi@deploy1002> Started scap: testwikis wikis to 1.37.0-wmf.9 refs T281150
16:27 <papaul> powerdown moss-fe2002 for relocation
16:06 <papaul> powerdown ms-backup2002 for relocation
16:02 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'.
15:40 <papaul> powerdown ms-be2061 for relocation
15:40 <bblack@cumin1001> conftool action : set/pooled=yes; selector: name=cp203[34].codfw.wmnet
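This conftool entry and its counterpart at 14:59 below bracket the physical relocation of cp2033/cp2034: depool, power down and move, then repool. The logged actions correspond to confctl invocations like the following sketch (exact quoting assumed):

```
# depool both hosts before powering them down for relocation
sudo confctl select "name=cp203[34].codfw.wmnet" set/pooled=no
# ...after the move, repool them
sudo confctl select "name=cp203[34].codfw.wmnet" set/pooled=yes
```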
15:33 <papaul> powerdown thanos-fe2003 for relocation
15:23 <Krinkle> mwmaint1002: Running purge-parsercache-now.php on server 4/4 (pc1009) ref P16060, T280605, T282761.
15:19 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5 days, 0:00:00 on pc2009.codfw.wmnet,pc1009.eqiad.wmnet with reason: Purging parsercache pc3 T282761
15:19 <kormat@cumin1001> START - Cookbook sre.hosts.downtime for 5 days, 0:00:00 on pc2009.codfw.wmnet,pc1009.eqiad.wmnet with reason: Purging parsercache pc3 T282761
15:13 <papaul> powerdown cp2034 for relocation
15:04 <papaul> powerdown cp2033 for relocation
14:59 <bblack@cumin1001> conftool action : set/pooled=no; selector: name=cp203[34].codfw.wmnet
14:43 <moritzm> clean up now-unused nginx mods and former deps (various X11 libs and libxslt) on testreduce1001/scandium after the switch to nginx-light T164456