2021-06-09
05:32 <marostegui@cumin1001> dbctl commit (dc=all): 'db1135 (re)pooling @ 100%: Repool db1135 after dropping an index', diff saved to https://phabricator.wikimedia.org/P16334 and previous config saved to /var/cache/conftool/dbconfig/20210609-053213-root.json [production]
05:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db1135 (re)pooling @ 75%: Repool db1135 after dropping an index', diff saved to https://phabricator.wikimedia.org/P16333 and previous config saved to /var/cache/conftool/dbconfig/20210609-051710-root.json [production]
05:02 <marostegui@cumin1001> dbctl commit (dc=all): 'db1135 (re)pooling @ 50%: Repool db1135 after dropping an index', diff saved to https://phabricator.wikimedia.org/P16332 and previous config saved to /var/cache/conftool/dbconfig/20210609-050206-root.json [production]
04:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db1135 (re)pooling @ 25%: Repool db1135 after dropping an index', diff saved to https://phabricator.wikimedia.org/P16331 and previous config saved to /var/cache/conftool/dbconfig/20210609-044703-root.json [production]
04:44 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1135 to remove rev_page_id index T163532', diff saved to https://phabricator.wikimedia.org/P16330 and previous config saved to /var/cache/conftool/dbconfig/20210609-044428-marostegui.json [production]
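The five entries above show the standard depool → schema change → gradual repool pattern: db1135 is depooled, the index is dropped, and the host is brought back at 25/50/75/100% with roughly fifteen minutes between steps. A minimal sketch of the dbctl commands behind log lines like these; the `pool -p` and `config commit -m` flags are recalled from the dbctl documentation and should be treated as assumptions:

```
# Depool the replica before the schema change (cf. the 04:44 entry).
dbctl instance db1135 depool
dbctl config commit -m "Depool db1135 to remove rev_page_id index T163532"

# ...drop the index on db1135...

# Repool in stages, committing the config after each step.
for pct in 25 50 75 100; do
    dbctl instance db1135 pool -p "$pct"
    dbctl config commit -m "Repool db1135 after dropping an index"
    sleep 900   # ~15 minutes between steps, matching the timestamps above
done
```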
04:27 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
03:30 <eileen> civicrm revision changed from eac772e9c9 to 31d07115a0, config revision is 931a941a5e [production]
03:01 <Amir1> mwscript extensions/Cognate/maintenance/populateCognateSites.php --wiki=aawiktionary --site-group wiktionary (T284444) [production]
02:58 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
02:56 <Amir1> clean up of the rest of mbox files (except arbcom) (T282303) [production]
02:55 <ryankemper@cumin1001> END (ERROR) - Cookbook sre.wdqs.data-transfer (exit_code=97) [production]
02:49 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1010.eqiad.wmnet --dest wdqs1009.eqiad.wmnet --reason "xfer categories following reimage" --blazegraph_instance categories --without-lvs` on `ryankemper@cumin1001` tmux session `wdqs_1009` [production]
02:49 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
02:39 <ryankemper> T280382 Re-enabled puppet on `wdqs1010` [production]
01:20 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
00:37 <catrope@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:698654|Enable Wikisource OCR on select Wikisources (T283898)]] (duration: 01m 31s) [production]
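The 00:37 entry is the line scap writes to the SAL for a single-file config sync from the deployment host. A hedged sketch of the backport steps that produce it, assuming the conventional /srv/mediawiki-staging checkout on deploy1002:

```
# On deploy1002 (staging path and the plain git pull are assumptions).
cd /srv/mediawiki-staging
git pull                      # pick up the merged change gerrit:698654
scap sync-file wmf-config/InitialiseSettings.php \
    'Config: [[gerrit:698654|Enable Wikisource OCR on select Wikisources (T283898)]]'
```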
00:00 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1010.eqiad.wmnet --dest wdqs1009.eqiad.wmnet --reason "transferring skolemized wikidata.jnl so we can reimage wdqs1009" --blazegraph_instance blazegraph --without-lvs` on `ryankemper@cumin1001` tmux session `wdqs_1009` [production]
00:00 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
2021-06-08
22:36 <krinkle@deploy1002> Finished deploy [integration/docroot@d4c9e08]: (no justification provided) (duration: 00m 08s) [production]
22:36 <krinkle@deploy1002> Started deploy [integration/docroot@d4c9e08]: (no justification provided) [production]
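The Started/Finished pair above is what `scap deploy` logs for a scap3-managed repository such as integration/docroot. A minimal sketch, assuming the usual /srv/deployment checkout location on deploy1002:

```
# On deploy1002; the checkout path is an assumption.
cd /srv/deployment/integration/docroot
git pull                              # update the working copy to d4c9e08
scap deploy 'update integration/docroot'
```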
22:21 <ryankemper> T284479 Block put back in place. We're back to expected traffic levels. We'll need a more granular mitigation in place before we can lift this block going forward. [production]
22:15 <ryankemper> T284479 Successful puppet run on `cp3052`, proceeding to rest of `A:cp-text`: `sudo cumin -b 19 'A:cp-text' 'run-puppet-agent -q'` [production]
22:14 <ryankemper> T284479 Merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/698850, running puppet on `cp3052.esams.wmnet` [production]
22:10 <ryankemper> T284479 Yup more than enough evidence of a strong upward spike now. Proceeding to revert [production]
22:10 <ryankemper> T284479 Already starting to see a large upward spike in requests. Doing a quick sanity check to make sure this is out of the ordinary but I'll likely be putting the block back in place shortly [production]
22:09 <ryankemper> T284479 Puppet run complete across all of `cp-text`. Monitoring https://grafana.wikimedia.org/d/000000455/elasticsearch-percentiles?viewPanel=47&orgId=1&from=now-1h&to=now over the next few minutes to see if we see a large spike in `full_text` and `entity_full_text` queries [production]
22:03 <ryankemper> T284479 Successful puppet run on `cp3052`, proceeding to rest of `A:cp-text`: `sudo cumin -b 15 'A:cp-text' 'run-puppet-agent -q'` [production]
22:01 <ryankemper> T284479 Merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/698849, running puppet on `cp3052.esams.wmnet` [production]
21:59 <ryankemper> T284479 Prior context: We put a block on a range of Google App Engine IPs yesterday to protect Cirrussearch from a bad actor; now we're going to try lifting the block and seeing if we're still getting slammed with traffic [production]
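The T284479 entries from 21:59 to 22:21 follow the usual canary rollout for a puppet change on the cache-text fleet: merge in Gerrit, run the agent on a single canary host (cp3052), then fan out with cumin in batches while watching the Elasticsearch percentiles dashboard. A sketch assembled from the commands quoted above; running the canary step via cumin (rather than directly on the host) is an assumption:

```
# Canary first: apply the merged change on one cache-text host.
sudo cumin 'cp3052.esams.wmnet' 'run-puppet-agent -q'

# If the canary is clean, roll out to the rest of the fleet in batches
# (the log uses -b 15 for lifting the block and -b 19 for the revert).
sudo cumin -b 15 'A:cp-text' 'run-puppet-agent -q'
```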
21:44 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs1009.eqiad.wmnet with reason: REIMAGE [production]
21:42 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs1009.eqiad.wmnet with reason: REIMAGE [production]
21:29 <ryankemper> T280382 `sudo -i wmf-auto-reimage-host -p T280382 wdqs1009.eqiad.wmnet` on `ryankemper@cumin1001` tmux session `wdqs_1009` [production]
21:27 <ryankemper> T280382 Disabled puppet on `wdqs1010` out of an abundance of caution; will re-enable after wdqs1009 is reimaged and xfer back is complete [production]
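The 21:27-21:44 entries show the reimage prep for wdqs1009: pause puppet on its partner wdqs1010, downtime the host in Icinga via the sre.hosts.downtime cookbook, then start wmf-auto-reimage-host from cumin1001. A sketch of that sequence; the `disable-puppet`/`enable-puppet` helper invocations and the downtime cookbook's flag names are assumptions, while the reimage command is quoted from the 21:29 entry:

```
# On wdqs1010: pause puppet until the transfer back is complete.
sudo disable-puppet "wdqs1009 reimage - T280382"

# On cumin1001: downtime wdqs1009 in Icinga (flag names are assumptions),
# then reimage it as in the 21:29 entry.
sudo -i cookbook sre.hosts.downtime --hours 2 --reason REIMAGE wdqs1009.eqiad.wmnet
sudo -i wmf-auto-reimage-host -p T280382 wdqs1009.eqiad.wmnet

# Afterwards (02:39 on 2021-06-09): re-enable puppet with the same reason string.
sudo enable-puppet "wdqs1009 reimage - T280382"
```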
21:12 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
20:38 <bblack> authdns1001: update gdnsd to 3.7.0-2~wmf1 [production]
20:18 <bblack> authdns2001: update gdnsd to 3.7.0-2~wmf1 [production]
19:55 <bblack> dns[1235]002: update gdnsd to 3.7.0-2~wmf1 [production]
19:53 <jhuneidi@deploy1002> rebuilt and synchronized wikiversions files: group0 wikis to 1.37.0-wmf.9 refs T281150 [production]
19:46 <bblack> dns[1235]001: update gdnsd to 3.7.0-2~wmf1 [production]
19:43 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
19:36 <ryankemper@cumin1001> END (ERROR) - Cookbook sre.wdqs.data-transfer (exit_code=97) [production]
19:36 <ryankemper> T280382 Cancelling the data-transfer run to restart it; realized that the cookbook will start up the `wdqs-updater` again so will locally hack the cookbook on `cumin1001` to prevent that [production]
19:32 <ladsgroup@deploy1002> Synchronized php-1.37.0-wmf.9/extensions/Echo/modules/nojs/mw.echo.alert.monobook.less: Backport: [[gerrit:698848|Fix MonoBook orange banner hover styles (T284496)]] (duration: 01m 08s) [production]
19:26 <bblack> dns400[12]: update gdnsd to 3.7.0-3~wmf1 [production]
19:25 <bblack> apt: update gdnsd package to gdnsd-3.7.0-2~wmf1 (fix systemd reload issues) [production]
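The bblack entries trace a staged gdnsd upgrade: the fixed package lands in the internal apt repo at 19:25, then the DNS hosts are upgraded group by group (dns400x first, then the dnsX00{1,2} recursors and the authdns hosts) over the following hour. The log does not record how the per-host upgrade was driven, so plain apt below is an assumption:

```
# On each DNS host, after the repo update:
sudo apt-get update
sudo apt-get install gdnsd       # pulls 3.7.0-2~wmf1 from the wmf repo
apt-cache policy gdnsd           # confirm the installed version
```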
19:20 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1009.eqiad.wmnet --dest wdqs1010.eqiad.wmnet --reason "transferring skolemized wikidata.jnl so we can reimage wdqs1009" --blazegraph_instance blazegraph --without-lvs` on `ryankemper@cumin1001` tmux session `wdqs_1009` [production]
19:20 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
19:19 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
19:19 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
19:18 <ryankemper> T280382 `sudo systemctl stop wdqs-updater wdqs-blazegraph` on `wdqs1010` in preparation for transfer [production]
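The 19:18 and 19:20 entries capture the wdqs data-transfer workflow: stop the updater and Blazegraph on the destination so wikidata.jnl is not held open, then run the sre.wdqs.data-transfer cookbook from cumin1001 (inside a tmux session, per the operator note at 19:20). Both commands below are quoted from the log:

```
# On the destination host (wdqs1010): stop the services before the copy.
sudo systemctl stop wdqs-updater wdqs-blazegraph

# On cumin1001, in a tmux session, as recorded at 19:20:
sudo -i cookbook sre.wdqs.data-transfer \
    --source wdqs1009.eqiad.wmnet \
    --dest wdqs1010.eqiad.wmnet \
    --reason "transferring skolemized wikidata.jnl so we can reimage wdqs1009" \
    --blazegraph_instance blazegraph \
    --without-lvs
```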