2021-06-09
11:27 <jbond> drop keep_env from sudo config - #T275852 [production]
11:22 <jbond@deploy1002> Finished deploy [netbox/deploy@98cf8df]: (no justification provided) (duration: 00m 43s) [production]
11:22 <jbond@deploy1002> Started deploy [netbox/deploy@98cf8df]: (no justification provided) [production]
11:21 <jbond@deploy1002> Finished deploy [netbox/deploy@98cf8df]: (no justification provided) (duration: 01m 15s) [production]
11:20 <jbond@deploy1002> Started deploy [netbox/deploy@98cf8df]: (no justification provided) [production]
11:11 <awight> EU deployment window complete [production]
11:10 <awight@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:698855|Set wgAutoConfirmCount to 10 for enwikisource (T284627)]] (duration: 02m 04s) [production]
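The "Synchronized wmf-config/..." entries in this log are emitted automatically by scap when a single config file is synced from the deployment host. A minimal sketch of the kind of invocation behind the 11:10 entry, assuming it was run from the wmf-config checkout on deploy1002 (only the file and log message are taken from the entry itself):

  # sync one config file to the app servers; the message becomes the SAL entry
  scap sync-file wmf-config/InitialiseSettings.php 'Config: [[gerrit:698855|Set wgAutoConfirmCount to 10 for enwikisource (T284627)]]'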
10:22 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1130.eqiad.wmnet with reason: REIMAGE [production]
10:18 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on db1130.eqiad.wmnet with reason: REIMAGE [production]
10:15 <jbond@deploy1002> Finished deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next (duration: 00m 53s) [production]
10:14 <jbond@deploy1002> Started deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next [production]
10:13 <jbond@deploy1002> Finished deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next (duration: 05m 41s) [production]
10:07 <jbond@deploy1002> Started deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next [production]
10:06 <jbond@deploy1002> Finished deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next (duration: 00m 38s) [production]
10:06 <jbond@deploy1002> Started deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next [production]
10:04 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1130 T283235', diff saved to https://phabricator.wikimedia.org/P16337 and previous config saved to /var/cache/conftool/dbconfig/20210609-100423-marostegui.json [production]
10:00 <jbond@deploy1002> Finished deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next (duration: 00m 48s) [production]
09:59 <jbond@deploy1002> Started deploy [netbox/deploy@c70df91]: Force deploy of gerrit/672831 to netbox-next [production]
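The netbox/deploy entries above are scap3 deployments; the "Started"/"Finished" pairs are logged by scap itself. A rough sketch of the command that produces them, assuming the usual scap3 layout under /srv/deployment on the deploy host (the path is an assumption; the message is taken from the entries):

  # run from the repository's deploy directory on deploy1002
  cd /srv/deployment/netbox/deploy   # path is an assumption
  scap deploy 'Force deploy of gerrit/672831 to netbox-next'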
09:58 <moritzm> clean up of now-unused nginx mods and former deps (various X11 libs and libxslt) on schema* after the switch to nginx-light T164456 [production]
07:54 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
07:16 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
06:25 <XioNoX> Add 185.71.138.0/24 to network::external and diffscan - T252132 [production]
06:12 <oblivian@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
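The helmfile entries above are logged when a Kubernetes release is synced. A rough sketch of an equivalent manual run, assuming the standard helmfile.d layout on the deployment host (the path and selector are assumptions, not taken from the log):

  # sync the 'pinkunicorn' release of the mwdebug service in the staging environment
  cd /srv/deployment-charts/helmfile.d/services/mwdebug   # path is an assumption
  helmfile -e staging --selector name=pinkunicorn sync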
05:32 <marostegui@cumin1001> dbctl commit (dc=all): 'db1135 (re)pooling @ 100%: Repool db1135 after dropping an index', diff saved to https://phabricator.wikimedia.org/P16334 and previous config saved to /var/cache/conftool/dbconfig/20210609-053213-root.json [production]
05:17 <marostegui@cumin1001> dbctl commit (dc=all): 'db1135 (re)pooling @ 75%: Repool db1135 after dropping an index', diff saved to https://phabricator.wikimedia.org/P16333 and previous config saved to /var/cache/conftool/dbconfig/20210609-051710-root.json [production]
05:02 <marostegui@cumin1001> dbctl commit (dc=all): 'db1135 (re)pooling @ 50%: Repool db1135 after dropping an index', diff saved to https://phabricator.wikimedia.org/P16332 and previous config saved to /var/cache/conftool/dbconfig/20210609-050206-root.json [production]
04:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db1135 (re)pooling @ 25%: Repool db1135 after dropping an index', diff saved to https://phabricator.wikimedia.org/P16331 and previous config saved to /var/cache/conftool/dbconfig/20210609-044703-root.json [production]
04:44 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1135 to remove rev_page_id index T163532', diff saved to https://phabricator.wikimedia.org/P16330 and previous config saved to /var/cache/conftool/dbconfig/20210609-044428-marostegui.json [production]
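Read bottom-up, the five dbctl entries above show the usual depool / staged-repool pattern for a schema change: depool the replica, apply the change, then repool at 25%, 50%, 75% and 100%. A minimal sketch of the manual commands behind such entries, with the caveat that the staged repools were most likely driven by a repool script rather than typed by hand, and that the percentage flag shown here is an assumption:

  # depool the replica before touching the schema
  dbctl instance db1135 depool
  dbctl config commit -m 'Depool db1135 to remove rev_page_id index T163532'
  # ...apply the schema change, then bring traffic back gradually...
  dbctl instance db1135 pool -p 25   # -p (percentage) is an assumption
  dbctl config commit -m 'db1135 (re)pooling @ 25%: Repool db1135 after dropping an index'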
04:27 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
03:30 <eileen> civicrm revision changed from eac772e9c9 to 31d07115a0, config revision is 931a941a5e [production]
03:01 <Amir1> mwscript extensions/Cognate/maintenance/populateCognateSites.php --wiki=aawiktionary --site-group wiktionary (T284444) [production]
02:58 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
02:56 <Amir1> clean up of the rest of mbox files (except arbcom) (T282303) [production]
02:55 <ryankemper@cumin1001> END (ERROR) - Cookbook sre.wdqs.data-transfer (exit_code=97) [production]
02:49 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1010.eqiad.wmnet --dest wdqs1009.eqiad.wmnet --reason "xfer categories following reimage" --blazegraph_instance categories --without-lvs` on `ryankemper@cumin1001` tmux session `wdqs_1009` [production]
02:49 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
02:39 <ryankemper> T280382 Re-enabled puppet on `wdqs1010` [production]
01:20 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
00:37 <catrope@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:698654|Enable Wikisource OCR on select Wikisources (T283898)]] (duration: 01m 31s) [production]
00:00 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1010.eqiad.wmnet --dest wdqs1009.eqiad.wmnet --reason "transferring skolemized wikidata.jnl so we can reimage wdqs1009" --blazegraph_instance blazegraph --without-lvs` on `ryankemper@cumin1001` tmux session `wdqs_1009` [production]
00:00 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
2021-06-08
22:36 <krinkle@deploy1002> Finished deploy [integration/docroot@d4c9e08]: (no justification provided) (duration: 00m 08s) [production]
22:36 <krinkle@deploy1002> Started deploy [integration/docroot@d4c9e08]: (no justification provided) [production]
22:21 <ryankemper> T284479 Block put back in place. We're back to expected traffic levels. We'll need a more granular mitigation in place before we can lift this block going forward. [production]
22:15 <ryankemper> T284479 Successful puppet run on `cp3052`, proceeding to rest of `A:cp-text`: `sudo cumin -b 19 'A:cp-text' 'run-puppet-agent -q'` [production]
22:14 <ryankemper> T284479 Merged https://gerrit.wikimedia.org/r/c/operations/puppet/+/698850, running puppet on `cp3052.esams.wmnet` [production]
22:10 <ryankemper> T284479 Yup, more than enough evidence of a strong upward spike now. Proceeding to revert [production]
22:10 <ryankemper> T284479 Already starting to see a large upward spike in requests. Doing a quick sanity check to make sure this is out of the ordinary, but I'll likely be putting the block back in place shortly [production]
22:09 <ryankemper> T284479 Puppet run complete across all of `cp-text`. Monitoring https://grafana.wikimedia.org/d/000000455/elasticsearch-percentiles?viewPanel=47&orgId=1&from=now-1h&to=now over the next few minutes to see if we see a large spike in `full_text` and `entity_full_text` queries [production]
22:03 <ryankemper> T284479 Successful puppet run on `cp3052`, proceeding to rest of `A:cp-text`: `sudo cumin -b 15 'A:cp-text' 'run-puppet-agent -q'` [production]