2021-05-06
05:43 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
05:43 <ryankemper@cumin1001> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) [production]
05:38 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
05:38 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1079 and db1158 to switch sanitarium masters', diff saved to https://phabricator.wikimedia.org/P15792 and previous config saved to /var/cache/conftool/dbconfig/20210506-053801-marostegui.json [production]
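For reference, a depool like the one above is normally two dbctl steps: mark each instance as depooled, then commit the config. A minimal sketch, assuming dbctl's documented `instance`/`config` subcommands (exact flags may differ by version):

```
# Assumed dbctl workflow: mark both replicas depooled in the object store...
dbctl instance db1079 depool
dbctl instance db1158 depool
# ...then commit, which generates the diff and config backup logged above
dbctl config commit -m "Depool db1079 and db1158 to switch sanitarium masters"
```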
05:37 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs1011.eqiad.wmnet --dest wdqs1007.eqiad.wmnet --reason "transferring fresh categories journal following reimage" --blazegraph_instance categories` on `ryankemper@cumin1001` tmux session `reimage` [production]
05:37 <ryankemper> T280382 `sudo -i cookbook sre.wdqs.data-transfer --source wdqs2008.codfw.wmnet --dest wdqs2004.codfw.wmnet --reason "transferring fresh categories journal following reimage" --blazegraph_instance categories` on `ryankemper@cumin1001` tmux session `reimage` [production]
05:37 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
05:32 <tstarling@deploy1002> Synchronized php-1.37.0-wmf.4/includes/page/PageReferenceValue.php: fixing T282070 RC/log breakage due to unblocking autoblocks (duration: 01m 09s) [production]
05:27 <effie> upgrade scap to 3.17.1-1 - T279695 [production]
03:56 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs2004.codfw.wmnet with reason: REIMAGE [production]
03:54 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wdqs1007.eqiad.wmnet with reason: REIMAGE [production]
03:53 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs2004.codfw.wmnet with reason: REIMAGE [production]
03:52 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on wdqs1007.eqiad.wmnet with reason: REIMAGE [production]
03:38 <ryankemper> T280382 `sudo -i wmf-auto-reimage-host -p T280382 wdqs1007.eqiad.wmnet` on `ryankemper@cumin1001` tmux session `reimage` [production]
03:38 <ryankemper> T280382 `sudo -i wmf-auto-reimage-host -p T280382 wdqs2004.codfw.wmnet` on `ryankemper@cumin1001` tmux session `reimage` [production]
03:18 <ryankemper> [Elastic] `elastic2043` is ssh unreachable. Power cycling it to bring it briefly back online - if it has the shard it should be able to repair the cluster state. Otherwise I'll have to delete the index for `enwiki_titlesuggest_1620184482` given the data would be unrecoverable [production]
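Had it come to that, the index deletion mentioned above is a single (destructive) call against the standard Elasticsearch index API; a sketch, assuming the cluster listens on port 9200 as in the entries below:

```
# Last resort from the entry above: drop the unrecoverable index (destructive!)
curl -XDELETE http://localhost:9200/enwiki_titlesuggest_1620184482
```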
03:08 <ryankemper> [Elastic] `ryankemper@elastic2044:~$ curl -H 'Content-Type: application/json' -XPUT http://localhost:9200/_cluster/settings -d '{"transient":{"cluster.routing.allocation.exclude":{"_host": null,"_name": null}}}'` [production]
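The PUT above nulls out the transient `cluster.routing.allocation.exclude` filters, i.e. it unbans all nodes at once. The inverse operation (banning nodes so their shards drain away) uses the same stock Elasticsearch cluster-settings API; a sketch with the two hosts from the entries above:

```
# Exclude nodes from shard allocation by name; shards migrate off them
curl -H 'Content-Type: application/json' -XPUT http://localhost:9200/_cluster/settings \
  -d '{"transient":{"cluster.routing.allocation.exclude._name":"elastic2033,elastic2043"}}'
```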
03:08 <ryankemper> [Elastic] Temporarily unbanning `elastic2033` and `elastic2043` from `production-search-codfw` to see if we can get the cluster green again. If it returns to green then we'll ban one node, wait for the shards to redistribute, and then ban the other [production]
03:06 <ryankemper> [Elastic] I banned two nodes simultaneously earlier today - if there's an index with only 1 replica, and its primary and replica happened to be on the two nodes I banned, then that would have caused this situation [production]
03:04 <ryankemper> [Elastic] It looks like we've got a single missing shard in `production-search-codfw` (port 9200), which is putting the cluster into red status. The cluster won't get back into green status without intervention [production]
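For anyone retracing this: red status means at least one primary shard is unassigned, and the stock Elasticsearch APIs below will identify it. A minimal sketch, assuming the cluster is reachable on localhost:9200 as above:

```
# Cluster-wide health; "red" = at least one unassigned primary shard
curl -s 'http://localhost:9200/_cluster/health?pretty'
# List shards that are not allocated anywhere, with their index names
curl -s 'http://localhost:9200/_cat/shards?h=index,shard,prirep,state' | grep UNASSIGNED
# Ask the allocator to explain the first unassigned shard it finds
curl -s 'http://localhost:9200/_cluster/allocation/explain?pretty'
```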
02:56 <ryankemper@cumin1001> END (ERROR) - Cookbook sre.elasticsearch.rolling-operation (exit_code=97) reboot without plugin upgrade (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw reboot - ryankemper@cumin1001 - T280563 [production]
02:55 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation reboot without plugin upgrade (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw reboot - ryankemper@cumin1001 - T280563 [production]
00:35 <Amir1> sudo service mailman3-web restart [production]
2021-05-05
23:35 <ryankemper> T281621 T281327 [Elastic] Banned `elastic2033` and `elastic2043` from the Cirrussearch Elasticsearch clusters [production]
23:10 <urbanecm@deploy1002> Synchronized php-1.37.0-wmf.4/extensions/GlobalWatchlist/modules/SpecialGlobalWatchlist.display.css: 4947241f876234aabc578409c3691fb791c8f715: Fix centering of as-of label (duration: 01m 08s) [production]
22:13 <mutante> welcome new deployer derick - user created on deploy1002 and bastions (T281564) [production]
22:05 <mutante> pushing puppet run on all bastion hosts [production]
21:45 <mutante> mailing lists: approved Alangi Derick's pending request for membership in ops mailing list (is becoming deployer) T281309 [production]
21:37 <urbanecm@deploy1002> Synchronized php-1.37.0-wmf.4/extensions/CentralAuth/includes/CentralAuthUser.php: 52b134ed84c1c8ef5fcd6927f03567879553d31c: Cross-wiki block should pass correct wiki blocker (T281972) (duration: 01m 09s) [production]
21:34 <urbanecm@deploy1002> Synchronized php-1.37.0-wmf.3/extensions/CentralAuth/includes/CentralAuthUser.php: 6526884848d0bb88c83cec2c6b39461542e21ef6: Cross-wiki block should pass correct wiki blocker (T281972) (duration: 01m 08s) [production]
21:32 <urbanecm@deploy1002> Synchronized php-1.37.0-wmf.4/includes/user/UserIdentityValue.php: f189c4627cfc692fb743160030a5e5ab92df1485: UserIdentityValue: Introduce convenience static factory methods (T281972) (duration: 01m 09s) [production]
21:30 <urbanecm@deploy1002> Synchronized php-1.37.0-wmf.3/includes/user/UserIdentityValue.php: 8ffb52d5cad9e003696200b9cd3e957ab26bc868: UserIdentityValue: Introduce convenience static factory methods (T281972) (duration: 01m 11s) [production]
21:29 <urbanecm@deploy1002> sync-file aborted: 8ffb52d5cad9e003696200b9cd3e957ab26bc868: UserIdentityValue: Introduce convenience static factory methods (T281972) (duration: 00m 04s) [production]
20:37 <ejegg> updated email preferences wiki (donorwiki) from d449599540 to 9f51ace546 [production]
20:36 <ejegg> updated payments-wiki from d449599540 to 9f51ace546 [production]
20:20 <ejegg> updated email preferences wiki (donorwiki) from a232fc3438 to d449599540 [production]
19:59 <jbond42> re-enable puppet post 685485 [production]
19:53 <jbond42> disable puppet: rolling out change (685485) which affects all hosts [production]
19:21 <brennen@deploy1002> Synchronized php: group1 wikis to 1.37.0-wmf.4 (duration: 01m 07s) [production]
19:19 <brennen@deploy1002> rebuilt and synchronized wikiversions files: group1 wikis to 1.37.0-wmf.4 [production]
19:16 <jbond42> ignore the last log message; will wait for deploy to finish [production]
19:16 <brennen@deploy1002> Synchronized php-1.37.0-wmf.4/tests/phpunit/includes: Backport: [[gerrit:685480|Fix order of joins in SpecialRecentChanges (T281981)]] (duration: 01m 10s) [production]
19:16 <jbond42> disable puppet: rolling out change (685485) which affects all hosts [production]
19:14 <brennen@deploy1002> Synchronized php-1.37.0-wmf.4/includes/specials: Backport: [[gerrit:685480|Fix order of joins in SpecialRecentChanges (T281981)]] (duration: 01m 08s) [production]
19:10 <Amir1> starting migration of public mailing lists in group b and c to mailman3 (T280322) [production]
19:01 <brennen> 1.37.0-wmf.4 train status (T281145): deploying patch for T282038 and then rolling forward to group1. [production]
18:59 <bblack@cumin1001> conftool action : set/pooled=yes; selector: name=cp501[46].eqsin.wmnet [production]
18:50 <bblack@cumin1001> conftool action : set/pooled=yes; selector: name=cp501[35].eqsin.wmnet [production]
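These two conftool actions repool the eqsin cache hosts in pairs; the selector value is a regex, so `cp501[35]` matches cp5013 and cp5015. A sketch of the equivalent confctl invocation, assuming the standard confctl CLI on a cumin host:

```
# Repool two eqsin cache hosts; the 'name' selector takes a regex
sudo confctl select 'name=cp501[35].eqsin.wmnet' set/pooled=yes
```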
18:43 <tgr_> Morning deploys done [production]
18:43 <tgr@deploy1002> Synchronized php-1.37.0-wmf.4/extensions/GrowthExperiments/modules/homepage/addlink/AddLinkArticleTarget.js: Backport: [[gerrit:685482|Prevent edit notices from appearing (T281960)]] (duration: 01m 08s) [production]