2020-06-04
09:05 <akosiaris@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
09:05 <akosiaris@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
09:05 <akosiaris@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
09:05 <akosiaris@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
09:04 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:04 <akosiaris@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
09:03 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:03 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:03 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:03 <moritzm> deploying Java security updates on Elasticsearch nodes [production]
09:03 <akosiaris@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:00 <akosiaris@cumin1001> END (FAIL) - Cookbook sre.hosts.downtime (exit_code=99) [production]
09:00 <akosiaris@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:00 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:00 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:00 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:00 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:00 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:00 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:59 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:59 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:59 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:58 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:50 <marostegui> Repool labsdb1009 after running maintain-views T252219 [production]
08:42 <moritzm> restarting archiva to pick up Java security updates [production]
08:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1107 to clone db1091 on s1 T253217', diff saved to https://phabricator.wikimedia.org/P11392 and previous config saved to /var/cache/conftool/dbconfig/20200604-081545-marostegui.json [production]
08:14 <marostegui> Run sudo /usr/local/sbin/maintain-views --all-databases --replace-all on labsdb1009 - T252219 [production]
07:49 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
07:45 <marostegui> Depool labsdb1009 - T252219 [production]
07:45 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime [production]
07:33 <oblivian@puppetmaster1001> conftool action : set/weight=10; selector: dc=eqiad,cluster=labweb,service=labweb-ssl [production]
07:32 <oblivian@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: dc=eqiad,cluster=cloudceph,service=cloudceph [production]
06:52 <mutante> mwmaint1002 started mediawiki_job_cirrus_build_completion_indices_eqiad.service [production]
06:06 <oblivian@puppetmaster1001> conftool action : set/weight=10; selector: name=logstash200.* [production]
06:05 <oblivian@puppetmaster1001> conftool action : set/weight=10; selector: name=logstash100.* [production]
06:04 <oblivian@puppetmaster1001> conftool action : set/weight=10; selector: cluster=eventschemas,service=eventschemas [production]
06:02 <oblivian@puppetmaster1001> conftool action : set/weight=10; selector: dc=codfw,cluster=elasticsearch,service=elasticsearch.* [production]
06:01 <oblivian@puppetmaster1001> conftool action : set/weight=10; selector: dc=codfw,cluster=elasticsearch,service=elasticsearch [production]
05:59 <_joe_> fixing weights of cp2040 T245594 [production]
05:31 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
05:28 <elukey@cumin1001> START - Cookbook sre.hosts.downtime [production]
00:36 <reedy@deploy1001> Synchronized php-1.35.0-wmf.35/includes/specials/SpecialUserrights.php: T254417 T251534 (duration: 01m 06s) [production]
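Note on the conftool actions logged above (05:59–07:33): those set/weight and set/pooled changes are issued through confctl. A minimal sketch of equivalent invocations, assuming the usual `confctl select '<selector>' set/<key>=<value>` form and suitable privileges; the selectors and values below simply mirror the 06:02 and 07:32 log entries and are not additional actions:

  # re-issue the 06:02 action: set weight=10 on all codfw elasticsearch services
  sudo confctl select 'dc=codfw,cluster=elasticsearch,service=elasticsearch.*' set/weight=10
  # re-issue the 07:32 action: pool the cloudceph service with weight 10
  sudo confctl select 'dc=eqiad,cluster=cloudceph,service=cloudceph' set/pooled=yes:weight=10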
2020-06-03
23:08 <reedy@deploy1001> Synchronized wmf-config/CommonSettings-labs.php: T249834 (duration: 01m 06s) [production]
23:06 <reedy@deploy1001> Synchronized wmf-config/InitialiseSettings-labs.php: T249834 (duration: 01m 06s) [production]
22:22 <ryankemper@cumin2001> END (PASS) - Cookbook sre.elasticsearch.rolling-upgrade (exit_code=0) [production]
21:54 <jforrester@deploy1001> rebuilt and synchronized wikiversions files: Re-rolling group1 to 1.35.0-wmf.35 for T253023 [production]
21:49 <jforrester@deploy1001> Synchronized php-1.35.0-wmf.35/extensions/EventStreamConfig/includes/ApiStreamConfigs.php: T254390 ApiStreamConfigs: If the 'constraints' parameter is unset, don't explode (duration: 01m 06s) [production]
21:43 <cstone> civicrm revision changed from 63508b01b9 to 11b0e7c7e5 [production]
21:16 <ryankemper@cumin2001> START - Cookbook sre.elasticsearch.rolling-upgrade [production]
21:15 <ryankemper> The previously run `_cluster/reroute?retry_failed=true` command worked as intended; the two shards in question have recovered and we're back to green cluster status. We're now in a known state and ready to proceed with the eqiad rolling upgrade [production]
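For context on the 21:15 entry: retrying failed shard allocations and verifying cluster status are plain Elasticsearch REST calls. A minimal sketch, assuming an Elasticsearch node reachable on localhost:9200 (the host and port are assumptions, not taken from the log):

  # ask the master to retry shard allocations that previously hit the max-retry limit
  curl -XPOST 'http://localhost:9200/_cluster/reroute?retry_failed=true'
  # confirm the cluster has returned to green before resuming the rolling upgrade
  curl 'http://localhost:9200/_cluster/health?pretty'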