2020-03-02
14:42 <vgutierrez> Re-enable BGP in lvs5001 - T245984 [production]
14:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Give weight to es4 and es5 unused eqiad slaves T246072', diff saved to https://phabricator.wikimedia.org/P10579 and previous config saved to /var/cache/conftool/dbconfig/20200302-144033-marostegui.json [production]
14:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Give weight to es4 and es5 unused codfw slaves T246072', diff saved to https://phabricator.wikimedia.org/P10578 and previous config saved to /var/cache/conftool/dbconfig/20200302-143915-marostegui.json [production]
14:38 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q12M for the new term store everywhere (was Q10M) + warm db1126 & db1111 caches (T219123) cache bust (duration: 00m 56s) [production]
14:37 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q12M for the new term store everywhere (was Q10M) + warm db1126 & db1111 caches (T219123) (duration: 00m 58s) [production]
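Note: the "Synchronized wmf-config/..." entries above are scap deployments from the deployment host; a minimal sketch of the equivalent invocation, assuming scap's sync-file subcommand and reusing the log message shown above:
    # on deploy1001, from the MediaWiki staging checkout (path assumed)
    scap sync-file wmf-config/InitialiseSettings.php \
        'Reading up to Q12M for the new term store everywhere (was Q10M) + warm db1126 & db1111 caches (T219123)'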
14:37 <vgutierrez@cumin2001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) [production]
14:36 <vgutierrez@cumin2001> START - Cookbook sre.hosts.decommission [production]
14:36 <vgutierrez> running the decommission cookbook against lvs2005 - T246666 [production]
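Note: the START/END pairs above are spicerack cookbook runs launched from a cumin host; a rough sketch of the invocation, with the host and task taken from the entry above and the exact flag names an assumption:
    # on cumin2001; flag syntax is an assumption, check the cookbook's --help
    sudo cookbook sre.hosts.decommission lvs2005.codfw.wmnet -t T246666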
14:20 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 200 to 250 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10577 and previous config saved to /var/cache/conftool/dbconfig/20200302-142017-marostegui.json [production]
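Note: the dbctl entries in this log record conftool database weight changes committed from a cumin host; a hedged sketch of that workflow, with the subcommand names assumed rather than confirmed:
    # on cumin1001: edit the pooled weight, review the pending diff, then commit with the message logged above
    dbctl instance db1111 edit
    dbctl config diff
    dbctl config commit -m 'Increase weight from 200 to 250 on db1111 T246447'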
14:19 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
14:17 <addshore> START warm cache for db1111 & db1126 for Q10-12 million T219123 (pass 3) [production]
14:15 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
14:05 <vgutierrez> update puppet compiler facts [production]
13:58 <addshore> START warm cache for db1111 & db1126 for Q10-12 million T219123 (pass 2) [production]
13:55 <vgutierrez> Switch from globalsign to LE as unified cert vendor on ulsfo - T230687 [production]
13:53 <vgutierrez> Switch from globalsign to LE as unified cert vendor on cp4026 - T230687 [production]
13:48 <vgutierrez> reimage lvs5001 with buster - T245984 [production]
13:33 <kart_> Update cxserver to 2020-03-02-115344-production: Reverting T246319 [production]
13:30 <kartik@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'cxserver' for release 'production' . [production]
13:28 <kartik@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'cxserver' for release 'production' . [production]
13:26 <kartik@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'cxserver' for release 'staging' . [production]
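Note: reading the reverse-chronological log bottom-up, the cxserver releases above follow the usual staging → eqiad → codfw helmfile rollout; a minimal sketch, assuming the standard per-datacenter helmfile environments and running from the service's helmfile directory on the deployment host:
    # on deploy1001, inside the cxserver helmfile directory (path assumed)
    helmfile -e staging apply
    helmfile -e eqiad apply
    helmfile -e codfw apply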
13:18 <elukey> roll restart Hadoop master daemons on an-master100[1,2] for openjdk upgrades [production]
13:11 <addshore> START warm cache for db1111 & db1126 for Q10-12 million T219123 (pass 1) [production]
13:08 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q10M for the new term store for clients (was Q8M) + warm db1126 & db1111 caches (T219123) cache bust (duration: 00m 55s) [production]
13:07 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q10M for the new term store for clients (was Q8M) + warm db1126 & db1111 caches (T219123) (duration: 00m 56s) [production]
12:58 <Urbanecm> Deploy security fix for T229731 [production]
12:16 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: 8280f81: Set cswiki and cywiki to use custom minerva logo again (T246535): take II (duration: 00m 57s) [production]
12:15 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: 8280f81: Set cswiki and cywiki to use custom minerva logo again (T246535) (duration: 00m 58s) [production]
12:09 <oblivian@deploy1001> Synchronized wmf-config/ProductionServices.php: Switch search to use envoy as a proxy (duration: 00m 56s) [production]
11:54 <vgutierrez> enable BGP in lvs5002 - T245984 [production]
11:44 <addshore> START warm cache for db1111 & db1126 for Q8-10 million T219123 (pass 2) [production]
11:41 <jdrewniak@deploy1001> Synchronized portals: Wikimedia Portals Update: [[gerrit:576004| Bumping portals to master (563985)]] (duration: 00m 57s) [production]
11:40 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:576004| Bumping portals to master (563985)]] (duration: 00m 57s) [production]
11:39 <jbond42> enable strict_hostname_checking on the puppet masters https://gerrit.wikimedia.org/r/c/operations/puppet/+/575220 [production]
11:12 <kart_> Update cxserver to 2020-02-28-043702-production (T246319) [production]
11:07 <kartik@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'cxserver' for release 'production' . [production]
11:05 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
11:04 <kartik@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'cxserver' for release 'production' . [production]
11:03 <vgutierrez@cumin1001> START - Cookbook sre.hosts.downtime [production]
11:02 <kartik@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'cxserver' for release 'staging' . [production]
11:01 <addshore> START warm cache for db1111 & db1126 for Q8-10 million T219123 (pass 1) [production]
10:55 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Reading up to Q8M for the new term store for clients (was Q6M) + warm db1126 & db1111 caches (T219123) (duration: 00m 58s) [production]
10:35 <vgutierrez> reimage lvs5002 with buster - T245984 [production]
10:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 150 to 200 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10576 and previous config saved to /var/cache/conftool/dbconfig/20200302-103445-marostegui.json [production]
10:22 <addshore> START warm cache for db1111 & db1126 for Q6-8 million T219123 (pass 2) [production]
09:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Increase weight from 100 to 150 on db1111 T246447', diff saved to https://phabricator.wikimedia.org/P10575 and previous config saved to /var/cache/conftool/dbconfig/20200302-095921-marostegui.json [production]
09:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1119 after upgrade T239791', diff saved to https://phabricator.wikimedia.org/P10574 and previous config saved to /var/cache/conftool/dbconfig/20200302-095841-marostegui.json [production]
09:52 <moritzm> installing remaining curl security updates [production]
09:51 <addshore> START warm cache for db1111 & db1126 for Q6-8 million T219123 (pass 1) [production]
09:50 <elukey> powercycle an-worker1083 (no ssh, mgmt console available but tty not really usable, CPU soft lockups reported) [production]