2020-07-22
09:34 <jayme@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'blubberoid' for release 'staging' . [production]
09:27 <akosiaris@deploy2001> helmfile [CODFW] Ran 'sync' command on namespace 'mobileapps' for release 'production' . [production]
09:27 <akosiaris@deploy2001> helmfile [CODFW] Ran 'sync' command on namespace 'mobileapps' for release 'nontls' . [production]
09:25 <akosiaris> bump memory limits for mobileapps by 25% T218733 [production]
09:25 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'mobileapps' for release 'staging' . [production]
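Note: the helmfile entries above are written by the service deployment tooling on the deploy hosts. A minimal sketch of the kind of invocation behind them, assuming the usual layout of one helmfile directory per service namespace (the /srv/deployment-charts path and the environment name are assumptions, not taken from the log):

    # Hypothetical sketch: sync the 'staging' release of the mobileapps service.
    cd /srv/deployment-charts/helmfile.d/services/mobileapps
    helmfile -e staging sync    # standard helmfile: select the environment, apply all releases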
09:10 <jayme> updated docker-report to 0.0.7-1 on deneb [production]
09:09 <jayme> import docker-report 0.0.7-1 to buster-wikimedia [production]
09:06 <gehel> restarting blazegraph on all wdqs nodes - new vocabulary [production]
08:48 <dcausse> restarting blazegraph on wdqs1010 (testing new vocab) [production]
08:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1126', diff saved to https://phabricator.wikimedia.org/P12017 and previous config saved to /var/cache/conftool/dbconfig/20200722-084613-marostegui.json [production]
08:41 <kormat@cumin1001> dbctl commit (dc=all): 'Increase es1020 to 100% pooled in es4, reduce es1021 to weight 0 T257284', diff saved to https://phabricator.wikimedia.org/P12016 and previous config saved to /var/cache/conftool/dbconfig/20200722-084159-kormat.json [production]
08:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1126', diff saved to https://phabricator.wikimedia.org/P12015 and previous config saved to /var/cache/conftool/dbconfig/20200722-083926-marostegui.json [production]
08:35 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db1084 and db1107', diff saved to https://phabricator.wikimedia.org/P12014 and previous config saved to /var/cache/conftool/dbconfig/20200722-083535-marostegui.json [production]
08:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1126', diff saved to https://phabricator.wikimedia.org/P12013 and previous config saved to /var/cache/conftool/dbconfig/20200722-083140-marostegui.json [production]
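Note: the dbctl entries in this block follow the usual depool/repool cycle: adjust an instance's pooling, then commit, which produces the Phabricator diff paste and the /var/cache/conftool backup referenced in each entry. A minimal sketch, assuming dbctl's instance/config subcommands (host name, percentage and flag spellings are illustrative):

    # Hypothetical sketch of one "slowly repool" step followed by a commit.
    dbctl instance db1126 pool -p 25                 # bring db1126 back at reduced pooling
    dbctl config commit -m 'Slowly repool db1126'    # writes the diff and backup noted in the log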
08:30 <kart_> Updated cxserver to 2020-07-20-200559-production (T257674) [production]
08:28 <kartik@deploy1001> helmfile [EQIAD] Ran 'sync' command on namespace 'cxserver' for release 'production' . [production]
08:25 <kartik@deploy1001> helmfile [CODFW] Ran 'sync' command on namespace 'cxserver' for release 'production' . [production]
08:25 <volans@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1084 and db1107', diff saved to https://phabricator.wikimedia.org/P12012 and previous config saved to /var/cache/conftool/dbconfig/20200722-082309-marostegui.json [production]
08:22 <kartik@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'cxserver' for release 'staging' . [production]
08:20 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1126', diff saved to https://phabricator.wikimedia.org/P12010 and previous config saved to /var/cache/conftool/dbconfig/20200722-082023-marostegui.json [production]
08:19 <volans@cumin1001> START - Cookbook sre.dns.netbox [production]
08:16 <akosiaris> increase codfw mobileapps kubernetes traffic to 96% T218733. Take #2. Let's see if I can reproduce the weird increases in p99 latencies and figure out their cause [production]
08:15 <akosiaris@cumin1001> conftool action : set/weight=1; selector: dc=codfw,service=mobileapps,name=scb.* [production]
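Note: the conftool line above is the audit record of a confctl run; it maps onto an invocation roughly like the following (the selector and action are copied from the entry, the CLI shape itself is an assumption):

    # Hypothetical confctl call matching the logged selector and action.
    confctl select 'dc=codfw,service=mobileapps,name=scb.*' set/weight=1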
08:14 <kormat@cumin1001> dbctl commit (dc=all): 'Increase es1020 to 75% pooled in es4, reduce es1021 to weight 25 T257284', diff saved to https://phabricator.wikimedia.org/P12009 and previous config saved to /var/cache/conftool/dbconfig/20200722-081457-kormat.json [production]
08:13 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1084 and db1107', diff saved to https://phabricator.wikimedia.org/P12008 and previous config saved to /var/cache/conftool/dbconfig/20200722-081330-marostegui.json [production]
08:12 <moritzm> Turnilo switched to CAS [production]
08:05 <jayme> updated docker-report to 0.0.6-1 on deneb [production]
07:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db1084 and db1107', diff saved to https://phabricator.wikimedia.org/P12007 and previous config saved to /var/cache/conftool/dbconfig/20200722-075749-marostegui.json [production]
07:53 <kormat@cumin1001> dbctl commit (dc=all): 'Increase es1020 to 50% pooled in es4 T257284', diff saved to https://phabricator.wikimedia.org/P12006 and previous config saved to /var/cache/conftool/dbconfig/20200722-075312-kormat.json [production]
07:50 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1084 to s1, depooled T253217', diff saved to https://phabricator.wikimedia.org/P12005 and previous config saved to /var/cache/conftool/dbconfig/20200722-075040-marostegui.json [production]
07:49 <jayme> import docker-report 0.0.6-1 to buster-wikimedia [production]
07:40 <jynus> stop db1145 for hw maintenance T258249 [production]
06:47 <elukey> update analytics-in4/6 filters on cr1/cr2 eqiad (ref https://gerrit.wikimedia.org/r/c/operations/homer/public/+/614702) [production]
06:26 <marostegui> Stop MySQL on db1107 [production]
06:11 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
06:09 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime [production]
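Note: START/END pairs such as the two above are emitted automatically by Spicerack cookbooks. A minimal sketch of how sre.hosts.downtime is typically invoked, assuming its usual reason/duration options (the host, duration and exact flag names are assumptions):

    # Hypothetical sketch: set Icinga downtime on a host before maintenance.
    sudo cookbook sre.hosts.downtime --hours 2 -r 'Clone db1084 from db1107' 'db1107.eqiad.wmnet'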
06:04 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1107 to clone db1084', diff saved to https://phabricator.wikimedia.org/P12003 and previous config saved to /var/cache/conftool/dbconfig/20200722-060432-marostegui.json [production]
05:16 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1126', diff saved to https://phabricator.wikimedia.org/P12002 and previous config saved to /var/cache/conftool/dbconfig/20200722-051607-marostegui.json [production]
2020-07-21
23:37 <ebernhardson@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Bump cirrus MLR models to latest (duration: 01m 06s) [production]
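Note: "Synchronized wmf-config/..." entries are written by scap when a single config file is pushed to the fleet. A minimal sketch, assuming scap's single-file sync subcommand (the file path and message are the ones logged above):

    # Hypothetical sketch of the config sync behind the entry above.
    scap sync-file wmf-config/InitialiseSettings.php 'Bump cirrus MLR models to latest'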
23:13 <Urbanecm> Evening backport window done [production]
23:12 <urbanecm@deploy1001> Synchronized wmf-config/CommonSettings.php: 7a50168d54b5e86834606fb8d7880eb3a923ffd5: Updating UploadWizard template: PD-old-70-1923->PD-old-70-expired (T258523) (duration: 01m 06s) [production]
23:06 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: 7acc9d966a07d589bb6aed5f801c9e1defc75fe1: Enable $wgWatchlistExpiry on testwiki (T257506) (duration: 01m 08s) [production]
19:10 <jhuneidi@deploy1001> rebuilt and synchronized wikiversions files: group0 wikis to 1.36.0-wmf.1 [production]
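Note: the wikiversions entry above is the train step that switches a wiki group to a new branch. A minimal sketch, assuming scap's wikiversions sync subcommand (the message is the one logged, the exact command form is an assumption):

    # Hypothetical sketch of the group0 switch to 1.36.0-wmf.1.
    scap sync-wikiversions 'group0 wikis to 1.36.0-wmf.1'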
19:02 <catrope@deploy1001> Synchronized php-1.36.0-wmf.1/includes/Storage/PageUpdater.php: Fix handling of null edits (T257766) (duration: 01m 06s) [production]
19:01 <catrope@deploy1001> Synchronized php-1.35.0-wmf.41/includes/Storage/PageUpdater.php: Fix handling of null edits (T257766) (duration: 01m 11s) [production]
18:33 <jhuneidi@deploy1001> Finished scap: testwikis wikis to 1.36.0-wmf.1 (duration: 41m 22s) [production]
18:27 <ejegg> restored new URL for TY page in payments-wiki settings [production]
18:22 <mforns@deploy1001> Finished deploy [analytics/refinery@0c25de1] (thin): Redeploying to unbreak unique devices per domain monthly THIN [analytics/refinery@0c25de19a3a309276654b4463cca4f574336d8fd] (duration: 00m 07s) [production]
18:22 <mforns@deploy1001> Started deploy [analytics/refinery@0c25de1] (thin): Redeploying to unbreak unique devices per domain monthly THIN [analytics/refinery@0c25de19a3a309276654b4463cca4f574336d8fd] [production]
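Note: "Started/Finished deploy [analytics/refinery@...]" pairs come from scap's git-based deployments, run from the repository checkout on the deploy host; "(thin)" names a scap deployment environment. A minimal sketch (the checkout path and the environment flag are assumptions):

    # Hypothetical sketch of the refinery redeploy behind the two entries above.
    cd /srv/deployment/analytics/refinery
    scap deploy -e thin 'Redeploying to unbreak unique devices per domain monthly THIN'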