2020-09-17
13:13 <jayme> restarting pybal on lvs1016.eqiad.wmnet,lvs2010.codfw.wmnet [production]
13:03 <liw@deploy1001> rebuilt and synchronized wikiversions files: all wikis to 1.36.0-wmf.9 [production]
12:18 <cmjohnson1> pdu swap maintenance beginning now for racks D1, D2 and C1 eqiad [production]
11:24 <matthiasmullie> End Euro B&C [production]
11:24 <mlitn@deploy1001> Synchronized php-1.36.0-wmf.9/extensions/NavigationTiming/: Account for empty layout shift sources array (duration: 01m 05s) [production]
11:22 <mlitn@deploy1001> Synchronized php-1.36.0-wmf.9/extensions/WikimediaEvents/: Disable MediaSearch A/B test (duration: 01m 08s) [production]
11:10 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool es2031 T261717', diff saved to https://phabricator.wikimedia.org/P12627 and previous config saved to /var/cache/conftool/dbconfig/20200917-111028-marostegui.json [production]
11:06 <vgutierrez> update to acme-chief 0.29 on acmechief[12]001 - T263006 [production]
11:04 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'api-gateway' for release 'production' . [production]
11:04 <vgutierrez> upload acme-chief 0.29 to apt.wm.o (buster) - T263006 [production]
11:04 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'api-gateway' for release 'production' . [production]
11:03 <oblivian@cumin1001> conftool action : set/pooled=false; selector: dnsdisc=wikifeeds,name=eqiad [production]
10:58 <marostegui> Stop mysql on db1125 for PDU maintenance, lag will appear on s2, s4, s6 and s7 on labsdb hosts T261459 [production]
10:58 <oblivian@cumin1001> conftool action : set/pooled=true; selector: dnsdisc=wikifeeds,name=codfw [production]
10:51 <oblivian@cumin1001> conftool action : set/pooled=false; selector: dnsdisc=wikifeeds,name=codfw [production]
10:48 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2031 T261717', diff saved to https://phabricator.wikimedia.org/P12626 and previous config saved to /var/cache/conftool/dbconfig/20200917-104816-marostegui.json [production]
10:46 <oblivian@cumin1001> conftool action : set/pooled=true; selector: dnsdisc=wikifeeds,name=eqiad [production]
10:40 <oblivian@cumin1001> conftool action : set/ttl=10; selector: dnsdisc=wikifeeds [production]
10:34 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'api-gateway' for release 'production' . [production]
10:27 <oblivian@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'mobileapps' for release 'production' . [production]
10:22 <oblivian@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'mobileapps' for release 'production' . [production]
10:20 <oblivian@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'mobileapps' for release 'staging' . [production]
10:18 <oblivian@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'wikifeeds' for release 'production' . [production]
10:17 <oblivian@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'wikifeeds' for release 'staging' . [production]
09:14 <oblivian@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'wikifeeds' for release 'production' . [production]
08:49 <jayme> deleting some random pods in kubernetes staging to rebalance load back on kubestage1002 - T262527 [production]
08:43 <jayme> uncordoned kubestage1002 after kernel upgrade - T262527 [production]
08:37 <jayme@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) [production]
08:37 <godog> graphite compress /var/log/carbon logs older than 2d [production]
08:29 <jayme@cumin1001> START - Cookbook sre.hosts.reboot-single [production]
08:25 <jayme> reboot kubestage1002 for kernel upgrade - T262527 [production]
08:24 <godog> graphite add 300G to /srv [production]
07:55 <jayme> draining kubestage1002 for kernel upgrade - T262527 [production]
07:55 <jayme> cordoning kubestage1002 for kernel upgrade - T262527 [production]
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2031 T261717', diff saved to https://phabricator.wikimedia.org/P12624 and previous config saved to /var/cache/conftool/dbconfig/20200917-070145-marostegui.json [production]
06:55 <hashar> Taking a heap dump of Gerrit JVM [production]
06:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2031 T261717', diff saved to https://phabricator.wikimedia.org/P12623 and previous config saved to /var/cache/conftool/dbconfig/20200917-061931-marostegui.json [production]
06:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2031 T261717', diff saved to https://phabricator.wikimedia.org/P12622 and previous config saved to /var/cache/conftool/dbconfig/20200917-060312-marostegui.json [production]
05:52 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool es2015 after cloning es2031 T261717', diff saved to https://phabricator.wikimedia.org/P12621 and previous config saved to /var/cache/conftool/dbconfig/20200917-055219-marostegui.json [production]
05:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1131 for on-site maintenance', diff saved to https://phabricator.wikimedia.org/P12620 and previous config saved to /var/cache/conftool/dbconfig/20200917-055158-marostegui.json [production]
05:46 <marostegui> Stop mysql on db1131 - T262901 [production]
05:42 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es2031 on es2 for the first time with minimal weight T261717', diff saved to https://phabricator.wikimedia.org/P12619 and previous config saved to /var/cache/conftool/dbconfig/20200917-054226-marostegui.json [production]
05:35 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2015 after cloning es2031 T261717', diff saved to https://phabricator.wikimedia.org/P12618 and previous config saved to /var/cache/conftool/dbconfig/20200917-053503-marostegui.json [production]
05:23 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2015 after cloning es2031 T261717', diff saved to https://phabricator.wikimedia.org/P12617 and previous config saved to /var/cache/conftool/dbconfig/20200917-052347-marostegui.json [production]
05:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool es2011 as es1 master and es2017 as es3 master and then depool es2018 and es2012 to clone es2029 and es2030 T261717', diff saved to https://phabricator.wikimedia.org/P12616 and previous config saved to /var/cache/conftool/dbconfig/20200917-051741-marostegui.json [production]
05:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool es2015 after cloning es2031 T261717', diff saved to https://phabricator.wikimedia.org/P12615 and previous config saved to /var/cache/conftool/dbconfig/20200917-050739-marostegui.json [production]
04:53 <marostegui> Deploy schema change on s1 eqiad primary master - T238966 [production]
01:22 <Krinkle> krinkle@mwmaint1002 synced docroot/noc – https://gerrit.wikimedia.org/r/620138 [production]
01:22 <Krinkle> krinkle@mwmaint2001 synced docroot/noc – https://gerrit.wikimedia.org/r/620138 [production]
2020-09-16
23:41 <catrope@deploy1001> Synchronized php-1.36.0-wmf.8/extensions/FlaggedRevs: T262970 (duration: 01m 06s) [production]