2020-09-24
10:51 <jbond42> disable puppet fleet wide to deploy a puppetmaster change [production]
10:49 <moritzm> installing libproxy security updates [production]
10:23 <volans> uploaded python3-wmflib_0.0.2 to apt.wikimedia.org buster-wikimedia [production]
10:20 <kormat@cumin1001> dbctl commit (dc=all): 'db2138:3312 (re)pooling @ 100%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12789 and previous config saved to /var/cache/conftool/dbconfig/20200924-102025-kormat.json [production]
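The db2138:3312 entries above and below record the standard staged repool around a schema change: depool the replica, apply the change, then restore traffic in steps (25% → 50% → 75% → 100%), committing the configuration after each step. A minimal sketch of the underlying commands on a cumin host, assuming the usual dbctl instance/config subcommands (instance name and ticket are taken from these entries, the rest is illustrative):

    dbctl instance db2138:3312 depool
    dbctl config commit -m 'db2138:3312 depooling: schema change T259831'
    # ... apply the schema change on the depooled replica ...
    dbctl instance db2138:3312 pool -p 25
    dbctl config commit -m 'db2138:3312 (re)pooling @ 25%: schema change T259831'
    # repeat with -p 50, -p 75, -p 100, waiting between steps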
10:05 <kormat@cumin1001> dbctl commit (dc=all): 'db2138:3312 (re)pooling @ 75%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12788 and previous config saved to /var/cache/conftool/dbconfig/20200924-100521-kormat.json [production]
10:02 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'api-gateway' for release 'staging' . [production]
10:01 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'api-gateway' for release 'production' . [production]
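The helmfile entries in this log record chart syncs run from the deployment host; roughly, assuming the deployment-charts checkout layout (the path is an assumption, the helmfile environment and selector flags are standard):

    cd /srv/deployment-charts/helmfile.d/services/api-gateway
    helmfile -e staging -l name=staging sync     # sync a single release
    helmfile -e staging sync                     # or all releases in the environment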
09:50 <kormat@cumin1001> dbctl commit (dc=all): 'db2138:3312 (re)pooling @ 50%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12787 and previous config saved to /var/cache/conftool/dbconfig/20200924-095018-kormat.json [production]
09:50 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
09:50 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
09:48 <jayme> restart pybal on lvs1015.eqiad.wmnet,lvs2009.codfw.wmnet - T255875 [production]
09:46 <jayme> restart pybal on lvs1016.eqiad.wmnet,lvs2010.codfw.wmnet - T255875 [production]
09:43 <jayme> running puppet on lvs servers - T255875 [production]
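Restarts like the pybal ones above are normally driven from a cumin host rather than by hand on each LVS box; a sketch, assuming plain cumin with a host-list query (query syntax depends on the configured backend):

    sudo cumin 'lvs1016.eqiad.wmnet,lvs2010.codfw.wmnet' 'puppet agent -t'            # pick up the config change first
    sudo cumin 'lvs1016.eqiad.wmnet,lvs2010.codfw.wmnet' 'systemctl restart pybal'    # then restart pybal, one pair at a time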
09:35 <kormat@cumin1001> dbctl commit (dc=all): 'db2138:3312 (re)pooling @ 25%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12786 and previous config saved to /var/cache/conftool/dbconfig/20200924-093514-kormat.json [production]
09:25 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
09:25 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
09:20 <ema> cp4021: repool with varnish 6.0.6-1wm1 T263557 [production]
09:19 <ema> cp4021: redepool with varnish to 6.0.6-1wm1 T263557 [production]
09:14 <kormat@cumin1001> dbctl commit (dc=all): 'db2138:3312 depooling: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12785 and previous config saved to /var/cache/conftool/dbconfig/20200924-091445-kormat.json [production]
09:14 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:14 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
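The START/END pairs in this log come from the spicerack cookbook runner on the cumin hosts; scheduling Icinga downtime ahead of the db2138 work looks roughly like this, assuming the sre.hosts.downtime cookbook's usual duration and reason arguments (exact flags may differ):

    sudo cookbook sre.hosts.downtime --hours 4 -r 'schema change T259831' 'db2138.codfw.wmnet'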
09:14 <ema> cp4021: depool and upgrade varnish to 6.0.6-1wm1 T263557 [production]
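The cp4021 entries record the usual cache-host package upgrade flow: drain the host, upgrade, then put it back in service once healthy. A sketch, assuming the conftool pool/depool wrapper scripts available on cache hosts:

    sudo depool                              # drain cp4021 from the load balancers
    sudo apt-get install varnish=6.0.6-1wm1  # upgrade to the target build
    sudo systemctl restart varnish.service
    sudo pool                                # take traffic again (the 09:19/09:20 entries above)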
09:05 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
09:04 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
08:59 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
08:59 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
08:38 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:38 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2127 for MCR schema change', diff saved to https://phabricator.wikimedia.org/P12784 and previous config saved to /var/cache/conftool/dbconfig/20200924-082443-marostegui.json [production]
08:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db2109 (re)pooling @ 100%: Slowly repool db2109 ', diff saved to https://phabricator.wikimedia.org/P12783 and previous config saved to /var/cache/conftool/dbconfig/20200924-082319-root.json [production]
08:20 <volans@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:17 <volans@cumin1001> START - Cookbook sre.dns.netbox [production]
08:15 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
08:15 <XioNoX> configure vrrp_master_pinning in codfw - T263212 [production]
08:10 <moritzm> installing mariadb-10.1/mariadb-10.3 updates (packaged version from Debian, not the wmf-mariadb variants we used for mysqld) [production]
08:09 <volans@cumin1001> START - Cookbook sre.hosts.decommission [production]
08:08 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
08:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db2109 (re)pooling @ 66%: Slowly repool db2109 ', diff saved to https://phabricator.wikimedia.org/P12782 and previous config saved to /var/cache/conftool/dbconfig/20200924-080816-root.json [production]
07:58 <volans@cumin1001> START - Cookbook sre.hosts.decommission [production]
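The decommission attempts above (two ending with exit_code=1) and the sre.dns.netbox run at 08:17 reflect the usual sequence: the decommission cookbook wipes and removes the host from the infrastructure, and the Netbox-generated DNS records are then regenerated and propagated. A sketch, assuming both cookbooks accept a host/task argument (argument names are illustrative; the host is taken from the es-host cleanup entries nearby):

    sudo cookbook sre.hosts.decommission es2018.codfw.wmnet -t T263613
    sudo cookbook sre.dns.netbox -t T263613 'es2018 decommissioned'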
07:57 <marostegui> Remove es2018 from tendril and zarcillo T263613 [production]
07:57 <XioNoX> configure vrrp_master_pinning in eqiad - T263212 [production]
07:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db2109 (re)pooling @ 33%: Slowly repool db2109 ', diff saved to https://phabricator.wikimedia.org/P12781 and previous config saved to /var/cache/conftool/dbconfig/20200924-075312-root.json [production]
07:52 <klausman@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
07:49 <klausman@cumin1001> START - Cookbook sre.hosts.downtime [production]
07:49 <godog> roll restart logstash codfw, gc death [production]
07:25 <XioNoX> push pfw policies - T263674 [production]
06:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Place db2073 into vslow, not api in s4', diff saved to https://phabricator.wikimedia.org/P12780 and previous config saved to /var/cache/conftool/dbconfig/20200924-064018-marostegui.json [production]
06:22 <elukey> powercycle elastic2037 (host stuck, no mgmt serial console working, DIMM errors in racadm getsel) [production]
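A stuck host with no working serial console, as with elastic2037 above, gets power cycled through its management controller; a sketch using standard Dell iDRAC racadm commands (the .mgmt hostname pattern is an assumption):

    ssh root@elastic2037.mgmt.codfw.wmnet
    racadm getsel                     # inspect the System Event Log (the DIMM errors noted above)
    racadm serveraction powercycle    # hard power cycle the host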
05:57 <marostegui> Remove es2012 from tendril and zarcillo T263613 [production]
05:41 <marostegui@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]