2021-02-09
11:52 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: name=maps1008.eqiad.wmnet [production]
11:51 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: name=maps1007.eqiad.wmnet [production]
11:51 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes:weight=10; selector: name=maps1006.eqiad.wmnet [production]
11:51 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 8%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14258 and previous config saved to /var/cache/conftool/dbconfig/20210209-115124-root.json [production]
11:51 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs1013.eqiad.wmnet [production]
11:50 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes; selector: name=maps1001.eqiad.wmnet [production]
11:46 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs1014.eqiad.wmnet [production]
11:40 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs1014.eqiad.wmnet [production]
11:36 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 5%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14257 and previous config saved to /var/cache/conftool/dbconfig/20210209-113620-root.json [production]
11:34 <elukey> start the upgrade process for Hadoop Analytics [production]
11:33 <elukey@cumin1001> START - Cookbook sre.hadoop.stop-cluster for Hadoop analytics cluster: Stop the Hadoop cluster before maintenance. - elukey@cumin1001 [production]
11:32 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs1015.eqiad.wmnet [production]
11:27 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs1015.eqiad.wmnet [production]
11:23 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs1016.eqiad.wmnet [production]
11:21 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 4%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14256 and previous config saved to /var/cache/conftool/dbconfig/20210209-112116-root.json [production]
11:18 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs1016.eqiad.wmnet [production]
11:17 <vgutierrez> rolling restart of eqiad LVS instances to catch up on kernel upgrades [production]
11:07 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3005.esams.wmnet [production]
11:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 3%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14255 and previous config saved to /var/cache/conftool/dbconfig/20210209-110613-root.json [production]
11:02 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs3005.esams.wmnet [production]
10:57 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on maps1005.eqiad.wmnet with reason: Resyncing database, still [production]
10:57 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on maps1005.eqiad.wmnet with reason: Resyncing database, still [production]
10:55 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3006.esams.wmnet [production]
10:53 <jmm@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cumin2001.codfw.wmnet [production]
10:51 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 2%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14254 and previous config saved to /var/cache/conftool/dbconfig/20210209-105109-root.json [production]
10:50 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs3006.esams.wmnet [production]
10:48 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3007.esams.wmnet [production]
10:43 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs3007.esams.wmnet [production]
10:41 <vgutierrez> rolling restart of esams LVS instances to catch up on kernel upgrades [production]
10:40 <jmm@cumin1001> START - Cookbook sre.hosts.reboot-single for host cumin2001.codfw.wmnet [production]
10:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 100%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14253 and previous config saved to /var/cache/conftool/dbconfig/20210209-103443-root.json [production]
10:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 100%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14252 and previous config saved to /var/cache/conftool/dbconfig/20210209-103414-root.json [production]
10:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1157 for the first time in s3 T258361', diff saved to https://phabricator.wikimedia.org/P14251 and previous config saved to /var/cache/conftool/dbconfig/20210209-102109-marostegui.json [production]
10:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 75%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14250 and previous config saved to /var/cache/conftool/dbconfig/20210209-101939-root.json [production]
10:19 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host mc1019.eqiad.wmnet [production]
10:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 75%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14249 and previous config saved to /var/cache/conftool/dbconfig/20210209-101911-root.json [production]
10:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1157 to dbctl, depooled T258361', diff saved to https://phabricator.wikimedia.org/P14248 and previous config saved to /var/cache/conftool/dbconfig/20210209-101556-marostegui.json [production]
10:13 <jiji@cumin1001> START - Cookbook sre.hosts.reboot-single for host mc1019.eqiad.wmnet [production]
10:12 <gehel@cumin1001> START - Cookbook sre.wdqs.reboot [production]
10:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 50%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14247 and previous config saved to /var/cache/conftool/dbconfig/20210209-100436-root.json [production]
10:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 50%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14246 and previous config saved to /var/cache/conftool/dbconfig/20210209-100407-root.json [production]
09:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 25%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14245 and previous config saved to /var/cache/conftool/dbconfig/20210209-094932-root.json [production]
09:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 25%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14244 and previous config saved to /var/cache/conftool/dbconfig/20210209-094904-root.json [production]
09:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 10%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14243 and previous config saved to /var/cache/conftool/dbconfig/20210209-093429-root.json [production]
09:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 10%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14242 and previous config saved to /var/cache/conftool/dbconfig/20210209-093400-root.json [production]
09:22 <godog> swift eqiad-prod: decrease weight for SSDs on ms-be[1019-1026] - T272836 [production]
08:44 <XioNoX> repool esams - T272342 [production]
08:30 <XioNoX> rollback redirect ns2 to authdns1001 - T252631 [production]
08:09 <XioNoX> alright, brace yourself, esams switch stack is going to go down [production]
08:03 <ayounsi@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:30:00 on 32 hosts with reason: switch upgrade [production]