2021-02-09
ยง
|
11:18 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs1016.eqiad.wmnet [production]
11:17 <vgutierrez> rolling restart of eqiad LVS instances to catch up on kernel upgrades [production]
11:14 <dcaro> Merged the osd scheduler change for all osds, applying on all cloudcephosd* (T273791) [admin]
11:07 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3005.esams.wmnet [production]
11:06 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 3%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14255 and previous config saved to /var/cache/conftool/dbconfig/20210209-110613-root.json [production]
11:02 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs3005.esams.wmnet [production]
10:57 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on maps1005.eqiad.wmnet with reason: Resyncing database, still [production]
10:57 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on maps1005.eqiad.wmnet with reason: Resyncing database, still [production]
10:55 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3006.esams.wmnet [production]
10:53 <jmm@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cumin2001.codfw.wmnet [production]
10:51 <marostegui@cumin1001> dbctl commit (dc=all): 'db1157 (re)pooling @ 2%: Slowly pool db1157 into s3', diff saved to https://phabricator.wikimedia.org/P14254 and previous config saved to /var/cache/conftool/dbconfig/20210209-105109-root.json [production]
10:50 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs3006.esams.wmnet [production]
10:48 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3007.esams.wmnet [production]
10:43 <vgutierrez@cumin1001> START - Cookbook sre.hosts.reboot-single for host lvs3007.esams.wmnet [production]
10:41 <vgutierrez> rolling restart of esams LVS instances to catch up on kernel upgrades [production]
10:40 <jmm@cumin1001> START - Cookbook sre.hosts.reboot-single for host cumin2001.codfw.wmnet [production]
10:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 100%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14253 and previous config saved to /var/cache/conftool/dbconfig/20210209-103443-root.json [production]
10:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 100%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14252 and previous config saved to /var/cache/conftool/dbconfig/20210209-103414-root.json [production]
10:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db1157 for the first time in s3 T258361', diff saved to https://phabricator.wikimedia.org/P14251 and previous config saved to /var/cache/conftool/dbconfig/20210209-102109-marostegui.json [production]
10:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 75%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14250 and previous config saved to /var/cache/conftool/dbconfig/20210209-101939-root.json [production]
10:19 <jiji@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host mc1019.eqiad.wmnet [production]
10:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 75%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14249 and previous config saved to /var/cache/conftool/dbconfig/20210209-101911-root.json [production]
10:15 <marostegui@cumin1001> dbctl commit (dc=all): 'Add db1157 to dbctl, depooled T258361', diff saved to https://phabricator.wikimedia.org/P14248 and previous config saved to /var/cache/conftool/dbconfig/20210209-101556-marostegui.json [production]
10:13 <jiji@cumin1001> START - Cookbook sre.hosts.reboot-single for host mc1019.eqiad.wmnet [production]
10:12 <gehel@cumin1001> START - Cookbook sre.wdqs.reboot [production]
10:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 50%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14247 and previous config saved to /var/cache/conftool/dbconfig/20210209-100436-root.json [production]
10:04 <elukey> stop mysql replication an-coord1001 -> an-coord1002, an-coord1001 -> db1108 [analytics]
10:04 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 50%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14246 and previous config saved to /var/cache/conftool/dbconfig/20210209-100407-root.json [production]
09:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 25%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14245 and previous config saved to /var/cache/conftool/dbconfig/20210209-094932-root.json [production]
09:49 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 25%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14244 and previous config saved to /var/cache/conftool/dbconfig/20210209-094904-root.json [production]
09:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3317 (re)pooling @ 10%: Slowly repooling db1090:3317 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14243 and previous config saved to /var/cache/conftool/dbconfig/20210209-093429-root.json [production]
09:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1090:3312 (re)pooling @ 10%: Slowly repooling db1090:3312 after cloning db1170', diff saved to https://phabricator.wikimedia.org/P14242 and previous config saved to /var/cache/conftool/dbconfig/20210209-093400-root.json [production]
09:22 <godog> swift eqiad-prod: decrease weight for SSDs on ms-be[1019-1026] - T272836 [production]
08:44 <XioNoX> repool esams - T272342 [production]
08:30 <XioNoX> rollback redirect ns2 to authdns1001 - T252631 [production]
08:29 <elukey> leave hdfs safemode to let distcp do its job [analytics]
08:25 <elukey> set hdfs safemode on for the Analytics cluster [analytics]
08:19 <elukey> umount /mnt/hdfs from all nodes using it [analytics]
08:16 <joal> Kill flink yarn app [analytics]
08:09 <XioNoX> alright, brace yourself, esams switch stack is going to go down [production]
08:08 <elukey> stop jupyterhub on stat100x [analytics]
08:07 <elukey> stop hive on an-coord100[1,2] - prep step for bigtop upgrade [analytics]
08:05 <elukey> stop oozie an-coord1001 - prep step for bigtop upgrade [analytics]
08:03 <ayounsi@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:30:00 on 32 hosts with reason: switch upgrade [production]
08:03 <elukey> stop presto-server on an-presto100x and an-coord1001 - prep step for bigtop upgrade [analytics]
08:02 <ayounsi@cumin1001> START - Cookbook sre.hosts.downtime for 1:30:00 on 32 hosts with reason: switch upgrade [production]
07:54 <XioNoX> redirect ns2 to authdns1001 - T252631 [production]
07:47 <hashar@deploy1001> Finished deploy [integration/docroot@672e79f]: build: Add /scap/log to gitignore (duration: 00m 06s) [production]
07:47 <hashar@deploy1001> Started deploy [integration/docroot@672e79f]: build: Add /scap/log to gitignore [production]
07:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove db1081 from dbctl T273040', diff saved to https://phabricator.wikimedia.org/P14241 and previous config saved to /var/cache/conftool/dbconfig/20210209-073455-marostegui.json [production]