2022-11-23
ยง
|
18:55 <pt1979@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['arclamp1001'] [production]
18:54 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1202.eqiad.wmnet with reason: Maintenance [production]
18:54 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1202.eqiad.wmnet with reason: Maintenance [production]
18:54 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1194 (T321126)', diff saved to https://phabricator.wikimedia.org/P40824 and previous config saved to /var/cache/conftool/dbconfig/20221123-185444-marostegui.json [production]
18:53 <jbond@cumin2002> START - Cookbook sre.hosts.reimage for host ms-be2050.codfw.wmnet with OS bullseye [production]
18:52 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3316', diff saved to https://phabricator.wikimedia.org/P40823 and previous config saved to /var/cache/conftool/dbconfig/20221123-185233-ladsgroup.json [production]
18:51 <pt1979@cumin2002> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host arclamp1001.mgmt.eqiad.wmnet with reboot policy FORCED [production]
18:45 <sukhe@cumin2002> START - Cookbook sre.hosts.reimage for host lvs4010.ulsfo.wmnet with OS buster [production]
18:42 <sukhe> restart pybal on lvs4007.ulsfo.wmnet [production]
18:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2129 (T323214)', diff saved to https://phabricator.wikimedia.org/P40822 and previous config saved to /var/cache/conftool/dbconfig/20221123-184207-ladsgroup.json [production]
18:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2129.codfw.wmnet with reason: Maintenance [production]
18:41 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2129.codfw.wmnet with reason: Maintenance [production]
18:41 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2124 (T323214)', diff saved to https://phabricator.wikimedia.org/P40821 and previous config saved to /var/cache/conftool/dbconfig/20221123-184145-ladsgroup.json [production]
18:41 <pt1979@cumin2002> START - Cookbook sre.hosts.provision for host arclamp1001.mgmt.eqiad.wmnet with reboot policy FORCED [production]
18:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1194', diff saved to https://phabricator.wikimedia.org/P40820 and previous config saved to /var/cache/conftool/dbconfig/20221123-183937-marostegui.json [production]
18:37 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3316', diff saved to https://phabricator.wikimedia.org/P40819 and previous config saved to /var/cache/conftool/dbconfig/20221123-183726-ladsgroup.json [production]
18:37 <pt1979@cumin1001> START - Cookbook sre.hosts.provision for host cloudvirt1057.mgmt.eqiad.wmnet with reboot policy FORCED [production]
18:36 <pt1979@cumin1001> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host cloudvirt1056.mgmt.eqiad.wmnet with reboot policy FORCED [production]
18:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2124', diff saved to https://phabricator.wikimedia.org/P40818 and previous config saved to /var/cache/conftool/dbconfig/20221123-182638-ladsgroup.json [production]
18:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1194', diff saved to https://phabricator.wikimedia.org/P40817 and previous config saved to /var/cache/conftool/dbconfig/20221123-182431-marostegui.json [production]
18:22 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1098:3316 (T323214)', diff saved to https://phabricator.wikimedia.org/P40816 and previous config saved to /var/cache/conftool/dbconfig/20221123-182220-ladsgroup.json [production]
18:12 <ryankemper@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: cloudelastic cluster restart; prev restart was done before some hosts had ran puppet - ryankemper@cumin1001 - T319020 [production]
18:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2124', diff saved to https://phabricator.wikimedia.org/P40815 and previous config saved to /var/cache/conftool/dbconfig/20221123-181132-ladsgroup.json [production]
18:09 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1194 (T321126)', diff saved to https://phabricator.wikimedia.org/P40814 and previous config saved to /var/cache/conftool/dbconfig/20221123-180924-marostegui.json [production]
18:08 <oblivian@deploy1002> helmfile [codfw] DONE helmfile.d/services/proton: apply [production]
18:07 <oblivian@deploy1002> helmfile [codfw] START helmfile.d/services/proton: apply [production]
18:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1194 (T321126)', diff saved to https://phabricator.wikimedia.org/P40813 and previous config saved to /var/cache/conftool/dbconfig/20221123-180709-marostegui.json [production]
18:07 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1194.eqiad.wmnet with reason: Maintenance [production]
18:06 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1194.eqiad.wmnet with reason: Maintenance [production]
18:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1191 (T321126)', diff saved to https://phabricator.wikimedia.org/P40812 and previous config saved to /var/cache/conftool/dbconfig/20221123-180648-marostegui.json [production]
18:04 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/proton: apply [production]
18:03 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/proton: apply [production]
18:03 <oblivian@deploy1002> helmfile [staging] DONE helmfile.d/services/proton: apply [production]
18:02 <oblivian@deploy1002> helmfile [staging] START helmfile.d/services/proton: apply [production]
18:01 <pt1979@cumin1001> START - Cookbook sre.hosts.provision for host cloudvirt1056.mgmt.eqiad.wmnet with reboot policy FORCED [production]
18:00 <pt1979@cumin1001> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host cloudvirt1055.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2124 (T323214)', diff saved to https://phabricator.wikimedia.org/P40810 and previous config saved to /var/cache/conftool/dbconfig/20221123-175625-ladsgroup.json [production]
17:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1191', diff saved to https://phabricator.wikimedia.org/P40809 and previous config saved to /var/cache/conftool/dbconfig/20221123-175141-marostegui.json [production]
17:44 <ryankemper> [Elastic] T319020 Kicked off rolling restart of cloudelastic to apply new heap size 8->10G; see `ryankemper@cumin1001` tmux session `cloudelastic_restarts` [production]
17:42 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: cloudelastic cluster restart; prev restart was done before some hosts had ran puppet - ryankemper@cumin1001 - T319020 [production]
17:42 <pt1979@cumin1001> START - Cookbook sre.hosts.provision for host cloudvirt1055.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:39 <urandom> initiating Cassandra bootstrap, aqs1018-a -- T307802 [production]
17:37 <pt1979@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host cloudvirt1055.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:36 <pt1979@cumin1001> START - Cookbook sre.hosts.provision for host cloudvirt1055.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1191', diff saved to https://phabricator.wikimedia.org/P40807 and previous config saved to /var/cache/conftool/dbconfig/20221123-173635-marostegui.json [production]
17:33 <eevans@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching aqs[2001-2004].codfw.wmnet,aqs[1010-1015].eqiad.wmnet: T314309 restarting to pick up new JRE - eevans@cumin1001 [production]
17:27 <pt1979@cumin1001> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host cloudvirt1054.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:22 <oblivian@deploy1002> helmfile [staging] DONE helmfile.d/services/proton: apply [production]
17:21 <oblivian@deploy1002> helmfile [staging] START helmfile.d/services/proton: apply [production]
17:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1191 (T321126)', diff saved to https://phabricator.wikimedia.org/P40806 and previous config saved to /var/cache/conftool/dbconfig/20221123-172128-marostegui.json [production]