2022-11-23
18:12 <ryankemper@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: cloudelastic cluster restart; prev restart was done before some hosts had run puppet - ryankemper@cumin1001 - T319020 [production]
18:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2124', diff saved to https://phabricator.wikimedia.org/P40815 and previous config saved to /var/cache/conftool/dbconfig/20221123-181132-ladsgroup.json [production]
18:09 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1194 (T321126)', diff saved to https://phabricator.wikimedia.org/P40814 and previous config saved to /var/cache/conftool/dbconfig/20221123-180924-marostegui.json [production]
18:08 <oblivian@deploy1002> helmfile [codfw] DONE helmfile.d/services/proton: apply [production]
18:07 <oblivian@deploy1002> helmfile [codfw] START helmfile.d/services/proton: apply [production]
18:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1194 (T321126)', diff saved to https://phabricator.wikimedia.org/P40813 and previous config saved to /var/cache/conftool/dbconfig/20221123-180709-marostegui.json [production]
18:07 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1194.eqiad.wmnet with reason: Maintenance [production]
18:06 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1194.eqiad.wmnet with reason: Maintenance [production]
18:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1191 (T321126)', diff saved to https://phabricator.wikimedia.org/P40812 and previous config saved to /var/cache/conftool/dbconfig/20221123-180648-marostegui.json [production]
18:04 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/proton: apply [production]
18:03 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/proton: apply [production]
18:03 <oblivian@deploy1002> helmfile [staging] DONE helmfile.d/services/proton: apply [production]
18:02 <oblivian@deploy1002> helmfile [staging] START helmfile.d/services/proton: apply [production]
18:01 <pt1979@cumin1001> START - Cookbook sre.hosts.provision for host cloudvirt1056.mgmt.eqiad.wmnet with reboot policy FORCED [production]
18:00 <pt1979@cumin1001> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host cloudvirt1055.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2124 (T323214)', diff saved to https://phabricator.wikimedia.org/P40810 and previous config saved to /var/cache/conftool/dbconfig/20221123-175625-ladsgroup.json [production]
17:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1191', diff saved to https://phabricator.wikimedia.org/P40809 and previous config saved to /var/cache/conftool/dbconfig/20221123-175141-marostegui.json [production]
17:44 <ryankemper> [Elastic] T319020 Kicked off rolling restart of cloudelastic to apply new heap size 8->10G; see `ryankemper@cumin1001` tmux session `cloudelastic_restarts` [production]
17:42 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: cloudelastic cluster restart; prev restart was done before some hosts had run puppet - ryankemper@cumin1001 - T319020 [production]
17:42 <pt1979@cumin1001> START - Cookbook sre.hosts.provision for host cloudvirt1055.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:39 <urandom> initiating Cassandra bootstrap, aqs1018-a -- T307802 [production]
17:37 <pt1979@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host cloudvirt1055.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:36 <pt1979@cumin1001> START - Cookbook sre.hosts.provision for host cloudvirt1055.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1191', diff saved to https://phabricator.wikimedia.org/P40807 and previous config saved to /var/cache/conftool/dbconfig/20221123-173635-marostegui.json [production]
17:33 <eevans@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching aqs[2001-2004].codfw.wmnet,aqs[1010-1015].eqiad.wmnet: T314309 restarting to pick up new JRE - eevans@cumin1001 [production]
17:27 <pt1979@cumin1001> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host cloudvirt1054.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:22 <oblivian@deploy1002> helmfile [staging] DONE helmfile.d/services/proton: apply [production]
17:21 <oblivian@deploy1002> helmfile [staging] START helmfile.d/services/proton: apply [production]
17:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1191 (T321126)', diff saved to https://phabricator.wikimedia.org/P40806 and previous config saved to /var/cache/conftool/dbconfig/20221123-172128-marostegui.json [production]
17:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1191 (T321126)', diff saved to https://phabricator.wikimedia.org/P40805 and previous config saved to /var/cache/conftool/dbconfig/20221123-171911-marostegui.json [production]
17:19 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db1191.eqiad.wmnet with reason: Maintenance [production]
17:18 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on db1191.eqiad.wmnet with reason: Maintenance [production]
17:18 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174 (T321126)', diff saved to https://phabricator.wikimedia.org/P40804 and previous config saved to /var/cache/conftool/dbconfig/20221123-171850-marostegui.json [production]
17:18 <pt1979@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
17:18 <pt1979@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add DNS for arclamp1001 - pt1979@cumin2002" [production]
17:16 <pt1979@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add DNS for arclamp1001 - pt1979@cumin2002" [production]
17:12 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
17:03 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174', diff saved to https://phabricator.wikimedia.org/P40803 and previous config saved to /var/cache/conftool/dbconfig/20221123-170343-marostegui.json [production]
16:57 <pt1979@cumin1001> START - Cookbook sre.hosts.provision for host cloudvirt1054.mgmt.eqiad.wmnet with reboot policy FORCED [production]
16:56 <pt1979@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host cloudvirt1054.mgmt.eqiad.wmnet with reboot policy FORCED [production]
16:56 <pt1979@cumin1001> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['contint1002'] [production]
16:52 <pt1979@cumin1001> START - Cookbook sre.hosts.provision for host cloudvirt1054.mgmt.eqiad.wmnet with reboot policy FORCED [production]
16:48 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1174', diff saved to https://phabricator.wikimedia.org/P40802 and previous config saved to /var/cache/conftool/dbconfig/20221123-164837-marostegui.json [production]
16:46 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/image-suggestion: apply [production]
16:45 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/image-suggestion: apply [production]
16:43 <oblivian@deploy1002> helmfile [codfw] DONE helmfile.d/services/image-suggestion: apply [production]
16:42 <oblivian@deploy1002> helmfile [codfw] START helmfile.d/services/image-suggestion: apply [production]
16:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1098:3316 (T323214)', diff saved to https://phabricator.wikimedia.org/P40801 and previous config saved to /var/cache/conftool/dbconfig/20221123-163412-ladsgroup.json [production]
16:34 <pt1979@cumin1001> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['contint1002'] [production]
16:34 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1098.eqiad.wmnet with reason: Maintenance [production]