2024-05-29
ยง
|
15:53 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2159 (T364299)', diff saved to https://phabricator.wikimedia.org/P63568 and previous config saved to /var/cache/conftool/dbconfig/20240529-155349-marostegui.json [production]
15:53 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2187.codfw.wmnet with reason: Maintenance [production]
15:53 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db2187.codfw.wmnet with reason: Maintenance [production]
15:53 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2159.codfw.wmnet with reason: Maintenance [production]
15:53 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db2159.codfw.wmnet with reason: Maintenance [production]
15:53 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2150 (T364299)', diff saved to https://phabricator.wikimedia.org/P63567 and previous config saved to /var/cache/conftool/dbconfig/20240529-155321-marostegui.json [production]
15:52 <robh@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=1) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:49 <jynus@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on dbprov2003.codfw.wmnet with reason: upgrade to 10.6 [production]
15:49 <jynus@cumin1002> START - Cookbook sre.hosts.downtime for 5:00:00 on dbprov2003.codfw.wmnet with reason: upgrade to 10.6 [production]
15:49 <jynus@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on dbprov1003.eqiad.wmnet with reason: upgrade to 10.6 [production]
15:49 <jynus@cumin1002> START - Cookbook sre.hosts.downtime for 5:00:00 on dbprov1003.eqiad.wmnet with reason: upgrade to 10.6 [production]
15:48 <jclark@cumin1002> START - Cookbook sre.hosts.provision for host kafka-main1010.mgmt.eqiad.wmnet with reboot policy FORCED [production]
15:48 <jynus@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on db2141.codfw.wmnet with reason: upgrade to 10.6 [production]
15:48 <jynus@cumin1002> START - Cookbook sre.hosts.downtime for 5:00:00 on db2141.codfw.wmnet with reason: upgrade to 10.6 [production]
15:48 <jclark@cumin1002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host kafka-main1010.mgmt.eqiad.wmnet with reboot policy FORCED [production]
15:45 <arnaudb@cumin1002> dbctl commit (dc=all): 'db1163 (re)pooling @ 50%: post reimage repool', diff saved to https://phabricator.wikimedia.org/P63566 and previous config saved to /var/cache/conftool/dbconfig/20240529-154510-arnaudb.json [production]
15:45 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:44 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1181', diff saved to https://phabricator.wikimedia.org/P63565 and previous config saved to /var/cache/conftool/dbconfig/20240529-154446-marostegui.json [production]
15:39 <robh@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:38 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:38 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2150', diff saved to https://phabricator.wikimedia.org/P63564 and previous config saved to /var/cache/conftool/dbconfig/20240529-153813-marostegui.json [production]
15:32 <dancy@deploy1002> Finished scap: Backport for [[gerrit:1036750|Remove the php symlink (v2) (T359643)]] (duration: 13m 03s) [production]
15:31 <robh@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:31 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:30 <arnaudb@cumin1002> dbctl commit (dc=all): 'db1163 (re)pooling @ 25%: post reimage repool', diff saved to https://phabricator.wikimedia.org/P63563 and previous config saved to /var/cache/conftool/dbconfig/20240529-153001-arnaudb.json [production]
15:29 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1181', diff saved to https://phabricator.wikimedia.org/P63562 and previous config saved to /var/cache/conftool/dbconfig/20240529-152937-marostegui.json [production]
15:29 <robh@cumin2002> END (ERROR) - Cookbook sre.hardware.upgrade-firmware (exit_code=97) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:27 <jclark@cumin1002> START - Cookbook sre.hosts.provision for host kafka-main1010.mgmt.eqiad.wmnet with reboot policy FORCED [production]
15:26 <mvernon@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:26 <mvernon@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: correct IPs for apus - mvernon@cumin2002" [production]
15:25 <mvernon@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: correct IPs for apus - mvernon@cumin2002" [production]
15:25 <jclark@cumin1002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host kafka-main1010.mgmt.eqiad.wmnet with reboot policy FORCED [production]
15:24 <jclark@cumin1002> START - Cookbook sre.hosts.provision for host kafka-main1010.mgmt.eqiad.wmnet with reboot policy FORCED [production]
15:23 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:23 <dancy@deploy1002> dancy: Continuing with sync [production]
15:23 <mvernon@cumin2002> START - Cookbook sre.dns.netbox [production]
15:23 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2150', diff saved to https://phabricator.wikimedia.org/P63561 and previous config saved to /var/cache/conftool/dbconfig/20240529-152305-marostegui.json [production]
15:22 <robh@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:22 <dancy@deploy1002> dancy: Backport for [[gerrit:1036750|Remove the php symlink (v2) (T359643)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
15:21 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:20 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-web: sync [production]
15:19 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-api-int: sync [production]
15:19 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-api-ext: sync [production]
15:19 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-parsoid: sync [production]
15:19 <dancy@deploy1002> Started scap: Backport for [[gerrit:1036750|Remove the php symlink (v2) (T359643)]] [production]
15:18 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-wikifunctions: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] START helmfile.d/services/mw-api-ext: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] START helmfile.d/services/mw-web: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] START helmfile.d/services/mw-parsoid: sync [production]