2024-05-29
15:48 <jclark@cumin1002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host kafka-main1010.mgmt.eqiad.wmnet with reboot policy FORCED [production]
15:45 <arnaudb@cumin1002> dbctl commit (dc=all): 'db1163 (re)pooling @ 50%: post reimage repool', diff saved to https://phabricator.wikimedia.org/P63566 and previous config saved to /var/cache/conftool/dbconfig/20240529-154510-arnaudb.json [production]
15:45 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:44 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1181', diff saved to https://phabricator.wikimedia.org/P63565 and previous config saved to /var/cache/conftool/dbconfig/20240529-154446-marostegui.json [production]
15:39 <robh@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:38 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:38 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2150', diff saved to https://phabricator.wikimedia.org/P63564 and previous config saved to /var/cache/conftool/dbconfig/20240529-153813-marostegui.json [production]
15:32 <dancy@deploy1002> Finished scap: Backport for [[gerrit:1036750|Remove the php symlink (v2) (T359643)]] (duration: 13m 03s) [production]
15:31 <robh@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:31 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:30 <arnaudb@cumin1002> dbctl commit (dc=all): 'db1163 (re)pooling @ 25%: post reimage repool', diff saved to https://phabricator.wikimedia.org/P63563 and previous config saved to /var/cache/conftool/dbconfig/20240529-153001-arnaudb.json [production]
15:29 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1181', diff saved to https://phabricator.wikimedia.org/P63562 and previous config saved to /var/cache/conftool/dbconfig/20240529-152937-marostegui.json [production]
15:29 <robh@cumin2002> END (ERROR) - Cookbook sre.hardware.upgrade-firmware (exit_code=97) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:27 <jclark@cumin1002> START - Cookbook sre.hosts.provision for host kafka-main1010.mgmt.eqiad.wmnet with reboot policy FORCED [production]
15:26 <mvernon@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:26 <mvernon@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: correct IPs for apus - mvernon@cumin2002" [production]
15:25 <mvernon@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: correct IPs for apus - mvernon@cumin2002" [production]
15:25 <jclark@cumin1002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host kafka-main1010.mgmt.eqiad.wmnet with reboot policy FORCED [production]
15:24 <jclark@cumin1002> START - Cookbook sre.hosts.provision for host kafka-main1010.mgmt.eqiad.wmnet with reboot policy FORCED [production]
15:23 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:23 <dancy@deploy1002> dancy: Continuing with sync [production]
15:23 <mvernon@cumin2002> START - Cookbook sre.dns.netbox [production]
15:23 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2150', diff saved to https://phabricator.wikimedia.org/P63561 and previous config saved to /var/cache/conftool/dbconfig/20240529-152305-marostegui.json [production]
15:22 <robh@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:22 <dancy@deploy1002> dancy: Backport for [[gerrit:1036750|Remove the php symlink (v2) (T359643)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
15:21 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:20 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-web: sync [production]
15:19 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-api-int: sync [production]
15:19 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-api-ext: sync [production]
15:19 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-parsoid: sync [production]
15:19 <dancy@deploy1002> Started scap: Backport for [[gerrit:1036750|Remove the php symlink (v2) (T359643)]] [production]
15:18 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-wikifunctions: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] START helmfile.d/services/mw-api-ext: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] START helmfile.d/services/mw-web: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] START helmfile.d/services/mw-parsoid: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] START helmfile.d/services/mw-api-int: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] START helmfile.d/services/mw-wikifunctions: sync [production]
15:18 <cdanis@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: sync [production]
15:17 <robh@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirt1041'] [production]
15:17 <robh@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1041'] [production]
15:14 <arnaudb@cumin1002> dbctl commit (dc=all): 'db1163 (re)pooling @ 10%: post reimage repool', diff saved to https://phabricator.wikimedia.org/P63560 and previous config saved to /var/cache/conftool/dbconfig/20240529-151455-arnaudb.json [production]
15:14 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1181 (T366123)', diff saved to https://phabricator.wikimedia.org/P63559 and previous config saved to /var/cache/conftool/dbconfig/20240529-151430-marostegui.json [production]
15:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1181 (T366123)', diff saved to https://phabricator.wikimedia.org/P63558 and previous config saved to /var/cache/conftool/dbconfig/20240529-151219-marostegui.json [production]
15:12 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:11 <arnaudb@cumin1002> dbctl commit (dc=all): 'db1169 (re)pooling @ 100%: post reimage repool', diff saved to https://phabricator.wikimedia.org/P63557 and previous config saved to /var/cache/conftool/dbconfig/20240529-151152-arnaudb.json [production]
15:11 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 8:00:00 on db1181.eqiad.wmnet with reason: Maintenance [production]
15:11 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1174 (T366123)', diff saved to https://phabricator.wikimedia.org/P63556 and previous config saved to /var/cache/conftool/dbconfig/20240529-151145-marostegui.json [production]
15:09 <jclark@cumin1002> START - Cookbook sre.hosts.reimage for host kafka-main1009.eqiad.wmnet with OS bullseye [production]
15:08 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1163.eqiad.wmnet with OS bookworm [production]