2024-06-19
15:03 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
15:01 <pfischer@deploy1002> helmfile [codfw] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
15:01 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on mw2282.codfw.wmnet with reason: Host move [production]
15:01 <pfischer@deploy1002> helmfile [codfw] START helmfile.d/services/cirrus-streaming-updater: apply [production]
15:01 <cgoubert@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on mw2282.codfw.wmnet with reason: Host move [production]
15:00 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for mw2282.codfw.wmnet [production]
15:00 <cgoubert@cumin1002> START - Cookbook sre.hosts.remove-downtime for mw2282.codfw.wmnet [production]
14:59 <cgoubert@cumin1002> END (ERROR) - Cookbook sre.hosts.remove-downtime (exit_code=97) for wikikube-worker2003.codfw.wmnet [production]
14:59 <cgoubert@cumin1002> START - Cookbook sre.hosts.remove-downtime for wikikube-worker2003.codfw.wmnet [production]
14:42 <marostegui> Deploy schema change on s2 eqiad master dbmaint T364069 [production]
14:40 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db[1155-1156].eqiad.wmnet with reason: Long schema change [production]
14:40 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db[1155-1156].eqiad.wmnet with reason: Long schema change [production]
14:39 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db[1155,1158].eqiad.wmnet with reason: Long schema change [production]
14:38 <taavi@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudvirt1042.eqiad.wmnet with OS bookworm [production]
14:38 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on clouddb[1014,1018,1021].eqiad.wmnet,db[1155,1158].eqiad.wmnet with reason: Long schema change [production]
14:38 <taavi@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - taavi@cumin1002" [production]
14:37 <taavi@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudvirt1043.eqiad.wmnet with reason: host reimage [production]
14:36 <taavi@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - taavi@cumin1002" [production]
14:35 <moritzm> installing nano security updates [production]
14:34 <taavi@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudvirt1043.eqiad.wmnet with reason: host reimage [production]
14:24 <moritzm> installing libvpx security updates [production]
14:23 <moritzm> installing pymysql security updates [production]
14:19 <brouberol@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s_services/services/datahub: sync on production [production]
14:19 <taavi@cumin1002> START - Cookbook sre.hosts.reimage for host cloudvirt1043.eqiad.wmnet with OS bookworm [production]
14:17 <brouberol@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s_services/services/datahub: apply on production [production]
14:14 <brouberol@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s_services/services/datahub-next: sync on staging [production]
14:12 <brouberol@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s_services/services/datahub-next: apply on staging [production]
14:11 <taavi@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudvirt1042.eqiad.wmnet with reason: host reimage [production]
14:11 <taavi@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=1) upgrade firmware for hosts ['cloudvirt1043.eqiad.wmnet'] [production]
14:10 <klausman@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host ml-staging2003.codfw.wmnet with OS bookworm [production]
14:10 <klausman@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - klausman@cumin2002" [production]
14:09 <taavi@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1043.eqiad.wmnet'] [production]
14:09 <klausman@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - klausman@cumin2002" [production]
14:09 <taavi@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=1) upgrade firmware for hosts ['cloudvirt1043.eqiad.wmnet'] [production]
14:08 <taavi@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudvirt1042.eqiad.wmnet with reason: host reimage [production]
14:08 <taavi@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1043.eqiad.wmnet'] [production]
14:07 <taavi@cumin2002> END (ERROR) - Cookbook sre.hardware.upgrade-firmware (exit_code=97) upgrade firmware for hosts ['cloudvirt1043.eqiad.wmnet'] [production]
14:07 <taavi@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirt1043.eqiad.wmnet'] [production]
14:01 <taavi@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudvirt1044.eqiad.wmnet with OS bookworm [production]
13:57 <klausman@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on ml-staging2003.codfw.wmnet with reason: host reimage [production]
13:54 <klausman@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on ml-staging2003.codfw.wmnet with reason: host reimage [production]
13:53 <taavi@cumin1002> START - Cookbook sre.hosts.reimage for host cloudvirt1042.eqiad.wmnet with OS bookworm [production]
13:53 <taavi@cumin1002> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host cloudvirt1042.eqiad.wmnet with OS bookworm [production]
13:51 <klausman@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Trying to fix Puppet error on ml-staging2003 - klausman@cumin2002" [production]
13:50 <klausman@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Trying to fix Puppet error on ml-staging2003 - klausman@cumin2002" [production]
13:49 <klausman@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Trying to fix Puppet error on ml-staging2003 - klausman@cumin2002" [production]
13:48 <klausman@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Trying to fix Puppet error on ml-staging2003 - klausman@cumin2002" [production]
13:43 <taavi@cumin1002> START - Cookbook sre.hosts.reimage for host cloudvirt1042.eqiad.wmnet with OS bookworm [production]
13:42 <klausman@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Trying to fix Puppet error on ml-staging2003 - klausman@cumin2002" [production]
13:41 <taavi@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirt1042.eqiad.wmnet'] [production]