2024-02-01
13:44 <btullis@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['elastic2094.codfw.wmnet'] [production]
13:44 <arnaudb@cumin1002> START - Cookbook sre.mysql.clone Will create a clone of db1144.eqiad.wmnet onto db1244.eqiad.wmnet [production]
13:42 <btullis@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['elastic2094.codfw.wmnet'] [production]
13:41 <arnaudb@cumin1002> dbctl commit (dc=all): 'Cloning db1144 in db1244 for T350458', diff saved to https://phabricator.wikimedia.org/P56064 and previous config saved to /var/cache/conftool/dbconfig/20240201-134107-arnaudb.json [production]
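The dbctl lines in this log record conftool configuration commits made from the cumin hosts. A minimal sketch of the depool-and-commit flow behind an entry like the one above, assuming the standard dbctl verbs; the instance names and commit message are taken from the log, everything else is illustrative:

    # Depool the clone source before copying data off it (illustrative step)
    sudo dbctl instance db1144 depool
    # Commit the pending change; dbctl saves the diff to a Phabricator paste
    # and the previous config under /var/cache/conftool/dbconfig/, as logged above
    sudo dbctl config commit -m 'Cloning db1144 in db1244 for T350458'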
13:40 <btullis@cumin1002> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-druid-analytics cluster: Roll restart of jvm daemons. [production]
13:40 <btullis@cumin1002> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-druid-public cluster: Roll restart of jvm daemons. [production]
13:39 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1244.eqiad.wmnet with reason: provisioning db1244.eqiad.wmnet - T350458 [production]
13:39 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1244.eqiad.wmnet with reason: provisioning db1244.eqiad.wmnet - T350458 [production]
13:39 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1144.eqiad.wmnet with reason: provisioning db1244.eqiad.wmnet - T350458 [production]
13:38 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1144.eqiad.wmnet with reason: provisioning db1244.eqiad.wmnet - T350458 [production]
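A hedged sketch of the invocation behind the sre.hosts.downtime entries above, run from a cumin host; the duration and reason option names are assumptions, only the host and reason text come from the log:

    # Silence alerting for 1 day while the clone target is provisioned
    # (option names assumed; host and reason as logged)
    sudo cookbook sre.hosts.downtime --days 1 \
        --reason 'provisioning db1244.eqiad.wmnet - T350458' \
        db1144.eqiad.wmnet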
13:35 <jmm@cumin2002> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host cloudlb2002-dev.codfw.wmnet [production]
13:33 <btullis@cumin1002> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-druid-public cluster: Roll restart of jvm daemons. [production]
13:31 <btullis@cumin2002> START - Cookbook sre.hosts.reimage for host elastic2094.codfw.wmnet with OS bullseye [production]
13:31 <btullis@cumin1002> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-analytics cluster: Roll restart of jvm daemons. [production]
13:30 <akosiaris@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
13:30 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1244.eqiad.wmnet with reason: provisioning db1234.eqiad.wmnet - T350458 [production]
13:30 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2138:3314', diff saved to https://phabricator.wikimedia.org/P56062 and previous config saved to /var/cache/conftool/dbconfig/20240201-132938-marostegui.json [production]
13:29 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1244.eqiad.wmnet with reason: provisioning db1234.eqiad.wmnet - T350458 [production]
13:29 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1144.eqiad.wmnet with reason: provisioning db1234.eqiad.wmnet - T350458 [production]
13:29 <akosiaris@deploy2002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
13:29 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1144.eqiad.wmnet with reason: provisioning db1234.eqiad.wmnet - T350458 [production]
13:27 <akosiaris@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-debug: apply [production]
13:26 <akosiaris@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-debug: apply [production]
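The helmfile START/DONE pairs are emitted by the deployment tooling on the deploy hosts, one apply per datacenter. A minimal sketch of the underlying operation, assuming the usual deployment-charts layout (the /srv/deployment-charts path is an assumption):

    # Apply the mw-debug service release in one datacenter at a time,
    # as the eqiad-then-codfw ordering above shows
    cd /srv/deployment-charts/helmfile.d/services/mw-debug
    helmfile -e eqiad apply   # then repeat with -e codfw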
13:25 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2088.codfw.wmnet with reason: host reimage [production]
13:24 <btullis@cumin1002> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-analytics cluster: Roll restart of jvm daemons. [production]
13:23 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host cloudlb2002-dev.codfw.wmnet [production]
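A hedged sketch of the reboot cookbook invocation behind this pair of entries; the FQDN as the sole argument is an assumption:

    # Drain, reboot, and wait for the host to come back
    sudo cookbook sre.hosts.reboot-single cloudlb2002-dev.codfw.wmnet

The matching END (FAIL) line at 13:35 shows this run exiting with code 1 rather than completing cleanly.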
13:22 <btullis@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2088.codfw.wmnet with reason: host reimage [production]
13:20 <ayounsi@cumin1002> START - Cookbook sre.hosts.reimage for host sretest2005.codfw.wmnet with OS bookworm [production]
13:16 <ayounsi@cumin1002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host sretest2005.codfw.wmnet with OS bookworm [production]
13:16 <btullis@cumin1002> END (PASS) - Cookbook sre.opensearch.roll-restart-reboot (exit_code=0) rolling restart_daemons on A:datahubsearch [production]
13:15 <ayounsi@cumin1002> START - Cookbook sre.hosts.reimage for host sretest2005.codfw.wmnet with OS bookworm [production]
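A hedged sketch of the reimage invocation behind these entries; --os matches the log, the short hostname as argument is an assumption:

    # Reinstall the host with Debian bookworm
    sudo cookbook sre.hosts.reimage --os bookworm sretest2005

The first attempt ends with exit_code=99 at 13:16 and is retried at 13:20.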
13:14 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2138:3314 (T355609)', diff saved to https://phabricator.wikimedia.org/P56061 and previous config saved to /var/cache/conftool/dbconfig/20240201-131432-marostegui.json [production]
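The two "Repooling after maintenance db2138:3314" commits (13:14 and 13:29) reflect the staged repool pattern: a depooled instance is returned to service in percentage steps, each step its own commit. A hedged sketch, assuming dbctl's pool percentage option; the percentage value is illustrative:

    # Bring the multi-instance replica back at partial weight, then ramp up
    sudo dbctl instance db2138:3314 pool -p 50
    sudo dbctl config commit -m 'Repooling after maintenance db2138:3314 (T355609)'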
13:08 <btullis@cumin1002> START - Cookbook sre.opensearch.roll-restart-reboot rolling restart_daemons on A:datahubsearch [production]
13:03 <akosiaris@deploy2002> helmfile [codfw] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
13:03 <akosiaris@deploy2002> helmfile [codfw] START helmfile.d/services/rdf-streaming-updater: apply [production]
12:59 <akosiaris@deploy2002> helmfile [eqiad] DONE helmfile.d/services/rdf-streaming-updater: apply [production]
12:59 <akosiaris@deploy2002> helmfile [eqiad] START helmfile.d/services/rdf-streaming-updater: apply [production]
12:58 <jmm@cumin2002> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host cloudlb2001-dev.codfw.wmnet [production]
12:58 <btullis@cumin2002> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['elastic2088.codfw.wmnet'] [production]
12:57 <btullis@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['elastic2088.codfw.wmnet'] [production]
12:55 <btullis@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=1) upgrade firmware for hosts ['elastic2088.codfw.wmnet'] [production]
12:54 <btullis@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['elastic2088.codfw.wmnet'] [production]
12:49 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2138:3314 (T355609)', diff saved to https://phabricator.wikimedia.org/P56060 and previous config saved to /var/cache/conftool/dbconfig/20240201-124928-marostegui.json [production]
12:49 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db2138.codfw.wmnet with reason: Maintenance [production]
12:49 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db2138.codfw.wmnet with reason: Maintenance [production]
12:49 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2137:3314 (T355609)', diff saved to https://phabricator.wikimedia.org/P56059 and previous config saved to /var/cache/conftool/dbconfig/20240201-124906-marostegui.json [production]
12:46 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host cloudlb2001-dev.codfw.wmnet [production]
12:36 <btullis@cumin1002> START - Cookbook sre.hosts.reimage for host elastic2088.codfw.wmnet with OS bullseye [production]
12:34 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2137:3314', diff saved to https://phabricator.wikimedia.org/P56058 and previous config saved to /var/cache/conftool/dbconfig/20240201-123400-marostegui.json [production]
12:30 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2107.codfw.wmnet with reason: Maintenance [production]