2023-08-21
12:55 <stevemunene@cumin1001> START - Cookbook sre.hosts.reimage for host an-worker1111.eqiad.wmnet with OS bullseye [production]
12:53 <topranks> setting Lumen transport esams eqiad to default OSPF cost of 800 (bring circuit into normal usage) [production]
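(Illustrative note: on a Juniper core router, restoring the default OSPF cost on a transport link is typically a one-line metric change made in configuration mode; the interface name below is hypothetical and not taken from the log.)
    set protocols ospf area 0.0.0.0 interface et-0/0/1.0 metric 800
    commit confirmed 5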
12:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1130', diff saved to https://phabricator.wikimedia.org/P50607 and previous config saved to /var/cache/conftool/dbconfig/20230821-125137-ladsgroup.json [production]
12:51 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cephosd1005.eqiad.wmnet [production]
12:51 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-stretch1002.eqiad.wmnet [production]
12:51 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-stretch1001.eqiad.wmnet [production]
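(Illustrative note: the reboot-single entries are Spicerack cookbook runs launched from the cumin host; a typical invocation looks roughly like the following, with the host as a positional argument and no claim made about additional flags used in these runs.)
    sudo cookbook sre.hosts.reboot-single kafka-stretch1002.eqiad.wmnet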
12:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2111 (T344589)', diff saved to https://phabricator.wikimedia.org/P50606 and previous config saved to /var/cache/conftool/dbconfig/20230821-124529-ladsgroup.json [production]
12:44 <jelto@cumin1001> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=99) for host gitlab2002.wikimedia.org [production]
12:43 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-stretch1001.eqiad.wmnet [production]
12:42 <topranks> enabling BGP over Lumen transport cr2-eqiad to cr1-esams [production]
12:41 <btullis@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-druid-public cluster: Roll restart of jvm daemons. [production]
12:41 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host cephosd1005.eqiad.wmnet [production]
12:41 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cephosd1004.eqiad.wmnet [production]
12:40 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1015.eqiad.wmnet [production]
12:39 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2111 (T344589)', diff saved to https://phabricator.wikimedia.org/P50605 and previous config saved to /var/cache/conftool/dbconfig/20230821-123906-ladsgroup.json [production]
12:39 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2111.codfw.wmnet with reason: Maintenance [production]
12:38 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2111.codfw.wmnet with reason: Maintenance [production]
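(Illustrative note: the downtime cookbook sets monitoring downtime on the host before maintenance; a sketch of the invocation, with the flag names assumed rather than copied from the run.)
    sudo cookbook sre.hosts.downtime --days 1 --reason 'Maintenance' 'db2111.codfw.wmnet'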
12:36 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1130 (T344589)', diff saved to https://phabricator.wikimedia.org/P50604 and previous config saved to /var/cache/conftool/dbconfig/20230821-123631-ladsgroup.json [production]
12:35 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2101.codfw.wmnet with reason: Maintenance [production]
12:35 <btullis@cumin1001> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-druid-public cluster: Roll restart of jvm daemons. [production]
12:35 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2101.codfw.wmnet with reason: Maintenance [production]
12:35 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1015.eqiad.wmnet [production]
12:35 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1014.eqiad.wmnet [production]
12:31 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host cephosd1004.eqiad.wmnet [production]
12:31 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cephosd1003.eqiad.wmnet [production]
12:31 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1130 (T344589)', diff saved to https://phabricator.wikimedia.org/P50603 and previous config saved to /var/cache/conftool/dbconfig/20230821-123123-ladsgroup.json [production]
12:31 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1130.eqiad.wmnet with reason: Maintenance [production]
12:31 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1130.eqiad.wmnet with reason: Maintenance [production]
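(Illustrative note: the dbctl entries for db1130 correspond to a depool, maintenance, then staged-repool cycle driven from the cumin host; roughly as below, with pool percentages and commit messages illustrative only.)
    dbctl instance db1130 depool
    dbctl config commit -m 'Depooling db1130 (T344589)'
    # ... maintenance ...
    dbctl instance db1130 pool -p 100
    dbctl config commit -m 'Repooling after maintenance db1130 (T344589)'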
12:29 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1014.eqiad.wmnet [production]
12:29 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1013.eqiad.wmnet [production]
12:23 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1013.eqiad.wmnet [production]
12:23 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1012.eqiad.wmnet [production]
12:22 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host cephosd1003.eqiad.wmnet [production]
12:22 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cephosd1002.eqiad.wmnet [production]
12:16 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1012.eqiad.wmnet [production]
12:16 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1011.eqiad.wmnet [production]
12:13 <btullis@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-druid-analytics cluster: Roll restart of jvm daemons. [production]
12:13 <klausman@cumin1001> START - Cookbook sre.k8s.reboot-nodes rolling reboot on A:ml-staging-worker [production]
12:12 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host cephosd1002.eqiad.wmnet [production]
12:12 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cephosd1001.eqiad.wmnet [production]
12:09 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1011.eqiad.wmnet [production]
12:09 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1010.eqiad.wmnet [production]
12:07 <btullis@cumin1001> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-druid-analytics cluster: Roll restart of jvm daemons. [production]
12:06 <btullis@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-analytics cluster: Roll restart of jvm daemons. [production]
12:04 <klausman@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-reboot (exit_code=0) rolling reboot on A:ml-cache-codfw [production]
12:03 <zabe@deploy1002> Finished scap: Backport for [[gerrit:950811|Revert "Revert "add su namespace translations""]] (duration: 08m 13s) [production]
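(Illustrative note: the scap entry above is a backport deploy of Gerrit change 950811; on the deploy host the command usually has roughly this shape, with any extra options omitted here as unknown.)
    scap backport 950811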
12:03 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host cephosd1001.eqiad.wmnet [production]
12:02 <klausman@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching A:ml-cache-eqiad: Restart to pick up OpenJDK 11 security updates - klausman@cumin1001 [production]
12:01 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1010.eqiad.wmnet [production]
12:00 <btullis@cumin1001> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-analytics cluster: Roll restart of jvm daemons. [production]