2023-08-21
13:04 <jelto@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host gitlab2002.wikimedia.org [production]
13:03 <klausman@cumin1001> START - Cookbook sre.ganeti.reboot-vm for VM ml-staging-ctrl2002.codfw.wmnet [production]
13:02 <klausman@cumin1001> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM ml-staging-ctrl2001.codfw.wmnet [production]
13:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1172 (T342617)', diff saved to https://phabricator.wikimedia.org/P50609 and previous config saved to /var/cache/conftool/dbconfig/20230821-130118-ladsgroup.json [production]
13:01 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1172.eqiad.wmnet with reason: Maintenance [production]
13:00 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1172.eqiad.wmnet with reason: Maintenance [production]
13:00 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2111', diff saved to https://phabricator.wikimedia.org/P50608 and previous config saved to /var/cache/conftool/dbconfig/20230821-130036-ladsgroup.json [production]
12:58 <jelto@cumin1001> START - Cookbook sre.hosts.reboot-single for host gitlab2002.wikimedia.org [production]
12:58 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-stretch1002.eqiad.wmnet [production]
12:58 <klausman@cumin1001> START - Cookbook sre.ganeti.reboot-vm for VM ml-staging-ctrl2001.codfw.wmnet [production]
12:57 <klausman@cumin1001> END (PASS) - Cookbook sre.k8s.reboot-nodes (exit_code=0) rolling reboot on A:ml-staging-worker [production]
12:56 <stevemunene@cumin1001> START - Cookbook sre.hosts.reimage for host an-worker1112.eqiad.wmnet with OS bullseye [production]
12:55 <stevemunene@cumin1001> START - Cookbook sre.hosts.reimage for host an-worker1111.eqiad.wmnet with OS bullseye [production]
12:53 <topranks> setting Lumen transport esams eqiad to default OSPF cost of 800 (bring circuit into normal usage) [production]
12:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1130', diff saved to https://phabricator.wikimedia.org/P50607 and previous config saved to /var/cache/conftool/dbconfig/20230821-125137-ladsgroup.json [production]
12:51 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cephosd1005.eqiad.wmnet [production]
12:51 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-stretch1002.eqiad.wmnet [production]
12:51 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-stretch1001.eqiad.wmnet [production]
12:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2111 (T344589)', diff saved to https://phabricator.wikimedia.org/P50606 and previous config saved to /var/cache/conftool/dbconfig/20230821-124529-ladsgroup.json [production]
12:44 <jelto@cumin1001> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=99) for host gitlab2002.wikimedia.org [production]
12:43 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-stretch1001.eqiad.wmnet [production]
12:42 <topranks> enabling BGP over Lumen transport cr2-eqiad to cr1-esams [production]
12:41 <btullis@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-druid-public cluster: Roll restart of jvm daemons. [production]
12:41 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host cephosd1005.eqiad.wmnet [production]
12:41 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cephosd1004.eqiad.wmnet [production]
12:40 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1015.eqiad.wmnet [production]
12:39 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2111 (T344589)', diff saved to https://phabricator.wikimedia.org/P50605 and previous config saved to /var/cache/conftool/dbconfig/20230821-123906-ladsgroup.json [production]
12:39 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2111.codfw.wmnet with reason: Maintenance [production]
12:38 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2111.codfw.wmnet with reason: Maintenance [production]
12:36 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1130 (T344589)', diff saved to https://phabricator.wikimedia.org/P50604 and previous config saved to /var/cache/conftool/dbconfig/20230821-123631-ladsgroup.json [production]
12:35 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2101.codfw.wmnet with reason: Maintenance [production]
12:35 <btullis@cumin1001> START - Cookbook sre.zookeeper.roll-restart-zookeeper for Zookeeper A:zookeeper-druid-public cluster: Roll restart of jvm daemons. [production]
12:35 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2101.codfw.wmnet with reason: Maintenance [production]
12:35 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1015.eqiad.wmnet [production]
12:35 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1014.eqiad.wmnet [production]
12:31 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host cephosd1004.eqiad.wmnet [production]
12:31 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cephosd1003.eqiad.wmnet [production]
12:31 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1130 (T344589)', diff saved to https://phabricator.wikimedia.org/P50603 and previous config saved to /var/cache/conftool/dbconfig/20230821-123123-ladsgroup.json [production]
12:31 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1130.eqiad.wmnet with reason: Maintenance [production]
12:31 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1130.eqiad.wmnet with reason: Maintenance [production]
12:29 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1014.eqiad.wmnet [production]
12:29 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1013.eqiad.wmnet [production]
12:23 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1013.eqiad.wmnet [production]
12:23 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1012.eqiad.wmnet [production]
12:22 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host cephosd1003.eqiad.wmnet [production]
12:22 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host cephosd1002.eqiad.wmnet [production]
12:16 <btullis@cumin1001> START - Cookbook sre.hosts.reboot-single for host kafka-jumbo1012.eqiad.wmnet [production]
12:16 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host kafka-jumbo1011.eqiad.wmnet [production]
12:13 <btullis@cumin1001> END (PASS) - Cookbook sre.zookeeper.roll-restart-zookeeper (exit_code=0) for Zookeeper A:zookeeper-druid-analytics cluster: Roll restart of jvm daemons. [production]
12:13 <klausman@cumin1001> START - Cookbook sre.k8s.reboot-nodes rolling reboot on A:ml-staging-worker [production]