2023-08-30
07:20 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1021.eqiad.wmnet [production]
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 75%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P51995 and previous config saved to /var/cache/conftool/dbconfig/20230830-072009-root.json [production]
07:19 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:952346|Disable search result deduplication. (T341227)]] (duration: 15m 53s) [production]
07:18 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:17 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.drain-node (exit_code=99) for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:16 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2005.codfw.wmnet [production]
07:16 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2004.codfw.wmnet [production]
07:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104 (T343718)', diff saved to https://phabricator.wikimedia.org/P51994 and previous config saved to /var/cache/conftool/dbconfig/20230830-071416-ladsgroup.json [production]
07:13 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P51993 and previous config saved to /var/cache/conftool/dbconfig/20230830-071356-ladsgroup.json [production]
07:13 <ladsgroup@deploy1002> ladsgroup and pfischer: Continuing with sync [production]
07:12 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 50%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51992 and previous config saved to /var/cache/conftool/dbconfig/20230830-071228-root.json [production]
07:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2104 (T343718)', diff saved to https://phabricator.wikimedia.org/P51991 and previous config saved to /var/cache/conftool/dbconfig/20230830-071152-ladsgroup.json [production]
07:11 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2104.codfw.wmnet with reason: Maintenance [production]
07:11 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2104.codfw.wmnet with reason: Maintenance [production]
07:10 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2004.codfw.wmnet [production]
07:09 <stevemunene@cumin1001> START - Cookbook sre.hosts.reimage for host an-worker1129.eqiad.wmnet with OS bullseye [production]
07:09 <stevemunene@cumin1001> START - Cookbook sre.hosts.reimage for host an-worker1128.eqiad.wmnet with OS bullseye [production]
07:08 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2003.codfw.wmnet [production]
07:06 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 50%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P51990 and previous config saved to /var/cache/conftool/dbconfig/20230830-070504-root.json [production]
07:04 <ladsgroup@deploy1002> ladsgroup and pfischer: Backport for [[gerrit:952346|Disable search result deduplication. (T341227)]] synced to the testservers mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
07:04 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1020.eqiad.wmnet [production]
07:03 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1020.eqiad.wmnet [production]
07:03 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:952346|Disable search result deduplication. (T341227)]] [production]
07:01 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2003.codfw.wmnet [production]
07:01 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1127.eqiad.wmnet with OS bullseye [production]
06:58 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1126.eqiad.wmnet with OS bullseye [production]
06:58 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P51989 and previous config saved to /var/cache/conftool/dbconfig/20230830-065849-ladsgroup.json [production]
06:57 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 25%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51988 and previous config saved to /var/cache/conftool/dbconfig/20230830-065723-root.json [production]
06:57 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1020.eqiad.wmnet [production]
06:50 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1020.eqiad.wmnet [production]
06:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 25%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P51987 and previous config saved to /var/cache/conftool/dbconfig/20230830-064959-root.json [production]
06:43 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129 (T343718)', diff saved to https://phabricator.wikimedia.org/P51986 and previous config saved to /var/cache/conftool/dbconfig/20230830-064343-ladsgroup.json [production]
06:42 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 10%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51985 and previous config saved to /var/cache/conftool/dbconfig/20230830-064219-root.json [production]
06:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2097.codfw.wmnet with reason: Maintenance [production]
06:42 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2097.codfw.wmnet with reason: Maintenance [production]
06:41 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1129 (T343718)', diff saved to https://phabricator.wikimedia.org/P51984 and previous config saved to /var/cache/conftool/dbconfig/20230830-064131-ladsgroup.json [production]
06:41 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1129.eqiad.wmnet with reason: Maintenance [production]
06:41 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1129.eqiad.wmnet with reason: Maintenance [production]
06:37 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1127.eqiad.wmnet with reason: host reimage [production]
06:35 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1126.eqiad.wmnet with reason: host reimage [production]
06:34 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 10%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P51983 and previous config saved to /var/cache/conftool/dbconfig/20230830-063455-root.json [production]
06:33 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.debug (exit_code=0) for Netbox circuit ID 33 [production]
06:33 <ayounsi@cumin1001> START - Cookbook sre.network.debug for Netbox circuit ID 33 [production]
06:33 <stevemunene@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1127.eqiad.wmnet with reason: host reimage [production]
06:32 <stevemunene@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1126.eqiad.wmnet with reason: host reimage [production]
06:32 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.debug (exit_code=0) for Netbox circuit ID 33 [production]
06:31 <ayounsi@cumin1001> START - Cookbook sre.network.debug for Netbox circuit ID 33 [production]
06:27 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 5%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51982 and previous config saved to /var/cache/conftool/dbconfig/20230830-062714-root.json [production]
06:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 5%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P51981 and previous config saved to /var/cache/conftool/dbconfig/20230830-061950-root.json [production]