2023-08-30
08:01 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1022.eqiad.wmnet [production]
08:01 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1022.eqiad.wmnet [production]
07:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2125 (T343718)', diff saved to https://phabricator.wikimedia.org/P52009 and previous config saved to /var/cache/conftool/dbconfig/20230830-075956-ladsgroup.json [production]
07:59 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2125.codfw.wmnet with reason: Maintenance [production]
07:59 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2125.codfw.wmnet with reason: Maintenance [production]
07:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104 (T343718)', diff saved to https://phabricator.wikimedia.org/P52008 and previous config saved to /var/cache/conftool/dbconfig/20230830-075934-ladsgroup.json [production]
07:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1146:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52007 and previous config saved to /var/cache/conftool/dbconfig/20230830-075736-ladsgroup.json [production]
07:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1146.eqiad.wmnet with reason: Maintenance [production]
07:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1146.eqiad.wmnet with reason: Maintenance [production]
07:57 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2007.codfw.wmnet [production]
07:54 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1022.eqiad.wmnet [production]
07:51 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1128.eqiad.wmnet with OS bullseye [production]
07:50 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1129.eqiad.wmnet with OS bullseye [production]
07:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1128 (re)pooling @ 3%: Repooling after upgrade 10.4.31 T344309', diff saved to https://phabricator.wikimedia.org/P52006 and previous config saved to /var/cache/conftool/dbconfig/20230830-074852-root.json [production]
07:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es2028 (T344589)', diff saved to https://phabricator.wikimedia.org/P52005 and previous config saved to /var/cache/conftool/dbconfig/20230830-074702-ladsgroup.json [production]
07:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104', diff saved to https://phabricator.wikimedia.org/P52004 and previous config saved to /var/cache/conftool/dbconfig/20230830-074428-ladsgroup.json [production]
07:42 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1022.eqiad.wmnet [production]
07:42 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 100%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52003 and previous config saved to /var/cache/conftool/dbconfig/20230830-074238-root.json [production]
07:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling es2028 (T344589)', diff saved to https://phabricator.wikimedia.org/P52002 and previous config saved to /var/cache/conftool/dbconfig/20230830-074202-ladsgroup.json [production]
07:41 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on es2028.codfw.wmnet with reason: Maintenance [production]
07:41 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on es2028.codfw.wmnet with reason: Maintenance [production]
07:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 100%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P52001 and previous config saved to /var/cache/conftool/dbconfig/20230830-073514-root.json [production]
07:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1128 (re)pooling @ 1%: Repooling after upgrade 10.4.31 T344309', diff saved to https://phabricator.wikimedia.org/P52000 and previous config saved to /var/cache/conftool/dbconfig/20230830-073347-root.json [production]
07:31 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2006.codfw.wmnet [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1128 upgrade to mariadb 10.4.31', diff saved to https://phabricator.wikimedia.org/P51999 and previous config saved to /var/cache/conftool/dbconfig/20230830-073144-root.json [production]
07:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104', diff saved to https://phabricator.wikimedia.org/P51998 and previous config saved to /var/cache/conftool/dbconfig/20230830-072922-ladsgroup.json [production]
07:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
07:29 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
07:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129 (T343718)', diff saved to https://phabricator.wikimedia.org/P51997 and previous config saved to /var/cache/conftool/dbconfig/20230830-072902-ladsgroup.json [production]
07:28 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1128.eqiad.wmnet with reason: host reimage [production]
07:27 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 75%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51996 and previous config saved to /var/cache/conftool/dbconfig/20230830-072733-root.json [production]
07:26 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:26 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1021.eqiad.wmnet [production]
07:25 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1129.eqiad.wmnet with reason: host reimage [production]
07:25 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2006.codfw.wmnet [production]
07:23 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2005.codfw.wmnet [production]
07:22 <stevemunene@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1128.eqiad.wmnet with reason: host reimage [production]
07:22 <stevemunene@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1129.eqiad.wmnet with reason: host reimage [production]
07:20 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1021.eqiad.wmnet [production]
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 75%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P51995 and previous config saved to /var/cache/conftool/dbconfig/20230830-072009-root.json [production]
07:19 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:952346|Disable search result deduplication. (T341227)]] (duration: 15m 53s) [production]
07:18 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:17 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.drain-node (exit_code=99) for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:16 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2005.codfw.wmnet [production]
07:16 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2004.codfw.wmnet [production]
07:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104 (T343718)', diff saved to https://phabricator.wikimedia.org/P51994 and previous config saved to /var/cache/conftool/dbconfig/20230830-071416-ladsgroup.json [production]
07:13 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P51993 and previous config saved to /var/cache/conftool/dbconfig/20230830-071356-ladsgroup.json [production]
07:13 <ladsgroup@deploy1002> ladsgroup and pfischer: Continuing with sync [production]
07:12 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 50%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51992 and previous config saved to /var/cache/conftool/dbconfig/20230830-071228-root.json [production]
07:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2104 (T343718)', diff saved to https://phabricator.wikimedia.org/P51991 and previous config saved to /var/cache/conftool/dbconfig/20230830-071152-ladsgroup.json [production]