2023-08-30
ยง
|
07:41 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on es2028.codfw.wmnet with reason: Maintenance [production]
07:41 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on es2028.codfw.wmnet with reason: Maintenance [production]
07:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 100%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P52001 and previous config saved to /var/cache/conftool/dbconfig/20230830-073514-root.json [production]
07:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1128 (re)pooling @ 1%: Repooling after upgrade 10.4.31 T344309', diff saved to https://phabricator.wikimedia.org/P52000 and previous config saved to /var/cache/conftool/dbconfig/20230830-073347-root.json [production]
07:31 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2006.codfw.wmnet [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1128 upgrade to mariadb 10.4.31', diff saved to https://phabricator.wikimedia.org/P51999 and previous config saved to /var/cache/conftool/dbconfig/20230830-073144-root.json [production]
07:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104', diff saved to https://phabricator.wikimedia.org/P51998 and previous config saved to /var/cache/conftool/dbconfig/20230830-072922-ladsgroup.json [production]
07:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
07:29 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
07:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129 (T343718)', diff saved to https://phabricator.wikimedia.org/P51997 and previous config saved to /var/cache/conftool/dbconfig/20230830-072902-ladsgroup.json [production]
07:28 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1128.eqiad.wmnet with reason: host reimage [production]
07:27 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 75%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51996 and previous config saved to /var/cache/conftool/dbconfig/20230830-072733-root.json [production]
07:26 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:26 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1021.eqiad.wmnet [production]
07:25 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1129.eqiad.wmnet with reason: host reimage [production]
07:25 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2006.codfw.wmnet [production]
07:23 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2005.codfw.wmnet [production]
07:22 <stevemunene@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1128.eqiad.wmnet with reason: host reimage [production]
07:22 <stevemunene@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1129.eqiad.wmnet with reason: host reimage [production]
07:20 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1021.eqiad.wmnet [production]
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 75%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P51995 and previous config saved to /var/cache/conftool/dbconfig/20230830-072009-root.json [production]
07:19 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:952346|Disable search result deduplication. (T341227)]] (duration: 15m 53s) [production]
07:18 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:17 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.drain-node (exit_code=99) for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:16 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2005.codfw.wmnet [production]
07:16 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2004.codfw.wmnet [production]
07:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104 (T343718)', diff saved to https://phabricator.wikimedia.org/P51994 and previous config saved to /var/cache/conftool/dbconfig/20230830-071416-ladsgroup.json [production]
07:13 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P51993 and previous config saved to /var/cache/conftool/dbconfig/20230830-071356-ladsgroup.json [production]
07:13 <ladsgroup@deploy1002> ladsgroup and pfischer: Continuing with sync [production]
07:12 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 50%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51992 and previous config saved to /var/cache/conftool/dbconfig/20230830-071228-root.json [production]
07:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2104 (T343718)', diff saved to https://phabricator.wikimedia.org/P51991 and previous config saved to /var/cache/conftool/dbconfig/20230830-071152-ladsgroup.json [production]
07:11 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2104.codfw.wmnet with reason: Maintenance [production]
07:11 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2104.codfw.wmnet with reason: Maintenance [production]
07:10 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2004.codfw.wmnet [production]
07:09 <stevemunene@cumin1001> START - Cookbook sre.hosts.reimage for host an-worker1129.eqiad.wmnet with OS bullseye [production]
07:09 <stevemunene@cumin1001> START - Cookbook sre.hosts.reimage for host an-worker1128.eqiad.wmnet with OS bullseye [production]
07:08 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2003.codfw.wmnet [production]
07:06 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 50%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P51990 and previous config saved to /var/cache/conftool/dbconfig/20230830-070504-root.json [production]
07:04 <ladsgroup@deploy1002> ladsgroup and pfischer: Backport for [[gerrit:952346|Disable search result deduplication. (T341227)]] synced to the testservers mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
07:04 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1020.eqiad.wmnet [production]
07:03 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1020.eqiad.wmnet [production]
07:03 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:952346|Disable search result deduplication. (T341227)]] [production]
07:01 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2003.codfw.wmnet [production]
07:01 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1127.eqiad.wmnet with OS bullseye [production]
06:58 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1126.eqiad.wmnet with OS bullseye [production]
06:58 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P51989 and previous config saved to /var/cache/conftool/dbconfig/20230830-065849-ladsgroup.json [production]
06:57 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 25%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51988 and previous config saved to /var/cache/conftool/dbconfig/20230830-065723-root.json [production]
06:57 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1020.eqiad.wmnet [production]
06:50 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1020.eqiad.wmnet [production]