2023-08-30
08:31 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1023.eqiad.wmnet [production]
08:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2125 (T343718)', diff saved to https://phabricator.wikimedia.org/P52015 and previous config saved to /var/cache/conftool/dbconfig/20230830-083025-ladsgroup.json [production]
08:27 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1023.eqiad.wmnet [production]
08:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52014 and previous config saved to /var/cache/conftool/dbconfig/20230830-082645-ladsgroup.json [production]
08:19 <marostegui@cumin1001> dbctl commit (dc=all): 'db1128 (re)pooling @ 10%: Repooling after upgrade 10.4.31 T344309', diff saved to https://phabricator.wikimedia.org/P52013 and previous config saved to /var/cache/conftool/dbconfig/20230830-081901-root.json [production]
08:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es2028', diff saved to https://phabricator.wikimedia.org/P52012 and previous config saved to /var/cache/conftool/dbconfig/20230830-081714-ladsgroup.json [production]
08:06 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2008.codfw.wmnet [production]
08:05 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1023.eqiad.wmnet [production]
08:04 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2007.codfw.wmnet [production]
08:03 <marostegui@cumin1001> dbctl commit (dc=all): 'db1128 (re)pooling @ 5%: Repooling after upgrade 10.4.31 T344309', diff saved to https://phabricator.wikimedia.org/P52011 and previous config saved to /var/cache/conftool/dbconfig/20230830-080356-root.json [production]
08:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es2028', diff saved to https://phabricator.wikimedia.org/P52010 and previous config saved to /var/cache/conftool/dbconfig/20230830-080208-ladsgroup.json [production]
08:01 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1022.eqiad.wmnet [production]
08:01 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1022.eqiad.wmnet [production]
07:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2125 (T343718)', diff saved to https://phabricator.wikimedia.org/P52009 and previous config saved to /var/cache/conftool/dbconfig/20230830-075956-ladsgroup.json [production]
07:59 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2125.codfw.wmnet with reason: Maintenance [production]
07:59 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2125.codfw.wmnet with reason: Maintenance [production]
07:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104 (T343718)', diff saved to https://phabricator.wikimedia.org/P52008 and previous config saved to /var/cache/conftool/dbconfig/20230830-075934-ladsgroup.json [production]
07:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1146:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52007 and previous config saved to /var/cache/conftool/dbconfig/20230830-075736-ladsgroup.json [production]
07:57 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1146.eqiad.wmnet with reason: Maintenance [production]
07:57 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1146.eqiad.wmnet with reason: Maintenance [production]
07:57 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2007.codfw.wmnet [production]
07:54 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1022.eqiad.wmnet [production]
07:51 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1128.eqiad.wmnet with OS bullseye [production]
07:50 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1129.eqiad.wmnet with OS bullseye [production]
07:48 <marostegui@cumin1001> dbctl commit (dc=all): 'db1128 (re)pooling @ 3%: Repooling after upgrade 10.4.31 T344309', diff saved to https://phabricator.wikimedia.org/P52006 and previous config saved to /var/cache/conftool/dbconfig/20230830-074852-root.json [production]
07:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance es2028 (T344589)', diff saved to https://phabricator.wikimedia.org/P52005 and previous config saved to /var/cache/conftool/dbconfig/20230830-074702-ladsgroup.json [production]
07:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104', diff saved to https://phabricator.wikimedia.org/P52004 and previous config saved to /var/cache/conftool/dbconfig/20230830-074428-ladsgroup.json [production]
07:42 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1022.eqiad.wmnet [production]
07:42 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 100%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P52003 and previous config saved to /var/cache/conftool/dbconfig/20230830-074238-root.json [production]
07:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling es2028 (T344589)', diff saved to https://phabricator.wikimedia.org/P52002 and previous config saved to /var/cache/conftool/dbconfig/20230830-074202-ladsgroup.json [production]
07:41 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on es2028.codfw.wmnet with reason: Maintenance [production]
07:41 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on es2028.codfw.wmnet with reason: Maintenance [production]
07:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1165 (re)pooling @ 100%: Repooling after onsite upgrade', diff saved to https://phabricator.wikimedia.org/P52001 and previous config saved to /var/cache/conftool/dbconfig/20230830-073514-root.json [production]
07:33 <marostegui@cumin1001> dbctl commit (dc=all): 'db1128 (re)pooling @ 1%: Repooling after upgrade 10.4.31 T344309', diff saved to https://phabricator.wikimedia.org/P52000 and previous config saved to /var/cache/conftool/dbconfig/20230830-073347-root.json [production]
07:31 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2006.codfw.wmnet [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1128 upgrade to mariadb 10.4.31', diff saved to https://phabricator.wikimedia.org/P51999 and previous config saved to /var/cache/conftool/dbconfig/20230830-073144-root.json [production]
07:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2104', diff saved to https://phabricator.wikimedia.org/P51998 and previous config saved to /var/cache/conftool/dbconfig/20230830-072922-ladsgroup.json [production]
07:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
07:29 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
07:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129 (T343718)', diff saved to https://phabricator.wikimedia.org/P51997 and previous config saved to /var/cache/conftool/dbconfig/20230830-072902-ladsgroup.json [production]
07:28 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1128.eqiad.wmnet with reason: host reimage [production]
07:27 <marostegui@cumin1001> dbctl commit (dc=all): 'db1173 (re)pooling @ 75%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P51996 and previous config saved to /var/cache/conftool/dbconfig/20230830-072733-root.json [production]
07:26 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1021.eqiad.wmnet [production]
07:26 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1021.eqiad.wmnet [production]
07:25 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-worker1129.eqiad.wmnet with reason: host reimage [production]
07:25 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores2006.codfw.wmnet [production]
07:23 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores2005.codfw.wmnet [production]
07:22 <stevemunene@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1128.eqiad.wmnet with reason: host reimage [production]
07:22 <stevemunene@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1129.eqiad.wmnet with reason: host reimage [production]
07:20 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1021.eqiad.wmnet [production]