2024-11-01
13:43 <elukey@cumin1002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ganeti1044.mgmt.eqiad.wmnet with chassis set policy FORCE_RESTART [production]
13:43 <elukey@cumin1002> START - Cookbook sre.hosts.provision for host ganeti1044.mgmt.eqiad.wmnet with chassis set policy FORCE_RESTART [production]
13:38 <elukey@cumin1002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ganeti1044.mgmt.eqiad.wmnet with chassis set policy FORCE_RESTART [production]
13:33 <elukey@cumin1002> START - Cookbook sre.hosts.provision for host ganeti1044.mgmt.eqiad.wmnet with chassis set policy FORCE_RESTART [production]
13:20 <ladsgroup@cumin1002> START - Cookbook sre.mysql.pool db2190 gradually with 4 steps - Maint over [production]
12:43 <cmooney@cumin1002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1025.eqiad.wmnet [production]
12:43 <cmooney@cumin1002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1025.eqiad.wmnet [production]
12:43 <cmooney@cumin1002> END (FAIL) - Cookbook sre.ganeti.drain-node (exit_code=99) for draining ganeti node ganeti1025.eqiad.wmnet [production]
12:43 <cmooney@cumin1002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1025.eqiad.wmnet [production]
12:42 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1025.eqiad.wmnet [production]
12:28 <cmooney@cumin1002> START - Cookbook sre.hosts.reboot-single for host ganeti1025.eqiad.wmnet [production]
12:28 <topranks> rebooting ganeti1025 as VMs are unresponsive and will not shutdown or move [production]
10:38 <kevinbazira@deploy2002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'experimental' for release 'main' . [production]
09:46 <sukhe|off> sudo cumin -b4 "A:cp and A:magru" "run-puppet-agent" to pick up CR 1085569 [production]
02:25 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2198.codfw.wmnet with reason: Maintenance [production]
02:24 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2198.codfw.wmnet with reason: Maintenance [production]
02:24 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2195 (T376905)', diff saved to https://phabricator.wikimedia.org/P70840 and previous config saved to /var/cache/conftool/dbconfig/20241101-022447-ladsgroup.json [production]
02:09 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2195', diff saved to https://phabricator.wikimedia.org/P70839 and previous config saved to /var/cache/conftool/dbconfig/20241101-020940-ladsgroup.json [production]
01:59 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-presto1019.eqiad.wmnet with OS bullseye [production]
01:54 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2195', diff saved to https://phabricator.wikimedia.org/P70838 and previous config saved to /var/cache/conftool/dbconfig/20241101-015433-ladsgroup.json [production]
01:42 <urandom> Decommissioning Cassandra/aqs1013-{a,b} — T378725 [production]
01:40 <eevans@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 30 days, 0:00:00 on aqs1013.eqiad.wmnet with reason: Decommissioning — T378725 [production]
01:40 <eevans@cumin1002> START - Cookbook sre.hosts.downtime for 30 days, 0:00:00 on aqs1013.eqiad.wmnet with reason: Decommissioning — T378725 [production]
01:39 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2195 (T376905)', diff saved to https://phabricator.wikimedia.org/P70837 and previous config saved to /var/cache/conftool/dbconfig/20241101-013926-ladsgroup.json [production]
01:39 <eevans@cumin1002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for aqs1022.eqiad.wmnet [production]
01:39 <eevans@cumin1002> START - Cookbook sre.hosts.remove-downtime for aqs1022.eqiad.wmnet [production]
01:31 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db2195 (T376905)', diff saved to https://phabricator.wikimedia.org/P70836 and previous config saved to /var/cache/conftool/dbconfig/20241101-013102-ladsgroup.json [production]
01:30 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2195.codfw.wmnet with reason: Maintenance [production]
01:30 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2195.codfw.wmnet with reason: Maintenance [production]
01:30 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2181 (T376905)', diff saved to https://phabricator.wikimedia.org/P70835 and previous config saved to /var/cache/conftool/dbconfig/20241101-013035-ladsgroup.json [production]
01:25 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-presto1019.eqiad.wmnet with reason: host reimage [production]
01:22 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on an-presto1019.eqiad.wmnet with reason: host reimage [production]
01:15 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2181', diff saved to https://phabricator.wikimedia.org/P70834 and previous config saved to /var/cache/conftool/dbconfig/20241101-011528-ladsgroup.json [production]
01:07 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host an-presto1019.eqiad.wmnet with OS bullseye [production]
01:00 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2181', diff saved to https://phabricator.wikimedia.org/P70833 and previous config saved to /var/cache/conftool/dbconfig/20241101-010021-ladsgroup.json [production]
00:54 <bking@cumin2002> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['an-presto1019.eqiad.wmnet'] [production]
00:54 <bking@cumin2002> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['an-presto1019.eqiad.wmnet'] [production]
00:45 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2181 (T376905)', diff saved to https://phabricator.wikimedia.org/P70832 and previous config saved to /var/cache/conftool/dbconfig/20241101-004514-ladsgroup.json [production]
00:35 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db2181 (T376905)', diff saved to https://phabricator.wikimedia.org/P70831 and previous config saved to /var/cache/conftool/dbconfig/20241101-003546-ladsgroup.json [production]
00:35 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2181.codfw.wmnet with reason: Maintenance [production]
00:35 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2181.codfw.wmnet with reason: Maintenance [production]
00:35 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2167 (T376905)', diff saved to https://phabricator.wikimedia.org/P70830 and previous config saved to /var/cache/conftool/dbconfig/20241101-003520-ladsgroup.json [production]
00:20 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2167', diff saved to https://phabricator.wikimedia.org/P70829 and previous config saved to /var/cache/conftool/dbconfig/20241101-002013-ladsgroup.json [production]
00:05 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2167', diff saved to https://phabricator.wikimedia.org/P70828 and previous config saved to /var/cache/conftool/dbconfig/20241101-000506-ladsgroup.json [production]