2023-08-31
16:41 <cmooney@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:41 <cmooney@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add management record for lsw1-a2-codfw - cmooney@cumin1001" [production]
16:40 <cmooney@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add management record for lsw1-a2-codfw - cmooney@cumin1001" [production]
16:29 <eevans@cumin1001> START - Cookbook sre.hosts.reboot-single for host restbase1030.eqiad.wmnet [production]
16:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2180 (T343718)', diff saved to https://phabricator.wikimedia.org/P52236 and previous config saved to /var/cache/conftool/dbconfig/20230831-161736-ladsgroup.json [production]
16:04 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-cluster (exit_code=0) [production]
16:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2180', diff saved to https://phabricator.wikimedia.org/P52235 and previous config saved to /var/cache/conftool/dbconfig/20230831-160230-ladsgroup.json [production]
16:02 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-be2057.codfw.wmnet [production]
15:56 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5 days, 0:00:00 on cloudservices1006.eqiad.wmnet with reason: service bootstrap [production]
15:55 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 5 days, 0:00:00 on cloudservices1006.eqiad.wmnet with reason: service bootstrap [production]
15:54 <mvernon@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-be1058.eqiad.wmnet [production]
15:49 <hnowlan@cumin1001> END (PASS) - Cookbook sre.loadbalancer.restart-pybal (exit_code=0) rolling-restart of pybal on P{lvs1019*,lvs2013*} and A:lvs (T336380) [production]
15:49 <mvernon@cumin2002> START - Cookbook sre.hosts.reboot-single for host ms-be2057.codfw.wmnet [production]
15:48 <hnowlan@cumin1001> START - Cookbook sre.loadbalancer.restart-pybal rolling-restart of pybal on P{lvs1019*,lvs2013*} and A:lvs (T336380) [production]
15:47 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2180', diff saved to https://phabricator.wikimedia.org/P52234 and previous config saved to /var/cache/conftool/dbconfig/20230831-154724-ladsgroup.json [production]
15:46 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-be2056.codfw.wmnet [production]
15:45 <hnowlan@cumin1001> END (PASS) - Cookbook sre.loadbalancer.restart-pybal (exit_code=0) rolling-restart of pybal on P{lvs1020*,lvs2014*} and A:lvs (T336380) [production]
15:44 <hnowlan@cumin1001> START - Cookbook sre.loadbalancer.restart-pybal rolling-restart of pybal on P{lvs1020*,lvs2014*} and A:lvs (T336380) [production]
15:40 <mvernon@cumin1001> START - Cookbook sre.hosts.reboot-single for host ms-be1058.eqiad.wmnet [production]
15:40 <mvernon@cumin2002> START - Cookbook sre.hosts.reboot-single for host ms-be2056.codfw.wmnet [production]
15:39 <moritzm> failover ganeti master in ulsfo to ganeti4005 [production]
15:36 <mvernon@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-be1057.eqiad.wmnet [production]
15:35 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-be2055.codfw.wmnet [production]
15:35 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti4007.ulsfo.wmnet [production]
15:35 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti4007.ulsfo.wmnet [production]
15:32 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2180 (T343718)', diff saved to https://phabricator.wikimedia.org/P52233 and previous config saved to /var/cache/conftool/dbconfig/20230831-153217-ladsgroup.json [production]
15:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2180 (T343718)', diff saved to https://phabricator.wikimedia.org/P52232 and previous config saved to /var/cache/conftool/dbconfig/20230831-153005-ladsgroup.json [production]
15:29 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2180.codfw.wmnet with reason: Maintenance [production]
15:29 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2180.codfw.wmnet with reason: Maintenance [production]
15:29 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2171:3316 (T343718)', diff saved to https://phabricator.wikimedia.org/P52231 and previous config saved to /var/cache/conftool/dbconfig/20230831-152943-ladsgroup.json [production]
15:29 <mvernon@cumin1001> START - Cookbook sre.hosts.reboot-single for host ms-be1057.eqiad.wmnet [production]
15:29 <mvernon@cumin2002> START - Cookbook sre.hosts.reboot-single for host ms-be2055.codfw.wmnet [production]
15:28 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti4007.ulsfo.wmnet [production]
15:27 <marostegui@cumin1001> dbctl commit (dc=all): 'db1132 (re)pooling @ 100%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52230 and previous config saved to /var/cache/conftool/dbconfig/20230831-152710-root.json [production]
15:24 <jynus> extend backup1009 lv by additional 10TiB [production]
15:22 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti4007.ulsfo.wmnet [production]
15:22 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-be2054.codfw.wmnet [production]
15:21 <mvernon@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-be1056.eqiad.wmnet [production]
15:15 <bking@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
15:14 <mvernon@cumin1001> START - Cookbook sre.hosts.reboot-single for host ms-be1056.eqiad.wmnet [production]
15:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2171:3316', diff saved to https://phabricator.wikimedia.org/P52229 and previous config saved to /var/cache/conftool/dbconfig/20230831-151437-ladsgroup.json [production]
15:12 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wdqs1010.eqiad.wmnet with OS bullseye [production]
15:12 <bking@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - bking@cumin1001" [production]
15:12 <marostegui@cumin1001> dbctl commit (dc=all): 'db1132 (re)pooling @ 75%: Repooling after cloning another host', diff saved to https://phabricator.wikimedia.org/P52228 and previous config saved to /var/cache/conftool/dbconfig/20230831-151205-root.json [production]
15:11 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti4006.ulsfo.wmnet [production]
15:11 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti4006.ulsfo.wmnet [production]
15:11 <mvernon@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ms-be1055.eqiad.wmnet [production]
15:05 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti4006.ulsfo.wmnet [production]
15:00 <mvernon@cumin1001> START - Cookbook sre.hosts.reboot-single for host ms-be1055.eqiad.wmnet [production]
14:59 <mvernon@cumin2002> START - Cookbook sre.hosts.reboot-single for host ms-be2054.codfw.wmnet [production]