2023-04-10
21:08 <eevans@cumin1001> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host sessionstore1001.eqiad.wmnet [production]
21:06 <urandom> restarting Cassandra, sessionstore1003-a — T327954 [production]
21:04 <urandom> restarting Cassandra, sessionstore1002-a — T327954 [production]
20:57 <eevans@cumin1001> START - Cookbook sre.hosts.reboot-single for host sessionstore1001.eqiad.wmnet [production]
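The two "restarting Cassandra" entries above are part of a rolling, one-instance-at-a-time restart of the sessionstore cluster (T327954). As a rough illustration of that pattern only, here is a minimal Python sketch that drains and restarts a single instance and waits for it to rejoin; the per-instance cassandra-a unit and nodetool-a wrapper names are assumptions and are not taken from the log itself.

    #!/usr/bin/env python3
    """Illustrative sketch: restart one Cassandra instance on a multi-instance
    sessionstore host and wait for it to rejoin. The per-instance systemd unit
    (cassandra-a) and nodetool wrapper (nodetool-a) names are assumptions."""
    import subprocess
    import time

    def restart_instance(instance: str = "a", timeout: int = 600) -> None:
        # Flush memtables and stop accepting traffic before the restart.
        subprocess.run([f"nodetool-{instance}", "drain"], check=True)
        subprocess.run(["systemctl", "restart", f"cassandra-{instance}"], check=True)
        # Poll until the local instance reports Up/Normal again.
        deadline = time.time() + timeout
        while time.time() < deadline:
            status = subprocess.run(
                [f"nodetool-{instance}", "status"], capture_output=True, text=True
            )
            if status.returncode == 0 and any(
                line.startswith("UN") for line in status.stdout.splitlines()
            ):
                return
            time.sleep(10)
        raise TimeoutError(f"cassandra-{instance} did not come back within {timeout}s")

    if __name__ == "__main__":
        restart_instance("a")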
20:40 <brett@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on lvs3005.esams.wmnet with reason: host reimage [production]
20:36 <brett@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on lvs3005.esams.wmnet with reason: host reimage [production]
20:15 <brett@cumin2002> START - Cookbook sre.hosts.reimage for host lvs3005.esams.wmnet with OS bullseye [production]
20:09 <jclark@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ms-be1073.mgmt.eqiad.wmnet with reboot policy FORCED [production]
20:07 <jclark@cumin1001> START - Cookbook sre.hosts.provision for host ms-be1073.mgmt.eqiad.wmnet with reboot policy FORCED [production]
20:07 <jclark@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ms-be1072.mgmt.eqiad.wmnet with reboot policy FORCED [production]
20:05 <jclark@cumin1001> START - Cookbook sre.hosts.provision for host ms-be1072.mgmt.eqiad.wmnet with reboot policy FORCED [production]
19:53 <jclark@cumin1001> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirtlocal1003'] [production]
19:52 <jclark@cumin1001> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirtlocal1003'] [production]
19:52 <jclark@cumin1001> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirtlocal1002'] [production]
19:52 <jclark@cumin1001> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirtlocal1002'] [production]
19:51 <jclark@cumin1001> END (FAIL) - Cookbook sre.hardware.upgrade-firmware (exit_code=99) upgrade firmware for hosts ['cloudvirtlocal1001'] [production]
19:51 <jclark@cumin1001> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['cloudvirtlocal1001'] [production]
19:48 <jclark@cumin1001> END (PASS) - Cookbook sre.hosts.provision (exit_code=0) for host cloudvirtlocal1001.mgmt.eqiad.wmnet with reboot policy FORCED [production]
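The START/END cookbook entries in this block follow a fixed shape: timestamp, <user@host>, START or END with (PASS|FAIL) and an exit_code, then the message and [project]. As a small illustration, the Python sketch below tallies cookbook outcomes from lines like these; the regex is inferred from the entries above and is not an official SAL schema.

    import re
    from collections import Counter

    # Regex inferred from the log lines above; not an official SAL format definition.
    END_RE = re.compile(
        r"^(?P<time>\d{2}:\d{2}) <(?P<actor>[^>]+)> END \((?P<result>PASS|FAIL)\) - "
        r"Cookbook (?P<cookbook>\S+) \(exit_code=(?P<code>\d+)\)"
    )

    def tally(lines):
        """Count PASS/FAIL END events per cookbook name."""
        counts = Counter()
        for line in lines:
            m = END_RE.match(line.strip())
            if m:
                counts[(m.group("cookbook"), m.group("result"))] += 1
        return counts

    sample = [
        "20:09 <jclark@cumin1001> END (FAIL) - Cookbook sre.hosts.provision "
        "(exit_code=99) for host ms-be1073.mgmt.eqiad.wmnet with reboot policy "
        "FORCED [production]",
    ]
    print(tally(sample))  # Counter({('sre.hosts.provision', 'FAIL'): 1})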
19:35 <brett> Disable Puppet/PyBal on lvs3005 in preparation for reimaging - T321309 [production]
19:25 <mutante> mw2488 - scap pull - T334429 [production]
19:22 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for lvs6002.drmrs.wmnet [production]
19:22 <sukhe@cumin2002> START - Cookbook sre.hosts.remove-downtime for lvs6002.drmrs.wmnet [production]
19:19 <mforns@deploy2002> Finished deploy [airflow-dags/analytics@6d6f1ec]: (no justification provided) (duration: 00m 11s) [production]
19:19 <mforns@deploy2002> Started deploy [airflow-dags/analytics@6d6f1ec]: (no justification provided) [production]
19:16 <mutante> power-cycling mw2448 - down, no console output T334429 [production]
19:08 <brett@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host lvs6002.drmrs.wmnet with OS bullseye [production]
18:46 <brett@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on lvs6002.drmrs.wmnet with reason: host reimage [production]
18:43 <brett@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on lvs6002.drmrs.wmnet with reason: host reimage [production]
18:34 <herron@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
18:34 <herron@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: add kafka-logging1004 ipv6 - herron@cumin1001" [production]
18:33 <herron@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: add kafka-logging1004 ipv6 - herron@cumin1001" [production]
18:31 <herron@cumin1001> START - Cookbook sre.dns.netbox [production]
18:22 <brett@cumin2002> START - Cookbook sre.hosts.reimage for host lvs6002.drmrs.wmnet with OS bullseye [production]
18:16 <krinkle@deploy2002> Synchronized wmf-config/: (no justification provided) (duration: 587m 34s) [production]
17:29 <brett> Disable Puppet/PyBal on lvs6002 in preparation for reimaging - T321309 [production]
16:48 <brett@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host lvs6001.drmrs.wmnet with OS bullseye [production]
16:31 <brett@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on lvs6001.drmrs.wmnet with reason: host reimage [production]
16:27 <brett@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on lvs6001.drmrs.wmnet with reason: host reimage [production]
16:05 <brett@cumin2002> START - Cookbook sre.hosts.reimage for host lvs6001.drmrs.wmnet with OS bullseye [production]
15:53 <herron> centrallog1002:~# systemctl restart rsyslog [production]
15:46 <brett> Disable Puppet/PyBal on lvs6001 in preparation for reimaging - T321309 [production]
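The lvs3005, lvs6001 and lvs6002 entries above all follow the same reimage sequence visible in this log: disable Puppet and PyBal on the host, then run the reimage cookbook, which downtimes the host, reinstalls it with bullseye and removes the downtime afterwards. A minimal sketch of driving that sequence from a cumin host is below; the disable-puppet wrapper, the pybal unit name and the reimage cookbook flags (--os, -t) are assumptions based on common usage, not verbatim from this log.

    import subprocess

    HOST = "lvs6001.drmrs.wmnet"   # example host from the log above
    TASK = "T321309"

    # Command and flag names below are illustrative assumptions.
    steps = [
        ["cumin", HOST, f'disable-puppet "pre-reimage - {TASK}"'],
        ["cumin", HOST, "systemctl stop pybal"],
        ["cookbook", "sre.hosts.reimage", "--os", "bullseye", "-t", TASK,
         HOST.split(".")[0]],
    ]

    for cmd in steps:
        print("running:", " ".join(cmd))
        subprocess.run(cmd, check=True)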
14:57 <sukhe> enable puppet on A:lvs and A:ulsfo to merge 906580 [production]
14:52 <sukhe> disable puppet on A:lvs and A:ulsfo to merge 906580 [production]
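The two entries above show the usual pattern of freezing Puppet on a set of hosts (here the cumin aliases A:lvs and A:ulsfo) while change 906580 is merged, then re-enabling it to roll the change out. A minimal sketch of that pattern follows; the disable-puppet / enable-puppet / run-puppet-agent wrapper names and the cumin batch flag are assumptions, only the alias query comes from the log.

    import subprocess

    QUERY = "A:lvs and A:ulsfo"        # cumin alias query taken from the log entries
    REASON = "merging 906580 - sukhe"  # illustrative reason string

    # Wrapper command names are assumptions for illustration.
    subprocess.run(["cumin", QUERY, f'disable-puppet "{REASON}"'], check=True)
    # ... merge the change, then re-enable Puppet and run the agent in small batches:
    subprocess.run(["cumin", QUERY, f'enable-puppet "{REASON}"'], check=True)
    subprocess.run(["cumin", "-b", "5", QUERY, "run-puppet-agent"], check=True)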
14:10 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P46242 and previous config saved to /var/cache/conftool/dbconfig/20230410-141052-root.json [production]
13:55 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P46241 and previous config saved to /var/cache/conftool/dbconfig/20230410-135547-root.json [production]
13:40 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P46240 and previous config saved to /var/cache/conftool/dbconfig/20230410-134042-root.json [production]
13:25 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P46239 and previous config saved to /var/cache/conftool/dbconfig/20230410-132538-root.json [production]
13:10 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P46238 and previous config saved to /var/cache/conftool/dbconfig/20230410-131033-root.json [production]
12:55 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P46237 and previous config saved to /var/cache/conftool/dbconfig/20230410-125528-root.json [production]
12:40 <marostegui@cumin1001> dbctl commit (dc=all): 'db1201 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P46236 and previous config saved to /var/cache/conftool/dbconfig/20230410-124023-root.json [production]
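The db1201 entries above show a standard gradual repool: the replica is returned to service at 1%, 5%, 10%, 25%, 50%, 75% and finally 100% of its weight, one dbctl commit roughly every fifteen minutes. A minimal sketch of that loop is below, assuming dbctl subcommands of the form `dbctl instance <host> pool -p <pct>` and `dbctl config commit -m <msg>`; the exact flags are an assumption and do not appear in this log.

    import subprocess
    import time

    HOST = "db1201"
    STEPS = [1, 5, 10, 25, 50, 75, 100]   # percentages, as seen in the log above
    INTERVAL = 15 * 60                    # roughly 15 minutes between steps

    for pct in STEPS:
        # Subcommand shape is an assumption for illustration; check `dbctl --help`.
        subprocess.run(["dbctl", "instance", HOST, "pool", "-p", str(pct)], check=True)
        subprocess.run(
            ["dbctl", "config", "commit", "-m",
             f"{HOST} (re)pooling @ {pct}%: Repooling"],
            check=True,
        )
        if pct != 100:
            time.sleep(INTERVAL)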