2024-06-07
08:49 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2027.codfw.wmnet [production]
08:48 <jynus> reboot dbprov1001,1002,2001,2002 [production]
08:46 <jynus@cumin1002> START - Cookbook sre.dns.netbox [production]
08:41 <jynus@cumin1002> START - Cookbook sre.hosts.decommission for hosts db2099.codfw.wmnet [production]
08:40 <jynus@cumin1002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts db2098.codfw.wmnet [production]
08:40 <jynus@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:39 <jynus@cumin1002> START - Cookbook sre.dns.netbox [production]
08:39 <jynus@cumin1002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts db2097.codfw.wmnet [production]
08:39 <jynus@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:39 <jynus@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db2097.codfw.wmnet decommissioned, removing all IPs except the asset tag one - jynus@cumin1002" [production]
08:37 <jynus@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db2097.codfw.wmnet decommissioned, removing all IPs except the asset tag one - jynus@cumin1002" [production]
08:35 <jynus@cumin1002> START - Cookbook sre.dns.netbox [production]
08:19 <fabfur@cumin1002> START - Cookbook sre.hosts.reboot-single for host cp4049.ulsfo.wmnet [production]
08:19 <fabfur@cumin1002> conftool action : set/pooled=yes; selector: name=cp4049.ulsfo.wmnet [production]
08:18 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2025.codfw.wmnet [production]
08:18 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2025.codfw.wmnet [production]
08:15 <jynus> deleted from zarcillo db2097, db2098, db2099 T362802 T366877 T362883 [production]
08:12 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2025.codfw.wmnet [production]
08:09 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2025.codfw.wmnet [production]
08:03 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host pki-root1002.eqiad.wmnet [production]
07:57 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1190 (T364299)', diff saved to https://phabricator.wikimedia.org/P64239 and previous config saved to /var/cache/conftool/dbconfig/20240607-075742-marostegui.json [production]
07:57 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1190.eqiad.wmnet with reason: Maintenance [production]
07:57 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1190.eqiad.wmnet with reason: Maintenance [production]
07:57 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host pki-root1002.eqiad.wmnet [production]
07:56 <ryankemper@cumin2002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin2002 - T366555 [production]
07:51 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host seaborgium.wikimedia.org [production]
07:48 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host seaborgium.wikimedia.org [production]
07:45 <jynus@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on db2097.codfw.wmnet with reason: about to decommission [production]
07:45 <jynus@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on db2097.codfw.wmnet with reason: about to decommission [production]
07:45 <jynus@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on db2099.codfw.wmnet with reason: about to decommission [production]
07:44 <jynus@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on db2099.codfw.wmnet with reason: about to decommission [production]
07:30 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host bast1003.wikimedia.org with OS bookworm [production]
07:19 <jynus@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on db2098.codfw.wmnet with reason: about to decommission [production]
07:19 <jynus@cumin1002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on db2098.codfw.wmnet with reason: about to decommission [production]
07:12 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin2002 - T366555 [production]
07:09 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on bast1003.wikimedia.org with reason: host reimage [production]
07:07 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on bast1003.wikimedia.org with reason: host reimage [production]
06:52 <jmm@cumin2002> START - Cookbook sre.hosts.reimage for host bast1003.wikimedia.org with OS bookworm [production]
06:51 <moritzm> reimaging bast1003 to bookworm [production]
06:36 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin2002 - T366555 [production]
06:34 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin2002 - T366555 [production]
06:31 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin2002 - T366555 [production]
05:15 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin2002 - T366555 [production]
04:44 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
04:44 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
04:35 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_eqiad: eqiad cluster reboot - ryankemper@cumin2002 - T366555 [production]
04:33 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Depooling db2163 (T352010)', diff saved to https://phabricator.wikimedia.org/P64238 and previous config saved to /var/cache/conftool/dbconfig/20240607-043343-ladsgroup.json [production]
04:33 <ladsgroup@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2163.codfw.wmnet with reason: Maintenance [production]
04:33 <ladsgroup@cumin1002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2163.codfw.wmnet with reason: Maintenance [production]
04:33 <ladsgroup@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2162 (T352010)', diff saved to https://phabricator.wikimedia.org/P64237 and previous config saved to /var/cache/conftool/dbconfig/20240607-043320-ladsgroup.json [production]