2023-08-30
12:25 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ores1007.eqiad.wmnet [production]
12:19 <ladsgroup@deploy1002> isaranto and ladsgroup: Continuing with sync [production]
12:19 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P52080 and previous config saved to /var/cache/conftool/dbconfig/20230830-121921-ladsgroup.json [production]
12:19 <ladsgroup@deploy1002> isaranto and ladsgroup: Backport for [[gerrit:953590|ores-extension: fix thresholds (T343308)]] synced to the testservers mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, and mw-debug kubernetes deployment (accessible via k8s-experimental XWD option) [production]
12:19 <elukey@cumin1001> START - Cookbook sre.hosts.reboot-single for host ores1007.eqiad.wmnet [production]
12:17 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:953590|ores-extension: fix thresholds (T343308)]] [production]
12:16 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudservices1006.eqiad.wmnet with reason: host reimage [production]
12:15 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1033.eqiad.wmnet [production]
12:15 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1033.eqiad.wmnet [production]
12:14 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2148', diff saved to https://phabricator.wikimedia.org/P52079 and previous config saved to /var/cache/conftool/dbconfig/20230830-121433-ladsgroup.json [production]
12:13 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudservices1006.eqiad.wmnet with reason: host reimage [production]
12:12 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
12:10 <aborrero@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
12:10 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
12:09 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1033.eqiad.wmnet [production]
12:08 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host sretest1001.eqiad.wmnet with OS bullseye [production]
12:05 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1220 (T344589)', diff saved to https://phabricator.wikimedia.org/P52078 and previous config saved to /var/cache/conftool/dbconfig/20230830-120511-ladsgroup.json [production]
12:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T343718)', diff saved to https://phabricator.wikimedia.org/P52077 and previous config saved to /var/cache/conftool/dbconfig/20230830-120415-ladsgroup.json [production]
11:59 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2148', diff saved to https://phabricator.wikimedia.org/P52076 and previous config saved to /var/cache/conftool/dbconfig/20230830-115927-ladsgroup.json [production]
11:59 <aborrero@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host cloudservices1006.eqiad.wmnet with OS bullseye [production]
11:57 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1033.eqiad.wmnet [production]
11:56 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1032.eqiad.wmnet [production]
11:56 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1032.eqiad.wmnet [production]
11:52 <cmooney@cumin1001> END (FAIL) - Cookbook sre.network.provision (exit_code=99) for device ssw1-a1-codfw.mgmt.codfw.wmnet [production]
11:52 <cmooney@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:52 <cmooney@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Remove management record for ssw1-a1-codfw - cmooney@cumin1001" [production]
11:51 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on sretest1001.eqiad.wmnet with reason: host reimage [production]
11:51 <cmooney@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Remove management record for ssw1-a1-codfw - cmooney@cumin1001" [production]
11:50 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1220', diff saved to https://phabricator.wikimedia.org/P52074 and previous config saved to /var/cache/conftool/dbconfig/20230830-115005-ladsgroup.json [production]
11:50 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1032.eqiad.wmnet [production]
11:48 <jbond@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on sretest1001.eqiad.wmnet with reason: host reimage [production]
11:47 <cmooney@cumin1001> START - Cookbook sre.dns.netbox [production]
11:44 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2148 (T343718)', diff saved to https://phabricator.wikimedia.org/P52073 and previous config saved to /var/cache/conftool/dbconfig/20230830-114421-ladsgroup.json [production]
11:40 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1032.eqiad.wmnet [production]
11:37 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1182 (T343718)', diff saved to https://phabricator.wikimedia.org/P52072 and previous config saved to /var/cache/conftool/dbconfig/20230830-113728-ladsgroup.json [production]
11:37 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
11:37 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
11:36 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52071 and previous config saved to /var/cache/conftool/dbconfig/20230830-113656-ladsgroup.json [production]
11:36 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1031.eqiad.wmnet [production]
11:36 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1031.eqiad.wmnet [production]
11:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1220', diff saved to https://phabricator.wikimedia.org/P52070 and previous config saved to /var/cache/conftool/dbconfig/20230830-113459-ladsgroup.json [production]
11:34 <jbond@cumin1001> START - Cookbook sre.hosts.reimage for host sretest1001.eqiad.wmnet with OS bullseye [production]
11:30 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1031.eqiad.wmnet [production]
11:21 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1170:3312', diff saved to https://phabricator.wikimedia.org/P52069 and previous config saved to /var/cache/conftool/dbconfig/20230830-112150-ladsgroup.json [production]
11:19 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1220 (T344589)', diff saved to https://phabricator.wikimedia.org/P52068 and previous config saved to /var/cache/conftool/dbconfig/20230830-111952-ladsgroup.json [production]
11:17 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2148 (T343718)', diff saved to https://phabricator.wikimedia.org/P52067 and previous config saved to /var/cache/conftool/dbconfig/20230830-111720-ladsgroup.json [production]
11:17 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2148.codfw.wmnet with reason: Maintenance [production]
11:17 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2148.codfw.wmnet with reason: Maintenance [production]
11:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2138:3312 (T343718)', diff saved to https://phabricator.wikimedia.org/P52066 and previous config saved to /var/cache/conftool/dbconfig/20230830-111659-ladsgroup.json [production]
11:16 <jbond> switch cumin to the puppetdb api micro service Gerrit:953203 [production]