2022-12-09
ยง
|
11:09 <jgiannelos@deploy1002> Finished deploy [kartotherian/deploy@17b9319] (codfw): codfw: Enable mirroring for 25% of the traffic (duration: 05m 08s) [production]
11:06 <cgoubert@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Decomissioning netmon2001 - cgoubert@cumin1001" [production]
11:03 <jgiannelos@deploy1002> Started deploy [kartotherian/deploy@17b9319] (codfw): codfw: Enable mirroring for 25% of the traffic [production]
11:02 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:866472|Followup to 5cb38845: Don't drop revid info (T324801)]] (duration: 12m 59s) [production]
11:01 <cgoubert@cumin1001> START - Cookbook sre.dns.netbox [production]
11:00 <mvernon@cumin2002> START - Cookbook sre.hosts.reimage for host thanos-be2004.codfw.wmnet with OS bullseye [production]
10:51 <ladsgroup@deploy1002> ladsgroup and ladsgroup: Backport for [[gerrit:866472|Followup to 5cb38845: Don't drop revid info (T324801)]] synced to the testservers: mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet [production]
10:49 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:866472|Followup to 5cb38845: Don't drop revid info (T324801)]] [production]
10:36 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti5006.eqsin.wmnet to cluster eqsin and group 1 [production]
10:34 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti5006.eqsin.wmnet to cluster eqsin and group 1 [production]
10:25 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti5006.eqsin.wmnet [production]
10:15 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti5006.eqsin.wmnet [production]
10:09 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thanos-be2003.codfw.wmnet with OS bullseye [production]
09:53 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on thanos-be2003.codfw.wmnet with reason: host reimage [production]
09:51 <mvernon@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on thanos-be2003.codfw.wmnet with reason: host reimage [production]
09:34 <mvernon@cumin2002> START - Cookbook sre.hosts.reimage for host thanos-be2003.codfw.wmnet with OS bullseye [production]
08:39 <marostegui> dbmaint schema change on s8@eqiad T324797 [production]
08:39 <marostegui> dbmaint schema change on s7@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s6@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s5@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s4@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s2@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s1@eqiad T324797 [production]
08:35 <marostegui> dbmaint schema change on s3@eqiad T324797 [production]
08:02 <marostegui> dbmaint schema change on s3 T324797 [production]
07:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 100%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42661 and previous config saved to /var/cache/conftool/dbconfig/20221209-075057-root.json [production]
07:36 <marostegui> dbmaint schema change on s5 T324797 [production]
07:36 <marostegui> dbmaint schema change on s1 T324797 [production]
07:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 75%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42660 and previous config saved to /var/cache/conftool/dbconfig/20221209-073552-root.json [production]
07:29 <marostegui> dbmaint schema change on s6 T324797 [production]
07:29 <marostegui> dbmaint schema change on s8 T324797 [production]
07:29 <marostegui> dbmaint schema change on s7 T324797 [production]
07:29 <marostegui> dbmaint schema change on s4 T324797 [production]
07:29 <marostegui> dbmaint schema change on s2 T324797 [production]
07:28 <marostegui> Deploy schema change on s2 T324797 [production]
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 50%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42659 and previous config saved to /var/cache/conftool/dbconfig/20221209-072047-root.json [production]
07:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 25%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42658 and previous config saved to /var/cache/conftool/dbconfig/20221209-070542-root.json [production]
07:00 <marostegui> Deploy schema change on s4 T324797 [production]
06:58 <marostegui> Deploy schema change on s7 T324797 [production]
06:57 <marostegui> Deploy schema change on s8 T324797 [production]
06:55 <marostegui> Deploy schema change on s6 T324797 [production]
06:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 10%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42657 and previous config saved to /var/cache/conftool/dbconfig/20221209-065037-root.json [production]
06:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 5%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42656 and previous config saved to /var/cache/conftool/dbconfig/20221209-063532-root.json [production]
06:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 1%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42655 and previous config saved to /var/cache/conftool/dbconfig/20221209-062027-root.json [production]
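(The db1206 entries above, read bottom-up, trace a gradual repool ramp: 1% → 5% → 10% → 25% → 50% → 75% → 100%, with roughly 15 minutes between steps. A minimal dry-run sketch of such a ramp is below; it only prints the commands it would run, and the exact dbctl subcommands and flags are assumptions, not taken from this log.)

```shell
#!/bin/sh
# Hypothetical dry-run of a gradual repool ramp for db1206.
# The dbctl invocations are assumptions for illustration; the log above
# only records the resulting 'dbctl commit' messages.
ramp="1 5 10 25 50 75 100"
for pct in $ramp; do
  printf 'dbctl instance db1206 pool -p %s\n' "$pct"
  printf "dbctl config commit -m 'db1206 (re)pooling @ %s%%: Testing new RAID controller'\n" "$pct"
  # In a real run one would sleep (~15 min per the log timestamps) and
  # watch replication/error metrics before the next step.
done
```

(Each step in the log also saved a diff to Phabricator and the previous config to /var/cache/conftool/dbconfig/, which dbctl does as part of its commit.)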
05:28 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
05:13 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
05:10 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
05:03 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
04:09 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
03:52 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on 50 hosts with reason: Rolling restart in progress [production]