2022-12-09
10:49 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:866472|Followup to 5cb38845: Don't drop revid info (T324801)]] [production]
10:36 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti5006.eqsin.wmnet to cluster eqsin and group 1 [production]
10:34 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti5006.eqsin.wmnet to cluster eqsin and group 1 [production]
10:25 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti5006.eqsin.wmnet [production]
10:15 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti5006.eqsin.wmnet [production]
10:09 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thanos-be2003.codfw.wmnet with OS bullseye [production]
09:53 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on thanos-be2003.codfw.wmnet with reason: host reimage [production]
09:51 <mvernon@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on thanos-be2003.codfw.wmnet with reason: host reimage [production]
09:34 <mvernon@cumin2002> START - Cookbook sre.hosts.reimage for host thanos-be2003.codfw.wmnet with OS bullseye [production]
08:39 <marostegui> dbmaint schema change on s8@eqiad T324797 [production]
08:39 <marostegui> dbmaint schema change on s7@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s6@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s5@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s4@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s2@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s1@eqiad T324797 [production]
08:35 <marostegui> dbmaint schema change on s3@eqiad T324797 [production]
08:02 <marostegui> dbmaint schema change on s3 T324797 [production]
07:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 100%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42661 and previous config saved to /var/cache/conftool/dbconfig/20221209-075057-root.json [production]
07:36 <marostegui> dbmaint schema change on s5 T324797 [production]
07:36 <marostegui> dbmaint schema change on s1 T324797 [production]
07:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 75%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42660 and previous config saved to /var/cache/conftool/dbconfig/20221209-073552-root.json [production]
07:29 <marostegui> dbmaint schema change on s6 T324797 [production]
07:29 <marostegui> dbmaint schema change on s8 T324797 [production]
07:29 <marostegui> dbmaint schema change on s7 T324797 [production]
07:29 <marostegui> dbmaint schema change on s4 T324797 [production]
07:29 <marostegui> dbmaint schema change on s2 T324797 [production]
07:28 <marostegui> Deploy schema change on s2 T324797 [production]
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 50%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42659 and previous config saved to /var/cache/conftool/dbconfig/20221209-072047-root.json [production]
07:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 25%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42658 and previous config saved to /var/cache/conftool/dbconfig/20221209-070542-root.json [production]
07:00 <marostegui> Deploy schema change on s4 T324797 [production]
06:58 <marostegui> Deploy schema change on s7 T324797 [production]
06:57 <marostegui> Deploy schema change on s8 T324797 [production]
06:55 <marostegui> Deploy schema change on s6 T324797 [production]
06:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 10%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42657 and previous config saved to /var/cache/conftool/dbconfig/20221209-065037-root.json [production]
06:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 5%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42656 and previous config saved to /var/cache/conftool/dbconfig/20221209-063532-root.json [production]
06:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 1%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42655 and previous config saved to /var/cache/conftool/dbconfig/20221209-062027-root.json [production]
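The db1206 entries above (06:20 through 07:50) show a gradual repooling ramp: traffic is restored in increasing percentage steps (1% → 5% → 10% → 25% → 50% → 75% → 100%), each step committed via dbctl roughly fifteen minutes apart. A minimal sketch of that ramp logic, where the step values and log-message format mirror the entries above but the `commit` callback and the function itself are hypothetical illustrations, not Wikimedia's actual dbctl tooling:

```python
import time

# Percentage steps observed in the log for db1206's (re)pooling ramp.
REPOOL_STEPS = [1, 5, 10, 25, 50, 75, 100]

def repool_gradually(instance, reason, commit, wait_seconds=15 * 60, sleep=time.sleep):
    """Repool a database instance in increasing percentage steps.

    `commit` is a hypothetical callback standing in for a dbctl commit;
    the real workflow also saves a diff and the previous config to
    Phabricator and /var/cache/conftool/dbconfig respectively.
    """
    applied = []
    for pct in REPOOL_STEPS:
        commit(f"{instance} (re)pooling @ {pct}%: {reason}")
        applied.append(pct)
        if pct != 100:  # no need to wait after the final step
            sleep(wait_seconds)
    return applied
```

The slow ramp limits blast radius: if the host under test (here, a new RAID controller) misbehaves, it does so while serving only a small fraction of traffic.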
05:28 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
05:13 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
05:10 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
05:03 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
04:09 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
03:52 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on 50 hosts with reason: Rolling restart in progress [production]
03:52 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on 50 hosts with reason: Rolling restart in progress [production]
03:51 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
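The sre.elasticsearch.rolling-operation entries above upgrade the search_eqiad cluster three nodes at a time, with the affected hosts placed in a downtime window first. A minimal sketch of that batching pattern, where all helper names are hypothetical illustrations and not the actual cookbook's API:

```python
def batches(hosts, size=3):
    """Split a host list into fixed-size batches ('3 nodes at a time')."""
    return [hosts[i:i + size] for i in range(0, len(hosts), size)]

def rolling_upgrade(hosts, upgrade, wait_until_green, size=3):
    """Upgrade hosts batch by batch, checking cluster health between batches.

    `upgrade` and `wait_until_green` are hypothetical callbacks standing in
    for the real cookbook's per-node upgrade and cluster-health wait; a
    health check that never recovers is one way such a run ends in failure,
    as with the exit_code=99 entries above.
    """
    for batch in batches(hosts, size):
        for host in batch:
            upgrade(host)
        wait_until_green()
```

Keeping the batch size small relative to the cluster (3 of 50 hosts here) lets Elasticsearch re-replicate shards between batches, so the cluster stays available throughout the upgrade.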
02:39 <cwhite@deploy1002> rebuilt and synchronized wikiversions files: Revert "group2 wikis to 1.40.0-wmf.13" [production]
02:27 <eevans@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cassandra-dev2003.codfw.wmnet with OS buster [production]
02:27 <eevans@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - eevans@cumin1001" [production]
02:23 <eevans@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - eevans@cumin1001" [production]
02:11 <eevans@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cassandra-dev2003.codfw.wmnet with reason: host reimage [production]