2022-12-09
08:38 <marostegui> dbmaint schema change on s6@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s5@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s4@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s2@eqiad T324797 [production]
08:38 <marostegui> dbmaint schema change on s1@eqiad T324797 [production]
08:35 <marostegui> dbmaint schema change on s3@eqiad T324797 [production]
08:02 <marostegui> dbmaint schema change on s3 T324797 [production]
07:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 100%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42661 and previous config saved to /var/cache/conftool/dbconfig/20221209-075057-root.json [production]
07:36 <marostegui> dbmaint schema change on s5 T324797 [production]
07:36 <marostegui> dbmaint schema change on s1 T324797 [production]
07:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 75%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42660 and previous config saved to /var/cache/conftool/dbconfig/20221209-073552-root.json [production]
07:29 <marostegui> dbmaint schema change on s6 T324797 [production]
07:29 <marostegui> dbmaint schema change on s8 T324797 [production]
07:29 <marostegui> dbmaint schema change on s7 T324797 [production]
07:29 <marostegui> dbmaint schema change on s4 T324797 [production]
07:29 <marostegui> dbmaint schema change on s2 T324797 [production]
07:28 <marostegui> Deploy schema change on s2 T324797 [production]
07:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 50%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42659 and previous config saved to /var/cache/conftool/dbconfig/20221209-072047-root.json [production]
07:05 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 25%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42658 and previous config saved to /var/cache/conftool/dbconfig/20221209-070542-root.json [production]
07:00 <marostegui> Deploy schema change on s4 T324797 [production]
06:58 <marostegui> Deploy schema change on s7 T324797 [production]
06:57 <marostegui> Deploy schema change on s8 T324797 [production]
06:55 <marostegui> Deploy schema change on s6 T324797 [production]
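The T324797 entries above record a single schema change being rolled out section by section: a "Deploy schema change" per section (s2, s4, s6, s7, s8) followed later by "dbmaint schema change" passes, first per section and then per datacenter (e.g. s1@eqiad through s6@eqiad in the 08:35-08:38 entries). A minimal sketch of driving one change across sections, assuming a hypothetical apply_schema_change() helper rather than the real tooling:

    SECTIONS = ["s1", "s2", "s3", "s4", "s5", "s6", "s7", "s8"]
    TASK = "T324797"

    def apply_schema_change(section: str, dc: str = "") -> None:
        """Hypothetical helper: apply the ALTER tracked by TASK to a section,
        optionally restricted to one datacenter's replicas."""
        target = f"{section}@{dc}" if dc else section
        print(f"dbmaint schema change on {target} {TASK}")

    # First pass over the sections, as in the 06:55-07:36 entries above.
    for section in SECTIONS:
        apply_schema_change(section)

    # Later per-datacenter pass; the entries on this page cover s1-s6 at eqiad.
    for section in SECTIONS:
        apply_schema_change(section, "eqiad")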
06:50 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 10%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42657 and previous config saved to /var/cache/conftool/dbconfig/20221209-065037-root.json [production]
06:35 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 5%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42656 and previous config saved to /var/cache/conftool/dbconfig/20221209-063532-root.json [production]
06:20 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 1%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42655 and previous config saved to /var/cache/conftool/dbconfig/20221209-062027-root.json [production]
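The db1206 entries trace a standard gradual repool: the traffic weight is stepped up roughly every 15 minutes (1% at 06:20 through 100% at 07:50), with each step recorded as a dbctl commit. A minimal Python sketch of that staircase, assuming a hypothetical commit_weight() helper in place of the real dbctl call:

    import time

    # Steps and spacing taken from the db1206 timestamps above.
    REPOOL_STEPS = [1, 5, 10, 25, 50, 75, 100]   # percent of normal weight
    STEP_INTERVAL = 15 * 60                      # ~15 minutes between steps

    def commit_weight(instance: str, percent: int, reason: str) -> None:
        """Hypothetical stand-in for a dbctl commit that sets an instance's pooled weight."""
        print(f"dbctl commit (dc=all): '{instance} (re)pooling @ {percent}%: {reason}'")

    def gradual_repool(instance: str, reason: str) -> None:
        """Step the instance back to full weight, pausing between increments."""
        for percent in REPOOL_STEPS:
            commit_weight(instance, percent, reason)
            if percent < 100:
                time.sleep(STEP_INTERVAL)

    gradual_repool("db1206", "Testing new RAID controller")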
05:28 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
05:13 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
05:10 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
05:03 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
04:09 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
03:52 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on 50 hosts with reason: Rolling restart in progress [production]
03:52 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on 50 hosts with reason: Rolling restart in progress [production]
03:51 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
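The sre.elasticsearch.rolling-operation entries show an upgrade of search_eqiad driven 3 nodes at a time under a 3-hour downtime covering 50 hosts; the cookbook failed twice (exit_code=99) and was re-run until it completed. A rough sketch of the batching loop only, with hypothetical upgrade_node() and wait_for_green() helpers and illustrative host names, not the real cookbook API:

    from typing import Iterable, List

    BATCH_SIZE = 3  # "3 nodes at a time", as in the cookbook log lines above

    def batches(nodes: List[str], size: int) -> Iterable[List[str]]:
        """Yield successive fixed-size batches of nodes."""
        for i in range(0, len(nodes), size):
            yield nodes[i:i + size]

    def upgrade_node(node: str) -> None:
        """Hypothetical helper: upgrade elasticsearch and plugins on one node, then restart it."""
        print(f"upgrading {node}")

    def wait_for_green(cluster: str) -> None:
        """Hypothetical helper: block until the cluster reports green health again."""
        print(f"waiting for {cluster} to go green")

    def rolling_upgrade(cluster: str, nodes: List[str]) -> None:
        """Upgrade the cluster one batch at a time, re-checking health between batches."""
        for batch in batches(nodes, BATCH_SIZE):
            for node in batch:
                upgrade_node(node)
            wait_for_green(cluster)

    rolling_upgrade("search_eqiad", [f"elastic10{n:02d}" for n in range(1, 51)])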
02:39 <cwhite@deploy1002> rebuilt and synchronized wikiversions files: Revert "group2 wikis to 1.40.0-wmf.13" [production]
02:27 <eevans@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cassandra-dev2003.codfw.wmnet with OS buster [production]
02:27 <eevans@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - eevans@cumin1001" [production]
02:23 <eevans@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - eevans@cumin1001" [production]
02:11 <eevans@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cassandra-dev2003.codfw.wmnet with reason: host reimage [production]
02:08 <eevans@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cassandra-dev2003.codfw.wmnet with reason: host reimage [production]
01:49 <eevans@cumin1001> START - Cookbook sre.hosts.reimage for host cassandra-dev2003.codfw.wmnet with OS buster [production]
01:48 <eevans@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cassandra-dev2002.codfw.wmnet with OS buster [production]
01:48 <eevans@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - eevans@cumin1001" [production]
01:47 <eevans@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - eevans@cumin1001" [production]
01:33 <eevans@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cassandra-dev2002.codfw.wmnet with reason: host reimage [production]
01:30 <eevans@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cassandra-dev2002.codfw.wmnet with reason: host reimage [production]
01:11 <eevans@cumin1001> START - Cookbook sre.hosts.reimage for host cassandra-dev2002.codfw.wmnet with OS buster [production]
01:07 <eevans@cumin1001> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cassandra-dev2003 [production]
01:06 <eevans@cumin1001> START - Cookbook sre.network.configure-switch-interfaces for host cassandra-dev2003 [production]
01:06 <eevans@cumin1001> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cassandra-dev2002 [production]
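Read bottom-up, the cassandra-dev entries follow the usual reimage flow: the switch interfaces for both hosts are configured first, then each host is reimaged in turn, with the 2-hour downtime and the sre.puppet.sync-netbox-hiera runs triggered by the reimage cookbook itself. A condensed sketch of that ordering, with a hypothetical run_cookbook() stand-in and illustrative argument forms:

    HOSTS = ["cassandra-dev2002.codfw.wmnet", "cassandra-dev2003.codfw.wmnet"]
    NEW_OS = "buster"

    def run_cookbook(name: str, *args: str) -> None:
        """Hypothetical stand-in for launching a cookbook and waiting for it to finish."""
        print("cookbook", name, *args)

    # First pass: configure the switch ports for every host (01:06-01:07 above).
    for fqdn in HOSTS:
        run_cookbook("sre.network.configure-switch-interfaces", fqdn.split(".")[0])

    # Second pass: reimage one host at a time. The downtime and netbox-hiera sync
    # entries in the log are marked "Triggered by cookbooks.sre.hosts.reimage",
    # i.e. they are sub-steps of the reimage cookbook, not separate invocations here.
    for fqdn in HOSTS:
        run_cookbook("sre.hosts.reimage", "--os", NEW_OS, fqdn)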