2022-12-12
10:02 <jgiannelos@deploy1002> Finished deploy [kartotherian/deploy@bdc19a3] (codfw): Increase codfw mirrored traffic to 50% (duration: 01m 42s) [production]
10:00 <jgiannelos@deploy1002> Started deploy [kartotherian/deploy@bdc19a3] (codfw): Increase codfw mirrored traffic to 50% [production]
09:58 <cgoubert@cumin1001> conftool action : set/pooled=inactive; selector: name=parse1002.eqiad.wmnet [production]
09:06 <jgiannelos@deploy1002> Finished deploy [kartotherian/deploy@6af0d2d] (codfw): Increase codfw mirrored traffic to 25% (duration: 02m 15s) [production]
09:03 <jgiannelos@deploy1002> Started deploy [kartotherian/deploy@6af0d2d] (codfw): Increase codfw mirrored traffic to 25% [production]
08:55 <mvernon@cumin2002> START - Cookbook sre.hosts.reimage for host thanos-be1002.eqiad.wmnet with OS bullseye [production]
08:25 <matthiasmullie> UTC morning backports done [production]
08:24 <mlitn@deploy1002> Finished scap: Backport for [[gerrit:845518|Add mediawiki.searchpreview schema (T321069)]] (duration: 18m 21s) [production]
08:16 <mlitn@deploy1002> mlitn and mlitn: Backport for [[gerrit:845518|Add mediawiki.searchpreview schema (T321069)]] synced to the testservers: mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet [production]
08:08 <XioNoX> remove bast5001 from management routers ACLs (replaced by bast5002) [production]
08:06 <mlitn@deploy1002> Started scap: Backport for [[gerrit:845518|Add mediawiki.searchpreview schema (T321069)]] [production]
07:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 100%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42672 and previous config saved to /var/cache/conftool/dbconfig/20221212-074700-root.json [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 75%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42671 and previous config saved to /var/cache/conftool/dbconfig/20221212-073155-root.json [production]
07:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 50%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42670 and previous config saved to /var/cache/conftool/dbconfig/20221212-071650-root.json [production]
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 25%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42669 and previous config saved to /var/cache/conftool/dbconfig/20221212-070145-root.json [production]
06:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 10%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42668 and previous config saved to /var/cache/conftool/dbconfig/20221212-064640-root.json [production]
06:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 5%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42667 and previous config saved to /var/cache/conftool/dbconfig/20221212-063135-root.json [production]
06:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 1%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42666 and previous config saved to /var/cache/conftool/dbconfig/20221212-061630-root.json [production]
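The db1206 entries above show the usual gradual repool after maintenance: the host's weight is stepped from 1% up to 100% in 15-minute increments, with each step committed via dbctl. A minimal sketch of one such step is below; the percentage and commit message are taken from the log, but the exact invocation is an assumption, and in practice the ladder is normally driven by an automation script rather than typed by hand.

  # repool db1206 at 25% of its normal weight (assumed dbctl syntax)
  dbctl instance db1206 pool -p 25
  # commit the change; dbctl saves the diff and previous config as logged above
  dbctl config commit -m 'db1206 (re)pooling @ 25%: Testing new RAID controller'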
2022-12-10
03:46 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: search_codfw elasticsearch and plugin upgrade - ryankemper@cumin2002 [production]
02:00 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on 49 hosts with reason: Plugin upgrade for T322776 [production]
02:00 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on 49 hosts with reason: Plugin upgrade for T322776 [production]
01:59 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: search_codfw elasticsearch and plugin upgrade - ryankemper@cumin2002 [production]
01:21 <ryankemper@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
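The paired START/END downtime lines in this section come from the sre.hosts.downtime Spicerack cookbook, which schedules Icinga downtime for a set of hosts before disruptive work such as the Elasticsearch plugin upgrade. A rough sketch of the kind of invocation behind the 49-host entry above; the flag names are an assumption, not taken from the log, and the host query is left as a placeholder:

  # downtime 49 search hosts for 3 hours ahead of the plugin upgrade (assumed flags)
  sudo cookbook sre.hosts.downtime --hours 3 -r 'Plugin upgrade for T322776' '<cumin host query>'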
2022-12-09
23:59 <bking@cumin2002> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
23:39 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
22:27 <cstone> payments-wiki upgraded from 35555b67 to 77297c12 [production]
19:50 <jhathaway@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on mx1001.wikimedia.org with reason: Moar Disk! [production]
19:49 <jhathaway@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on mx1001.wikimedia.org with reason: Moar Disk! [production]
19:34 <jhathaway@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on mx1001.wikimedia.org with reason: Moar Disk! [production]
19:34 <jhathaway@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on mx1001.wikimedia.org with reason: Moar Disk! [production]
18:22 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:16 <jhathaway@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on mx2001.wikimedia.org with reason: Moar Disk 2! [production]
18:16 <jhathaway@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on mx2001.wikimedia.org with reason: Moar Disk 2! [production]
18:04 <jnuche@deploy1002> Installation of scap version "4.30.2" completed for 562 hosts [production]
18:04 <jnuche@deploy1002> Installing scap version "4.30.2" for 562 hosts [production]
17:59 <jnuche@deploy1002> Installing scap version "4.30.2" for 563 hosts [production]
17:44 <jhathaway@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on mx2001.wikimedia.org with reason: Moar Disk [production]
17:44 <jhathaway@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on mx2001.wikimedia.org with reason: Moar Disk [production]
17:03 <claime> eventgate-analytics bumped to 30 replicas to absorb increased load - T320518 [production]
17:02 <cmjohnson@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host kafka-stretch1001.mgmt.eqiad.wmnet with reboot policy FORCED [production]
17:00 <cmjohnson@cumin1001> START - Cookbook sre.hosts.provision for host kafka-stretch1001.mgmt.eqiad.wmnet with reboot policy FORCED [production]
16:59 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:59 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: kafka-stretch1001 - cmjohnson@cumin1001" [production]
16:58 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host kafka-stretch1002.eqiad.wmnet with OS bullseye [production]
16:58 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - cmjohnson@cumin1001" [production]
16:58 <cmjohnson@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: kafka-stretch1001 - cmjohnson@cumin1001" [production]
16:57 <cmjohnson@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - cmjohnson@cumin1001" [production]
16:56 <cmjohnson@cumin1001> START - Cookbook sre.dns.netbox [production]
16:43 <otto@deploy1002> helmfile [staging] DONE helmfile.d/services/eventgate-analytics: apply [production]
16:42 <cmjohnson@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kafka-stretch1002.eqiad.wmnet with reason: host reimage [production]
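The eventgate-analytics helmfile entry above (16:43) reflects the standard service deployment flow on the deploy host: adjust the chart values, then apply the release per environment with helmfile. A minimal sketch, assuming the conventional deployment-charts layout; the path and flags follow the usual workflow and are not taken from this log:

  # on deploy1002, apply the eventgate-analytics release to the staging environment
  cd /srv/deployment-charts/helmfile.d/services/eventgate-analytics
  helmfile -e staging -i apply   # -i prompts for confirmation of the rendered diff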