2022-12-12
10:57 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thanos-be1002.eqiad.wmnet with OS bullseye [production]
10:52 <claime> Switched parse1002 to parse1003 in parsoid-canary - T324949 [production]
10:51 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti5007.eqsin.wmnet [production]
10:49 <jmm@cumin2002> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host ganeti5007.eqsin.wmnet [production]
10:47 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:867122|Bump portals to HEAD]] (duration: 09m 48s) [production]
10:42 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on thanos-be1002.eqiad.wmnet with reason: host reimage [production]
10:39 <ladsgroup@deploy1002> ladsgroup and ladsgroup: Backport for [[gerrit:867122|Bump portals to HEAD]] synced to the testservers: mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet [production]
10:39 <mvernon@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on thanos-be1002.eqiad.wmnet with reason: host reimage [production]
10:37 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:867122|Bump portals to HEAD]] [production]
10:36 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti5007.eqsin.wmnet [production]
10:35 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti5007.eqsin.wmnet [production]
10:26 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti5007.eqsin.wmnet [production]
10:24 <slyngs> update add-ldap-group tool, https://gerrit.wikimedia.org/r/c/operations/puppet/+/860568 [production]
10:18 <slyngs> Update modify-mfa tools, https://gerrit.wikimedia.org/r/c/operations/puppet/+/861385 [production]
10:17 <claime> depooled parse1002.eqiad.wmnet for hw failure - T324949 [production]
10:02 <jgiannelos@deploy1002> Finished deploy [kartotherian/deploy@bdc19a3] (codfw): Increase codfw mirrored traffic to 50% (duration: 01m 42s) [production]
10:00 <jgiannelos@deploy1002> Started deploy [kartotherian/deploy@bdc19a3] (codfw): Increase codfw mirrored traffic to 50% [production]
09:58 <cgoubert@cumin1001> conftool action : set/pooled=inactive; selector: name=parse1002.eqiad.wmnet [production]
09:06 <jgiannelos@deploy1002> Finished deploy [kartotherian/deploy@6af0d2d] (codfw): Increase codfw mirrored traffic to 25% (duration: 02m 15s) [production]
09:03 <jgiannelos@deploy1002> Started deploy [kartotherian/deploy@6af0d2d] (codfw): Increase codfw mirrored traffic to 25% [production]
08:55 <mvernon@cumin2002> START - Cookbook sre.hosts.reimage for host thanos-be1002.eqiad.wmnet with OS bullseye [production]
08:25 <matthiasmullie> UTC morning backports done [production]
08:24 <mlitn@deploy1002> Finished scap: Backport for [[gerrit:845518|Add mediawiki.searchpreview schema (T321069)]] (duration: 18m 21s) [production]
08:16 <mlitn@deploy1002> mlitn and mlitn: Backport for [[gerrit:845518|Add mediawiki.searchpreview schema (T321069)]] synced to the testservers: mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet [production]
08:08 <XioNoX> remove bast5001 from management routers ACLs (replaced by bast5002) [production]
08:06 <mlitn@deploy1002> Started scap: Backport for [[gerrit:845518|Add mediawiki.searchpreview schema (T321069)]] [production]
07:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 100%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42672 and previous config saved to /var/cache/conftool/dbconfig/20221212-074700-root.json [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 75%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42671 and previous config saved to /var/cache/conftool/dbconfig/20221212-073155-root.json [production]
07:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 50%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42670 and previous config saved to /var/cache/conftool/dbconfig/20221212-071650-root.json [production]
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 25%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42669 and previous config saved to /var/cache/conftool/dbconfig/20221212-070145-root.json [production]
06:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 10%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42668 and previous config saved to /var/cache/conftool/dbconfig/20221212-064640-root.json [production]
06:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 5%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42667 and previous config saved to /var/cache/conftool/dbconfig/20221212-063135-root.json [production]
06:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 1%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42666 and previous config saved to /var/cache/conftool/dbconfig/20221212-061630-root.json [production]
2022-12-10
03:46 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: search_codfw elasticsearch and plugin upgrade - ryankemper@cumin2002 [production]
02:00 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on 49 hosts with reason: Plugin upgrade for T322776 [production]
02:00 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on 49 hosts with reason: Plugin upgrade for T322776 [production]
01:59 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: search_codfw elasticsearch and plugin upgrade - ryankemper@cumin2002 [production]
01:21 <ryankemper@cumin1001> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
2022-12-09
23:59 <bking@cumin2002> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
23:39 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_eqiad: search_eqiad elasticsearch and plugin upgrade - ryankemper@cumin1001 - T322776 [production]
22:27 <cstone> payments-wiki upgraded from 35555b67 to 77297c12 [production]
19:50 <jhathaway@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on mx1001.wikimedia.org with reason: Moar Disk! [production]
19:49 <jhathaway@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on mx1001.wikimedia.org with reason: Moar Disk! [production]
19:34 <jhathaway@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on mx1001.wikimedia.org with reason: Moar Disk! [production]
19:34 <jhathaway@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on mx1001.wikimedia.org with reason: Moar Disk! [production]
18:22 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
18:16 <jhathaway@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on mx2001.wikimedia.org with reason: Moar Disk 2! [production]
18:16 <jhathaway@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on mx2001.wikimedia.org with reason: Moar Disk 2! [production]
18:04 <jnuche@deploy1002> Installation of scap version "4.30.2" completed for 562 hosts [production]
18:04 <jnuche@deploy1002> Installing scap version "4.30.2" for 562 hosts [production]