2022-12-12
12:48 <moritzm> installing Django security updates [production]
12:32 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thanos-be1003.eqiad.wmnet with OS bullseye [production]
12:19 <moritzm> installing jqueryui security updates [production]
12:16 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on thanos-be1003.eqiad.wmnet with reason: host reimage [production]
12:13 <mvernon@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on thanos-be1003.eqiad.wmnet with reason: host reimage [production]
12:04 <moritzm> installing twisted security updates [production]
11:59 <mvernon@cumin2002> START - Cookbook sre.hosts.reimage for host thanos-be1003.eqiad.wmnet with OS bullseye [production]
11:49 <moritzm> drain ganeti5003 for eventual decom T322048 [production]
11:43 <moritzm> failover Ganeti master in eqsin to ganeti5004 (5003 will be decommissioned) T322048 [production]
11:40 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 14 days, 0:00:00 on parse1002.eqiad.wmnet with reason: Bad CPU [production]
11:40 <cgoubert@cumin1001> START - Cookbook sre.hosts.downtime for 14 days, 0:00:00 on parse1002.eqiad.wmnet with reason: Bad CPU [production]
11:13 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti5007.eqsin.wmnet to cluster eqsin and group 1 [production]
11:10 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti5007.eqsin.wmnet to cluster eqsin and group 1 [production]
10:58 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti5007.eqsin.wmnet [production]
10:57 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host thanos-be1002.eqiad.wmnet with OS bullseye [production]
10:52 <claime> Switched parse1002 to parse1003 in parsoid-canary - T324949 [production]
10:51 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti5007.eqsin.wmnet [production]
10:49 <jmm@cumin2002> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host ganeti5007.eqsin.wmnet [production]
10:47 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:867122|Bump portals to HEAD]] (duration: 09m 48s) [production]
10:42 <mvernon@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on thanos-be1002.eqiad.wmnet with reason: host reimage [production]
10:39 <ladsgroup@deploy1002> ladsgroup and ladsgroup: Backport for [[gerrit:867122|Bump portals to HEAD]] synced to the testservers: mwdebug1001.eqiad.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug2002.codfw.wmnet [production]
10:39 <mvernon@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on thanos-be1002.eqiad.wmnet with reason: host reimage [production]
10:37 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:867122|Bump portals to HEAD]] [production]
10:36 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti5007.eqsin.wmnet [production]
10:35 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti5007.eqsin.wmnet [production]
10:26 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti5007.eqsin.wmnet [production]
10:24 <slyngs> update add-ldap-group tool, https://gerrit.wikimedia.org/r/c/operations/puppet/+/860568 [production]
10:18 <slyngs> Update modify-mfa tools, https://gerrit.wikimedia.org/r/c/operations/puppet/+/861385 [production]
10:17 <claime> depooled parse1002.eqiad.wmnet for hw failure - T324949 [production]
10:02 <jgiannelos@deploy1002> Finished deploy [kartotherian/deploy@bdc19a3] (codfw): Increase codfw mirrored traffic to 50% (duration: 01m 42s) [production]
10:00 <jgiannelos@deploy1002> Started deploy [kartotherian/deploy@bdc19a3] (codfw): Increase codfw mirrored traffic to 50% [production]
09:58 <cgoubert@cumin1001> conftool action : set/pooled=inactive; selector: name=parse1002.eqiad.wmnet [production]
09:06 <jgiannelos@deploy1002> Finished deploy [kartotherian/deploy@6af0d2d] (codfw): Increase codfw mirrored traffic to 25% (duration: 02m 15s) [production]
09:03 <jgiannelos@deploy1002> Started deploy [kartotherian/deploy@6af0d2d] (codfw): Increase codfw mirrored traffic to 25% [production]
08:55 <mvernon@cumin2002> START - Cookbook sre.hosts.reimage for host thanos-be1002.eqiad.wmnet with OS bullseye [production]
08:25 <matthiasmullie> UTC morning backports done [production]
08:24 <mlitn@deploy1002> Finished scap: Backport for [[gerrit:845518|Add mediawiki.searchpreview schema (T321069)]] (duration: 18m 21s) [production]
08:16 <mlitn@deploy1002> mlitn and mlitn: Backport for [[gerrit:845518|Add mediawiki.searchpreview schema (T321069)]] synced to the testservers: mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet [production]
08:08 <XioNoX> remove bast5001 from management routers ACLs (replaced by bast5002) [production]
08:06 <mlitn@deploy1002> Started scap: Backport for [[gerrit:845518|Add mediawiki.searchpreview schema (T321069)]] [production]
07:47 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 100%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42672 and previous config saved to /var/cache/conftool/dbconfig/20221212-074700-root.json [production]
07:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 75%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42671 and previous config saved to /var/cache/conftool/dbconfig/20221212-073155-root.json [production]
07:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 50%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42670 and previous config saved to /var/cache/conftool/dbconfig/20221212-071650-root.json [production]
07:01 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 25%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42669 and previous config saved to /var/cache/conftool/dbconfig/20221212-070145-root.json [production]
06:46 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 10%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42668 and previous config saved to /var/cache/conftool/dbconfig/20221212-064640-root.json [production]
06:31 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 5%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42667 and previous config saved to /var/cache/conftool/dbconfig/20221212-063135-root.json [production]
06:16 <marostegui@cumin1001> dbctl commit (dc=all): 'db1206 (re)pooling @ 1%: Testing new RAID controller', diff saved to https://phabricator.wikimedia.org/P42666 and previous config saved to /var/cache/conftool/dbconfig/20221212-061630-root.json [production]
2022-12-10
03:46 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: search_codfw elasticsearch and plugin upgrade - ryankemper@cumin2002 [production]
02:00 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on 49 hosts with reason: Plugin upgrade for T322776 [production]
02:00 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on 49 hosts with reason: Plugin upgrade for T322776 [production]