2022-08-31
09:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 75%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33714 and previous config saved to /var/cache/conftool/dbconfig/20220831-095348-root.json [production]
09:51 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thanos-fe2003.codfw.wmnet [production]
09:44 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host thanos-fe2003.codfw.wmnet [production]
09:44 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thanos-fe2002.codfw.wmnet [production]
09:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 50%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33713 and previous config saved to /var/cache/conftool/dbconfig/20220831-093844-root.json [production]
09:37 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host thanos-fe2002.codfw.wmnet [production]
09:34 <filippo@cumin1001> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host thanos-fe2001.codfw.wmnet [production]
09:33 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on parse1002.eqiad.wmnet with reason: host reimage [production]
09:29 <cgoubert@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on parse1002.eqiad.wmnet with reason: host reimage [production]
09:27 <moritzm> installing docker.io bugfix updates from Bullseye point release [production]
09:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 25%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33712 and previous config saved to /var/cache/conftool/dbconfig/20220831-092339-root.json [production]
09:22 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host thanos-fe2001.codfw.wmnet [production]
09:19 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host webperf2003.codfw.wmnet [production]
09:17 <cgoubert@cumin1001> START - Cookbook sre.hosts.reimage for host parse1002.eqiad.wmnet with OS buster [production]
09:17 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thanos-fe1003.eqiad.wmnet [production]
09:13 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host webperf2003.codfw.wmnet [production]
09:11 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host thanos-fe1003.eqiad.wmnet [production]
09:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 10%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33711 and previous config saved to /var/cache/conftool/dbconfig/20220831-090834-root.json [production]
08:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 5%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33710 and previous config saved to /var/cache/conftool/dbconfig/20220831-085329-root.json [production]
08:51 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thanos-fe1002.eqiad.wmnet [production]
08:43 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host thanos-fe1002.eqiad.wmnet [production]
08:39 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thanos-fe1001.eqiad.wmnet [production]
08:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 4%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33709 and previous config saved to /var/cache/conftool/dbconfig/20220831-083824-root.json [production]
08:32 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host thanos-fe1001.eqiad.wmnet [production]
08:28 <moritzm> upgrading ganeti2016/ganeti2018 to 3.0.2 T312637 [production]
08:28 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on 24 hosts with reason: Downtiming php7.4 parsoid servers until they are ready to pool [production]
08:27 <cgoubert@cumin1001> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on 24 hosts with reason: Downtiming php7.4 parsoid servers until they are ready to pool [production]
08:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 3%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33708 and previous config saved to /var/cache/conftool/dbconfig/20220831-082319-root.json [production]
08:20 <vgutierrez> end test trafficserver: Hide non session cookies during cache lookup in cp6016 - T316338 [production]
08:12 <vgutierrez> test trafficserver: Hide non session cookies during cache lookup in cp6016 - T316338 [production]
08:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 2%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33707 and previous config saved to /var/cache/conftool/dbconfig/20220831-080815-root.json [production]
07:54 <filippo@cumin1001> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host prometheus2006.codfw.wmnet [production]
07:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 1%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33706 and previous config saved to /var/cache/conftool/dbconfig/20220831-075310-root.json [production]
07:51 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti2022.codfw.wmnet to cluster codfw and group B [production]
07:50 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host prometheus1006.eqiad.wmnet [production]
07:50 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti2022.codfw.wmnet to cluster codfw and group B [production]
07:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1120 for upgrade', diff saved to https://phabricator.wikimedia.org/P33705 and previous config saved to /var/cache/conftool/dbconfig/20220831-074748-root.json [production]
07:45 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2022.codfw.wmnet [production]
07:40 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host prometheus1006.eqiad.wmnet [production]
07:39 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host prometheus2006.codfw.wmnet [production]
07:37 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2022.codfw.wmnet [production]
07:15 <godog> bounce thanos-compact on thanos-fe2001 [production]
05:00 <marostegui> Failover m3 from db1183 to db1159 - T316506 [production]
04:44 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on db[2132,2160].codfw.wmnet,db[1117,1195].eqiad.wmnet with reason: switchover m1 T316506 [production]
04:44 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on db[2132,2160].codfw.wmnet,db[1117,1195].eqiad.wmnet with reason: switchover m1 T316506 [production]
03:23 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
03:23 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
03:17 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
02:50 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
02:49 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]