2022-08-31
09:17 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thanos-fe1003.eqiad.wmnet [production]
09:13 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host webperf2003.codfw.wmnet [production]
09:11 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host thanos-fe1003.eqiad.wmnet [production]
09:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 10%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33711 and previous config saved to /var/cache/conftool/dbconfig/20220831-090834-root.json [production]
08:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 5%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33710 and previous config saved to /var/cache/conftool/dbconfig/20220831-085329-root.json [production]
08:51 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thanos-fe1002.eqiad.wmnet [production]
08:43 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host thanos-fe1002.eqiad.wmnet [production]
08:39 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host thanos-fe1001.eqiad.wmnet [production]
08:38 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 4%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33709 and previous config saved to /var/cache/conftool/dbconfig/20220831-083824-root.json [production]
08:32 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host thanos-fe1001.eqiad.wmnet [production]
08:28 <moritzm> upgrading ganeti2016/ganeti2018 to 3.0.2 T312637 [production]
08:28 <cgoubert@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on 24 hosts with reason: Downtiming php7.4 parsoid servers until they are ready to pool [production]
08:27 <cgoubert@cumin1001> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on 24 hosts with reason: Downtiming php7.4 parsoid servers until they are ready to pool [production]
08:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 3%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33708 and previous config saved to /var/cache/conftool/dbconfig/20220831-082319-root.json [production]
08:20 <vgutierrez> end test trafficserver: Hide non session cookies during cache lookup in cp6016 - T316338 [production]
08:12 <vgutierrez> test trafficserver: Hide non session cookies during cache lookup in cp6016 - T316338 [production]
08:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 2%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33707 and previous config saved to /var/cache/conftool/dbconfig/20220831-080815-root.json [production]
07:54 <filippo@cumin1001> END (FAIL) - Cookbook sre.hosts.reboot-single (exit_code=1) for host prometheus2006.codfw.wmnet [production]
07:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db1120 (re)pooling @ 1%: Repooling after upgrade', diff saved to https://phabricator.wikimedia.org/P33706 and previous config saved to /var/cache/conftool/dbconfig/20220831-075310-root.json [production]
07:51 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti2022.codfw.wmnet to cluster codfw and group B [production]
07:50 <filippo@cumin1001> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host prometheus1006.eqiad.wmnet [production]
07:50 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti2022.codfw.wmnet to cluster codfw and group B [production]
07:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1120 for upgrade', diff saved to https://phabricator.wikimedia.org/P33705 and previous config saved to /var/cache/conftool/dbconfig/20220831-074748-root.json [production]
07:45 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2022.codfw.wmnet [production]
07:40 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host prometheus1006.eqiad.wmnet [production]
07:39 <filippo@cumin1001> START - Cookbook sre.hosts.reboot-single for host prometheus2006.codfw.wmnet [production]
07:37 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2022.codfw.wmnet [production]
07:15 <godog> bounce thanos-compact on thanos-fe2001 [production]
05:00 <marostegui> Failover m3 from db1183 to db1159 - T316506 [production]
04:44 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on db[2132,2160].codfw.wmnet,db[1117,1195].eqiad.wmnet with reason: switchover m1 T316506 [production]
04:44 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on db[2132,2160].codfw.wmnet,db[1117,1195].eqiad.wmnet with reason: switchover m1 T316506 [production]
03:23 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
03:23 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
03:17 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
02:50 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
02:49 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
00:15 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
00:14 <ryankemper> T316719 First elastic host upgraded properly. Cancelling cookbook to kick off a new rolling upgrade that will go 3 nodes at a time (first run was just one host as a sanity check) [production]
00:14 <ryankemper@cumin2002> END (ERROR) - Cookbook sre.elasticsearch.rolling-operation (exit_code=97) Operation.UPGRADE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
00:08 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
2022-08-30
23:55 <ryankemper> T316719 Merged https://phabricator.wikimedia.org/T316719; running puppet across codfw fleet: `ryankemper@cumin2002:~$ sudo -E cumin -b 6 'A:elastic-codfw' 'run-puppet-agent'` [production]
23:50 <ryankemper@cumin2002> END (ERROR) - Cookbook sre.elasticsearch.rolling-operation (exit_code=97) Operation.UPGRADE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
23:50 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
22:02 <eileen> civicrm upgraded from a31c7590 to 76308ffb [production]
21:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1138 (T314041)', diff saved to https://phabricator.wikimedia.org/P33703 and previous config saved to /var/cache/conftool/dbconfig/20220830-210218-ladsgroup.json [production]
21:02 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1138.eqiad.wmnet with reason: Maintenance [production]
21:02 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1138.eqiad.wmnet with reason: Maintenance [production]
20:43 <ryankemper@cumin2002> END (ERROR) - Cookbook sre.elasticsearch.rolling-operation (exit_code=97) Operation.UPGRADE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
20:43 <ryankemper@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.UPGRADE (1 nodes at a time) for ElasticSearch cluster search_codfw: codfw es7 cluster upgrade - ryankemper@cumin2002 - T316719 [production]
20:24 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]