2024-06-11
ยง
|
08:31 <filippo@cumin1002> END (PASS) - Cookbook sre.kafka.roll-restart-reboot-brokers (exit_code=0) rolling reboot on A:kafka-logging-codfw [production]
08:30 <marostegui> Install 10.11 on db1153 (unused x2 replica) [production]
08:13 <marostegui@cumin1002> dbctl commit (dc=all): 'db1222 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P64600 and previous config saved to /var/cache/conftool/dbconfig/20240611-081314-root.json [production]
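The '(re)pooling @ N%' commits throughout this log follow the usual gradual repool of a database replica (5% → 10% → 25% → 50% → 75% → 100%). One step looks roughly like the sketch below, assuming dbctl's instance/pool subcommands; in production these commits are generated by an automated repool script, so the exact invocation may differ:

    # Raise db1222 to 75% of its target weight, review, then commit the change.
    dbctl instance db1222 pool -p 75
    dbctl config diff
    dbctl config commit -m "db1222 (re)pooling @ 75%: Repooling"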
08:05 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1024.eqiad.wmnet [production]
08:04 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1024.eqiad.wmnet [production]
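The ganeti entries in this log come in nested pairs: sre.ganeti.drain-node migrates VMs off a node and, as the START/END ordering above suggests, drives sre.hosts.reboot-single for the same host before finishing. A sketch of one node's cycle, assuming the node FQDN is a positional argument:

    # Drain ganeti1024 and reboot it; the nested START/END pairs above suggest
    # reboot-single is driven by the drain cookbook itself. Argument form assumed.
    sudo cookbook sre.ganeti.drain-node ganeti1024.eqiad.wmnet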
08:02 <gmodena@deploy1002> helmfile [staging] DONE helmfile.d/services/mw-page-content-change-enrich: apply [production]
08:02 <gmodena@deploy1002> helmfile [staging] START helmfile.d/services/mw-page-content-change-enrich: apply [production]
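helmfile deployments are applied from the deployment host, one service and one environment at a time (staging first, then the production datacenters, as the machinetranslation entries below show). A minimal sketch, assuming the standard deployment-charts layout on deploy1002:

    # Apply the service's helmfile to the staging environment first.
    cd /srv/deployment-charts/helmfile.d/services/mw-page-content-change-enrich
    helmfile -e staging apply
    # Repeat with -e codfw / -e eqiad once staging looks healthy.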
07:58 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1024.eqiad.wmnet [production]
07:58 <marostegui@cumin1002> dbctl commit (dc=all): 'db1222 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P64599 and previous config saved to /var/cache/conftool/dbconfig/20240611-075809-root.json [production]
07:55 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2029.codfw.wmnet [production]
07:54 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2030.codfw.wmnet [production]
07:54 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2030.codfw.wmnet [production]
07:48 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2030.codfw.wmnet [production]
07:47 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1024.eqiad.wmnet [production]
07:45 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1023.eqiad.wmnet [production]
07:45 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2030.codfw.wmnet [production]
07:44 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1023.eqiad.wmnet [production]
07:43 <marostegui@cumin1002> dbctl commit (dc=all): 'db1222 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P64598 and previous config saved to /var/cache/conftool/dbconfig/20240611-074304-root.json [production]
07:40 <kart_> Updated MinT to 2024-06-11-052620-production (T364122, T346226, T357548) [production]
07:40 <marostegui@cumin1002> dbctl commit (dc=all): 'db1233 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P64597 and previous config saved to /var/cache/conftool/dbconfig/20240611-074009-root.json [production]
07:38 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1023.eqiad.wmnet [production]
07:37 <kartik@deploy1002> helmfile [eqiad] DONE helmfile.d/services/machinetranslation: apply [production]
07:36 <filippo@cumin1002> START - Cookbook sre.kafka.roll-restart-reboot-brokers rolling reboot on A:kafka-logging-codfw [production]
07:28 <kartik@deploy1002> helmfile [eqiad] START helmfile.d/services/machinetranslation: apply [production]
07:27 <marostegui@cumin1002> dbctl commit (dc=all): 'db1222 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P64596 and previous config saved to /var/cache/conftool/dbconfig/20240611-072758-root.json [production]
07:26 <kartik@deploy1002> helmfile [codfw] DONE helmfile.d/services/machinetranslation: apply [production]
07:25 <marostegui@cumin1002> dbctl commit (dc=all): 'db1233 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P64595 and previous config saved to /var/cache/conftool/dbconfig/20240611-072504-root.json [production]
07:18 <kartik@deploy1002> helmfile [codfw] START helmfile.d/services/machinetranslation: apply [production]
07:17 <kartik@deploy1002> helmfile [staging] DONE helmfile.d/services/machinetranslation: apply [production]
07:13 <kartik@deploy1002> helmfile [staging] START helmfile.d/services/machinetranslation: apply [production]
07:12 <marostegui@cumin1002> dbctl commit (dc=all): 'db1222 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P64594 and previous config saved to /var/cache/conftool/dbconfig/20240611-071253-root.json [production]
07:11 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1023.eqiad.wmnet [production]
07:09 <marostegui@cumin1002> dbctl commit (dc=all): 'db1233 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P64593 and previous config saved to /var/cache/conftool/dbconfig/20240611-070958-root.json [production]
07:05 <arnaudb@deploy1002> Finished scap: Backport for [[gerrit:1041401|Revert "dbconfig: temporary disable writes on es6"]] (duration: 11m 36s) [production]
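The scap entries between 06:54 and 07:05 are a single backport run: scap syncs the Gerrit change to the testservers, waits for the deployer to confirm ('Continuing with sync'), then completes the full sync. A sketch, using the change number from the log entry:

    # On the deployment host; 1041401 is the Gerrit change being backported.
    scap backport 1041401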
07:02 <moritzm> failover ganeti master in codfw to ganeti2020 [production]
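A Ganeti master failover is performed on the node taking over the role; a sketch, assuming ganeti2020 was already a master candidate:

    # Run on ganeti2020 to pull the cluster master role onto it.
    sudo gnt-cluster master-failover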
06:57 <arnaudb@deploy1002> arnaudb: Continuing with sync [production]
06:56 <arnaudb@deploy1002> arnaudb: Backport for [[gerrit:1041401|Revert "dbconfig: temporary disable writes on es6"]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
06:54 <marostegui@cumin1002> dbctl commit (dc=all): 'db1233 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P64592 and previous config saved to /var/cache/conftool/dbconfig/20240611-065453-root.json [production]
06:54 <arnaudb@deploy1002> Started scap: Backport for [[gerrit:1041401|Revert "dbconfig: temporary disable writes on es6"]] [production]
06:40 <arnaudb@cumin1002> dbctl commit (dc=all): 'mimic weight', diff saved to https://phabricator.wikimedia.org/P64591 and previous config saved to /var/cache/conftool/dbconfig/20240611-064041-arnaudb.json [production]
06:40 <oblivian@deploy1002> Unlocked for deployment [ALL REPOSITORIES]: incident in progress, blocking deploys --joe (duration: 15m 33s) [production]
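The lock/unlock pair (the matching 'Locking from deployment' entry is at 06:24 below) blocks all deployments while an incident is handled. A rough sketch, assuming scap's lock subcommand and an --all flag covering every repository; the flags are an assumption:

    # Hold a deployment lock across all repositories for the duration of the incident;
    # releasing it produces the 'Unlocked for deployment' entry above.
    scap lock --all "incident in progress, blocking deploys --joe"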
06:39 <marostegui@cumin1002> dbctl commit (dc=all): 'db1233 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P64590 and previous config saved to /var/cache/conftool/dbconfig/20240611-063947-root.json [production]
06:39 <arnaudb@cumin1002> dbctl commit (dc=all): 'mimic weight', diff saved to https://phabricator.wikimedia.org/P64589 and previous config saved to /var/cache/conftool/dbconfig/20240611-063903-arnaudb.json [production]
06:31 <arnaudb@cumin1002> dbctl commit (dc=all): 'Promote es1037 to es6 primary T367055', diff saved to https://phabricator.wikimedia.org/P64588 and previous config saved to /var/cache/conftool/dbconfig/20240611-063109-arnaudb.json [production]
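The es6 switchover (T367055) shows up as three dbctl commits: zero the candidate's weight, promote es1037 to section primary, then restore ('mimic') the weights. A rough sketch of the promotion step; the section/set-master subcommand names are an assumption, and in practice the switchover is wrapped in a dedicated script or cookbook:

    # Make es1037 the primary of section es6 in eqiad, then commit the change.
    # Subcommand names are assumed from the commit messages, not verified.
    dbctl --scope eqiad section es6 set-master es1037
    dbctl config commit -m "Promote es1037 to es6 primary T367055"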
06:30 <oblivian@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
06:30 <arnaudb> Starting es6 eqiad failover from es1038 to es1037 - T367055 [production]
06:24 <marostegui@cumin1002> dbctl commit (dc=all): 'db1233 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P64587 and previous config saved to /var/cache/conftool/dbconfig/20240611-062441-root.json [production]
06:24 <oblivian@deploy1002> Locking from deployment [ALL REPOSITORIES]: incident in progress, blocking deploys --joe [production]
06:23 <arnaudb@cumin1002> dbctl commit (dc=all): 'Set es1037 with weight 0 T367055', diff saved to https://phabricator.wikimedia.org/P64586 and previous config saved to /var/cache/conftool/dbconfig/20240611-062353-arnaudb.json [production]
06:23 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 6 hosts with reason: Primary switchover es6 T367055 [production]
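The downtime entry above comes from the sre.hosts.downtime cookbook, which silences monitoring for the hosts involved during the switchover window. A sketch, assuming --hours/--reason flags and using a hypothetical Cumin query for the six es6 hosts:

    # Downtime the es6 hosts for one hour around the switchover. Flags are assumptions,
    # and 'A:es6' is a hypothetical alias standing in for the six affected hosts.
    sudo cookbook sre.hosts.downtime --hours 1 \
        --reason "Primary switchover es6 T367055" 'A:es6'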