2024-06-13
09:12 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2125 (re)pooling @ 100%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64810 and previous config saved to /var/cache/conftool/dbconfig/20240613-091200-arnaudb.json [production]
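
The db2125 '(re)pooling @ N%' entries in this log follow the usual dbctl staged-repool pattern: raise the replica's pooled percentage in steps and commit after each one. A minimal sketch of a single step, assuming the standard dbctl CLI on a cumin host; host, percentage and commit message are illustrative, not taken from a specific entry:

  # raise db2125's pooled weight to 75% of its configured value
  dbctl instance db2125 pool -p 75
  # commit to etcd for all datacenters; dbctl prints the diff and saves the
  # previous configuration under /var/cache/conftool/dbconfig/, as logged above
  dbctl config commit -m 'db2125 (re)pooling @ 75%: post maintenance repool'
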
09:07 <jiji@cumin1002> END (PASS) - Cookbook sre.k8s.reboot-nodes (exit_code=0) rolling reboot on A:wikikube-worker-eqiad [production]
09:07 <brouberol@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s_services/services/datahub-next: apply on staging [production]
09:01 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1201', diff saved to https://phabricator.wikimedia.org/P64809 and previous config saved to /var/cache/conftool/dbconfig/20240613-090122-marostegui.json [production]
08:59 <kamila@cumin1002> START - Cookbook sre.hosts.reimage for host wikikube-ctrl1003.eqiad.wmnet with OS bullseye [production]
08:56 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2125 (re)pooling @ 75%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64808 and previous config saved to /var/cache/conftool/dbconfig/20240613-085654-arnaudb.json [production]
08:46 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1201 (T367261)', diff saved to https://phabricator.wikimedia.org/P64807 and previous config saved to /var/cache/conftool/dbconfig/20240613-084615-marostegui.json [production]
08:43 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1201 (T367261)', diff saved to https://phabricator.wikimedia.org/P64806 and previous config saved to /var/cache/conftool/dbconfig/20240613-084310-marostegui.json [production]
08:43 <jelto@cumin1002> END (PASS) - Cookbook sre.gitlab.upgrade (exit_code=0) on GitLab host gitlab1004.wikimedia.org with reason: Upgrade GitLab Replica to new version [production]
08:43 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1201.eqiad.wmnet with reason: Maintenance [production]
08:42 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db1201.eqiad.wmnet with reason: Maintenance [production]
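
The db1201 entries above show the routine database maintenance cycle: set host downtime with the sre.hosts.downtime cookbook, depool the replica with dbctl, run the maintenance, then repool. A hedged sketch of the commands involved, assuming a cumin host; the cookbook's duration/reason flags are approximated and should be checked against its --help:

  # 12h downtime for the host (flag names approximated)
  sudo cookbook sre.hosts.downtime --hours 12 -r "Maintenance" 'db1201.eqiad.wmnet'
  # take the replica out of the MediaWiki pools and commit
  dbctl instance db1201 depool
  dbctl config commit -m 'Depooling db1201 (T367261)'
  # ...run the maintenance..., then repool and commit again
  dbctl instance db1201 pool -p 100
  dbctl config commit -m 'Repooling after maintenance db1201 (T367261)'
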
08:42 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1187 (T367261)', diff saved to https://phabricator.wikimedia.org/P64805 and previous config saved to /var/cache/conftool/dbconfig/20240613-084248-marostegui.json [production]
08:41 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2125 (re)pooling @ 50%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64804 and previous config saved to /var/cache/conftool/dbconfig/20240613-084149-arnaudb.json [production]
08:37 <jelto@cumin1002> START - Cookbook sre.gitlab.upgrade on GitLab host gitlab1004.wikimedia.org with reason: Upgrade GitLab Replica to new version [production]
08:36 <jelto@cumin1002> END (PASS) - Cookbook sre.gitlab.upgrade (exit_code=0) on GitLab host gitlab1003.wikimedia.org with reason: Upgrade GitLab Replica to new version [production]
08:30 <jelto@cumin1002> START - Cookbook sre.gitlab.upgrade on GitLab host gitlab1003.wikimedia.org with reason: Upgrade GitLab Replica to new version [production]
08:29 <kart_> Updated MinT to 2024-06-12-111204-production (T363563) [production]
08:27 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1187', diff saved to https://phabricator.wikimedia.org/P64803 and previous config saved to /var/cache/conftool/dbconfig/20240613-082741-marostegui.json [production]
08:26 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2125 (re)pooling @ 25%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64802 and previous config saved to /var/cache/conftool/dbconfig/20240613-082643-arnaudb.json [production]
08:25 <kartik@deploy1002> helmfile [eqiad] DONE helmfile.d/services/machinetranslation: apply [production]
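
The machinetranslation entries around here trace the standard staged helmfile rollout from the deployment host: staging first, then codfw, then eqiad. A minimal sketch, assuming the usual /srv/deployment-charts layout documented on wikitech:

  # run from the service's helmfile directory on the deployment host
  cd /srv/deployment-charts/helmfile.d/services/machinetranslation
  # apply to each environment in turn; helmfile shows a diff before upgrading
  helmfile -e staging -i apply
  helmfile -e codfw -i apply
  helmfile -e eqiad -i apply
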
08:15 <kartik@deploy1002> helmfile [eqiad] START helmfile.d/services/machinetranslation: apply [production]
08:13 <kartik@deploy1002> helmfile [codfw] DONE helmfile.d/services/machinetranslation: apply [production]
08:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1187', diff saved to https://phabricator.wikimedia.org/P64801 and previous config saved to /var/cache/conftool/dbconfig/20240613-081234-marostegui.json [production]
08:11 <arnaudb@cumin1002> dbctl commit (dc=all): 'db2125 (re)pooling @ 10%: post maintenance repool', diff saved to https://phabricator.wikimedia.org/P64800 and previous config saved to /var/cache/conftool/dbconfig/20240613-081138-arnaudb.json [production]
08:11 <jiji@cumin1002> START - Cookbook sre.k8s.reboot-nodes rolling reboot on A:wikikube-worker-eqiad [production]
08:08 <arnaudb@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on db2125.codfw.wmnet with reason: index issue [production]
08:08 <arnaudb@cumin1002> START - Cookbook sre.hosts.downtime for 3:00:00 on db2125.codfw.wmnet with reason: index issue [production]
08:06 <arnaudb@cumin1002> dbctl commit (dc=all): 'index error depool db2125', diff saved to https://phabricator.wikimedia.org/P64799 and previous config saved to /var/cache/conftool/dbconfig/20240613-080624-arnaudb.json [production]
08:06 <kartik@deploy1002> helmfile [codfw] START helmfile.d/services/machinetranslation: apply [production]
07:59 <kartik@deploy1002> helmfile [staging] DONE helmfile.d/services/machinetranslation: apply [production]
07:57 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1187 (T367261)', diff saved to https://phabricator.wikimedia.org/P64798 and previous config saved to /var/cache/conftool/dbconfig/20240613-075727-marostegui.json [production]
07:55 <marostegui@cumin1002> dbctl commit (dc=all): 'db1230 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P64797 and previous config saved to /var/cache/conftool/dbconfig/20240613-075500-root.json [production]
07:54 <kartik@deploy1002> helmfile [staging] START helmfile.d/services/machinetranslation: apply [production]
07:54 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1187 (T367261)', diff saved to https://phabricator.wikimedia.org/P64796 and previous config saved to /var/cache/conftool/dbconfig/20240613-075420-marostegui.json [production]
07:54 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1187.eqiad.wmnet with reason: Maintenance [production]
07:54 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db1187.eqiad.wmnet with reason: Maintenance [production]
07:53 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1180 (T367261)', diff saved to https://phabricator.wikimedia.org/P64795 and previous config saved to /var/cache/conftool/dbconfig/20240613-075358-marostegui.json [production]
07:39 <marostegui@cumin1002> dbctl commit (dc=all): 'db1230 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P64794 and previous config saved to /var/cache/conftool/dbconfig/20240613-073955-root.json [production]
07:38 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1180', diff saved to https://phabricator.wikimedia.org/P64793 and previous config saved to /var/cache/conftool/dbconfig/20240613-073851-marostegui.json [production]
07:28 <ryankemper@cumin2002> END (FAIL) - Cookbook sre.hadoop.reboot-workers (exit_code=99) for Hadoop analytics cluster [production]
07:24 <marostegui@cumin1002> dbctl commit (dc=all): 'db1230 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P64792 and previous config saved to /var/cache/conftool/dbconfig/20240613-072450-root.json [production]
07:23 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1180', diff saved to https://phabricator.wikimedia.org/P64791 and previous config saved to /var/cache/conftool/dbconfig/20240613-072344-marostegui.json [production]
07:21 <jiji@cumin1002> END (PASS) - Cookbook sre.k8s.reboot-nodes (exit_code=0) rolling reboot on A:wikikube-worker-eqiad [production]
07:09 <marostegui@cumin1002> dbctl commit (dc=all): 'db1230 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P64790 and previous config saved to /var/cache/conftool/dbconfig/20240613-070944-root.json [production]
07:08 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1180 (T367261)', diff saved to https://phabricator.wikimedia.org/P64789 and previous config saved to /var/cache/conftool/dbconfig/20240613-070837-marostegui.json [production]
07:05 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1180 (T367261)', diff saved to https://phabricator.wikimedia.org/P64788 and previous config saved to /var/cache/conftool/dbconfig/20240613-070531-marostegui.json [production]
07:05 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1180.eqiad.wmnet with reason: Maintenance [production]
07:05 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 12:00:00 on db1180.eqiad.wmnet with reason: Maintenance [production]
07:05 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1173 (T367261)', diff saved to https://phabricator.wikimedia.org/P64787 and previous config saved to /var/cache/conftool/dbconfig/20240613-070509-marostegui.json [production]
06:54 <marostegui@cumin1002> dbctl commit (dc=all): 'db1230 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P64786 and previous config saved to /var/cache/conftool/dbconfig/20240613-065439-root.json [production]