2025-02-04
12:15 <root@cumin1002> START - Cookbook sre.mysql.upgrade for db2222.codfw.wmnet [production]
12:14 <vgutierrez> upgrading pybal on secondary load balancers - T373027 [production]
12:14 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2222 for index rebuild', diff saved to https://phabricator.wikimedia.org/P73163 and previous config saved to /var/cache/conftool/dbconfig/20250204-121450-marostegui.json [production]
12:14 <marostegui@cumin1002> dbctl commit (dc=all): 'es2040 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P73162 and previous config saved to /var/cache/conftool/dbconfig/20250204-121400-root.json [production]
12:13 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1242 (T384592)', diff saved to https://phabricator.wikimedia.org/P73161 and previous config saved to /var/cache/conftool/dbconfig/20250204-121331-marostegui.json [production]
12:11 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.restart-pybal (exit_code=0) rolling-restart of pybal on P{lvs500[4-5]*} and A:lvs (T373027) [production]
12:10 <marostegui@cumin1002> dbctl commit (dc=all): 'db2220 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73160 and previous config saved to /var/cache/conftool/dbconfig/20250204-121056-root.json [production]
12:10 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.restart-pybal rolling-restart of pybal on P{lvs500[4-5]*} and A:lvs (T373027) [production]
12:07 <elukey> manually executed docker-system-prune-dangling.service on build2001 [production]
12:04 <elukey> manually dropped 2.5.1rocm6.2-1-20250202 on build2001 - T385531 [production]
12:03 <vgutierrez> upgrading pybal on eqsin - T373027 [production]
11:59 <elukey@deploy2002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
11:59 <elukey@deploy2002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
11:58 <marostegui@cumin1002> dbctl commit (dc=all): 'es2040 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P73158 and previous config saved to /var/cache/conftool/dbconfig/20250204-115855-root.json [production]
11:54 <vgutierrez> uploaded pybal 1.15.15 to apt.wm.o (bullseye-wikimedia) T373027 [production]
11:54 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1227.eqiad.wmnet with reason: Index rebuild [production]
11:54 <root@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db1227.eqiad.wmnet [production]
11:53 <marostegui@cumin1002> dbctl commit (dc=all): 'db1236 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73157 and previous config saved to /var/cache/conftool/dbconfig/20250204-115323-root.json [production]
11:48 <root@cumin1002> START - Cookbook sre.mysql.upgrade for db1227.eqiad.wmnet [production]
11:48 <jynus> deploying new backup grants for matomo and analytics_meta T383902 [production]
11:48 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1227 for index rebuild', diff saved to https://phabricator.wikimedia.org/P73156 and previous config saved to /var/cache/conftool/dbconfig/20250204-114808-marostegui.json [production]
11:43 <marostegui@cumin1002> dbctl commit (dc=all): 'es2040 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P73155 and previous config saved to /var/cache/conftool/dbconfig/20250204-114350-root.json [production]
11:41 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-parsoid: apply [production]
11:39 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/mw-parsoid: apply [production]
11:38 <marostegui@cumin1002> dbctl commit (dc=all): 'db1236 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73154 and previous config saved to /var/cache/conftool/dbconfig/20250204-113818-root.json [production]
11:34 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-parsoid: apply [production]
11:33 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-parsoid: apply [production]
11:33 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-jobrunner: apply [production]
11:31 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-jobrunner: apply [production]
11:28 <marostegui@cumin1002> dbctl commit (dc=all): 'es2040 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P73153 and previous config saved to /var/cache/conftool/dbconfig/20250204-112844-root.json [production]
11:28 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-jobrunner: apply [production]
11:26 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/mw-jobrunner: apply [production]
11:23 <hnowlan@deploy1003> helmfile [codfw] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
11:23 <marostegui@cumin1002> dbctl commit (dc=all): 'db1236 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73152 and previous config saved to /var/cache/conftool/dbconfig/20250204-112313-root.json [production]
11:22 <hnowlan@deploy1003> helmfile [codfw] START helmfile.d/services/changeprop-jobqueue: apply [production]
11:22 <hnowlan@deploy1003> helmfile [eqiad] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
11:20 <hnowlan@deploy1003> helmfile [eqiad] START helmfile.d/services/changeprop-jobqueue: apply [production]
11:20 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/shellbox: apply [production]
11:18 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/shellbox: apply [production]
11:17 <hnowlan@deploy1003> helmfile [staging] DONE helmfile.d/services/changeprop-jobqueue: apply [production]
11:17 <hnowlan@deploy1003> helmfile [staging] START helmfile.d/services/changeprop-jobqueue: apply [production]
11:13 <marostegui@cumin1002> dbctl commit (dc=all): 'es2040 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P73151 and previous config saved to /var/cache/conftool/dbconfig/20250204-111337-root.json [production]
11:08 <marostegui@cumin1002> dbctl commit (dc=all): 'db1197 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73150 and previous config saved to /var/cache/conftool/dbconfig/20250204-110830-root.json [production]
11:08 <marostegui@cumin1002> dbctl commit (dc=all): 'db1236 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73149 and previous config saved to /var/cache/conftool/dbconfig/20250204-110808-root.json [production]
11:03 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1229.eqiad.wmnet with reason: Index rebuild [production]
11:01 <root@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db1229.eqiad.wmnet [production]
10:59 <root@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for es2040.codfw.wmnet [production]
10:56 <root@cumin1002> START - Cookbook sre.mysql.upgrade for db1229.eqiad.wmnet [production]
10:55 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1229 for index rebuild', diff saved to https://phabricator.wikimedia.org/P73148 and previous config saved to /var/cache/conftool/dbconfig/20250204-105546-marostegui.json [production]
10:54 <root@cumin1002> START - Cookbook sre.mysql.upgrade for es2040.codfw.wmnet [production]