2025-03-03
ยง
|
11:10 <elukey@deploy2002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
11:09 <elukey@deploy2002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
11:08 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73937 and previous config saved to /var/cache/conftool/dbconfig/20250303-110830-root.json [production]
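For reference, a "dbctl commit" entry like the one above is normally the result of a two-step invocation on the cumin host: stage the pooling change for the instance, then commit the configuration with a message. A minimal sketch, assuming the standard dbctl CLI (exact flags may vary by version):

  dbctl instance db2210 pool -p 100   # stage db2210 at 100% of its pooled weight
  dbctl config commit -m 'db2210 (re)pooling @ 100%: Repooling after rebuild index'   # apply the change and log it

The commit step is what produces the Phabricator paste with the diff and the backup of the previous config under /var/cache/conftool/dbconfig/.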
11:05 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.restart-pybal rolling-restart of pybal on (A:lvs-low-traffic-codfw or A:lvs-secondary-codfw) and A:bullseye and A:lvs [production]
11:01 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.migrate-service-ipip for role: mediawiki::jobrunner@codfw [production]
10:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db1190 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73936 and previous config saved to /var/cache/conftool/dbconfig/20250303-105943-root.json [production]
10:58 <fceratto@cumin1002> END (PASS) - Cookbook sre.mysql.depool (exit_code=0) db2166 - catching up replication [production]
10:58 <fceratto@cumin1002> START - Cookbook sre.mysql.depool db2166 - catching up replication [production]
10:54 <elukey@deploy2002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
10:53 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73935 and previous config saved to /var/cache/conftool/dbconfig/20250303-105325-root.json [production]
10:52 <elukey@deploy2002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
10:51 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.clone (exit_code=0) of db1233.eqiad.wmnet onto db1246.eqiad.wmnet [production]
10:44 <marostegui@cumin1002> dbctl commit (dc=all): 'db1190 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73934 and previous config saved to /var/cache/conftool/dbconfig/20250303-104438-root.json [production]
10:40 <ayounsi@cumin1002> END (PASS) - Cookbook sre.network.cf (exit_code=0) [production]
10:40 <ayounsi@cumin1002> START - Cookbook sre.network.cf [production]
10:38 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73933 and previous config saved to /var/cache/conftool/dbconfig/20250303-103820-root.json [production]
10:34 <fceratto@cumin1002> START - Cookbook sre.mysql.clone of db1248.eqiad.wmnet onto db1252.eqiad.wmnet [production]
10:28 <fceratto@cumin1002> END (ERROR) - Cookbook sre.mysql.clone (exit_code=97) of db1248.eqiad.wmnet onto db1252.eqiad.wmnet [production]
10:26 <hashar> Upgraded scap to 4.139.0 # T303828 [production]
10:26 <hashar@deploy2002> Installation of scap version "4.139.0" completed for 204 hosts [production]
10:21 <hashar@deploy2002> Installing scap version "4.139.0" for 204 host(s) [production]
10:21 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73931 and previous config saved to /var/cache/conftool/dbconfig/20250303-102109-root.json [production]
10:06 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73930 and previous config saved to /var/cache/conftool/dbconfig/20250303-100603-root.json [production]
09:46 <mvernon@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on ms-be1080.eqiad.wmnet with reason: disk failed [production]
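A downtime entry like the one above is produced by the sre.hosts.downtime cookbook, which schedules monitoring downtime for the host for the stated duration and reason. A minimal sketch of the invocation on a cumin host, assuming the usual duration/reason flags (not verified against the current cookbook signature):

  sudo cookbook sre.hosts.downtime --days 7 --reason "disk failed" 'ms-be1080.eqiad.wmnet'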
09:45 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.migrate-service-ipip (exit_code=0) for role: docker_registry_ha::registry@codfw [production]
09:45 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.restart-pybal (exit_code=0) rolling-restart of pybal on (A:lvs-low-traffic-codfw or A:lvs-secondary-codfw) and A:bullseye and A:lvs [production]
09:44 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti1030.eqiad.wmnet to cluster eqiad and group A [production]
09:44 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.restart-pybal rolling-restart of pybal on (A:lvs-low-traffic-codfw or A:lvs-secondary-codfw) and A:bullseye and A:lvs [production]
09:43 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti1030.eqiad.wmnet to cluster eqiad and group A [production]
09:43 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti1027.eqiad.wmnet to cluster eqiad and group A [production]
09:43 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti1027.eqiad.wmnet to cluster eqiad and group A [production]
09:38 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.migrate-service-ipip for role: docker_registry_ha::registry@codfw [production]
09:28 <marostegui@cumin1002> START - Cookbook sre.mysql.clone of db1233.eqiad.wmnet onto db1246.eqiad.wmnet [production]
09:11 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.migrate-service-ipip (exit_code=0) for role: docker_registry_ha::registry@eqiad [production]
09:11 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.restart-pybal (exit_code=0) rolling-restart of pybal on (A:lvs-low-traffic-eqiad or A:lvs-secondary-eqiad) and A:bullseye and A:lvs [production]
09:10 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.restart-pybal rolling-restart of pybal on (A:lvs-low-traffic-eqiad or A:lvs-secondary-eqiad) and A:bullseye and A:lvs [production]
09:07 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.migrate-service-ipip for role: docker_registry_ha::registry@eqiad [production]
08:55 <marostegui@cumin1002> END (FAIL) - Cookbook sre.mysql.clone (exit_code=99) of db1233.eqiad.wmnet onto db1246.eqiad.wmnet [production]
08:55 <marostegui@cumin1002> START - Cookbook sre.mysql.clone of db1233.eqiad.wmnet onto db1246.eqiad.wmnet [production]
08:30 <fceratto@cumin1002> START - Cookbook sre.mysql.clone of db2166.codfw.wmnet onto db2167.codfw.wmnet [production]
08:29 <kartik@deploy2002> Finished scap sync-world: Backport for [[gerrit:1123802|Enable CX unified dashboard on sqwiki (T386719)]] (duration: 25m 32s) [production]
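The 08:04–08:29 entries (continuing below, since the log is reverse-chronological) trace a standard backport deployment driven by scap: the change is synced to the mwdebug test servers first, verified there, and then synced to the full fleet. A minimal sketch of the command behind this flow, assuming the Gerrit change number taken from the log:

  scap backport 1123802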
08:20 <kartik@deploy2002> sbisson, kartik: Continuing with sync [production]
08:16 <kartik@deploy2002> sbisson, kartik: Backport for [[gerrit:1123802|Enable CX unified dashboard on sqwiki (T386719)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:08 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1190.eqiad.wmnet with reason: Index rebuild [production]
08:04 <kartik@deploy2002> Started scap sync-world: Backport for [[gerrit:1123802|Enable CX unified dashboard on sqwiki (T386719)]] [production]
07:55 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db1190.eqiad.wmnet [production]
07:53 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2210.codfw.wmnet with reason: Index rebuild [production]
07:53 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db2210.codfw.wmnet [production]
07:52 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2164.codfw.wmnet with reason: Index rebuild [production]
07:52 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1172.eqiad.wmnet with reason: Index rebuild [production]