2025-03-03
11:32 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db2206.codfw.wmnet [production]
11:32 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2206 db1249', diff saved to https://phabricator.wikimedia.org/P73941 and previous config saved to /var/cache/conftool/dbconfig/20250303-113225-root.json [production]
11:29 <marostegui@cumin1002> dbctl commit (dc=all): 'db1190 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73940 and previous config saved to /var/cache/conftool/dbconfig/20250303-112954-root.json [production]
11:25 <marostegui@cumin1002> dbctl commit (dc=all): 'db1233 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P73939 and previous config saved to /var/cache/conftool/dbconfig/20250303-112548-root.json [production]
11:18 <fceratto@cumin1002> END (FAIL) - Cookbook sre.mysql.clone (exit_code=99) of db2166.codfw.wmnet onto db2167.codfw.wmnet [production]
11:17 <elukey@deploy2002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'sync'. [production]
11:17 <elukey@deploy2002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'sync'. [production]
11:14 <marostegui@cumin1002> dbctl commit (dc=all): 'db1190 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73938 and previous config saved to /var/cache/conftool/dbconfig/20250303-111448-root.json [production]
11:12 <elukey@deploy2002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'sync'. [production]
11:11 <elukey@deploy2002> helmfile [ml-serve-codfw] START helmfile.d/admin 'sync'. [production]
11:11 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.migrate-service-ipip (exit_code=0) for role: mediawiki::jobrunner@codfw [production]
11:11 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.restart-pybal (exit_code=0) rolling-restart of pybal on (A:lvs-low-traffic-codfw or A:lvs-secondary-codfw) and A:bullseye and A:lvs [production]
11:10 <elukey@deploy2002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
11:09 <elukey@deploy2002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
11:08 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73937 and previous config saved to /var/cache/conftool/dbconfig/20250303-110830-root.json [production]
11:05 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.restart-pybal rolling-restart of pybal on (A:lvs-low-traffic-codfw or A:lvs-secondary-codfw) and A:bullseye and A:lvs [production]
11:01 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.migrate-service-ipip for role: mediawiki::jobrunner@codfw [production]
10:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db1190 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73936 and previous config saved to /var/cache/conftool/dbconfig/20250303-105943-root.json [production]
10:58 <fceratto@cumin1002> END (PASS) - Cookbook sre.mysql.depool (exit_code=0) db2166 - catching up replication [production]
10:58 <fceratto@cumin1002> START - Cookbook sre.mysql.depool db2166 - catching up replication [production]
10:54 <elukey@deploy2002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'sync'. [production]
10:53 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73935 and previous config saved to /var/cache/conftool/dbconfig/20250303-105325-root.json [production]
10:52 <elukey@deploy2002> helmfile [ml-staging-codfw] START helmfile.d/admin 'sync'. [production]
10:51 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.clone (exit_code=0) of db1233.eqiad.wmnet onto db1246.eqiad.wmnet [production]
10:44 <marostegui@cumin1002> dbctl commit (dc=all): 'db1190 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73934 and previous config saved to /var/cache/conftool/dbconfig/20250303-104438-root.json [production]
10:40 <ayounsi@cumin1002> END (PASS) - Cookbook sre.network.cf (exit_code=0) [production]
10:40 <ayounsi@cumin1002> START - Cookbook sre.network.cf [production]
10:38 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73933 and previous config saved to /var/cache/conftool/dbconfig/20250303-103820-root.json [production]
10:34 <fceratto@cumin1002> START - Cookbook sre.mysql.clone of db1248.eqiad.wmnet onto db1252.eqiad.wmnet [production]
10:28 <fceratto@cumin1002> END (ERROR) - Cookbook sre.mysql.clone (exit_code=97) of db1248.eqiad.wmnet onto db1252.eqiad.wmnet [production]
10:26 <hashar> Upgraded scap to 4.139.0 # T303828 [production]
10:26 <hashar@deploy2002> Installation of scap version "4.139.0" completed for 204 hosts [production]
10:21 <hashar@deploy2002> Installing scap version "4.139.0" for 204 host(s) [production]
10:21 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73931 and previous config saved to /var/cache/conftool/dbconfig/20250303-102109-root.json [production]
10:06 <marostegui@cumin1002> dbctl commit (dc=all): 'db2210 (re)pooling @ 10%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73930 and previous config saved to /var/cache/conftool/dbconfig/20250303-100603-root.json [production]
09:46 <mvernon@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on ms-be1080.eqiad.wmnet with reason: disk failed [production]
09:45 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.migrate-service-ipip (exit_code=0) for role: docker_registry_ha::registry@codfw [production]
09:45 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.restart-pybal (exit_code=0) rolling-restart of pybal on (A:lvs-low-traffic-codfw or A:lvs-secondary-codfw) and A:bullseye and A:lvs [production]
09:44 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti1030.eqiad.wmnet to cluster eqiad and group A [production]
09:44 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.restart-pybal rolling-restart of pybal on (A:lvs-low-traffic-codfw or A:lvs-secondary-codfw) and A:bullseye and A:lvs [production]
09:43 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti1030.eqiad.wmnet to cluster eqiad and group A [production]
09:43 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti1027.eqiad.wmnet to cluster eqiad and group A [production]
09:43 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti1027.eqiad.wmnet to cluster eqiad and group A [production]
09:38 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.migrate-service-ipip for role: docker_registry_ha::registry@codfw [production]
09:28 <marostegui@cumin1002> START - Cookbook sre.mysql.clone of db1233.eqiad.wmnet onto db1246.eqiad.wmnet [production]
09:11 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.migrate-service-ipip (exit_code=0) for role: docker_registry_ha::registry@eqiad [production]
09:11 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.loadbalancer.restart-pybal (exit_code=0) rolling-restart of pybal on (A:lvs-low-traffic-eqiad or A:lvs-secondary-eqiad) and A:bullseye and A:lvs [production]
09:10 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.restart-pybal rolling-restart of pybal on (A:lvs-low-traffic-eqiad or A:lvs-secondary-eqiad) and A:bullseye and A:lvs [production]
09:07 <vgutierrez@cumin1002> START - Cookbook sre.loadbalancer.migrate-service-ipip for role: docker_registry_ha::registry@eqiad [production]
08:55 <marostegui@cumin1002> END (FAIL) - Cookbook sre.mysql.clone (exit_code=99) of db1233.eqiad.wmnet onto db1246.eqiad.wmnet [production]