2025-02-26
12:07 <marostegui> Starting es6 eqiad failover from es1038 to es1037 - T387273 [production]
12:07 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1044.eqiad.wmnet [production]
12:06 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 6 hosts with reason: Primary switchover es6 T387273 [production]
12:06 <marostegui@cumin1002> dbctl commit (dc=all): 'Set es1037 with weight 0 T387273', diff saved to https://phabricator.wikimedia.org/P73676 and previous config saved to /var/cache/conftool/dbconfig/20250226-120649-root.json [production]
12:06 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/shellbox-media: apply [production]
12:06 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/shellbox-media: apply [production]
12:02 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/shellbox-media: apply [production]
12:02 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/shellbox-media: apply [production]
12:01 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1155.eqiad.wmnet with reason: Index rebuild [production]
11:56 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1044.eqiad.wmnet [production]
11:54 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1044.eqiad.wmnet [production]
11:54 <jmm@cumin2002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on ganeti1024.eqiad.wmnet with reason: remove from cluster for reimage [production]
11:51 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/shellbox-timeline: apply [production]
11:49 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/shellbox-timeline: apply [production]
11:49 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1206.eqiad.wmnet with reason: Index rebuild [production]
11:49 <vgutierrez> uploaded gobgpd 3.33 to apt.wm.o (bookworm-wikimedia) - T386687 [production]
11:48 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db1206.eqiad.wmnet [production]
11:46 <marostegui@cumin1002> dbctl commit (dc=all): 'db2218 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73675 and previous config saved to /var/cache/conftool/dbconfig/20250226-114640-root.json [production]
11:46 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2170.codfw.wmnet with reason: Index rebuild [production]
11:45 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db2170.codfw.wmnet [production]
11:45 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1168.eqiad.wmnet with reason: Index rebuild [production]
11:45 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/shellbox-timeline: apply [production]
11:45 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2180.codfw.wmnet with reason: Index rebuild [production]
11:45 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db1168.eqiad.wmnet [production]
11:45 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/shellbox-timeline: apply [production]
11:43 <marostegui@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db2180.codfw.wmnet [production]
11:42 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1024.eqiad.wmnet [production]
11:39 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db1206.eqiad.wmnet [production]
11:39 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db2170.codfw.wmnet [production]
11:39 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1206 db2170 T385561', diff saved to https://phabricator.wikimedia.org/P73674 and previous config saved to /var/cache/conftool/dbconfig/20250226-113935-root.json [production]
11:37 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db1168.eqiad.wmnet [production]
11:37 <marostegui@cumin1002> START - Cookbook sre.mysql.upgrade for db2180.codfw.wmnet [production]
11:36 <marostegui@cumin1002> dbctl commit (dc=all): 'Pool db2169 with 100%', diff saved to https://phabricator.wikimedia.org/P73673 and previous config saved to /var/cache/conftool/dbconfig/20250226-113613-marostegui.json [production]
11:34 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1168 db2180 T386242', diff saved to https://phabricator.wikimedia.org/P73671 and previous config saved to /var/cache/conftool/dbconfig/20250226-113453-root.json [production]
11:32 <hnowlan@deploy1003> helmfile [staging] DONE helmfile.d/services/mobileapps: sync [production]
11:32 <hnowlan@deploy1003> helmfile [staging] START helmfile.d/services/mobileapps: sync [production]
11:31 <marostegui@cumin1002> dbctl commit (dc=all): 'db2218 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73670 and previous config saved to /var/cache/conftool/dbconfig/20250226-113134-root.json [production]
11:29 <marostegui@cumin1002> dbctl commit (dc=all): 'db2169 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73669 and previous config saved to /var/cache/conftool/dbconfig/20250226-112915-root.json [production]
11:21 <vgutierrez> repooling lvs7001 running liberica - T384477 [production]
11:16 <marostegui@cumin1002> dbctl commit (dc=all): 'db2218 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73668 and previous config saved to /var/cache/conftool/dbconfig/20250226-111629-root.json [production]
11:14 <marostegui@cumin1002> dbctl commit (dc=all): 'db1180 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73667 and previous config saved to /var/cache/conftool/dbconfig/20250226-111421-root.json [production]
11:14 <marostegui@cumin1002> dbctl commit (dc=all): 'db2169 (re)pooling @ 50%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73666 and previous config saved to /var/cache/conftool/dbconfig/20250226-111410-root.json [production]
11:04 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 27 hosts with reason: Schema change [production]
11:04 <vgutierrez@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host lvs7001.magru.wmnet with OS bookworm [production]
11:03 <marostegui> Drop schema change on s7 codfw master with replication dbmaint T385645 [production]
11:02 <marostegui@cumin1002> dbctl commit (dc=all): 'db1195 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73665 and previous config saved to /var/cache/conftool/dbconfig/20250226-110209-root.json [production]
11:01 <marostegui@cumin1002> dbctl commit (dc=all): 'db2218 (re)pooling @ 25%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73664 and previous config saved to /var/cache/conftool/dbconfig/20250226-110124-root.json [production]
10:59 <marostegui> Drop schema change on s3 codfw master with replication dbmaint T385645 [production]
10:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db2153 (re)pooling @ 100%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73663 and previous config saved to /var/cache/conftool/dbconfig/20250226-105937-root.json [production]
10:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db1180 (re)pooling @ 75%: Repooling after rebuild index', diff saved to https://phabricator.wikimedia.org/P73662 and previous config saved to /var/cache/conftool/dbconfig/20250226-105916-root.json [production]