2024-04-29
09:45 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P61365 and previous config saved to /var/cache/conftool/dbconfig/20240429-094512-root.json [production]
09:43 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cephosd1001.eqiad.wmnet with reason: host reimage [production]
09:43 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db2121.codfw.wmnet with reason: host reimage [production]
09:42 <akosiaris@deploy1002> helmfile [codfw] DONE helmfile.d/services/wikifeeds: apply [production]
09:42 <akosiaris@deploy1002> helmfile [codfw] START helmfile.d/services/wikifeeds: apply [production]
09:41 <btullis@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on cephosd1001.eqiad.wmnet with reason: host reimage [production]
09:39 <akosiaris@deploy1002> helmfile [staging] DONE helmfile.d/services/wikifeeds: apply [production]
09:39 <akosiaris@deploy1002> helmfile [staging] START helmfile.d/services/wikifeeds: apply [production]
09:38 <ladsgroup@deploy1002> Finished scap: Backport for [[gerrit:1025178|rdbms: Protect against stale cache in LB::getMaxLag() (T361824)]] (duration: 20m 15s) [production]
09:37 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2178', diff saved to https://phabricator.wikimedia.org/P61364 and previous config saved to /var/cache/conftool/dbconfig/20240429-093729-marostegui.json [production]
09:36 <btullis@deploy1002> helmfile [eqiad] DONE helmfile.d/services/image-suggestion: apply [production]
09:36 <btullis@deploy1002> helmfile [eqiad] START helmfile.d/services/image-suggestion: apply [production]
09:35 <btullis@deploy1002> helmfile [codfw] DONE helmfile.d/services/image-suggestion: apply [production]
09:35 <btullis@deploy1002> helmfile [codfw] START helmfile.d/services/image-suggestion: apply [production]
09:31 <btullis@deploy1002> helmfile [staging] DONE helmfile.d/services/image-suggestion: apply [production]
09:31 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/image-suggestion: apply [production]
09:30 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P61363 and previous config saved to /var/cache/conftool/dbconfig/20240429-093007-root.json [production]
09:25 <ladsgroup@deploy1002> ladsgroup: Continuing with sync [production]
09:25 <marostegui@cumin1002> START - Cookbook sre.hosts.reimage for host db2121.codfw.wmnet with OS bookworm [production]
09:24 <btullis@cumin1002> START - Cookbook sre.hosts.reimage for host cephosd1001.eqiad.wmnet with OS bullseye [production]
09:23 <btullis@cumin1002> START - Cookbook sre.hosts.reimage for host cephadm1001.eqiad.wmnet with OS bullseye [production]
09:22 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2178', diff saved to https://phabricator.wikimedia.org/P61362 and previous config saved to /var/cache/conftool/dbconfig/20240429-092222-marostegui.json [production]
09:22 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2218 from api', diff saved to https://phabricator.wikimedia.org/P61361 and previous config saved to /var/cache/conftool/dbconfig/20240429-092213-marostegui.json [production]
09:21 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2121 T363668', diff saved to https://phabricator.wikimedia.org/P61360 and previous config saved to /var/cache/conftool/dbconfig/20240429-092104-root.json [production]
09:20 <ladsgroup@deploy1002> ladsgroup: Backport for [[gerrit:1025178|rdbms: Protect against stale cache in LB::getMaxLag() (T361824)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
09:20 <marostegui@cumin1002> dbctl commit (dc=all): 'Promote db2218 to s7 primary T363668', diff saved to https://phabricator.wikimedia.org/P61359 and previous config saved to /var/cache/conftool/dbconfig/20240429-092029-marostegui.json [production]
09:20 <marostegui> Starting s7 codfw failover from db2121 to db2218 - T363668 [production]
09:18 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:1025178|rdbms: Protect against stale cache in LB::getMaxLag() (T361824)]] [production]
09:15 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P61358 and previous config saved to /var/cache/conftool/dbconfig/20240429-091500-root.json [production]
09:07 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2178 (T361627)', diff saved to https://phabricator.wikimedia.org/P61357 and previous config saved to /var/cache/conftool/dbconfig/20240429-090701-marostegui.json [production]
09:04 <jayme@cumin1002> END (PASS) - Cookbook sre.ganeti.changedisk (exit_code=0) for changing disk type of kubestagemaster2003.codfw.wmnet to plain [production]
09:03 <jayme@cumin1002> START - Cookbook sre.ganeti.changedisk for changing disk type of kubestagemaster2003.codfw.wmnet to plain [production]
09:03 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2178 (T361627)', diff saved to https://phabricator.wikimedia.org/P61356 and previous config saved to /var/cache/conftool/dbconfig/20240429-090329-marostegui.json [production]
09:03 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db2178.codfw.wmnet with reason: Maintenance [production]
09:03 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db2178.codfw.wmnet with reason: Maintenance [production]
09:03 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171 (T361627)', diff saved to https://phabricator.wikimedia.org/P61355 and previous config saved to /var/cache/conftool/dbconfig/20240429-090317-marostegui.json [production]
09:00 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 28 hosts with reason: Primary switchover s7 T363668 [production]
09:00 <marostegui@cumin1002> dbctl commit (dc=all): 'Set db2218 with weight 0 T363668', diff saved to https://phabricator.wikimedia.org/P61354 and previous config saved to /var/cache/conftool/dbconfig/20240429-090046-marostegui.json [production]
09:00 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on 28 hosts with reason: Primary switchover s7 T363668 [production]
08:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P61353 and previous config saved to /var/cache/conftool/dbconfig/20240429-085953-root.json [production]
08:58 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P61352 and previous config saved to /var/cache/conftool/dbconfig/20240429-085829-root.json [production]
08:54 <jayme@cumin1002> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host kubestagemaster2003.codfw.wmnet [production]
08:54 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host kubestagemaster2003.codfw.wmnet with OS bullseye [production]
08:48 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171', diff saved to https://phabricator.wikimedia.org/P61351 and previous config saved to /var/cache/conftool/dbconfig/20240429-084808-marostegui.json [production]
08:45 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1223.eqiad.wmnet with OS bookworm [production]
08:44 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P61350 and previous config saved to /var/cache/conftool/dbconfig/20240429-084447-root.json [production]
08:43 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P61349 and previous config saved to /var/cache/conftool/dbconfig/20240429-084323-root.json [production]
08:40 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kubestagemaster2003.codfw.wmnet with reason: host reimage [production]
08:37 <jayme@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on kubestagemaster2003.codfw.wmnet with reason: host reimage [production]
08:33 <taavi@deploy1002> Finished scap: Backport for [[gerrit:1025174|Fix disabling TOTP keys with scratch tokens (T363548)]] (duration: 15m 27s) [production]