2024-04-29
09:30 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P61363 and previous config saved to /var/cache/conftool/dbconfig/20240429-093007-root.json [production]
09:25 <ladsgroup@deploy1002> ladsgroup: Continuing with sync [production]
09:25 <marostegui@cumin1002> START - Cookbook sre.hosts.reimage for host db2121.codfw.wmnet with OS bookworm [production]
09:24 <btullis@cumin1002> START - Cookbook sre.hosts.reimage for host cephosd1001.eqiad.wmnet with OS bullseye [production]
09:23 <btullis@cumin1002> START - Cookbook sre.hosts.reimage for host cephadm1001.eqiad.wmnet with OS bullseye [production]
09:22 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2178', diff saved to https://phabricator.wikimedia.org/P61362 and previous config saved to /var/cache/conftool/dbconfig/20240429-092222-marostegui.json [production]
09:22 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2218 from api', diff saved to https://phabricator.wikimedia.org/P61361 and previous config saved to /var/cache/conftool/dbconfig/20240429-092213-marostegui.json [production]
09:21 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2121 T363668', diff saved to https://phabricator.wikimedia.org/P61360 and previous config saved to /var/cache/conftool/dbconfig/20240429-092104-root.json [production]
09:20 <ladsgroup@deploy1002> ladsgroup: Backport for [[gerrit:1025178|rdbms: Protect against stale cache in LB::getMaxLag() (T361824)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
09:20 <marostegui@cumin1002> dbctl commit (dc=all): 'Promote db2218 to s7 primary T363668', diff saved to https://phabricator.wikimedia.org/P61359 and previous config saved to /var/cache/conftool/dbconfig/20240429-092029-marostegui.json [production]
09:20 <marostegui> Starting s7 codfw failover from db2121 to db2218 - T363668 [production]
09:18 <ladsgroup@deploy1002> Started scap: Backport for [[gerrit:1025178|rdbms: Protect against stale cache in LB::getMaxLag() (T361824)]] [production]
09:15 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P61358 and previous config saved to /var/cache/conftool/dbconfig/20240429-091500-root.json [production]
09:07 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2178 (T361627)', diff saved to https://phabricator.wikimedia.org/P61357 and previous config saved to /var/cache/conftool/dbconfig/20240429-090701-marostegui.json [production]
09:04 <jayme@cumin1002> END (PASS) - Cookbook sre.ganeti.changedisk (exit_code=0) for changing disk type of kubestagemaster2003.codfw.wmnet to plain [production]
09:03 <jayme@cumin1002> START - Cookbook sre.ganeti.changedisk for changing disk type of kubestagemaster2003.codfw.wmnet to plain [production]
09:03 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2178 (T361627)', diff saved to https://phabricator.wikimedia.org/P61356 and previous config saved to /var/cache/conftool/dbconfig/20240429-090329-marostegui.json [production]
09:03 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db2178.codfw.wmnet with reason: Maintenance [production]
09:03 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db2178.codfw.wmnet with reason: Maintenance [production]
09:03 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171 (T361627)', diff saved to https://phabricator.wikimedia.org/P61355 and previous config saved to /var/cache/conftool/dbconfig/20240429-090317-marostegui.json [production]
09:00 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 28 hosts with reason: Primary switchover s7 T363668 [production]
09:00 <marostegui@cumin1002> dbctl commit (dc=all): 'Set db2218 with weight 0 T363668', diff saved to https://phabricator.wikimedia.org/P61354 and previous config saved to /var/cache/conftool/dbconfig/20240429-090046-marostegui.json [production]
09:00 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on 28 hosts with reason: Primary switchover s7 T363668 [production]
08:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P61353 and previous config saved to /var/cache/conftool/dbconfig/20240429-085953-root.json [production]
08:58 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P61352 and previous config saved to /var/cache/conftool/dbconfig/20240429-085829-root.json [production]
08:54 <jayme@cumin1002> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host kubestagemaster2003.codfw.wmnet [production]
08:54 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host kubestagemaster2003.codfw.wmnet with OS bullseye [production]
08:48 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171', diff saved to https://phabricator.wikimedia.org/P61351 and previous config saved to /var/cache/conftool/dbconfig/20240429-084808-marostegui.json [production]
08:45 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1223.eqiad.wmnet with OS bookworm [production]
08:44 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P61350 and previous config saved to /var/cache/conftool/dbconfig/20240429-084447-root.json [production]
08:43 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P61349 and previous config saved to /var/cache/conftool/dbconfig/20240429-084323-root.json [production]
08:40 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kubestagemaster2003.codfw.wmnet with reason: host reimage [production]
08:37 <jayme@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on kubestagemaster2003.codfw.wmnet with reason: host reimage [production]
08:33 <taavi@deploy1002> Finished scap: Backport for [[gerrit:1025174|Fix disabling TOTP keys with scratch tokens (T363548)]] (duration: 15m 27s) [production]
08:33 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171', diff saved to https://phabricator.wikimedia.org/P61348 and previous config saved to /var/cache/conftool/dbconfig/20240429-083301-marostegui.json [production]
08:29 <Dreamy_Jazz> Restarting MediaModeration scanning script - https://wikitech.wikimedia.org/wiki/MediaModeration [production]
08:28 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P61347 and previous config saved to /var/cache/conftool/dbconfig/20240429-082817-root.json [production]
08:24 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1223.eqiad.wmnet with reason: host reimage [production]
08:22 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db1223.eqiad.wmnet with reason: host reimage [production]
08:21 <jayme@cumin1002> START - Cookbook sre.hosts.reimage for host kubestagemaster2003.codfw.wmnet with OS bullseye [production]
08:20 <marostegui@cumin1002> dbctl commit (dc=all): 'es1023 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P61346 and previous config saved to /var/cache/conftool/dbconfig/20240429-082056-root.json [production]
08:20 <taavi@deploy1002> taavi: Continuing with sync [production]
08:20 <taavi@deploy1002> taavi: Backport for [[gerrit:1025174|Fix disabling TOTP keys with scratch tokens (T363548)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:17 <taavi@deploy1002> Started scap: Backport for [[gerrit:1025174|Fix disabling TOTP keys with scratch tokens (T363548)]] [production]
08:17 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171 (T361627)', diff saved to https://phabricator.wikimedia.org/P61345 and previous config saved to /var/cache/conftool/dbconfig/20240429-081754-marostegui.json [production]
08:16 <jayme@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:15 <jayme@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:15 <jayme@cumin1002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) kubestagemaster2003.codfw.wmnet on all recursors [production]
08:15 <jayme@cumin1002> START - Cookbook sre.dns.wipe-cache kubestagemaster2003.codfw.wmnet on all recursors [production]
08:15 <jayme@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]