2024-04-29
09:00 <marostegui@cumin1002> dbctl commit (dc=all): 'Set db2218 with weight 0 T363668', diff saved to https://phabricator.wikimedia.org/P61354 and previous config saved to /var/cache/conftool/dbconfig/20240429-090046-marostegui.json [production]
09:00 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on 28 hosts with reason: Primary switchover s7 T363668 [production]
08:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 5%: Repooling', diff saved to https://phabricator.wikimedia.org/P61353 and previous config saved to /var/cache/conftool/dbconfig/20240429-085953-root.json [production]
08:58 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P61352 and previous config saved to /var/cache/conftool/dbconfig/20240429-085829-root.json [production]
08:54 <jayme@cumin1002> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host kubestagemaster2003.codfw.wmnet [production]
08:54 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host kubestagemaster2003.codfw.wmnet with OS bullseye [production]
08:48 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171', diff saved to https://phabricator.wikimedia.org/P61351 and previous config saved to /var/cache/conftool/dbconfig/20240429-084808-marostegui.json [production]
08:45 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db1223.eqiad.wmnet with OS bookworm [production]
08:44 <marostegui@cumin1002> dbctl commit (dc=all): 'db1223 (re)pooling @ 1%: Repooling', diff saved to https://phabricator.wikimedia.org/P61350 and previous config saved to /var/cache/conftool/dbconfig/20240429-084447-root.json [production]
08:43 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P61349 and previous config saved to /var/cache/conftool/dbconfig/20240429-084323-root.json [production]
08:40 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on kubestagemaster2003.codfw.wmnet with reason: host reimage [production]
08:37 <jayme@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on kubestagemaster2003.codfw.wmnet with reason: host reimage [production]
08:33 <taavi@deploy1002> Finished scap: Backport for [[gerrit:1025174|Fix disabling TOTP keys with scratch tokens (T363548)]] (duration: 15m 27s) [production]
08:33 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171', diff saved to https://phabricator.wikimedia.org/P61348 and previous config saved to /var/cache/conftool/dbconfig/20240429-083301-marostegui.json [production]
08:29 <Dreamy_Jazz> Restarting MediaModeration scanning script - https://wikitech.wikimedia.org/wiki/MediaModeration [production]
08:28 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P61347 and previous config saved to /var/cache/conftool/dbconfig/20240429-082817-root.json [production]
08:24 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on db1223.eqiad.wmnet with reason: host reimage [production]
08:22 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db1223.eqiad.wmnet with reason: host reimage [production]
08:21 <jayme@cumin1002> START - Cookbook sre.hosts.reimage for host kubestagemaster2003.codfw.wmnet with OS bullseye [production]
08:20 <marostegui@cumin1002> dbctl commit (dc=all): 'es1023 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P61346 and previous config saved to /var/cache/conftool/dbconfig/20240429-082056-root.json [production]
08:20 <taavi@deploy1002> taavi: Continuing with sync [production]
08:20 <taavi@deploy1002> taavi: Backport for [[gerrit:1025174|Fix disabling TOTP keys with scratch tokens (T363548)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:17 <taavi@deploy1002> Started scap: Backport for [[gerrit:1025174|Fix disabling TOTP keys with scratch tokens (T363548)]] [production]
08:17 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171 (T361627)', diff saved to https://phabricator.wikimedia.org/P61345 and previous config saved to /var/cache/conftool/dbconfig/20240429-081754-marostegui.json [production]
08:16 <jayme@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:15 <jayme@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:15 <jayme@cumin1002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) kubestagemaster2003.codfw.wmnet on all recursors [production]
08:15 <jayme@cumin1002> START - Cookbook sre.dns.wipe-cache kubestagemaster2003.codfw.wmnet on all recursors [production]
08:15 <jayme@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:15 <jayme@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:14 <marostegui@cumin1002> dbctl commit (dc=all): 'db1212 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P61344 and previous config saved to /var/cache/conftool/dbconfig/20240429-081455-root.json [production]
08:14 <jayme@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:13 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2171 (T361627)', diff saved to https://phabricator.wikimedia.org/P61343 and previous config saved to /var/cache/conftool/dbconfig/20240429-081323-marostegui.json [production]
08:13 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P61342 and previous config saved to /var/cache/conftool/dbconfig/20240429-081312-root.json [production]
08:13 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db2171.codfw.wmnet with reason: Maintenance [production]
08:12 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db2171.codfw.wmnet with reason: Maintenance [production]
08:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2157 (T361627)', diff saved to https://phabricator.wikimedia.org/P61341 and previous config saved to /var/cache/conftool/dbconfig/20240429-081254-marostegui.json [production]
08:11 <jayme@cumin1002> START - Cookbook sre.dns.netbox [production]
08:11 <jayme@cumin1002> START - Cookbook sre.ganeti.makevm for new host kubestagemaster2003.codfw.wmnet [production]
08:09 <marostegui@cumin1002> START - Cookbook sre.hosts.reimage for host db1223.eqiad.wmnet with OS bookworm [production]
08:07 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1223', diff saved to https://phabricator.wikimedia.org/P61340 and previous config saved to /var/cache/conftool/dbconfig/20240429-080710-root.json [production]
08:05 <marostegui@cumin1002> dbctl commit (dc=all): 'es1023 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P61339 and previous config saved to /var/cache/conftool/dbconfig/20240429-080550-root.json [production]
08:04 <dani@deploy1002> helmfile [codfw] DONE helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [codfw] START helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [eqiad] DONE helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [eqiad] START helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [staging] DONE helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [staging] START helmfile.d/services/miscweb: apply [production]
08:00 <dcausse> restarting blazegraph on wdqs1019 (BlazegraphFreeAllocatorsDecreasingRapidly) [production]
07:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db1212 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P61338 and previous config saved to /var/cache/conftool/dbconfig/20240429-075949-root.json [production]