2024-04-29 §
08:22 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on db1223.eqiad.wmnet with reason: host reimage [production]
08:21 <jayme@cumin1002> START - Cookbook sre.hosts.reimage for host kubestagemaster2003.codfw.wmnet with OS bullseye [production]
08:20 <marostegui@cumin1002> dbctl commit (dc=all): 'es1023 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P61346 and previous config saved to /var/cache/conftool/dbconfig/20240429-082056-root.json [production]
08:20 <taavi@deploy1002> taavi: Continuing with sync [production]
08:20 <taavi@deploy1002> taavi: Backport for [[gerrit:1025174|Fix disabling TOTP keys with scratch tokens (T363548)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
08:17 <taavi@deploy1002> Started scap: Backport for [[gerrit:1025174|Fix disabling TOTP keys with scratch tokens (T363548)]] [production]
08:17 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2171 (T361627)', diff saved to https://phabricator.wikimedia.org/P61345 and previous config saved to /var/cache/conftool/dbconfig/20240429-081754-marostegui.json [production]
08:16 <jayme@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:15 <jayme@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.ganeti.makevm: created new VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:15 <jayme@cumin1002> END (PASS) - Cookbook sre.dns.wipe-cache (exit_code=0) kubestagemaster2003.codfw.wmnet on all recursors [production]
08:15 <jayme@cumin1002> START - Cookbook sre.dns.wipe-cache kubestagemaster2003.codfw.wmnet on all recursors [production]
08:15 <jayme@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:15 <jayme@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:14 <marostegui@cumin1002> dbctl commit (dc=all): 'db1212 (re)pooling @ 100%: Repooling', diff saved to https://phabricator.wikimedia.org/P61344 and previous config saved to /var/cache/conftool/dbconfig/20240429-081455-root.json [production]
08:14 <jayme@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add records for VM kubestagemaster2003.codfw.wmnet - jayme@cumin1002" [production]
08:13 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db2171 (T361627)', diff saved to https://phabricator.wikimedia.org/P61343 and previous config saved to /var/cache/conftool/dbconfig/20240429-081323-marostegui.json [production]
08:13 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 25%: Repooling', diff saved to https://phabricator.wikimedia.org/P61342 and previous config saved to /var/cache/conftool/dbconfig/20240429-081312-root.json [production]
08:13 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 4:00:00 on db2171.codfw.wmnet with reason: Maintenance [production]
08:12 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 4:00:00 on db2171.codfw.wmnet with reason: Maintenance [production]
08:12 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2157 (T361627)', diff saved to https://phabricator.wikimedia.org/P61341 and previous config saved to /var/cache/conftool/dbconfig/20240429-081254-marostegui.json [production]
08:11 <jayme@cumin1002> START - Cookbook sre.dns.netbox [production]
08:11 <jayme@cumin1002> START - Cookbook sre.ganeti.makevm for new host kubestagemaster2003.codfw.wmnet [production]
08:09 <marostegui@cumin1002> START - Cookbook sre.hosts.reimage for host db1223.eqiad.wmnet with OS bookworm [production]
08:07 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1223', diff saved to https://phabricator.wikimedia.org/P61340 and previous config saved to /var/cache/conftool/dbconfig/20240429-080710-root.json [production]
08:05 <marostegui@cumin1002> dbctl commit (dc=all): 'es1023 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P61339 and previous config saved to /var/cache/conftool/dbconfig/20240429-080550-root.json [production]
08:04 <dani@deploy1002> helmfile [codfw] DONE helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [codfw] START helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [eqiad] DONE helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [eqiad] START helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [staging] DONE helmfile.d/services/miscweb: apply [production]
08:04 <dani@deploy1002> helmfile [staging] START helmfile.d/services/miscweb: apply [production]
08:00 <dcausse> restarting blazegraph on wdqs1019 (BlazegraphFreeAllocatorsDecreasingRapidly) [production]
07:59 <marostegui@cumin1002> dbctl commit (dc=all): 'db1212 (re)pooling @ 75%: Repooling', diff saved to https://phabricator.wikimedia.org/P61338 and previous config saved to /var/cache/conftool/dbconfig/20240429-075949-root.json [production]
07:59 <dani@deploy1002> helmfile [codfw] DONE helmfile.d/services/miscweb: apply [production]
07:59 <dani@deploy1002> helmfile [codfw] START helmfile.d/services/miscweb: apply [production]
07:59 <dani@deploy1002> helmfile [eqiad] DONE helmfile.d/services/miscweb: apply [production]
07:58 <dani@deploy1002> helmfile [eqiad] START helmfile.d/services/miscweb: apply [production]
07:58 <dani@deploy1002> helmfile [staging] DONE helmfile.d/services/miscweb: apply [production]
07:58 <dani@deploy1002> helmfile [staging] START helmfile.d/services/miscweb: apply [production]
07:58 <marostegui@cumin1002> dbctl commit (dc=all): 'db2159 (re)pooling @ 10%: Repooling', diff saved to https://phabricator.wikimedia.org/P61337 and previous config saved to /var/cache/conftool/dbconfig/20240429-075806-root.json [production]
07:57 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2157', diff saved to https://phabricator.wikimedia.org/P61336 and previous config saved to /var/cache/conftool/dbconfig/20240429-075746-marostegui.json [production]
07:52 <dani@deploy1002> helmfile [codfw] DONE helmfile.d/services/miscweb: apply [production]
07:52 <dani@deploy1002> helmfile [codfw] START helmfile.d/services/miscweb: apply [production]
07:52 <dani@deploy1002> helmfile [eqiad] DONE helmfile.d/services/miscweb: apply [production]
07:52 <dani@deploy1002> helmfile [eqiad] START helmfile.d/services/miscweb: apply [production]
07:52 <dani@deploy1002> helmfile [staging] DONE helmfile.d/services/miscweb: apply [production]
07:52 <dani@deploy1002> helmfile [staging] START helmfile.d/services/miscweb: apply [production]
07:50 <marostegui@cumin1002> dbctl commit (dc=all): 'es1023 (re)pooling @ 50%: Repooling', diff saved to https://phabricator.wikimedia.org/P61335 and previous config saved to /var/cache/conftool/dbconfig/20240429-075045-root.json [production]
07:49 <dani@deploy1002> helmfile [codfw] DONE helmfile.d/services/miscweb: apply [production]
07:48 <dani@deploy1002> helmfile [codfw] START helmfile.d/services/miscweb: apply [production]