2022-01-12
16:07 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134 (T297191)', diff saved to https://phabricator.wikimedia.org/P18674 and previous config saved to /var/cache/conftool/dbconfig/20220112-160755-marostegui.json [production]
16:02 <elukey> stop kafka* on kafka-main1002 to allow dcops maintenance (nic/bios upgrades) - T298867 [production]
15:57 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/shellbox-media: sync on main [production]
15:56 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/shellbox-media: apply on main [production]
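The START apply / DONE sync pairs above are helmfile runs against a service release on the deployment server. A minimal sketch of the invocation pattern (the `helmfile.d/services/` layout and service name come from the log; the exact wrapper used on deploy1002 may differ, so the commands are echoed here rather than executed):

```shell
svc=shellbox-media
env=eqiad

# On the real deployment host these would be run from the service's
# helmfile directory, e.g. .../helmfile.d/services/$svc :
echo "helmfile -e $env diff"    # preview the pending change
echo "helmfile -e $env apply"   # apply it; logged as START apply / DONE sync
```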
15:56 <bking@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host elastic2051.codfw.wmnet with OS stretch [production]
15:52 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134', diff saved to https://phabricator.wikimedia.org/P18673 and previous config saved to /var/cache/conftool/dbconfig/20220112-155250-marostegui.json [production]
15:37 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134', diff saved to https://phabricator.wikimedia.org/P18672 and previous config saved to /var/cache/conftool/dbconfig/20220112-153745-marostegui.json [production]
15:23 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host elastic2051.codfw.wmnet with OS stretch [production]
15:22 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1134 (T297191)', diff saved to https://phabricator.wikimedia.org/P18671 and previous config saved to /var/cache/conftool/dbconfig/20220112-152240-marostegui.json [production]
15:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1134 (T297191)', diff saved to https://phabricator.wikimedia.org/P18670 and previous config saved to /var/cache/conftool/dbconfig/20220112-152133-marostegui.json [production]
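The db1134 entries above show the usual dbctl maintenance cycle: downtime the host, depool it, do the work, then repool in gradual steps with a commit after each. A sketch of that cycle, with the host and task ID taken from the log (the percentage steps are illustrative, and the dbctl commands are echoed rather than executed here):

```shell
host=db1134
task=T297191

# 1. Silence alerts for the window (the cookbook seen in the log):
#      cookbook sre.hosts.downtime --hours 6 -r "Maintenance" ${host}.eqiad.wmnet
# 2. Depool the replica so it stops receiving traffic:
#      dbctl instance ${host} depool
#      dbctl config commit -m "Depooling ${host} (${task})"
# 3. ...run the maintenance itself...
# 4. Repool gradually, committing after each step so traffic ramps back up:
for pct in 10 25 50 75 100; do
    echo "dbctl instance ${host} pool -p ${pct}"
    echo "dbctl config commit -m 'Repooling after maintenance ${host} (${task})'"
done
```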
15:21 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1134.eqiad.wmnet with reason: Maintenance [production]
15:21 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1134.eqiad.wmnet with reason: Maintenance [production]
15:21 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1133.eqiad.wmnet with reason: Maintenance [production]
15:21 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1133.eqiad.wmnet with reason: Maintenance [production]
15:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163 (T297191)', diff saved to https://phabricator.wikimedia.org/P18669 and previous config saved to /var/cache/conftool/dbconfig/20220112-152121-marostegui.json [production]
15:14 <elukey> stop kafka* on kafka-main1001 to allow dcops maintenance (nic/bios upgrades) - T298867 [production]
15:06 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163', diff saved to https://phabricator.wikimedia.org/P18668 and previous config saved to /var/cache/conftool/dbconfig/20220112-150616-marostegui.json [production]
14:59 <moritzm> switch kubestagetcd1005 to DRBD (needed to be able to shuffle instances around for the Ganeti buster update) [production]
14:59 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on kubestagetcd1005.eqiad.wmnet with reason: switch to DRBD disk storage [production]
14:59 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on kubestagetcd1005.eqiad.wmnet with reason: switch to DRBD disk storage [production]
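Switching a Ganeti instance from plain to DRBD disk storage, as done for kubestagetcd1005 above, goes through `gnt-instance modify` with a secondary node to mirror to. A sketch of the steps (the secondary node name is hypothetical, and the gnt-instance commands are echoed rather than executed):

```shell
inst=kubestagetcd1005.eqiad.wmnet
secondary=ganeti1009.eqiad.wmnet   # hypothetical secondary node

# The disk template change requires the instance to be down:
echo "gnt-instance shutdown $inst"
# Convert to DRBD, mirroring the disks to the chosen secondary node:
echo "gnt-instance modify -t drbd -n $secondary $inst"
echo "gnt-instance startup $inst"
```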
14:56 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/shellbox-media: sync on main [production]
14:55 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/shellbox-media: apply on main [production]
14:54 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/shellbox-media: apply on main [production]
14:54 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/shellbox-media: apply on main [production]
14:51 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163', diff saved to https://phabricator.wikimedia.org/P18667 and previous config saved to /var/cache/conftool/dbconfig/20220112-145111-marostegui.json [production]
14:42 <oblivian@deploy1002> helmfile [eqiad] DONE helmfile.d/services/shellbox-media: sync on main [production]
14:42 <oblivian@deploy1002> helmfile [eqiad] START helmfile.d/services/shellbox-media: apply on main [production]
14:40 <jelto> remove helm2 from deployment_server T251305 https://gerrit.wikimedia.org/r/c/operations/puppet/+/753026 [production]
14:37 <jelto@deploy1002> helmfile [staging] DONE helmfile.d/services/blubberoid: sync on staging [production]
14:37 <jelto@deploy1002> helmfile [staging] DONE helmfile.d/services/blubberoid: apply on production [production]
14:37 <jelto@deploy1002> helmfile [staging] START helmfile.d/services/blubberoid: apply on staging [production]
14:36 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.reboot-vm (exit_code=0) for VM netflow1002.eqiad.wmnet [production]
14:36 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163 (T297191)', diff saved to https://phabricator.wikimedia.org/P18666 and previous config saved to /var/cache/conftool/dbconfig/20220112-143606-marostegui.json [production]
14:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1163 (T297191)', diff saved to https://phabricator.wikimedia.org/P18665 and previous config saved to /var/cache/conftool/dbconfig/20220112-143258-marostegui.json [production]
14:32 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1163.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1163.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
14:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1128 (T297191)', diff saved to https://phabricator.wikimedia.org/P18664 and previous config saved to /var/cache/conftool/dbconfig/20220112-143241-marostegui.json [production]
14:30 <jmm@cumin2002> START - Cookbook sre.ganeti.reboot-vm for VM netflow1002.eqiad.wmnet [production]
14:30 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:26 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:26 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:25 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:23 <moritzm> switch kubestagetcd1004 to DRBD (needed to be able to shuffle instances around for the Ganeti buster update) [production]
14:22 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on kubestagetcd1004.eqiad.wmnet with reason: switch to DRBD disk storage [production]
14:22 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on kubestagetcd1004.eqiad.wmnet with reason: switch to DRBD disk storage [production]
14:20 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]