2023-02-13
14:16 <btullis@deploy1002> helmfile [eqiad] START helmfile.d/services/eventgate-main: sync [production]
14:16 <btullis@deploy1002> helmfile [codfw] DONE helmfile.d/services/eventgate-main: sync [production]
14:16 <btullis@deploy1002> helmfile [codfw] START helmfile.d/services/eventgate-main: sync [production]
14:15 <btullis@deploy1002> helmfile [staging] DONE helmfile.d/services/eventgate-main: sync [production]
14:15 <lucaswerkmeister-wmde@deploy1002> Finished scap: Backport for [[gerrit:887998|Add iOS stream config]] (duration: 10m 06s) [production]
14:15 <btullis@deploy1002> helmfile [staging] START helmfile.d/services/eventgate-main: sync [production]
14:15 <btullis> roll-restarting all eventgate pods [analytics]
14:13 <elukey@cumin1001> START - Cookbook sre.ganeti.reimage for host ml-staging-etcd2001.codfw.wmnet with OS bullseye [production]
14:13 <dcaro@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on cloudcephosd1002.eqiad.wmnet with reason: moving racks [production]
14:12 <dcaro@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on cloudcephosd1002.eqiad.wmnet with reason: moving racks [production]
14:12 <dcaro@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on cloudcephosd1001.eqiad.wmnet with reason: moving racks [production]
14:12 <elukey@cumin1001> START - Cookbook sre.k8s.upgrade-cluster Upgrade K8s version: Upgrade ml-staging-codfw cluster to 1.23 [production]
14:12 <dcaro@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on cloudcephosd1001.eqiad.wmnet with reason: moving racks [production]
14:11 <nfraison@cumin1001> END (FAIL) - Cookbook sre.ganeti.reimage (exit_code=99) for host an-test-presto1001.eqiad.wmnet with OS bullseye [production]
14:10 <nfraison@cumin1001> START - Cookbook sre.ganeti.reimage for host an-test-presto1001.eqiad.wmnet with OS bullseye [production]
14:07 <lucaswerkmeister-wmde@deploy1002> mazevedo and lucaswerkmeister-wmde: Backport for [[gerrit:887998|Add iOS stream config]] synced to the testservers: mwdebug2002.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2001.codfw.wmnet, mwdebug1001.eqiad.wmnet [production]
14:07 <jbond> upload node-bgpalerter_1.31.2 to apt [production]
14:06 <wm-bot2> Set the ceph cluster for eqiad1 in maintenance, alert silence ids: 8fbf6bfd-eec1-4d81-8e0d-ea431d8411ee (T329498) - cookbook ran by dcaro@vulcanus [admin]
14:06 <nfraison> Reimage an-test-presto1001 to upgrade to bullseye T329361 [analytics]
14:05 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-reload (exit_code=99) [production]
14:05 <bking@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
14:05 <lucaswerkmeister-wmde@deploy1002> Started scap: Backport for [[gerrit:887998|Add iOS stream config]] [production]
14:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1130 (T329203)', diff saved to https://phabricator.wikimedia.org/P44415 and previous config saved to /var/cache/conftool/dbconfig/20230213-140243-marostegui.json [production]
14:02 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1130.eqiad.wmnet with reason: Maintenance [production]
14:02 <elukey@cumin1001> END (FAIL) - Cookbook sre.k8s.upgrade-cluster (exit_code=99) Upgrade K8s version: Upgrade ml-staging-codfw cluster to 1.23 [production]
14:02 <elukey@cumin1001> START - Cookbook sre.k8s.upgrade-cluster Upgrade K8s version: Upgrade ml-staging-codfw cluster to 1.23 [production]
14:02 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1130.eqiad.wmnet with reason: Maintenance [production]
14:02 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3315 (T329203)', diff saved to https://phabricator.wikimedia.org/P44414 and previous config saved to /var/cache/conftool/dbconfig/20230213-140222-marostegui.json [production]
13:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db2166 (T328817)', diff saved to https://phabricator.wikimedia.org/P44413 and previous config saved to /var/cache/conftool/dbconfig/20230213-135753-marostegui.json [production]
13:57 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2166.codfw.wmnet with reason: Maintenance [production]
13:57 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db2166.codfw.wmnet with reason: Maintenance [production]
13:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2164 (T328817)', diff saved to https://phabricator.wikimedia.org/P44412 and previous config saved to /var/cache/conftool/dbconfig/20230213-135732-marostegui.json [production]
13:55 <wm-bot2> drained, depooled and removed worker toolsbeta-test-k8s-worker-5 - cookbook ran by arturo@nostromo [toolsbeta]
13:54 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 6677 [production]
13:54 <ayounsi@cumin1001> START - Cookbook sre.network.peering with action 'configure' for AS: 6677 [production]
13:47 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3315', diff saved to https://phabricator.wikimedia.org/P44411 and previous config saved to /var/cache/conftool/dbconfig/20230213-134716-marostegui.json [production]
13:46 <wm-bot2> Depooled and removed worker toolsbeta-test-k8s-worker-4.toolsbeta.eqiad1.wikimedia.cloud - cookbook ran by arturo@nostromo [toolsbeta]
13:46 <wm-bot2> Drained node toolsbeta-test-k8s-worker-4 - cookbook ran by arturo@nostromo [toolsbeta]
13:46 <wm-bot2> Draining node toolsbeta-test-k8s-worker-4... - cookbook ran by arturo@nostromo [toolsbeta]
13:45 <wm-bot2> Depooling and removing worker , will pick the oldest - cookbook ran by arturo@nostromo [toolsbeta]
13:42 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2164', diff saved to https://phabricator.wikimedia.org/P44410 and previous config saved to /var/cache/conftool/dbconfig/20230213-134226-marostegui.json [production]
13:32 <taavi> re-enable puppet on labstore1004 T329377 [admin]
13:32 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3315', diff saved to https://phabricator.wikimedia.org/P44409 and previous config saved to /var/cache/conftool/dbconfig/20230213-133210-marostegui.json [production]
13:31 <wm-bot2> Depooling and removing worker , will pick the oldest - cookbook ran by arturo@nostromo [toolsbeta]
13:30 <wm-bot2> Depooling and removing worker , will pick the oldest - cookbook ran by arturo@nostromo [toolsbeta]
13:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2164', diff saved to https://phabricator.wikimedia.org/P44408 and previous config saved to /var/cache/conftool/dbconfig/20230213-132719-marostegui.json [production]
13:17 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1113:3315 (T329203)', diff saved to https://phabricator.wikimedia.org/P44407 and previous config saved to /var/cache/conftool/dbconfig/20230213-131703-marostegui.json [production]
13:15 <arturo> cordoned & drained k8s workers 4 to 7 to force workload to relocate to 8 (T329378) [toolsbeta]
13:14 <wm-bot2> build & push docker image docker-registry.tools.wmflabs.org/maintain-kubeusers:aac195b from https://gerrit.wikimedia.org/r/labs/tools/maintain-kubeusers (aac195b) - cookbook ran by taavi@runko [tools]
13:13 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1113:3315 (T329203)', diff saved to https://phabricator.wikimedia.org/P44406 and previous config saved to /var/cache/conftool/dbconfig/20230213-131348-marostegui.json [production]