2024-02-06
15:50 <sukhe@cumin2002> START - Cookbook sre.hosts.downtime for 3:00:00 on cp[2033-2034].codfw.wmnet with reason: T355860 [production]
15:48 <bking@cumin2002> START - Cookbook sre.hosts.decommission for hosts cloudelastic1009.wikimedia.org [production]
15:47 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
15:47 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
15:46 <arnaudb@cumin1002> START - Cookbook sre.mysql.clone Will create a clone of db2169.codfw.wmnet onto db2194.codfw.wmnet [production]
15:44 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: elastic2058*,elastic2070*,elastic2095*,elastic2096* for switch maintenance - bking@cumin2002 - T355860 [production]
15:44 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058*,elastic2070*,elastic2095*,elastic2096* for switch maintenance - bking@cumin2002 - T355860 [production]
15:44 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2094.codfw.wmnet with OS bullseye [production]
15:43 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: elastic2058* for switch maintenance - bking@cumin2002 - T355860 [production]
15:43 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058* for switch maintenance - bking@cumin2002 - T355860 [production]
15:43 <jgiannelos@deploy2002> Started deploy [restbase/deploy@05fa5c9]: Disabling storage for ptwiki [production]
15:43 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: elastic2058 for switch maintenance - bking@cumin2002 - T355860 [production]
15:43 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058 for switch maintenance - bking@cumin2002 - T355860 [production]
15:42 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: elastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:42 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:41 <btullis@deploy2002> helmfile [codfw] START helmfile.d/services/datahub: sync on main [production]
15:41 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: eelastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:41 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: eelastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:37 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:37 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:34 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B4 for switch maintenance - bking@cumin2002 - T355860 [production]
15:34 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B4 for switch maintenance - bking@cumin2002 - T355860 [production]
15:28 <btullis@cumin1002> END (PASS) - Cookbook sre.presto.roll-restart-workers (exit_code=0) for Presto analytics cluster: Roll restart of all Presto's jvm daemons. [production]
15:27 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:27 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:26 <bking@cumin2002> conftool action : set/pooled=no; selector: name=wdqs2016.codfw.wmnet [production]
15:26 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2094.codfw.wmnet with reason: host reimage [production]
15:25 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:25 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:23 <btullis@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2094.codfw.wmnet with reason: host reimage [production]
15:14 <filippo@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/aux-k8s-eqiad-services/jaeger: apply [production]
15:14 <filippo@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/aux-k8s-eqiad-services/jaeger: apply [production]
15:11 <herron@cumin1002> END (PASS) - Cookbook sre.kafka.roll-restart-reboot-brokers (exit_code=0) rolling restart_daemons on A:kafka-logging-codfw [production]
15:07 <topranks> Disabling netbox service on netbox1002 prior to db restore from backup [production]
15:06 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on netbox1002.eqiad.wmnet with reason: Restoring DB from backup on netboxdb1002 [production]
15:06 <marostegui@cumin1002> dbctl commit (dc=all): 'es1029 (re)pooling @ 100%: After reimage', diff saved to https://phabricator.wikimedia.org/P56344 and previous config saved to /var/cache/conftool/dbconfig/20240206-150649-root.json [production]
15:06 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 0:30:00 on netbox1002.eqiad.wmnet with reason: Restoring DB from backup on netboxdb1002 [production]
14:56 <btullis@cumin1002> START - Cookbook sre.presto.roll-restart-workers for Presto analytics cluster: Roll restart of all Presto's jvm daemons. [production]
14:54 <btullis@cumin1002> START - Cookbook sre.hosts.reimage for host elastic2094.codfw.wmnet with OS bullseye [production]
14:54 <hashar@deploy2002> Finished deploy [gerrit/gerrit@2e441ac]: wm-checks-api: handle Zuul 'Merge failed' messages - T356647 (duration: 00m 07s) [production]
14:54 <hashar@deploy2002> Started deploy [gerrit/gerrit@2e441ac]: wm-checks-api: handle Zuul 'Merge failed' messages - T356647 [production]
14:51 <marostegui@cumin1002> dbctl commit (dc=all): 'es1029 (re)pooling @ 75%: After reimage', diff saved to https://phabricator.wikimedia.org/P56343 and previous config saved to /var/cache/conftool/dbconfig/20240206-145144-root.json [production]
14:51 <mvernon@cumin2002> END (FAIL) - Cookbook sre.swift.convert-disks (exit_code=99) for host ms-be2044 [production]
14:49 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts cloudelastic1009.wikimedia.org [production]
14:49 <bking@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:49 <bking@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudelastic1009.wikimedia.org decommissioned, removing all IPs except the asset tag one - bking@cumin2002" [production]
14:48 <bking@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: cloudelastic1009.wikimedia.org decommissioned, removing all IPs except the asset tag one - bking@cumin2002" [production]
14:47 <herron@cumin1002> START - Cookbook sre.kafka.roll-restart-reboot-brokers rolling restart_daemons on A:kafka-logging-codfw [production]
14:45 <Lucas_WMDE> UTC afternoon backport+config window done [production]
14:45 <bking@cumin2002> START - Cookbook sre.dns.netbox [production]