2024-02-06
15:59 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
15:59 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on cr[1-2]-codfw with reason: prepping for server uplink migration [production]
15:58 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on cr[1-2]-codfw with reason: prepping for server uplink migration [production]
15:58 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
15:58 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on asw-b-codfw,lsw1-b4-codfw.mgmt with reason: prepping for server uplink migration [production]
15:58 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on asw-b-codfw,lsw1-b4-codfw.mgmt with reason: prepping for server uplink migration [production]
15:56 <topranks> moving Netbox server uplinks from asw-b4-codfw to lsw1-b4-codfw to prep config for server moves T355860 [production]
15:53 <bking@cumin2002> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) for hosts cloudelastic1009.wikimedia.org [production]
15:53 <bking@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:52 <btullis@deploy2002> helmfile [codfw] DONE helmfile.d/services/datahub: sync on main [production]
15:51 <bking@cumin2002> START - Cookbook sre.dns.netbox [production]
15:51 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on cp[2033-2034].codfw.wmnet with reason: T355860 [production]
15:50 <sukhe@cumin2002> START - Cookbook sre.hosts.downtime for 3:00:00 on cp[2033-2034].codfw.wmnet with reason: T355860 [production]
15:48 <bking@cumin2002> START - Cookbook sre.hosts.decommission for hosts cloudelastic1009.wikimedia.org [production]
15:47 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
15:47 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
15:46 <arnaudb@cumin1002> START - Cookbook sre.mysql.clone Will create a clone of db2169.codfw.wmnet onto db2194.codfw.wmnet [production]
15:44 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: elastic2058*,elastic2070*,elastic2095*,elastic2096* for switch maintenance - bking@cumin2002 - T355860 [production]
15:44 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058*,elastic2070*,elastic2095*,elastic2096* for switch maintenance - bking@cumin2002 - T355860 [production]
15:44 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2094.codfw.wmnet with OS bullseye [production]
15:43 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: elastic2058* for switch maintenance - bking@cumin2002 - T355860 [production]
15:43 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058* for switch maintenance - bking@cumin2002 - T355860 [production]
15:43 <jgiannelos@deploy2002> Started deploy [restbase/deploy@05fa5c9]: Disabling storage for ptwiki [production]
15:43 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: elastic2058 for switch maintenance - bking@cumin2002 - T355860 [production]
15:43 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058 for switch maintenance - bking@cumin2002 - T355860 [production]
15:42 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: elastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:42 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:41 <btullis@deploy2002> helmfile [codfw] START helmfile.d/services/datahub: sync on main [production]
15:41 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: eelastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:41 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: eelastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:37 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:37 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:34 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B4 for switch maintenance - bking@cumin2002 - T355860 [production]
15:34 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B4 for switch maintenance - bking@cumin2002 - T355860 [production]
15:28 <btullis@cumin1002> END (PASS) - Cookbook sre.presto.roll-restart-workers (exit_code=0) for Presto analytics cluster: Roll restart of all Presto's jvm daemons. [production]
15:27 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:27 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:26 <bking@cumin2002> conftool action : set/pooled=no; selector: name=wdqs2016.codfw.wmnet [production]
15:26 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2094.codfw.wmnet with reason: host reimage [production]
15:25 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:25 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:23 <btullis@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2094.codfw.wmnet with reason: host reimage [production]
15:14 <filippo@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]
15:14 <filippo@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]
15:11 <herron@cumin1002> END (PASS) - Cookbook sre.kafka.roll-restart-reboot-brokers (exit_code=0) rolling restart_daemons on A:kafka-logging-codfw [production]
15:07 <topranks> Disabling netbox service on netbox1002 prior to db restore from backup [production]
15:06 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on netbox1002.eqiad.wmnet with reason: Restoring DB from backup on netboxdb1002 [production]
15:06 <marostegui@cumin1002> dbctl commit (dc=all): 'es1029 (re)pooling @ 100%: After reimage', diff saved to https://phabricator.wikimedia.org/P56344 and previous config saved to /var/cache/conftool/dbconfig/20240206-150649-root.json [production]
15:06 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 0:30:00 on netbox1002.eqiad.wmnet with reason: Restoring DB from backup on netboxdb1002 [production]
14:56 <btullis@cumin1002> START - Cookbook sre.presto.roll-restart-workers for Presto analytics cluster: Roll restart of all Presto's jvm daemons. [production]