2024-02-06
16:04 <bking@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: migrate cloudelastic1009 to private IPs - bking@cumin2002" [production]
16:03 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on 23 hosts with reason: Migrate servers in codfw rack B4 from asw-b4-codfw to lsw1-b4-codfw [production]
16:02 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 0:30:00 on 23 hosts with reason: Migrate servers in codfw rack B4 from asw-b4-codfw to lsw1-b4-codfw [production]
16:01 <bking@cumin2002> START - Cookbook sre.dns.netbox [production]
16:01 <jgiannelos@deploy2002> Finished deploy [restbase/deploy@05fa5c9]: Disabling storage for ptwiki (duration: 17m 39s) [production]
16:00 <topranks> configuring lsw1-b4-codfw with port config for new hosts T355860 [production]
15:59 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
15:59 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on cr[1-2]-codfw with reason: prepping for server uplink migration [production]
15:58 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on cr[1-2]-codfw with reason: prepping for server uplink migration [production]
15:58 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1150.eqiad.wmnet with reason: Maintenance [production]
15:58 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on asw-b-codfw,lsw1-b4-codfw.mgmt with reason: prepping for server uplink migration [production]
15:58 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 1:00:00 on asw-b-codfw,lsw1-b4-codfw.mgmt with reason: prepping for server uplink migration [production]
15:56 <topranks> moving Netbox server uplinks from asw-b4-codfw to lsw1-b4-codfw to prep config for server moves T355860 [production]
15:53 <bking@cumin2002> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) for hosts cloudelastic1009.wikimedia.org [production]
15:53 <bking@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:52 <btullis@deploy2002> helmfile [codfw] DONE helmfile.d/services/datahub: sync on main [production]
15:51 <bking@cumin2002> START - Cookbook sre.dns.netbox [production]
15:51 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on cp[2033-2034].codfw.wmnet with reason: T355860 [production]
15:50 <sukhe@cumin2002> START - Cookbook sre.hosts.downtime for 3:00:00 on cp[2033-2034].codfw.wmnet with reason: T355860 [production]
15:48 <bking@cumin2002> START - Cookbook sre.hosts.decommission for hosts cloudelastic1009.wikimedia.org [production]
15:47 <marostegui@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
15:47 <marostegui@cumin1002> START - Cookbook sre.hosts.downtime for 6:00:00 on db1140.eqiad.wmnet with reason: Maintenance [production]
15:46 <arnaudb@cumin1002> START - Cookbook sre.mysql.clone Will create a clone of db2169.codfw.wmnet onto db2194.codfw.wmnet [production]
15:44 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: elastic2058*,elastic2070*,elastic2095*,elastic2096* for switch maintenance - bking@cumin2002 - T355860 [production]
15:44 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058*,elastic2070*,elastic2095*,elastic2096* for switch maintenance - bking@cumin2002 - T355860 [production]
15:44 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2094.codfw.wmnet with OS bullseye [production]
15:43 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Banning hosts: elastic2058* for switch maintenance - bking@cumin2002 - T355860 [production]
15:43 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058* for switch maintenance - bking@cumin2002 - T355860 [production]
15:43 <jgiannelos@deploy2002> Started deploy [restbase/deploy@05fa5c9]: Disabling storage for ptwiki [production]
15:43 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: elastic2058 for switch maintenance - bking@cumin2002 - T355860 [production]
15:43 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058 for switch maintenance - bking@cumin2002 - T355860 [production]
15:42 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: elastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:42 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:41 <btullis@deploy2002> helmfile [codfw] START helmfile.d/services/datahub: sync on main [production]
15:41 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: eelastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:41 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: eelastic2058,elastic2070,elastic2095,elastic2096 for switch maintenance - bking@cumin2002 - T355860 [production]
15:37 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:37 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:34 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B4 for switch maintenance - bking@cumin2002 - T355860 [production]
15:34 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B4 for switch maintenance - bking@cumin2002 - T355860 [production]
15:28 <btullis@cumin1002> END (PASS) - Cookbook sre.presto.roll-restart-workers (exit_code=0) for Presto analytics cluster: Roll restart of all Presto's jvm daemons. [production]
15:27 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:27 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:26 <bking@cumin2002> conftool action : set/pooled=no; selector: name=wdqs2016.codfw.wmnet [production]
15:26 <btullis@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2094.codfw.wmnet with reason: host reimage [production]
15:25 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:25 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning all hosts in row B for switch maintenance - bking@cumin2002 - T355860 [production]
15:23 <btullis@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2094.codfw.wmnet with reason: host reimage [production]
15:14 <filippo@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]
15:14 <filippo@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/aus-k8s-eqiad-services/jaeger: apply [production]