2024-04-26
15:14 <bking@cumin2002> conftool action : set/weight=20:pooled=yes; selector: name=elastic1103\.eqiad\.wmnet [production]
2024-04-24
19:15 <bking@deploy1002> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
19:15 <bking@deploy1002> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
19:08 <inflatador> bking@deploy1002 stop `consumer-cloudelastic` release to test alerting T359213 [production]
19:07 <bking@deploy1002> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
19:06 <bking@deploy1002> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
2024-04-23
17:09 <inflatador> bking@mw1461 "restart rsyslog to reclaim fds T357616" [production]
2024-04-19
19:56 <bking@cumin2002> END (PASS) - Cookbook sre.wdqs.data-transfer (exit_code=0) (T362508, journal in uncertain state) xfer wikidata from wdqs2022.codfw.wmnet -> wdqs2023.codfw.wmnet w/ force delete existing files, repooling both afterwards [production]
18:34 <bking@cumin2002> START - Cookbook sre.wdqs.data-transfer (T362508, journal in uncertain state) xfer wikidata from wdqs2022.codfw.wmnet -> wdqs2023.codfw.wmnet w/ force delete existing files, repooling both afterwards [production]
2024-04-18
22:31 <bking@cumin2002> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) (T362508, excessive lag) xfer wikidata from wdqs2022.codfw.wmnet -> wdqs2023.codfw.wmnet w/ force delete existing files, repooling both afterwards [production]
21:11 <bking@cumin2002> START - Cookbook sre.wdqs.data-transfer (T362508, excessive lag) xfer wikidata from wdqs2022.codfw.wmnet -> wdqs2023.codfw.wmnet w/ force delete existing files, repooling both afterwards [production]
20:42 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on wdqs2023.codfw.wmnet with reason: T362508 [production]
20:42 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on wdqs2023.codfw.wmnet with reason: T362508 [production]
19:05 <bking@cumin2002> conftool action : set/pooled=true; selector: dnsdisc=wdqs,name=codfw [production]
2024-04-17
22:11 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on 19 hosts with reason: T362508 [production]
22:10 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on 19 hosts with reason: T362508 [production]
2024-04-15
22:09 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on 19 hosts with reason: T362508 [production]
22:09 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on 19 hosts with reason: T362508 [production]
17:42 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: T361647 - bking@cumin2002 [production]
16:23 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: T361647 - bking@cumin2002 [production]
15:11 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.rolling-operation (exit_code=0) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: T361647 - bking@cumin2002 [production]
14:48 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster cloudelastic: T361647 - bking@cumin2002 [production]
14:09 <bking@cumin2002> END (ERROR) - Cookbook sre.elasticsearch.rolling-operation (exit_code=97) Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: T361647 - bking@cumin2002 [production]
14:04 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (1 nodes at a time) for ElasticSearch cluster relforge: T361647 - bking@cumin2002 [production]
14:04 <bking@cumin2002> END (ERROR) - Cookbook sre.elasticsearch.rolling-operation (exit_code=97) Operation.RESTART (3 nodes at a time) for ElasticSearch cluster relforge: T361647 - bking@cumin2002 [production]
14:04 <bking@cumin2002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster relforge: T361647 - bking@cumin2002 [production]
2024-04-12
19:36 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Unbanning all hosts in search_codfw [production]
19:36 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Unbanning all hosts in search_codfw [production]
15:51 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on elastic2090.codfw.wmnet with reason: T353878 [production]
15:51 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 1:00:00 on elastic2090.codfw.wmnet with reason: T353878 [production]
15:50 <bking@cumin2002> END (FAIL) - Cookbook sre.elasticsearch.ban (exit_code=99) Banning hosts: elastic2090 for reboot to get rid of broken systemd units - bking@cumin2002 - T353878 [production]
15:50 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Banning hosts: elastic2090 for reboot to get rid of broken systemd units - bking@cumin2002 - T353878 [production]
2024-04-10
13:30 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 30 days, 0:00:00 on elastic2088.codfw.wmnet with reason: T361525 [production]
13:30 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 30 days, 0:00:00 on elastic2088.codfw.wmnet with reason: T361525 [production]
2024-04-05
16:18 <bking@deploy1002> helmfile [eqiad] DONE helmfile.d/services/cirrus-streaming-updater: apply [production]
16:18 <bking@deploy1002> helmfile [eqiad] START helmfile.d/services/cirrus-streaming-updater: apply [production]
2024-04-02
21:21 <bking@cumin2002> END (PASS) - Cookbook sre.ganeti.resource-report (exit_code=0) [production]
21:21 <bking@cumin2002> START - Cookbook sre.ganeti.resource-report [production]
2024-04-01
20:36 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host elastic2088.codfw.wmnet with OS bullseye [production]
20:19 <bking@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on elastic2088.codfw.wmnet with reason: host reimage [production]
20:17 <bking@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on elastic2088.codfw.wmnet with reason: host reimage [production]
20:00 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host elastic2088.codfw.wmnet with OS bullseye [production]
2024-03-29
13:32 <bking@cumin2002> END (PASS) - Cookbook sre.elasticsearch.ban (exit_code=0) Unbanning all hosts in search_codfw [production]
13:32 <bking@cumin2002> START - Cookbook sre.elasticsearch.ban Unbanning all hosts in search_codfw [production]
2024-03-28
20:59 <inflatador> bking@mwmaint1002 sudo apt-get install ripgrep (faster recursive grep) [production]
2024-03-27
23:04 <bking@cumin2002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=93) for host elastic2088.codfw.wmnet with OS bullseye [production]
21:46 <bking@cumin2002> START - Cookbook sre.hosts.decommission for hosts elastic[2038-2048,2050-2054].codfw.wmnet [production]
21:41 <bking@cumin2002> START - Cookbook sre.hosts.reimage for host elastic2088.codfw.wmnet with OS bullseye [production]