2023-04-25
19:48 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on wdqs2012.codfw.wmnet with reason: attempting WDQS stack on bullseye [production]
19:48 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on wdqs2012.codfw.wmnet with reason: attempting WDQS stack on bullseye [production]
19:48 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for wdqs2006.codfw.wmnet [production]
19:48 <bking@cumin1001> START - Cookbook sre.hosts.remove-downtime for wdqs2006.codfw.wmnet [production]
19:46 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for wdqs2009.codfw.wmnet [production]
19:46 <bking@cumin1001> START - Cookbook sre.hosts.remove-downtime for wdqs2009.codfw.wmnet [production]
19:46 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on wdqs2006.codfw.wmnet with reason: attempting WDQS stack on bullseye [production]
19:46 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 2 days, 0:00:00 on wdqs2006.codfw.wmnet with reason: attempting WDQS stack on bullseye [production]
19:23 <inflatador> bking@cumin1001 finishing WDQS deploy...restarting `wdqs-categories` across lvs-managed hosts [production]
18:57 <bking@deploy1002> Finished deploy [wdqs/wdqs@0e051d8]: 0.3.123 (duration: 17m 29s) [production]
18:39 <bking@deploy1002> Started deploy [wdqs/wdqs@0e051d8]: 0.3.123 [production]
14:58 <bking@deploy1002> Finished deploy [wdqs/wdqs@0e051d8]: 0.3.123 (duration: 07m 38s) [production]
14:50 <bking@deploy1002> Started deploy [wdqs/wdqs@0e051d8]: 0.3.123 [production]
13:30 <inflatador> bking@cumin1001 transfer.py wdqs2009.codfw.wmnet:/srv/wdqs wdqs2022.codfw.wmnet:/srv/wdqs [production]
13:26 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on wdqs2009.codfw.wmnet with reason: attempting WDQS stack on bullseye [production]
13:26 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on wdqs2009.codfw.wmnet with reason: attempting WDQS stack on bullseye [production]
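(For context, the entries above correspond roughly to the following commands on cumin1001 and deploy1002. This is a hedged sketch: the transfer.py call is copied from the log, but the cookbook flag names and the deploy repo path are assumptions.)
  # On cumin1001: downtime / un-downtime a host via spicerack cookbooks (flag names approximate)
  sudo cookbook sre.hosts.downtime --days 2 \
      --reason "attempting WDQS stack on bullseye" 'wdqs2012.codfw.wmnet'
  sudo cookbook sre.hosts.remove-downtime 'wdqs2006.codfw.wmnet'
  # On cumin1001: copy the on-disk WDQS data between hosts (as logged at 13:30)
  transfer.py wdqs2009.codfw.wmnet:/srv/wdqs wdqs2022.codfw.wmnet:/srv/wdqs
  # On deploy1002: scap deploy of the wdqs 0.3.123 release (repo path assumed)
  cd /srv/deployment/wdqs/wdqs && scap deploy '0.3.123'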
2023-04-24
18:44 <bking@cumin1001> END (ERROR) - Cookbook sre.wdqs.data-transfer (exit_code=97) [production]
14:58 <inflatador> bking@wdqs1015 repool wdqs1015 as lag is back down [production]
12:56 <bking@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
2023-04-20
22:48 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
21:22 <inflatador> bking@cumin1001 repool wdqs2012 T331300 [production]
21:19 <bking@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
21:19 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
21:18 <bking@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
21:18 <inflatador> bking@cumin1001 depool wdqs2009 for data xfer T331300 [production]
20:36 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
19:58 <bking@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
19:57 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
19:54 <bking@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
19:28 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
19:16 <inflatador> bking@cumin1001 depool wdqs2012.codfw.wmnet for data xfer T331300 [production]
19:16 <bking@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
19:15 <bking@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
19:13 <bking@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
15:33 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12 days, 0:00:00 on wdqs2022.codfw.wmnet with reason: attempting WDQS stack on bullseye [production]
15:32 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 12 days, 0:00:00 on wdqs2022.codfw.wmnet with reason: attempting WDQS stack on bullseye [production]
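(The depool/repool entries around the repeated sre.wdqs.data-transfer attempts above are usually done either with the pool/depool wrappers on the host itself or with conftool from a cumin host; the confctl selector below is an illustrative assumption, not copied from the log.)
  # On the wdqs host itself
  sudo depool    # before the data transfer
  sudo pool      # once the transfer succeeds / lag recovers
  # Or via conftool from a cumin host
  sudo confctl select 'name=wdqs2012.codfw.wmnet' set/pooled=no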
2023-04-19
22:34 <bking@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host wdqs2022.codfw.wmnet with OS bullseye [production]
21:38 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host wdqs2022.codfw.wmnet with OS bullseye [production]
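(The reimage above would have been started roughly as follows; the --os flag and positional short hostname are believed correct for the sre.hosts.reimage cookbook but are an assumption here.)
  # On cumin1001: reimage the host to Debian bullseye
  sudo cookbook sre.hosts.reimage --os bullseye wdqs2022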
2023-04-17
21:17 <inflatador> bking@cumin1001 ban cloudelastic1004 for upcoming switch maintenance T333377 [production]
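("Banning" a node ahead of switch maintenance generally means excluding it from shard allocation; the raw Elasticsearch API call below illustrates the effect, though the actual change was likely made with Search-team tooling rather than curl.)
  curl -s -XPUT 'http://localhost:9200/_cluster/settings' \
       -H 'Content-Type: application/json' \
       -d '{"transient": {"cluster.routing.allocation.exclude._name": "cloudelastic1004*"}}'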
2023-04-13
14:23 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:23 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
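(The helmfile entries in this and the following sections apply the rdf-streaming-updater release in the dse-k8s-eqiad environment; the working directory below is an assumption based on the path shown in the log lines.)
  # On deploy2002 (base path assumed)
  cd /srv/deployment-charts/helmfile.d/dse-k8s-services/rdf-streaming-updater
  helmfile -e dse-k8s-eqiad apply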
2023-04-12
14:06 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:05 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
2023-04-06
14:21 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:21 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
2023-04-04
20:23 <inflatador> bking@cumin1001 unban elastic nodes post switch maintenance T331882 [production]
2023-04-03
21:25 <inflatador> bking@cumin ban cloudelastic1003 from all cloudelastic clusters T331882 [production]
2023-03-30
14:57 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:56 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:49 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]