2024-11-20
14:06 <jiji@deploy2002> helmfile [aux-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
14:05 <jiji@deploy2002> helmfile [aux-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
14:05 <jiji@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/admin 'apply'. [production]
14:05 <jiji@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/admin 'apply'. [production]
14:05 <jiji@deploy2002> helmfile [ml-staging-codfw] DONE helmfile.d/admin 'apply'. [production]
14:04 <jiji@deploy2002> helmfile [ml-staging-codfw] START helmfile.d/admin 'apply'. [production]
14:04 <jiji@deploy2002> helmfile [ml-serve-codfw] DONE helmfile.d/admin 'apply'. [production]
14:04 <jiji@deploy2002> helmfile [ml-serve-codfw] START helmfile.d/admin 'apply'. [production]
14:04 <jiji@deploy2002> helmfile [ml-serve-eqiad] DONE helmfile.d/admin 'apply'. [production]
14:03 <jiji@deploy2002> helmfile [ml-serve-eqiad] START helmfile.d/admin 'apply'. [production]
14:03 <jiji@deploy2002> helmfile [staging-codfw] DONE helmfile.d/admin 'apply'. [production]
14:03 <jiji@deploy2002> helmfile [staging-codfw] START helmfile.d/admin 'apply'. [production]
14:03 <jiji@deploy2002> helmfile [staging-eqiad] DONE helmfile.d/admin 'apply'. [production]
14:03 <jiji@deploy2002> helmfile [staging-eqiad] START helmfile.d/admin 'apply'. [production]
14:03 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/admin 'apply'. [production]
14:02 <jiji@deploy2002> helmfile [codfw] START helmfile.d/admin 'apply'. [production]
14:02 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/admin 'apply'. [production]
14:02 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/admin 'apply'. [production]
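The 14:02–14:06 block above records admin-level helmfile applies rolled out cluster by cluster (eqiad, codfw, the staging, ml, dse and aux clusters). A minimal sketch of the kind of invocation behind one such entry, assuming a stock helmfile checkout on the deploy host; the environment name and path are taken from the log, and any wrapper script actually used on deploy2002 is not shown here:

  # Illustrative only: apply the admin helmfile against one cluster/environment.
  # -f points at the helmfile directory, -e selects the target environment.
  helmfile -f helmfile.d/admin -e eqiad apply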
13:56 <cgoubert@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) pool for host wikikube-worker[2136-2139,2141-2155].codfw.wmnet [production]
13:55 <cgoubert@cumin1002> START - Cookbook sre.k8s.pool-depool-node pool for host wikikube-worker[2136-2139,2141-2155].codfw.wmnet [production]
13:53 <claime> homer 'lsw1-d4-codfw*' commit 'T377028' [production]
13:52 <claime> homer 'lsw1-b4-codfw*' commit 'T377028' [production]
13:52 <claime> homer 'lsw1-d2-codfw*' commit 'T377028' [production]
13:51 <claime> homer 'lsw1-c2-codfw*' commit 'T377028' [production]
13:50 <claime> homer 'lsw1-d7-codfw*' commit 'T377028' [production]
13:50 <claime> homer 'lsw1-c4-codfw*' commit 'T377028' [production]
13:49 <claime> homer 'lsw1-d5-codfw*' commit 'T377028' [production]
13:48 <claime> homer 'lsw1-b7-codfw*' commit 'T377028' [production]
13:47 <claime> homer 'lsw1-c7-codfw*' commit 'T377028' [production]
13:46 <claime> homer 'lsw1-d6-codfw*' commit 'T377028' [production]
13:45 <claime> homer 'lsw1-b2-codfw*' commit 'T377028' [production]
13:44 <claime> homer 'lsw1-d1-codfw*' commit 'T377028' [production]
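The homer entries from 13:44 to 13:53 above push the same T377028 commit to a series of codfw leaf switches, one device pattern at a time. A hedged sketch of how such a sequence could be scripted, using only the command form already visible in the log; the switch list is copied from the entries above, and whether the operator looped or ran each command by hand is not recorded:

  # Illustrative only: repeat the commit shown in the log for each switch pattern.
  for sw in lsw1-d1 lsw1-b2 lsw1-d6 lsw1-c7 lsw1-b7 lsw1-d5 lsw1-c4 lsw1-d7 lsw1-c2 lsw1-d2 lsw1-b4 lsw1-d4; do
    homer "${sw}-codfw*" commit 'T377028'
  done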
13:41 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2151.codfw.wmnet with OS bookworm [production]
13:38 <effie> putting kafka-main1006.eqiad.wmnet in production [production]
13:38 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2152.codfw.wmnet with OS bookworm [production]
13:36 <jiji@cumin1002> END (PASS) - Cookbook sre.kafka.roll-restart-reboot-brokers (exit_code=0) rolling restart_daemons on A:kafka-main-eqiad [production]
13:33 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2154.codfw.wmnet with OS bookworm [production]
13:31 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2155.codfw.wmnet with OS bookworm [production]
13:29 <brouberol@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-analytics-test: apply [production]
13:28 <btullis@cumin1002> START - Cookbook sre.hadoop.roll-restart-workers restart workers for Hadoop analytics cluster: Roll restart of jvm daemons for openjdk upgrade. [production]
13:28 <brouberol@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-analytics-test: apply [production]
13:26 <jiji@cumin1002> START - Cookbook sre.kafka.roll-restart-reboot-brokers rolling restart_daemons on A:kafka-main-eqiad [production]
13:26 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2153.codfw.wmnet with OS bookworm [production]
13:23 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host wikikube-worker2150.codfw.wmnet with OS bookworm [production]
13:21 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2151.codfw.wmnet with reason: host reimage [production]
13:17 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cp7007.magru.wmnet with OS bullseye [production]
13:17 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2152.codfw.wmnet with reason: host reimage [production]
13:14 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2154.codfw.wmnet with reason: host reimage [production]
13:11 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2155.codfw.wmnet with reason: host reimage [production]
13:07 <cgoubert@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2153.codfw.wmnet with reason: host reimage [production]