2026-05-07
15:03 <ebysans@deploy1003> helmfile [staging] START helmfile.d/services/edit-analytics: apply [production]
15:03 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
15:01 <akhatun> Deployed refinery using scap, then deployed onto hdfs [production]
15:01 <akhatun> Deployed refinery using scap, then deployed onto hdfs [analytics]
14:58 <jasmine@cumin2002> END (PASS) - Cookbook sre.kafka.roll-restart-reboot-brokers (exit_code=0) rolling restart_daemons on A:kafka-main-eqiad [production]
14:54 <ebysans@deploy1003> helmfile [eqiad] DONE helmfile.d/services/page-analytics: apply [production]
14:54 <ebysans@deploy1003> helmfile [eqiad] START helmfile.d/services/page-analytics: apply [production]
14:54 <ebysans@deploy1003> helmfile [staging] DONE helmfile.d/services/page-analytics: apply [production]
14:54 <ebysans@deploy1003> helmfile [staging] START helmfile.d/services/page-analytics: apply [production]
14:53 <ebysans@deploy1003> helmfile [codfw] DONE helmfile.d/services/media-analytics: apply [production]
14:53 <ebysans@deploy1003> helmfile [codfw] START helmfile.d/services/media-analytics: apply [production]
14:52 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
14:52 <ebysans@deploy1003> helmfile [eqiad] DONE helmfile.d/services/media-analytics: apply [production]
14:52 <ebysans@deploy1003> helmfile [eqiad] START helmfile.d/services/media-analytics: apply [production]
14:50 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
14:44 <akhatun@deploy1003> Finished deploy [analytics/refinery@4734c67] (thin): Regular analytics weekly train THIN [analytics/refinery@4734c67c] (duration: 02m 01s) [production]
14:43 <ebysans@deploy1003> helmfile [eqiad] DONE helmfile.d/services/geo-analytics: apply [production]
14:43 <ebysans@deploy1003> helmfile [eqiad] START helmfile.d/services/geo-analytics: apply [production]
14:42 <akhatun@deploy1003> Started deploy [analytics/refinery@4734c67] (thin): Regular analytics weekly train THIN [analytics/refinery@4734c67c] [production]
14:40 <akhatun@deploy1003> Finished deploy [analytics/refinery@4734c67]: Regular analytics weekly train [analytics/refinery@4734c67c] (duration: 04m 38s) [production]
14:40 <jasmine@cumin2002> START - Cookbook sre.kafka.roll-restart-reboot-brokers rolling restart_daemons on A:kafka-main-eqiad [production]
14:37 <ebysans@deploy1003> helmfile [staging] DONE helmfile.d/services/geo-analytics: apply [production]
14:36 <ebysans@deploy1003> helmfile [staging] START helmfile.d/services/geo-analytics: apply [production]
14:36 <akhatun@deploy1003> Started deploy [analytics/refinery@4734c67]: Regular analytics weekly train [analytics/refinery@4734c67c] [production]
14:35 <ebysans@deploy1003> helmfile [eqiad] DONE helmfile.d/services/editor-analytics: apply [production]
14:35 <ebysans@deploy1003> helmfile [eqiad] START helmfile.d/services/editor-analytics: apply [production]
14:33 <akhatun@deploy1003> Finished deploy [analytics/refinery@4734c67] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@4734c67c] (duration: 01m 54s) [production]
14:32 <slyngshede@cumin1003> conftool action : set/pooled=yes; selector: cluster=dnsbox,dc=ulsfo [reason: ulsfo switch refresh T408892] [production]
14:32 <slyngshede@dns1004> END - running authdns-update [production]
14:32 <jelto@deploy1003> helmfile [aux-k8s-codfw] DONE helmfile.d/services/miscweb: apply [production]
14:31 <akhatun@deploy1003> Started deploy [analytics/refinery@4734c67] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@4734c67c] [production]
14:31 <jelto@deploy1003> helmfile [aux-k8s-codfw] START helmfile.d/services/miscweb: apply [production]
14:31 <ebysans@deploy1003> helmfile [staging] DONE helmfile.d/services/editor-analytics: apply [production]
14:30 <ebysans@deploy1003> helmfile [staging] START helmfile.d/services/editor-analytics: apply [production]
14:30 <slyngshede@dns1004> START - running authdns-update [production]
14:30 <ebysans@deploy1003> helmfile [codfw] DONE helmfile.d/services/edit-analytics: apply [production]
14:30 <ebysans@deploy1003> helmfile [codfw] START helmfile.d/services/edit-analytics: apply [production]
14:30 <akhatun> Deploying Refinery at 4734c67 for weekly deployment train [production]
14:30 <jmm@dns1004> END - running authdns-update [production]
14:30 <akhatun> Deploying Refinery at 4734c67 for weekly deployment train [analytics]
14:29 <ebysans@deploy1003> helmfile [eqiad] DONE helmfile.d/services/edit-analytics: apply [production]
14:28 <ebysans@deploy1003> helmfile [eqiad] START helmfile.d/services/edit-analytics: apply [production]
14:28 <jmm@dns1004> START - running authdns-update [production]
14:28 <slyngshede@cumin1003> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:28 <slyngshede@cumin1003> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: updating DNS snippets - slyngshede@cumin1003" [production]
14:28 <slyngshede@cumin1003> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: updating DNS snippets - slyngshede@cumin1003" [production]
14:26 <ebysans@deploy1003> helmfile [staging] DONE helmfile.d/services/edit-analytics: apply [production]
14:26 <ebysans@deploy1003> helmfile [staging] START helmfile.d/services/edit-analytics: apply [production]
14:25 <ebysans@deploy1003> helmfile [codfw] DONE helmfile.d/services/device-analytics: apply [production]
14:25 <ebysans@deploy1003> helmfile [codfw] START helmfile.d/services/device-analytics: apply [production]