2026-05-07
14:35 <ebysans@deploy1003> helmfile [eqiad] DONE helmfile.d/services/editor-analytics: apply [production]
14:35 <ebysans@deploy1003> helmfile [eqiad] START helmfile.d/services/editor-analytics: apply [production]
14:33 <akhatun@deploy1003> Finished deploy [analytics/refinery@4734c67] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@4734c67c] (duration: 01m 54s) [production]
14:32 <slyngshede@cumin1003> conftool action : set/pooled=yes; selector: cluster=dnsbox,dc=ulsfo [reason: ulsfo switch refresh T408892] [production]
14:32 <slyngshede@dns1004> END - running authdns-update [production]
14:32 <jelto@deploy1003> helmfile [aux-k8s-codfw] DONE helmfile.d/services/miscweb: apply [production]
14:31 <akhatun@deploy1003> Started deploy [analytics/refinery@4734c67] (hadoop-test): Regular analytics weekly train TEST [analytics/refinery@4734c67c] [production]
14:31 <jelto@deploy1003> helmfile [aux-k8s-codfw] START helmfile.d/services/miscweb: apply [production]
14:31 <ebysans@deploy1003> helmfile [staging] DONE helmfile.d/services/editor-analytics: apply [production]
14:30 <ebysans@deploy1003> helmfile [staging] START helmfile.d/services/editor-analytics: apply [production]
14:30 <slyngshede@dns1004> START - running authdns-update [production]
14:30 <ebysans@deploy1003> helmfile [codfw] DONE helmfile.d/services/edit-analytics: apply [production]
14:30 <ebysans@deploy1003> helmfile [codfw] START helmfile.d/services/edit-analytics: apply [production]
14:30 <akhatun> Deploying Refinery at 4734c67 for weekly deployment train [production]
14:30 <jmm@dns1004> END - running authdns-update [production]
14:29 <ebysans@deploy1003> helmfile [eqiad] DONE helmfile.d/services/edit-analytics: apply [production]
14:28 <ebysans@deploy1003> helmfile [eqiad] START helmfile.d/services/edit-analytics: apply [production]
14:28 <jmm@dns1004> START - running authdns-update [production]
14:28 <slyngshede@cumin1003> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
14:28 <slyngshede@cumin1003> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: updating DNS snippets - slyngshede@cumin1003" [production]
14:28 <slyngshede@cumin1003> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: updating DNS snippets - slyngshede@cumin1003" [production]
14:26 <ebysans@deploy1003> helmfile [staging] DONE helmfile.d/services/edit-analytics: apply [production]
14:26 <ebysans@deploy1003> helmfile [staging] START helmfile.d/services/edit-analytics: apply [production]
14:25 <ebysans@deploy1003> helmfile [codfw] DONE helmfile.d/services/device-analytics: apply [production]
14:25 <ebysans@deploy1003> helmfile [codfw] START helmfile.d/services/device-analytics: apply [production]
14:24 <slyngshede@cumin1003> START - Cookbook sre.dns.netbox [production]
14:12 <jasmine@cumin2002> END (PASS) - Cookbook sre.kafka.roll-restart-reboot-brokers (exit_code=0) rolling restart_daemons on A:kafka-main-codfw [production]
14:12 <ebysans@deploy1003> helmfile [eqiad] DONE helmfile.d/services/device-analytics: apply [production]
14:12 <ebysans@deploy1003> helmfile [eqiad] START helmfile.d/services/device-analytics: apply [production]
14:10 <ebysans@deploy1003> helmfile [staging] DONE helmfile.d/services/device-analytics: apply [production]
14:10 <ebysans@deploy1003> helmfile [staging] START helmfile.d/services/device-analytics: apply [production]
13:53 <jasmine@cumin2002> START - Cookbook sre.kafka.roll-restart-reboot-brokers rolling restart_daemons on A:kafka-main-codfw [production]
13:34 <stran@deploy1003> Finished scap sync-world: Backport for [[gerrit:1284553|Enable staggered rollout for IRS on enwiki (T424008)]], [[gerrit:1284569|Fix when user is considered exposed to the feature in the experiment (T424075)]] (duration: 09m 05s) [production]
13:30 <stran@deploy1003> stran: Continuing with deployment [production]
13:27 <stran@deploy1003> stran: Backport for [[gerrit:1284553|Enable staggered rollout for IRS on enwiki (T424008)]], [[gerrit:1284569|Fix when user is considered exposed to the feature in the experiment (T424075)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
13:25 <stran@deploy1003> Started scap sync-world: Backport for [[gerrit:1284553|Enable staggered rollout for IRS on enwiki (T424008)]], [[gerrit:1284569|Fix when user is considered exposed to the feature in the experiment (T424075)]] [production]
13:23 <fceratto@deploy1003> helmfile [aux-k8s-eqiad] 'sync' command on namespace 'zarcillo' for release 'main' . [production]
13:10 <jforrester@deploy1003> Finished scap sync-world: Backport for [[gerrit:1284547|Remove the progress bar]], [[gerrit:1275467|mc: Set server, instead of host and port, for wgWikiLambdaObjectCaches (T423311)]] (duration: 06m 55s) [production]
13:06 <jforrester@deploy1003> rzl, jforrester, hartman: Continuing with deployment [production]
13:05 <jforrester@deploy1003> rzl, jforrester, hartman: Backport for [[gerrit:1284547|Remove the progress bar]], [[gerrit:1275467|mc: Set server, instead of host and port, for wgWikiLambdaObjectCaches (T423311)]] synced to the testservers (see https://wikitech.wikimedia.org/wiki/Mwdebug). Changes can now be verified there. [production]
13:03 <jforrester@deploy1003> Started scap sync-world: Backport for [[gerrit:1284547|Remove the progress bar]], [[gerrit:1275467|mc: Set server, instead of host and port, for wgWikiLambdaObjectCaches (T423311)]] [production]
13:02 <slyngshede@cumin1003> conftool action : set/pooled=yes; selector: name=dns4004.wikimedia.org [reason: ulsfo switch refresh T408892] [production]
12:58 <sukhe@cumin1003> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
12:55 <sukhe@cumin1003> START - Cookbook sre.dns.netbox [production]
12:51 <jforrester@deploy1003> helmfile [codfw] DONE helmfile.d/services/wikifunctions: apply [production]
12:51 <jforrester@deploy1003> helmfile [codfw] START helmfile.d/services/wikifunctions: apply [production]
12:51 <jforrester@deploy1003> helmfile [eqiad] DONE helmfile.d/services/wikifunctions: apply [production]
12:50 <jforrester@deploy1003> helmfile [eqiad] START helmfile.d/services/wikifunctions: apply [production]
12:45 <sukhe@dns1004> FAIL - running authdns-update [production]
12:44 <sukhe@dns1004> START - running authdns-update [production]