2023-11-08
14:06 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-role (exit_code=0) for role: kafka::test::broker [production]
14:04 <jiji@deploy2002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
14:04 <jiji@deploy2002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
14:04 <jiji@deploy2002> helmfile [staging] DONE helmfile.d/services/ipoid: apply [production]
14:03 <jiji@deploy2002> helmfile [staging] START helmfile.d/services/ipoid: apply [production]
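The ipoid entries above record helmfile-driven service deployments from the deployment host. A minimal sketch of the kind of invocation behind these START/DONE pairs, assuming the standard helmfile -e environment flag and a chart checkout under /srv/deployment-charts (the path prefix is an assumption, only helmfile.d/services/ipoid appears in the log):

    cd /srv/deployment-charts/helmfile.d/services/ipoid   # path prefix is an assumption
    helmfile -e staging apply                             # produces the START/DONE log pair for the staging environment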
13:59 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on 15 hosts with reason: not pooled, reimaging in progress [production]
13:59 <sukhe@cumin2002> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on 15 hosts with reason: not pooled, reimaging in progress [production]
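The downtime entries above come from an SRE cookbook run on a cumin host. A minimal sketch of an equivalent invocation, assuming the cookbook accepts --days and --reason flags and a trailing Cumin host query (the flag names and the query alias below are assumptions, not taken from the log):

    sudo cookbook sre.hosts.downtime \
        --days 7 \
        --reason "not pooled, reimaging in progress" \
        'A:reimage-candidates'   # hypothetical Cumin alias standing in for the 15 hosts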
13:55 <jmm@cumin2002> START - Cookbook sre.puppet.migrate-role for role: kafka::test::broker [production]
13:55 <btullis@cumin1001> START - Cookbook sre.hadoop.roll-restart-workers restart workers for Hadoop analytics cluster: Roll restart of jvm daemons for openjdk upgrade. [production]
13:49 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.migrate-role (exit_code=0) for role: analytics_cluster::hadoop::worker [production]
13:34 <moritzm> installing libxpm security updates [production]
13:19 <jbond@cumin1001> END (PASS) - Cookbook sre.puppet.migrate-role (exit_code=0) for role: openldap::replica [production]
13:14 <taavi@cumin1001> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
13:14 <taavi@cumin1001> END (FAIL) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=99) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: free up nfs-maps IPs T350259 - taavi@cumin1001" [production]
13:12 <taavi@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: free up nfs-maps IPs T350259 - taavi@cumin1001" [production]
13:10 <jnuche@deploy2002> rebuilt and synchronized wikiversions files: group0 wikis to 1.42.0-wmf.4 refs T350080 [production]
13:10 <taavi@cumin1001> START - Cookbook sre.dns.netbox [production]
13:08 <stevemunene@cumin1001> END (FAIL) - Cookbook sre.druid.roll-restart-workers (exit_code=99) for Druid public cluster: Roll restart of Druid jvm daemons. [production]
13:04 <jbond@cumin1001> START - Cookbook sre.puppet.migrate-role for role: openldap::replica [production]
11:58 <jnuche@deploy2002> rebuilt and synchronized wikiversions files: group1 wikis to 1.42.0-wmf.4 refs T350080 [production]
11:49 <ladsgroup@deploy2002> Finished scap: Backport for [[gerrit:972709|Only take one field in fetchFieldValues (T350726)]] (duration: 07m 00s) [production]
11:43 <ladsgroup@deploy2002> ladsgroup: Continuing with sync [production]
11:43 <ladsgroup@deploy2002> ladsgroup: Backport for [[gerrit:972709|Only take one field in fetchFieldValues (T350726)]] synced to the testservers (https://wikitech.wikimedia.org/wiki/Mwdebug) [production]
11:42 <ladsgroup@deploy2002> Started scap: Backport for [[gerrit:972709|Only take one field in fetchFieldValues (T350726)]] [production]
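The scap entries above record a backport deployment from the deployment host. A minimal sketch, assuming scap's backport subcommand accepts the Gerrit change number shown in the log:

    scap backport 972709   # change number from the log entry; syncs to the testservers first, then continues the full sync after confirmation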
11:37 <hnowlan@deploy2002> helmfile [codfw] DONE helmfile.d/services/wikifeeds: apply [production]
11:37 <hnowlan@deploy2002> helmfile [codfw] START helmfile.d/services/wikifeeds: apply [production]
11:37 <hnowlan@deploy2002> helmfile [eqiad] DONE helmfile.d/services/wikifeeds: apply [production]
11:33 <hnowlan@deploy2002> helmfile [eqiad] START helmfile.d/services/wikifeeds: apply [production]
11:32 <effie> stopping puppet from mc2038 [production]
11:15 <jmm@cumin2002> END (FAIL) - Cookbook sre.puppet.migrate-role (exit_code=99) for role: analytics_cluster::hadoop::worker [production]
11:12 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-wikifunctions: apply [production]
11:12 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-wikifunctions: apply [production]
11:12 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-wikifunctions: apply [production]
11:11 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/mw-wikifunctions: apply [production]
11:11 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-misc: apply [production]
11:11 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-misc: apply [production]
11:11 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-misc: apply [production]
11:11 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/mw-misc: apply [production]
11:11 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-api-ext: apply [production]
11:10 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-api-ext: apply [production]
11:10 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-api-ext: apply [production]
11:09 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/mw-api-ext: apply [production]
11:09 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-api-int: apply [production]
11:09 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-api-int: apply [production]
11:09 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-api-int: apply [production]
11:08 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/mw-api-int: apply [production]
11:08 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-web: apply [production]
11:07 <vgutierrez@cumin1001> END (PASS) - Cookbook sre.cdn.roll-upgrade-haproxy (exit_code=0) rolling upgrade of HAProxy on A:cp-esams and A:cp [production]
11:07 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-web: apply [production]
11:07 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-web: apply [production]