2023-09-27
13:17 <aqu@deploy2002> Started deploy [analytics/refinery@223be0f]: Regular analytics weekly train [analytics/refinery@223be0fb] [production]
13:17 <eevans@cumin1001> START - Cookbook sre.hosts.reimage for host restbase2017.codfw.wmnet with OS bullseye [production]
13:12 <aqu> Deployment weekly train of analytics-refinery (+new source version) [production]
12:18 <btullis@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on 6 hosts with reason: Still running on 9 mirrormaker processes from main-eqiad to jumbo [production]
12:18 <btullis@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on 6 hosts with reason: Still running on 9 mirrormaker processes from main-eqiad to jumbo [production]
11:26 <arnaudb@cumin1001> dbctl commit (dc=all): 'Depooling db2117 (T343198)', diff saved to https://phabricator.wikimedia.org/P52688 and previous config saved to /var/cache/conftool/dbconfig/20230927-112640-arnaudb.json [production]
11:26 <arnaudb@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2117.codfw.wmnet with reason: Maintenance [production]
11:26 <arnaudb@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2117.codfw.wmnet with reason: Maintenance [production]
11:23 <arnaudb@cumin1001> dbctl commit (dc=all): 'Depooling db2182 (T343198)', diff saved to https://phabricator.wikimedia.org/P52687 and previous config saved to /var/cache/conftool/dbconfig/20230927-112342-arnaudb.json [production]
11:23 <arnaudb@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2182.codfw.wmnet with reason: Maintenance [production]
11:23 <arnaudb@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2182.codfw.wmnet with reason: Maintenance [production]
11:23 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2169:3317 (T343198)', diff saved to https://phabricator.wikimedia.org/P52686 and previous config saved to /var/cache/conftool/dbconfig/20230927-112320-arnaudb.json [production]
11:08 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2169:3317', diff saved to https://phabricator.wikimedia.org/P52685 and previous config saved to /var/cache/conftool/dbconfig/20230927-110813-arnaudb.json [production]
10:53 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2169:3317', diff saved to https://phabricator.wikimedia.org/P52684 and previous config saved to /var/cache/conftool/dbconfig/20230927-105306-arnaudb.json [production]
10:46 <cgoubert@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-web: apply [production]
10:46 <cgoubert@deploy2002> helmfile [codfw] START helmfile.d/services/mw-web: apply [production]
10:45 <cgoubert@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-web: apply [production]
10:45 <cgoubert@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-web: apply [production]
10:40 <isaranto@deploy2002> helmfile [ml-serve-eqiad] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
10:39 <isaranto@deploy2002> helmfile [ml-serve-codfw] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
10:39 <isaranto@deploy2002> helmfile [ml-staging-codfw] 'sync' command on namespace 'ores-legacy' for release 'main' . [production]
10:38 <arnaudb@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2169:3317 (T343198)', diff saved to https://phabricator.wikimedia.org/P52683 and previous config saved to /var/cache/conftool/dbconfig/20230927-103800-arnaudb.json [production]
10:27 <cgoubert@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-web: apply [production]
10:27 <cgoubert@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-web: apply [production]
10:27 <cgoubert@deploy2002> helmfile [codfw] DONE helmfile.d/services/mw-web: apply [production]
10:27 <cgoubert@deploy2002> helmfile [codfw] START helmfile.d/services/mw-web: apply [production]
09:48 <jayme@cumin1001> conftool action : set/pooled=no; selector: name=kubernetes1013.* [production]
09:43 <claime> Bumping mw-on-k8s traffic to 8% - T346422 [production]
09:36 <jayme> cordoning kubernetes1013 for debug purposes [production]
09:33 <taavi> update CR firewall policy, gerrit 961336 [production]
09:15 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on db2109.codfw.wmnet with reason: Host crashed [production]
09:14 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on db2109.codfw.wmnet with reason: Host crashed [production]
09:10 <gmodena@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-page-content-change-enrich: apply [production]
09:10 <gmodena@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-page-content-change-enrich: apply [production]
09:08 <gmodena@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-page-content-change-enrich: apply [production]
09:08 <gmodena@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-page-content-change-enrich: apply [production]
09:05 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on 15 hosts with reason: Kafka mirror issues on jumbo [production]
09:05 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on 15 hosts with reason: Kafka mirror issues on jumbo [production]
08:55 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7 days, 0:00:00 on db2109.codfw.wmnet with reason: Host crashed [production]
08:54 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 7 days, 0:00:00 on db2109.codfw.wmnet with reason: Host crashed [production]
08:44 <jiji@deploy2002> helmfile [codfw] DONE helmfile.d/services/mathoid: apply [production]
08:44 <jiji@deploy2002> helmfile [codfw] START helmfile.d/services/mathoid: apply [production]
08:44 <jiji@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mathoid: apply [production]
08:44 <jiji@deploy2002> helmfile [eqiad] START helmfile.d/services/mathoid: apply [production]
08:28 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 15 hosts with reason: Kafka mirror issues on jumbo [production]
08:28 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 1:00:00 on 15 hosts with reason: Kafka mirror issues on jumbo [production]
08:21 <vgutierrez> update HAProxy to version 2.7.10 in cp4051 - T317799 [production]
08:10 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on 15 hosts with reason: Kafka mirror issues on jumbo [production]
08:10 <elukey@cumin1001> START - Cookbook sre.hosts.downtime for 0:30:00 on 15 hosts with reason: Kafka mirror issues on jumbo [production]
07:39 <Emperor> repool ms-fe2009 [production]