2022-02-09
17:30 <joal@deploy1002> Started deploy [analytics/refinery@55b229b] (hadoop-test): Regular analytics weekly train HADOOP-TEST [analytics/refinery@55b229b] [production]
17:30 <joal@deploy1002> Finished deploy [analytics/refinery@55b229b] (thin): Regular analytics weekly train THIN [analytics/refinery@55b229b] (duration: 00m 07s) [production]
17:30 <joal@deploy1002> Started deploy [analytics/refinery@55b229b] (thin): Regular analytics weekly train THIN [analytics/refinery@55b229b] [production]
17:27 <joal@deploy1002> Finished deploy [analytics/refinery@55b229b]: Regular analytics weekly train [analytics/refinery@55b229b] (duration: 22m 00s) [production]
17:07 <jayme> ran sudo rm /var/run/confd-template/.k8s-ingress-staging*.err on puppetmaster1001 - T300740 [production]
17:05 <joal@deploy1002> Started deploy [analytics/refinery@55b229b]: Regular analytics weekly train [analytics/refinery@55b229b] [production]
16:31 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1105:3311 (T298554)', diff saved to https://phabricator.wikimedia.org/P20422 and previous config saved to /var/cache/conftool/dbconfig/20220209-163102-ladsgroup.json [production]
16:31 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1105.eqiad.wmnet with reason: Maintenance [production]
16:30 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1105.eqiad.wmnet with reason: Maintenance [production]
16:21 <jayme@cumin1001> conftool action : set/pooled=true; selector: dnsdisc=k8s-ingress-staging,name=eqiad [production]
16:17 <otto@deploy1002> Finished deploy [airflow-dags/analytics_test@ddd10b4]: (no justification provided) (duration: 00m 03s) [production]
16:17 <otto@deploy1002> Started deploy [airflow-dags/analytics_test@ddd10b4]: (no justification provided) [production]
16:16 <otto@deploy1002> Finished deploy [airflow-dags/analytics_test@ddd10b4]: (no justification provided) (duration: 00m 20s) [production]
16:16 <otto@deploy1002> Started deploy [airflow-dags/analytics_test@ddd10b4]: (no justification provided) [production]
15:57 <jayme> ran sudo rm /var/run/confd-template/.k8s-ingress-staging*.err on puppetmaster2001 - T300740 [production]
15:56 <jayme> restarting pybal on lvs1015,lvs2009 - T300740 [production]
15:44 <jbond> change puppet hiera preference site vs site/role gerrit:761339 [production]
15:43 <jayme@cumin1001> conftool action : set/pooled=yes:weight=10; selector: cluster=kubernetes-staging,service=kubesvc [production]
15:30 <jayme> restarting pybal on lvs2010,lvs1020 - T300740 [production]
15:25 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
15:25 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1139.eqiad.wmnet with reason: Maintenance [production]
15:25 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1164 (T298554)', diff saved to https://phabricator.wikimedia.org/P20420 and previous config saved to /var/cache/conftool/dbconfig/20220209-152522-ladsgroup.json [production]
15:10 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1164', diff saved to https://phabricator.wikimedia.org/P20419 and previous config saved to /var/cache/conftool/dbconfig/20220209-151017-ladsgroup.json [production]
15:06 <moritzm> imported jenkins 2.319.3 to thirdparty/ci T301361 [production]
14:55 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1164', diff saved to https://phabricator.wikimedia.org/P20418 and previous config saved to /var/cache/conftool/dbconfig/20220209-145513-ladsgroup.json [production]
14:43 <ema> prometheus: remove atskafka target files - '/srv/prometheus/ops/targets/atskafka_*' T247497 [production]
14:40 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1164 (T298554)', diff saved to https://phabricator.wikimedia.org/P20416 and previous config saved to /var/cache/conftool/dbconfig/20220209-144008-ladsgroup.json [production]
14:36 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2126 (T300510)', diff saved to https://phabricator.wikimedia.org/P20415 and previous config saved to /var/cache/conftool/dbconfig/20220209-143642-ladsgroup.json [production]
14:30 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host db2126.codfw.wmnet with OS bullseye [production]
14:29 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:25 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:25 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
14:25 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
14:22 <reedy@deploy1002> Finished scap: Downgrading symfony/console (v5.4.3 => v5.4.2) T301320 (duration: 01m 31s) [production]
14:20 <reedy@deploy1002> Started scap: Downgrading symfony/console (v5.4.3 => v5.4.2) T301320 [production]
13:56 <ladsgroup@cumin1001> START - Cookbook sre.hosts.reimage for host db2126.codfw.wmnet with OS bullseye [production]
13:55 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2126 (T300510)', diff saved to https://phabricator.wikimedia.org/P20414 and previous config saved to /var/cache/conftool/dbconfig/20220209-135515-ladsgroup.json [production]
13:55 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2126.codfw.wmnet with reason: Maintenance [production]
13:55 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2126.codfw.wmnet with reason: Maintenance [production]
13:54 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2095.codfw.wmnet with reason: Migrate to bullseye (T300510) [production]
13:53 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2095.codfw.wmnet with reason: Migrate to bullseye (T300510) [production]
13:48 <jelto> update scap to 4.3.1 on all hosts - T301307 [production]
13:38 <reedy@deploy1002> Finished scap: Downgrading symfony/console (v5.4.3 => v5.4.2) T301320 (duration: 01m 34s) [production]
13:36 <reedy@deploy1002> Started scap: Downgrading symfony/console (v5.4.3 => v5.4.2) T301320 [production]
13:19 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1164 (T298554)', diff saved to https://phabricator.wikimedia.org/P20412 and previous config saved to /var/cache/conftool/dbconfig/20220209-131938-ladsgroup.json [production]
13:19 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1164.eqiad.wmnet with reason: Maintenance [production]
13:19 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1164.eqiad.wmnet with reason: Maintenance [production]
13:19 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
13:18 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
13:18 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]