2023-08-29
ยง
|
11:19 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1203 (T343718)', diff saved to https://phabricator.wikimedia.org/P51882 and previous config saved to /var/cache/conftool/dbconfig/20230829-111949-ladsgroup.json [production]
11:19 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1203.eqiad.wmnet with reason: Maintenance [production]
11:19 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1203.eqiad.wmnet with reason: Maintenance [production]
11:19 <jbond@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - jbond@cumin1001" [production]
11:19 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1193 (T343718)', diff saved to https://phabricator.wikimedia.org/P51881 and previous config saved to /var/cache/conftool/dbconfig/20230829-111927-ladsgroup.json [production]
11:16 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1010.eqiad.wmnet [production]
11:13 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1010.eqiad.wmnet [production]
11:13 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.drain-node (exit_code=99) for draining ganeti node ganeti1010.eqiad.wmnet [production]
11:12 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2165', diff saved to https://phabricator.wikimedia.org/P51880 and previous config saved to /var/cache/conftool/dbconfig/20230829-111252-ladsgroup.json [production]
11:08 <moritzm> installing nftables bugfix updates from Bullseye point release [production]
11:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1193', diff saved to https://phabricator.wikimedia.org/P51879 and previous config saved to /var/cache/conftool/dbconfig/20230829-110421-ladsgroup.json [production]
11:02 <jbond@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on puppetserver1002.eqiad.wmnet with reason: host reimage [production]
10:59 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1010.eqiad.wmnet [production]
10:59 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti1009.eqiad.wmnet [production]
10:59 <jbond@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on puppetserver1002.eqiad.wmnet with reason: host reimage [production]
10:58 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1009.eqiad.wmnet [production]
10:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2165 (T343718)', diff saved to https://phabricator.wikimedia.org/P51878 and previous config saved to /var/cache/conftool/dbconfig/20230829-105746-ladsgroup.json [production]
10:56 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.tls (exit_code=0) for network device cr3-ulsfo [production]
10:53 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1009.eqiad.wmnet [production]
10:51 <joal@deploy1002> Finished deploy [airflow-dags/analytics@90f280e]: Regular deploy of Analytics airflow dags [airflow-dags/analytics@90f280ec] (duration: 00m 14s) [production]
10:51 <ayounsi@cumin1001> START - Cookbook sre.network.tls for network device cr3-ulsfo [production]
10:51 <joal@deploy1002> Started deploy [airflow-dags/analytics@90f280e]: Regular deploy of Analytics airflow dags [airflow-dags/analytics@90f280ec] [production]
10:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1193', diff saved to https://phabricator.wikimedia.org/P51877 and previous config saved to /var/cache/conftool/dbconfig/20230829-104915-ladsgroup.json [production]
10:47 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti1009.eqiad.wmnet [production]
10:42 <jbond@cumin1001> START - Cookbook sre.hosts.reimage for host puppetserver1002.eqiad.wmnet with OS bookworm [production]
10:41 <cgoubert@deploy1002> Finished scap: Removing mw-on-k8s tls-proxy CPU limits - T344814 (duration: 02m 27s) [production]
10:39 <cgoubert@deploy1002> Started scap: Removing mw-on-k8s tls-proxy CPU limits - T344814 [production]
10:39 <jmm@cumin2002> END (PASS) - Cookbook sre.ganeti.drain-node (exit_code=0) for draining ganeti node ganeti2020.codfw.wmnet [production]
10:39 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db2165 (T343718)', diff saved to https://phabricator.wikimedia.org/P51876 and previous config saved to /var/cache/conftool/dbconfig/20230829-103901-ladsgroup.json [production]
10:38 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti2020.codfw.wmnet [production]
10:38 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db2165.codfw.wmnet with reason: Maintenance [production]
10:38 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db2165.codfw.wmnet with reason: Maintenance [production]
10:38 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2164 (T343718)', diff saved to https://phabricator.wikimedia.org/P51875 and previous config saved to /var/cache/conftool/dbconfig/20230829-103840-ladsgroup.json [production]
10:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1193 (T343718)', diff saved to https://phabricator.wikimedia.org/P51874 and previous config saved to /var/cache/conftool/dbconfig/20230829-103409-ladsgroup.json [production]
10:32 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti2020.codfw.wmnet [production]
10:30 <claime> Running puppet on deploy servers to bump envoy image version - T344814 [production]
10:27 <jynus> reboot db1204 [production]
10:27 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1122.eqiad.wmnet with OS bullseye [production]
10:25 <stevemunene@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host an-worker1123.eqiad.wmnet with OS bullseye [production]
10:24 <jelto@deploy1002> helmfile [eqiad] DONE helmfile.d/services/miscweb: apply [production]
10:23 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db2164', diff saved to https://phabricator.wikimedia.org/P51873 and previous config saved to /var/cache/conftool/dbconfig/20230829-102333-ladsgroup.json [production]
10:22 <jelto@deploy1002> helmfile [eqiad] START helmfile.d/services/miscweb: apply [production]
10:22 <jayme> Successfully published image docker-registry.discovery.wmnet/envoy:1.23.10-2-s2 [production]
10:21 <jelto@deploy1002> helmfile [codfw] DONE helmfile.d/services/miscweb: apply [production]
10:19 <jelto@deploy1002> helmfile [codfw] START helmfile.d/services/miscweb: apply [production]
10:17 <jmm@cumin2002> START - Cookbook sre.ganeti.drain-node for draining ganeti node ganeti2020.codfw.wmnet [production]
10:17 <jelto@deploy1002> helmfile [staging] DONE helmfile.d/services/miscweb: apply [production]
10:16 <jelto@deploy1002> helmfile [staging] START helmfile.d/services/miscweb: apply [production]
10:15 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1193 (T343718)', diff saved to https://phabricator.wikimedia.org/P51872 and previous config saved to /var/cache/conftool/dbconfig/20230829-101536-ladsgroup.json [production]
10:15 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1193.eqiad.wmnet with reason: Maintenance [production]