2025-01-31
ยง
|
12:26 <jayme@cumin1002> END (FAIL) - Cookbook sre.k8s.wipe-cluster (exit_code=99) Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
12:16 <moritzm> rebalance codfw/D following OS updates T382508 [production]
12:10 <jayme@cumin1002> START - Cookbook sre.k8s.wipe-cluster Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
12:08 <jayme@cumin1002> END (FAIL) - Cookbook sre.k8s.wipe-cluster (exit_code=99) Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
12:04 <jayme@cumin1002> START - Cookbook sre.k8s.wipe-cluster Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
12:03 <jayme@cumin1002> END (FAIL) - Cookbook sre.k8s.wipe-cluster (exit_code=99) Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
11:59 <jayme@cumin1002> START - Cookbook sre.k8s.wipe-cluster Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
11:40 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1189.eqiad.wmnet with reason: Index rebuild [production]
11:38 <root@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db1189.eqiad.wmnet [production]
11:37 <jayme@cumin1002> END (FAIL) - Cookbook sre.k8s.wipe-cluster (exit_code=99) Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
11:36 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2 days, 0:00:00 on dbstore1008.eqiad.wmnet with reason: maintenance [production]
11:33 <marostegui> Upgrade mariadb in dbstore1008 and rebuild tables on s1, s5 and s7 T384818 [production]
11:33 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1169 (T384592)', diff saved to https://phabricator.wikimedia.org/P72908 and previous config saved to /var/cache/conftool/dbconfig/20250131-113321-marostegui.json [production]
11:33 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7:00:00 on db1169.eqiad.wmnet with reason: Maintenance [production]
11:33 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on dbstore1008.eqiad.wmnet with reason: maintenance [production]
11:33 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1163 (T384592)', diff saved to https://phabricator.wikimedia.org/P72907 and previous config saved to /var/cache/conftool/dbconfig/20250131-113300-marostegui.json [production]
11:31 <root@cumin1002> START - Cookbook sre.mysql.upgrade for db1189.eqiad.wmnet [production]
11:30 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1189 T385051', diff saved to https://phabricator.wikimedia.org/P72906 and previous config saved to /var/cache/conftool/dbconfig/20250131-113011-marostegui.json [production]
11:29 <marostegui@cumin1002> dbctl commit (dc=all): 'Promote db1223 to s3 primary T385051', diff saved to https://phabricator.wikimedia.org/P72905 and previous config saved to /var/cache/conftool/dbconfig/20250131-112920-root.json [production]
11:29 <marostegui> Starting s3 eqiad failover from db1189 to db1223 - T385051 [production]
11:27 <jayme@cumin1002> START - Cookbook sre.k8s.wipe-cluster Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
11:26 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1:00:00 on 24 hosts with reason: Primary switchover s3 T385051 [production]
11:26 <marostegui@cumin1002> dbctl commit (dc=all): 'Set db1223 with weight 0 T385051', diff saved to https://phabricator.wikimedia.org/P72904 and previous config saved to /var/cache/conftool/dbconfig/20250131-112614-root.json [production]
11:17 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1163', diff saved to https://phabricator.wikimedia.org/P72903 and previous config saved to /var/cache/conftool/dbconfig/20250131-111753-marostegui.json [production]
11:08 <jayme@cumin1002> END (FAIL) - Cookbook sre.k8s.wipe-cluster (exit_code=99) Wipe the K8s cluster staging-codfw: Kubernetes upgrade [production]
11:02 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1163', diff saved to https://phabricator.wikimedia.org/P72901 and previous config saved to /var/cache/conftool/dbconfig/20250131-110246-marostegui.json [production]
11:02 <kevinbazira@deploy2002> helmfile [ml-staging-codfw] Ran 'sync' command on namespace 'article-models' for release 'main'. [production]
10:53 <akosiaris@deploy2002> helmfile [staging] DONE helmfile.d/services/mw-api-int: apply [production]
10:53 <akosiaris@deploy2002> helmfile [staging] START helmfile.d/services/mw-api-int: apply [production]
10:47 <marostegui@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db1163 (T384592)', diff saved to https://phabricator.wikimedia.org/P72900 and previous config saved to /var/cache/conftool/dbconfig/20250131-104739-marostegui.json [production]
10:16 <jynus@cumin1002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts db2139.codfw.wmnet [production]
10:16 <jynus@cumin1002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
10:16 <jynus@cumin1002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db2139.codfw.wmnet decommissioned, removing all IPs except the asset tag one - jynus@cumin1002" [production]
10:15 <jynus@cumin1002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: db2139.codfw.wmnet decommissioned, removing all IPs except the asset tag one - jynus@cumin1002" [production]
10:14 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db2238.codfw.wmnet with reason: Index rebuild [production]
10:13 <root@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db2238.codfw.wmnet [production]
10:13 <root@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1246.eqiad.wmnet with reason: Index rebuild [production]
10:12 <root@cumin1002> END (PASS) - Cookbook sre.mysql.upgrade (exit_code=0) for db1246.eqiad.wmnet [production]
10:08 <root@cumin1002> START - Cookbook sre.mysql.upgrade for db2238.codfw.wmnet [production]
10:08 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db2238', diff saved to https://phabricator.wikimedia.org/P72899 and previous config saved to /var/cache/conftool/dbconfig/20250131-100806-marostegui.json [production]
10:06 <root@cumin1002> START - Cookbook sre.mysql.upgrade for db1246.eqiad.wmnet [production]
10:06 <marostegui@cumin1002> dbctl commit (dc=all): 'Depool db1246', diff saved to https://phabricator.wikimedia.org/P72898 and previous config saved to /var/cache/conftool/dbconfig/20250131-100650-marostegui.json [production]
10:03 <jynus@cumin1002> START - Cookbook sre.dns.netbox [production]
09:41 <marostegui@cumin1002> dbctl commit (dc=all): 'Depooling db1163 (T384592)', diff saved to https://phabricator.wikimedia.org/P72897 and previous config saved to /var/cache/conftool/dbconfig/20250131-094152-marostegui.json [production]
09:41 <marostegui@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 7:00:00 on db1163.eqiad.wmnet with reason: Maintenance [production]
09:40 <jynus@cumin1002> START - Cookbook sre.hosts.decommission for hosts db2139.codfw.wmnet [production]
09:37 <slyngshede@dns1004> END - running authdns-update [production]
09:35 <slyngshede@dns1004> START - running authdns-update [production]
09:17 <brouberol@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-test-k8s: apply [production]
09:16 <brouberol@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-test-k8s: apply [production]