2023-03-30
ยง
|
14:23 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:23 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:22 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:22 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:22 <akosiaris@deploy2002> helmfile [eqiad] DONE helmfile.d/services/thumbor: sync [production]
14:17 <elukey@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-brokers (exit_code=0) for Kafka A:kafka-main-codfw cluster: Roll restart of jvm daemons. [production]
14:12 <akosiaris@deploy2002> helmfile [eqiad] START helmfile.d/services/thumbor: sync [production]
14:11 <sukhe@cumin2002> START - Cookbook sre.hosts.reimage for host lvs4010.ulsfo.wmnet with OS bullseye [production]
14:08 <akosiaris@deploy2002> helmfile [codfw] DONE helmfile.d/services/thumbor: sync [production]
14:06 <akosiaris@deploy2002> helmfile [codfw] START helmfile.d/services/thumbor: sync [production]
12:36 <elukey@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-main-codfw cluster: Roll restart of jvm daemons. [production]
12:32 <joal@deploy2002> Finished deploy [airflow-dags/analytics@a6500cf]: Regular analytics weekly train (2nd) HOTFIX [airflow-dags/analytics@a6500cf] (duration: 00m 11s) [production]
12:31 <joal@deploy2002> Started deploy [airflow-dags/analytics@a6500cf]: Regular analytics weekly train (2nd) HOTFIX [airflow-dags/analytics@a6500cf] [production]
12:27 <btullis@deploy2002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
12:26 <btullis@deploy2002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
12:17 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ms-be1074.mgmt.eqiad.wmnet with reboot policy FORCED [production]
12:17 <volans@cumin1001> START - Cookbook sre.hosts.provision for host ms-be1074.mgmt.eqiad.wmnet with reboot policy FORCED [production]
12:17 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ms-be1074.mgmt.eqiad.wmnet with reboot policy FORCED [production]
12:15 <ladsgroup@deploy2002> Finished scap: Backport for [[gerrit:904512|Set externallinks to WRITE BOTH everywhere (T321662)]] (duration: 14m 58s) [production]
12:08 <btullis@deploy2002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
12:02 <ladsgroup@deploy2002> ladsgroup: Backport for [[gerrit:904512|Set externallinks to WRITE BOTH everywhere (T321662)]] synced to the testservers: mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet [production]
12:00 <ladsgroup@deploy2002> Started scap: Backport for [[gerrit:904512|Set externallinks to WRITE BOTH everywhere (T321662)]] [production]
11:57 <btullis@deploy2002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
11:50 <jclark@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:50 <jclark@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: update dns an-worker1149-56 - jclark@cumin1001" [production]
11:49 <jclark@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: update dns an-worker1149-56 - jclark@cumin1001" [production]
11:47 <jclark@cumin1001> START - Cookbook sre.dns.netbox [production]
11:44 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 12:00:00 on db1136.eqiad.wmnet with reason: Maintenance [production]
11:44 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 12:00:00 on db1136.eqiad.wmnet with reason: Maintenance [production]
11:12 <hnowlan@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:12 <hnowlan@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add service records for rest-gateway - hnowlan@cumin1001" [production]
11:11 <hnowlan@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Add service records for rest-gateway - hnowlan@cumin1001" [production]
11:10 <ladsgroup@deploy2002> Finished scap: Backport for [[gerrit:893552|Revert "Revert "mwscript: Switch to use run.php"" (T326800)]] (duration: 07m 59s) [production]
11:08 <hnowlan@cumin1001> START - Cookbook sre.dns.netbox [production]
11:03 <ladsgroup@deploy2002> ladsgroup: Backport for [[gerrit:893552|Revert "Revert "mwscript: Switch to use run.php"" (T326800)]] synced to the testservers: mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet, mwdebug2001.codfw.wmnet [production]
11:03 <claime> Re-enabling puppet for cp-text - T331318 [production]
11:02 <ladsgroup@deploy2002> Started scap: Backport for [[gerrit:893552|Revert "Revert "mwscript: Switch to use run.php"" (T326800)]] [production]
10:58 <volans@cumin1001> START - Cookbook sre.hosts.provision for host ms-be1074.mgmt.eqiad.wmnet with reboot policy FORCED [production]
10:58 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ms-be1075.mgmt.eqiad.wmnet with reboot policy FORCED [production]
10:51 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1136.eqiad.wmnet with reason: Maintenance [production]
10:51 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1136.eqiad.wmnet with reason: Maintenance [production]
10:50 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 100%: Maint over', diff saved to https://phabricator.wikimedia.org/P45994 and previous config saved to /var/cache/conftool/dbconfig/20230330-105011-ladsgroup.json [production]
10:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depool db1136 T333538', diff saved to https://phabricator.wikimedia.org/P45993 and previous config saved to /var/cache/conftool/dbconfig/20230330-104928-ladsgroup.json [production]
10:46 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Promote db1181 to s7 primary T333538', diff saved to https://phabricator.wikimedia.org/P45992 and previous config saved to /var/cache/conftool/dbconfig/20230330-104617-ladsgroup.json [production]
10:45 <Amir1> Starting s7 eqiad failover from db1136 to db1181 - T333538 [production]
10:44 <volans@cumin1001> START - Cookbook sre.hosts.provision for host ms-be1075.mgmt.eqiad.wmnet with reboot policy FORCED [production]
10:35 <elukey@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-brokers (exit_code=0) for Kafka A:kafka-main-eqiad cluster: Roll restart of jvm daemons. [production]
10:35 <ladsgroup@cumin1001> dbctl commit (dc=all): 'db1138 (re)pooling @ 75%: Maint over', diff saved to https://phabricator.wikimedia.org/P45989 and previous config saved to /var/cache/conftool/dbconfig/20230330-103506-ladsgroup.json [production]
10:29 <jclark@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ms-be1075.mgmt.eqiad.wmnet with reboot policy FORCED [production]
10:27 <jclark@cumin1001> START - Cookbook sre.hosts.provision for host ms-be1075.mgmt.eqiad.wmnet with reboot policy FORCED [production]