2023-03-30
14:57 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:56 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:53 <akosiaris@deploy2002> helmfile [eqiad] DONE helmfile.d/services/thumbor: sync [production]
14:52 <akosiaris@deploy2002> helmfile [eqiad] START helmfile.d/services/thumbor: sync [production]
14:49 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:49 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:47 <gmodena@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/mediawiki-page-content-change-enrichment: apply [production]
14:47 <gmodena@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/mediawiki-page-content-change-enrichment: apply [production]
14:46 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host lvs4010.ulsfo.wmnet with OS bullseye [production]
14:43 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:43 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:41 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:40 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:39 <gmodena@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/mediawiki-page-content-change-enrichment: apply [production]
14:39 <gmodena@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/mediawiki-page-content-change-enrichment: apply [production]
14:39 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:39 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:36 <gmodena@deploy1002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/mediawiki-page-content-change-enrichment: apply [production]
14:36 <gmodena@deploy1002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/mediawiki-page-content-change-enrichment: apply [production]
14:35 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:35 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:31 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on lvs4010.ulsfo.wmnet with reason: host reimage [production]
14:27 <sukhe@cumin2002> START - Cookbook sre.hosts.downtime for 2:00:00 on lvs4010.ulsfo.wmnet with reason: host reimage [production]
14:23 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:23 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:22 <bking@deploy2002> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:22 <bking@deploy2002> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/rdf-streaming-updater: apply [production]
14:22 <akosiaris@deploy2002> helmfile [eqiad] DONE helmfile.d/services/thumbor: sync [production]
14:17 <elukey@cumin1001> END (PASS) - Cookbook sre.kafka.roll-restart-brokers (exit_code=0) for Kafka A:kafka-main-codfw cluster: Roll restart of jvm daemons. [production]
14:12 <akosiaris@deploy2002> helmfile [eqiad] START helmfile.d/services/thumbor: sync [production]
14:11 <sukhe@cumin2002> START - Cookbook sre.hosts.reimage for host lvs4010.ulsfo.wmnet with OS bullseye [production]
14:08 <akosiaris@deploy2002> helmfile [codfw] DONE helmfile.d/services/thumbor: sync [production]
14:06 <akosiaris@deploy2002> helmfile [codfw] START helmfile.d/services/thumbor: sync [production]
12:36 <elukey@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-main-codfw cluster: Roll restart of jvm daemons. [production]
12:32 <joal@deploy2002> Finished deploy [airflow-dags/analytics@a6500cf]: Regular analytics weekly train (2nd) HOTFIX [airflow-dags/analytics@a6500cf] (duration: 00m 11s) [production]
12:31 <joal@deploy2002> Started deploy [airflow-dags/analytics@a6500cf]: Regular analytics weekly train (2nd) HOTFIX [airflow-dags/analytics@a6500cf] [production]
12:27 <btullis@deploy2002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
12:26 <btullis@deploy2002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
12:17 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ms-be1074.mgmt.eqiad.wmnet with reboot policy FORCED [production]
12:17 <volans@cumin1001> START - Cookbook sre.hosts.provision for host ms-be1074.mgmt.eqiad.wmnet with reboot policy FORCED [production]
12:17 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host ms-be1074.mgmt.eqiad.wmnet with reboot policy FORCED [production]
12:15 <ladsgroup@deploy2002> Finished scap: Backport for [[gerrit:904512|Set externallinks to WRITE BOTH everywhere (T321662)]] (duration: 14m 58s) [production]
12:08 <btullis@deploy2002> helmfile [staging] DONE helmfile.d/services/datahub: sync on main [production]
12:02 <ladsgroup@deploy2002> ladsgroup: Backport for [[gerrit:904512|Set externallinks to WRITE BOTH everywhere (T321662)]] synced to the testservers: mwdebug2001.codfw.wmnet, mwdebug1002.eqiad.wmnet, mwdebug2002.codfw.wmnet, mwdebug1001.eqiad.wmnet [production]
12:00 <ladsgroup@deploy2002> Started scap: Backport for [[gerrit:904512|Set externallinks to WRITE BOTH everywhere (T321662)]] [production]
11:57 <btullis@deploy2002> helmfile [staging] START helmfile.d/services/datahub: apply on main [production]
11:50 <jclark@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:50 <jclark@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: update dns an-worker1149-56 - jclark@cumin1001" [production]
11:49 <jclark@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: update dns an-worker1149-56 - jclark@cumin1001" [production]
11:47 <jclark@cumin1001> START - Cookbook sre.dns.netbox [production]