2025-08-18
17:04 <dancy@deploy1003> Installation of scap version "4.202.0" completed for 2 hosts [production]
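
The two scap lines at 17:02 and 17:04 record scap rolling out a new version of itself to its install targets. A minimal sketch of the operator step on deploy1003, assuming the rollout used scap's install-world subcommand; the exact flags and target selection are not in the log and are left out:

    # Roll out scap 4.202.0 to the configured install targets ("2 host(s)" per the log).
    # The invocation is an assumption based on the log wording; host selection flags omitted.
    scap install-world
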
17:03 <dcaro@cloudcumin1001> START - Cookbook wmcs.toolforge.component.deploy for component jobs-cli [tools]
17:03 <btullis@cumin1003> START - Cookbook sre.hosts.downtime for 2:00:00 on an-backup-datanode1035.eqiad.wmnet with reason: host reimage [production]
17:02 <dcaro@cloudcumin1001> END (FAIL) - Cookbook wmcs.toolforge.component.deploy (exit_code=99) for component jobs-api [tools]
17:02 <dancy@deploy1003> Installing scap version "4.202.0" for 2 host(s) [production]
17:02 <arlolra@deploy1003> helmfile [eqiad] DONE helmfile.d/services/mobileapps: apply [production]
17:01 <btullis@cumin1003> START - Cookbook sre.hosts.downtime for 2:00:00 on an-backup-datanode1034.eqiad.wmnet with reason: host reimage [production]
17:00 <arlolra@deploy1003> helmfile [eqiad] START helmfile.d/services/mobileapps: apply [production]
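
The helmfile START/DONE pairs for mobileapps (staging at 16:42, codfw at 16:51, eqiad at 17:00) and for shellbox-constraints follow the standard Kubernetes service deployment flow on deploy1003: apply to staging first, then to each production datacenter. A minimal sketch, assuming the usual /srv/deployment-charts checkout; the -i (interactive) flag is an assumption:

    cd /srv/deployment-charts/helmfile.d/services/mobileapps
    helmfile -e staging -i apply   # verify in staging first
    helmfile -e codfw -i apply     # then each production DC, matching the log order
    helmfile -e eqiad -i apply
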
16:59 <fceratto@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2169', diff saved to https://phabricator.wikimedia.org/P81452 and previous config saved to /var/cache/conftool/dbconfig/20250818-165945-fceratto.json [production]
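
The dbctl entries for db2169 (downtime and depool at 16:27, staged repools at 16:29, 16:44 and 16:59) are the usual depool / maintain / gradually repool cycle for a core MariaDB replica. A hedged sketch of the underlying commands; the repool percentages and the -p flag usage are assumptions:

    sudo dbctl instance db2169 depool
    sudo dbctl config commit -m "Depooling db2169 (T399249)"
    # ... maintenance on db2169 ...
    sudo dbctl instance db2169 pool -p 25    # repool in steps (percentages are illustrative)
    sudo dbctl config commit -m "Repooling after maintenance db2169 (T399249)"
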
16:57 <stevemunene@cumin1003> START - Cookbook sre.druid.roll-restart-workers for Druid analytics cluster: Roll restart of Druid jvm daemons. [production]
16:56 <stevemunene@cumin1003> END (PASS) - Cookbook sre.druid.roll-restart-workers (exit_code=0) for Druid analytics cluster: Roll restart of Druid jvm daemons. [production]
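
The Druid entries (a run from 16:38 to 16:56, immediately followed by another at 16:57) are spicerack cookbook runs from cumin1003 that restart the Druid JVM daemons one worker at a time. A minimal sketch; the positional cluster argument is an assumption based on the "Druid analytics cluster" wording in the log:

    # Rolling restart of Druid daemons on the analytics cluster (argument form is an assumption).
    sudo cookbook sre.druid.roll-restart-workers analytics
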
16:55 <btullis@cumin1003> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - btullis@cumin1003" [production]
16:54 <raymond-ndibe@cloudcumin1001> END (PASS) - Cookbook wmcs.vps.refresh_puppet_certs (exit_code=0) on tools-harbor-2.tools.eqiad1.wikimedia.cloud (T350687) [tools]
16:53 <raymond-ndibe@cloudcumin1001> START - Cookbook wmcs.vps.refresh_puppet_certs on tools-harbor-2.tools.eqiad1.wikimedia.cloud (T350687) [tools]
16:52 <arlolra@deploy1003> helmfile [codfw] DONE helmfile.d/services/mobileapps: apply [production]
16:51 <arlolra@deploy1003> helmfile [codfw] START helmfile.d/services/mobileapps: apply [production]
16:50 <dcaro@cloudcumin1001> START - Cookbook wmcs.toolforge.component.deploy for component jobs-api [tools]
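
The wmcs.toolforge.component.deploy entries show a toolsbeta deploy of jobs-api passing (16:33 to 16:47), the same component failing on tools at 16:50 with exit_code=99, and a follow-up jobs-cli deploy starting at 17:03. A hedged sketch of the cookbook invocation from cloudcumin1001; the --cluster-name and --component flag names are assumptions:

    # Deploy a Toolforge component to the beta cluster first, then to tools (flag names are assumptions).
    sudo cookbook wmcs.toolforge.component.deploy --cluster-name toolsbeta --component jobs-api
    sudo cookbook wmcs.toolforge.component.deploy --cluster-name tools --component jobs-api
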
16:50 <bking@cumin1002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: apply logging config change - bking@cumin1002 - T395571 [production]
16:49 <bking@cumin1002> conftool action : set/weight=10; selector: name=cirrussearch2091. [production]
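
The 16:49 conftool line is the SAL echo of a confctl weight change, setting cirrussearch2091 to weight 10 ahead of the rolling restart started at 16:50. A minimal sketch of the equivalent command; the FQDN form of the selector is an assumption, since the log truncates it:

    sudo confctl select 'name=cirrussearch2091.codfw.wmnet' set/weight=10
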
16:47 <dcaro@cloudcumin1001> END (PASS) - Cookbook wmcs.toolforge.component.deploy (exit_code=0) for component jobs-api [toolsbeta]
16:44 <fceratto@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2169', diff saved to https://phabricator.wikimedia.org/P81451 and previous config saved to /var/cache/conftool/dbconfig/20250818-164437-fceratto.json [production]
16:44 <arlolra@deploy1003> helmfile [staging] DONE helmfile.d/services/mobileapps: apply [production]
16:42 <bking@cumin1002> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: apply logging config change - bking@cumin1002 - T395571 [production]
16:42 <arlolra@deploy1003> helmfile [staging] START helmfile.d/services/mobileapps: apply [production]
16:40 <bking@cumin1002> START - Cookbook sre.elasticsearch.rolling-operation Operation.RESTART (3 nodes at a time) for ElasticSearch cluster search_codfw: apply logging config change - bking@cumin1002 - T395571 [production]
16:39 <btullis@cumin1003> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on an-backup-datanode1005.eqiad.wmnet with reason: host reimage [production]
16:38 <stevemunene@cumin1003> START - Cookbook sre.druid.roll-restart-workers for Druid analytics cluster: Roll restart of Druid jvm daemons. [production]
16:36 <btullis@cumin1003> START - Cookbook sre.hosts.provision for host an-backup-datanode1006.mgmt.eqiad.wmnet with chassis set policy FORCE_RESTART and with Dell SCP reboot policy FORCED [production]
16:35 <btullis@cumin1003> START - Cookbook sre.hosts.reimage for host an-backup-datanode1035.eqiad.wmnet with OS bookworm [production]
16:35 <btullis@cumin1003> START - Cookbook sre.hosts.reimage for host an-backup-datanode1034.eqiad.wmnet with OS bookworm [production]
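
The two reimage cookbooks started at 16:35 drive most of the surrounding an-backup-datanode entries: they schedule the 2:00:00 "host reimage" downtimes and trigger the sre.puppet.sync-netbox-hiera run seen at 16:54. A minimal sketch of the invocation from cumin1003; --os bookworm comes from the log, while the short-hostname form is an assumption:

    sudo cookbook sre.hosts.reimage --os bookworm an-backup-datanode1034
    sudo cookbook sre.hosts.reimage --os bookworm an-backup-datanode1035
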
16:34 <btullis@cumin1003> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host an-backup-datanode1035 [production]
16:34 <rzl@deploy1003> helmfile [eqiad] DONE helmfile.d/services/shellbox-constraints: apply [production]
16:34 <rzl@deploy1003> helmfile [eqiad] START helmfile.d/services/shellbox-constraints: apply [production]
16:33 <btullis@cumin1003> START - Cookbook sre.hosts.downtime for 2:00:00 on an-backup-datanode1005.eqiad.wmnet with reason: host reimage [production]
16:33 <btullis@cumin1003> START - Cookbook sre.network.configure-switch-interfaces for host an-backup-datanode1035 [production]
16:33 <dcaro@cloudcumin1001> START - Cookbook wmcs.toolforge.component.deploy for component jobs-api [toolsbeta]
16:32 <stevemunene> roll restart druid coordinator service for the analytics cluster [analytics]
16:32 <btullis@cumin1003> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host an-backup-datanode1006 [production]
16:32 <rzl@deploy1003> helmfile [codfw] DONE helmfile.d/services/shellbox-constraints: apply [production]
16:31 <rzl@deploy1003> helmfile [codfw] START helmfile.d/services/shellbox-constraints: apply [production]
16:31 <btullis@cumin1003> START - Cookbook sre.network.configure-switch-interfaces for host an-backup-datanode1006 [production]
16:30 <btullis@cumin1003> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:30 <btullis@cumin1003> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Renamed an-backup-namenode1035 to an-backup-datanode1035 - btullis@cumin1003" [production]
16:29 <btullis@cumin1003> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: Renamed an-backup-namenode1035 to an-backup-datanode1035 - btullis@cumin1003" [production]
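
The 16:25-16:30 block is the usual Netbox change propagation: sre.dns.netbox regenerates DNS records from Netbox after the an-backup-namenode1035 to an-backup-datanode1035 rename, then chains into sre.puppet.sync-netbox-hiera to refresh the exported hiera data. A hedged sketch of the entry point; passing the rename message as a positional argument is an assumption:

    sudo cookbook sre.dns.netbox "Renamed an-backup-namenode1035 to an-backup-datanode1035"
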
16:29 <bking@cumin1002> END (PASS) - Cookbook sre.opensearch.roll-restart-reboot (exit_code=0) rolling restart_daemons on A:datahubsearch [production]
16:29 <fceratto@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2169 (T399249)', diff saved to https://phabricator.wikimedia.org/P81450 and previous config saved to /var/cache/conftool/dbconfig/20250818-162930-fceratto.json [production]
16:27 <fceratto@cumin1002> dbctl commit (dc=all): 'Depooling db2169 (T399249)', diff saved to https://phabricator.wikimedia.org/P81449 and previous config saved to /var/cache/conftool/dbconfig/20250818-162720-fceratto.json [production]
16:27 <fceratto@cumin1002> DONE (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db2169.codfw.wmnet with reason: Maintenance [production]
16:26 <fceratto@cumin1002> dbctl commit (dc=all): 'Repooling after maintenance db2158 (T399249)', diff saved to https://phabricator.wikimedia.org/P81448 and previous config saved to /var/cache/conftool/dbconfig/20250818-162656-fceratto.json [production]
16:25 <btullis@cumin1003> START - Cookbook sre.dns.netbox [production]