2023-09-20
10:04 <brouberol@cumin1001> START - Cookbook sre.kafka.roll-restart-brokers for Kafka A:kafka-jumbo-eqiad cluster: Roll restart of jvm daemons. [production]
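(Entries tagged <name@cumin1001> START/END are emitted by spicerack's cookbook runner on the cumin hosts. A minimal hedged sketch of how such a run is started; the arguments shown are placeholders, each cookbook defines its own:)
    # from a cumin host; consult the cookbook's own help output for the real options
    sudo cookbook sre.kafka.roll-restart-brokers --help
    sudo cookbook sre.kafka.roll-restart-brokers <cluster/reason arguments as listed by --help>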
10:02 <klausman> Running authdns-update to activate change 957689 (T341696) [production]
10:02 <klausman> Merging change 957689 (T341696) to lower DNS TTL to 5m for ORES name. [production]
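(The two klausman entries above are the usual operations/dns change flow: merge the Gerrit change that lowers the record's TTL to 5m, i.e. 300 seconds, then deploy it to the authoritative nameservers. A hedged sketch, assuming standard WMF tooling; the zone line is hypothetical, not the actual ORES record:)
    ; hypothetical zone-file line with an explicit 300-second TTL
    ores    300    IN    CNAME    ores-target.example.wmnet.
    # then, on an authoritative DNS host, deploy the merged change:
    sudo authdns-update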
10:01 <jelto@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on gitlab-runner2004.codfw.wmnet with reason: host reimage [production]
10:00 <Emperor> ms-be10[61-75] swift package updates T346730 [production]
09:56 <jelto@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on gitlab-runner2004.codfw.wmnet with reason: host reimage [production]
09:55 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host cloudservices1005.eqiad.wmnet with OS bullseye [production]
09:55 <aborrero@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - aborrero@cumin1001" [production]
09:54 <aborrero@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.hosts.reimage: Host reimage - aborrero@cumin1001" [production]
09:48 <brouberol@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:10:00 on kafka-jumbo1003.eqiad.wmnet with reason: investigation by brouberol and elukey about kafka ACL issues that might be fixed by a broker restart [production]
09:48 <brouberol@cumin1001> START - Cookbook sre.hosts.downtime for 0:10:00 on kafka-jumbo1003.eqiad.wmnet with reason: investigation by brouberol and elukey about kafka ACL issues that might be fixed by a broker restart [production]
09:41 <jelto@cumin1001> START - Cookbook sre.hosts.reimage for host gitlab-runner2004.codfw.wmnet with OS bullseye [production]
09:39 <aborrero@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "cloudservices1005 - aborrero@cumin1001 - T346042" [production]
09:38 <aborrero@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "cloudservices1005 - aborrero@cumin1001 - T346042" [production]
09:35 <aborrero@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "cloudservices1005 - aborrero@cumin1001 - T346042" [production]
09:34 <aborrero@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "cloudservices1005 - aborrero@cumin1001 - T346042" [production]
09:34 <gmodena@deploy2002> helmfile [eqiad] DONE helmfile.d/services/mw-page-content-change-enrich: apply [production]
09:34 <gmodena@deploy2002> helmfile [eqiad] START helmfile.d/services/mw-page-content-change-enrich: apply [production]
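(The START/DONE helmfile pair above corresponds to an apply run from the deployment host. A minimal hedged sketch; the path is an assumption based on the helmfile.d layout in the log message:)
    # on the deployment host; path is illustrative
    cd /srv/deployment-charts/helmfile.d/services/mw-page-content-change-enrich
    helmfile -e eqiad -i apply    # -e selects the datacenter environment, -i prompts before applying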
09:33 <aborrero@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
09:32 <aborrero@cumin1001> START - Cookbook sre.dns.netbox [production]
09:29 <klausman> Draining ml-serve1008 for kubelet partition increase (T339231) [production]
09:24 <klausman> Draining ml-serve1007 for kubelet partition increase (T339231) [production]
09:15 <klausman> Draining ml-serve1006 for kubelet partition increase (T339231) [production]
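(The "Draining ml-serveNNNN" entries evict a Kubernetes node's pods and cordon it before maintenance. A minimal hedged sketch with plain kubectl; the actual procedure may go through a cookbook or wrapper, and the flags shown are standard kubectl rather than the exact ones used here:)
    kubectl drain ml-serve1006.eqiad.wmnet --ignore-daemonsets --delete-emptydir-data
    # ... perform the kubelet partition change, then put the node back in service:
    kubectl uncordon ml-serve1006.eqiad.wmnet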
09:13 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on cloudservices1005.eqiad.wmnet with reason: host reimage [production]
09:09 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on cloudservices1005.eqiad.wmnet with reason: host reimage [production]
09:08 <fabfur> applied patch https://gerrit.wikimedia.org/r/c/operations/puppet/+/957292 (T344175) to add new mobile redirect domains to Varnish. Changes will be applied automatically by puppet on all cp hosts [production]
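(As the entry notes, merged puppet changes are picked up on each cp host's next scheduled agent run; to apply sooner one could run the agent by hand. Illustrative only:)
    # on a single cp host, or fanned out with cumin
    sudo run-puppet-agent      # WMF wrapper around 'puppet agent -t', if available
    sudo puppet agent -t       # plain agent run otherwise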
09:06 <klausman> Draining ml-serve1005 for kubelet partition increase (T339231) [production]
08:59 <godog> restore benthos@webrequest_live running on both centrallog hosts - T346871 [production]
08:57 <klausman> Draining ml-serve1004 for kubelet partition increase (T339231) [production]
08:47 <klausman> Draining ml-serve1003 for kubelet partition increase (T339231) [production]
08:47 <godog> temp bump threads to 15 for benthos@webrequest_live on centrallog2002 - T346871 [production]
08:40 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudservices1005.eqiad.wmnet with OS bullseye [production]
08:40 <aborrero@cumin1001> END (ERROR) - Cookbook sre.hosts.reimage (exit_code=97) for host cloudservices1005.eqiad.wmnet with OS bullseye [production]
08:40 <klausman> Draining ml-serve1002 for kubelet partition increase (T339231) [production]
08:36 <godog> stop benthos@webrequest_live.service on centrallog1002 to test redundancy/capacity - T346871 [production]
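(benthos@webrequest_live is a systemd template instance, so taking one centrallog host out to test redundancy and restoring it afterwards, as in the 08:36 and 08:59 entries, is plain systemctl. Sketch:)
    # on centrallog1002 / centrallog2002
    sudo systemctl stop benthos@webrequest_live.service     # take one instance out to test capacity
    sudo systemctl start benthos@webrequest_live.service    # restore it afterwards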
08:33 <aborrero@cumin1001> START - Cookbook sre.hosts.reimage for host cloudservices1005.eqiad.wmnet with OS bullseye [production]
08:32 <aborrero@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:31 <aborrero@cumin1001> START - Cookbook sre.dns.netbox [production]
08:31 <aborrero@cumin1001> END (PASS) - Cookbook sre.network.configure-switch-interfaces (exit_code=0) for host cloudservices1005 [production]
08:31 <aborrero@cumin1001> START - Cookbook sre.network.configure-switch-interfaces for host cloudservices1005 [production]
08:30 <aborrero@cumin1001> END (FAIL) - Cookbook sre.network.configure-switch-interfaces (exit_code=99) for host cloudservices1005 [production]
08:30 <aborrero@cumin1001> START - Cookbook sre.network.configure-switch-interfaces for host cloudservices1005 [production]
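(The cloudservices1005 entries above are the standard reimage flow: configure the switch interfaces, sync DNS and hiera data from Netbox, then the sre.hosts.reimage cookbook itself, which also schedules the host downtime seen above. A hedged sketch of the top-level invocation; flag names are assumptions:)
    # from a cumin host; exact flags may differ
    sudo cookbook sre.hosts.reimage --os bullseye -t T346042 cloudservices1005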
08:22 <jmm@cumin2002> END (PASS) - Cookbook sre.misc-clusters.roll-restart-reboot-docker-registry (exit_code=0) rolling restart_daemons on A:docker-registry [production]
08:20 <jmm@cumin2002> START - Cookbook sre.misc-clusters.roll-restart-reboot-docker-registry rolling restart_daemons on A:docker-registry [production]
08:17 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5 days, 0:00:00 on puppetdb1002.eqiad.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anyway [production]
08:16 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 5 days, 0:00:00 on puppetdb1002.eqiad.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anyway [production]
08:16 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5 days, 0:00:00 on puppetdb2002.codfw.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anyway [production]
08:15 <jmm@cumin2002> START - Cookbook sre.hosts.downtime for 5 days, 0:00:00 on puppetdb2002.codfw.wmnet with reason: Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anyway [production]
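(sre.hosts.downtime is the cookbook that schedules an alerting downtime for a host, as in the two 5-day puppetdb entries above. A hedged sketch of roughly how such a run is requested; flag spellings are assumptions:)
    # from a cumin host; flags are illustrative
    sudo cookbook sre.hosts.downtime -D 5 \
        -r "Disable puppetdb/postgres/nginx on old nodes to ensure nothing hits them anyway" \
        puppetdb2002.codfw.wmnet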
08:10 <moritzm> restarting FPM on mw* to pick up libwebp security updates [production]
08:02 <moritzm> installing libwebp security updates on buster [production]
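(The last two entries are the usual security-update pattern: install the new libwebp packages across the fleet, then restart the php-fpm processes in batches so they pick up the updated library. An illustrative hedged cumin sketch; the 'A:mw' alias, batch sizing, and service name are assumptions, and in practice a dedicated script or cookbook is often used:)
    # from a cumin host
    sudo cumin -b 5 -s 30 'A:mw' 'systemctl restart php7.4-fpm.service'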