2024-09-06
08:18 <jayme@cumin1002> START - Cookbook sre.hosts.rename from kubernetes2033 to wikikube-worker2094 [production]
08:18 <jayme@cumin1002> START - Cookbook sre.dns.netbox [production]
08:18 <jayme@cumin1002> START - Cookbook sre.hosts.rename from kubernetes2020 to wikikube-worker2093 [production]
08:01 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts acmechief1001.eqiad.wmnet [production]
08:01 <jmm@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:01 <jmm@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: acmechief1001.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - jmm@cumin2002" [production]
08:00 <jayme@cumin1002> END (FAIL) - Cookbook sre.k8s.pool-depool-node (exit_code=99) depool for host kubernetes2033.codfw.wmnet [production]
08:00 <jayme@cumin1002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes2033.codfw.wmnet [production]
08:00 <jayme@cumin1002> END (PASS) - Cookbook sre.k8s.pool-depool-node (exit_code=0) depool for host kubernetes2020.codfw.wmnet [production]
08:00 <jmm@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: acmechief1001.eqiad.wmnet decommissioned, removing all IPs except the asset tag one - jmm@cumin2002" [production]
07:59 <jayme@cumin1002> START - Cookbook sre.k8s.pool-depool-node depool for host kubernetes2020.codfw.wmnet [production]
07:55 <jmm@cumin2002> START - Cookbook sre.dns.netbox [production]
07:51 <jmm@cumin2002> START - Cookbook sre.hosts.decommission for hosts acmechief1001.eqiad.wmnet [production]
07:49 <aqu@deploy1003> Finished deploy [airflow-dags/analytics_test@5315c8d]: Test Refine through Airflow (duration: 00m 10s) [production]
07:49 <aqu@deploy1003> Started deploy [airflow-dags/analytics_test@5315c8d]: Test Refine through Airflow [production]
07:40 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] DONE helmfile.d/dse-k8s-services/airflow-test-k8s: apply [production]
07:39 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-test-k8s: apply [production]
07:31 <brouberol@deploy1003> helmfile [dse-k8s-eqiad] START helmfile.d/dse-k8s-services/airflow-test-k8s: apply [production]
07:21 <jayme@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on wikikube-worker2087.codfw.wmnet with reason: host reimage [production]
07:18 <jayme@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on wikikube-worker2087.codfw.wmnet with reason: host reimage [production]
07:00 <jayme@cumin1002> START - Cookbook sre.hosts.reimage for host wikikube-worker2087.codfw.wmnet with OS bullseye [production]
06:57 <jayme@cumin1002> END (FAIL) - Cookbook sre.k8s.renumber-node (exit_code=99) Renumbering for host wikikube-worker2087.codfw.wmnet [production]
06:57 <jayme@cumin1002> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host wikikube-worker2087.codfw.wmnet with OS bullseye [production]
06:57 <jayme@cumin1002> START - Cookbook sre.hosts.reimage for host wikikube-worker2087.codfw.wmnet with OS bullseye [production]
06:57 <jayme@cumin1002> START - Cookbook sre.k8s.renumber-node Renumbering for host wikikube-worker2087.codfw.wmnet [production]
05:37 <vgutierrez> repool cp2041 [production]
04:57 <vgutierrez> repool cp2038 [production]
04:24 <vgutierrez> restarting purged in cp2038 && cp2041 - T334078 [production]
04:16 <vgutierrez@puppetmaster1001> conftool action : set/pooled=no; selector: name=cp(2038|2041).codfw.wmnet [production]
04:15 <vgutierrez> depooling cp2041 && cp2038 due to high purged lag [production]
02:37 <ejegg> restarted donations queue consumer [production]
02:31 <ejegg> fundraising civicrm upgraded from 67ee99ce to 5dd4edc1 [production]
02:29 <ejegg> disabled donations queue consumer for civi deploy [production]
2024-09-05
23:43 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for lvs1019.eqiad.wmnet [production]
23:43 <cmooney@cumin1002> START - Cookbook sre.hosts.remove-downtime for lvs1019.eqiad.wmnet [production]
23:36 <topranks> re-enable PyBal on lvs1019 after fixing faulty link with replacement optic T374155 [production]
22:54 <cmooney@cumin1002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on lvs1019.eqiad.wmnet with reason: Move traffic off lvs1019 to lvs1029 to troubleshoot faulty link [production]
22:54 <cmooney@cumin1002> START - Cookbook sre.hosts.downtime for 2:00:00 on lvs1019.eqiad.wmnet with reason: Move traffic off lvs1019 to lvs1029 to troubleshoot faulty link [production]
22:53 <topranks> disable PyBal on lvs1019 to swing traffic to lvs1020 and allow for intrusive work to correct link errors T374155 [production]
22:50 <dzahn@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:10:00 on gerrit2002.wikimedia.org with reason: T373980 [production]
22:49 <dzahn@cumin2002> START - Cookbook sre.hosts.downtime for 0:10:00 on gerrit2002.wikimedia.org with reason: T373980 [production]
22:49 <mutante> gerrit-replica.wikimedia.org (gerrit2002) - rebooting T373980 [production]
22:03 <dancy@deploy1003> Installation of scap version "4.101.3" completed for 211 hosts [production]
21:59 <dancy@deploy1003> Installing scap version "4.101.3" for 211 hosts [production]
21:57 <inflatador> bking@grafana1002 apply grizzly SLO dashboard updates slo-Search added slo-apigw updated P68729 T328330 [production]
21:57 <inflatador> bking@grafana1002 apply grizzly SLO dashboard updates slo-Search added slo-apigw updated P68729 [production]
21:54 <dancy@deploy1003> Installing scap version "4.101.3" for 211 hosts [production]
21:54 <dancy@deploy1003> install-world aborted: (duration: 02m 00s) [production]
21:52 <dancy@deploy1003> Installing scap version "4.101.3" for 211 hosts [production]
21:33 <mutante> gitlab1004: systemctl list-units --state=failed listed wmf_auto_restart_ssh-gitlab.service, but at the same time it reports 'Service ssh-gitlab not present or not running'. Did a systemctl reset-failed to clear monitoring and it doesn't seem to come back. T374106 [production]