2023-08-21
08:43 <cgoubert@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-web: apply [production]
08:42 <cgoubert@deploy1002> helmfile [codfw] START helmfile.d/services/mw-web: apply [production]
08:42 <cgoubert@deploy1002> helmfile [codfw] DONE helmfile.d/services/mw-debug: apply [production]
08:41 <stevemunene@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on an-worker1108.eqiad.wmnet with reason: host reimage [production]
08:31 <cgoubert@deploy1002> helmfile [codfw] START helmfile.d/services/mw-debug: apply [production]
08:27 <godog> restart prometheus@beta - T344582 [production]
08:26 <stevemunene@cumin1001> START - Cookbook sre.hosts.reimage for host an-worker1108.eqiad.wmnet with OS bullseye [production]
08:19 <ayounsi@cumin1001> END (PASS) - Cookbook sre.network.peering (exit_code=0) with action 'configure' for AS: 15600 [production]
08:19 <ayounsi@cumin1001> START - Cookbook sre.network.peering with action 'configure' for AS: 15600 [production]
08:10 <klausman> Draining ml-serve2006 for Kubelet partition resize [production]
08:02 <klausman> Draining ml-serve2005 for Kubelet partition resize [production]
07:49 <klausman> Draining ml-serve2004 for Kubelet partition resize [production]
07:03 <godog> grew the prometheus ops eqiad filesystem by 300G [production]
07:01 <jmm@cumin2002> END (PASS) - Cookbook sre.debmonitor.remove-hosts (exit_code=0) for 1 hosts: ganeti3003.esams.wmnet [production]
07:01 <jmm@cumin2002> START - Cookbook sre.debmonitor.remove-hosts for 1 hosts: ganeti3003.esams.wmnet [production]
07:01 <jmm@cumin2002> END (PASS) - Cookbook sre.debmonitor.remove-hosts (exit_code=0) for 1 hosts: ganeti3002.esams.wmnet [production]
07:01 <jmm@cumin2002> START - Cookbook sre.debmonitor.remove-hosts for 1 hosts: ganeti3002.esams.wmnet [production]
07:01 <jmm@cumin2002> END (PASS) - Cookbook sre.debmonitor.remove-hosts (exit_code=0) for 1 hosts: ganeti3001.esams.wmnet [production]
07:01 <jmm@cumin2002> START - Cookbook sre.debmonitor.remove-hosts for 1 hosts: ganeti3001.esams.wmnet [production]
06:55 <ayounsi@cumin1001> END (PASS) - Cookbook sre.deploy.python-code (exit_code=0) homer to cumin2002.codfw.wmnet,cumin1001.eqiad.wmnet with reason: Update Homer wheels - ayounsi@cumin1001 [production]
06:53 <ayounsi@cumin1001> START - Cookbook sre.deploy.python-code homer to cumin2002.codfw.wmnet,cumin1001.eqiad.wmnet with reason: Update Homer wheels - ayounsi@cumin1001 [production]
06:38 <zabe@deploy1002> Started scap: Backport for [[gerrit:950808|add su namespace translations (T344314)]] [production]
06:30 <moritzm> installing Linux 5.10.191 kernel updates [production]
06:28 <kart_> Update MinT to 2023-08-14-091403-production (T336683) [production]
06:27 <kartik@deploy1002> helmfile [eqiad] DONE helmfile.d/services/machinetranslation: apply [production]
06:22 <kartik@deploy1002> helmfile [eqiad] START helmfile.d/services/machinetranslation: apply [production]
06:19 <kartik@deploy1002> helmfile [codfw] DONE helmfile.d/services/machinetranslation: apply [production]
06:13 <kartik@deploy1002> helmfile [codfw] START helmfile.d/services/machinetranslation: apply [production]
06:12 <zabe@deploy1002> Started scap: Backport for [[gerrit:950808|add su namespace translations (T344314)]] [production]
06:09 <kartik@deploy1002> helmfile [staging] DONE helmfile.d/services/machinetranslation: apply [production]
06:06 <kartik@deploy1002> helmfile [staging] START helmfile.d/services/machinetranslation: apply [production]
01:37 <ryankemper> [WDQS] `ryankemper@wdqs1006:~$ sudo systemctl restart wdqs-blazegraph wdqs-categories` (free allocators decreasing rapidly -> solution is a simple restart of query service on host) [production]
2023-08-19
08:38 <cmooney@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 29 hosts with reason: Downtime esams hosts prior to migration week. [production]
08:38 <cmooney@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 29 hosts with reason: Downtime esams hosts prior to migration week. [production]
08:37 <topranks> downtiming esams hosts ahead of core router (cr1-esams) reboot T344546 [production]
08:26 <cmooney@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 16 hosts with reason: Downtime esams hosts prior to cr1-esams reboot [production]
08:26 <cmooney@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 16 hosts with reason: Downtime esams hosts prior to cr1-esams reboot [production]
2023-08-18
18:09 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for lvs[3008-3009].esams.wmnet [production]
18:09 <sukhe@cumin2002> START - Cookbook sre.hosts.remove-downtime for lvs[3008-3009].esams.wmnet [production]
18:08 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3009.esams.wmnet [production]
18:02 <sukhe@cumin2002> START - Cookbook sre.hosts.reboot-single for host lvs3009.esams.wmnet [production]
18:01 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3010.esams.wmnet [production]
17:54 <sukhe@cumin2002> START - Cookbook sre.hosts.reboot-single for host lvs3010.esams.wmnet [production]
17:50 <bking@cumin1001> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['wdqs1010'] [production]
17:49 <bking@cumin1001> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['wdqs1010'] [production]
17:40 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on lvs[3008-3009].esams.wmnet with reason: rebooting to flush broken IPv6 routes [production]
17:40 <sukhe@cumin2002> START - Cookbook sre.hosts.downtime for 0:30:00 on lvs[3008-3009].esams.wmnet with reason: rebooting to flush broken IPv6 routes [production]
17:38 <sukhe> reboot LVSes in esams to flush broken IPv6 routes [production]
17:37 <topranks> bouncing OSPF on cr1-esams to attempt to resolve BFD/OSPF glitch [production]
17:25 <inflatador> bking@ganeti1024 shutting off flink-zk1001 to check alerting T341792 [production]