2023-08-21
06:30 <moritzm> installing Linux 5.10.191 kernel updates [production]
06:28 <kart_> Update MinT to 2023-08-14-091403-production (T336683) [production]
06:27 <kartik@deploy1002> helmfile [eqiad] DONE helmfile.d/services/machinetranslation: apply [production]
06:22 <kartik@deploy1002> helmfile [eqiad] START helmfile.d/services/machinetranslation: apply [production]
06:19 <kartik@deploy1002> helmfile [codfw] DONE helmfile.d/services/machinetranslation: apply [production]
06:13 <kartik@deploy1002> helmfile [codfw] START helmfile.d/services/machinetranslation: apply [production]
06:12 <zabe@deploy1002> Started scap: Backport for [[gerrit:950808|add su namespace translations (T344314)]] [production]
06:09 <kartik@deploy1002> helmfile [staging] DONE helmfile.d/services/machinetranslation: apply [production]
06:06 <kartik@deploy1002> helmfile [staging] START helmfile.d/services/machinetranslation: apply [production]
01:37 <ryankemper> [WDQS] `ryankemper@wdqs1006:~$ sudo systemctl restart wdqs-blazegraph wdqs-categories` (free allocators decreasing rapidly -> solution is a simple restart of query service on host) [production]
2023-08-19
08:38 <cmooney@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 29 hosts with reason: Downtime esams hosts prior to migration week. [production]
08:38 <cmooney@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 29 hosts with reason: Downtime esams hosts prior to migration week. [production]
08:37 <topranks> downtiming esams hosts ahead of core router (cr1-esams) reboot T344546 [production]
08:26 <cmooney@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on 16 hosts with reason: Downtime esams hosts prior to cr1-esams reboot [production]
08:26 <cmooney@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on 16 hosts with reason: Downtime esams hosts prior to cr1-esams reboot [production]
2023-08-18
18:09 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for lvs[3008-3009].esams.wmnet [production]
18:09 <sukhe@cumin2002> START - Cookbook sre.hosts.remove-downtime for lvs[3008-3009].esams.wmnet [production]
18:08 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3009.esams.wmnet [production]
18:02 <sukhe@cumin2002> START - Cookbook sre.hosts.reboot-single for host lvs3009.esams.wmnet [production]
18:01 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host lvs3010.esams.wmnet [production]
17:54 <sukhe@cumin2002> START - Cookbook sre.hosts.reboot-single for host lvs3010.esams.wmnet [production]
17:50 <bking@cumin1001> END (PASS) - Cookbook sre.hardware.upgrade-firmware (exit_code=0) upgrade firmware for hosts ['wdqs1010'] [production]
17:49 <bking@cumin1001> START - Cookbook sre.hardware.upgrade-firmware upgrade firmware for hosts ['wdqs1010'] [production]
17:40 <sukhe@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:30:00 on lvs[3008-3009].esams.wmnet with reason: rebooting to flush broken IPv6 routes [production]
17:40 <sukhe@cumin2002> START - Cookbook sre.hosts.downtime for 0:30:00 on lvs[3008-3009].esams.wmnet with reason: rebooting to flush broken IPv6 routes [production]
17:38 <sukhe> reboot LVSes in esams to flush broken IPv6 routes [production]
17:37 <topranks> bouncing OSPF on cr1-esams to attempt to resolve BFD/OSPF glitch [production]
17:25 <inflatador> bking@ganeti1024 shutting off flink-zk1001 to check alerting T341792 [production]
17:18 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.remove-downtime (exit_code=0) for flink-zk[2001,2003].codfw.wmnet,flink-zk[1001-1003].eqiad.wmnet [production]
17:18 <bking@cumin1001> START - Cookbook sre.hosts.remove-downtime for flink-zk[2001,2003].codfw.wmnet,flink-zk[1001-1003].eqiad.wmnet [production]
17:18 <inflatador> bking@cumin1001 temporarily enabling alerts for flink-zk hosts to see if they work T341792 [production]
17:13 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.decommission (exit_code=0) for hosts flink-zk2002.codfw.wmnet [production]
17:13 <bking@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
17:13 <bking@cumin1001> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: flink-zk2002.codfw.wmnet decommissioned, removing all IPs except the asset tag one - bking@cumin1001" [production]
17:12 <bking@cumin1001> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: flink-zk2002.codfw.wmnet decommissioned, removing all IPs except the asset tag one - bking@cumin1001" [production]
16:56 <bking@cumin1001> START - Cookbook sre.dns.netbox [production]
16:51 <bking@cumin1001> START - Cookbook sre.hosts.decommission for hosts flink-zk2002.codfw.wmnet [production]
16:36 <bking@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host wdqs1010.eqiad.wmnet with OS bullseye [production]
16:30 <bking@cumin1001> START - Cookbook sre.hosts.reimage for host wdqs1010.eqiad.wmnet with OS bullseye [production]
15:43 <jhancock@cumin2002> END (FAIL) - Cookbook sre.hosts.provision (exit_code=99) for host kubernetes2025.mgmt.codfw.wmnet with reboot policy FORCED [production]
15:32 <jhancock@cumin2002> START - Cookbook sre.hosts.provision for host kubernetes2025.mgmt.codfw.wmnet with reboot policy FORCED [production]
15:31 <jhancock@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
15:31 <jhancock@cumin2002> END (PASS) - Cookbook sre.puppet.sync-netbox-hiera (exit_code=0) generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: adding new host kubernetes2025 to CODFW - jhancock@cumin2002" [production]
15:30 <jhancock@cumin2002> START - Cookbook sre.puppet.sync-netbox-hiera generate netbox hiera data: "Triggered by cookbooks.sre.dns.netbox: adding new host kubernetes2025 to CODFW - jhancock@cumin2002" [production]
15:28 <sukhe> ipvsadm -Dt IPs in 91.198.174.0/24 IPs from A:lvs and A:esams [production]
15:28 <jhancock@cumin2002> START - Cookbook sre.dns.netbox [production]
15:10 <jiji@deploy1002> helmfile [codfw] DONE helmfile.d/services/tegola-vector-tiles: sync [production]
15:09 <jiji@deploy1002> helmfile [codfw] START helmfile.d/services/tegola-vector-tiles: sync [production]
14:29 <jiji@cumin1001> conftool action : set/pooled=false; selector: dnsdisc=kartotherian,name=codfw [production]
14:24 <jiji@cumin1001> conftool action : set/pooled=true; selector: dnsdisc=kartotherian,name=codfw [production]