2021-07-19
13:47 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw[1273-1275].eqiad.wmnet [production]
13:47 <sukhe@cumin1001> START - Cookbook sre.hosts.decommission for hosts malmok.wikimedia.org [production]
13:44 <volans@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
13:42 <dzahn@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) for hosts mw1272.eqiad.wmnet [production]
13:41 <volans@cumin2002> START - Cookbook sre.dns.netbox [production]
13:32 <sukhe@cumin1001> END (ERROR) - Cookbook sre.dns.netbox (exit_code=97) [production]
13:31 <sukhe@cumin1001> START - Cookbook sre.dns.netbox [production]
13:26 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw1272.eqiad.wmnet [production]
13:21 <dzahn@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) for hosts mw1270.eqiad.wmnet [production]
13:12 <jayme@cumin1001> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
13:09 <dzahn@cumin1001> START - Cookbook sre.hosts.decommission for hosts mw1270.eqiad.wmnet [production]
13:09 <sukhe@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) for hosts malmok.wikimedia.org [production]
13:08 <jayme@cumin1001> START - Cookbook sre.dns.netbox [production]
13:08 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1272.eqiad.wmnet [production]
13:06 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw127[3-5].eqiad.wmnet [production]
13:01 <dzahn@cumin1001> conftool action : set/weight=1; selector: name=mw1415.eqiad.wmnet,service=canary [production]
13:01 <sukhe@cumin1001> START - Cookbook sre.hosts.decommission for hosts malmok.wikimedia.org [production]
13:01 <dzahn@cumin1001> conftool action : set/weight=1; selector: name=mw1414.eqiad.wmnet,service=canary [production]
11:40 <moritzm> installing bluez security updates [production]
11:31 <Lucas_WMDE> EU backport+config window done [production]
11:05 <lucaswerkmeister-wmde@deploy1002> Synchronized wmf-config/InitialiseSettings-labs.php: Config: [[gerrit:703205|Add config for updated PropertySuggester beta cluster (T285098)]] (beta-only) (duration: 00m 57s) [production]
10:19 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host copernicium.wikimedia.org [production]
10:14 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host copernicium.wikimedia.org [production]
09:52 <moritzm> imported megacli for bullseye-wikimedia T282272 T275873 [production]
09:43 <topranks> Running homer against cr2-eqdfw to change descr and move interface ae0, which connects to Facebook, into the external-links group. [production]
09:30 <godog> bounce prometheus@k8s* on prometheus2004 due to cache not refreshing alert [production]
08:15 <vgutierrez> depool codfw text traffic [production]
07:11 <elukey> roll restart kafka mirror maker on kafka-main200* hosts - stuck after Friday's events/incident [production]
03:26 <twentyafterfour> restarted phd on phab1001 [production]
03:25 <twentyafterfour> investigating PHD failure [production]
2021-07-16
19:50 <robh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
19:48 <robh@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
19:10 <robh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
19:08 <robh@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
18:29 <ryankemper> [Elastic] Kicked off powercycle on `elastic2038`, this will effectively restart its `elasticsearch_6@production-search-omega-codfw.service`. We're back to 3 eligible masters for `codfw-omega` [production]
18:28 <ryankemper> [Elastic] Restarted `elasticsearch_6@production-search-omega-codfw.service` on `elastic2051`; will restart on `elastic2038` by powercycling the node from mgmt port given that it is ssh unreachable [production]
18:24 <ryankemper> [Elastic] `puppet-merge`d https://gerrit.wikimedia.org/r/c/operations/puppet/+/704973; ran puppet across `elastic2*` hosts: `sudo cumin 'P{elastic2*}' 'sudo run-puppet-agent'` (puppet run succeeded on all but the 3 nodes taken offline by the switch failure: `elastic[2037-2038,2055].codfw.wmnet`) [production]
18:19 <ryankemper> [Elastic] Given that we will likely have switch A3 out of commission over the weekend, Search team is going to change masters so that we no longer have a master in row A3. New desired config: `B1 (elastic2042), C2 (elastic2047), D2 (elastic2051)`, see https://gerrit.wikimedia.org/r/c/operations/puppet/+/704973 [production]
16:29 <vgutierrez> restarting pybal on lvs2009 to decrease api depool threshold [production]
15:48 <vgutierrez> restart pybal on lvs2010 [production]
15:38 <jiji@cumin1001> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
15:34 <jiji@cumin1001> START - Cookbook sre.dns.netbox [production]
15:24 <godog> downtime flappy pages in codfw for 40 minutes [production]
15:14 <godog> set alert2001 as active in netbox (was staged) - T247966 [production]
15:14 <vgutierrez> vgutierrez@lvs2010:~$ sudo -i ifup ens2f1np1 [production]
14:41 <vgutierrez> vgutierrez@lvs2009:~$ sudo -i ifdown ens2f1np1 [production]
14:40 <topranks> Running homer to disable et-0/0/0 on cr1-codfw, which connects to currently dead device asw-a2-codfw T286787 [production]
14:40 <topranks> Ran homer against asw-a-codfw virtual-chassis to change the config for all ports on dead switch asw-a2-codfw to disabled. [production]
14:40 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]
14:37 <vgutierrez> vgutierrez@lvs2010:~$ sudo -i ifdown ens2f1np1 [production]