2021-07-16
19:10 <robh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
19:08 <robh@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
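For reference, downtime cookbook runs like the pair above are started on the cumin host; a minimal sketch of the invocation, assuming sre.hosts.downtime takes --hours, -r/--reason and a host query as its positional argument (option names are an assumption, not taken from the log):
    sudo cookbook sre.hosts.downtime --hours 2 -r "REIMAGE" 'copernicium.wikimedia.org'  # duration/reason flag names assumed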
18:29 <ryankemper> [Elastic] Kicked off a powercycle on `elastic2038`; this will effectively restart its `elasticsearch_6@production-search-omega-codfw.service`. We're back to 3 eligible masters for `codfw-omega` [production]
18:28 <ryankemper> [Elastic] Restarted `elasticsearch_6@production-search-omega-codfw.service` on `elastic2051`; will restart it on `elastic2038` by powercycling the node from the mgmt port, since it is unreachable over ssh [production]
18:24 <ryankemper> [Elastic] `puppet-merge`d https://gerrit.wikimedia.org/r/c/operations/puppet/+/704973; ran puppet across `elastic2*` hosts: `sudo cumin 'P{elastic2*}' 'sudo run-puppet-agent'` (puppet run succeeded on all but the 3 nodes taken offline by the switch failure: `elastic[2037-2038,2055].codfw.wmnet`) [production]
18:19 <ryankemper> [Elastic] Given that we will likely have switch A3 out of commission over the weekend, Search team is going to change masters so that we no longer have a master in row A3. New desired config: `B1 (elastic2042), C2 (elastic2047), D2 (elastic2051)`, see https://gerrit.wikimedia.org/r/c/operations/puppet/+/704973 [production]
16:29 <vgutierrez> restarting pybal on lvs2009 to decrease api depool threshold [production]
15:48 <vgutierrez> restart pybal on lvs2010 [production]
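For reference, pybal runs as a systemd unit on the LVS hosts, so the two restarts above amount to, in sketch (unit name assumed to be the standard pybal.service):
    sudo systemctl restart pybal.service  # restart picks up the updated depool-threshold configuration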
15:38 <jiji@cumin1001> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
15:34 <jiji@cumin1001> START - Cookbook sre.dns.netbox [production]
15:24 <godog> downtime flappy pages in codfw for 40 minutes [production]
15:14 <godog> set alert2001 as active in netbox (was staged) - T247966 [production]
15:14 <vgutierrez> vgutierrez@lvs2010:~$ sudo -i ifup ens2f1np1 [production]
14:41 <vgutierrez> vgutierrez@lvs2009:~$ sudo -i ifdown ens2f1np1 [production]
14:40 <topranks> Running homer to disable et-0/0/0 on cr1-codfw, which connects to currently dead device asw-a2-codfw T286787 [production]
14:40 <topranks> Ran homer against asw-a-codfw virtual-chassis to change the config for all ports on dead switch asw-a2-codfw to disabled. [production]
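For reference, Homer changes like the two above are previewed and pushed with its diff/commit subcommands; a minimal sketch based on the cr1-codfw entry, assuming a glob target (exact target string and commit message are not taken from the log):
    homer 'cr1-codfw*' diff                                                       # preview the generated config change
    homer 'cr1-codfw*' commit "Disable et-0/0/0 towards dead asw-a2-codfw (T286787)"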
14:40 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]
14:37 <vgutierrez> vgutierrez@lvs2010:~$ sudo -i ifdown ens2f1np1 [production]
14:14 <hnowlan@puppetmaster1001> conftool action : set/weight=5; selector: name=maps1004.eqiad.wmnet [production]
14:11 <jiji@cumin1001> conftool action : set/pooled=true; selector: dnsdisc=wdqs,name=eqiad [production]
14:07 <hnowlan@puppetmaster1001> conftool action : set/pooled=false; selector: dnsdisc=kartotherian,name=codfw [production]
14:07 <hnowlan@puppetmaster1001> conftool action : set/pooled=true; selector: dnsdisc=kartotherian,name=eqiad [production]
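For reference, the dnsdisc entries above correspond to confctl acting on discovery objects; a minimal sketch, assuming the usual --object-type discovery selector syntax:
    sudo confctl --object-type discovery select 'dnsdisc=kartotherian,name=eqiad' set/pooled=true
    sudo confctl --object-type discovery select 'dnsdisc=kartotherian,name=codfw' set/pooled=false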
13:45 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on maps2009.codfw.wmnet with reason: Service profiling tests [production]
13:45 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on maps2009.codfw.wmnet with reason: Service profiling tests [production]
13:44 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]
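For reference, the helmfile sync entries in this log are produced by deployments run from the deploy host; a rough sketch, assuming the service's helmfile lives under /srv/deployment-charts/helmfile.d/services/ (the path and release selector are assumptions, not taken from the log):
    cd /srv/deployment-charts/helmfile.d/services/tegola-vector-tiles
    helmfile -e staging --selector name=main sync   # deploys release 'main' to the staging cluster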
13:02 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw143[0-3].eqiad.wmnet [production]
12:56 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw1429.eqiad.wmnet [production]
12:49 <mutante> mw1429 through mw1433 - initial puppet run, reboot, moving into production as appservers (T279309) [production]
12:48 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw[1430-1433].eqiad.wmnet with reason: new host [production]
12:48 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw[1430-1433].eqiad.wmnet with reason: new host [production]
12:47 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1429.eqiad.wmnet with reason: new host [production]
12:47 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1429.eqiad.wmnet with reason: new host [production]
12:39 <mutante> mw1412 through mw1428 - set to active in netbox (T279309) [production]
12:39 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]
12:36 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw1429.eqiad.wmnet [production]
12:35 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw143[0-3].eqiad.wmnet [production]
12:35 <dzahn@cumin1001> conftool action : set/weight=30; selector: name=mw143[0-3].eqiad.wmnet [production]
12:35 <dzahn@cumin1001> conftool action : set/weight=30; selector: name=mw1429.eqiad.wmnet [production]
12:30 <dcausse@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'rdf-streaming-updater' for release 'main' . [production]
12:26 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw142[6-8].eqiad.wmnet [production]
12:17 <mutante> mw1426, mw1427, mw1428 - scap pull [production]
12:16 <dzahn@cumin1001> conftool action : set/pooled=no; selector: name=mw142[6-8].eqiad.wmnet [production]
12:16 <dzahn@cumin1001> conftool action : set/weight=30; selector: name=mw142[6-8].eqiad.wmnet [production]
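For reference, the conftool weight/pool entries above map onto confctl node operations; a minimal sketch of the bring-up sequence for mw142[6-8] (selector syntax taken from the log lines, sudo/command form assumed):
    sudo confctl select 'name=mw142[6-8].eqiad.wmnet' set/weight=30   # set LVS weight before pooling
    sudo confctl select 'name=mw142[6-8].eqiad.wmnet' set/pooled=no   # keep depooled until the code sync finishes
    sudo confctl select 'name=mw142[6-8].eqiad.wmnet' set/pooled=yes  # pool once scap pull has completed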
12:14 <mutante> mw1426, mw1427, mw1428 - rebooting; new API servers moving into production [production]
12:12 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on mw[1426-1428].eqiad.wmnet with reason: new host [production]
12:12 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on mw[1426-1428].eqiad.wmnet with reason: new host [production]
12:03 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]
11:33 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on maps2009.codfw.wmnet with reason: Service profiling tests [production]
11:33 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on maps2009.codfw.wmnet with reason: Service profiling tests [production]
11:29 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5 days, 8:00:00 on planet1002.eqiad.wmnet with reason: known issue [production]