2021-07-19
13:01 <sukhe@cumin1001> START - Cookbook sre.hosts.decommission for hosts malmok.wikimedia.org [production]
13:01 <dzahn@cumin1001> conftool action : set/weight=1; selector: name=mw1414.eqiad.wmnet,service=canary [production]
11:40 <moritzm> installing bluez security updates [production]
11:31 <Lucas_WMDE> EU backport+config window done [production]
11:05 <lucaswerkmeister-wmde@deploy1002> Synchronized wmf-config/InitialiseSettings-labs.php: Config: [[gerrit:703205|Add config for updated PropertySuggester beta cluster (T285098)]] (beta-only) (duration: 00m 57s) [production]
10:19 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host copernicium.wikimedia.org [production]
10:14 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host copernicium.wikimedia.org [production]
09:52 <moritzm> imported megacli for bullseye-wikimedia T282272 T275873 [production]
09:43 <topranks> Running homer against cr2-eqdfw to change descr and move interface ae0, which connects to Facebook, into the external-links group. [production]
09:30 <godog> bounce prometheus@k8s* on prometheus2004 due to cache not refreshing alert [production]
08:15 <vgutierrez> depool codfw text traffic [production]
07:11 <elukey> roll restart kafka mirror maker on kafka-main200* hosts - stuck after Friday's events/incident [production]
03:26 <twentyafterfour> restarted phd on phab1001 [production]
03:25 <twentyafterfour> investigating PHD failure [production]
2021-07-16
19:50 <robh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
19:48 <robh@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
19:10 <robh@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
19:08 <robh@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on copernicium.wikimedia.org with reason: REIMAGE [production]
18:29 <ryankemper> [Elastic] Kicked off powercycle on `elastic2038`, this will effectively restart its `elasticsearch_6@production-search-omega-codfw.service`. We're back to 3 eligible masters for `codfw-omega` [production]
18:28 <ryankemper> [Elastic] Restarted `elasticsearch_6@production-search-omega-codfw.service` on `elastic2051`; will restart on `elastic2038` by powercycling the node from mgmt port given that it is ssh unreachable [production]
18:24 <ryankemper> [Elastic] `puppet-merge`d https://gerrit.wikimedia.org/r/c/operations/puppet/+/704973; ran puppet across `elastic2*` hosts: `sudo cumin 'P{elastic2*}' 'sudo run-puppet-agent'` (puppet run succeeded on all but the 3 nodes taken offline by the switch failure: `elastic[2037-2038,2055].codfw.wmnet`) [production]
18:19 <ryankemper> [Elastic] Given that we will likely have switch A3 out of commission over the weekend, Search team is going to change masters so that we no longer have a master in row A3. New desired config: `B1 (elastic2042), C2 (elastic2047), D2 (elastic2051)`, see https://gerrit.wikimedia.org/r/c/operations/puppet/+/704973 [production]
16:29 <vgutierrez> restarting pybal on lvs2009 to decrease api depool threshold [production]
15:48 <vgutierrez> restart pybal on lvs2010 [production]
15:38 <jiji@cumin1001> END (FAIL) - Cookbook sre.dns.netbox (exit_code=99) [production]
15:34 <jiji@cumin1001> START - Cookbook sre.dns.netbox [production]
15:24 <godog> downtime flappy pages in codfw for 40 minutes [production]
15:14 <godog> set alert2001 as active in netbox (was staged) - T247966 [production]
15:14 <vgutierrez> vgutierrez@lvs2010:~$ sudo -i ifup ens2f1np1 [production]
14:41 <vgutierrez> vgutierrez@lvs2009:~$ sudo -i ifdown ens2f1np1 [production]
14:40 <topranks> Running homer to disable et-0/0/0 on cr1-codfw, which connects to currently dead device asw-a2-codfw T286787 [production]
14:40 <topranks> Ran homer against asw-a-codfw virtual-chassis to change the config for all ports on dead switch asw-a2-codfw to disabled. [production]
14:40 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]
14:37 <vgutierrez> vgutierrez@lvs2010:~$ sudo -i ifdown ens2f1np1 [production]
14:14 <hnowlan@puppetmaster1001> conftool action : set/weight=5; selector: name=maps1004.eqiad.wmnet [production]
14:11 <jiji@cumin1001> conftool action : set/pooled=true; selector: dnsdisc=wdqs,name=eqiad [production]
14:07 <hnowlan@puppetmaster1001> conftool action : set/pooled=false; selector: dnsdisc=kartotherian,name=codfw [production]
14:07 <hnowlan@puppetmaster1001> conftool action : set/pooled=true; selector: dnsdisc=kartotherian,name=eqiad [production]
13:45 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3:00:00 on maps2009.codfw.wmnet with reason: Service profiling tests [production]
13:45 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 3:00:00 on maps2009.codfw.wmnet with reason: Service profiling tests [production]
13:44 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]
13:02 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw143[0-3].eqiad.wmnet [production]
12:56 <dzahn@cumin1001> conftool action : set/pooled=yes; selector: name=mw1429.eqiad.wmnet [production]
12:49 <mutante> mw1429 through mw1433 - initial puppet run, reboot, moving into production as appservers (T279309) [production]
12:48 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw[1430-1433].eqiad.wmnet with reason: new host [production]
12:48 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw[1430-1433].eqiad.wmnet with reason: new host [production]
12:47 <dzahn@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on mw1429.eqiad.wmnet with reason: new host [production]
12:47 <dzahn@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on mw1429.eqiad.wmnet with reason: new host [production]
12:39 <mutante> mw1412 through mw1428 - set to active in netbox (T279309) [production]
12:39 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]