2020-09-24
ยง
|
09:50 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production'. [production]
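(Context for the helmfile entries in this log: the sync is run from the deployment host against the deployment-charts checkout. A minimal sketch, assuming the standard helmfile.d layout; the path and the selector are illustrative, not taken from this log:)

    # Assumed invocation on the deploy host; path and selector are illustrative.
    cd /srv/deployment-charts/helmfile.d/services/changeprop-jobqueue
    helmfile -e codfw --selector name=production sync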
09:48 <jayme> restart pybal on lvs1015.eqiad.wmnet,lvs2009.codfw.wmnet - T255875 [production]
09:46 <jayme> restart pybal on lvs1016.eqiad.wmnet,lvs2010.codfw.wmnet - T255875 [production]
09:43 <jayme> running puppet on lvs servers - T255875 [production]
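(A hedged sketch of how the LVS steps above are typically driven from a cumin host; the host list is copied from the entries, the exact commands are assumptions:)

    # Apply the puppet change on the LVS hosts, then restart pybal one pair at a time.
    sudo cumin 'lvs1016.eqiad.wmnet,lvs2010.codfw.wmnet' 'run-puppet-agent'
    sudo cumin 'lvs1016.eqiad.wmnet,lvs2010.codfw.wmnet' 'systemctl restart pybal.service'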
09:35 <kormat@cumin1001> dbctl commit (dc=all): 'db2138:3312 (re)pooling @ 25%: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12786 and previous config saved to /var/cache/conftool/dbconfig/20200924-093514-kormat.json [production]
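(The staged (re)pooling entries from cumin1001 are produced by dbctl. A minimal sketch of the pattern, assuming dbctl's instance/config subcommands; the exact flags are an assumption, the percentage and message mirror the entry above:)

    # Raise the pooled percentage for one instance, then commit and announce the change.
    sudo dbctl instance db2138:3312 pool -p 25
    sudo dbctl config commit -m 'db2138:3312 (re)pooling @ 25%: schema change T259831'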
09:25 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging'. [production]
09:25 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production'. [production]
09:20 <ema> cp4021: repool with varnish 6.0.6-1wm1 T263557 [production]
09:19 <ema> cp4021: redepool with varnish to 6.0.6-1wm1 T263557 [production]
09:14 <kormat@cumin1001> dbctl commit (dc=all): 'db2138:3312 depooling: schema change T259831', diff saved to https://phabricator.wikimedia.org/P12785 and previous config saved to /var/cache/conftool/dbconfig/20200924-091445-kormat.json [production]
09:14 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:14 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
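(The sre.hosts.downtime START/END pairs wrap an Icinga downtime around the depooled host. A sketch of the invocation under assumed flags; reason, duration and host pattern are illustrative:)

    # Silence alerting for the host while it is depooled for the schema change.
    sudo cookbook sre.hosts.downtime --hours 4 -r 'schema change T259831' 'db2138*'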
09:14 <ema> cp4021: depool and upgrade varnish to 6.0.6-1wm1 T263557 [production]
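(ema's depool/upgrade/repool cycle on cp4021 roughly corresponds to the following, run on the cache host itself; the conftool wrapper scripts and the apt version pinning are assumptions:)

    # Drain the host from the edge, upgrade varnish to the wm build, put it back in service.
    sudo depool
    sudo apt-get install -y varnish=6.0.6-1wm1
    sudo pool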
09:05 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production'. [production]
09:04 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging'. [production]
08:59 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production'. [production]
08:59 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging'. [production]
08:38 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:38 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2127 for MCR schema change', diff saved to https://phabricator.wikimedia.org/P12784 and previous config saved to /var/cache/conftool/dbconfig/20200924-082443-marostegui.json [production]
08:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db2109 (re)pooling @ 100%: Slowly repool db2109', diff saved to https://phabricator.wikimedia.org/P12783 and previous config saved to /var/cache/conftool/dbconfig/20200924-082319-root.json [production]
08:20 <volans@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:17 <volans@cumin1001> START - Cookbook sre.dns.netbox [production]
08:15 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
08:15 <XioNoX> configure vrrp_master_pinning in codfw - T263212 [production]
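(The router-side changes logged by XioNoX (vrrp_master_pinning, pfw policies) are normally generated and pushed with Homer. A hedged sketch; the device pattern and the exact CLI form are assumptions:)

    # Preview, then push the generated config to the codfw core routers.
    homer 'cr*codfw*' diff
    homer 'cr*codfw*' commit "configure vrrp_master_pinning in codfw - T263212"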
08:10 <moritzm> installing mariadb-10.1/mariadb-10.3 updates (packaged versions from Debian, not the wmf-mariadb variants we use for mysqld) [production]
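(A minimal sketch of applying such a distro package update on one affected host; at WMF these rollouts are usually orchestrated centrally, so the direct apt call and the package names are only illustrative assumptions:)

    # Upgrade only the already-installed Debian mariadb packages, nothing new.
    sudo apt-get update
    sudo apt-get install --only-upgrade -y mariadb-common mariadb-client-10.3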
08:09 <volans@cumin1001> START - Cookbook sre.hosts.decommission [production]
08:08 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
08:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db2109 (re)pooling @ 66%: Slowly repool db2109', diff saved to https://phabricator.wikimedia.org/P12782 and previous config saved to /var/cache/conftool/dbconfig/20200924-080816-root.json [production]
07:58 <volans@cumin1001> START - Cookbook sre.hosts.decommission [production]
07:57 <marostegui> Remove es2018 from tendril and zarcillo T263613 [production]
07:57 <XioNoX> configure vrrp_master_pinning in eqiad - T263212 [production]
07:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db2109 (re)pooling @ 33%: Slowly repool db2109', diff saved to https://phabricator.wikimedia.org/P12781 and previous config saved to /var/cache/conftool/dbconfig/20200924-075312-root.json [production]
07:52 <klausman@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
07:49 <klausman@cumin1001> START - Cookbook sre.hosts.downtime [production]
07:49 <godog> rolling restart of logstash in codfw, GC death [production]
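(A rolling restart like godog's is typically done in small batches from cumin so the codfw logstash collectors never all go down at once; batch size, sleep and the host pattern are assumptions:)

    # Restart logstash on the codfw collectors one host at a time, pausing between hosts.
    sudo cumin -b 1 -s 60 'logstash2*' 'systemctl restart logstash.service'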
07:25 <XioNoX> push pfw policies - T263674 [production]
06:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Place db2073 into vslow, not api in s4', diff saved to https://phabricator.wikimedia.org/P12780 and previous config saved to /var/cache/conftool/dbconfig/20200924-064018-marostegui.json [production]
06:22 <elukey> powercycle elastic2037 (host stuck, no mgmt serial console working, DIMM errors in racadm getsel) [production]
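(When a host is stuck and the serial console is unusable, the inspection and the powercycle go through the Dell management controller, as in elukey's entry. A sketch, assuming the usual .mgmt DNS naming:)

    # Inspect the hardware event log (DIMM errors), then powercycle via the iDRAC.
    ssh root@elastic2037.mgmt.codfw.wmnet racadm getsel
    ssh root@elastic2037.mgmt.codfw.wmnet racadm serveraction powercycle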
05:57 <marostegui> Remove es2012 from tendril and zarcillo T263613 [production]
05:41 <marostegui@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
05:37 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
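(The decommission runs above and at 08:15/08:08 use the sre.hosts.decommission cookbook and both ended with exit_code=1. A sketch of the invocation, assuming a task-id flag; the hostname and task are taken from surrounding entries and paired here only for illustration:)

    # Decommission the old external-storage host, referencing its Phabricator task.
    sudo cookbook sre.hosts.decommission es2012.codfw.wmnet -t T263615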
05:30 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove es2012 and es2018 from dbctl - T263615 T263613', diff saved to https://phabricator.wikimedia.org/P12778 and previous config saved to /var/cache/conftool/dbconfig/20200924-053001-marostegui.json [production]
05:22 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2109 for MCR schema change', diff saved to https://phabricator.wikimedia.org/P12777 and previous config saved to /var/cache/conftool/dbconfig/20200924-052207-marostegui.json [production]
01:25 <ryankemper> Root cause of SIGKILL of `elasticsearch_5@production-logstash-eqiad.service` appears to be OOMKill of the java process: `Killed process 1775 (java) total-vm:8016136kB, anon-rss:4888232kB, file-rss:0kB, shmem-rss:0kB`. Service appears to have restarted itself and is healthy again [production]
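(The OOM diagnosis above can be reproduced from the kernel log and the unit state; a minimal sketch:)

    # Confirm the kernel OOM kill and check whether systemd brought the unit back up.
    journalctl -k --since '1 hour ago' | grep -i 'killed process'
    systemctl status elasticsearch_5@production-logstash-eqiad.service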
01:21 <ryankemper> Observed that `elasticsearch_5@production-logstash-eqiad.service` is in a `failed` state since `Thu 2020-09-24 00:53:53 UTC`; appears the process received a SIGKILL - not sure why [production]
01:19 <ryankemper> Getting `connection refused` when trying to `curl -X GET 'http://localhost:9200/_cluster/health'` on `logstash1009` [production]
01:16 <ryankemper> (after) `{"cluster_name":"production-elk7-codfw","status":"green","timed_out":false,"number_of_nodes":12,"number_of_data_nodes":7,"active_primary_shards":459,"active_shards":868,"relocating_shards":4,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0` [production]
01:16 <ryankemper> Ran `curl -X POST 'http://localhost:9200/_cluster/reroute?retry_failed=true'`, cluster status is green again [production]
01:15 <ryankemper> (before) `{"cluster_name":"production-elk7-codfw","status":"yellow","timed_out":false,"number_of_nodes":12,"number_of_data_nodes":7,"active_primary_shards":459,"active_shards":866,"relocating_shards":4,"initializing_shards":0,"unassigned_shards":2,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0` [production]
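(For the yellow-to-green recovery above: besides the health check and the reroute retry already quoted, the usual next diagnostic when shards stay unassigned is the allocation-explain API, which reports why a shard is not being allocated; a short sketch, run against any node of the cluster:)

    # Ask the cluster why a shard is unassigned, then retry allocations that hit the failure limit.
    curl -s 'http://localhost:9200/_cluster/allocation/explain?pretty'
    curl -s -X POST 'http://localhost:9200/_cluster/reroute?retry_failed=true'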