2020-09-24
09:14 <ema> cp4021: depool and upgrade varnish to 6.0.6-1wm1 T263557 [production]
09:05 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
09:04 <hnowlan@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
08:59 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'production' . [production]
08:59 <hnowlan@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
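The four helmfile entries above (08:59-09:05) record a staging-then-codfw rollout of changeprop-jobqueue, syncing the 'staging' release before 'production' in each environment. A minimal shell sketch of that order; the charts path and the `-l name=` release selectors are assumptions, not taken from the log:

```
# Sketch only: roll out changeprop-jobqueue to the staging environment first,
# then codfw, syncing the 'staging' release before 'production' in each.
cd /srv/deployment-charts/helmfile.d/services/changeprop-jobqueue   # assumed path
for env in staging codfw; do
    helmfile -e "$env" -l name=staging    sync
    helmfile -e "$env" -l name=production sync
done
```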
08:38 <kormat@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
08:38 <kormat@cumin1001> START - Cookbook sre.hosts.downtime [production]
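The 08:38 START/END pair above is the standard cookbook run pattern on the cumin hosts. A hypothetical invocation behind such a pair; the flag names and the target host are assumptions, not from the log:

```
# Hypothetical example only: downtime a host in Icinga before maintenance.
# The flags (-r reason, -H hours) and the host are assumed, not taken from the log.
sudo cookbook sre.hosts.downtime -r "pre-maintenance downtime" -H 2 'example1001.eqiad.wmnet'
```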
08:24 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2127 for MCR schema change', diff saved to https://phabricator.wikimedia.org/P12784 and previous config saved to /var/cache/conftool/dbconfig/20200924-082443-marostegui.json [production]
08:23 <marostegui@cumin1001> dbctl commit (dc=all): 'db2109 (re)pooling @ 100%: Slowly repool db2109 ', diff saved to https://phabricator.wikimedia.org/P12783 and previous config saved to /var/cache/conftool/dbconfig/20200924-082319-root.json [production]
08:20 <volans@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
08:17 <volans@cumin1001> START - Cookbook sre.dns.netbox [production]
08:15 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
08:15 <XioNoX> configure vrrp_master_pinning in codfw - T263212 [production]
08:10 <moritzm> installing mariadb-10.1/mariadb-10.3 updates (packaged versions from Debian, not the wmf-mariadb variants we use for mysqld) [production]
08:09 <volans@cumin1001> START - Cookbook sre.hosts.decommission [production]
08:08 <volans@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
08:08 <marostegui@cumin1001> dbctl commit (dc=all): 'db2109 (re)pooling @ 66%: Slowly repool db2109 ', diff saved to https://phabricator.wikimedia.org/P12782 and previous config saved to /var/cache/conftool/dbconfig/20200924-080816-root.json [production]
07:58 <volans@cumin1001> START - Cookbook sre.hosts.decommission [production]
07:57 <marostegui> Remove es2018 from tendril and zarcillo T263613 [production]
07:57 <XioNoX> configure vrrp_master_pinning in eqiad - T263212 [production]
07:53 <marostegui@cumin1001> dbctl commit (dc=all): 'db2109 (re)pooling @ 33%: Slowly repool db2109 ', diff saved to https://phabricator.wikimedia.org/P12781 and previous config saved to /var/cache/conftool/dbconfig/20200924-075312-root.json [production]
07:52 <klausman@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
07:49 <klausman@cumin1001> START - Cookbook sre.hosts.downtime [production]
07:49 <godog> roll restart logstash codfw, gc death [production]
07:25 <XioNoX> push pfw policies - T263674 [production]
06:40 <marostegui@cumin1001> dbctl commit (dc=all): 'Place db2073 into vslow, not api in s4', diff saved to https://phabricator.wikimedia.org/P12780 and previous config saved to /var/cache/conftool/dbconfig/20200924-064018-marostegui.json [production]
06:22 <elukey> powercycle elastic2037 (host stuck, no mgmt serial console working, DIMM errors in racadm getsel) [production]
05:57 <marostegui> Remove es2012 from tendril and zarcillo T263613 [production]
05:41 <marostegui@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) [production]
05:37 <marostegui@cumin1001> START - Cookbook sre.hosts.decommission [production]
05:30 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove es2012 and es2018 from dbctl - T263615 T263613', diff saved to https://phabricator.wikimedia.org/P12778 and previous config saved to /var/cache/conftool/dbconfig/20200924-053001-marostegui.json [production]
05:22 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2109 for MCR schema change', diff saved to https://phabricator.wikimedia.org/P12777 and previous config saved to /var/cache/conftool/dbconfig/20200924-052207-marostegui.json [production]
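Taken together with the staged repools logged at 07:53, 08:08 and 08:23, this entry shows the usual depool, schema change, slow-repool cycle for db2109. A hedged sketch, assuming dbctl's `instance` and `config commit` subcommands; the pause length is read off the roughly 15-minute gaps in the log:

```
# Sketch of the cycle recorded above (commands assumed, not copied from the log).
dbctl instance db2109 depool
dbctl config commit -m 'Depool db2109 for MCR schema change'
# ... run the MCR schema change on the depooled replica ...
for pct in 33 66 100; do
    dbctl instance db2109 pool -p "$pct"
    dbctl config commit -m "db2109 (re)pooling @ ${pct}%: Slowly repool db2109"
    sleep 900   # illustrative pause; the log shows ~15 minutes between steps
done
```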
01:25 <ryankemper> Root cause of sigkill of `elasticsearch_5@production-logstash-eqiad.service` appears to be OOMKill of the java process: `Killed process 1775 (java) total-vm:8016136kB, anon-rss:4888232kB, file-rss:0kB, shmem-rss:0kB`. Service appears to have restarted itself and is healthy again [production]
01:21 <ryankemper> Observed that `elasticsearch_5@production-logstash-eqiad.service` is in a `failed` state since `Thu 2020-09-24 00:53:53 UTC`; appears the process received a SIGKILL - not sure why [production]
01:19 <ryankemper> Getting `connection refused` when trying to `curl -X GET 'http://localhost:9200/_cluster/health'` on `logstash1009` [production]
01:16 <ryankemper> (after) `{"cluster_name":"production-elk7-codfw","status":"green","timed_out":false,"number_of_nodes":12,"number_of_data_nodes":7,"active_primary_shards":459,"active_shards":868,"relocating_shards":4,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0` [production]
01:16 <ryankemper> Ran `curl -X POST 'http://localhost:9200/_cluster/reroute?retry_failed=true'`, cluster status is green again [production]
01:15 <ryankemper> (before) `{"cluster_name":"production-elk7-codfw","status":"yellow","timed_out":false,"number_of_nodes":12,"number_of_data_nodes":7,"active_primary_shards":459,"active_shards":866,"relocating_shards":4,"initializing_shards":0,"unassigned_shards":2,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0` [production]
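The 01:15-01:16 entries above trace the standard recovery for shards left unassigned after failed allocation attempts: check `_cluster/health`, POST `_cluster/reroute?retry_failed=true`, then re-check. A minimal sketch of that sequence; the grep filter is illustrative, not from the log:

```
# Check shard allocation, retry failed allocations, then confirm the cluster
# is green again: the sequence logged at 01:15-01:16 above.
curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s -X POST 'http://localhost:9200/_cluster/reroute?retry_failed=true' >/dev/null
curl -s 'http://localhost:9200/_cluster/health?pretty' | grep -E '"status"|"unassigned_shards"'
```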
2020-09-23
23:52 <mutante> alert1001 - systemctl restart ircecho because icinga-wm left the chat [production]
23:46 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: cbd77e3dff0d56b851b3d15b4d267d1faacfae26: Add new Racine namespace to frwiktionary (T263525) (duration: 01m 05s) [production]
23:44 <urbanecm@deploy1001> sync-file aborted: (no justification provided) (duration: 00m 00s) [production]
23:42 <mholloway-shell@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'push-notifications' for release 'main' . [production]
23:40 <mholloway-shell@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'push-notifications' for release 'main' . [production]
23:37 <mholloway-shell@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'push-notifications' for release 'main' . [production]
23:27 <urbanecm@deploy1001> Synchronized wmf-config/InitialiseSettings.php: 22382a97ec252488a346fbf0c3d40bc974d0cdbe: remove wtp2005 from wgLinterSubmitterWhitelist (T257903) (duration: 01m 04s) [production]
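The Synchronized entries above (23:27 and 23:46) are scap config syncs from deploy1001. A hedged sketch of the underlying command for the 23:27 entry; the staging directory is an assumption, not from the log:

```
# Sketch only: sync a single config file to the cluster with scap.
cd /srv/mediawiki-staging   # assumed deployment staging directory on deploy1001
scap sync-file wmf-config/InitialiseSettings.php 'remove wtp2005 from wgLinterSubmitterWhitelist (T257903)'
```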
23:14 <eileen> civicrm revision changed from 32a82aa1b7 to eb90dbcfd3, config revision is 2a55766237 [production]
23:13 <eileen> civicrm revision is 32a82aa1b7, config revision is 2a55766237 [production]
23:10 <mutante> ganeti5003 - rebooting install5001 - OS install on 3001/4001/5001 T263684 [production]
23:04 <mutante> ganeti4003 - rebooting install4001 [production]