2022-08-24
23:00 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1099:3318', diff saved to https://phabricator.wikimedia.org/P32970 and previous config saved to /var/cache/conftool/dbconfig/20220824-230013-ladsgroup.json [production]
22:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1099:3318 (T314041)', diff saved to https://phabricator.wikimedia.org/P32969 and previous config saved to /var/cache/conftool/dbconfig/20220824-224507-ladsgroup.json [production]
22:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1099:3318 (T314041)', diff saved to https://phabricator.wikimedia.org/P32968 and previous config saved to /var/cache/conftool/dbconfig/20220824-224214-ladsgroup.json [production]
22:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1099.eqiad.wmnet with reason: Maintenance [production]
22:41 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1099.eqiad.wmnet with reason: Maintenance [production]
22:41 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1114 (T314041)', diff saved to https://phabricator.wikimedia.org/P32967 and previous config saved to /var/cache/conftool/dbconfig/20220824-224153-ladsgroup.json [production]
22:37 <ryankemper> [Elastic] We're back to green in `cloudelastic-chi`, so cloudelastic is back to fully healthy [production]
22:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1114', diff saved to https://phabricator.wikimedia.org/P32966 and previous config saved to /var/cache/conftool/dbconfig/20220824-222646-ladsgroup.json [production]
22:20 <ryankemper> [Elastic] We've got the cloudelastic instances all back up. A bunch of shard recoveries ongoing; currently the cluster is red. It might go all the way back to green; hard to say until the shard recoveries complete. [production]
22:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1114', diff saved to https://phabricator.wikimedia.org/P32965 and previous config saved to /var/cache/conftool/dbconfig/20220824-221140-ladsgroup.json [production]
21:58 <ryankemper> [Elastic] `ryankemper@cloudelastic1003:~$ sudo systemctl restart elasticsearch_6@cloudelastic-chi-eqiad.service`, 1003 was also oom-killed: `[4165984.362182] Out of memory: Killed process 3759 (java) total-vm:2277062348kB, anon-rss:61648756kB, file-rss:0kB, shmem-rss:0kB, UID:113 pgtables:1448136kB oom_score_adj:0` [production]
21:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1114 (T314041)', diff saved to https://phabricator.wikimedia.org/P32964 and previous config saved to /var/cache/conftool/dbconfig/20220824-215634-ladsgroup.json [production]
21:54 <ryankemper> [Elastic] `ryankemper@cloudelastic1004:~$ sudo systemctl restart elasticsearch_6@cloudelastic-chi-eqiad.service` Restarting 1004's chi eqiad, it died due to `Aug 24 21:43:21 cloudelastic1004 systemd[1]: elasticsearch_6@cloudelastic-chi-eqiad.service: Main process exited, code=killed, status=9/KILL` [production]
21:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1114 (T314041)', diff saved to https://phabricator.wikimedia.org/P32963 and previous config saved to /var/cache/conftool/dbconfig/20220824-215143-ladsgroup.json [production]
21:51 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1114.eqiad.wmnet with reason: Maintenance [production]
21:51 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1114.eqiad.wmnet with reason: Maintenance [production]
21:51 <eileen> civicrm upgraded from 632d5f5f to ff9b377d [production]
21:50 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1116.eqiad.wmnet with reason: Maintenance [production]
21:50 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1116.eqiad.wmnet with reason: Maintenance [production]
21:50 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1126 (T314041)', diff saved to https://phabricator.wikimedia.org/P32962 and previous config saved to /var/cache/conftool/dbconfig/20220824-215025-ladsgroup.json [production]
21:48 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on 6 hosts with reason: T316159 [production]
21:48 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on 6 hosts with reason: T316159 [production]
21:48 <eileen> config revision changed from c2aa4158 to ab95bc89 [production]
21:35 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1126', diff saved to https://phabricator.wikimedia.org/P32961 and previous config saved to /var/cache/conftool/dbconfig/20220824-213519-ladsgroup.json [production]
21:23 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw134[1-8].eqiad.wmnet,cluster=api_appserver [production]
21:22 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw133[1-9].eqiad.wmnet,cluster=api_appserver [production]
21:22 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw133[1-9].eqiad.wmnet,cluster=appserver [production]
21:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1126', diff saved to https://phabricator.wikimedia.org/P32959 and previous config saved to /var/cache/conftool/dbconfig/20220824-212013-ladsgroup.json [production]
21:20 <mutante> setting weight to 25 (from 30) for appservers and API servers in the range mw1307 through mw1348 because they are of an older hardware type (not changing weights of jobrunners/videoscalers even if in this range) (T304800) [production]
21:18 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw132[1-9].eqiad.wmnet [production]
21:15 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw131[2-7].eqiad.wmnet [production]
21:05 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1126 (T314041)', diff saved to https://phabricator.wikimedia.org/P32958 and previous config saved to /var/cache/conftool/dbconfig/20220824-210507-ladsgroup.json [production]
21:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1126 (T314041)', diff saved to https://phabricator.wikimedia.org/P32957 and previous config saved to /var/cache/conftool/dbconfig/20220824-210216-ladsgroup.json [production]
21:02 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1126.eqiad.wmnet with reason: Maintenance [production]
21:02 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1126.eqiad.wmnet with reason: Maintenance [production]
21:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T314041)', diff saved to https://phabricator.wikimedia.org/P32956 and previous config saved to /var/cache/conftool/dbconfig/20220824-210155-ladsgroup.json [production]
20:46 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P32955 and previous config saved to /var/cache/conftool/dbconfig/20220824-204649-ladsgroup.json [production]
20:44 <eevans@cumin1001> START - Cookbook sre.cassandra.roll-restart for nodes matching A:restbase-eqiad: Restarting to apply OpenJDK 8u342 - eevans@cumin1001 [production]
20:40 <mutante> otrs1001 - systemctl reset failed [production]
20:31 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P32954 and previous config saved to /var/cache/conftool/dbconfig/20220824-203143-ladsgroup.json [production]
20:21 <ejegg> updated standalone SmashPig deploy from 13e9e9cc to 11ba0a1b [production]
20:16 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T314041)', diff saved to https://phabricator.wikimedia.org/P32953 and previous config saved to /var/cache/conftool/dbconfig/20220824-201637-ladsgroup.json [production]
20:13 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1101:3318 (T314041)', diff saved to https://phabricator.wikimedia.org/P32952 and previous config saved to /var/cache/conftool/dbconfig/20220824-201344-ladsgroup.json [production]
20:13 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
20:13 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1101.eqiad.wmnet with reason: Maintenance [production]
20:12 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
20:12 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on dbstore1005.eqiad.wmnet with reason: Maintenance [production]
20:12 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1178 (T314041)', diff saved to https://phabricator.wikimedia.org/P32951 and previous config saved to /var/cache/conftool/dbconfig/20220824-201224-ladsgroup.json [production]
19:57 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1178', diff saved to https://phabricator.wikimedia.org/P32950 and previous config saved to /var/cache/conftool/dbconfig/20220824-195717-ladsgroup.json [production]
19:55 <ejegg> civicrm upgraded from edfe2f16 to 632d5f5f [production]