2022-08-25
00:08 <eileen> civicrm upgraded from ff9b377d to a31c7590 [production]
00:04 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172', diff saved to https://phabricator.wikimedia.org/P32976 and previous config saved to /var/cache/conftool/dbconfig/20220825-000443-ladsgroup.json [production]
2022-08-24
23:49 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172', diff saved to https://phabricator.wikimedia.org/P32975 and previous config saved to /var/cache/conftool/dbconfig/20220824-234937-ladsgroup.json [production]
23:34 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1172 (T314041)', diff saved to https://phabricator.wikimedia.org/P32974 and previous config saved to /var/cache/conftool/dbconfig/20220824-233431-ladsgroup.json [production]
23:33 <eevans@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) for nodes matching A:restbase-eqiad: Restarting to apply OpenJDK 8u342 - eevans@cumin1001 [production]
23:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1172 (T314041)', diff saved to https://phabricator.wikimedia.org/P32973 and previous config saved to /var/cache/conftool/dbconfig/20220824-233046-ladsgroup.json [production]
23:30 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1172.eqiad.wmnet with reason: Maintenance [production]
23:30 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1172.eqiad.wmnet with reason: Maintenance [production]
23:30 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1099:3318 (T314041)', diff saved to https://phabricator.wikimedia.org/P32972 and previous config saved to /var/cache/conftool/dbconfig/20220824-233025-ladsgroup.json [production]
23:15 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1099:3318', diff saved to https://phabricator.wikimedia.org/P32971 and previous config saved to /var/cache/conftool/dbconfig/20220824-231519-ladsgroup.json [production]
23:00 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1099:3318', diff saved to https://phabricator.wikimedia.org/P32970 and previous config saved to /var/cache/conftool/dbconfig/20220824-230013-ladsgroup.json [production]
22:45 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1099:3318 (T314041)', diff saved to https://phabricator.wikimedia.org/P32969 and previous config saved to /var/cache/conftool/dbconfig/20220824-224507-ladsgroup.json [production]
22:42 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1099:3318 (T314041)', diff saved to https://phabricator.wikimedia.org/P32968 and previous config saved to /var/cache/conftool/dbconfig/20220824-224214-ladsgroup.json [production]
22:42 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1099.eqiad.wmnet with reason: Maintenance [production]
22:41 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1099.eqiad.wmnet with reason: Maintenance [production]
22:41 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1114 (T314041)', diff saved to https://phabricator.wikimedia.org/P32967 and previous config saved to /var/cache/conftool/dbconfig/20220824-224153-ladsgroup.json [production]
22:37 <ryankemper> [Elastic] We're back to green in `cloudelastic-chi`, so cloudelastic is back to fully healthy [production]
22:26 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1114', diff saved to https://phabricator.wikimedia.org/P32966 and previous config saved to /var/cache/conftool/dbconfig/20220824-222646-ladsgroup.json [production]
22:20 <ryankemper> [Elastic] We've got the cloudelastic instances all back up. A bunch of shard recoveries ongoing; currently the cluster is red. It might go all the way back to green; hard to say until the shard recoveries complete. [production]
22:11 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1114', diff saved to https://phabricator.wikimedia.org/P32965 and previous config saved to /var/cache/conftool/dbconfig/20220824-221140-ladsgroup.json [production]
21:58 <ryankemper> [Elastic] `ryankemper@cloudelastic1003:~$ sudo systemctl restart elasticsearch_6@cloudelastic-chi-eqiad.service`, 1003 was also oom-killed: `[4165984.362182] Out of memory: Killed process 3759 (java) total-vm:2277062348kB, anon-rss:61648756kB, file-rss:0kB, shmem-rss:0kB, UID:113 pgtables:1448136kB oom_score_adj:0` [production]
21:56 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1114 (T314041)', diff saved to https://phabricator.wikimedia.org/P32964 and previous config saved to /var/cache/conftool/dbconfig/20220824-215634-ladsgroup.json [production]
21:54 <ryankemper> [Elastic] `ryankemper@cloudelastic1004:~$ sudo systemctl restart elasticsearch_6@cloudelastic-chi-eqiad.service` Restarting 1004's chi eqiad, it died due to `Aug 24 21:43:21 cloudelastic1004 systemd[1]: elasticsearch_6@cloudelastic-chi-eqiad.service: Main process exited, code=killed, status=9/KILL` [production]
21:51 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1114 (T314041)', diff saved to https://phabricator.wikimedia.org/P32963 and previous config saved to /var/cache/conftool/dbconfig/20220824-215143-ladsgroup.json [production]
21:51 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1114.eqiad.wmnet with reason: Maintenance [production]
21:51 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1114.eqiad.wmnet with reason: Maintenance [production]
21:51 <eileen> civicrm upgraded from 632d5f5f to ff9b377d [production]
21:50 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1116.eqiad.wmnet with reason: Maintenance [production]
21:50 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1116.eqiad.wmnet with reason: Maintenance [production]
21:50 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1126 (T314041)', diff saved to https://phabricator.wikimedia.org/P32962 and previous config saved to /var/cache/conftool/dbconfig/20220824-215025-ladsgroup.json [production]
21:48 <bking@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 3 days, 0:00:00 on 6 hosts with reason: T316159 [production]
21:48 <bking@cumin1001> START - Cookbook sre.hosts.downtime for 3 days, 0:00:00 on 6 hosts with reason: T316159 [production]
21:48 <eileen> config revision changed from c2aa4158 to ab95bc89 [production]
21:35 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1126', diff saved to https://phabricator.wikimedia.org/P32961 and previous config saved to /var/cache/conftool/dbconfig/20220824-213519-ladsgroup.json [production]
21:23 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw134[1-8].eqiad.wmnet,cluster=api_appserver [production]
21:22 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw133[1-9].eqiad.wmnet,cluster=api_appserver [production]
21:22 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw133[1-9].eqiad.wmnet,cluster=appserver [production]
21:20 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1126', diff saved to https://phabricator.wikimedia.org/P32959 and previous config saved to /var/cache/conftool/dbconfig/20220824-212013-ladsgroup.json [production]
21:20 <mutante> setting weight to 25 (from 30) for appservers and API servers in the range mw1307 through mw1348 because they are of an older hardware type (not changing weights of jobrunners/videoscalers even if in this range) (T304800) [production]
21:18 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw132[1-9].eqiad.wmnet [production]
21:15 <dzahn@cumin2002> conftool action : set/weight=25; selector: name=mw131[2-7].eqiad.wmnet [production]
21:05 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1126 (T314041)', diff saved to https://phabricator.wikimedia.org/P32958 and previous config saved to /var/cache/conftool/dbconfig/20220824-210507-ladsgroup.json [production]
21:02 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Depooling db1126 (T314041)', diff saved to https://phabricator.wikimedia.org/P32957 and previous config saved to /var/cache/conftool/dbconfig/20220824-210216-ladsgroup.json [production]
21:02 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 1 day, 0:00:00 on db1126.eqiad.wmnet with reason: Maintenance [production]
21:02 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 1 day, 0:00:00 on db1126.eqiad.wmnet with reason: Maintenance [production]
21:01 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T314041)', diff saved to https://phabricator.wikimedia.org/P32956 and previous config saved to /var/cache/conftool/dbconfig/20220824-210155-ladsgroup.json [production]
20:46 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P32955 and previous config saved to /var/cache/conftool/dbconfig/20220824-204649-ladsgroup.json [production]
20:44 <eevans@cumin1001> START - Cookbook sre.cassandra.roll-restart for nodes matching A:restbase-eqiad: Restarting to apply OpenJDK 8u342 - eevans@cumin1001 [production]
20:40 <mutante> otrs1001 - systemctl reset-failed [production]
20:31 <ladsgroup@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P32954 and previous config saved to /var/cache/conftool/dbconfig/20220824-203143-ladsgroup.json [production]