2022-04-07
07:10 <hashar> Restarting contint2001.wikimedia.org [production]
07:10 <hashar> Restarting gerrit1001.wikimedia.org [production]
07:02 <hashar> Restarting contint1001.wikimedia.org [production]
06:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163', diff saved to https://phabricator.wikimedia.org/P24209 and previous config saved to /var/cache/conftool/dbconfig/20220407-065803-marostegui.json [production]
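
The dbctl commit entries above and throughout this day are the visible tail of the usual depool → maintain → repool cycle. A minimal sketch of that cycle, assuming the dbctl subcommands as documented on Wikitech (host name, task ID, and pooling percentage here are illustrative):

    # Take the replica out of rotation and record why (task ID illustrative)
    dbctl instance db1163 depool
    dbctl config commit -m 'Depooling db1163 (T300775)'
    # ... run the maintenance on the host ...
    # Bring it back gradually; each step is what shows up as a "Repooling" commit in the log
    dbctl instance db1163 pool -p 25
    dbctl config commit -m 'Repooling after maintenance db1163 (T300775)'
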
06:54 <elukey@cumin1001> END (FAIL) - Cookbook sre.hosts.reimage (exit_code=99) for host ml-cache1002.eqiad.wmnet with OS bullseye [production]
06:54 <elukey@cumin1001> START - Cookbook sre.hosts.reimage for host ml-cache1002.eqiad.wmnet with OS bullseye [production]
06:43 <ladsgroup@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 6:00:00 on db1163.eqiad.wmnet with reason: Maintenance [production]
06:43 <ladsgroup@cumin1001> START - Cookbook sre.hosts.downtime for 6:00:00 on db1163.eqiad.wmnet with reason: Maintenance [production]
06:42 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1163 (T300775)', diff saved to https://phabricator.wikimedia.org/P24208 and previous config saved to /var/cache/conftool/dbconfig/20220407-064258-marostegui.json [production]
06:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1129 (T297189)', diff saved to https://phabricator.wikimedia.org/P24207 and previous config saved to /var/cache/conftool/dbconfig/20220407-062736-marostegui.json [production]
06:27 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1129.eqiad.wmnet with reason: Maintenance [production]
06:27 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1129.eqiad.wmnet with reason: Maintenance [production]
06:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3312 (T297189)', diff saved to https://phabricator.wikimedia.org/P24206 and previous config saved to /var/cache/conftool/dbconfig/20220407-062728-marostegui.json [production]
06:27 <ryankemper> [Elastic] Manually restarted elasticsearch exporters on `elastic2043` and `elastic2058` [production]
06:25 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reboot - ryankemper@cumin1001 - T304938 [production]
06:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3312', diff saved to https://phabricator.wikimedia.org/P24205 and previous config saved to /var/cache/conftool/dbconfig/20220407-061223-marostegui.json [production]
06:00 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reboot - ryankemper@cumin1001 - T304938 [production]
05:58 <ryankemper> [Elastic] Manually restarted elasticsearch exporters on `cloudelastic1004` and `elastic2054` [production]
05:57 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3312', diff saved to https://phabricator.wikimedia.org/P24203 and previous config saved to /var/cache/conftool/dbconfig/20220407-055718-marostegui.json [production]
05:53 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.elasticsearch.rolling-operation (exit_code=99) Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reboot - ryankemper@cumin1001 - T304938 [production]
05:42 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1146:3312 (T297189)', diff saved to https://phabricator.wikimedia.org/P24202 and previous config saved to /var/cache/conftool/dbconfig/20220407-054213-marostegui.json [production]
05:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2076 db2086:3317 db2086:3318 db2107 db2137:3314 db2137:3315 db2143 db2147 es2029 es2030 T305469', diff saved to https://phabricator.wikimedia.org/P24201 and previous config saved to /var/cache/conftool/dbconfig/20220407-050149-root.json [production]
04:41 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1146:3312 (T297189)', diff saved to https://phabricator.wikimedia.org/P24200 and previous config saved to /var/cache/conftool/dbconfig/20220407-044158-marostegui.json [production]
04:41 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1146.eqiad.wmnet with reason: Maintenance [production]
04:41 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1146.eqiad.wmnet with reason: Maintenance [production]
04:29 <ryankemper> [Elastic] for future reference, we still need to fix the fact that we haven't told systemd that the prometheus-wmf-elasticsearch exporters need to start after the actual elasticsearch service [production]
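
A drop-in unit override is one way to express the ordering ryankemper mentions above; a minimal sketch, with both unit names assumed rather than taken from puppet (on these hosts the instance-specific elasticsearch unit name would be used):

    # Hypothetical unit names; the real units are puppet-managed and may differ.
    # /etc/systemd/system/prometheus-wmf-elasticsearch-exporter-9108.service.d/ordering.conf:
    #   [Unit]
    #   After=elasticsearch.service
    #   Wants=elasticsearch.service
    sudo systemctl daemon-reload
    sudo systemctl restart prometheus-wmf-elasticsearch-exporter-9108.service
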
04:13 <ryankemper@cumin1001> START - Cookbook sre.elasticsearch.rolling-operation Operation.REBOOT (3 nodes at a time) for ElasticSearch cluster search_codfw: codfw cluster reboot - ryankemper@cumin1001 - T304938 [production]
04:13 <ryankemper> [Elastic] Beginning rolling reboot of codfw elastic to apply kernel security updates: `ryankemper@cumin1001:~$ sudo -E cookbook sre.elasticsearch.rolling-operation search_codfw "codfw cluster reboot" --reboot --nodes-per-run 3 --start-datetime 2022-04-07T04:09:05 --task-id T304938` [production]
02:43 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129 (T297189)', diff saved to https://phabricator.wikimedia.org/P24199 and previous config saved to /var/cache/conftool/dbconfig/20220407-024347-marostegui.json [production]
02:28 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P24198 and previous config saved to /var/cache/conftool/dbconfig/20220407-022842-marostegui.json [production]
02:13 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129', diff saved to https://phabricator.wikimedia.org/P24197 and previous config saved to /var/cache/conftool/dbconfig/20220407-021337-marostegui.json [production]
01:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1129 (T297189)', diff saved to https://phabricator.wikimedia.org/P24196 and previous config saved to /var/cache/conftool/dbconfig/20220407-015832-marostegui.json [production]
00:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1129 (T297189)', diff saved to https://phabricator.wikimedia.org/P24195 and previous config saved to /var/cache/conftool/dbconfig/20220407-005817-marostegui.json [production]
00:58 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1129.eqiad.wmnet with reason: Maintenance [production]
00:58 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1129.eqiad.wmnet with reason: Maintenance [production]
00:58 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T297189)', diff saved to https://phabricator.wikimedia.org/P24194 and previous config saved to /var/cache/conftool/dbconfig/20220407-005809-marostegui.json [production]
00:43 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P24193 and previous config saved to /var/cache/conftool/dbconfig/20220407-004304-marostegui.json [production]
00:27 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182', diff saved to https://phabricator.wikimedia.org/P24192 and previous config saved to /var/cache/conftool/dbconfig/20220407-002759-marostegui.json [production]
00:12 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1182 (T297189)', diff saved to https://phabricator.wikimedia.org/P24191 and previous config saved to /var/cache/conftool/dbconfig/20220407-001254-marostegui.json [production]
2022-04-06
23:54 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
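
The mwdebug-deploy entries in this section are emitted automatically; each START/DONE pair corresponds roughly to a helmfile apply of the mwdebug service in one datacenter. A sketch of the equivalent manual invocation, with the deployment-charts path on the deploy host assumed:

    # Path on deploy1002 is an assumption; the environment name matches the [codfw]/[eqiad] tag in the log
    cd /srv/deployment-charts/helmfile.d/services/mwdebug
    helmfile -e codfw apply
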
23:51 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]
23:51 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: apply [production]
23:49 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply [production]
23:47 <krinkle@deploy1002> Synchronized w/static.php: Ic87a8a3d00db (duration: 00m 53s) [production]
23:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1182 (T297189)', diff saved to https://phabricator.wikimedia.org/P24190 and previous config saved to /var/cache/conftool/dbconfig/20220406-232126-marostegui.json [production]
23:21 <marostegui@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 8:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
23:21 <marostegui@cumin1001> START - Cookbook sre.hosts.downtime for 8:00:00 on db1182.eqiad.wmnet with reason: Maintenance [production]
23:21 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1105:3312 (T297189)', diff saved to https://phabricator.wikimedia.org/P24189 and previous config saved to /var/cache/conftool/dbconfig/20220406-232118-marostegui.json [production]
23:14 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: apply [production]
23:13 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply [production]